Re: hundreds of millions row dBs - Mailing list pgsql-general

From: Tom Lane
Subject: Re: hundreds of millions row dBs
Msg-id: 20464.1104874596@sss.pgh.pa.us
In response to: Re: hundreds of millions row dBs  ("Dann Corbit" <DCorbit@connx.com>)
Responses: Re: hundreds of millions row dBs
List: pgsql-general

"Dann Corbit" <DCorbit@connx.com> writes:
> Here is an instance where a really big ram disk might be handy.
> You could create a database on a big ram disk and load it, then build
> the indexes.
> Then shut down the database and move it to hard disk.

Actually, if you have a RAM disk, just change the $PGDATA/base/nnn/pgsql_tmp
subdirectory into a symlink to some temp directory on the RAM disk.
Should get you pretty much all the win with no need to move stuff around
afterwards.
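
For illustration only, a rough sketch of that setup (assumptions: nnn is the
database's OID under $PGDATA/base, /mnt/ramdisk is wherever the RAM disk is
mounted, and pg_ctl is on the PATH; doing it with the server stopped is the
safe way):

    # stop the server before rearranging anything under $PGDATA
    pg_ctl stop -D $PGDATA
    # create a temp directory on the RAM disk and point pgsql_tmp at it
    mkdir -p /mnt/ramdisk/pgsql_tmp
    mv $PGDATA/base/nnn/pgsql_tmp $PGDATA/base/nnn/pgsql_tmp.old   # if it exists
    ln -s /mnt/ramdisk/pgsql_tmp $PGDATA/base/nnn/pgsql_tmp
    pg_ctl start -D $PGDATA
    # sort/temp files from CREATE INDEX now land on the RAM disk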

You have to be sure the RAM disk is bigger than your biggest index though.
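
(If the indexes already exist somewhere, e.g. in a copy of the database built
on ordinary disk, one rough way to gauge the biggest one is relpages in
pg_class, assuming the default 8 kB block size and reasonably fresh
VACUUM/ANALYZE statistics; "yourdb" is a placeholder for the database name:

    psql -d yourdb -c "SELECT relname, relpages * 8 AS approx_kb
                         FROM pg_class
                        WHERE relkind = 'i'
                        ORDER BY relpages DESC
                        LIMIT 5;"
)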

            regards, tom lane
