Re: hundreds of millions row dBs - Mailing list pgsql-general

From Pierre-Frédéric Caillaud
Subject Re: hundreds of millions row dBs
Date
Msg-id opsj3sk0f7cq72hf@musicbox
In response to Re: hundreds of millions row dBs  (Tom Lane <tgl@sss.pgh.pa.us>)
List pgsql-general
    To speed up the load:
    - make checkpoints less frequent (raise the checkpoint interval and
related parameters in postgresql.conf; see the sketch after this list)
    - disable fsync (not sure how much it really helps)
    - put the source data, the database tables, and the WAL on three
physically separate disks
    - put temporary files on a different disk too, or on a RAM disk
    - gunzip the dump while restoring, to read less data from the disk
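    For the checkpoint and fsync items, a minimal sketch of the relevant
postgresql.conf settings (the values here are only illustrative, and
fsync = off risks corruption on a crash, so it is only sensible for a
load you can redo from scratch):

        # postgresql.conf - bulk-load settings
        checkpoint_segments = 30   # default 3; more WAL between checkpoints
        checkpoint_timeout = 900   # seconds between forced checkpoints
        fsync = off                # unsafe; re-enable after the load

    And for the last item, decompressing on the fly (assuming a
plain-SQL dump named dump.sql.gz and a database named mydb):

        zcat dump.sql.gz | psql mydb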



> "Dann Corbit" <DCorbit@connx.com> writes:
>> Here is an instance where a really big ram disk might be handy.
>> You could create a database on a big ram disk and load it, then build
>> the indexes.
>> Then shut down the database and move it to hard disk.
>
> Actually, if you have a RAM disk, just change the
> $PGDATA/base/nnn/pgsql_tmp
> subdirectory into a symlink to some temp directory on the RAM disk.
> Should get you pretty much all the win with no need to move stuff around
> afterwards.
>
> You have to be sure the RAM disk is bigger than your biggest index
> though.
>
>             regards, tom lane
>
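    Concretely, Tom's symlink trick might look like the following sketch
(here 12345 stands for your database's OID directory under $PGDATA/base,
and /mnt/ramdisk is an assumed RAM disk mount point; do this with the
postmaster stopped, and make the directory writable by the postgres
user):

        mkdir /mnt/ramdisk/pgsql_tmp
        chown postgres /mnt/ramdisk/pgsql_tmp
        rm -rf $PGDATA/base/12345/pgsql_tmp
        ln -s /mnt/ramdisk/pgsql_tmp $PGDATA/base/12345/pgsql_tmp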


