Re: Disk Performance Problem on Large DB - Mailing list pgsql-admin

From: Kevin Grittner
Subject: Re: Disk Performance Problem on Large DB
Msg-id: 4CD2D95702000025000372EC@gw.wicourts.gov
In response to: Disk Performance Problem on Large DB ("Jonathan Hoover" <jhoover@yahoo-inc.com>)
List: pgsql-admin
"Jonathan  Hoover" <jhoover@yahoo-inc.com> wrote:

> I have a simple database, with one table for now. It has 4
> columns:
>
> anid serial primary key unique,
> time timestamp,
> source varchar(5),
> unitid varchar(15),
> guid varchar(32)
>
> There is a btree index on each.
>
> I am loading 1,000,000 (1M) rows at a time using psql and a
> COPY command. Once I hit 2M rows, my performance just drops out

Drop the indexes and the primary key before you COPY in;
maintaining every index row by row is what drags the load down
as the table grows.  Personally, I strongly recommend a VACUUM
FREEZE ANALYZE after the bulk load.  Then use ALTER TABLE to
restore the primary key, and create all the other indexes.
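
Concretely, the sequence looks something like this (a sketch;
the table name "events", the index names, and the file path are
all placeholders, since the post doesn't give them):

    -- drop the primary key and the secondary indexes first
    ALTER TABLE events DROP CONSTRAINT events_pkey;
    DROP INDEX events_time_idx;
    DROP INDEX events_source_idx;
    DROP INDEX events_unitid_idx;
    DROP INDEX events_guid_idx;

    -- bulk load from psql ('rows.csv' is a placeholder)
    \copy events from 'rows.csv' with csv

    -- freeze and gather statistics in one pass
    VACUUM FREEZE ANALYZE events;

    -- rebuild the primary key, then the other indexes
    ALTER TABLE events ADD PRIMARY KEY (anid);
    CREATE INDEX events_time_idx ON events ("time");
    CREATE INDEX events_source_idx ON events (source);
    CREATE INDEX events_unitid_idx ON events (unitid);
    CREATE INDEX events_guid_idx ON events (guid);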

Also, if you don't mind starting over from initdb if it crashes
partway through, you can turn fsync off.  You want a big
maintenance_work_mem setting during the index builds -- at least
200 MB.
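
For example (a sketch: one setting in postgresql.conf, the other
per session; 200MB is just the floor suggested above, so size it
to your RAM):

    # postgresql.conf -- only if a crash-and-reload from initdb
    # is acceptable; turn it back on once the load is done
    fsync = off

    -- in the psql session doing the index builds
    SET maintenance_work_mem = '200MB';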

-Kevin
