Re: Optimizing large data loads - Mailing list pgsql-general

From: John Wells
Subject: Re: Optimizing large data loads
Date:
Msg-id: 52979.172.16.3.2.1123336687.squirrel@devsea.net
In response to: Re: Optimizing large data loads (Richard Huxton <dev@archonet.com>)
List: pgsql-general
Richard Huxton said:
> You don't say what the limitations of Hibernate are. Usually you might
> look to:
> 1. Use COPY not INSERTs

Not an option, unfortunately.
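
(For anyone else following the thread: a COPY load is a single statement reading
a flat file, roughly the sketch below -- table, columns, and file path are made up.)

    -- bulk-load a tab-delimited file in one statement
    COPY orders (id, customer_id, amount) FROM '/tmp/orders.tsv';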

> 2. If not, block INSERTS into BEGIN/COMMIT transactions of say 100-1000

We're using 50 per commit... we can easily raise that, I suppose.
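
(For the archives: raising it just means wrapping more inserts in a single
transaction at the SQL level, something like the sketch below -- table and
values are made up:)

    BEGIN;
    INSERT INTO orders (id, customer_id, amount) VALUES (1, 10, 5.00);
    INSERT INTO orders (id, customer_id, amount) VALUES (2, 11, 7.50);
    -- ... a few hundred more rows ...
    COMMIT;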

> 3. Turn fsync off

Done.
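
(That is, fsync = off in postgresql.conf for the duration of the load -- with
the usual caveat that a crash while it's off can leave the cluster corrupt, so
only for data we can reload from scratch.)

    # postgresql.conf, only while bulk loading
    fsync = off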

> 4. DROP/RESTORE constraints/triggers/indexes while you load your data

Hmmm... we'll have to think about this a bit. It's not a bad idea, but I'm not
sure how we can make it work in our situation.
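
(If we do try it, I assume it would look something like this around the load --
the index, constraint, and table names here are invented:)

    -- before the load
    DROP INDEX orders_customer_idx;
    ALTER TABLE orders DROP CONSTRAINT orders_customer_fk;

    -- after the load
    CREATE INDEX orders_customer_idx ON orders (customer_id);
    ALTER TABLE orders
        ADD CONSTRAINT orders_customer_fk
        FOREIGN KEY (customer_id) REFERENCES customers (id);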

> 5. Increase sort_mem/work_mem in your postgresql.conf when recreating
> indexes etc.
> 6. Use multiple processes to make sure the I/O is maxed out.

Suggestion 5 falls in line with 4, and 6 is definitely doable.
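
(For 5, my understanding is it's a per-session SET before rebuilding the
indexes -- sort_mem on 7.x, maintenance_work_mem on 8.0; the names below are
the same invented ones as above:)

    -- give the index build more sort memory (value is in KB here, i.e. ~512MB)
    SET maintenance_work_mem = 524288;
    CREATE INDEX orders_customer_idx ON orders (customer_id);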

Thanks for the suggestions!

John

