Re: Postgres batch write very slow - what to do - Mailing list pgsql-performance

From Merlin Moncure
Subject Re: Postgres batch write very slow - what to do
Date
Msg-id b42b73150703141926p220b86c3m663f12aabd678d78@mail.gmail.com
In response to Re: Postgres batch write very slow - what to do  (femski <hypertree@yahoo.com>)
Responses Re: Postgres batch write very slow - what to do  (femski <hypertree@yahoo.com>)
List pgsql-performance
On 3/15/07, femski <hypertree@yahoo.com> wrote:
>
> I am using Oracle XE so its using only one core and doing just fine.
> How do I split backend to 4+ processes ?
> I don't want to write a multithreaded loader app.
> And I didn't think Postgres can utilize multiple cores for the
> same insert statement.

Well, what SQL is the JDBC driver creating, exactly?  It is probably
running the inserts in a transaction.  Your load is about 17k inserts/sec,
which is about right for Postgres on your hardware.  You have the
following options to increase insert performance:

* tweak postgresql.conf
  fsync: off, if it is not already
  checkpoint_segments: bump to at least 24 or so
  maintenance_work_mem: if you create keys after the insert, bump this high
(it speeds up CREATE INDEX)
  bgwriter settings: you can play with these; try disabling the bgwriter
first (bgwriter_lru_maxpages=0)
  full_page_writes=off might help, not 100% sure about this

* distribute load
  make the load app multi-threaded.

* use COPY for bulk load
  [is there a way to make the jdbc driver do this? see the sketch below]

* use multi-line inserts (at least 10 rows/insert)...nearly as fast as copy

* if the jdbc driver is not already doing so, prepare your statements and execute them.
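As a rough illustration of the COPY and prepared-statement points, here is a
minimal, untested JDBC sketch.  The table t(a integer, b text), the connection
URL, and the credentials are placeholders.  It first batches a prepared INSERT
inside a single transaction, then pushes the same kind of load through COPY via
the driver's CopyManager API (PGConnection.getCopyAPI(), available in
sufficiently recent pgJDBC releases):

import java.io.StringReader;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

import org.postgresql.PGConnection;
import org.postgresql.copy.CopyManager;

public class BulkLoadSketch {
    public static void main(String[] args) throws Exception {
        // placeholder connection details and a hypothetical table t(a integer, b text)
        Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/test", "user", "secret");
        conn.setAutoCommit(false);  // run the whole load in one transaction

        // option 1: prepared statement + JDBC batching, committed once at the end
        PreparedStatement ps = conn.prepareStatement("INSERT INTO t (a, b) VALUES (?, ?)");
        for (int i = 0; i < 10000; i++) {
            ps.setInt(1, i);
            ps.setString(2, "row " + i);
            ps.addBatch();
            if ((i + 1) % 1000 == 0) {
                ps.executeBatch();  // flush every 1000 rows
            }
        }
        ps.executeBatch();
        ps.close();

        // option 2: COPY through the driver's CopyManager, feeding rows in
        // COPY text format (tab-separated columns, newline-terminated rows)
        CopyManager copy = ((PGConnection) conn).getCopyAPI();
        StringBuilder rows = new StringBuilder();
        for (int i = 0; i < 10000; i++) {
            rows.append(i).append('\t').append("row ").append(i).append('\n');
        }
        copy.copyIn("COPY t (a, b) FROM STDIN", new StringReader(rows.toString()));

        conn.commit();
        conn.close();
    }
}

Either way, committing once for the whole batch instead of once per row is
usually the biggest single win.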

merlin
