From: Dawid Kuroczko
Subject: Re: Looking for tips
Msg-id: 758d5e7f0507191319636afaa3@mail.gmail.com
In response to: Re: Looking for tips (Oliver Crosby <ryusei@gmail.com>)
List: pgsql-performance

On 7/19/05, Oliver Crosby <ryusei@gmail.com> wrote:
> > We had low resource utilization and poor throughput on inserts of
> > thousands of rows within a single database transaction.  There were a
> > lot of configuration parameters we changed, but the one which helped the
> > most was wal_buffers -- we wound up setting it to 1000.  This may be
> > higher than it needs to be, but when we got to something which ran well,
> > we stopped tinkering.  The default value clearly caused a bottleneck.
>
> I just tried wal_buffers = 1000, sort_mem at 10% and
> effective_cache_size at 75%.
> The performance refuses to budge.. I guess that's as good as it'll go?
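
(For reference, those three settings would look something like the
following in postgresql.conf; the sort_mem and effective_cache_size
numbers are purely illustrative, assuming roughly 1 GB of RAM:)

wal_buffers = 1000            # WAL buffers, 8 kB each
sort_mem = 102400             # in kB; about 10% of 1 GB
effective_cache_size = 98304  # in 8 kB disk pages; about 75% of 1 GB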

If it is possible, try:
1) Wrapping many inserts into one transaction
(BEGIN; INSERT; INSERT; ... INSERT; COMMIT;).  PostgreSQL will then
have to handle fewer transactions per second (right now each of your
inserts is its own transaction), so it may work faster (see the
sketch after point 2).

2) If you can do (1), you can go further and use the COPY command, which
is the fastest way to bulk-load a database.
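
A minimal sketch of both, using a hypothetical table foo(id int, val text):

BEGIN;                            -- (1) one transaction for the whole batch
INSERT INTO foo VALUES (1, 'a');
INSERT INTO foo VALUES (2, 'b');
-- ...thousands more inserts...
COMMIT;

COPY foo FROM stdin;              -- (2) bulk-load; rows are tab-separated
1	a
2	b
\.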

Sometimes I insert the data into a temporary table first, and then do:
INSERT INTO sometable SELECT * FROM tmp_table;
(I do this when I want to run some selects, updates, etc. on the
data before "committing" it to the main table; dropping the
temporary table is much cheaper than vacuuming a many-row table.)
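
Roughly, with hypothetical table and column names:

CREATE TEMP TABLE tmp_table AS
  SELECT * FROM sometable LIMIT 0;          -- same structure, no rows
-- bulk-load into tmp_table here (COPY or batched INSERTs), then e.g.:
UPDATE tmp_table SET val = trim(val);       -- illustrative clean-up step
INSERT INTO sometable SELECT * FROM tmp_table;
DROP TABLE tmp_table;                       -- cheaper than delete + vacuum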

  Regards,
     Dawid

PS: Where can I find benchmarks comparing PHP vs Perl vs Python in
terms of speed of executing prepared statements?
