Re: tuning questions - Mailing list pgsql-performance

From: Josh Berkus
Subject: Re: tuning questions
Msg-id: 200312040859.06963.josh@agliodbs.com
In response to: tuning questions (Jack Coates <jack@lyris.com>)
List: pgsql-performance
Jack,

> The application is on another server, and does this torture test: it
> builds a large table (~6 million rows in one test, ~18 million in
> another). Rows are then pulled in chunks of 4 to 6 thousand, acted on,
> and inserted back into another table (which will of course eventually
> grow to the full size of the first).

> We tweaked shared buffers to 8192, pushed sort memory to 2048, vacuum
> memory to 8192, and effective cache size to 10000.
> /proc/sys/kernel/shmmax is set to 1600000000 and /proc/sys/fs/file-max
> is set to 65536. Ulimit -n 3192.
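
(For reference -- these are just the reported values restated with their units,
not recommendations -- that works out to roughly this in postgresql.conf:)

    # postgresql.conf (values as reported)
    shared_buffers = 8192           # 8 KB pages, so about 64 MB
    sort_mem = 2048                 # KB per sort, so about 2 MB
    vacuum_mem = 8192               # KB, so about 8 MB
    effective_cache_size = 10000    # 8 KB pages, so about 80 MB of assumed OS cache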

Have you read this?
http://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html

Actually, your situation is not "worst case".  For one thing, your process is
effectively single-user, which lets you throw all of the machine's resources at
that one job.  The problem is that your settings throttle PG at a level
appropriate to a many-user and/or multi-purpose system.  You need to "open them
up".

For something involving massive updating/transformation like this, once you've
done the basics (see that URL above), the main settings that will affect you
are sort_mem and checkpoint_segments, both of which I'd advise jacking way up
(test by increments).  Raising wal_buffers wouldn't hurt either.
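
To make that concrete, here's the kind of postgresql.conf sketch I'd start from
for a single-user bulk transform like yours -- the numbers are illustrative
starting points to adjust in increments, not gospel, and they assume the box
has RAM to spare:

    # postgresql.conf -- illustrative starting points, tune by increments
    shared_buffers = 16384          # ~128 MB; leave the rest of RAM to the OS cache
    sort_mem = 65536                # 64 MB per sort; safe only because you're single-user
    vacuum_mem = 65536              # makes the VACUUMs between steps go faster
    checkpoint_segments = 16        # fewer, less frequent checkpoints during heavy writes
    wal_buffers = 16                # small bump, cheap insurance
    effective_cache_size = 100000   # ~800 MB, if that much really is free for OS cache

Note that shared_buffers only takes effect on a postmaster restart; your shmmax
of 1600000000 already has plenty of room for it.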

Also, give some thought to running VACUUM and/or ANALYZE between segments of
your procedure.  Particularly if you update many rows of a table and then
query based on the changed data, it is vital to run ANALYZE first, and usually
a good idea to run VACUUM as well if the step was an UPDATE or DELETE rather
than an INSERT.
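
In practice that just means sprinkling statements like these between the steps
of the job (the table name here is made up -- substitute your own):

    -- after a step that UPDATEs or DELETEs many rows, before querying them:
    VACUUM ANALYZE work_table;

    -- after a step that only INSERTs, ANALYZE alone is enough:
    ANALYZE work_table;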

--
Josh Berkus
Aglio Database Solutions
San Francisco
