On Jul 28, 2004, at 1:08 PM, Stephane Tessier wrote:
> we have a BIG problem of performance, it's slow....
Can you isolate which part is slow? (log_min_duration_statement is
useful for finding your slow-running queries.)
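Something like this, as a rough sketch (if I remember right,
log_min_duration_statement only appeared in 7.4, so on 7.3 you would
have to log everything with log_statement instead and time it yourself):

    # postgresql.conf -- log any statement that takes longer than 1000 ms
    log_min_duration_statement = 1000

Then grep the server log for the slow statements and run EXPLAIN ANALYZE
on them.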
> we use postgres 7.3 for a php security application with approximately
> 4 million insertions per day and 4 million deletes and updates
That is pretty heavy write volume. Are these writes done in batches, or
one statement at a time? If they come in batches you can speed them up
a lot by wrapping each batch inside a single transaction, so the whole
batch pays for one commit instead of one per statement (see the sketch
below).
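A minimal sketch of what I mean (the table and column names here are
made up for illustration):

    BEGIN;
    INSERT INTO events (user_id, action) VALUES (1, 'login');
    INSERT INTO events (user_id, action) VALUES (2, 'purchase');
    -- ... the rest of the batch ...
    DELETE FROM events WHERE created < now() - interval '30 days';
    COMMIT;

In autocommit mode every statement is its own transaction and forces its
own WAL flush; grouped like this, the whole batch is flushed once at
COMMIT.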
> #shared_buffers = 256 # min max_connections*2 or 16, 8KB each
> #shared_buffers = 196000 # min max_connections*2 or 16, 8KB each
> shared_buffers = 128000 # min max_connections*2 or 16, 8KB each
>
Too much. Generally anything over about 10000 buffers (roughly 80 MB at
8 KB each) stops benefiting you; the rest of your RAM is better left to
the kernel's disk cache.
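Something in this range is what I mean (a sketch only; the sweet spot
depends on your RAM and workload):

    # postgresql.conf -- 10000 buffers * 8 KB = roughly 80 MB shared memory
    shared_buffers = 10000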
> #wal_buffers = 8 # min 4, typically 8KB each
You might want to bump this up; with this much write traffic the default
of 8 is on the small side.
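Something like this, as a rough guess (the best value depends on how
large your transactions are):

    # postgresql.conf -- 32 WAL buffers * 8 KB = 256 KB of WAL buffer space
    wal_buffers = 32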
> #checkpoint_segments = 3 # in logfile segments, min 1, 16MB each
Given your write volume, increase this quite a bit: 20 or 30 segments
will help a lot. Keep in mind that 30 segments means roughly
30 * 16 MB = 480 MB of disk space set aside for WAL.
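A sketch of that change:

    # postgresql.conf -- larger, less frequent checkpoints for heavy writes
    checkpoint_segments = 30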
Oracle is *NOT* a silver bullet.
It will not instantly make your problems go away.
I'm working on a project porting some things to Oracle and as a test I
also ported it to Postgres. And you know what? Postgres is running
about 30% faster than Oracle. The Oracle lovers here are not too happy
with that one :) Just so you know..
--
Jeff Trout <jeff@jefftrout.com>
http://www.jefftrout.com/
http://www.stuarthamm.net/