Hi,
I have two questions for which I haven't really found answers on the web.
Intro:
I have a website with some traffic: about 2 million queries a day, mostly
during daytime hours.
Postgres is running on a dedicated server: P4 dual-core, 4 GB RAM.
The workload is mainly single-tuple updates, plus more or less complex
SELECT statements.
I noticed that the overall performance of Postgres decreases when one or
more long readers are present, where a "long reader" here is already
something as simple as a SELECT count(*) on a big table.
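Concretely, a statement like this is enough to trigger it (big_table
standing in for one of my larger tables):

    -- forces a sequential scan over the whole table in 8.1
    SELECT count(*) FROM big_table;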
As Postgres gets slower and slower, users keep hammering the reload button
to get their pages loaded. Postgres then starts to reach max_connections,
and the website is stuck.
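In case it matters, this is roughly how I watch the connections pile up
(pg_stat_activity as it looks in 8.1, with stats_command_string = on;
'<IDLE>' marks idle backends):

    -- total number of backends, to compare against max_connections
    SELECT count(*) AS backends FROM pg_stat_activity;

    -- what the non-idle backends are doing, oldest first
    SELECT procpid, usename, query_start, current_query
      FROM pg_stat_activity
     WHERE current_query <> '<IDLE>'
     ORDER BY query_start;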
It's not because of a bad schema or bad SELECT statements; as I said, a
single SELECT count(*) on a big table is already enough to trigger this
behaviour.
Why do long readers affect the rest of the transactions so heavily?
Are there any configuration changes that could help here?
Is it a disk-I/O bottleneck?
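If it helps to answer that last question: I could pull the buffer-cache
hit ratio from the block-level statistics (pg_statio_user_tables, which
needs stats_block_level = on in 8.1) while the slowdown is happening; a
low ratio would point at disk I/O:

    -- fraction of heap block reads served from shared_buffers
    SELECT sum(heap_blks_hit)::float
           / nullif(sum(heap_blks_hit) + sum(heap_blks_read), 0)
           AS cache_hit_ratio
      FROM pg_statio_user_tables;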
Second question: what is the right choice for the shared_buffers size on a
dedicated Postgres server with 4 GB RAM? Is there any rule of thumb?
Currently I have it set to roughly 256 MB.
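For reference, that figure comes from my setting below (in 8.1,
shared_buffers counts 8 kB pages):

    30000 buffers * 8 kB = 240000 kB, i.e. about 234 MB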
Thanks for any suggestions.
Patric
My Setup:
Debian Etch
PostgreSQL: 8.1.4
WAL files are located on a different disk from the database itself.
max_connections = 190
shared_buffers = 30000          # 8 kB pages, ~234 MB
temp_buffers = 3000             # 8 kB pages, ~23 MB per session
work_mem = 4096                 # kB, 4 MB per sort/hash operation
maintenance_work_mem = 16384    # kB, 16 MB
fsync = on
wal_buffers = 16                # 8 kB pages, 128 kB
effective_cache_size = 5000     # 8 kB pages, ~40 MB