Hello,
We installed a new PostgreSQL 7.4.0 server on a SuSE 9 system.
It is part of an extranet based on Apache+PHP; apart from an LDAP server,
no other services are running on it. The machine has two 2 GHz Xeon CPUs
and 2 GB of RAM.
When we migrated all applications from two other PostgreSQL 7.2 servers to
the new one, we ran into heavy load problems.
At first the problem was too much allocated shared memory; the system was
swapping 5-10 MB/sec. So we reduced shared_buffers to 2048, which at 8 kB
per buffer means about 16 MB of shared memory (shared by all backends, not
per process).
To reduce memory usage further, we also brought sort_mem and vacuum_mem
back down to sort_mem=512 and vacuum_mem=8192, even though the kernel
allows plenty of shared memory (kernel.shmall = 1342177280 and
kernel.shmmax = 1342177280).
Currently I have limited max_connections to 800, because any larger value
drives the system load above 60 and causes at least 20,000 context
switches.
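
For reference, the relevant postgresql.conf lines now look like this (the
same values as described above):

shared_buffers = 2048      # 2048 * 8 kB buffers = ~16 MB shared memory
sort_mem = 512             # kB available per sort
vacuum_mem = 8192          # kB available for VACUUM
max_connections = 800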
My problem is that our Apache produces far more than 800 open connections,
because we use more than 15 different databases and each httpd process
seems to keep a persistent connection open to every database it has ever
connected to.
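
To illustrate (simplified; the database names here are made up), our PHP
scripts connect roughly like this:

<?php
// pg_pconnect() caches one persistent connection per distinct
// connection string inside the httpd process, across requests.
$db1 = pg_pconnect("host=localhost dbname=app1 user=www");
$db2 = pg_pconnect("host=localhost dbname=app2 user=www");
// ... and so on for 15+ databases, so a long-lived httpd process
// ends up holding 15+ open backends at once.
?>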
For now I have solved it in a very dirty way: I limited the number and the
lifetime of the httpd processes with these values:
MaxKeepAliveRequests 10
KeepAliveTimeout 2
MaxClients 100
MaxRequestsPerChild 300
We use PHP 4.3.4 and PHP 4.2.3 on the webservers. php.ini says:
[PostgresSQL]
; Allow or prevent persistent links.
pgsql.allow_persistent = On
; Maximum number of persistent links. -1 means no limit.
pgsql.max_persistent = -1
; Maximum number of links (persistent+non persistent). -1 means no limit.
pgsql.max_links = -1
We have now been running for days with an extremely unstable database
backend...
Is 1,000 processes the natural limit for PostgreSQL on Linux?
Is there a way to get more efficient connection pooling/reuse?
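
For example, would it already help to switch from pg_pconnect() to plain
pg_connect(), so connections are closed when the script ends instead of
being cached per database in each httpd process? Something like this
(again, dbname is just a placeholder):

<?php
// Non-persistent connection: closed at script end (or explicitly),
// so an idle httpd process no longer pins a backend per database.
$db = pg_connect("host=localhost dbname=app1 user=www");
// ... run queries ...
pg_close($db);
?>

Or is there a proper pooling layer we should put between Apache and
PostgreSQL?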
Thanks a lot for any help; every idea is welcome,
Andre
BTW: Does anyone know of commercial administration training courses in
Germany, near Duesseldorf?