Re: PHP + Postgres: More than 1000 postmasters produce 70.000 context switches - Mailing list pgsql-general

From Richard Huxton
Subject Re: PHP + Postgres: More than 1000 postmasters produce 70.000 context switches
Date
Msg-id 200402201702.16561.dev@archonet.com
In response to PHP + Postgres: More than 1000 postmasters produce 70.000 context switches  ("Gellert, Andre" <AGellert@ElectronicPartner.de>)
List pgsql-general
On Friday 20 February 2004 15:32, Gellert, Andre wrote:
> Hello,
> we installed a new Postgres 7.4.0 on a Suse 9 system.
> This is used as part of an extranet, based on Apache+PHP, and has no
> services running besides an LDAP server. The system has dual Xeon 2GHz
> CPUs and 2GB RAM.
> When migrating all applications from 2 other Postgres 7.2 servers to the
> new one, we had heavy load problems.
> At the beginning there were problems with too much allocated shared
> memory, as the system was swapping 5-10 MB/sec. So we have now
> reconfigured shared_buffers to 2048, which should mean 2MB (Linux =
> one buffer per kB) per process.

Actually it's probably 8kB each = 16MB, but that's shared between *all* the
backends. You probably want something a fair bit larger than this. Go to
  http://www.varlena.com/varlena/GeneralBits/Tidbits/index.php
and read the section on performance tuning and on the annotated
postgresql.conf
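As a rough illustration (these figures are not from the thread; they are assumptions for a dedicated 2GB machine running 7.4 and should be checked against the tuning guides above), the relevant postgresql.conf lines might look like:

```
# postgresql.conf -- illustrative starting values for a dedicated 2GB box
shared_buffers = 8192          # 8192 x 8kB pages = 64MB, shared by all backends
sort_mem = 4096                # kB per sort, per backend; keep modest with many connections
effective_cache_size = 131072  # ~1GB in 8kB pages; a hint about the OS cache, not an allocation
```

Note that shared_buffers is a single shared pool, not a per-process allocation, which is why raising it does not multiply by the number of backends.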

> We also reduced sort_mem and vacuum_mem back to sort_mem=512 and
> vacuum_mem=8192 to cut memory usage, although we have
> kernel.shmall = 1342177280 and kernel.shmmax = 1342177280.

You can probably put vacuum_mem back up.

> Currently I have limited max_connections to 800, because every larger
> value results in a system load of 60+ and at least 20,000 context
> switches.

Might be your shared_buffers being too low, but we'll let someone else
comment.

> My problem is that our Apache produces many more than 800 open
> connections, because we are using > 15 different databases and Apache
> seems to keep open a connection to every database the same httpd
> process has connected to before.
> For now I have solved it in a very dirty way: I limited the number and
> the lifetime of each httpd process with these values:
>  MaxKeepAliveRequests 10
>  KeepAliveTimeout 2
>  MaxClients 100
>  MaxRequestsPerChild 300

You do want to limit MaxRequestsPerChild if you're using persistent
connections. The problem seems to be with your PHP, though.

> We use PHP 4.3.4 and PHP 4.2.3 on the webservers. php.ini says:
> [PostgresSQL]
> ; Allow or prevent persistent links.
> pgsql.allow_persistent = On
> ; Maximum number of persistent links.  -1 means no limit.
> pgsql.max_persistent = -1
> ; Maximum number of links (persistent+non persistent).  -1 means no limit.
> pgsql.max_links = -1

So - you let PHP open persistent connections to PG and have no limit on the
number of different connections open at any one time?
Turn the persistent connections off - you'll probably find your problems go
away.
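For instance (a sketch, not the poster's actual code; the dbname and user are illustrative), switching from pg_pconnect() to pg_connect() means each link is released when the request ends instead of being held open by the Apache child:

```php
<?php
// Sketch: a plain connection is released at the end of the request,
// unlike pg_pconnect(), which keeps one open connection per database
// per Apache child. The dbname and user here are illustrative.
$conn = pg_connect("host=localhost dbname=app1 user=web")
    or die("connection failed");
$res = pg_query($conn, "SELECT 1");
// ... use $res ...
pg_close($conn);  // optional; PHP closes non-persistent links at request end
?>
```

Setting pgsql.allow_persistent = Off in php.ini has the same effect even where the code still calls pg_pconnect(), which makes it a useful global kill switch while the scripts are being cleaned up.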

> We have now been running for days with an extremely unstable database
> backend...
> Are 1,000 processes the natural limit for a Linux-based PostgreSQL?
> Can we achieve more efficient connection pooling/reuse?

You probably can pool your connections better, but it's difficult to say
without knowing what your PHP is doing.
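One possibility (a sketch under the assumption that connections only need to live for a single request; get_db() and the connection parameters are made up for illustration, not taken from the thread) is to cache at most one connection per database per request:

```php
<?php
// Sketch: reuse a single connection per database within one request,
// instead of one persistent link per database per Apache child.
// get_db() and the connection string details are illustrative.
function get_db($dbname) {
    static $conns = array();
    if (!isset($conns[$dbname])) {
        $conns[$dbname] = pg_connect("host=localhost dbname=$dbname user=web")
            or die("could not connect to $dbname");
    }
    return $conns[$dbname];
}

$res = pg_query(get_db("app1"), "SELECT 1");
// All non-persistent connections close when the request finishes, so the
// worst case becomes one backend per database per *concurrent* request,
// not one per database per long-lived Apache child.
?>
```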

--
  Richard Huxton
  Archonet Ltd
