PHP + Postgres: More than 1000 postmasters produce 70,000 context switches

From: "Gellert, Andre"
Hello,
we installed a new Postgres 7.4.0 on a SuSE 9 system.
It is used as part of an extranet based on Apache+PHP, and apart from an
LDAP server it runs no other services. The system has dual 2GHz Xeons and
2GB RAM.
When we migrated all applications from two other Postgres 7.2 servers to
the new one, we ran into heavy load problems.
At the beginning there were problems with too much allocated shared
memory, as the system was swapping 5-10 MB/sec. So we have now set
shared_buffers to 2048, which should mean 2MB per process (on Linux, 1kB
per buffer).
We also corrected the previously higher sort_mem and vacuum_mem values
back to sort_mem=512 and vacuum_mem=8192 to reduce memory usage, although
we have kernel.shmall = 1342177280 and kernel.shmmax = 1342177280.

Currently I have limited max_connections to 800, because any larger value
results in a system load of 60+ and at least 20,000 context switches.

My problem is that our Apache produces far more than 800 open connections,
because we use more than 15 different databases and Apache seems to keep
open a connection to every database the same httpd process has connected
to before.
For now I have solved it in a very dirty way: I limited the number and
lifetime of the httpd processes with these values:
 MaxKeepAliveRequests 10
 KeepAliveTimeout 2
 MaxClients 100
 MaxRequestsPerChild 300

We use PHP 4.3.4 and PHP 4.2.3 on the webservers. php.ini says:
[PostgresSQL]
; Allow or prevent persistent links.
pgsql.allow_persistent = On
; Maximum number of persistent links.  -1 means no limit.
pgsql.max_persistent = -1
; Maximum number of links (persistent+non persistent).  -1 means no limit.
pgsql.max_links = -1

We have now been running for days with an extremely unstable database
backend... Are 1,000 processes the natural limit for PostgreSQL on Linux?
Can we achieve more efficient connection pooling/reuse?

Thanks a lot for your help; every idea is welcome,
Andre

BTW: Does anyone know of commercial administration training in Germany,
near Duesseldorf?


Re: PHP + Postgres: More than 1000 postmasters produce 70,000 context switches

From: Richard Huxton
On Friday 20 February 2004 15:32, Gellert, Andre wrote:
> Hello,
> we installed a new Postgres 7.4.0 on a SuSE 9 system.
> It is used as part of an extranet based on Apache+PHP, and apart from an
> LDAP server it runs no other services. The system has dual 2GHz Xeons and
> 2GB RAM.
> When we migrated all applications from two other Postgres 7.2 servers to
> the new one, we ran into heavy load problems.
> At the beginning there were problems with too much allocated shared
> memory, as the system was swapping 5-10 MB/sec. So we have now set
> shared_buffers to 2048, which should mean 2MB per process (on Linux, 1kB
> per buffer).

Actually it's probably 8kB each = 16MB, but that's shared between *all* the
backends. You probably want something a fair bit larger than this. Go to
  http://www.varlena.com/varlena/GeneralBits/Tidbits/index.php
and read the sections on performance tuning and on the annotated
postgresql.conf.
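
Spelled out with PostgreSQL's 8kB default block size:

  2048 buffers * 8kB/buffer = 16MB, allocated once in shared memory and
  shared by every backend (not 2048 * 1kB = 2MB per process, as the
  original calculation assumed)

As a starting point on a 2GB machine, something like the following might
be reasonable (only a rough guess, not a tuned value - see the links
above for proper tuning):

  shared_buffers = 12000        # ~96MB, shared between all backends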

> We also corrected the previously higher sort_mem and vacuum_mem values
> back to sort_mem=512 and vacuum_mem=8192 to reduce memory usage, although
> we have kernel.shmall = 1342177280 and kernel.shmmax = 1342177280.

You can probably put vacuum_mem back up.
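
For example, in 7.4 postgresql.conf syntax (the figure is only a guess;
vacuum_mem is in kB and is used only while a VACUUM is running):

  vacuum_mem = 32768            # 32MB during vacuum, instead of 8MB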

> Currently I have limited max_connections to 800, because any larger value
> results in a system load of 60+ and at least 20,000 context switches.

That might be your shared_buffers being too low, but we'll let someone else
comment.

> My problem is that our Apache produces far more than 800 open connections,
> because we use more than 15 different databases and Apache seems to keep
> open a connection to every database the same httpd process has connected
> to before.
> For now I have solved it in a very dirty way: I limited the number and
> lifetime of the httpd processes with these values:
>  MaxKeepAliveRequests 10
>  KeepAliveTimeout 2
>  MaxClients 100
>  MaxRequestsPerChild 300

You do want to limit MaxRequestsPerChild if you're using persistent
connections. The problem seems to be with your PHP setup, though.

> We use PHP 4.3.4 and PHP 4.2.3 on the webservers. php.ini says:
> [PostgresSQL]
> ; Allow or prevent persistent links.
> pgsql.allow_persistent = On
> ; Maximum number of persistent links.  -1 means no limit.
> pgsql.max_persistent = -1
> ; Maximum number of links (persistent+non persistent).  -1 means no limit.
> pgsql.max_links = -1

So - you let PHP open persistent connections to PG, with no limit on the
number of different connections open at any one time?
Turn persistent connections off - you'll probably find your problems go
away.

> We have now been running for days with an extremely unstable database
> backend... Are 1,000 processes the natural limit for PostgreSQL on Linux?
> Can we achieve more efficient connection pooling/reuse?

You probably can pool your connections better, but it's difficult to say
without knowing what your PHP is doing.

--
  Richard Huxton
  Archonet Ltd

Re: PHP + Postgres: More than 1000 postmasters produce 70,000 context switches

From: Csaba Nagy
Well, it seems that for your application it is better to limit PHP's
persistent connection pool as a quick measure.
Try setting these values to something sensible for you:

; Maximum number of persistent links.  -1 means no limit.
pgsql.max_persistent = 20
; Maximum number of links (persistent+non persistent).  -1 means no limit.
pgsql.max_links = 30

Or just disable persistent connections altogether, and see whether that
results in better performance:

; Allow or prevent persistent links.
pgsql.allow_persistent = Off

In the long term, look for a better connection pooling mechanism; I'm
sure you'll find something for PHP too (I'm not using PHP, so maybe
somebody else on the list can help?).
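
As a rough, untested sketch of the idea (every name here is made up for
illustration; this is not a specific library): cache one plain pg_connect()
handle per connection string for the duration of a request, so a page that
hits the same database several times opens only one link, and everything
is closed automatically when the request ends:

  <?php
  // Hypothetical request-level "pool": one non-persistent connection per
  // connection string, reused for the rest of the current request.
  function get_db($conninfo)
  {
      static $pool = array();            // lives only for this request
      if (!isset($pool[$conninfo])) {
          $pool[$conninfo] = pg_connect($conninfo);
      }
      return $pool[$conninfo];
  }

  // Both calls below share a single backend process.
  $db  = get_db("host=dbhost dbname=app1 user=web");
  $res = pg_query($db, "SELECT version()");
  $db2 = get_db("host=dbhost dbname=app1 user=web");  // same handle as $db
  ?>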

Cheers,
Csaba.


Re: PHP + Postgres: More than 1000 postmasters produce 70,000 context switches

From: "scott.marlowe"
Have you tested it with regular pg_connect instead of pg_pconnect? While
many people expect pconnects to be faster, when they leave the database
with lots of open idle connections they often actually make the system
slower than plain connects.
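
For a quick test, the change is a one-line swap wherever the application
connects (the connection string below is made up for illustration):

  // Persistent: the backend stays alive after the request ends.
  $db = pg_pconnect("host=dbhost dbname=app1 user=web");

  // Plain: the backend goes away when the script finishes.
  $db = pg_connect("host=dbhost dbname=app1 user=web");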

You might want to look into some of the connection pooling options out
there that work with PHP, as persistent connections work well only for a
small number of hard-working threads, and not so well for a large number
of connections of which only a few are actually hitting the db at the
same time. This becomes especially bad in your situation, where it sounds
like you have multiple databases to connect to, so PHP is keeping multiple
backends alive for each front-end thread.


Re: PHP + Postgres: More than 1000 postmasters produce 70,000 context switches

From: "scott.marlowe"
On 20 Feb 2004, Csaba Nagy wrote:

> Well, it seems that for your application it is better to limit PHP's
> persistent connection pool as a quick measure.
> Try setting these values to something sensible for you:
>
> ; Maximum number of persistent links.  -1 means no limit.
> pgsql.max_persistent = 20

Please note that pgsql.max_persistent is PER Apache/PHP backend process.

http://www.php.net/manual/en/ref.pgsql.php
QUOTE:
pgsql.max_persistent  integer

    The maximum number of persistent Postgres connections per process.
UNQUOTE

> ; Maximum number of links (persistent+non persistent).  -1 means no limit.
> pgsql.max_links = 30

This one, too, is per process.
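
To put rough numbers on it, using the figures from the original post:

  MaxClients 100 * 15 databases = up to 1,500 persistent links
  from a single webserver, against max_connections = 800

so even the "dirty" Apache limits can still demand nearly twice as many
backends as the database will accept.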

> Or just disable persistent connections altogether, and see whether that
> results in better performance:

My recommendation.