Thread: Connection Pooling directly on Postgres Server
I'm looking for connection pooling solutions for our PHP/Apache server. I already checked pgpool and pgbouncer, but during the tests I had the following (mad) idea... why not implement connection pooling server-side, as Apache for example does?

Let me try to explain my idea: the Postgres server would keep a number of connections alive (as Apache does), even after a client disconnects. The following parameters could be used to tune the number of connections kept alive on the server side:

StartServers: number of Postgres connections already active at server start
MinSpareServers: if there are fewer than MinSpareServers spares, create a new spare (connection)
MaxSpareServers: if there are more than MaxSpareServers spares, some of the spares (connections) die off

The parameters have been taken directly from an Apache httpd.conf sample... Could a similar solution be implemented in Postgres? What do you think about this?

Thank you,
Denis
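[Editorial note: the Apache-style parameters Denis describes have rough analogues in pgBouncer's own configuration. The fragment below is an illustrative sketch only; the option names are taken from pgBouncer's documentation, but availability and defaults vary by version, so check the docs for the release you run.]

```ini
; pgbouncer.ini fragment (illustrative sketch, not a tested configuration).
; Rough mapping to the Apache-style parameters above:
[pgbouncer]
default_pool_size = 20   ; cap on server connections per database/user pair
min_pool_size     = 5    ; keep at least this many open (~ MinSpareServers)
reserve_pool_size = 5    ; extra connections allowed under load bursts
max_client_conn   = 200  ; total client connections pgBouncer will accept
```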
On 07.09.2007 10:16, Denis Gasparin wrote:
> What to do you think about this?

I think there is no need for this, and it's the wrong place to implement it. Why should PostgreSQL do something to work around broken application design? An application that closes its database connection just to create a new one a second later is broken.

In this particular case the thing to fix is Apache's legacy architecture. Prefork-based web servers are really something from the last decade. Consider Lighttpd, LiteSpeed or Zeus and connect a fixed number of PHP/Perl/Python/Ruby/Lua workers via FastCGI. Each worker keeps a persistent connection to PG, and load is distributed over the workers.

--
Regards,
Hannes Dorbath
You'll want to evaluate pgBouncer to see if it meets your needs. It works very well for general proxying and connection pooling.

On 9/7/07, Denis Gasparin <denis@edistar.com> wrote:
> I'm looking for connection pooling solutions for our php/apache server.
> I already checked pgpool and pgbouncer but during the tests, I had the
> following (mad) idea...
> Why not to implement a connection pooling server side as apache for
> example does?
> [...]
> Could it be possible to implement a similar solution on postgres?
Hi,

Denis Gasparin wrote:
> Why not to implement a connection pooling server side as apache for
> example does?

This has certainly been discussed before. IIRC the real argument against it was that fork() isn't the most expensive thing to do anymore. And Postgres does lots of other work after accept(): connecting to a particular database, authenticating the user, etc. If you still wanted to optimize that, you'd end up having n spare backends *per database*.

I do that in Postgres-R -- not for connection pooling, but for applying remote transactions.

Regards
Markus
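[Editorial note: Markus's point about ending up with n spare backends *per database* can be sketched in a few lines. The Python toy below is purely hypothetical; `Backend` is a stand-in object, not a real forked PostgreSQL backend, and `MIN_SPARE`/`MAX_SPARE` are illustrative names modelled on Apache's MinSpareServers/MaxSpareServers, not real postgresql.conf settings.]

```python
# Hypothetical sketch: Apache-style spare maintenance applied to
# per-database backend pools. Because a Postgres backend is bound to one
# database after authentication, spares cannot be shared across databases.

MIN_SPARE = 2   # like Apache's MinSpareServers (illustrative name)
MAX_SPARE = 5   # like Apache's MaxSpareServers (illustrative name)

class Backend:
    """Stand-in for a forked, authenticated backend bound to one database."""
    def __init__(self, dbname):
        self.dbname = dbname  # a spare is only reusable for this database

pools = {}  # dbname -> list of idle spare backends

def maintain(dbname):
    """Top up or trim the spare pool for one database."""
    spares = pools.setdefault(dbname, [])
    while len(spares) < MIN_SPARE:
        spares.append(Backend(dbname))   # "fork" a new spare
    while len(spares) > MAX_SPARE:
        spares.pop()                     # let an excess spare die off

def checkout(dbname):
    """Hand a spare to a client, then restore the minimum."""
    maintain(dbname)
    backend = pools[dbname].pop()
    maintain(dbname)
    return backend

# The catch: spares are kept per database, so the number of idle backends
# grows with the number of databases, not just with load.
for db in ("app", "reports", "audit"):
    maintain(db)
print(sum(len(p) for p in pools.values()))  # prints 6: 2 spares x 3 databases
```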
> This has certainly been discussed before.
>
> IIRC the real argument against that was, that fork() isn't the most
> expensive thing to do anymore. And Postgres does lots of other stuff
> after accept(), namely connecting to a certain database,
> authenticating the user, etc..

Ok, I knew that. I asked because it seems that Oracle, for example, is moving to a similar solution in release 11g in order to solve connection pooling problems. See the following link:

http://pbarut.blogspot.com/2007/08/oracle-11g-drcp-database-resident.html

Denis
2007/9/8, Denis Gasparin <denis@edistar.com>:
> Ok. I knew that. I made the question because it seems that, for example,
> Oracle in release 11g is moving to a similar solution in order to solve
> connection pooling problems.

Sure... Regarding Oracle, it's a different story because of their development model: it is not open source and has to rely on its own solutions instead of following the Unix tradition of modularity.

Regarding Apache, it's a different story because HTTP is a stateless protocol! That enables random backend switching -- unlike the Postgres backend protocol.

Anyway, stateless connection pooling is already implemented (pgpool/pgbouncer/?). Stateful connection pooling is hard to implement without rewriting the protocol itself or disrupting existing behaviour.

--
Filip Rembiałkowski
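[Editorial note: the stateless/stateful distinction Filip draws maps directly onto pgBouncer's pool_mode setting. The fragment below is an illustrative sketch; the option name is from pgBouncer's documentation, but verify behaviour against the version you run.]

```ini
; pgbouncer.ini fragment (illustrative sketch, not a tested configuration).
[pgbouncer]
; session:     one server connection per client session; session state
;              (SET, prepared statements, temp tables) is preserved,
;              but connections are barely shared.
; transaction: the server connection returns to the pool after each
;              transaction; much better sharing, but session-level
;              state breaks -- this is Filip's "stateless" pooling.
pool_mode = transaction
```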