Re: Connection pooling. - Mailing list pgsql-hackers

From Chris Bitmead
Subject Re: Connection pooling.
Msg-id 396BEA84.1A06F51F@nimrod.itg.telecom.com.au
In response to Connection pooling.  (Alfred Perlstein <bright@wintelcom.net>)
Responses Re: Connection pooling.  (Alfred Perlstein <bright@wintelcom.net>)
Re: Connection pooling.  (Tom Lane <tgl@sss.pgh.pa.us>)
List pgsql-hackers
Seems a lot trickier than you think. A backend can only run
one transaction at a time, so you'd have to keep track of which backends
are in the middle of a transaction, and I can imagine race conditions
there. Backends also carry per-client context established with SET and
friends. Then you'd have to worry about authentication each time a
connection is handed over, and you'd need some way of cleaning up old
and/or dead processes. It all really sounds a bit hard.
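
To make that bookkeeping concrete, here is a rough sketch, in C, of the
per-connection state a pooling postmaster would have to carry around and
hand off along with the socket. None of these names exist in the backend;
it's purely illustrative of how much context follows a connection:

#include <stdbool.h>

/*
 * Purely hypothetical: the record a pooling postmaster would need to
 * keep for each brokered client connection so that a different child
 * process could pick it up later.  Nothing here exists in the sources.
 */
typedef struct PooledConnection
{
    int         client_fd;       /* the client's socket, currently parked */
    unsigned    session_id;      /* key for looking up saved session state */
    bool        in_transaction;  /* true => must NOT be handed off yet */
    char        database[64];    /* database the client connected to */
    char        username[64];    /* authentication already completed */
    char       *set_context;     /* SET commands to replay before resuming */
    /* ... plus temp tables, open cursors, LISTEN registrations, etc. */
} PooledConnection;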

Alfred Perlstein wrote:
> 
> In an effort to complicate the postmaster beyond recognition I'm
> proposing an idea that I hope can be useful to the developers.
> 
> Connection pooling:
> 
> The idea is to have the postmaster multiplex and do hand-offs of
> database connections to other postgresql processes when the max
> connections has been exceeded.
> 
> This allows several gains:
> 
> 1) Postgresql can support a large number of connections without
> requiring a large number of processes to do so.
> 
> 2) Connection startup/finish will be cheaper because Postgresql
> processes will not exit and need to reinitialize things such as shared
> memory attachments and file opens.  This will also reduce the load
> on the supporting operating system and make Postgresql much 'cheaper'
> to run on systems that don't support the fork() model of execution
> gracefully.
> 
> 3) Long-running connections can be preempted at transaction boundaries,
> allowing other connections to gain process timeslices from the
> connection pool.
> 
> The idea is to make the postmaster that accepts connections a broker
> for the connections.  It will dole out descriptors to children using
> file descriptor passing.  If there's demand for connections, meaning
> that all the child processes are busy and there are pending
> connections, the postmaster can ask for a yield on one of the
> connections.
> 
> A yield involves the child Postgresql process passing back the
> client connection at a transaction boundary (between transactions)
> so it can later be given to another (perhaps the same) child process.
> 
> I spoke with Bruce briefly about this and he suggested that system
> tables containing unique IDs could be used to identify passed
> connections to the children and back to the postmaster.
> 
> When a handoff occurs, the descriptor, along with an ID referencing
> things like temp tables, environment variables, and authentication
> information, could be handed over as well, allowing the child to resume
> service to the interrupted connection.
> 
> I really don't have the knowledge of Postgresql internals to
> accomplish this, but the concepts are simple and the gains would
> seem to be very high.
> 
> Comments?
> 
> --
> -Alfred Perlstein - [bright@wintelcom.net|alfred@freebsd.org]
> "I have the heart of a child; I keep it in a jar on my desk."

