Thread: Connection pooling.

Connection pooling.

From
Alfred Perlstein
Date:
In an effort to complicate the postmaster beyond recognition I'm
proposing an idea that I hope can be useful to the developers.

Connection pooling:

The idea is to have the postmaster multiplex and do hand-offs of
database connections to other postgresql processes when the max
connections has been exceeded.

This allows several gains:

1) Postgresql can support a large number of connections without
requiring a large number of processes to do so.

2) Connection startup/finish will be cheaper because Postgresql
processes will not exit and then need to re-initialize things such as
shared memory attachments and file opens.  This will also reduce the
load on the supporting operating system and make postgresql much
'cheaper' to run on systems that don't support the fork() model of
execution gracefully.

3) Long running connections can be preempted at transaction boundaries
allowing other connections to gain process timeslices from the
connection pool.

The idea is to make the postmaster that accepts connections a broker
for the connections.  It will dole out descriptors using file
descriptor passing to children.  If there's demand for connections,
meaning that all the backends are busy and there are pending
connections, the postmaster can ask for a yield on one of the
connections.

A yield involves the child postgresql process passing back the
client connection at a transaction boundary (between transactions)
so it can later be given to another (perhaps the same) child process.

I spoke with Bruce briefly about this and he suggested that system
tables containing unique IDs could be used to identify passed
connections to the children and back to the postmaster.

When a handoff occurs, the descriptor along with an ID referencing
things like temp tables and environment variables and authentication
information could be handed out as well allowing the child to resume
service to the interrupted connection.

I really don't have the knowledge of Postgresql internals to
accomplish this, but the concepts are simple and the gains would
seem to be very high.

Comments?

-- 
-Alfred Perlstein - [bright@wintelcom.net|alfred@freebsd.org]
"I have the heart of a child; I keep it in a jar on my desk."


Re: Connection pooling.

From
Lamar Owen
Date:
On Tue, 11 Jul 2000, Alfred Perlstein wrote:
> In an effort to complicate the postmaster beyond recognition I'm
> proposing an idea that I hope can be useful to the developers.
> Connection pooling:
> The idea is to have the postmaster multiplex and do hand-offs of
> database connections to other postgresql processes when the max
> connections has been exceeded.

AOLserver is one client that already does this, using the existing fe-be
protocol.  It would be a good model to emulate -- although, to date, there
hasn't been much interest from the main developers on spending the time to do
this.

If you need or want this performance on a db-backed website, use AOLserver :-P
or some good connection pooling module for Apache, et al.  PHP does a form of
persistent connections, but I don't know enough about them to know if they are
truly pooled (as AOLserver's are).  I do know that AOLserver's pooling is a
major performance win.

As Ben has already said, this is a good place for client-side optimization,
which is really where it would get the most use anyway.

AOLserver has done this since around early 1995.

--
Lamar Owen
WGCR Internet Radio
1 Peter 4:11


Re: Connection pooling.

From
Chris Bitmead
Date:
Seems a lot trickier than you think. A backend can only be running
one transaction at a time, so you'd have to keep track of which backends
are in the middle of a transaction. I can imagine race conditions here.
And backends can have contexts that are set by various clients using
SET and friends. Then you'd have to worry about authentication each
time. And you'd have to have algorithms for cleaning up old processes
and/or dead processes. It all really sounds a bit hard. 

Alfred Perlstein wrote:
> 
> In an effort to complicate the postmaster beyond recognition I'm
> proposing an idea that I hope can be useful to the developers.
> 
> Connection pooling:
> 
> The idea is to have the postmaster multiplex and do hand-offs of
> database connections to other postgresql processes when the max
> connections has been exceeded.
> 
> This allows several gains:
> 
> 1) Postgresql can support a large number of connections without
> requiring a large number of processes to do so.
> 
> 2) Connection startup/finish will be cheaper because Postgresql
> processes will not exit and then need to re-initialize things such as shared
> memory attachments and file opens.  This will also reduce the load
> on the supporting operating system and make postgresql much 'cheaper'
> to run on systems that don't support the fork() model of execution
> gracefully.
> 
> 3) Long running connections can be preempted at transaction boundaries
> allowing other connections to gain process timeslices from the
> connection pool.
> 
> The idea is to make the postmaster that accepts connections a broker
> for the connections.  It will dole out descriptors using file
> descriptor passing to children.  If there's demand for connections,
> meaning that all the backends are busy and there are pending
> connections, the postmaster can ask for a yield on one of the
> connections.
> 
> A yield involves the child postgresql process passing back the
> client connection at a transaction boundary (between transactions)
> so it can later be given to another (perhaps the same) child process.
> 
> I spoke with Bruce briefly about this and he suggested that system
> tables containing unique IDs could be used to identify passed
> connections to the children and back to the postmaster.
> 
> When a handoff occurs, the descriptor along with an ID referencing
> things like temp tables and environment variables and authentication
> information could be handed out as well allowing the child to resume
> service to the interrupted connection.
> 
> I really don't have the knowledge of Postgresql internals to
> accomplish this, but the concepts are simple and the gains would
> seem to be very high.
> 
> Comments?
> 
> --
> -Alfred Perlstein - [bright@wintelcom.net|alfred@freebsd.org]
> "I have the heart of a child; I keep it in a jar on my desk."


Re: Connection pooling.

From
Jeffery Collins
Date:
It seems like a first step would be to just have postmaster cache unused
connections.  In other words if a client closes a connection, postmaster
keeps the connection and the child process around for the next connect
request.  This has many of your advantages, but not all.  However, it seems
like it would be simpler than attempting to multiplex a connection between
multiple clients.

Jeff

>
> Alfred Perlstein wrote:
> >
> > In an effort to complicate the postmaster beyond recognition I'm
> > proposing an idea that I hope can be useful to the developers.
> >
> > Connection pooling:
> >
> > The idea is to have the postmaster multiplex and do hand-offs of
> > database connections to other postgresql processes when the max
> > connections has been exceeded.
> >
> > This allows several gains:
> >
> > 1) Postgresql can support a large number of connections without
> > requiring a large number of processes to do so.
> >
> > 2) Connection startup/finish will be cheaper because Postgresql
> > processes will not exit and then need to re-initialize things such as shared
> > memory attachments and file opens.  This will also reduce the load
> > on the supporting operating system and make postgresql much 'cheaper'
> > to run on systems that don't support the fork() model of execution
> > gracefully.
> >
> > 3) Long running connections can be preempted at transaction boundaries
> > allowing other connections to gain process timeslices from the
> > connection pool.
> >
> > The idea is to make the postmaster that accepts connections a broker
> > for the connections.  It will dole out descriptors using file
> > descriptor passing to children.  If there's demand for connections,
> > meaning that all the backends are busy and there are pending
> > connections, the postmaster can ask for a yield on one of the
> > connections.
> >
> > A yield involves the child postgresql process passing back the
> > client connection at a transaction boundary (between transactions)
> > so it can later be given to another (perhaps the same) child process.
> >
> > I spoke with Bruce briefly about this and he suggested that system
> > tables containing unique IDs could be used to identify passed
> > connections to the children and back to the postmaster.
> >
> > When a handoff occurs, the descriptor along with an ID referencing
> > things like temp tables and environment variables and authentication
> > information could be handed out as well allowing the child to resume
> > service to the interrupted connection.
> >
> > I really don't have the knowledge of Postgresql internals to
> > accomplish this, but the concepts are simple and the gains would
> > seem to be very high.
> >
> > Comments?
> >
> > --
> > -Alfred Perlstein - [bright@wintelcom.net|alfred@freebsd.org]
> > "I have the heart of a child; I keep it in a jar on my desk."



Re: Connection pooling.

From
Philip Warner
Date:
At 23:10 11/07/00 -0400, Jeffery Collins wrote:
>It seems like a first step would be to just have postmaster cache unused
>connections.  In other words if a client closes a connection, postmaster
>keeps the connection and the child process around for the next connect
>request.  This has many of your advantages, but not all.  However, it seems
>like it would be simpler than attempting to multiplex a connection between
>multiple clients.
>

Add the ability to tell the postmaster to keep a certain number of 'free'
servers (up to a max total, of course), and you can then design your apps
to connect/disconnect very quickly. This way you don't need to request a
client to get off - you trust the app designer to disconnect whenever they
can.


----------------------------------------------------------------
Philip Warner                    |     __---_____
Albatross Consulting Pty. Ltd.   |----/       -  \
(A.C.N. 008 659 498)             |          /(@)   ______---_
Tel: (+61) 0500 83 82 81         |                 _________  \
Fax: (+61) 0500 83 82 82         |                 ___________ |
Http://www.rhyme.com.au          |                /           \|
                                 |    --________--
PGP key available upon request,  |  /
and from pgp5.ai.mit.edu:11371   |/


Re: Connection pooling.

From
Bruce Momjian
Date:
> It seems like a first step would be to just have postmaster cache unused
> connections.  In other words if a client closes a connection, postmaster
> keeps the connection and the child process around for the next connect
> request.  This has many of your advantages, but not all.  However, it seems
> like it would be simpler than attempting to multiplex a connection between
> multiple clients.
> 

This does seem like a good optimization.

--
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026
 


Re: Connection pooling.

From
Alfred Perlstein
Date:
* Chris Bitmead <chrisb@nimrod.itg.telstra.com.au> [000711 20:53] wrote:
> 
> Seems a lot trickier than you think. A backend can only be running
> one transaction at a time, so you'd have to keep track of which backends
> are in the middle of a transaction. I can imagine race conditions here.
> And backends can have contexts that are set by various clients using
> SET and friends. Then you'd have to worry about authentication each
> time. And you'd have to have algorithms for cleaning up old processes
> and/or dead processes. It all really sounds a bit hard. 

The backends can simply inform the postmaster when they are ready
either because they are done with a connection or because they
have just closed a transaction.

All the state (auth/temp tables) can be held in the system tables.

It's complicated, but nowhere near the order of something like
a new storage manager.

-Alfred


Re: Connection pooling.

From
Alfred Perlstein
Date:
* Bruce Momjian <pgman@candle.pha.pa.us> [000711 21:31] wrote:
> > It seems like a first step would be to just have postmaster cache unused
> > connections.  In other words if a client closes a connection, postmaster
> > keeps the connection and the child process around for the next connect
> > request.  This has many of your advantages, but not all.  However, it seems
> > like it would be simpler than attempting to multiplex a connection between
> > multiple clients.
> > 
> 
> This does seem like a good optimization.

I'm not sure the postmaster is needed for anything besides
fork()/exec()-ing the backend; if that's all it does, then when a
backend finishes it can just call accept() on the listening socket
inherited from the postmaster to get the next incoming connection.

-Alfred


Re: Connection pooling.

From
Tom Lane
Date:
Chris Bitmead <chrisb@nimrod.itg.telstra.com.au> writes:
> Seems a lot trickier than you think. A backend can only be running
> one transaction at a time, so you'd have to keep track of which backends
> are in the middle of a transaction. I can imagine race conditions here.

Aborting out of a transaction is no problem; we have code for that
anyway.  More serious problems:

* We have no code for reassigning a backend to a different database,
  so the pooling would have to be per-database.

* AFAIK there is no portable way to pass a socket connection from the
  postmaster to an already-existing backend process.  If you do a
  fork() then the connection is inherited ... otherwise you've got a
  problem.  (You could work around this if the postmaster relays every
  single byte in both directions between client and backend, but the
  performance problems with that should be obvious.)

> And backends can have contexts that are set by various clients using
> SET and friends.

Resetting SET variables would be a problem, and there's also the
assigned user name to be reset.  This doesn't seem impossible, but
it does seem tedious and error-prone.  (OTOH, Peter E's recent work
on guc.c might have unified option-handling enough to bring it
within reason.)

The killer problem here is that you can't hand off a connection
accepted by the postmaster to a backend except by fork() --- at least
not with methods that work on a wide variety of Unixen.  Unless someone
has a way around that, I think the idea is dead in the water; the lesser
issues don't matter.
        regards, tom lane


Re: Connection pooling.

From
Philip Warner
Date:
At 01:52 12/07/00 -0400, Tom Lane wrote:
>
>The killer problem here is that you can't hand off a connection
>accepted by the postmaster to a backend except by fork() --- at least
>not with methods that work on a wide variety of Unixen.  Unless someone
>has a way around that, I think the idea is dead in the water; the lesser
>issues don't matter.
>

My understanding of pg client interfaces is that the client uses one of the
pg interface libraries to make a connection to the db; they specify host &
port and get back some kind of connection object.

What stops the interface library from using the host & port to talk to the
postmaster, find the host & port of a spare db server, then connect directly
to the server? This second connection is passed back in the connection object.

When the client disconnects from the server, it tells the postmaster it's
available again etc.

ie. in very rough terms:
   client calls interface to connect
   interface talks to postmaster on port 5432, says "I want a server for xyz db"
   postmaster replies with "Try port ABCD" OR "no servers available"
   postmaster marks the nominated server as 'used'
   postmaster disconnects from client

   interface connects to port ABCD as per normal protocols
   interface fills in connection object & returns
   ...client does some work...
   client disconnects
   db server tells postmaster it's available again.


There would also need to be timeout code to handle the case where the
interface did not do the second connect.

You could also have the interface allocate a port, send its number to the
postmaster, and then listen on it, but I think that would represent a
potential security hole.


----------------------------------------------------------------
Philip Warner                    |     __---_____
Albatross Consulting Pty. Ltd.   |----/       -  \
(A.C.N. 008 659 498)             |          /(@)   ______---_
Tel: (+61) 0500 83 82 81         |                 _________  \
Fax: (+61) 0500 83 82 82         |                 ___________ |
Http://www.rhyme.com.au          |                /           \|
                                 |    --________--
PGP key available upon request,  |  /
and from pgp5.ai.mit.edu:11371   |/


Re: Connection pooling.

From
Alfred Perlstein
Date:
* Tom Lane <tgl@sss.pgh.pa.us> [000711 22:53] wrote:
> Chris Bitmead <chrisb@nimrod.itg.telstra.com.au> writes:
> > Seems a lot trickier than you think. A backend can only be running
> > one transaction at a time, so you'd have to keep track of which backends
> > are in the middle of a transaction. I can imagine race conditions here.
> 
> Aborting out of a transaction is no problem; we have code for that
> anyway.  More serious problems:
> 
> * We have no code for reassigning a backend to a different database,
>   so the pooling would have to be per-database.

That would need to be fixed.  How difficult would that be?

> * AFAIK there is no portable way to pass a socket connection from the
>   postmaster to an already-existing backend process.  If you do a
>   fork() then the connection is inherited ... otherwise you've got a
>   problem.  (You could work around this if the postmaster relays
>   every single byte in both directions between client and backend,
>   but the performance problems with that should be obvious.)

no, see below.

> > And backends can have contexts that are set by various clients using
> > SET and friends.
> 
> Resetting SET variables would be a problem, and there's also the
> assigned user name to be reset.  This doesn't seem impossible, but
> it does seem tedious and error-prone.  (OTOH, Peter E's recent work
> on guc.c might have unified option-handling enough to bring it
> within reason.)

What can be done is that each incoming connection can be assigned an
ID in a system table.  As options are added, the system would store
them as key-value pairs in this table.  Once the remote side is
detected to have closed the connection the data can be destroyed,
but until then the client's ID (an index into the table) can be
passed along with the descriptor for the backend to fetch.

> The killer problem here is that you can't hand off a connection
> accepted by the postmaster to a backend except by fork() --- at least
> not with methods that work on a wide variety of Unixen.  Unless someone
> has a way around that, I think the idea is dead in the water; the lesser
> issues don't matter.

The code has been around since 4.2BSD, it takes a bit of #ifdef to
get it right on all systems but it's not impossible, have a look at
http://www.fhttpd.org/ for a web server that does this in a portable
fashion.

I should have a library whipped up for you guys really soon now
to handle the descriptor and message passing.

-- 
-Alfred Perlstein - [bright@wintelcom.net|alfred@freebsd.org]
"I have the heart of a child; I keep it in a jar on my desk."


Re: Connection pooling.

From
Tom Lane
Date:
Alfred Perlstein <bright@wintelcom.net> writes:
> * Tom Lane <tgl@sss.pgh.pa.us> [000711 22:53] wrote:
>> The killer problem here is that you can't hand off a connection
>> accepted by the postmaster to a backend except by fork() --- at least
>> not with methods that work on a wide variety of Unixen.

> The code has been around since 4.2BSD, it takes a bit of #ifdef to
> get it right on all systems but it's not impossible, have a look at
> http://www.fhttpd.org/ for a web server that does this in a portable
> fashion.

I looked at this to see if it would teach me something I didn't know.
It doesn't.  It depends on sendmsg() which is a BSD-ism and not very
portable.
        regards, tom lane


Re: Connection pooling.

From
Alfred Perlstein
Date:
* Tom Lane <tgl@sss.pgh.pa.us> [000712 00:04] wrote:
> Alfred Perlstein <bright@wintelcom.net> writes:
> > * Tom Lane <tgl@sss.pgh.pa.us> [000711 22:53] wrote:
> >> The killer problem here is that you can't hand off a connection
> >> accepted by the postmaster to a backend except by fork() --- at least
> >> not with methods that work on a wide variety of Unixen.
> 
> > The code has been around since 4.2BSD, it takes a bit of #ifdef to
> > get it right on all systems but it's not impossible, have a look at
> > http://www.fhttpd.org/ for a web server that does this in a portable
> > fashion.
> 
> I looked at this to see if it would teach me something I didn't know.
> It doesn't.  It depends on sendmsg() which is a BSD-ism and not very
> portable.

It's also specified by Posix.1g if that means anything.

-Alfred


Re: Connection pooling.

From
Tom Lane
Date:
Philip Warner <pjw@rhyme.com.au> writes:
> What stops the interface library from using the host & port to talk to
> the postmaster, find the host & port the spare db server, then connect
> directly to the server?

You're assuming that we can change the on-the-wire protocol freely and
only the API presented by the client libraries matters.  In a perfect
world that might be true, but reality is that we can't change the wire
protocol easily.  If we do, it breaks all existing precompiled clients.
Updating clients can be an extremely painful experience in multiple-
machine installations.
Also, we don't have just one set of client libraries to fix.  There are
at least three client-side implementations that don't depend on libpq.

We have done protocol updates in the past --- in fact I was responsible
for the last one.  (And I'm still carrying the scars, which is why I'm
not too enthusiastic about another one.)  It's not impossible, but it
needs more evidence than "this should speed up connections by
I-don't-know-how-much"...

It might also be worth pointing out that the goal was to speed up the
end-to-end connection time.  Redirecting as you suggest is not free
(at minimum it would appear to require two TCP connection setups and two
authentication cycles).  What evidence have you got that it'd be faster
than spawning a new backend?

I tend to agree with the opinion that connection-pooling on the client
side offers more bang for the buck than we could hope to get by doing
surgery on the postmaster/backend setup.

Also, to return to the original point, AFAIK we have not tried hard
to cut the backend startup time, other than the work that was done
a year or so back to eliminate exec() of a separate executable.
It'd be worth looking to see what could be done there with zero
impact on existing clients.
        regards, tom lane