Re: Connection Pooling, a year later - Mailing list pgsql-hackers

From Mark Pritchard
Subject Re: Connection Pooling, a year later
Date
Msg-id EGECIAPHKLJFDEJBGGOBGEIJFNAA.mark@tangent.net.au
In response to Re: Connection Pooling, a year later  (Bruce Momjian <pgman@candle.pha.pa.us>)
List pgsql-hackers
> I think it is the startup cost that most people want to avoid, and ours
> is higher than most db's that use threads; at least I think so.
>
> It would just be nice to have it done internally rather than have all
> the clients do it, iff it can be done cleanly.

I'd add that client-side connection pooling isn't effective in some cases
anyway - one system we work with has 4 physical application servers, each
running around 6 applications. Each of the applications was written by a
different vendor, so even a modest pool size of five per application gives
you 4 x 6 x 5 = 120 open connections.
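The multiplication above, made explicit (the figures are the ones from the message; the per-application pool size of five is the stated example):

```python
# Connection count when each application holds its own client-side pool.
app_servers = 4          # physical application servers
apps_per_server = 6      # roughly 6 applications on each server
pool_size_per_app = 5    # each vendor's app keeps its own pool of 5

total_connections = app_servers * apps_per_server * pool_size_per_app
print(total_connections)  # 120 open connections, even with modest pools
```

The point is that per-client pools multiply: every independent client stack pays its own pool, and the server sees the product.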

As noted in another message, implementing pooling in libpq wouldn't help
JDBC clients either, since the JDBC driver talks to the server directly
rather than going through libpq.

My knowledge of the PostgreSQL internals is rather limited, but could you
not kick off a number of backends in advance and use the already existing
block of shared memory to hand them requests to process?
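The idea above might be sketched roughly like this - a toy model, not PostgreSQL internals: pre-forked worker processes stand in for backends, and a multiprocessing queue stands in for the shared-memory block (all names here are illustrative, assumed for the sketch):

```python
# Hypothetical sketch: pre-fork a fixed set of "backend" processes once,
# then let clients enqueue requests instead of paying startup cost per
# connection. A Queue models the shared memory requests are grabbed from.
import multiprocessing as mp


def backend(requests: mp.Queue, results: mp.Queue) -> None:
    # Each pre-forked backend loops, grabbing requests from the shared
    # queue until it receives a None sentinel telling it to exit.
    while True:
        req = requests.get()
        if req is None:
            break
        results.put("processed " + req)


def run_pool(num_backends: int, queries: list) -> list:
    requests, results = mp.Queue(), mp.Queue()
    backends = [mp.Process(target=backend, args=(requests, results))
                for _ in range(num_backends)]
    for b in backends:
        b.start()                 # startup cost paid once, up front
    for q in queries:
        requests.put(q)           # clients just enqueue work
    out = [results.get() for _ in queries]
    for _ in backends:
        requests.put(None)        # one shutdown sentinel per backend
    for b in backends:
        b.join()
    return out
```

For example, `run_pool(3, ["SELECT 1", "SELECT 2"])` serves both queries from the three already-started workers; no process is forked per request.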

Cheers,

Mark Pritchard


