I have a question that may be related to connection pooling.
We create a bunch of high-performance lightweight Postgres clients that serve up images (via mod_perl and Apache::DBI).
We have roughly ten web sites, with ten mod_perl instances each, so we always have around 100 Postgres backends sitting
around all the time waiting. When a lightweight request comes in, it's a single query on a primary key with no joins,
so it's very fast.
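Roughly, each lightweight handler does something like this (just a sketch to show what I mean -- the "images" table,
"id" column, and connect string are made-up names):

    # Sketch only: table/column names and the DSN are illustrative.
    use strict;
    use warnings;
    use Apache::DBI;   # hooks DBI->connect so the handle persists per child
    use DBI;

    # Apache::DBI caches this, so each mod_perl process keeps one
    # Postgres backend open across requests.
    my $dbh = DBI->connect('dbi:Pg:dbname=images', 'www', '',
                           { RaiseError => 1, AutoCommit => 1 });

    sub fetch_image {
        my ($id) = @_;
        # One primary-key lookup, no joins -- just an index probe.
        return scalar $dbh->selectrow_array(
            'SELECT data FROM images WHERE id = ?', undef, $id);
    }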
We also have a very heavyweight process (our primary search technology) that can take many seconds, even minutes, to do
a search and generate a web page.
The lightweight backends are mostly idle, but when a heavyweight search finishes, it causes a burst on the lightweight
backends, which must be very fast. (They provide all of the images in the results page.)
This mixture seems to make it hard to configure Postgres with the right memory settings. The heavyweight search query
needs some elbow room to do its work, but the lightweight queries all get the same resources.
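To illustrate what I mean by elbow room: in principle the heavyweight session could raise its own work_mem so the
100 lightweight backends keep the small default (a sketch -- $heavy_dbh and the 64MB figure are hypothetical):

    # Per-session override: only this backend gets the larger sort/hash
    # memory; everyone else keeps the postgresql.conf default.
    # (Older Postgres releases want an integer in KB instead of '64MB'.)
    $heavy_dbh->do(q{SET work_mem = '64MB'});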
I figured that having these lightweight Postgres backends sitting around was harmless -- they allocate shared memory
and other resources, but they never use them, so what's the harm? But recent discussions about connection pooling seem
to suggest otherwise, that merely having 100 backends sitting around might be a problem.
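In case it matters, the idle ones are easy to count from pg_stat_activity (a sketch -- the current_query = '<IDLE>'
test is what older Postgres releases report; newer versions expose this differently):

    my ($idle) = $dbh->selectrow_array(
        q{SELECT count(*) FROM pg_stat_activity
          WHERE current_query = '<IDLE>'});
    print "$idle idle backends\n";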
Craig