Kirk Strauser wrote:
> On Jan 15, 2009, at 12:30 PM, Steve Crawford wrote:
>
>> But if your application is designed to work well with pooling, it can
>> provide dramatic performance benefits.
>
> I think that's the problem. As I mentioned at one point, a lot of our
> applications have connections open for hours at a time and fire off
> queries when the user does something. I'm coming to think that
> pooling wouldn't give much benefit to long-living processes like that.
>
If you know that the application does not change GUC variables (for
example via SET commands), you will probably benefit greatly from using
pgbouncer. If all the queries are single statements, set
pool_mode=statement; if you have multiple-statement transactions,
configure pgbouncer to use pool_mode=transaction. Either way, your app
won't tie up a backend connection while it is sitting idle. You will
probably find that you can handle your hundreds of clients with a
pretty small pool of backend connections, and pgbouncer will give you
some nice statistics to help you adjust the pool sizing and such.
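As a rough sketch only (the database name, paths and pool sizes below
are placeholders you'd adjust for your own setup), a minimal
pgbouncer.ini for transaction pooling might look something like:

  [databases]
  ; placeholder values -- point the virtual name at your real backend
  mydb = host=127.0.0.1 port=5432 dbname=mydb

  [pgbouncer]
  listen_addr = 127.0.0.1
  listen_port = 6432
  auth_type = md5
  auth_file = /etc/pgbouncer/userlist.txt
  pool_mode = transaction
  ; client connections pgbouncer itself will accept
  max_client_conn = 400
  ; real backend connections per database/user pair
  default_pool_size = 20
  ; users allowed on the admin console / stats
  admin_users = postgres
  stats_users = postgres

Then you point the applications at port 6432 instead of 5432 and they
all share the small backend pool.
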
> On a related note, is max_connections=400 reasonably sized for a
> server with 8GB of RAM? Again, most of these are dormant at any given
> time. The database itself is currently hosted on a dual Xeon server
> with 3GB of RAM and other applications so I'm sure the new 8-core/8GB
> hardware is bound to do better at any rate.
Too little info to say (and others here can answer that better
anyway), but I think you should test pooling and find out how many
backend connections you really need before jumping into tuning. I
haven't tried Pgpool* but have found pgbouncer to be easy to use,
reliable and effective.
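
For example (using the port and user names from the sketch above,
which are just the usual defaults), you can connect to pgbouncer's
admin console -- the virtual database named "pgbouncer" -- with a user
listed in stats_users and watch the counters:

  $ psql -p 6432 -U postgres pgbouncer
  pgbouncer=# SHOW POOLS;
  pgbouncer=# SHOW STATS;

SHOW POOLS reports active/waiting clients and active/idle server
connections per pool, and SHOW STATS gives per-database request totals
and average query times, which is usually enough to tell whether the
pool is sized about right.
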
Cheers,
Steve