Connection Pooling, a year later - Mailing list pgsql-hackers

From August Zajonc
Subject Connection Pooling, a year later
Msg-id OJEJIPPNGKHEBGFEHPLMAEPGCCAA.ml@augustz.com
List pgsql-hackers
I recall there was a reasonably nice client-side attempt at this using a
worker pool model; can't seem to track it down at the moment. It would also
spread queries across backends in different ways to get a hot-backup
equivalent and so on. It was slick.

The key is that pgsql be able to support a very large number of concurrent
connections and transactions. It would be neat to see some numbers from your
attempt.

A site I used to run had 6 front-end webservers running PHP apps. Each
persistent connection (a requirement, to avoid the overhead of connection
set-up/teardown) lived as long as its httpd process did, even when idle. That
meant at 250 processes per server we had a good 1,500 connections ticking
over. Our feeling was that rather than growing to 3,000 connections as the
front end grew, why not pool the connections from each machine down to
perhaps 75 worker connections per machine that actually did the work.
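
For illustration only, here is a minimal sketch of that per-machine pooling
in Python, using psycopg2's ThreadedConnectionPool; the pool sizes, DSN, and
run_query helper are assumptions for the example, not anything from the site
above:

from psycopg2.pool import ThreadedConnectionPool

# One pool per front-end machine: ~75 shared worker connections instead
# of one persistent backend per httpd process (~250 per server).
pool = ThreadedConnectionPool(
    minconn=5,
    maxconn=75,
    dsn="dbname=app user=web host=db.example.com",  # illustrative DSN
)

def run_query(sql, params=None):
    # Borrow a connection, run one query, and always hand it back.
    conn = pool.getconn()
    try:
        with conn.cursor() as cur:
            cur.execute(sql, params)
            rows = cur.fetchall()
        conn.commit()
        return rows
    finally:
        pool.putconn(conn)

The point of the design is that the pool, not the httpd process count, bounds
the number of backends the database ever sees.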

Looks like that's not an issue if these backends suck up few resources.
Doing something similar with MySQL, we'd experience problems once we got into
the 2,000-connection range (with kernel/system limits bumped plenty high).

While we are on TODOs, I would like to point out that some way to fully
vacuum (i.e. recover the space from deleted and updated rows) while a db is
in full swing is critical for larger installations. We did 2 billion queries
between reboots on a quad-Xeon MySQL box, and those were real user-driven
queries, not data loads or anything like that. At 750-1000 queries/second,
bringing the database down or seriously degrading its performance is not a
good option.
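
For what it's worth, plain VACUUM (without FULL) stopped needing an exclusive
table lock as of 7.2, so routine vacuuming can at least run while the
database serves queries; it's VACUUM FULL, which actually shrinks the files,
that still locks the table. A minimal sketch of running one from a script, in
the same vein as the pool example above (connection string and table name are
made up):

import psycopg2

conn = psycopg2.connect("dbname=app user=web host=db.example.com")
try:
    conn.autocommit = True  # VACUUM cannot run inside a transaction block
    with conn.cursor() as cur:
        cur.execute("VACUUM ANALYZE accounts")  # 'accounts' is illustrative
finally:
    conn.close()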

Enjoy playing with pgsql as always....

- AZ


