Re: Wiki link for max_connections? (Fwd: Re: [ADMIN] PostgreSQL oom_adj postmaster process to -17) - Mailing list pgsql-hackers

From: Robert Haas
Subject: Re: Wiki link for max_connections? (Fwd: Re: [ADMIN] PostgreSQL oom_adj postmaster process to -17)
Msg-id: CA+TgmoYZoR7at+6cMEZ4hPt3L=1dP0+FyNxRHRznRzcCe-orEw@mail.gmail.com
In response to: Re: Wiki link for max_connections? (Fwd: Re: [ADMIN] PostgreSQL oom_adj postmaster process to -17) ("Kevin Grittner" <Kevin.Grittner@wicourts.gov>)
List: pgsql-hackers
On Tue, Sep 4, 2012 at 11:15 AM, Kevin Grittner
<Kevin.Grittner@wicourts.gov> wrote:
> It is something which has to be considered, and I don't think it's
> theoretical at all.  Here's how we deal with it.  We don't use a
> plain FIFO queue for our transaction requests, but a prioritized
> FIFO with 10 levels of priority (0 to 9).  The highest priority (9)
> is reserved for "utility" requests -- where a running transaction
> needs to spin off a related transaction to do some work for it.  For
> the lowest level (0) we normally allocate only a single connection,
> and it is used for very long-running reports which we want to queue
> to run one-at-a-time.  As examples of how we categorize queries,
> filling a large list in an interactive application runs at priority
> 3, while translating a key whose description must display on the
> screen runs at priority 8.  Normal single-row updates and deletes
> from an interactive application run at priority 5.
>
> Each connection in the pool has a worker thread, and is assigned a
> minimum priority that it will handle.  When all threads are busy and
> transaction requests are queued, any thread completing a database
> transaction pulls from the front of the highest priority queue with
> a waiting request to run a transaction, looking only at priorities
> which are not "beneath it".  If there are no waiting requests of
> high enough priority, the thread waits for one to arrive.

I can well believe that with this sort of sophisticated system you can
make the connection pool much smaller and get a benefit out of it.
However, I think it's quite rare for people to have a system this
sophisticated.  I suspect that's why I typically see much larger pool
sizes.

Here's my other thought about this: we talk a lot about how a system
with 32 cores and 40 drives can't do more than 72 things at once, and
that's absolutely true.  But I think much of the reason why PostgreSQL
users get a benefit out of connection pooling is unrelated to that
effect.  What I think we're really working around, in many cases, is
internal lock contention.  That's why people are talking about
adjusting formulas for 9.2.  It's not that a system with 72 resources
can suddenly do more than 72 things; it's that in the old world lock
contention could easily make it a loser to have even half that many
tasks running at once, and now that's less true.  Hopefully we'll make
further improvements in the future and it'll become less true still.
So is the real issue the hardware limits of the server, or is
it the limits of our software?  The former is certainly in the mix,
but I personally believe the latter has a lot more to do with pool
size selection than we typically credit.
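
[For concreteness, the "72 things at once" arithmetic above is just one
active task per CPU core plus one per disk spindle.  A trivial sketch;
the function name is my own, and this is only the arithmetic from the
example, not a formula PostgreSQL itself prescribes:]

    # One active task per CPU core plus one per disk spindle: the
    # back-of-the-envelope limit used in the example above.
    def hardware_parallelism(cores, spindles):
        return cores + spindles

    print(hardware_parallelism(32, 40))  # -> 72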

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


