Re: Proposal of tunable fix for scalability of 8.4 - Mailing list pgsql-performance

From: Kevin Grittner
Subject: Re: Proposal of tunable fix for scalability of 8.4
Date:
Msg-id: 49B91B5B.EE98.0025.0@wicourts.gov
In response to: Re: Proposal of tunable fix for scalability of 8.4 ("Jignesh K. Shah" <J.K.Shah@Sun.COM>)
List: pgsql-performance
>>> "Jignesh K. Shah" <J.K.Shah@Sun.COM> wrote:
> What we have is a pool of 2000 users, and we start making each user
> do a series of transactions on different rows and see how much the
> database can handle linearly before some bottleneck (system or
> database) kicks in and there can be no more linear increase in
> active users. Many times there is a drop after reaching some number
> of active users. If all 2000 users can scale linearly, then another
> test with, say, 2500 can be executed. The goal is to find how far we
> can go before there are no system resources left to exploit.

> I don't think I have misconfigured the system.

If you're not using a queuing connection pool with that many users, I
think you have.  Let me illustrate with a simple example.

Imagine you have one CPU with negligible hardware resource delays,
and 100 queries are submitted at the same moment, each needing one
second of CPU time.  If you start them all concurrently, the CPU
time-slices among them and they all finish at about the 100-second
mark, for an average completion time of 100 seconds.  If you queue
them and run them one at a time, the first is done in one second and
the last in 100 seconds, for an average completion time of
(1 + 2 + ... + 100) / 100 = 50.5 seconds.  The context switching and
extra RAM needed for the concurrent connections only make the
difference worse.
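
To make the arithmetic concrete, here's a rough sketch in Python
(purely illustrative; the numbers are just the ones from the example
above):

  N = 100      # queries submitted at the same moment
  work = 1.0   # CPU-seconds each query needs

  # Fully concurrent with fair time-slicing: nobody finishes until
  # nearly all N seconds of total work are done.
  concurrent_avg = N * work    # every query completes at ~100 s

  # Strict FIFO, one at a time: query i completes at t = i seconds.
  serial_avg = sum(i * work for i in range(1, N + 1)) / N    # 50.5 s

  print(concurrent_avg, serial_avg)    # 100.0 50.5

The total work is the same either way; queuing just stops every query
from paying for every other query's time slices.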

What makes concurrent queries helpful is that one might block waiting
on a resource, and another can run during that time.  Still, there is
a concurrency level at which the above effect comes into play.  The
more CPUs and spindles you have, the higher the count of useful
concurrent sessions; but there will always be a point where you're
better off queuing additional requests and scheduling them.  The RAM
usage per connection and the cost of context switching pretty much
guarantee that.
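
As a back-of-the-envelope illustration of where that point falls (my
numbers here are hypothetical, not measurements): if each query needs
some CPU time and then blocks for some wait time, roughly
1 + wait/CPU active sessions are enough to keep one CPU busy, and
sessions beyond that only add switching overhead.

  cpu = 0.2    # hypothetical CPU-seconds per query
  wait = 0.6   # hypothetical seconds blocked on I/O per query
  n = 100

  one_at_a_time = n * (cpu + wait)     # 80 s; the CPU idles during every wait
  fully_overlapped = n * cpu + wait    # ~20.6 s; waits hidden behind other queries' CPU
  useful_sessions = 1 + wait / cpu     # ~4 active sessions saturate one CPU

  print(one_at_a_time, fully_overlapped, useful_sessions)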

With our hardware and workloads, the pattern I've seen is that we
settle in best with a pool which limits the number of active queries
to about twice the CPU count plus the number of effective spindles.
Other hardware environments and workloads will undoubtedly have
different "sweet spots"; however, 2000 concurrent queries running on
64 CPUs with no significant latency on storage or network is almost
certainly *not* a sweet spot.  Changing PostgreSQL to be well
optimized for such a misconfigured system seems ill-advised to me.
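
In code form, the rule of thumb is trivial (this is our heuristic,
and the spindle count in the example is made up):

  def pool_size(cpu_count, effective_spindles):
      # Our rule of thumb: cap active queries at roughly
      # (2 * CPUs) + effective spindles, then tune from there.
      return 2 * cpu_count + effective_spindles

  # e.g., 64 CPUs with a hypothetical 16 effective spindles:
  print(pool_size(64, 16))    # 144 -- a far cry from 2000 active queries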

On the other hand, I'd love to see numbers for your change in a more
optimally configured environment.  We found that allowing a
"thundering herd" worked pretty well for letting threads in our
framework's database service compete to pull requests off the
prioritized request queue -- as long as the herd didn't get too big.
I just want to see some plausible evidence from a test environment
which seems reasonable to me before I spend time setting up my own
benchmarks.

> I am trying another run where I limit the woken threads to a
> pre-configured number to see how various numbers pan out in terms
> of throughput on this server.

Please ensure that requests are queued when all allowed connections
are busy, and that when a connection completes a request it
immediately begins serving another.  Routing requests through a
method which introduces an arbitrary sleep delay before waking up and
checking again is not going to be very convincing.  It would also
help if the number of connections used were tied to your pool size,
with max_connections adjusted proportionally.
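
Something along these lines is what I have in mind (a hypothetical
sketch, not our actual service code): workers block on a shared
queue, so a connection that finishes one request picks up the next
immediately, with no sleep-and-poll loop.

  import queue
  import threading

  POOL_SIZE = 4               # allowed connections; keep max_connections in step
  requests = queue.Queue()    # FIFO here; ours is prioritized

  def worker(conn_id):
      while True:
          job = requests.get()    # blocks until work arrives -- no polling
          if job is None:         # shutdown sentinel
              return
          job(conn_id)            # run the transaction on this connection
          # then loop immediately to serve the next queued request

  threads = [threading.Thread(target=worker, args=(i,))
             for i in range(POOL_SIZE)]
  for t in threads:
      t.start()

  for n in range(10):             # ten requests; only POOL_SIZE run at once
      requests.put(lambda conn, n=n: print("request", n, "on connection", conn))

  for _ in threads:               # one sentinel per worker
      requests.put(None)
  for t in threads:
      t.join()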

-Kevin
