Glenn Maynard <glennfmaynard@gmail.com> wrote:
> I'm sorry, but I'm confused. Everyone keeps talking about
> connection pooling, but Dimitri has said repeatedly that each client
> makes a single connection and then keeps it open until the end of
> the test, not that it makes a single connection per SQL query.
> Connection startup costs shouldn't be an issue. Am I missing
> something here?

Quite aside from the overhead of spawning new processes, if you have
more active connections than you have the resources to service, you
just increase context switching and resource contention, both of which
have some cost without any offsetting gain. That would tend to explain
why performance tapers off after a certain point. A connection pool
which queues requests prevents this degradation.
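
To make the queuing point concrete, here's a rough sketch (untested; the
DSN, sizes, and names are just placeholders, and psycopg2 is assumed) of
a pool that hands out at most a fixed number of backend connections and
makes any extra clients wait in line instead of opening more backends:

# Sketch of a connection pool that queues requests rather than letting
# more backends run than the server has resources to service.
import queue
import threading
import psycopg2

class QueuingPool:
    def __init__(self, dsn, size):
        # Open exactly 'size' connections up front; that is the cap.
        self._idle = queue.Queue()
        for _ in range(size):
            self._idle.put(psycopg2.connect(dsn))

    def execute(self, sql, params=None):
        # Callers beyond 'size' block here (in the queue) instead of
        # adding to context switching and contention on the server.
        conn = self._idle.get()
        try:
            cur = conn.cursor()
            cur.execute(sql, params)
            rows = cur.fetchall() if cur.description else None
            conn.commit()
            cur.close()
            return rows
        finally:
            self._idle.put(conn)

# Example: 64 client threads, but only 8 active backends at once.
def worker(pool):
    for _ in range(100):
        pool.execute("SELECT 1")

if __name__ == "__main__":
    pool = QueuingPool("dbname=test", size=8)
    threads = [threading.Thread(target=worker, args=(pool,))
               for _ in range(64)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

The clients still each hold one logical "connection" to the pool for the
whole test; the point is just that the number of backends doing work at
any instant stays at whatever the hardware can actually use.
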
It would be interesting, with each of the CPU counts, to profile
PostgreSQL at the peak of each curve to see where the time goes when
it is operating with an optimal pool size. The tapering after that
point is rather uninteresting, and profiles taken there would be less
useful, as the noise from the context switching and resource contention
would make it harder to spot the issues that really matter.
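
A quick-and-dirty way to find that peak before breaking out the profiler
might look like this (sketch only; it reuses the QueuingPool sketch
above, and the query, DSN, and client counts are stand-ins for the real
test workload):

# Sketch: sweep pool sizes, measure throughput for a fixed amount of
# work, then profile at whichever size comes out on top.
import time
import threading

def run_once(dsn, pool_size, clients=64, queries_per_client=200):
    pool = QueuingPool(dsn, pool_size)       # pool from the sketch above

    def worker():
        for _ in range(queries_per_client):
            pool.execute("SELECT 1")          # stand-in for the test query

    threads = [threading.Thread(target=worker) for _ in range(clients)]
    start = time.time()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    elapsed = time.time() - start
    return clients * queries_per_client / elapsed   # queries per second

if __name__ == "__main__":
    for size in (1, 2, 4, 8, 16, 32, 64):
        print(size, round(run_once("dbname=test", size)))
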
-Kevin