Re: how much postgres can scale up? - Mailing list pgsql-performance

From Craig Ringer
Subject Re: how much postgres can scale up?
Msg-id 4DF215B7.7040005@postnewspapers.com.au
In response to how much postgres can scale up?  ("Anibal David Acosta" <aa@devshock.com>)
List pgsql-performance
On 06/10/2011 07:29 PM, Anibal David Acosta wrote:

> I know that with this information you can figure out some things, but
> under normal conditions, is it normal for per-connection performance to
> degrade as the number of connections increases?

With most loads, you will find that per-worker throughput decreases as
you add workers. Overall throughput will usually increase with the
number of workers until you reach a certain "sweet spot", then decrease
as you add workers beyond it.
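That rise-then-fall shape can be illustrated with a toy model. The sketch below uses Gunther's Universal Scalability Law with made-up contention/coherency parameters; it is not Postgres-specific, and the real curve for your system can only come from benchmarking.

```python
# Toy illustration of throughput vs. worker count using the Universal
# Scalability Law. sigma (contention) and kappa (coherency cost) are
# invented values chosen only to show the characteristic peak.

def throughput(n, sigma=0.05, kappa=0.01):
    """Relative throughput for n concurrent workers under the USL model."""
    return n / (1 + sigma * (n - 1) + kappa * n * (n - 1))

results = {n: throughput(n) for n in range(1, 65)}
sweet_spot = max(results, key=results.get)
print(sweet_spot, round(results[sweet_spot], 2))
```

With these particular parameters the model peaks at 10 workers; past the peak, adding workers makes aggregate throughput worse, which is exactly why piling on connections eventually hurts.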

Where that sweet spot is depends on how much your queries rely on CPU vs
disk vs memory, your Pg version, how many disks you have, how fast they
are and how they are configured, what/how many CPUs you have, how much
RAM you have, how fast your RAM is, etc. There's no simple formula
because it's so workload-dependent.

The usual *very* rough rule of thumb given here is that your sweet spot
should be *vaguely* the number of CPU cores + the number of hard drives.
That's *incredibly* rough; if you care, you should benchmark it using
your real workload.
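One hedged way to run such a benchmark is a pgbench sweep over client counts. This sketch assumes a database named "bench" already initialized with `pgbench -i bench`; the `echo` makes it a dry run that only prints the commands, so drop it to actually execute each step and compare the reported TPS.

```shell
# Sweep client counts to find the throughput sweet spot (dry run:
# remove 'echo' to really run each benchmark for 60 seconds).
for clients in 1 2 4 8 16 32 64; do
  echo pgbench -c "$clients" -j 2 -T 60 bench
done
```

Ideally, replace pgbench's built-in transaction with a custom script (`-f`) that resembles your real workload, since the sweet spot is workload-dependent.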

If you need lots and lots of clients then it may be beneficial to use a
connection pool like pgbouncer or PgPool-II so you don't have lots more
connections trying to do work at once than your hardware can cope with.
Having fewer connections doing work in the database at the same time can
improve overall performance.
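As a sketch of what that looks like with PgBouncer, the fragment below caps the database at 20 concurrent server connections while accepting up to 500 client connections; the database name, paths, and sizes are illustrative only.

```ini
; Minimal PgBouncer sketch (all values illustrative; adjust to your setup).
; Clients connect to port 6432 instead of Postgres directly.
[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
default_pool_size = 20
max_client_conn = 500
```

Transaction pooling gives the best connection reuse, but note that it is incompatible with session-level state such as session-scoped `SET` commands or prepared statements held across transactions; use `pool_mode = session` if your application relies on those.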

--
Craig Ringer
