On Wed, Mar 21, 2012 at 2:13 PM, Kjetil Nygård <polpot78@gmail.com> wrote:
> I understand that IO performance, transactions/s, 24/7 vs. office hours,
> data complexity etc. are also needed to really say how much beating a
> database can handle.
>
> I just hoped for some simple numbers, but other relevant performance
> numbers etc would be nice as well :-)

At my last job we ran a trio of mainline db servers, each with 48
Opteron cores (4 sockets x 12 cores at 2.1GHz), 128G of RAM, and 34
15k SAS drives attached via Areca and LSI RAID controllers as well as
plain host-based SAS adapters.
With the RAID controllers we were able to hit the 4k to 5k tps range
with pgbench on a 40G test db at around 50 to 64 connections. Past 64
connections the numbers would fall off, dropping to the 2.5k to 3k
range as we headed towards 500 or so connections.
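For reference, the runs looked roughly like this (flags from memory;
the scale factor is just a ballpark for a ~40G database, and the
thread count / duration are illustrative, not our exact settings):

pgbench -i -s 2700 bench          # init; ~15MB per scale unit, so ~40G
pgbench -c 64 -j 8 -T 600 bench   # 64 clients, 8 worker threads, 10 min run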
The actual application ran on a ~300G database with memcache in front
of it. When memcache was working, the load average on the db server
stayed around 4 to 12. When memcache died for whatever reason, the
load would shoot up to 300 to 500, and response times would go from
sub-second to multi-second. But the db server would actually stay up
under that kind of extreme load.
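
The caching itself was nothing fancy, just the usual cache-aside
pattern. Roughly this sketch (python-memcached and psycopg2 here only
for illustration; the table and key names are made up, not our actual
code):

import memcache
import psycopg2

mc = memcache.Client(['127.0.0.1:11211'])
db = psycopg2.connect("dbname=app")

def get_user(user_id):
    key = 'user:%d' % user_id
    row = mc.get(key)               # try the cache first
    if row is None:                 # miss (or memcache is down): go to the db
        cur = db.cursor()
        cur.execute("SELECT name, email FROM users WHERE id = %s", (user_id,))
        row = cur.fetchone()
        cur.close()
        mc.set(key, row, time=300)  # repopulate, 5 minute TTL
    return row

When memcache goes away, every call falls through to the db at once,
which is exactly the load spike described above.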