From: Scott Marlowe
Subject: Re: Question on pgbench output
Msg-id: dcc563d10904031443p228df80ey12e8ef7c106e2941@mail.gmail.com
In response to: Question on pgbench output (David Kerr <dmk@mr-paradox.net>)
List: pgsql-performance
On Fri, Apr 3, 2009 at 1:53 PM, David Kerr <dmk@mr-paradox.net> wrote:
> Here is my transaction file:
> \setrandom iid 1 50000
> BEGIN;
> SELECT content FROM test WHERE item_id = :iid;
> END;
>
> and then i executed:
> pgbench -c 400 -t 50 -f trans.sql -l
>
> The results actually have surprised me, the database isn't really tuned
> and i'm not working on great hardware. But still I'm getting:
>
> scaling factor: 1
> number of clients: 400
> number of transactions per client: 50
> number of transactions actually processed: 20000/20000
> tps = 51.086001 (including connections establishing)
> tps = 51.395364 (excluding connections establishing)

Not bad.  With an average record size of 1.2 MB you're reading ~60 MB
per second (plus overhead) off of your drive(s).
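As a quick sanity check on that number (taking the ~1.2 MB row size
you mention below):

    51.4 tps x 1.2 MB/transaction ~= 62 MB/s of sustained reads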

> So the question is - Can anyone see a flaw in my test so far?
> (considering that i'm just focused on the performance of pulling
> the 1.2M record from the table) and if so any suggestions to further
> nail it down?

You can either get more memory (enough to hold your whole dataset in
ram), get faster drives and aggregate them with RAID-10, or look into
something like memcached servers, which can cache db queries for your
app layer.
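If you go the memcached route, it's a straightforward cache-aside
pattern at the app layer.  A minimal sketch, assuming Python with
psycopg2 and python-memcached (server address, connection string, and
the 300-second TTL are all illustrative):

    import memcache
    import psycopg2

    mc = memcache.Client(['127.0.0.1:11211'])  # memcached server(s)
    db = psycopg2.connect("dbname=test")       # your real DSN here

    def get_content(item_id):
        key = 'test:content:%d' % item_id
        content = mc.get(key)            # try the cache first
        if content is None:              # miss: fall through to Postgres
            cur = db.cursor()
            cur.execute("SELECT content FROM test WHERE item_id = %s",
                        (item_id,))
            row = cur.fetchone()
            cur.close()
            if row is None:
                return None
            content = row[0]
            mc.set(key, content, time=300)  # cache it; TTL is arbitrary
        return content

One caveat: memcached's default item size limit is 1 MB, so a 1.2 MB
row would need to be compressed or split to fit.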
