Thread: [PATCH] pgbench various mods for high volume testing

[PATCH] pgbench various mods for high volume testing

From: Mark Travis
Hello. Attached is a patch that I created against REL9_2_4 for
contrib/pgbench. I am willing to re-work the patch for HEAD or another
version if you choose to accept the patch.

The patch adds a number of modifications to pgbench to facilitate
benchmarking with many client processes across many hosts: in my
testing, over 100,000 connections sending over 500,000 transactions
per second from over 500 pgbench processes on a dozen client hosts.
This effort was for an open source RDBMS that I have created which
speaks the PostgreSQL Frontend/Backend Protocol. I would like to get
approval to have this patch placed in the main branch for pgbench so
that I don't have to maintain a distinct patch. Even though I created
this patch to test a product which is not PostgreSQL, I hope that you
find the modifications to be useful for PostgreSQL testing, at least
at very high volumes.


That background out of the way, here are the additional features:
----------------------------------
--urandom: use /dev/urandom to provide seed values for randomness.
Without this, multiple pgbench processes are likely to generate the
same sequence of "random" numbers. This was noticeable in InfiniSQL
benchmarking because of the resulting extremely high rate of locked
records from having stored procedures invoked with identical parameter
values.
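
To sketch the idea (simplified, with made-up names rather than the
literal patch code):

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    /*
     * Seed srandom() from /dev/urandom when requested; fall back to a
     * time-based seed if the device cannot be read.
     */
    static void
    set_random_seed(int use_urandom)
    {
        unsigned int seed = (unsigned int) time(NULL);

        if (use_urandom)
        {
            FILE   *f = fopen("/dev/urandom", "rb");

            if (f == NULL || fread(&seed, sizeof(seed), 1, f) != 1)
                fprintf(stderr, "could not read /dev/urandom, using time seed\n");
            if (f != NULL)
                fclose(f);
        }
        srandom(seed);
    }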

--per-second=NUM: report per-second throughput rate on stdout. NUM is
the quantity of transactions in each batch that gets counted. The
higher the value, the less frequently gettimeofday gets called.
gettimeofday invocation can become a limiting factor as throughput
increases, so minimizing it is beneficial. For example, with NUM of
100, time will be checked every 100 transactions, which will cause the
per-second output to be in multiples of 100. This enables fine-grained
(per second) analysis of transaction throughput.
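
The batching logic is roughly as follows (simplified, with made-up
names rather than the literal patch code):

    #include <stdio.h>
    #include <time.h>
    #include <sys/time.h>

    static long   tx_since_report = 0;  /* transactions since the last report */
    static time_t last_sec = 0;         /* second in which we last reported */

    /*
     * Called once per completed transaction.  The clock is read only
     * every batch_size transactions, so gettimeofday() overhead shrinks
     * as batch_size grows, and the per-second counts come out in
     * multiples of batch_size, as described above.
     */
    static void
    count_transaction(long batch_size)
    {
        struct timeval now;

        if (++tx_since_report % batch_size != 0)
            return;

        gettimeofday(&now, NULL);
        if (now.tv_sec != last_sec)
        {
            if (last_sec != 0)
                printf("%ld\n", tx_since_report);
            tx_since_report = 0;
            last_sec = now.tv_sec;
        }
    }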

-P PASSWORD: pass the db password on the command line. This is
necessary for InfiniSQL benchmarking because hundreds or more separate
pgbench processes can be launched, and InfiniSQL requires password
authentication. Having to manually enter all those passwords would
make benchmarking impossible.
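
In sketch form (simplified, not the literal patch code), the password
is handed straight to libpq instead of being prompted for:

    #include <stdio.h>
    #include <libpq-fe.h>

    /*
     * Connect using a password supplied via -P on the command line
     * rather than prompting for one.
     */
    static PGconn *
    connect_with_password(const char *host, const char *port,
                          const char *dbname, const char *user,
                          const char *password)
    {
        PGconn *conn = PQsetdbLogin(host, port, NULL, NULL,
                                    dbname, user, password);

        if (PQstatus(conn) == CONNECTION_BAD)
            fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return conn;
    }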

-I: do not abort the connection if a transaction error is encountered.
InfiniSQL returns an error if records are locked, so pgbench was
patched to tolerate this. The locking issue is pending a fix, but
until then, pgbench needs to carry on. The specific error emitted from
the server is written to stderr for each occurrence. The total
quantity of transactions is not incremented if there's an error.
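
Roughly (simplified names, not the literal patch code), the change in
pgbench's result handling looks like this:

    /* inside the result handling for a client "st" */
    if (PQresultStatus(res) != PGRES_COMMAND_OK &&
        PQresultStatus(res) != PGRES_TUPLES_OK)
    {
        fprintf(stderr, "client %d transaction error: %s",
                st->id, PQerrorMessage(st->con));
        if (!ignore_errors)         /* without -I: old behavior, drop client */
            return clientDone(st, false);
        /* with -I: log only; do not count this transaction */
    }
    else
        st->cnt++;                  /* count successful transactions only */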
----------------------------

Thank you for your consideration. More background about how I used the
patch is at http://www.infinisql.org
If you find this patch to be useful, then I am willing to modify the
patch as necessary to get it accepted into the code base. I made sure
to create it as a '-c' patch and I haven't snuck in any rogue
whitespace. Apply it in the root of REL9_2_4 as: patch -p1 <
pgbench_persecond-v1.patch

Sincerely,
Mark Travis

Attachment: pgbench_persecond-v1.patch

Re: [PATCH] pgbench various mods for high volume testing

From: Fabien COELHO
Date: 2013-11-13 08:35:31 +0100
A non-authoritative answer, from previous experience trying to improve
pgbench:

> Hello. Attached is a patch that I created against REL9_2_4 for
> contrib/pgbench. I am willing to re-work the patch for HEAD or another
> version if you choose to accept the patch.

It rather works the other way around: "you submit a patch, which gets
accepted or not, possibly after (too) heavy discussion". It is not "you
submit an idea, it gets accepted, and the patch you submit later is
applied". There is a commitfest for submitting patches, see
http://commitfest.postgresql.org.

Moreover, people do not like bundled multi-purpose patches, so at a
minimum it will have to be split into one patch per feature.

> That background out of the way, here are the additional features:

> --urandom: use /dev/urandom to provide seed values for randomness.
> Without this, multiple pgbench processes are likely to generate the
> same sequence of "random" numbers. This was noticeable in InfiniSQL
> benchmarking because of the resulting extremely high rate of locked
> records from having stored procedures invoked with identical parameter
> values.

This looks unix/linux specific? I think that, if possible, the randomness
issue should be kept out of "pgbench"?

> --per-second=NUM: report per-second throughput rate on stdout. NUM is
> the quantity of transactions in each batch that gets counted. The
> higher the value, the less frequently gettimeofday gets called.
> gettimeofday invocation can become a limiting factor as throughput
> increases, so minimizing it is beneficial. For example, with NUM of
> 100, time will be checked every 100 transactions, which will cause the
> per-second output to be in multiples of 100. This enables fine-grained
> (per second) analysis of transaction throughput.

See existing option --progress. I do not understand how a transaction may 
not be counted. Do you mean measured?
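
For example, a sufficiently recent pgbench (the option postdates 9.2)
reports throughput every second with something like this, where the
database name is just an example:

    pgbench -c 16 -j 4 -T 60 --progress 1 bench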

My measurements of the cost of gettimeofday() calls show that for actual
transactions, which involve disk read/write operations on a Linux system,
the impact is really small. The same holds for read-only accesses, where
the cost is small compared with the network traffic of sending and
receiving each transaction. That said, some people have expressed
concerns about gettimeofday costs in the past.

> -P PASSWORD: pass the db password on the command line. This is
> necessary for InfiniSQL benchmarking because hundreds or more separate
> pgbench processes can be launched, and InfiniSQL requires password
> authentication. Having to manually enter all those passwords would
> make benchmarking impossible.

Hmmm... $HOME/.pgpass is your friend? Consider an environment variable? 
The idea is to avoid having a password in your shell history.
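
For example (all values made up):

    # ~/.pgpass (must be chmod 600); the format is
    # hostname:port:database:username:password
    *:5432:bench:bench:secret

    # or via the environment, e.g. set from a wrapper script so that
    # the password never appears in the interactive shell history:
    export PGPASSWORD=secret
    pgbench -c 100 -T 600 -h somehost bench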

> -I: do not abort the connection if a transaction error is encountered.
> InfiniSQL returns an error if records are locked, so pgbench was
> patched to tolerate this. The locking issue is pending a fix, but
> until then, pgbench needs to carry on. The specific error emitted from
> the server is written to stderr for each occurrence. The total
> quantity of transactions is not incremented if there's an error.

No opinion about this one.

-- 
Fabien.



Re: [PATCH] pgbench various mods for high volume testing

From: Andres Freund
On 2013-11-13 08:35:31 +0100, Fabien COELHO wrote:
> >Hello. Attached is a patch that I created against REL9_2_4 for
> >contrib/pgbench. I am willing to re-work the patch for HEAD or another
> >version if you choose to accept the patch.
> 
> It rather works the other way around: "you submit a patch, which gets
> accepted or not, possibly after (too) heavy discussion". It is not "you
> submit an idea, it gets accepted, and the patch you submit later is
> applied". There is a commitfest for submitting patches, see
> http://commitfest.postgresql.org.

Well, you certainly can, and are even encouraged to, ask for feedback
about a feature before spending significant time on it. So interest is
no guarantee of acceptance, but it certainly is helpful.

> >That background out of the way, here are the additional features:
> 
> >--urandom: use /dev/urandom to provide seed values for randomness.
> >Without this, multiple pgbench processes are likely to generate the
> >same sequence of "random" numbers. This was noticeable in InfiniSQL
> >benchmarking because of the resulting extremely high rate of locked
> >records from having stored procedures invoked with identical parameter
> >values.
> 
> This looks unix/linux specific? I think that, if possible, the randomness
> issue should be kept out of "pgbench"?

urandom is available on a couple of platforms, not just linux. I don't
see a big problem making the current srandom() invocation more complex.
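
Something like this, as a rough sketch (read_urandom_seed() being a
hypothetical helper that reads sizeof(seed) bytes from /dev/urandom):

    unsigned int seed;

    if (!use_urandom || !read_urandom_seed(&seed))
        seed = (unsigned int) (time(NULL) ^ (getpid() << 16)); /* fallback */
    srandom(seed);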

> >-I: do not abort the connection if a transaction error is encountered.
> >InfiniSQL returns an error if records are locked, so pgbench was
> >patched to tolerate this. The locking issue is pending a fix, but
> >until then, pgbench needs to carry on. The specific error emitted from
> >the server is written to stderr for each occurrence. The total
> >quantity of transactions is not incremented if there's an error.

I am not sure about the implementation, not having looked at it, but I
certainly think this is a useful feature. I think the error rate should
be computed instead of errors just being disregarded, though.
It might also be worthwhile to add code to automatically retry
transactions that fail with an error indicating a transient problem
(like serialization failures).
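
Roughly, based on the SQLSTATE that libpq exposes (40001 is
serialization_failure; the retry bookkeeping here is hypothetical):

    const char *sqlstate = PQresultErrorField(res, PG_DIAG_SQLSTATE);

    if (sqlstate != NULL && strcmp(sqlstate, "40001") == 0 &&
        st->retries < max_retries)
    {
        st->retries++;              /* transient: retry the same script */
        error_count++;              /* but still feed the error rate */
        return retry_transaction(st);   /* hypothetical */
    }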

Greetings,

Andres Freund

--
 Andres Freund                    http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services