
From: Fabien COELHO
Subject: Re: [PATCH] pgbench various mods for high volume testing
Date:
Msg-id: alpine.DEB.2.02.1311130822170.15650@sto
In response to: [PATCH] pgbench various mods for high volume testing (Mark Travis <mtravis15432+pg@gmail.com>)
Responses: Re: [PATCH] pgbench various mods for high volume testing
List: pgsql-hackers
A non-authoritative answer, based on previous experience with trying to 
improve pgbench:

> Hello. Attached is a patch that I created against REL9_2_4 for
> contrib/pgbench. I am willing to re-work the patch for HEAD or another
> version if you choose to accept the patch.

It rather works the other way around: you submit a patch, which gets 
accepted or not, possibly after (too) heavy discussion. It is not "you 
submit an idea, the idea gets accepted, and the patch you submit later is 
applied". Patches are submitted through a commitfest, see 
http://commitfest.postgresql.org.

Moreover, people do not like bundled multi-purpose patches, so at a 
minimum this one will have to be split into one patch per feature.

> That background out of the way, here are the additional features:

> --urandom: use /dev/urandom to provide seed values for randomness.
> Without this, multiple pgbench processes are likely to generate the
> same sequence of "random" numbers. This was noticeable in InfiniSQL
> benchmarking because of the resulting extremely high rate of locked
> records from having stored procedures invoked with identical parameter
> values.

This looks unix/linux specific? I think that, if possible, the randomness 
issue should be kept out of "pgbench"?
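
For what it is worth, a minimal sketch of per-process seeding from 
/dev/urandom, with a fallback for platforms that do not have it (names 
and structure are mine, not the submitted patch's):

/* sketch: seed the per-process PRNG from /dev/urandom if available,
 * otherwise fall back to mixing time and pid -- hypothetical helper,
 * not the code from the submitted patch */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

static void
seed_prng(void)
{
    unsigned int seed;
    FILE *f = fopen("/dev/urandom", "rb");

    if (f != NULL && fread(&seed, sizeof(seed), 1, f) == 1)
        srandom(seed);
    else
        srandom((unsigned int) (time(NULL) ^ getpid()));

    if (f != NULL)
        fclose(f);
}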

> --per-second=NUM: report per-second throughput rate on stdout. NUM is
> the quantity of transactions in each batch that gets counted. The
> higher the value, the less frequently gettimeofday gets called.
> gettimeofday invocation can become a limiting factor as throughput
> increases, so minimizing it is beneficial. For example, with NUM of
> 100, time will be checked every 100 transactions, which will cause the
> per-second output to be in multiples of 100. This enables fine-grained
> (per second) analysis of transaction throughput.

See the existing --progress option. I do not understand how a transaction 
may not be counted. Do you mean measured?

My measurements of the cost of gettimeofday() calls show that for actual 
transactions which involve disk read/write operations on a Linux system 
the impact is really small. This is also true for read-only accesses, 
where the cost is small compared to the network traffic (sending and 
receiving each transaction). Some people have expressed concerns about 
gettimeofday costs in the past, though.
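
For reference, the kind of batching described above could look roughly 
like this (made-up names, a sketch only, not what --progress or the 
patch actually does):

/* sketch: call gettimeofday() only once per 'batch' transactions and
 * report per-second counts, which are therefore rounded to multiples
 * of the batch size -- not pgbench's actual code */
#include <stdio.h>
#include <time.h>
#include <sys/time.h>

static void
count_transaction(int batch)
{
    static long since_check = 0;
    static long in_second = 0;
    static time_t current_second = 0;
    struct timeval now;

    if (++since_check < batch)
        return;                 /* no clock call for most transactions */

    gettimeofday(&now, NULL);

    if (current_second == 0)
        current_second = now.tv_sec;

    if (now.tv_sec > current_second)
    {
        printf("%ld transactions in second %ld\n",
               in_second, (long) current_second);
        in_second = 0;
        current_second = now.tv_sec;
    }

    in_second += since_check;
    since_check = 0;
}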

> -P PASSWORD: pass the db password on the command line. This is
> necessary for InfiniSQL benchmarking because hundreds or more separate
> pgbench processes can be launched, and InfiniSQL requires password
> authentication. Having to manually enter all those passwords would
> make benchmarking impossible.

Hmmm... $HOME/.pgpass is your friend? Or consider an environment 
variable? The idea is to avoid having a password in your shell history.
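
For instance (host, user and password below are made up):

# one line per server in $HOME/.pgpass (file mode must be 0600):
# hostname:port:database:username:password
dbhost:5432:pgbench:benchuser:secret

# or set by the script that launches the pgbench processes:
export PGPASSWORD=secret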

> -I: do not abort connection if transaction error is encountered.
> InfiniSQL returns an error if records are locked, so pgbench was
> patched to tolerate this. This is pending a fix, but until then,
> pgbench needs to carry on. The specific error emitted from the server
> is written to stderr for each occurrence. The total quantity of
> transactions is not incremented if there's an error.

No opinion about this one.
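
That said, the client-side behaviour described above would be roughly 
the following with libpq (a sketch with my own names, not the submitted 
patch):

/* sketch: report a failed transaction on stderr and keep going
 * instead of aborting the client; failures are not counted --
 * hypothetical helper, not the actual patch */
#include <stdio.h>
#include <libpq-fe.h>

static long completed = 0;

static void
finish_transaction(PGconn *conn, PGresult *res)
{
    ExecStatusType st = PQresultStatus(res);

    if (st == PGRES_COMMAND_OK || st == PGRES_TUPLES_OK)
        completed++;            /* only successful transactions count */
    else
        fprintf(stderr, "transaction failed: %s",
                PQerrorMessage(conn));  /* log the error and carry on */

    PQclear(res);
}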

-- 
Fabien.


