pgbench

pgbench — run a benchmark test on Postgres Pro

Synopsis

pgbench -i [option...] [dbname]

pgbench [option...] [dbname]

Description

pgbench is a simple program for running benchmark tests on Postgres Pro. It runs the same sequence of SQL commands over and over, possibly in multiple concurrent database sessions, and then calculates the average transaction rate (transactions per second). By default, pgbench tests a scenario that is loosely based on TPC-B, involving five SELECT, UPDATE, and INSERT commands per transaction. However, it is easy to test other cases by writing your own transaction script files.

Typical output from pgbench looks like:

transaction type: <builtin: TPC-B (sort of)>
default transaction isolation level: read committed
transaction maximum tries number: 1
scaling factor: 10
query mode: simple
number of clients: 10
number of threads: 1
number of transactions per client: 1000
number of transactions actually processed: 10000/10000
tps = 85.184871 (including connections establishing)
tps = 85.296346 (excluding connections establishing)

The first eight lines report some of the most important parameter settings. The next line reports the number of transactions completed and intended (the latter being just the product of the number of clients and the number of transactions per client); these will be equal unless the run failed before completion or some transactions ended with serialization/deadlock failures. (In -T mode, only the actual number of transactions is printed. See the section called “Serialization/Deadlock Failures and Retries” for more information about serialization/deadlock failures.) The last two lines report the number of transactions per second, figured with and without counting the time to start database sessions.

The default TPC-B-like transaction test requires specific tables to be set up beforehand. pgbench should be invoked with the -i (initialize) option to create and populate these tables. (When you are testing a custom script, you don't need this step, but will instead need to do whatever setup your test needs.) Initialization looks like:

pgbench -i [ other-options ] dbname

where dbname is the name of the already-created database to test in. (You may also need -h, -p, and/or -U options to specify how to connect to the database server.)

Caution

pgbench -i creates four tables: pgbench_accounts, pgbench_branches, pgbench_history, and pgbench_tellers, destroying any existing tables with these names. Be very careful to use another database if you have tables with these names!

At the default scale factor of 1, the tables initially contain this many rows:

table                   # of rows
---------------------------------
pgbench_branches        1
pgbench_tellers         10
pgbench_accounts        100000
pgbench_history         0

You can (and, for most purposes, probably should) increase the number of rows by using the -s (scale factor) option. The -F (fillfactor) option might also be used at this point.
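
For example, the following illustrative command (mydb stands in for your database name) initializes at 50 times the default row counts, with a slightly looser fillfactor:

pgbench -i -s 50 -F 90 mydb

With -s 50, the pgbench_accounts table will initially contain 50 * 100,000 = 5,000,000 rows.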

Once you have done the necessary setup, you can run your benchmark with a command that doesn't include -i, that is

pgbench [ options ] dbname

In nearly all cases, you'll need some options to make a useful test. The most important options are -c (number of clients), -t (number of transactions), -T (time limit), and -f (specify a custom script file). See below for a full list.
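
For example, an illustrative first run (mydb again stands in for your database name):

pgbench -c 10 -j 2 -T 300 mydb

This simulates 10 concurrent clients, distributed across 2 worker threads, running the default TPC-B-like scenario for 300 seconds.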

Options

The following is divided into three subsections. Different options are used during database initialization and while running benchmarks, but some options are useful in both cases.

Initialization Options

pgbench accepts the following command-line initialization arguments:

dbname

Specifies the name of the database to test in. If this is not specified, the environment variable PGDATABASE is used. If that is not set, the user name specified for the connection is used.

-i
--initialize

Required to invoke initialization mode.

-I init_steps
--init-steps=init_steps

Perform just a selected set of the normal initialization steps. init_steps specifies the initialization steps to be performed, using one character per step. Each step is invoked in the specified order. The default is dtgvp (see the example after this list). The available steps are:

d (Drop)

Drop any existing pgbench tables.

t (create Tables)

Create the tables used by the standard pgbench scenario, namely pgbench_accounts, pgbench_branches, pgbench_history, and pgbench_tellers.

g (Generate data)

Generate data and load it into the standard tables, replacing any data already present.

v (Vacuum)

Invoke VACUUM on the standard tables.

p (create Primary keys)

Create primary key indexes on the standard tables.

f (create Foreign keys)

Create foreign key constraints between the standard tables. (Note that this step is not performed by default.)
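
For example, the following sketch (mydb is a placeholder) performs the default initialization steps and additionally creates foreign keys, which matches what --foreign-keys adds:

pgbench -i -I dtgvpf mydb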

-F fillfactor
--fillfactor=fillfactor

Create the pgbench_accounts, pgbench_tellers and pgbench_branches tables with the given fillfactor. Default is 100.

-n
--no-vacuum

Perform no vacuuming during initialization. (This option suppresses the v initialization step, even if it was specified in -I.)

-q
--quiet

Switch logging to quiet mode, producing only one progress message per 5 seconds. The default logging prints one message per 100,000 rows, which often outputs many lines per second (especially on fast hardware).

-s scale_factor
--scale=scale_factor

Multiply the number of rows generated by the scale factor. For example, -s 100 will create 10,000,000 rows in the pgbench_accounts table. Default is 1. When the scale is 20,000 or larger, the columns used to hold account identifiers (aid columns) will switch to using larger integers (bigint), in order to be big enough to hold the range of account identifiers.

--foreign-keys

Create foreign key constraints between the standard tables. (This option adds the f step to the initialization step sequence, if it is not already present.)

--index-tablespace=index_tablespace

Create indexes in the specified tablespace, rather than the default tablespace.

--tablespace=tablespace

Create tables in the specified tablespace, rather than the default tablespace.

--unlogged-tables

Create all tables as unlogged tables, rather than permanent tables.

Benchmarking Options

pgbench accepts the following command-line benchmarking arguments:

-b scriptname[@weight]
--builtin=scriptname[@weight]

Add the specified built-in script to the list of scripts to be executed. Available built-in scripts are: tpcb-like, simple-update and select-only. Unambiguous prefixes of built-in names are accepted. With the special name list, show the list of built-in scripts and exit immediately.

Optionally, write an integer weight after @ to adjust the probability of selecting this script versus other ones. The default weight is 1. See below for details.
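
For example, the following illustrative invocation selects the select-only script with probability 9/10 and the tpcb-like script with probability 1/10, approximating a read-mostly workload:

pgbench -b select-only@9 -b tpcb-like@1 -c 10 -T 60 mydb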

-c clients
--client=clients

Number of clients simulated, that is, number of concurrent database sessions. Default is 1.

-C
--connect

Establish a new connection for each transaction, rather than doing it just once per client session. This is useful to measure the connection overhead.

-d
--debug

Print debugging output.

-D varname=value
--define=varname=value

Define a variable for use by a custom script (see below). Multiple -D options are allowed.

-f filename[@weight]
--file=filename[@weight]

Add a transaction script read from filename to the list of scripts to be executed.

Optionally, write an integer weight after @ to adjust the probability of selecting this script versus other ones. The default weight is 1. (To use a script file name that includes an @ character, append a weight so that there is no ambiguity, for example filen@me@1.) See below for details.

-j threads
--jobs=threads

Number of worker threads within pgbench. Using more than one thread can be helpful on multi-CPU machines. Clients are distributed as evenly as possible among available threads. Default is 1.

-l
--log

Write information about each transaction to a log file. See below for details.

-L limit
--latency-limit=limit

Transactions that last more than limit milliseconds are counted and reported separately, as late.

When throttling is used (--rate=...), transactions that lag behind schedule by more than limit ms, and thus have no hope of meeting the latency limit, are not sent to the server at all. They are counted and reported separately as skipped.

You can use this option together with the --max-tries option, which enables retries for transactions with serialization/deadlock errors. A transaction is retried only within the specified limit. See the section called “Serialization/Deadlock Failures and Retries” for details.
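
As an illustrative sketch combining these options, the following run throttles to 100 transactions per second, skips transactions lagging more than 10 ms behind schedule, and retries serialization/deadlock failures without a try-count cap (permitted here because --latency-limit and -T bound the retries):

pgbench --rate=100 --latency-limit=10 --max-tries=0 -T 60 mydb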

-M querymode
--protocol=querymode

Protocol to use for submitting queries to the server:

  • simple: use simple query protocol.

  • extended: use extended query protocol.

  • prepared: use extended query protocol with prepared statements.

In the prepared mode, pgbench reuses the parse analysis result starting from the second query iteration, so pgbench runs faster than in other modes.

The default is simple query protocol. (See Chapter 54 for more information.)

-n
--no-vacuum

Perform no vacuuming before running the test. This option is necessary if you are running a custom test scenario that does not include the standard tables pgbench_accounts, pgbench_branches, pgbench_history, and pgbench_tellers.

-N
--skip-some-updates

Run the built-in simple-update script. Shorthand for -b simple-update.

-P sec
--progress=sec

Show progress report every sec seconds. The report includes the time since the beginning of the run, the TPS since the last report, and the transaction latency average and standard deviation since the last report. If any transactions ended with serialization/deadlock failures since the last report, they are also reported as failed (see the section called “Serialization/Deadlock Failures and Retries” for more information). Under throttling (-R), the latency is computed with respect to the transaction scheduled start time, not the actual transaction beginning time. Thus, the latency also includes the average schedule lag time. If any transactions have been rolled back and retried after a serialization/deadlock error since the last report, the report includes the number of such transactions and the sum of all retries. Use the --max-tries option to enable transaction retries after serialization/deadlock errors.

-r
--report-per-command

Report the following statistics for each command after the benchmark finishes: the average per-statement latency (execution time from the perspective of the client), the number of serialization failures and retries, the number of deadlock failures and retries. The report displays the columns with statistics on failures and retries only if the current pgbench run has a failure or retry of any type, respectively. See below for details.

-R rate
--rate=rate

Execute transactions targeting the specified rate instead of running as fast as possible (the default). The rate is given in transactions per second. If the targeted rate is above the maximum possible rate, the rate limit won't impact the results.

The rate is targeted by starting transactions along a Poisson-distributed schedule time line. The expected start time schedule moves forward based on when the client first started, not when the previous transaction ended. That approach means that when transactions go past their original scheduled end time, it is possible for later ones to catch up again.

When throttling is active, the transaction latency reported at the end of the run is calculated from the scheduled start times, so it includes the time each transaction had to wait for the previous transaction to finish. The wait time is called the schedule lag time, and its average and maximum are also reported separately. The transaction latency with respect to the actual transaction start time, i.e., the time spent executing the transaction in the database, can be computed by subtracting the schedule lag time from the reported latency.

If --latency-limit is used together with --rate, a transaction can lag behind so much that it is already over the latency limit when the previous transaction ends, because the latency is calculated from the scheduled start time. Such transactions are not sent to the server, but are skipped altogether and counted separately.

A high schedule lag time is an indication that the system cannot process transactions at the specified rate, with the chosen number of clients and threads. When the average transaction execution time is longer than the scheduled interval between each transaction, each successive transaction will fall further behind, and the schedule lag time will keep increasing the longer the test run is. When that happens, you will have to reduce the specified transaction rate.

-s scale_factor
--scale=scale_factor

Report the specified scale factor in pgbench's output. With the built-in tests, this is not necessary; the correct scale factor will be detected by counting the number of rows in the pgbench_branches table. However, when testing only custom benchmarks (-f option), the scale factor will be reported as 1 unless this option is used.

-S
--select-only

Run the built-in select-only script. Shorthand for -b select-only.

-t transactions
--transactions=transactions

Number of transactions each client runs. Default is 10.

-T seconds
--time=seconds

Run the test for this many seconds, rather than a fixed number of transactions per client. -t and -T are mutually exclusive.

You can use this option together with the --max-tries option, which enables retries for transactions with serialization/deadlock errors. Transactions are retried only within the time limit specified by --time. See the section called “Serialization/Deadlock Failures and Retries” for details.

-v
--vacuum-all

Vacuum all four standard tables before running the test. With neither -n nor -v, pgbench will vacuum the pgbench_tellers and pgbench_branches tables, and will truncate pgbench_history.

--aggregate-interval=seconds

Length of aggregation interval (in seconds). May be used only with -l option. With this option, the log contains per-interval summary data, as described below.

--default-isolation-level=isolation_level

Set the default transaction isolation level:

  • RUC: Read Uncommitted

  • RC: Read Committed

  • RR: Repeatable Read

  • S: Serializable

By default, Read Committed isolation level is used. For more information, see Section 13.2.
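
For example, an illustrative run of the default scenario under Serializable isolation, allowing up to 100 tries per transaction (compare the sample output in the section called “Per-Statement Report”):

pgbench --default-isolation-level=S --max-tries=100 -c 10 -T 60 mydb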

--log-prefix=prefix

Set the filename prefix for the log files created by --log. The default is pgbench_log.

--max-tries=tries_number

Enable retries for transactions with serialization/deadlock errors and set the maximum number of transaction tries. The default value is 1, so such transactions are not retried.

This option can be used together with --latency-limit and/or --time, which limit the total time of all transaction tries. In these cases, transactions can be retried only within the specified time limits. The --max-tries=0 setting allows an unlimited number of tries within the allotted time; this value can be used only together with the --latency-limit and/or --time options.

See the section called “Serialization/Deadlock Failures and Retries” for more information about retrying transactions with serialization/deadlock errors.

--progress-timestamp

When showing progress (option -P), use a timestamp (Unix epoch) instead of the number of seconds since the beginning of the run. The value is in seconds, with millisecond precision after the dot. This helps compare logs generated by various tools.

--random-seed=SEED

Set random generator seed. Seeds the system random number generator, which then produces a sequence of initial generator states, one for each thread. Values for SEED may be: time (the default, the seed is based on the current time), rand (use a strong random source, failing if none is available), or an unsigned decimal integer value. The random generator is invoked explicitly from a pgbench script (random... functions) or implicitly (for instance option --rate uses it to schedule transactions). When explicitly set, the value used for seeding is shown on the terminal. Any value allowed for SEED may also be provided through the environment variable PGBENCH_RANDOM_SEED. To ensure that the provided seed impacts all possible uses, put this option first or use the environment variable.

Setting the seed explicitly allows a pgbench run to be reproduced exactly, as far as random numbers are concerned. As the random state is managed per thread, this means the exact same pgbench run for an identical invocation if there is one client per thread and there are no external or data dependencies. From a statistical viewpoint, reproducing runs exactly is a bad idea because it can hide the performance variability or improve performance unduly, e.g., by hitting the same pages as a previous run. However, it may also be of great help for debugging, for instance when re-running a tricky case which leads to an error. Use wisely.

--sampling-rate=rate

Sampling rate, used when writing data into the log, to reduce the amount of log generated. If this option is given, only the specified fraction of transactions are logged. 1.0 means all transactions will be logged, 0.05 means only 5% of the transactions will be logged.

Remember to take the sampling rate into account when processing the log file. For example, when computing TPS values, you need to multiply the numbers accordingly (e.g., with 0.01 sample rate, you'll only get 1/100 of the actual TPS).
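
For instance, a rough TPS estimate can be recovered from a sampled log with a sketch like this (the log file name and run length are illustrative):

# with --sampling-rate=0.01, each logged line represents about 100 transactions
awk -v rate=0.01 -v secs=60 'END { print NR / rate / secs, "tps (estimated)" }' pgbench_log.1234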

Common Options

pgbench accepts the following command-line common arguments:

-h hostname
--host=hostname

The database server's host name

-p port
--port=port

The database server's port number

-U login
--username=login

The user name to connect as

-V
--version

Print the pgbench version and exit.

-?
--help

Show help about pgbench command line arguments, and exit.

Exit Status

A successful run will exit with status 0. Exit status 1 indicates static problems such as invalid command-line options. Errors during the run such as database errors or problems in the script will result in exit status 2. In the latter case, pgbench will print partial results.

Environment

PGDATABASE
PGHOST
PGPORT
PGUSER

Default connection parameters.

This utility, like most other Postgres Pro utilities, uses the environment variables supported by libpq (see Section 35.14).

Notes

What Is the Transaction Actually Performed in pgbench?

pgbench executes test scripts chosen randomly from a specified list. The scripts may include built-in scripts specified with -b and user-provided scripts specified with -f. Each script may be given a relative weight specified after an @ so as to change its selection probability. The default weight is 1. Scripts with a weight of 0 are ignored.

The default built-in transaction script (also invoked with -b tpcb-like) issues seven commands per transaction over randomly chosen aid, tid, bid and delta. The scenario is inspired by the TPC-B benchmark, but is not actually TPC-B, hence the name.

  1. BEGIN;

  2. UPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :aid;

  3. SELECT abalance FROM pgbench_accounts WHERE aid = :aid;

  4. UPDATE pgbench_tellers SET tbalance = tbalance + :delta WHERE tid = :tid;

  5. UPDATE pgbench_branches SET bbalance = bbalance + :delta WHERE bid = :bid;

  6. INSERT INTO pgbench_history (tid, bid, aid, delta, mtime) VALUES (:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);

  7. END;

If you select the simple-update built-in (also -N), steps 4 and 5 aren't included in the transaction. This will avoid update contention on these tables, but it makes the test case even less like TPC-B.

If you select the select-only built-in (also -S), only the SELECT is issued.

Custom Scripts

pgbench has support for running custom benchmark scenarios by replacing the default transaction script (described above) with a transaction script read from a file (-f option). In this case a transaction counts as one execution of a script file.

A script file contains one or more SQL commands terminated by semicolons. Empty lines and lines beginning with -- are ignored. Script files can also contain meta commands, which are interpreted by pgbench itself, as described below.

Note

Before Postgres Pro 9.6, SQL commands in script files were terminated by newlines, and so they could not be continued across lines. Now a semicolon is required to separate consecutive SQL commands (though a SQL command does not need one if it is followed by a meta command). If you need to create a script file that works with both old and new versions of pgbench, be sure to write each SQL command on a single line ending with a semicolon.

There is a simple variable-substitution facility for script files. Variable names must consist of letters (including non-Latin letters), digits, and underscores, with the first character not being a digit. Variables can be set by the command-line -D option, explained above, or by the meta commands explained below. In addition to any variables preset by -D command-line options, there are a few variables that are preset automatically, listed in Table 263. A value specified for these variables using -D takes precedence over the automatic presets. Once set, a variable's value can be inserted into a SQL command by writing :variablename. When running more than one client session, each session has its own set of variables. pgbench supports up to 255 variable uses in one statement.

Table 263. Automatic Variables

Variable       Description
---------------------------------------------------------------------
client_id      unique number identifying the client session (starts from zero)
default_seed   seed used in hash functions by default
random_seed    random generator seed (unless overwritten with -D)
scale          current scale factor
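
As a minimal sketch of variable substitution, consider a hypothetical script file bench.sql that draws a random account identifier bounded by a command-line variable:

\set aid random(1, :max_aid)
SELECT abalance FROM pgbench_accounts WHERE aid = :aid;

It could then be run with pgbench -f bench.sql -D max_aid=100000 mydb (mydb being a placeholder database name).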

Script file meta commands begin with a backslash (\) and normally extend to the end of the line, although they can be continued to additional lines by writing backslash-return. Arguments to a meta command are separated by white space. These meta commands are supported:

\gset [prefix]

This command may be used to end SQL queries, taking the place of the terminating semicolon (;).

When this command is used, the preceding SQL query is expected to return one row, the columns of which are stored into variables named after column names, and prefixed with prefix if provided.

The following example puts the final account balance from the first query into variable abalance, and fills variables p_two and p_three with integers from the third query. The result of the second query is discarded.

UPDATE pgbench_accounts
  SET abalance = abalance + :delta
  WHERE aid = :aid
  RETURNING abalance \gset
-- compound of two queries
SELECT 1 \;
SELECT 2 AS two, 3 AS three \gset p_

\if expression
\elif expression
\else
\endif

This group of commands implements nestable conditional blocks, similarly to psql's \if expression. Conditional expressions are identical to those used with \set, with non-zero values interpreted as true.
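
A minimal sketch of a conditional block (the 70/25/5 percent split is arbitrary):

\set x random(1, 100)
\if :x <= 70
SELECT 'common path';
\elif :x <= 95
SELECT 'less common path';
\else
SELECT 'rare path';
\endif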

\set varname expression

Sets variable varname to a value calculated from expression. The expression may contain the NULL constant, Boolean constants TRUE and FALSE, integer constants such as 5432, double constants such as 3.14159, references to variables :variablename, operators with their usual SQL precedence and associativity, function calls, SQL CASE generic conditional expressions and parentheses.

Functions and most operators return NULL on NULL input.

For conditional purposes, non-zero numerical values are TRUE, zero numerical values and NULL are FALSE.

Integer and double constants that are too large or too small, as well as the integer arithmetic operators (+, -, * and /), raise errors on overflow.

When no final ELSE clause is provided to a CASE, the default value is NULL.

Examples:

\set ntellers 10 * :scale
\set aid (1021 * random(1, 100000 * :scale)) % \
           (100000 * :scale) + 1
\set divx CASE WHEN :x <> 0 THEN :y/:x ELSE NULL END

\sleep number [ us | ms | s ]

Causes script execution to sleep for the specified duration in microseconds (us), milliseconds (ms) or seconds (s). If the unit is omitted then seconds are the default. number can be either an integer constant or a :variablename reference to a variable having an integer value.

Example:

\sleep 10 ms

\setshell varname command [ argument ... ]

Sets variable varname to the result of the shell command command with the given argument(s). The command must return an integer value through its standard output.

command and each argument can be either a text constant or a :variablename reference to a variable. If you want to use an argument starting with a colon, write an additional colon at the beginning of argument.

Example:

\setshell variable_to_be_assigned command literal_argument :variable ::literal_starting_with_colon

\shell command [ argument ... ]

Same as \setshell, but the result of the command is discarded.

Example:

\shell command literal_argument :variable ::literal_starting_with_colon

Built-in Operators

The arithmetic, bitwise, comparison and logical operators listed in Table 264 are built into pgbench and may be used in expressions appearing in \set.

Table 264. pgbench Operators by Increasing Precedence

Operator                      Description                                 Example      Result
----------------------------------------------------------------------------------------------
OR                            logical or                                  5 or 0       TRUE
AND                           logical and                                 3 and 0      FALSE
NOT                           logical not                                 not false    TRUE
IS [NOT] (NULL|TRUE|FALSE)    value tests                                 1 is null    FALSE
ISNULL|NOTNULL                null tests                                  1 notnull    TRUE
=                             is equal                                    5 = 4        FALSE
<>                            is not equal                                5 <> 4       TRUE
!=                            is not equal                                5 != 5       FALSE
<                             lower than                                  5 < 4        FALSE
<=                            lower or equal                              5 <= 4       FALSE
>                             greater than                                5 > 4        TRUE
>=                            greater or equal                            5 >= 4       TRUE
|                             integer bitwise OR                          1 | 2        3
#                             integer bitwise XOR                         1 # 3        2
&                             integer bitwise AND                         1 & 3        1
~                             integer bitwise NOT                         ~ 1          -2
<<                            integer bitwise shift left                  1 << 2       4
>>                            integer bitwise shift right                 8 >> 2       2
+                             addition                                    5 + 4        9
-                             subtraction                                 3 - 2.0      1.0
*                             multiplication                              5 * 4        20
/                             division (integer truncates the result)    5 / 3        1
%                             modulo                                      3 % 2        1
-                             opposite (unary minus)                      - 2.0        -2.0

Built-In Functions

The functions listed in Table 265 are built into pgbench and may be used in expressions appearing in \set.

Table 265. pgbench Functions

Function                                 Return Type                                Description                                                         Example                           Result
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
abs(a)                                   same as a                                  absolute value                                                      abs(-17)                          17
debug(a)                                 same as a                                  print a to stderr, and return a                                     debug(5432.1)                     5432.1
double(i)                                double                                     cast to double                                                      double(5432)                      5432.0
exp(x)                                   double                                     exponential                                                         exp(1.0)                          2.718281828459045
greatest(a [, ... ] )                    double if any a is double, else integer    largest value among arguments                                       greatest(5, 4, 3, 2)              5
hash(a [, seed ] )                       integer                                    alias for hash_murmur2()                                            hash(10, 5432)                    -5817877081768721676
hash_fnv1a(a [, seed ] )                 integer                                    FNV-1a hash                                                         hash_fnv1a(10, 5432)              -7793829335365542153
hash_murmur2(a [, seed ] )               integer                                    MurmurHash2 hash                                                    hash_murmur2(10, 5432)            -5817877081768721676
int(x)                                   integer                                    cast to int                                                         int(5.4 + 3.8)                    9
least(a [, ... ] )                       double if any a is double, else integer    smallest value among arguments                                      least(5, 4, 3, 2.1)               2.1
ln(x)                                    double                                     natural logarithm                                                   ln(2.718281828459045)             1.0
mod(i, j)                                integer                                    modulo                                                              mod(54, 32)                       22
pi()                                     double                                     value of the constant PI                                            pi()                              3.14159265358979323846
pow(x, y), power(x, y)                   double                                     exponentiation                                                      pow(2.0, 10), power(2.0, 10)      1024.0
random(lb, ub)                           integer                                    uniformly-distributed random integer in [lb, ub]                    random(1, 10)                     an integer between 1 and 10
random_exponential(lb, ub, parameter)    integer                                    exponentially-distributed random integer in [lb, ub], see below     random_exponential(1, 10, 3.0)    an integer between 1 and 10
random_gaussian(lb, ub, parameter)       integer                                    Gaussian-distributed random integer in [lb, ub], see below          random_gaussian(1, 10, 2.5)       an integer between 1 and 10
random_zipfian(lb, ub, parameter)        integer                                    Zipfian-distributed random integer in [lb, ub], see below           random_zipfian(1, 10, 1.5)        an integer between 1 and 10
sqrt(x)                                  double                                     square root                                                         sqrt(2.0)                         1.414213562

The random function generates values using a uniform distribution, that is all the values are drawn within the specified range with equal probability. The random_exponential, random_gaussian and random_zipfian functions require an additional double parameter which determines the precise shape of the distribution.

  • For an exponential distribution, parameter controls the distribution by truncating a quickly-decreasing exponential distribution at parameter, and then projecting onto integers between the bounds. To be precise, with


    f(x) = exp(-parameter * (x - min) / (max - min + 1)) / (1 - exp(-parameter))

    Then value i between min and max inclusive is drawn with probability: f(i) - f(i + 1).

    Intuitively, the larger the parameter, the more frequently values close to min are accessed, and the less frequently values close to max are accessed. The closer to 0 parameter is, the flatter (more uniform) the access distribution. A crude approximation of the distribution is that the most frequent 1% values in the range, close to min, are drawn parameter% of the time. The parameter value must be strictly positive.

  • For a Gaussian distribution, the interval is mapped onto a standard normal distribution (the classical bell-shaped Gaussian curve) truncated at -parameter on the left and +parameter on the right. Values in the middle of the interval are more likely to be drawn. To be precise, if PHI(x) is the cumulative distribution function of the standard normal distribution, with mean mu defined as (max + min) / 2.0, with


    f(x) = PHI(2.0 * parameter * (x - mu) / (max - min + 1)) /
           (2.0 * PHI(parameter) - 1)

    then value i between min and max inclusive is drawn with probability: f(i + 0.5) - f(i - 0.5). Intuitively, the larger the parameter, the more frequently values close to the middle of the interval are drawn, and the less frequently values close to the min and max bounds. About 67% of values are drawn from the middle 1.0 / parameter, that is a relative 0.5 / parameter around the mean, and 95% in the middle 2.0 / parameter, that is a relative 1.0 / parameter around the mean; for instance, if parameter is 4.0, 67% of values are drawn from the middle quarter (1.0 / 4.0) of the interval (i.e., from 3.0 / 8.0 to 5.0 / 8.0) and 95% from the middle half (2.0 / 4.0) of the interval (second and third quartiles). The minimum allowed parameter value is 2.0.

  • random_zipfian generates a bounded Zipfian distribution. parameter defines how skewed the distribution is. The larger the parameter, the more frequently values closer to the beginning of the interval are drawn. The distribution is such that, assuming the range starts from 1, the ratio of the probability of drawing k versus drawing k+1 is ((k+1)/k)**parameter. For example, random_zipfian(1, ..., 2.5) produces the value 1 about (2/1)**2.5 = 5.66 times more frequently than 2, which itself is produced (3/2)**2.5 = 2.76 times more frequently than 3, and so on.

    pgbench's implementation is based on "Non-Uniform Random Variate Generation", Luc Devroye, p. 550-551, Springer 1986. Due to limitations of that algorithm, the parameter value is restricted to the range [1.001, 1000].

Hash functions hash, hash_murmur2 and hash_fnv1a accept an input value and an optional seed parameter. In case the seed isn't provided, the value of :default_seed is used, which is initialized randomly unless set by the command-line -D option. Hash functions can be used to scatter the distribution of random functions such as random_zipfian or random_exponential. For instance, the following pgbench script simulates a possible real-world workload typical of social media and blogging platforms, where a few accounts generate excessive load:

\set r random_zipfian(0, 100000000, 1.07)
\set k abs(hash(:r)) % 1000000

In some cases several distinct distributions that don't correlate with each other are needed, and this is when the optional seed parameter comes in handy:

\set k1 abs(hash(:r, :default_seed + 123)) % 1000000
\set k2 abs(hash(:r, :default_seed + 321)) % 1000000

As an example, the full definition of the built-in TPC-B-like transaction is:

\set aid random(1, 100000 * :scale)
\set bid random(1, 1 * :scale)
\set tid random(1, 10 * :scale)
\set delta random(-5000, 5000)
BEGIN;
UPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :aid;
SELECT abalance FROM pgbench_accounts WHERE aid = :aid;
UPDATE pgbench_tellers SET tbalance = tbalance + :delta WHERE tid = :tid;
UPDATE pgbench_branches SET bbalance = bbalance + :delta WHERE bid = :bid;
INSERT INTO pgbench_history (tid, bid, aid, delta, mtime) VALUES (:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);
END;

This script allows each iteration of the transaction to reference different, randomly-chosen rows. (This example also shows why it's important for each client session to have its own variables — otherwise they'd not be independently touching different rows.)

Per-Transaction Logging

With the -l option (but without the --aggregate-interval option), pgbench writes information about each transaction to a log file. The log file will be named prefix.nnn, where prefix defaults to pgbench_log, and nnn is the PID of the pgbench process. The prefix can be changed by using the --log-prefix option. If the -j option is 2 or higher, so that there are multiple worker threads, each will have its own log file. The first worker will use the same name for its log file as in the standard single worker case. The additional log files for the other workers will be named prefix.nnn.mmm, where mmm is a sequential number for each worker starting with 1.

The format of the log is:

client_id transaction_no time script_no time_epoch time_us [ schedule_lag ] [ serialization_retries deadlock_retries ]

where client_id indicates which client session ran the transaction, transaction_no counts how many transactions have been run by that session, time is the total elapsed transaction time in microseconds, script_no identifies which script file was used (useful when multiple scripts were specified with -f or -b), and time_epoch/time_us are a Unix-epoch time stamp and an offset in microseconds (suitable for creating an ISO 8601 time stamp with fractional seconds) showing when the transaction completed. The schedule_lag field is the difference between the transaction's scheduled start time, and the time it actually started, in microseconds. It is only present when the --rate option is used. When both --rate and --latency-limit are used, the time for a skipped transaction will be reported as skipped. serialization_retries and deadlock_retries are the sums of all retries after the corresponding errors during the current script execution. They are only present when the maximum number of tries for transactions is not equal to 1 (--max-tries). If the transaction ended with a serialization/deadlock failure, its time will be reported as failed (see the section called “Serialization/Deadlock Failures and Retries” for more information).

Here is a snippet of a log file generated in a single-client run:

0 199 2241 0 1175850568 995598
0 200 2465 0 1175850568 998079
0 201 2513 0 1175850569 608
0 202 2038 0 1175850569 2663

Another example with --rate=100 and --latency-limit=5 (note the additional schedule_lag column):

0 81 4621 0 1412881037 912698 3005
0 82 6173 0 1412881037 914578 4304
0 83 skipped 0 1412881037 914578 5217
0 83 skipped 0 1412881037 914578 5099
0 83 4722 0 1412881037 916203 3108
0 84 4142 0 1412881037 918023 2333
0 85 2465 0 1412881037 919759 740

In this example, transaction 82 was late, because its latency (6.173 ms) was over the 5 ms limit. The next two transactions were skipped, because they were already late before they were even started.

The following example shows a snippet of a log file with failures and retries, with the maximum number of tries set to 10:

3 0 47423 0 1499414498 34501 3 0
3 1 8333 0 1499414498 42848 0 0
3 2 8358 0 1499414498 51219 0 0
4 0 72345 0 1499414498 59433 6 0
1 3 41718 0 1499414498 67879 4 0
1 4 8416 0 1499414498 76311 0 0
3 3 33235 0 1499414498 84469 3 0
0 0 failed 0 1499414498 84905 9 0
2 0 failed 0 1499414498 86248 9 0
3 4 8307 0 1499414498 92788 0 0

When running a long test on hardware that can handle a lot of transactions, the log files can become very large. The --sampling-rate option can be used to log only a random sample of transactions.

Aggregated Logging

With the --aggregate-interval option, a different format is used for the log files:

interval_start num_transactions sum_latency sum_latency_2 min_latency max_latency failed_tx [ sum_lag sum_lag_2 min_lag max_lag [ skipped ] ] [ retried_tx serialization_retries deadlock_retries ]

where interval_start is the start of the interval (as a Unix epoch time stamp), num_transactions is the number of transactions within the interval, sum_latency is the sum of the transaction latencies within the interval, sum_latency_2 is the sum of squares of the transaction latencies within the interval, min_latency is the minimum latency within the interval, max_latency is the maximum latency within the interval, and failed_tx is the number of transactions ended with serialization/deadlock failures within the interval (see the section called “Serialization/Deadlock Failures and Retries” for more information). The next fields, sum_lag, sum_lag_2, min_lag, and max_lag, are only present if the --rate option is used. They provide statistics about the time each transaction had to wait for the previous one to finish, i.e., the difference between each transaction's scheduled start time and the time it actually started. The next field, skipped, is only present if the --latency-limit option is used, too. It counts the number of transactions skipped because they would have started too late. The retried_tx, serialization_retries, and deadlock_retries fields are only present if the maximum number of tries for transactions is not equal to 1 (--max-tries). They report the number of retried transactions and the sum of all retries after the corresponding errors within the interval. Each transaction is counted in the interval when it was committed.

Here is some example output:

1345828501 5601 1542744 483552416 61 2573 0
1345828503 7884 1979812 565806736 60 1479 0
1345828505 7208 1979422 567277552 59 1391 0
1345828507 7685 1980268 569784714 60 1398 0
1345828509 7073 1979779 573489941 236 1411 0

Notice that while the plain (unaggregated) log file shows which script was used for each transaction, the aggregated log does not. Therefore, if you need per-script data, you must aggregate it yourself.
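
Because the format carries per-interval sums and sums of squares, the average latency and its standard deviation can be recovered directly; a minimal sketch (the log file name is illustrative, latencies are in microseconds):

# mean = sum/n; variance = sum_sq/n - mean^2
awk '{ n=$2; if (n > 0) { m=$3/n; print $1, n, m, sqrt($4/n - m*m) } }' pgbench_log.1234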

Per-Statement Report

With the -r option, pgbench collects the following statistics for each statement:

  • latency — elapsed transaction time for each statement. pgbench reports an average value of all successful runs of the statement.

  • The number of serialization and deadlock failures. See the section called “Serialization/Deadlock Failures and Retries” for more information.

  • The number of retries after a serialization/deadlock error in this statement; they are reported as serialization/deadlock retries, respectively.

The report displays the columns with statistics on failures and retries only if the current pgbench run has a failure or retry of any type, respectively.

All values are computed for each statement executed by every client and are reported after the benchmark has finished.

For the default script, the output will look similar to this:

starting vacuum...end.
transaction type: <builtin: TPC-B (sort of)>
default transaction isolation level: read committed
transaction maximum tries number: 1
scaling factor: 1
query mode: simple
number of clients: 10
number of threads: 1
number of transactions per client: 1000
number of transactions actually processed: 10000/10000
latency average = 15.844 ms
latency stddev = 2.715 ms
tps = 618.764555 (including connections establishing)
tps = 622.977698 (excluding connections establishing)
statement latencies in milliseconds:
        0.002  \set aid random(1, 100000 * :scale)
        0.005  \set bid random(1, 1 * :scale)
        0.002  \set tid random(1, 10 * :scale)
        0.001  \set delta random(-5000, 5000)
        0.326  BEGIN;
        0.603  UPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :aid;
        0.454  SELECT abalance FROM pgbench_accounts WHERE aid = :aid;
        5.528  UPDATE pgbench_tellers SET tbalance = tbalance + :delta WHERE tid = :tid;
        7.335  UPDATE pgbench_branches SET bbalance = bbalance + :delta WHERE bid = :bid;
        0.371  INSERT INTO pgbench_history (tid, bid, aid, delta, mtime) VALUES (:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);
        1.212  END;

Another example of output for the default script using serializable default transaction isolation level (--default-isolation-level S):

starting vacuum...end.
transaction type: <builtin: TPC-B (sort of)>
default transaction isolation level: serializable
transaction maximum tries number: 100
scaling factor: 1
query mode: simple
number of clients: 10
number of threads: 1
number of transactions per client: 1000
number of transactions actually processed: 9676/10000
number of transactions failed: 324 (3.240 %)
number of transactions retried: 5629 (56.290 %)
number of retries: 103299 (serialization: 103299, deadlocks: 0)
number of transactions above the 100.0 ms latency limit: 21/9676 (0.217 %)
latency average = 16.138 ms
latency stddev = 21.017 ms
tps = 413.650224 (including connections establishing)
tps = 413.686560 (excluding connections establishing)
statement latencies in milliseconds, serialization failures and retries,
deadlock failures and retries:
  0.002    0      0  0  0  \set aid random(1, 100000 * :scale)
  0.000    0      0  0  0  \set bid random(1, 1 * :scale)
  0.000    0      0  0  0  \set tid random(1, 10 * :scale)
  0.000    0      0  0  0  \set delta random(-5000, 5000)
  0.121    0      0  0  0  BEGIN;
  0.290    0      2  0  0  UPDATE pgbench_accounts
                           SET abalance = abalance + :delta WHERE aid = :aid;
  0.221    0      0  0  0  SELECT abalance FROM pgbench_accounts
                           WHERE aid = :aid;
  0.266  212  72127  0  0  UPDATE pgbench_tellers
                           SET tbalance = tbalance + :delta WHERE tid = :tid;
  0.222  112  31170  0  0  UPDATE pgbench_branches
                           SET bbalance = bbalance + :delta WHERE bid = :bid;
  0.178    0      0  0  0  INSERT INTO pgbench_history
                                  (tid, bid, aid, delta, mtime)
                           VALUES (:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);
  1.210    0      0  0  0  END;

If multiple script files are specified, all statistics are reported separately for each script file.

Note that collecting the additional timing information needed for per-statement latency computation adds some overhead. This will slow average execution speed and lower the computed TPS. The amount of slowdown varies significantly depending on platform and hardware. Comparing average TPS values with and without latency reporting enabled is a good way to measure if the timing overhead is significant.

Serialization/Deadlock Failures and Retries

Transactions with serialization or deadlock errors are rolled back and repeated until they complete successfully or reach the maximum number of tries specified by the --max-tries option. You can also limit the time of tries using the --latency-limit and/or --time options; in these cases, you can allow an unlimited number of tries within the specified time limits by setting the --max-tries option to zero. If the last transaction run fails, this transaction will be reported as failed. (It is assumed that one transaction script contains only one transaction; see the section called “What Is the Transaction Actually Performed in pgbench?” for more information.)

Note

Be careful when repeating scripts that contain multiple transactions: the script is always retried completely, so the successful transactions can be performed several times.

Be careful when repeating transactions with shell commands. Unlike the results of SQL commands, the results of shell commands are not rolled back, except for the variable value of the \setshell command. If a shell command fails, its client is aborted without a restart.

The latency of a successful transaction includes the entire time of transaction execution with rollbacks and retries. The latency for failed transactions and commands is not computed separately.

The main report contains the number of failed transactions if it is non-zero. If the total number of retried transactions is non-zero, the main report also contains the statistics related to retries: the total number of retried transactions, the total number of retries, and the number of retries after each kind of error. The per-statement report inherits all columns from the main report.

Good Practices

It is very easy to use pgbench to produce completely meaningless numbers. Here are some guidelines to help you get useful results.

In the first place, never believe any test that runs for only a few seconds. Use the -t or -T option to make the run last at least a few minutes, so as to average out noise. In some cases you may need hours to get reproducible numbers. It's a good idea to run the test a few times to find out whether your numbers are reproducible.

For the default TPC-B-like test scenario, the initialization scale factor (-s) should be at least as large as the largest number of clients you intend to test (-c); else you'll mostly be measuring update contention. There are only -s rows in the pgbench_branches table, and every transaction wants to update one of them, so -c values in excess of -s will undoubtedly result in lots of transactions blocked waiting for other transactions.

The default test scenario is also quite sensitive to how long it's been since the tables were initialized: accumulation of dead rows and dead space in the tables changes the results. To understand the results you must keep track of the total number of updates and when vacuuming happens. If autovacuum is enabled it can result in unpredictable changes in measured performance.

A limitation of pgbench is that it can itself become the bottleneck when trying to test a large number of client sessions. This can be alleviated by running pgbench on a different machine from the database server, although low network latency will be essential. It might even be useful to run several pgbench instances concurrently, on several client machines, against the same database server.

Security

If untrusted users have access to a database that has not adopted a secure schema usage pattern, do not run pgbench in that database. pgbench uses unqualified names and does not manipulate the search path.