As per the HammerDB documentation, the same test run for multiple iterations on the same hardware should show low deviation (1%-2%).
However, we observed TPC-C performance (NOPM/TPM) deviation ranging from >2% up to 21% with 1 to 250 virtual users on a 2-socket system, across multiple iterations (5-6 runs).
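For reference, the deviation figures above can be computed as the spread of NOPM across iterations relative to the mean. A minimal sketch, where the NOPM values are made-up placeholders rather than measured results:

```python
from statistics import mean

def pct_deviation(nopm_runs):
    """Spread of NOPM across iterations as a percentage of the mean:
    (max - min) / mean * 100."""
    return (max(nopm_runs) - min(nopm_runs)) / mean(nopm_runs) * 100

# Hypothetical NOPM values from five iterations (placeholders, not measurements).
runs = [100_000, 98_500, 101_200, 97_800, 102_000]
print(f"{pct_deviation(runs):.1f}%")  # -> 4.2%
```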
We checked the following configurations/system settings:
1. Reduced max_connections to fewer connections (e.g., from 1700 to 200 in postgresql.conf).
2. Reduced warehouses in the schema build (pg_count_ware from 800 to 400/200).
3. Rebuilt the schema for each run/iteration (after capturing the results of each iteration, dropped the tpcc database, restarted PostgreSQL, and rebuilt the schema for the next iteration).
4. Unmounted and remounted the /data folder from SSD for each iteration.
5. NUMA settings such as taskset/core pinning, and SMT-OFF/SMT-ON.
6. Ran the test on different NUMA nodes, e.g., numactl --interleave=all, or with NUMA auto-balancing.
7. Used the default postgresql.conf with fewer virtual users (e.g., 1, 2, 4, 8, 12, 16, 20), a small warehouse count (20), and pg_num_vu 4.
8. Ran HammerDB on a client machine and PostgreSQL on the master machine.
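The rebuild-per-iteration procedure in item 3 can be sketched as a driver loop. This is only a sketch of the reported steps, not HammerDB's documented procedure: the script names (schema_tpcc.tcl, test_tpcc.tcl) come from the report, while the PostgreSQL data directory and the use of `hammerdbcli auto` are assumptions about the local setup.

```python
import shutil
import subprocess

PGDATA = "/data/pgdata"  # assumed PostgreSQL data directory

def iteration_commands(pgdata=PGDATA):
    """Commands for one rebuild-and-run iteration, in order."""
    return [
        ["psql", "-c", "DROP DATABASE IF EXISTS tpcc"],  # drop last run's schema
        ["pg_ctl", "-D", pgdata, "restart"],             # restart the server
        ["hammerdbcli", "auto", "schema_tpcc.tcl"],      # rebuild the TPC-C schema
        ["hammerdbcli", "auto", "test_tpcc.tcl"],        # run the timed test
    ]

# Only execute on a machine where HammerDB is actually installed.
if shutil.which("hammerdbcli"):
    for _ in range(5):  # 5-6 iterations, as in the report
        for cmd in iteration_commands():
            subprocess.run(cmd, check=True)  # fail loudly on any broken step
```

Restarting the server and dropping the database between iterations keeps each run from inheriting shared-buffer and catalog state from the previous one.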
Here are the questions:
1. What is the right way to test PostgreSQL with HammerDB for multiple iterations?
2. Is the performance deviation across multiple runs expected due to raw Postgres performance?
3. Can CPU usage, I/O volume, I/O latency, or HDD/SSD latency be the reason for the deviation?
PG Bug reporting form <noreply@postgresql.org> writes:
> NOPM values captured with HammerDB-v4.3 scripts (schema_tpcc.tcl and
> test_tpcc.tcl) for multiple trials.
> The expected performance deviation between multiple trials should be less
> than 2%.
According to who? Even if you'd provided an easily reproducible example, I doubt we'd accept this as a bug. Adding more sessions does not have zero cost.