On 04/15/2014 09:53 PM, Rod Taylor wrote:
> A documented beta test process/toolset which does the following would help:
> 1) Enables full query logging
> 2) Creates a replica of a production DB, recording $TIME when it stops.
> 3) Allows the user to make changes (upgrade to 9.4, change hardware, change
> kernel settings, ...)
> 4) Plays queries from the CSV logs starting from $TIME, mimicking actual
> timing and transaction boundaries
>
> If Pg can make it easy to duplicate activities currently going on in
> production inside another environment, I would be pleased to fire a couple
> billion queries through it over the next few weeks.
>
> #4 should include reporting useful to the project, such as a sampling of
> queries which performed significantly worse and a few relative performance
> stats for overall execution time.
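For reference, step 1 above boils down to csvlog settings along these lines
in postgresql.conf (exact choices, e.g. log_statement = 'all' vs.
log_min_duration_statement = 0, are up to whoever is testing):

logging_collector = on
log_destination = 'csvlog'
log_statement = 'all'              # or log_min_duration_statement = 0 to get durations
log_connections = on               # helps reconstruct session boundaries
log_disconnections = on
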
So we have some software we've been procrastinating on OSS'ing, which does:
1) Takes full query CSV logs from a running postgres instance
2) Runs them against a target instance in parallel
3) Records response times for all queries
tsung and pgreplay also do this, but they have limitations that make them
impractical for replaying an arbitrary set of logs.
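For illustration only, the core of such a replay loop looks roughly like
this. It assumes psycopg2, log_statement = 'all', and the standard 23-column
csvlog layout; the column indexes, regex, and function names are placeholders,
not our actual code:

#!/usr/bin/env python
# Rough sketch of the replay loop (items 2 and 3), not the actual tool.
# Assumes the 9.x 23-column csvlog layout and log_statement = 'all';
# column indexes, regex, and names below are illustrative only.
import csv, re, sys, time, threading, datetime, collections
import psycopg2

STMT_RE = re.compile(r'(?:duration: [\d.]+ ms\s+)?(?:statement|execute [^:]*): (.*)', re.S)

def parse_log(path):
    """Group (timestamp, sql) pairs by original session_id."""
    sessions = collections.defaultdict(list)
    for row in csv.reader(open(path)):
        m = STMT_RE.match(row[13])                 # column 13: message
        if not m:
            continue                               # skip errors, connection lines, etc.
        # drop the timezone abbreviation; strptime can't parse it portably
        ts = datetime.datetime.strptime(row[0][:23], '%Y-%m-%d %H:%M:%S.%f')
        sessions[row[5]].append((ts, m.group(1)))  # column 5: session_id
    return sessions

def replay_session(dsn, sid, stmts, t0, start, out, lock):
    """Replay one original session, preserving its relative timing."""
    conn = psycopg2.connect(dsn)
    conn.autocommit = True        # BEGIN/COMMIT arrive as ordinary logged statements
    cur = conn.cursor()
    for ts, sql in stmts:
        delay = (ts - t0).total_seconds() - (time.time() - start)
        if delay > 0:
            time.sleep(delay)     # wait until this statement's original offset
        began = time.time()
        error = ''
        try:
            cur.execute(sql)
        except Exception as e:
            error = str(e).strip()
        with lock:
            out.writerow([sid, sql[:80], '%.3f' % (time.time() - began), error])
    conn.close()

def main(logfile, dsn):
    sessions = parse_log(logfile)
    t0 = min(s[0][0] for s in sessions.values())   # earliest logged statement
    start, lock = time.time(), threading.Lock()
    out = csv.writer(open('replay_results.csv', 'w'))
    threads = [threading.Thread(target=replay_session,
                                args=(dsn, sid, stmts, t0, start, out, lock))
               for sid, stmts in sessions.items()]
    for t in threads: t.start()
    for t in threads: t.join()

if __name__ == '__main__':
    main(sys.argv[1], sys.argv[2])   # replay.py postgresql.csv "host=target dbname=mydb"

One thread per original session keeps ordering and transaction boundaries
intact, since BEGIN/COMMIT show up as ordinary logged statements; bind
parameters for prepared statements and anything else in the detail column are
ignored here.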
What our tool would still need is:
A) scripting around coordinated backups
B) scripting for single-command runs, including changing postgresql.conf to
record data.
C) tools to *analyze* the output data, including error messages.
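As a strawman for (C), a comparison step could look something like the
following, keying on the per-statement timing CSV that the replay sketch above
writes; a real version would normalize query text so identical statements with
different constants group together:

#!/usr/bin/env python
# Strawman for (C): compare per-statement timings from two replay runs
# (say, 9.3 baseline vs. 9.4 candidate) and summarize errors.  The input
# format is whatever the replay harness writes; here it is the
# one-row-per-statement CSV from the sketch above.  Threshold is arbitrary.
import csv, sys
from collections import defaultdict

def load(path):
    times, errors = defaultdict(list), defaultdict(int)
    for sid, stmt, secs, err in csv.reader(open(path)):
        if err:
            errors[err] += 1
        else:
            times[stmt].append(float(secs))
    return times, errors

def main(baseline, candidate, threshold=1.5):
    base, _ = load(baseline)
    cand, errors = load(candidate)
    print('statements slower by %.0f%% or more:' % ((threshold - 1) * 100))
    for stmt, samples in cand.items():
        if stmt not in base:
            continue
        old = sum(base[stmt]) / len(base[stmt])
        new = sum(samples) / len(samples)
        if old > 0 and new / old >= threshold:
            print('  %6.1fx  %s' % (new / old, stmt))
    print('errors during the candidate run:')
    for msg, n in sorted(errors.items(), key=lambda kv: -kv[1]):
        print('  %5d  %s' % (n, msg))

if __name__ == '__main__':
    main(sys.argv[1], sys.argv[2])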
--
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com