Mark Wong <markw@osdl.org> writes:
> I have some initial results using 8.0beta5 with our OLTP workload.
> http://www.osdl.org/projects/dbt2dev/results/dev4-010/199/
> throughput: 4076.97
Do people really only look at the "throughput" numbers? Looking at those
graphs it seems that while most of the OLTP transactions complete with
reasonable response times, there is still a significant number that take as
much as 30s to fulfil.
Is this just a consequence of the type of queries being tested and the data
distribution? Or are these queries that Postgres could run consistently fast
but is, for some reason, occasionally serving with very large latencies?
I'm concerned because in my experience with web sites, once the database
responds slowly for even a small fraction of the requests, the web server
falls behind in handling HTTP requests and a catastrophic failure builds up.
It seems to me that reporting the maximum, or at least a 95% interval
(e.g. 95% of queries executed between 50ms and 20s), would be more useful
than an overall average.
Personally I would be happier with an average of 200ms but an interval of
100-300ms than an average of 100ms but an interval of 50ms-20s. Consistency
can be more important than sheer speed.
--
greg