Re: measuring lwlock-related latency spikes - Mailing list pgsql-hackers

From: Robert Haas
Subject: Re: measuring lwlock-related latency spikes
Msg-id: CA+TgmoZEPVv-Lrn-mmzFteTv8bx_g5jLfiEJgHYPQN50pnswVA@mail.gmail.com
In response to: Re: measuring lwlock-related latency spikes ("Kevin Grittner" <Kevin.Grittner@wicourts.gov>)
List: pgsql-hackers
On Tue, Apr 3, 2012 at 8:28 AM, Kevin Grittner
<Kevin.Grittner@wicourts.gov> wrote:
> Might as well jump in with both feet:
>
> autovacuum_naptime = 1s
> autovacuum_vacuum_threshold = 1
> autovacuum_vacuum_scale_factor = 0.0
>
> If that smooths the latency peaks and doesn't hurt performance too
> much, it's decent evidence that the more refined technique could be a
> win.
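
(For anyone who wants to try the same thing: all three of those GUCs are
reloadable, so after editing postgresql.conf, something like the following
should be enough to apply and verify them without a restart. The pg_settings
query is just a sanity check.)

    -- reload the config; requires superuser
    SELECT pg_reload_conf();

    -- confirm the new values took effect
    SELECT name, setting
    FROM pg_settings
    WHERE name IN ('autovacuum_naptime',
                   'autovacuum_vacuum_threshold',
                   'autovacuum_vacuum_scale_factor');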

It seems this isn't good for either throughput or latency.  Here are
latency percentiles for a recent run against master with my usual
settings:

90 1668
91 1747
92 1845
93 1953
94 2064
95 2176
96 2300
97 2461
98 2739
99 3542
100 12955473

And here's how it came out with these settings:

90 1818
91 1904
92 1998
93 2096
94 2200
95 2316
96 2459
97 2660
98 3032
99 3868
100 10842354

tps came out to 13658.330709 (including connections establishing),
vs. 14546.644712 on the other run.
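
(In case it's useful to anyone reproducing these numbers: one way to pull
percentiles like the above out of a pgbench per-transaction log. This is just
a sketch; it assumes a -l log file at a hypothetical path, with the third
space-separated field being the per-transaction latency in microseconds.)

    -- scratch table matching the pgbench -l line format:
    -- client_id txn_no latency_us file_no epoch usec
    CREATE TEMP TABLE txn_log (client int, txno int, latency_us bigint,
                               file_no int, epoch bigint, usec int);
    COPY txn_log FROM '/tmp/pgbench_log.12345' WITH (FORMAT text, DELIMITER ' ');

    -- max() within each ntile bucket approximates that percentile
    SELECT pct, max(latency_us) AS latency_us
    FROM (SELECT latency_us, ntile(100) OVER (ORDER BY latency_us) AS pct
          FROM txn_log) t
    WHERE pct >= 90
    GROUP BY pct
    ORDER BY pct;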

My (possibly incorrect) suspicion is that even with these ridiculously
aggressive settings, nearly all of the cleanup work is getting done by
HOT prunes rather than by vacuum.  If so, we're still not testing what
we really want to be testing, and we're doing a lot of extra work along
the way.
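
That theory ought to be checkable from the stats collector:
pg_stat_user_tables counts HOT updates and autovacuum runs separately,
so something along these lines (just a sketch) should show where the
cleanup is actually happening:

    -- a high hot_pct alongside a small autovacuum_count would suggest
    -- that HOT, not vacuum, is doing nearly all of the cleanup work
    SELECT relname, n_tup_upd, n_tup_hot_upd,
           round(100.0 * n_tup_hot_upd / nullif(n_tup_upd, 0), 1) AS hot_pct,
           autovacuum_count, last_autovacuum
    FROM pg_stat_user_tables
    ORDER BY n_tup_upd DESC
    LIMIT 10;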

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

