Re: measuring lwlock-related latency spikes - Mailing list pgsql-hackers

From Simon Riggs
Subject Re: measuring lwlock-related latency spikes
Date
Msg-id CA+U5nMJXDqvnD=H9_cx2TWxOdy0WeGNCded+W+2ZCCsLy+mRsw@mail.gmail.com
In response to Re: measuring lwlock-related latency spikes  (Tom Lane <tgl@sss.pgh.pa.us>)
Responses Re: measuring lwlock-related latency spikes  (Tom Lane <tgl@sss.pgh.pa.us>)
Re: measuring lwlock-related latency spikes  (Greg Stark <stark@mit.edu>)
Re: measuring lwlock-related latency spikes  (Robert Haas <robertmhaas@gmail.com>)
List pgsql-hackers
On Mon, Apr 2, 2012 at 8:04 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Robert Haas <robertmhaas@gmail.com> writes:
>> Long story short, when a CLOG-related stall happens,
>> essentially all the time is being spent in this here section of code:
>
>>     /*
>>      * If not part of Flush, need to fsync now.  We assume this happens
>>      * infrequently enough that it's not a performance issue.
>>      */
>>     if (!fdata)      /* fsync and close the file */
>
> Seems like basically what you've proven is that this code path *is* a
> performance issue, and that we need to think a bit harder about how to
> avoid doing the fsync while holding locks.

Agreed, though I think it means the fsync is happening on a filesystem
where a single fsync forces a flush of all outstanding dirty data, not
just the one file. That time is not representative of a normal fsync.
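
As a quick way to confirm that, a minimal standalone probe (not from this
thread; the file name and page size are arbitrary) can write one 8kB page
and time just the fsync. On a filesystem where fsync also flushes unrelated
dirty data, this number balloons under concurrent write load:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

int
main(int argc, char **argv)
{
    const char *path = (argc > 1) ? argv[1] : "fsync_probe.tmp";
    char        page[8192];
    struct timespec t0, t1;
    int         fd;

    memset(page, 'x', sizeof(page));

    fd = open(path, O_CREAT | O_WRONLY | O_TRUNC, 0600);
    if (fd < 0)
    {
        perror("open");
        return 1;
    }
    if (write(fd, page, sizeof(page)) != (ssize_t) sizeof(page))
    {
        perror("write");
        return 1;
    }

    /* time only the fsync, not the write */
    clock_gettime(CLOCK_MONOTONIC, &t0);
    if (fsync(fd) != 0)
    {
        perror("fsync");
        return 1;
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    printf("fsync of one 8kB page took %.3f ms\n",
           (t1.tv_sec - t0.tv_sec) * 1000.0 +
           (t1.tv_nsec - t0.tv_nsec) / 1e6);

    close(fd);
    unlink(path);
    return 0;
}

Run it on the data directory's filesystem while the benchmark is generating
write traffic and compare against an idle run.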

I suggest we optimise that by moving the dirty block into shared
buffers and leaving it as dirty. That way we don't need to write or
fsync at all and the bgwriter can pick up the pieces. So my earlier
patch to get the bgwriter to clean the clog would be superfluous.
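
A rough standalone sketch of that idea (all names and structures here are
invented for illustration; this is not slru.c and not an actual patch): the
foreground path only memcpy's the dirty page into a shared pool slot and
marks it dirty, and the bgwriter does the write and fsync later, off the
critical path.

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define PAGE_SIZE   8192
#define POOL_SLOTS  16

typedef struct
{
    char        data[PAGE_SIZE];
    int         pageno;         /* which on-disk page this slot holds */
    bool        in_use;
    bool        dirty;          /* still needs write + fsync */
} PoolSlot;

static PoolSlot pool[POOL_SLOTS];

/*
 * Foreground path: no write(), no fsync().  Park the dirty page in the
 * shared pool and return immediately; a lock would only need to cover
 * the memcpy.
 */
static bool
hand_off_dirty_page(int pageno, const char *page)
{
    for (int i = 0; i < POOL_SLOTS; i++)
    {
        if (!pool[i].in_use)
        {
            memcpy(pool[i].data, page, PAGE_SIZE);
            pool[i].pageno = pageno;
            pool[i].in_use = true;
            pool[i].dirty = true;
            return true;
        }
    }
    return false;               /* pool full: caller must fall back */
}

/*
 * Background writer: pick up the pieces by writing and syncing dirty
 * slots outside anyone's critical path (actual file I/O elided).
 */
static void
bgwriter_flush_dirty(void)
{
    for (int i = 0; i < POOL_SLOTS; i++)
    {
        if (pool[i].in_use && pool[i].dirty)
        {
            /* write() of pool[i].data and fsync() would go here */
            printf("bgwriter flushed page %d\n", pool[i].pageno);
            pool[i].dirty = false;
        }
    }
}

int
main(void)
{
    char        page[PAGE_SIZE] = {0};

    hand_off_dirty_page(42, page);      /* fast: no I/O in foreground */
    bgwriter_flush_dirty();             /* deferred write + fsync */
    return 0;
}

The point being that the foreground backend never blocks on storage while
holding the lock; only the bgwriter pays for the write and the fsync.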

--
 Simon Riggs                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services

