Re: log chunking broken with large queries under load - Mailing list pgsql-hackers

From Andrew Dunstan
Subject Re: log chunking broken with large queries under load
Date
Msg-id 4F79D3EC.6020207@dunslane.net
In response to Re: log chunking broken with large queries under load  (Tom Lane <tgl@sss.pgh.pa.us>)
Responses Re: log chunking broken with large queries under load  (Tom Lane <tgl@sss.pgh.pa.us>)
List pgsql-hackers

On 04/02/2012 12:00 PM, Tom Lane wrote:
> Andrew Dunstan<andrew@dunslane.net>  writes:
>> On 04/01/2012 06:34 PM, Andrew Dunstan wrote:
>>> Some of my PostgreSQL Experts colleagues have been complaining to me
>>> that servers under load with very large queries cause CSV log files
>>> that are corrupted,
>> We could just increase CHUNK_SLOTS in syslogger.c, but I opted instead
>> to stripe the slots with a two dimensional array, so we didn't have to
>> search a larger number of slots for any given message. See the attached
>> patch.
> This seems like it isn't actually fixing the problem, only pushing out
> the onset of trouble a bit.  Should we not replace the fixed-size array
> with a dynamic data structure?


"A bit" = 10 to 20 times - more if we set CHUNK_STRIPES higher. :-)

But maybe you're right. If we do that and stick with my two-dimensional 
scheme to keep the number of probes per chunk down, we'd need to reorganize 
the array every time we enlarged it. That might be a bit messy, but 
might be OK. Or maybe linearly searching an array of several hundred 
slots for our pid for every log chunk that comes in would be fast enough.

cheers

andrew



