Re: a heavy duty operation on an "unused" table kills my server - Mailing list pgsql-performance

From: Robert Haas
Subject: Re: a heavy duty operation on an "unused" table kills my server
Date: Sat, 16 Jan 2010
Msg-id: 603c8f071001160449p7ed128a8r5f5e343c5970516@mail.gmail.com
In response to: Re: a heavy duty operation on an "unused" table kills my server (Greg Smith <greg@2ndquadrant.com>)
List: pgsql-performance
On Sat, Jan 16, 2010 at 4:09 AM, Greg Smith <greg@2ndquadrant.com> wrote:
> Tom Lane wrote:
>>
>> This is in fact exactly what the vacuum_cost_delay logic does.
>> It might be interesting to investigate generalizing that logic
>> so that it could throttle all of a backend's I/O not just vacuum.
>> In principle I think it ought to work all right for any I/O-bound
>> query.
>>
>
> So much for inventing a new idea; never considered that parallel before.
> The logic is perfectly reusable; not so sure how much of the implementation
> would be, though.
>
> I think the main difference is that there's one shared VacuumCostBalance to
> worry about, whereas each backend that might be limited would need its own
> clear scratchpad to accumulate costs into.  That part seems similar to how
> the new EXPLAIN BUFFERS capability instruments things, though, which was the
> angle I was thinking of approaching this from.  Make that instrumentation more
> global, periodically compute a total cost from that instrument snapshot, and
> nap whenever the delta between the cost at the last nap and the current cost
> exceeds your threshold.
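
To make the shape of that concrete, here is a minimal sketch in C of the
vacuum_cost_delay-style accounting, generalized to a per-backend throttle.
Every name below (io_cost_balance, io_cost_limit, charge_io_cost) and the
default numbers are hypothetical stand-ins, not existing PostgreSQL symbols;
only pg_usleep() is real:

/*
 * A minimal sketch of vacuum_cost_delay-style accounting, generalized
 * to a per-backend I/O throttle.  The real vacuum code uses
 * VacuumCostBalance, VacuumCostLimit, and friends instead.
 */
static int  io_cost_balance = 0;    /* this backend's private scratchpad */
static int  io_cost_limit = 200;    /* nap once we've accumulated this much */
static int  io_cost_delay_ms = 20;  /* length of each nap */

/* Charge each buffer access; cache misses cost more than hits. */
static void
charge_io_cost(bool buffer_hit)
{
    io_cost_balance += buffer_hit ? 1 : 10;

    if (io_cost_balance >= io_cost_limit)
    {
        pg_usleep(io_cost_delay_ms * 1000L);    /* argument is microseconds */
        io_cost_balance = 0;        /* reset the delta at each nap */
    }
}

The private io_cost_balance is the per-backend scratchpad described above,
in contrast to vacuum's single shared VacuumCostBalance.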

Seems like you'd also need to think about priority inversion, if the
"low-priority" backend is holding any locks.

...Robert
