Re: RFC: replace pg_stat_activity.waiting with something more descriptive - Mailing list pgsql-hackers

From: Ildus Kurbangaliev
Subject: Re: RFC: replace pg_stat_activity.waiting with something more descriptive
Msg-id: 55A38B57.6080804@postgrespro.ru
In response to: Re: RFC: replace pg_stat_activity.waiting with something more descriptive (Amit Kapila <amit.kapila16@gmail.com>)
Responses: Re: RFC: replace pg_stat_activity.waiting with something more descriptive
List: pgsql-hackers

On 07/12/2015 06:53 AM, Amit Kapila wrote:
> On Fri, Jul 10, 2015 at 10:03 PM, Alexander Korotkov <a.korotkov@postgrespro.ru> wrote:
> >
> > On Fri, Jun 26, 2015 at 6:39 AM, Robert Haas <robertmhaas@gmail.com> wrote:
> >>
> >> On Thu, Jun 25, 2015 at 9:23 AM, Peter Eisentraut <peter_e@gmx.net> wrote:
> >> > On 6/22/15 1:37 PM, Robert Haas wrote:
> >> >> Currently, the only time we report a process as waiting is when it is
> >> >> waiting for a heavyweight lock. I'd like to make that somewhat more
> >> >> fine-grained, by reporting the type of heavyweight lock it's awaiting
> >> >> (relation, relation extension, transaction, etc.). Also, I'd like to
> >> >> report when we're waiting for a lwlock, and report either the specific
> >> >> fixed lwlock for which we are waiting, or else the type of lock (lock
> >> >> manager lock, buffer content lock, etc.) for locks of which there is
> >> >> more than one. I'm less sure about this next part, but I think we
> >> >> might also want to report ourselves as waiting when we are doing an OS
> >> >> read or an OS write, because it's pretty common for people to think
> >> >> that a PostgreSQL bug is to blame when in fact it's the operating
> >> >> system that isn't servicing our I/O requests very quickly.
> >> >
> >> > Could that also cover waiting on network?
> >>
> >> Possibly. My approach requires that the number of wait states be kept
> >> relatively small, ideally fitting in a single byte. And it also
> >> requires that we insert pgstat_report_waiting() calls around the thing
> >> that is notionally blocking. So, if there are a small number of
> >> places in the code where we do network I/O, we could stick those calls
> >> around those places, and this would work just fine. But if a foreign
> >> data wrapper, or any other piece of code, does network I/O - or any
> >> other blocking operation - without calling pgstat_report_waiting(), we
> >> just won't know about it.
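
To make sure I read the single-byte idea right, here is a rough sketch of
what such reporting could look like (all names and the shared-memory
plumbing here are placeholders of mine, not the actual API from any patch):

#include <stdint.h>
#include <unistd.h>

/*
 * Sketch only: one status byte per backend, kept in shared memory.
 * A single aligned byte can be stored and loaded without locks or
 * atomics, which is what keeps the reporting nearly free.
 */
typedef enum WaitState
{
    WAIT_NONE = 0,
    WAIT_HEAVYWEIGHT_LOCK,
    WAIT_LWLOCK,
    WAIT_IO_READ,
    WAIT_IO_WRITE,
    WAIT_NETWORK
} WaitState;

/* would point at this backend's slot in shared memory */
static volatile uint8_t *my_wait_byte;

static inline void
report_waiting(WaitState w)
{
    *my_wait_byte = (uint8_t) w;
}

/* a blocking syscall wrapped with the report */
static ssize_t
read_reporting_wait(int fd, void *buf, size_t len)
{
    ssize_t     nread;

    report_waiting(WAIT_IO_READ);
    nread = read(fd, buf, len);
    report_waiting(WAIT_NONE);
    return nread;
}

The store is a plain byte write, so the hot path costs almost nothing; the
price is the one Robert describes: any code that blocks without calling the
report function stays invisible.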
> >
> > The idea of fitting wait information into a single byte, avoiding both locking and atomic operations, is attractive.
> > But how long can we go with it?
> > Could a DBA draw some conclusion from a single query of pg_stat_activity, or from two queries?
>
> It could be helpful in situations where the session is stuck on a particular lock, or when you see that most of the backends are showing a wait on the same LWLock.
>
> > In order to draw a conclusion about system load, one has to run a daemon or background worker that continuously samples the current wait events. Sampling the current wait event at a high rate also adds overhead to the system, just as locking or atomic operations would.
>
> The idea of sampling sounds good, but I think if it adds a performance penalty on the system, then we should look into ways to avoid it in hot paths.
>
> > Checking whether a backend is stuck isn't easy either. If you don't expose how long the last wait event has been going on, it's hard to distinguish being stuck on a particular lock from high concurrency on that lock type.
> >
> > I can propose the following:
> >
> > 1) Expose more information about the current lock to the user. For instance, having the duration of the current wait event, the user can determine whether a backend is getting stuck on a particular event without sampling.
>
> For having the duration, I think you need to use gettimeofday or some similar call to calculate the wait time. That will be okay for the cases where the wait time is long, but it could be problematic for the cases where the waits are very small (which could probably be the case for LWLocks).

gettimeofday is already used in our patch and it gives enough accuracy (microseconds), especially by the time an lwlock becomes a problem. We also tested our implementation and it showed an overhead of less than 1% (http://www.postgresql.org/message-id/559D4729.9080704@postgrespro.ru, the testing part). We need help here with testing on other platforms.

I used gettimeofday because the built-in module "instr_time.h" already provides cross-platform, tested functions for time measurement, but I'm planning to make a similar implementation based on clock_gettime with a monotonic clock, for more accuracy.
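
To give an idea of the shape, the measurement is essentially the following
(a simplified sketch, not the literal patch code; perform_wait() is a
placeholder for whatever blocking call is being timed):

#include "postgres.h"
#include "portability/instr_time.h"

/*
 * instr_time.h is gettimeofday-based on most Unix platforms today; a
 * clock_gettime(CLOCK_MONOTONIC) variant would look the same to the
 * caller.
 */
extern void perform_wait(void);

static uint64
time_one_wait(void)
{
    instr_time  start_time;
    instr_time  duration;

    INSTR_TIME_SET_CURRENT(start_time);
    perform_wait();
    INSTR_TIME_SET_CURRENT(duration);

    INSTR_TIME_SUBTRACT(duration, start_time);
    return INSTR_TIME_GET_MICROSEC(duration);
}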
If you agree, I'll make some modifications to your patch so that we can later extend it with our other changes. The main issue is that one variable for all wait types is not enough. For flexibility in the future we need at least two: a class and an event, for example class=LWLock, event=ProcArrayLock, or class=Storage, event=READ. With that modification it is not a big problem to merge our patches into one. There are not so many types of waits, so they still fit into one int32 and can be read atomically.
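
A sketch of the packed representation I have in mind (the exact layout and
names are tentative):

#include <stdint.h>

/*
 * Wait class in the high byte, wait event in the low three bytes.  An
 * aligned 32-bit value is written and read in one go on the platforms
 * we support, so the waiting backend can publish it without a lock.
 */
#define WAIT_CLASS_SHIFT    24
#define WAIT_EVENT_MASK     0x00FFFFFFu

static inline uint32_t
make_wait_info(uint8_t wait_class, uint32_t wait_event)
{
    return ((uint32_t) wait_class << WAIT_CLASS_SHIFT) |
           (wait_event & WAIT_EVENT_MASK);
}

static inline uint8_t
get_wait_class(uint32_t wait_info)
{
    return (uint8_t) (wait_info >> WAIT_CLASS_SHIFT);
}

static inline uint32_t
get_wait_event(uint32_t wait_info)
{
    return wait_info & WAIT_EVENT_MASK;
}

Since class and event travel in the same 32-bit store, a reader can never
see the class of one wait combined with the event of another.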
> > 2) Accumulate per-backend statistics about each wait event type: the number of occurrences and the total duration. With these statistics the user can identify system bottlenecks, again without sampling.
> >
> > Number #2 will be provided as a separate patch.
> > Number #1 requires a different concurrency model. Ildus will extract it from the "waits monitoring" patch shortly.
> >
>
> Sure, I think those should be evaluated as separate patches, and I can look into those patches and see if something more can be exposed as part of this patch that can be reused in those patches.
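
For what it's worth, the per-backend accumulation from Alexander's #2 above
needs little more than this (my sketch only; naming, sizing and where the
array lives are all open questions):

#include <stdint.h>

/*
 * One slot per wait event type, updated only by the owning backend,
 * so plain increments are enough on the writer side.
 */
#define N_WAIT_EVENT_TYPES  64      /* illustrative bound */

typedef struct WaitEventCounters
{
    uint64_t    count;              /* number of waits observed */
    uint64_t    total_usec;         /* total time spent waiting */
} WaitEventCounters;

static WaitEventCounters wait_counters[N_WAIT_EVENT_TYPES];

static inline void
account_wait(int event_type, uint64_t wait_usec)
{
    wait_counters[event_type].count += 1;
    wait_counters[event_type].total_usec += wait_usec;
}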
--
Ildus Kurbangaliev
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company