Re: proposal: lock_time for pg_stat_database - Mailing list pgsql-hackers

From Pavel Stehule
Subject Re: proposal: lock_time for pg_stat_database
Date
Msg-id CAFj8pRC_AUHZojL8zxOy7ix9thXJpMrXUwC1RvWzzdaTWmOX2w@mail.gmail.com
In response to Re: proposal: lock_time for pg_stat_database  (Jim Nasby <Jim.Nasby@BlueTreble.com>)
Responses Re: proposal: lock_time for pg_stat_database
List pgsql-hackers


2015-01-16 18:23 GMT+01:00 Jim Nasby <Jim.Nasby@bluetreble.com>:
On 1/16/15 11:00 AM, Pavel Stehule wrote:
Hi all,

Some time ago I proposed measuring lock time per query. The main issue was finding a method to show this information. Today's proposal is a little simpler, but still useful: we can show the total lock time per database in the pg_stat_database statistics. A high number can be a signal of lock issues.
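For illustration, a minimal sketch of how the proposed counter might be queried, assuming it were exposed as a lock_time column in pg_stat_database (no such column exists today; units are assumed to be milliseconds):

    -- A sketch, assuming the proposed lock_time column existed in
    -- pg_stat_database (it does not today); units assumed milliseconds.
    SELECT datname,
           lock_time,
           lock_time / NULLIF(xact_commit + xact_rollback, 0)
               AS lock_ms_per_xact
    FROM pg_stat_database
    WHERE datname = current_database();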

Would this not use the existing stats mechanisms? If so, couldn't we do this per table? (I realize that won't handle all cases; we'd still need a "lock_time_other" somewhere).
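A hypothetical sketch of the per-table variant Jim describes, assuming a lock_time column in pg_stat_user_tables plus a database-wide "lock_time_other" bucket (neither exists today):

    -- Hypothetical: assumes a per-table lock_time counter in
    -- pg_stat_user_tables, with waits not attributable to any one
    -- table accumulating in a database-wide lock_time_other.
    SELECT relname, lock_time
    FROM pg_stat_user_tables
    ORDER BY lock_time DESC
    LIMIT 10;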


It can use the existing stats mechanism.

I'm afraid it isn't possible to assign waiting time to a table, because it depends on the order in which the locks are taken.
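One way to see the ordering problem, with two hypothetical tables t1 and t2: two sessions take the same locks in opposite order, so which table the wait would be charged to depends only on timing, not on the tables themselves.

    -- Session A:
    BEGIN;
    LOCK TABLE t1 IN ACCESS EXCLUSIVE MODE;
    LOCK TABLE t2 IN ACCESS EXCLUSIVE MODE;  -- A may block here on B ...

    -- Session B:
    BEGIN;
    LOCK TABLE t2 IN ACCESS EXCLUSIVE MODE;
    LOCK TABLE t1 IN ACCESS EXCLUSIVE MODE;  -- ... or B blocks here on A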
 

Also, what do you mean by 'lock'? Heavyweight? We already have some visibility there. What I wish we had was some way to know if we're spending a lot of time waiting on a particular non-heavyweight lock. Actually measuring time probably wouldn't make sense, but we might be able to count how often we fail initial acquisition, or something.
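For reference, the existing heavyweight-lock visibility looks like this: pg_locks shows which backends are currently waiting and on what, though it is a point-in-time snapshot rather than a cumulative time counter.

    -- Heavyweight locks currently being waited on:
    SELECT locktype, relation::regclass AS relation, mode, pid
    FROM pg_locks
    WHERE NOT granted;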

Now that I think about it, lock_time is not a good name - maybe "lock waiting time" (time holding a lock is not interesting; time waiting for one is). It could be divided into a few more categories - at GoodData we use heavyweight locks, page locks, and other categories.
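A sketch of what such categorized counters could look like per database; none of these columns exist, and the names are purely illustrative:

    -- Hypothetical categorized wait counters in pg_stat_database:
    SELECT datname,
           lock_wait_time_heavyweight,
           lock_wait_time_page,
           lock_wait_time_other
    FROM pg_stat_database;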
--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com
