On Fri, Oct 3, 2014 at 5:33 PM, Bruce Momjian <bruce@momjian.us> wrote:
> As far as gathering data, I don't think we are going to do any better in
> terms of performance/simplicity/reliability than to have a single PGPROC
> entry to record when we enter/exit a lock, and having a secondary
> process scan the PGPROC array periodically.
That was the point.
>
> What that gives us is almost zero overhead on backends, high
> reliability, and the ability of the scan daemon to give higher weights
> to locks that are held longer. Basically, if you just stored the locks
> you held and released, you either have to add timing overhead to the
> backends, or you have no timing information collected. By scanning
> active locks, a short-lived lock might not be seen at all, while a
> longer-lived lock might be seen by multiple scans. What that gives us
> is a weighting of the lock time with almost zero overhead. If we want
> finer-grained lock statistics, we just increase the number of scans per
> second.
So I could add a function that accumulates the data in some
view/table (with weights etc.). How should it be called: from a
dedicated process, or from an existing maintenance process such as
autovacuum? Should I implement a GUC, for example lwlock_pull_rate:
0 for off, 1 to 10 for 1 to 10 samples per second?
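To make the weighting you describe concrete, here is a minimal, purely
illustrative simulation (Python; the function name sample_counts and the
lock names are mine, not proposed API). A scanner polling the set of
currently held locks every interval observes a lock held for time T
roughly T/interval times, so the per-lock sample count approximates
cumulative hold time with no clock calls in the backends:

```python
import random

def sample_counts(hold_us, interval_us, seed=0):
    """Simulate a daemon scanning held locks every interval_us
    microseconds.  A lock held T microseconds is observed about
    T / interval_us times, so its sample count is a cheap proxy
    for cumulative hold time: only the scanner keeps time."""
    rng = random.Random(seed)
    counts = {}
    for lock, held in hold_us.items():
        start = rng.randrange(interval_us)  # random phase vs. the scanner
        # Scan ticks occur at multiples of interval_us; count the
        # ticks that fall inside the hold window [start, start + held).
        first = (start + interval_us - 1) // interval_us
        last = (start + held + interval_us - 1) // interval_us
        counts[lock] = last - first
    return counts

# 10 scans per second = one scan every 100_000 microseconds
hold = {"short_lwlock": 5_000, "long_lwlock": 500_000}
counts = sample_counts(hold, interval_us=100_000)
# long_lwlock is seen 5 times; short_lwlock 0 or 1 times.
# Long-held locks dominate the statistics, short-lived ones may
# be missed entirely -- exactly the weighting described above.
```

Raising the scan rate (the proposed GUC) shrinks interval_us and refines
the estimate at the cost of more scans, which matches your 1-to-10
samples-per-second range.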
>
> I am assuming almost no one cares about the number of locks, but rather
> they care about cummulative lock durations.
Oracle and DB2 measure both cumulative durations and counts.
>
> I am having trouble seeing any other option that has such a good
> cost/benefit profile.
At least the cost. The Oracle documentation clearly states that this is
all about diagnostic convenience; the performance impact is significant.
>
> --
> Bruce Momjian <bruce@momjian.us> http://momjian.us
> EnterpriseDB http://enterprisedb.com
>
> + Everyone has their own god. +
--
Ilya Kosmodemiansky,
PostgreSQL-Consulting.com
tel. +14084142500
cell. +4915144336040
ik@postgresql-consulting.com