Hi,
On Mon, Aug 11, 2025 at 07:49:45PM -0400, Tom Lane wrote:
> Michael Paquier <michael@paquier.xyz> writes:
> > On Mon, Aug 11, 2025 at 02:53:58PM -0700, Jeff Davis wrote:
> >> Can you describe your use case? I'd like to understand whether this is
> >> useful for users, hackers, or both.
>
> > This is a DBA feature, so the questions I'd ask myself are basically:
> > - Is there any decision-making where these numbers would help? These
> > decisions would take the shape of tweaking the configuration of the server
> > or the application as we move from a "bad" number trend to a "good"
> > number trend.
> > - What would be good numbers? In this case, most likely a threshold
> > reached over a certain period of time.
> > - Would these new stats overlap with similar statistics gathered in
> > the system, creating duplication and bloat in the pgstats for no real
> > gain?
>
> I'm also wondering why slicing the numbers in this particular way
> (i.e., aggregating by locktype) is a helpful way to look at the data.
> Maybe it's just what you want, but that's not obvious to me.

Thanks for providing your thoughts!

I thought it was more natural to aggregate by locktype because:
- I think that matches how they are categorized in the docs (from a "wait event"
  point of view, i.e. "Wait Events of Type Lock").
- It provides a natural drill-down path: spot issues by locktype in the stats, then
  query pg_locks for the specific objects involved when needed (see the sketch
  below).
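
As a concrete illustration of that drill-down (just a sketch; "pg_stat_lock" and its
columns below are placeholder names standing in for whatever the patch ends up
exposing, not the actual view):

  -- 1) Spot a locktype whose wait numbers are trending "bad"
  --    (placeholder view/column names, for illustration only)
  SELECT locktype, wait_count
    FROM pg_stat_lock
   ORDER BY wait_count DESC;

  -- 2) Drill down to the specific objects currently involved
  SELECT locktype, mode, granted, relation::regclass, pid
    FROM pg_locks
   WHERE locktype = 'relation'
     AND NOT granted;

So the aggregated numbers tell you where to look, and pg_locks tells you what
exactly is involved at that moment.
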
Does that make sense to you?
Regards,
--
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com