Re: Vacuum statistics - Mailing list pgsql-hackers

From Alena Rybakina
Subject Re: Vacuum statistics
Date
Msg-id b089a73b-2360-4e71-a950-ab5f02197ddb@postgrespro.ru
In response to Re: Vacuum statistics  (Alena Rybakina <a.rybakina@postgrespro.ru>)
List pgsql-hackers

Sorry, I forgot to provide a link to the problem [0]; it is included below.

[0] https://www.postgresql.org/message-id/CAPpHfduoJEuoixPTTg2tjhnXqrdobuMaQGxriqxJ9TjN1uxOuA%40mail.gmail.com


On 21.03.2025 22:42, Alena Rybakina wrote:
> On 13.03.2025 09:42, Bertrand Drouvot wrote:
>> Hi,
>>
>> On Wed, Mar 12, 2025 at 05:15:53PM -0500, Jim Nasby wrote:
>>> The usecase I can see here is that we don't want autovac creating so much
>>> WAL traffic that it starts forcing other backends to have to write WAL out.
>>> But tracking how many times autovac writes WAL buffers won't help with that
>> Right, because the one that increments the wal_buffers_full metric could "just"
>> be a victim (i.e. the one that happens to trigger the WAL buffers disk flush,
>> even though other backends contributed most of the buffer usage).
>>
>>> (though we also don't want any WAL buffers written by autovac to be counted
>>> in the system-wide wal_buffers_full:
>> why? Or do you mean that it would be good to have 2 kinds of metrics: one
>> generated by "maintenance" activity and one by "regular" backends?
>>
>>> What would be helpful would be a way to determine if autovac was causing
>>> enough traffic to force other backends to write WAL. Offhand I'm not sure
>>> how practical that actually is though.
>> a051e71e28a could help to see how much WAL has been written by the autovac workers.
>>
>>> BTW, there's also an argument to be made that autovac should throttle
>>> itself if we're close to running out of available WAL buffers...
>> hmm, yeah I think that's an interesting idea. OTOH that would mean "delegating"
>> the WAL buffers flush to another backend.
>>
>> Regards,

I will add it and fix the tests, but later; I'll explain why.

I'm working on this issue [0] and have already created new statistics kinds in the statistics collector to store database-level and relation-level vacuum statistics: PGSTAT_KIND_VACUUM_DB and PGSTAT_KIND_VACUUM_RELATION.

Vacuum statistics are saved there instead of in the relation's and database's statistics structures, but for some reason the entries cannot be found in the hash table when building a snapshot, so they are not displayed.
I have not yet figured out where the error is.

Without solving this problem, committing vacuum statistics is not yet possible. An alternative would have been to drop some per-relation statistics for now,
but we could not agree on which statistics to leave out, and so far we are only adding more :).

I understand why it is important to display more vacuum statistics - it will allow us to better understand problems caused by incorrect vacuum settings or, for example, to notice a bug in its operation.

To reduce the memory consumption of storing these statistics for those who are not going to use them, I realized that we need a separate storage space for the statistics
mentioned above (PGSTAT_KIND_VACUUM_DB and PGSTAT_KIND_VACUUM_RELATION); there is no other way to do this, and I am still trying to complete this functionality.

I doubt I will have time for this by the code freeze date, and even if I do, there will hardly be time for a proper review. There is really a lot more to learn about the statistics collector, so
I'm postponing this to the next commitfest.

Sorry. I'll fix the tests as soon as I finish this part, since they'll most likely break either in the same way or in some new way.

Tomorrow or the day after, I will send a diff patch with what I have so far to demonstrate the problem, once I bring the code into a presentable form.
Maybe someone who has worked with the statistics collector can tell me where my implementation is incorrect.


-- 
Regards,
Alena Rybakina
Postgres Professional
