Re: Enhancing Memory Context Statistics Reporting - Mailing list pgsql-hackers
From: Rahila Syed
Subject: Re: Enhancing Memory Context Statistics Reporting
Msg-id: CAH2L28t1bZ6CxfHHVJfTfH62XL25abXoNhd8B4R2C78QfbQt+A@mail.gmail.com
In response to: Re: Enhancing Memory Context Statistics Reporting (Alexander Korotkov <aekorotkov@gmail.com>)
List: pgsql-hackers
Hi Alexander,
Thank you for the review.
> It looks like we're increasing *num_contexts twice per child memory
> context. First, it gets increased with a recursive
> MemoryContextStatsInternal() call, then by adding an ichild. I might
> be wrong, but I think these calculations at least deserve more
> comments.
I believe that's not the case. The recursive calls only handle children up
to max_level deep, and only the first max_children children of each context.
The remaining children are visited with MemoryContextTraverseNext, without
recursive calls, so num_contexts is incremented for those children
separately from the counting done inside the recursion. I will add more
comments around this.
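To illustrate why the two paths don't double-count, here is a small
compilable toy model. It is not the PostgreSQL source: Ctx, traverse_next(),
MAX_LEVEL, and MAX_CHILDREN are simplified stand-ins for MemoryContextData,
MemoryContextTraverseNext, and the max_level/max_children arguments.

#include <stdio.h>

#define MAX_LEVEL    2
#define MAX_CHILDREN 2

typedef struct Ctx
{
    struct Ctx *parent;
    struct Ctx *firstchild;
    struct Ctx *nextchild;
} Ctx;

/* Depth-first successor, confined to the subtree rooted at 'top'. */
static Ctx *
traverse_next(Ctx *cur, Ctx *top)
{
    if (cur->firstchild)
        return cur->firstchild;
    while (cur != top)
    {
        if (cur->nextchild)
            return cur->nextchild;
        cur = cur->parent;
    }
    return NULL;
}

static void
count_stats(Ctx *ctx, int level, int *num_contexts)
{
    Ctx *child = ctx->firstchild;
    int  ichild = 0;

    (*num_contexts)++;          /* this context counts itself */

    /* Recursive path: each callee increments num_contexts for itself. */
    while (child != NULL && ichild < MAX_CHILDREN && level < MAX_LEVEL)
    {
        count_stats(child, level + 1, num_contexts);
        child = child->nextchild;
        ichild++;
    }

    /*
     * Iterative path: children beyond the limits (and their subtrees) are
     * visited without recursion, so they are counted here instead, once
     * per context, never by both paths.
     */
    for (; child != NULL; child = traverse_next(child, ctx))
        (*num_contexts)++;
}

int
main(void)
{
    /* A root with three children: the third exceeds MAX_CHILDREN and is
     * counted by the iterative path. */
    Ctx root = {0}, a = {0}, b = {0}, c = {0};
    int n = 0;

    root.firstchild = &a;
    a.parent = b.parent = c.parent = &root;
    a.nextchild = &b;
    b.nextchild = &c;

    count_stats(&root, 0, &n);
    printf("num_contexts = %d\n", n);   /* prints 4: each context once */
    return 0;
}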
> v17-0002-Function-to-report-memory-context-statistics.patch
>
> + if (procNumber == MyProcNumber)
> + {
> +     ereport(WARNING,
> +             errmsg("cannot return statistics for local backend"),
> +             errhint("Use pg_backend_memory_contexts view instead."));
> +     PG_RETURN_NULL();
> + }
>
> Is it worth it to keep this restriction? Can we fetch info about
> local memory context for the sake of generality?
I think that could be done, but querying the pg_backend_memory_contexts
view would be more efficient in this case.
> I know there have been discussions in the thread before, but the
> mechanism of publishing memory context stats via DSA looks quite
> complicated. Also, the user probably intends to inspect memory
> contexts when there is not a huge amount of free memory. So, failure
> is probable on DSA allocation. Could we do simpler? For instance,
> allocate some amount of static shared memory and use it as a message
> queue between processes. As a heavy load is not supposed to be here,
> I think one queue would be enough.
There could be other uses for such a function, such as a monitoring dashboard
that periodically queries all running backends for memory statistics. With a
single queue shared among all backends, each backend would have to wait for
the queue to become available before publishing its statistics, causing
delays at the publishing backend.

Even with a separate queue per backend, or without any concurrent use,
publishing could still be delayed whenever a queue fills up, because the
backend must wait for a client process to consume the existing messages
before it can publish more.

Worse, if a client process exits without consuming its messages, the
publishing backend will hit timeouts when trying to publish stats. That
would hurt backend performance, since statistics are published during
CHECK_FOR_INTERRUPTS processing.

In the current implementation, the backend publishes all the statistics in
one go, without waiting for clients to read any of them.
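To make the contrast concrete, here is a rough sketch of that one-shot
publishing pattern. To be clear, MemCtxStats, MemCtxStatsEntry,
count_contexts(), fill_stats(), and the result slot are hypothetical names
invented for illustration, not the patch's actual structures; only
dsa_allocate() and dsa_get_address() are the real DSA API.

#include "postgres.h"           /* sketch assumes the server source tree */
#include "utils/dsa.h"
#include "utils/memutils.h"

/* Hypothetical result layout, invented for this sketch. */
typedef struct MemCtxStatsEntry
{
    int64       totalspace;
    int64       freespace;
} MemCtxStatsEntry;

typedef struct MemCtxStats
{
    int         nentries;
    MemCtxStatsEntry entries[FLEXIBLE_ARRAY_MEMBER];
} MemCtxStats;

/* Hypothetical helpers, assumed to walk the context tree. */
extern int  count_contexts(MemoryContext top);
extern void fill_stats(MemCtxStats *stats, MemoryContext top);

/*
 * Copy all statistics into DSA in a single pass and hand off one
 * dsa_pointer.  Nothing here blocks on a consumer, unlike shm_mq_send()
 * on a full message queue.
 */
static void
publish_memory_context_stats(dsa_area *area, dsa_pointer *result_slot)
{
    int         n = count_contexts(TopMemoryContext);
    Size        sz = offsetof(MemCtxStats, entries) +
                     n * sizeof(MemCtxStatsEntry);
    dsa_pointer dp = dsa_allocate(area, sz);
    MemCtxStats *stats = (MemCtxStats *) dsa_get_address(area, dp);

    stats->nentries = n;
    fill_stats(stats, TopMemoryContext);

    /* One-shot hand-off: the client reads *result_slot at its leisure. */
    *result_slot = dp;
}

The key property is that the writer's cost is one allocation plus one copy,
independent of when, or whether, the client ever reads the result.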
In addition, allocating complete message queues in static shared memory
could be expensive, especially since those static structures would have to
be created even if memory context statistics are never queried. In
contrast, the DSA for this feature is created only when statistics are
first queried; we are not preallocating any shared memory for it beyond
small structures to store the dsa_handle and the per-backend dsa_pointers.
A minimal sketch of that lazy-creation pattern follows.
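In the sketch below, MemCtxShmem and attach_or_create_stats_dsa() are
placeholder names invented here, and the patch's actual structures and
locking may well differ; dsa_create(), dsa_pin(), dsa_get_handle(),
dsa_attach(), and DSA_HANDLE_INVALID are the real DSA API.

#include "postgres.h"           /* sketch assumes the server source tree */
#include "storage/lwlock.h"
#include "utils/dsa.h"

/*
 * The only preallocated shared memory is this small struct; the DSA area
 * itself is created lazily on first use.
 */
typedef struct MemCtxShmem
{
    LWLock      lock;           /* protects handle initialization */
    dsa_handle  handle;         /* DSA_HANDLE_INVALID until first query */
} MemCtxShmem;

static dsa_area *
attach_or_create_stats_dsa(MemCtxShmem *shmem, int tranche_id)
{
    dsa_area   *area;

    LWLockAcquire(&shmem->lock, LW_EXCLUSIVE);
    if (shmem->handle == DSA_HANDLE_INVALID)
    {
        /* First query ever: create the area and remember its handle. */
        area = dsa_create(tranche_id);
        dsa_pin(area);          /* keep it alive when no backend is attached */
        shmem->handle = dsa_get_handle(area);
    }
    else
        area = dsa_attach(shmem->handle);
    LWLockRelease(&shmem->lock);

    return area;
}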
Thank you,
Rahila Syed