From: Dmitry Dolgov
Subject: Re: Parallel Bitmap Heap Scan reports per-worker stats in EXPLAIN ANALYZE
Msg-id: 20231006152159.pyt7uvrvpoznx7vi@erthalion.local
In response to: Re: Parallel Bitmap Heap Scan reports per-worker stats in EXPLAIN ANALYZE (David Geier <geidav.pg@gmail.com>)
List: pgsql-hackers
> On Wed, Sep 20, 2023 at 03:42:43PM +0200, David Geier wrote:
> Another, not yet discussed, option I can see work is:
>
> 4. Allocate a fixed amount of memory for the instrumentation stats based on
> MAX_PARALLEL_WORKER_LIMIT: MAX_PARALLEL_WORKER_LIMIT is 1024 and used as the
> limit of the max_parallel_workers GUC. This way MAX_PARALLEL_WORKER_LIMIT *
> sizeof(BitmapHeapScanInstrumentation) = 1024 * 8 = 8192 bytes would be
> allocated. To cut this down in half we could additionally change the type of
> lossy_pages and exact_pages from long to uint32. Only possibly needed memory
> would have to get initialized, the remaining unused memory would remain
> untouched to not waste cycles.

I'm not sure that it would be acceptable -- if I understand correctly it
would be 8192 bytes per parallel bitmap heap scan node, and could be
noticeable in the worst case scenario with too many connections and too
many such nodes in every query.

I find the original approach with an offset not that bad; after all,
something similar is done elsewhere, e.g. a parallel heap scan also has
phs_snapshot_off (although the rest of that structure is fixed-size). My
commentary above in the thread was mostly about the cosmetic side: give
snapshot_and_stats a decent name, and maybe even ditch the access
functions and use only the offset field in the couple of places that
need it, and it would look better to me.
