Re: Adding skip scan (including MDAM style range skip scan) to nbtree - Mailing list pgsql-hackers

From Matthias van de Meent
Subject Re: Adding skip scan (including MDAM style range skip scan) to nbtree
Msg-id CAEze2Wi7tDidbDVJhu=Pstb2hbUXDCxx_VAZnKSqbTMf7k8+uQ@mail.gmail.com
In response to Adding skip scan (including MDAM style range skip scan) to nbtree  (Peter Geoghegan <pg@bowt.ie>)
Responses Re: Adding skip scan (including MDAM style range skip scan) to nbtree
List pgsql-hackers
On Sat, 10 May 2025 at 00:54, Tomas Vondra <tomas@vondra.me> wrote:
>
> On 5/9/25 23:30, Matthias van de Meent wrote:
> > ...
> >> The difference shown by your flame graph is absolutely enormous --
> >> that's *very* surprising to me. btbeginscan and btrescan go from being
> >> microscopic to being very prominent. But skip scan simply didn't touch
> >> either function, at all, directly or indirectly. And neither function
> >> has really changed in any significant way in recent years. So right
> >> now I'm completely stumped.
> >
> > I see some 60.5% of the samples under PostgresMain (35% overall) in
> > the "bad" flamegraph have asm_exc_page_fault on the stack, indicating
> > the backend(s) are hit with a torrent of continued page faults.
> > Notably, this is not just in btree code: ExecInitIndexOnlyScan's
> > components (ExecAssignExprContext,
> > ExecConditionalAssignProjectionInfo, ExecIndexBuildScanKeys,
> > ExecInitQual, etc.) are also very much affected, and none of those
> > call into index code. Notably, this is before any btree code is
> > executed in the query.
> >
> > In the "good" version, asm_exc_page_fault does not show up, at all;
> > nor does sysmalloc.
> >
>
> Yes. Have you tried reproducing the issue? It'd be good if someone else
> reproduced this independently, to confirm I'm not hallucinating.
>
> > @Tomas
> > Given the impact of MALLOC_TOP_PAD_, have you tested with other values
> > of MALLOC_TOP_PAD_?
> >
>
> I tried, and it seems 4MB is sufficient for the overhead to disappear.
> Perhaps some other mallopt parameters would help too, but my point was
> merely to demonstrate this is malloc-related.
>
> > Also, have you checked the memory usage of the benchmarked backends
> > before and after 92fe23d93aa, e.g. by dumping
> > pg_backend_memory_contexts after preparing and executing the sample
> > query, or through  pg_get_process_memory_contexts() from another
> > backend?
> >
>
> I haven't noticed any elevated memory usage in top, but the queries are
> very short, so I'm not sure how reliable that is. But if adding 4MB is
> enough to make this go away, I doubt I'd notice a difference.

I think I may have tracked it down, based on memory context checks
and some introspection. It's a bit of a ramble, with some data tables
to back it up:

Up to PG17, and up to 3ba2cdaa454, the amount of data allocated in
"index info" was just small enough that a good portion of our indexes
needed only one memory context block.
With the increased size of btree's per-attribute amsupportinfo, the
allocations for even a single index attribute no longer fit in that
first block, requiring at least a second mctx block. As each mctx
block for "index info" is at least 1KiB in size, this adds at least
30KiB of additional memory.
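
As a quick cross-check (not part of any patch, just a query against
pg_backend_memory_contexts, whose total_nblocks column has been
available since PG14), something like this shows how many "index
info" contexts need a second or third block on a given build:

select total_nblocks, count(*), sum(total_bytes)
from pg_backend_memory_contexts
where name = 'index info'
group by total_nblocks
order by total_nblocks;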

See the tables below for an example btree index with one column,
before (PG17) and after (skip scan) the change:

| type (PG17)     | size  | alignment | size bucket | total + chunkhdr | remaining | mctx blocks |
|-----------------|-------|-----------|-------------|------------------|-----------|-------------|
| AllocSetContext | 200 B | 0 B       | n/a         | 200 B            | 824 B     | 1           |
| Chunk hdr       | 8 B   | 0 B       | n/a         | 8 B              | 816 B     | 1           |
| IndexAmRoutine  | 248 B | 0 B       | 256 B       | 264 B            | 552 B     | 1           |
| rd_opfamily     | 4 B   | 4 B       | 8 B         | 16 B             | 536 B     | 1           |
| rd_opcintype    | 4 B   | 4 B       | 8 B         | 16 B             | 520 B     | 1           |
| rd_support      | 4 B   | 4 B       | 8 B         | 16 B             | 504 B     | 1           |
| rd_supportinfo  | 240 B | 0 B       | 256 B       | 264 B            | 240 B     | 1           |
| rd_indcollation | 4 B   | 4 B       | 8 B         | 16 B             | 224 B     | 1           |
| rd_indoption    | 2 B   | 6 B       | 8 B         | 16 B             | 206 B     | 1           |

| type (skips)    | size  | alignment | size bucket | total + chunkhdr | remaining | mctx blocks |
|-----------------|-------|-----------|-------------|------------------|-----------|-------------|
| AllocSetContext | 200 B | 0 B       | n/a         | 200 B            | 824 B     | 1           |
| Block hdr       | 8 B   | 0 B       | n/a         | 8 B              | 816 B     | 1           |
| IndexAmRoutine  | 248 B | 0 B       | 256 B       | 264 B            | 552 B     | 1           |
| rd_opfamily     | 4 B   | 4 B       | 8 B         | 16 B             | 536 B     | 1           |
| rd_opcintype    | 4 B   | 4 B       | 8 B         | 16 B             | 520 B     | 1           |
| rd_support      | 4 B   | 4 B       | 8 B         | 16 B             | 504 B     | 1           |
| Block hdr       | 8 B   | 0 B       | n/a         | 8 B              | 1016 B    | 2           |
| rd_supportinfo  | 288 B | 0 B       | 512 B       | 520 B            | 496 B     | 2           |
| rd_indcollation | 4 B   | 4 B       | 8 B         | 16 B             | 224 B     | 1           |
| rd_indoption    | 2 B   | 6 B       | 8 B         | 16 B             | 206 B     | 1           |

Note that a new block is required to fit rd_supportinfo: it no longer
fits in the first block, because AllocSet buckets the 288 B allocation
into a larger (512 B) chunk.
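
If you want to see which individual indexes picked up that extra
block, a query along these lines should work; for "index info"
contexts the ident column holds the index name:

select ident, total_bytes, total_nblocks
from pg_backend_memory_contexts
where name = 'index info'
order by total_nblocks desc, total_bytes desc
limit 10;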

If you check each backend's memory statistics for index info memory
contexts [0], you'll notice this too:

Master (with skip)
 count (d73d4cfd) | total_bytes | combined_size
------------------+-------------+---------------
               87 |             |        215808
               50 |        2048 |        102400
                1 |        2240 |          2240
               33 |        3072 |        101376
                3 |        3264 |          9792

(commit before skip)
 count (3ba2cdaa) | total_bytes | combined_size
------------------+-------------+---------------
               87 |             |        157696
               35 |        1024 |         35840
               37 |        2048 |         75776
               15 |        3072 |         46080

This shows we're using about 56KiB more than before (215808 - 157696
= 58112 bytes).
I'm not quite sure yet where the page fault overhead is introduced,
but I do think this is heavy smoke, and that we're getting closer to
the fire.


I've attached a patch that makes the IndexAmRoutine a pointer to
static const data, removing it from rd_indexcxt and returning some of
the index context memory usage to normal:

 count (patch 1) | total_bytes | combined_size
-----------------+-------------+---------------
              87 |             |        171776
              10 |        2048 |         20480
              40 |        1024 |         40960
               4 |        2240 |          8960
              33 |        3072 |        101376

Another patch on top of that, which switches rd_indexcxt from
AllocSet to a GenerationContext, shows a further improvement:

 count (patch 2) | total_bytes | combined_size
-----------------+-------------+---------------
              87 |             |        118832
              22 |        1680 |         36960
              11 |        1968 |         21648
              50 |        1024 |         51200
               4 |        2256 |          9024

Also tracked: total memctx-tracked memory usage on a fresh connection [1]:

3ba2cdaa: 2006024 / 1959 kB
Master: 2063112 / 2015 kB
Patch 1: 2040648 / 1993 kB
Patch 2: 1976440 / 1930 kB

There isn't a lot of headroom on master before the backend's
allocations reach a (standard Linux configuration) 128kB trim
boundary - only about 33kB (assuming no other memory tracking
overhead). It's easy to allocate that much and go over, causing
malloc to extend the heap with sbrk by another 128kB. If we then drop
back under the boundary because all per-query memory was released,
that newly acquired memory no longer holds any data and is released
again immediately (by default glibc trims with sbrk once the top of
the heap has >= 128kB free), thus churning that memory area.
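
To put a rough number on that headroom, the totals from [1] can be
used as a proxy for what the backend has requested from malloc
(ignoring malloc's own bookkeeping and anything not tracked in a
memory context), computing the distance to the next 128kB multiple:

select sum(total_bytes) as tracked_bytes,
       131072 - (sum(total_bytes) % 131072) as headroom_to_next_128kb
from pg_backend_memory_contexts;

With the master numbers above (2063112 bytes tracked) that works out
to about 33kB.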

We may just have been lucky before, and your observation that
MALLOC_TOP_PAD_ >= 4MB fixes the issue reinforces that idea.

If patch 1 or patch 1+2 fixes this regression for you, then that's
another indication that we exceeded this threshold in a bad way.


Kind regards,

Matthias van de Meent
Neon (https://neon.tech)

PS. In about an hour I'm leaving for pgconf.dev, so this will be my
last investigation update on this issue today (CEST).

[0] select count(*), total_bytes, sum(total_bytes) as combined_size
    from pg_backend_memory_contexts
    where name = 'index info'
    group by rollup (2);
[1] select sum(total_bytes), pg_size_pretty(sum(total_bytes))
    from pg_backend_memory_contexts;

