Thread: regression coverage gaps for gist and hash indexes

regression coverage gaps for gist and hash indexes

From
Andres Freund
Date:
Hi,

I was working on committing patch 0001 from [1] and was a bit confused about
some of the changes to the WAL format for gist and hash index vacuum. It
looked to me that the changed code just flat out would not work.

Turns out the problem is that we don't reach deletion for hash and gist
vacuum:

gist:

> Oh, I see. We apparently don't reach the gist deletion code in the tests:
> https://coverage.postgresql.org/src/backend/access/gist/gistxlog.c.gcov.html#674
> https://coverage.postgresql.org/src/backend/access/gist/gistxlog.c.gcov.html#174
> 
> And indeed, if I add an abort() into , it's not reached.
> 
> And it's not because tests use a temp table, the caller is also unreachable:
> https://coverage.postgresql.org/src/backend/access/gist/gist.c.gcov.html#1643


hash:
> And there also are no tests:
> https://coverage.postgresql.org/src/backend/access/hash/hashinsert.c.gcov.html#372


I've since looked at other index AMs.

spgist's XLOG_SPGIST_VACUUM_ROOT is emitted, but not replayed:
https://coverage.postgresql.org/src/backend/access/spgist/spgvacuum.c.gcov.html#474
https://coverage.postgresql.org/src/backend/access/spgist/spgxlog.c.gcov.html#962

gin's XLOG_GIN_VACUUM_DATA_LEAF_PAGE is not emitted, but only because of a
RelationNeedsWAL() check:
https://coverage.postgresql.org/src/backend/access/gin/gindatapage.c.gcov.html#852


I also looked at heapam:
XLOG_HEAP2_LOCK_UPDATED is emitted, but not replayed:
https://coverage.postgresql.org/src/backend/access/heap/heapam.c.gcov.html#5487
https://coverage.postgresql.org/src/backend/access/heap/heapam.c.gcov.html#9965

same for XLOG_HEAP2_REWRITE:
https://coverage.postgresql.org/src/backend/access/heap/rewriteheap.c.gcov.html#928
https://coverage.postgresql.org/src/backend/access/heap/heapam.c.gcov.html#9975

and XLOG_HEAP_TRUNCATE (ugh, that record is quite the layering violation):
https://coverage.postgresql.org/src/backend/commands/tablecmds.c.gcov.html#2128
https://coverage.postgresql.org/src/backend/access/heap/heapam.c.gcov.html#9918


The fact that those cases aren't replayed isn't too surprising -
XLOG_HEAP2_LOCK_UPDATED is exercised by isolationtester, XLOG_HEAP2_REWRITE and
XLOG_HEAP_TRUNCATE by contrib/test_decoding. Neither is part of
027_stream_regress.pl.


The lack of any coverage of hash and gist deletion/vacuum seems quite
concerning to me.


Greetings,

Andres Freund

[1] https://postgr.es/m/da7184cf-c7b9-c333-801e-0e7507a23ddf@gmail.com



Re: regression coverage gaps for gist and hash indexes

From
Andrey Borodin
Date:
Hi!

On Fri, Mar 31, 2023 at 8:07 AM Andres Freund <andres@anarazel.de> wrote:
>
> I was working on committing patch 0001 from [1] and was a bit confused about
> some of the changes to the WAL format for gist and hash index vacuum. It
> looked to me that the changed code just flat out would not work.
>
> Turns out the problem is that we don't reach deletion for hash and gist
> vacuum:
>
> gist:
>
> > Oh, I see. We apparently don't reach the gist deletion code in the tests:
> > https://coverage.postgresql.org/src/backend/access/gist/gistxlog.c.gcov.html#674
> > https://coverage.postgresql.org/src/backend/access/gist/gistxlog.c.gcov.html#174
> >
> > And indeed, if I add an abort() into , it's not reached.
> >
> > And it's not because tests use a temp table, the caller is also unreachable:
> > https://coverage.postgresql.org/src/backend/access/gist/gist.c.gcov.html#1643
>

GiST logs deletions in gistXLogUpdate(), which is covered.
gistXLogDelete() is only used for cleaning during page splits. I'd
propose refactoring GiST WAL to remove gistXLogDelete() and using
gistXLogUpdate() instead.
However, I see that gistXLogPageDelete() is not exercised, which is worth
fixing IMO. Simply adding 10x more data in gist.sql helps, but I think
we can do something more clever...


Best regards, Andrey Borodin.



Re: regression coverage gaps for gist and hash indexes

From
Tom Lane
Date:
Andrey Borodin <amborodin86@gmail.com> writes:
> On Fri, Mar 31, 2023 at 8:07 AM Andres Freund <andres@anarazel.de> wrote:
>> Turns out the problem is that we don't reach deletion for hash and gist
>> vacuum:

> GiST logs deletions in gistXLogUpdate(), which is covered.
> gistXLogDelete() is only used for cleaning during page splits. I'd
> propose refactoring GiST WAL to remove gistXLogDelete() and using
> gistXLogUpdate() instead.
> However, I see that gistXLogPageDelete() is not exercised, which is worth
> fixing IMO. Simply adding 10x more data in gist.sql helps, but I think
> we can do something more clever...

See also the thread about bug #16329 [1].  Alexander promised to look
into improving the test coverage in this area, maybe he can keep an
eye on the WAL logic coverage too.

            regards, tom lane

[1] https://www.postgresql.org/message-id/flat/16329-7a6aa9b6fa1118a1%40postgresql.org



Re: regression coverage gaps for gist and hash indexes

From
Alexander Lakhin
Date:
31.03.2023 15:55, Tom Lane wrote:
> See also the thread about bug #16329 [1].  Alexander promised to look
> into improving the test coverage in this area, maybe he can keep an
> eye on the WAL logic coverage too.

Yes, I'm going to analyze that area too. Maybe it'll take more time
(a week or two) if I encounter some bugs there (for now I observe anomalies
with gist__int_ops), but I will definitely try to improve the gist testing.

Best regards,
Alexander



Re: regression coverage gaps for gist and hash indexes

From
Andres Freund
Date:
Hi,

On 2023-03-31 10:45:51 +0300, Andrey Borodin wrote:
> On Fri, Mar 31, 2023 at 8:07 AM Andres Freund <andres@anarazel.de> wrote:
> >
> > I was working on committing patch 0001 from [1] and was a bit confused about
> > some of the changes to the WAL format for gist and hash index vacuum. It
> > looked to me that the changed code just flat out would not work.
> >
> > Turns out the problem is that we don't reach deletion for hash and gist
> > vacuum:
> >
> > gist:
> >
> > > Oh, I see. We apparently don't reach the gist deletion code in the tests:
> > > https://coverage.postgresql.org/src/backend/access/gist/gistxlog.c.gcov.html#674
> > > https://coverage.postgresql.org/src/backend/access/gist/gistxlog.c.gcov.html#174
> > >
> > > And indeed, if I add an abort() into , it's not reached.
> > >
> > > And it's not because tests use a temp table, the caller is also unreachable:
> > > https://coverage.postgresql.org/src/backend/access/gist/gist.c.gcov.html#1643
> >
> 
> GiST logs deletions in gistXLogUpdate(), which is covered.
> gistXLogDelete() is only used for cleaning during page splits.

I am not sure what your point here is - deleting entries to prevent a page
split is deleting entries. What am I missing?


> propose refactoring GiST WAL to remove gistXLogDelete() and using
> gistXLogUpdate() instead.

I think we still would need to have coverage for gistprunepage(), even if
so. So that seems a separate issue.

I think what gist.sql is missing is a combination of deletes, an index scan
(to mark the entries dead), and new insertions (to trigger pruning to prevent
page splits).
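
A rough sketch of what that sequence could look like - the table and index
names here are made up for illustration, and whether pruning actually happens
depends on no concurrent snapshot (e.g. autovacuum's) holding the horizon
back:

CREATE TABLE gist_pruning_test (p point);
CREATE INDEX gist_pruning_test_idx ON gist_pruning_test USING gist (p);
INSERT INTO gist_pruning_test
  SELECT point(i, i) FROM generate_series(1, 10000) i;

-- delete, creating dead heap tuples
DELETE FROM gist_pruning_test WHERE p << point(1000, 1000);

-- plain index scan over the deleted range, so kill_prior_tuple can mark
-- the matching index entries dead (bitmap scans don't do that)
SET enable_seqscan = off;
SET enable_bitmapscan = off;
SELECT count(*) FROM gist_pruning_test WHERE p << point(1000, 1000);
RESET enable_seqscan;
RESET enable_bitmapscan;

-- new insertions into the same range; once a page would have to split,
-- gistprunepage() should remove the entries marked dead instead
INSERT INTO gist_pruning_test
  SELECT point(i, i) FROM generate_series(1, 1000) i;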

Greetings,

Andres Freund



Re: regression coverage gaps for gist and hash indexes

From
Andres Freund
Date:
Hi,

On 2023-03-31 17:00:00 +0300, Alexander Lakhin wrote:
> 31.03.2023 15:55, Tom Lane wrote:
> > See also the thread about bug #16329 [1].  Alexander promised to look
> > into improving the test coverage in this area, maybe he can keep an
> > eye on the WAL logic coverage too.
> 
> Yes, I'm going to analyze that area too. Maybe it'll take more time
> (a week or two) if I encounter some bugs there (for now I observe anomalies
> with gist__int_ops), but I will definitely try to improve the gist testing.

Because I needed it to verify the changes in the referenced patch, I wrote
tests exercising killtuples based pruning for gist and hash.

For the former I first wrote it in contrib/btree_gist. But that would mean the
recovery side isn't exercised via 027_stream_regress.pl. So I rewrote it to
use point instead, which is a tad more awkward, but...

For now I left the new tests in their own files. But possibly they should be
in gist.sql and hash_index.sql respectively?


As I also wrote at the top of the tests, we can't easily verify that
killtuples pruning has actually happened, nor guarantee that concurrent
activity doesn't prevent it (e.g. autovacuum having a snapshot open or
such). At least not without losing coverage of WAL logging / replay. To make
it more likely I added them as their own test group.
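
In parallel_schedule terms that would be something like the snippet below;
the test names are a guess at what the new files might be called, not
necessarily what the attached patch uses:

# killtuples pruning needs the dead rows to be dead to everyone; run these
# tests in their own group to keep concurrent snapshots away
test: gist_killtuples hash_index_killtuples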


I don't know if we want the tests in this form, but I do find them useful for
now.


Greetings,

Andres Freund

Attachment

Re: regression coverage gaps for gist and hash indexes

From
"Drouvot, Bertrand"
Date:
Hi,

On 4/1/23 1:13 AM, Andres Freund wrote:
> Hi,
> 
> On 2023-03-31 17:00:00 +0300, Alexander Lakhin wrote:
>> 31.03.2023 15:55, Tom Lane wrote:
>>> See also the thread about bug #16329 [1].  Alexander promised to look
>>> into improving the test coverage in this area, maybe he can keep an
>>> eye on the WAL logic coverage too.
>>
>> Yes, I'm going to analyze that area too. Maybe it'll take more time
>> (a week or two) if I encounter some bugs there (for now I observe anomalies
>> with gist__int_ops), but I will definitely try to improve the gist testing.
> 
> Because I needed it to verify the changes in the referenced patch, I wrote
> tests exercising killtuples based pruning for gist and hash.
> 

Thanks for the patch!

I did not look at the details but "just" checked that the coverage is now there.

And indeed, when running "make check" + "027_stream_regress.pl":

I can see it moving from (without the patch):

function gistXLogDelete called 0 returned 0% blocks executed 0%
function gistRedoDeleteRecord called 0 returned 0% blocks executed 0%
function gistprunepage called 0 returned 0% blocks executed 0%
function _hash_vacuum_one_page called 0 returned 0% blocks executed 0%

to (with the patch):

function gistXLogDelete called 9 returned 100% blocks executed 100%
function gistRedoDeleteRecord called 5 returned 100% blocks executed 100% (thanks to 027_stream_regress.pl)
function gistprunepage called 9 returned 100% blocks executed 79%
function _hash_vacuum_one_page called 12 returned 100% blocks executed 94%

> For now I left the new tests in their own files. But possibly they should be
> in gist.sql and hash_index.sql respectively?

+1 to put them in gist.sql and hash_index.sql.

Regards,

-- 
Bertrand Drouvot
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com



Re: regression coverage gaps for gist and hash indexes

From
Andres Freund
Date:
Hi,

On 2023-04-01 06:02:47 +0200, Drouvot, Bertrand wrote:
> On 4/1/23 1:13 AM, Andres Freund wrote:
> > On 2023-03-31 17:00:00 +0300, Alexander Lakhin wrote:
> > > 31.03.2023 15:55, Tom Lane wrote:
> > > > See also the thread about bug #16329 [1].  Alexander promised to look
> > > > into improving the test coverage in this area, maybe he can keep an
> > > > eye on the WAL logic coverage too.
> > > 
> > > Yes, I'm going to analyze that area too. Maybe it'll take more time
> > > (a week or two) if I encounter some bugs there (for now I observe anomalies
> > > with gist__int_ops), but I will definitely try to improve the gist testing.
> > 
> > Because I needed it to verify the changes in the referenced patch, I wrote
> > tests exercising killtuples based pruning for gist and hash.
> > 
> 
> Thanks for the patch!
> 
> I did not look at the details but "just" checked that the coverage is now there.
> 
> And indeed, when running "make check" + "027_stream_regress.pl":
> 
> I can see it moving from (without the patch):
> 
> function gistXLogDelete called 0 returned 0% blocks executed 0%
> function gistRedoDeleteRecord called 0 returned 0% blocks executed 0%
> function gistprunepage called 0 returned 0% blocks executed 0%
> function _hash_vacuum_one_page called 0 returned 0% blocks executed 0%
> 
> to (with the patch):
> 
> function gistXLogDelete called 9 returned 100% blocks executed 100%
> function gistRedoDeleteRecord called 5 returned 100% blocks executed 100% (thanks to 027_stream_regress.pl)
> function gistprunepage called 9 returned 100% blocks executed 79%
> function _hash_vacuum_one_page called 12 returned 100% blocks executed 94%
> 
> > For now I left the new tests in their own files. But possibly they should be
> > in gist.sql and hash_index.sql respectively?
> 
> +1 to put them in gist.sql and hash_index.sql.

Unfortunately it turns out that running them in a parallel group reliably
prevents cleanup of the dead rows, at least on my machine, thereby preventing
any increase in coverage. As they need to run serially, I think it makes more
sense to keep the tests focussed and leave gist.sql and hash_index.sql to run
in parallel.

Greetings,

Andres Freund



Re: regression coverage gaps for gist and hash indexes

From
Tom Lane
Date:
Andres Freund <andres@anarazel.de> writes:
> On 2023-04-01 06:02:47 +0200, Drouvot, Bertrand wrote:
>> +1 to put them in gist.sql and hash_index.sql.

> Unfortunately it turns out that running them in a parallel group reliably
> prevents cleanup of the dead rows, at least on my machine, thereby preventing
> any increase in coverage. As they need to run serially, I think it makes more
> sense to keep the tests focussed and leave gist.sql and hash_index.sql to run
> in parallel.

If they have to run serially then that means that their runtime
contributes 1-for-1 to the total runtime of the core regression tests,
which is not nice at all.  Can we move them to some other portion
of our test suite, preferably one that's not repeated four or more
times in each buildfarm run?

            regards, tom lane



Re: regression coverage gaps for gist and hash indexes

From
Andres Freund
Date:
Hi,

On 2023-04-02 12:38:32 -0400, Tom Lane wrote:
> Andres Freund <andres@anarazel.de> writes:
> > On 2023-04-01 06:02:47 +0200, Drouvot, Bertrand wrote:
> >> +1 to put them in gist.sql and hash_index.sql.
> 
> > Unfortunately it turns out that running them in a parallel group reliably
> > prevents cleanup of the dead rows, at least on my machine, thereby preventing
> > any increase in coverage. As they need to run serially, I think it makes more
> > sense to keep the tests focussed and leave gist.sql and hash_index.sql to run
> > in parallel.
> 
> If they have to run serially then that means that their runtime
> contributes 1-for-1 to the total runtime of the core regression tests,
> which is not nice at all.

Agreed, it's not nice. At least they're reasonably quick (74ms and 54ms on
one run here)...


> Can we move them to some other portion of our test suite, preferably one
> that's not repeated four or more times in each buildfarm run?

Not trivially, at least. Right now 027_stream_regress.pl doesn't run other
tests, so we'd not cover the replay portion if we moved the tests to
contrib/btree_gist or such.

I wonder if we should have a test infrastructure function for waiting for the
visibility horizon to have passed a certain point. I think that might be
useful for making a few other tests robust...
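
No such function exists today; at the SQL level it could presumably look like
the sketch below, which just polls until no other backend advertises a
snapshot (a real helper would also need to account for prepared transactions,
replication slots and the like):

DO $$
BEGIN
    WHILE EXISTS (SELECT 1 FROM pg_stat_activity
                  WHERE backend_xmin IS NOT NULL
                    AND pid <> pg_backend_pid())
    LOOP
        PERFORM pg_sleep(0.01);  -- wait for the other snapshots to go away
    END LOOP;
END
$$;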

Greetings,

Andres Freund



Re: regression coverage gaps for gist and hash indexes

From
Tom Lane
Date:
Andres Freund <andres@anarazel.de> writes:
> On 2023-04-02 12:38:32 -0400, Tom Lane wrote:
>> If they have to run serially then that means that their runtime
>> contributes 1-for-1 to the total runtime of the core regression tests,
>> which is not nice at all.

> Agreed, it's not nice. At least they're reasonably quick (74ms and 54ms on
> one run here)...

Oh, that's less bad than I expected.  The discussion in the other thread
was pointing in the direction of needing hundreds of ms to make indexes
that are big enough to hit all the code paths.

>> Can we move them to some other portion of our test suite, preferably one
>> that's not repeated four or more times in each buildfarm run?

> Not trivially, at least. Right now 027_stream_regress.pl doesn't run other
> tests, so we'd not cover the replay portion if we moved the tests to
> contrib/btree_gist or such.

Yeah, I was imagining teaching 027_stream_regress.pl to run additional
scripts that aren't in the core tests.  (I'm still quite unhappy that
027_stream_regress.pl uses the core tests at all, really, as they were
never particularly designed to cover what it cares about.  The whole
thing is extremely inefficient and it's no surprise that its coverage
is scattershot.)

            regards, tom lane



Re: regression coverage gaps for gist and hash indexes

From
Andres Freund
Date:
Hi,

On 2023-04-02 13:03:51 -0400, Tom Lane wrote:
> Andres Freund <andres@anarazel.de> writes:
> > On 2023-04-02 12:38:32 -0400, Tom Lane wrote:
> >> If they have to run serially then that means that their runtime
> >> contributes 1-for-1 to the total runtime of the core regression tests,
> >> which is not nice at all.
>
> > Agreed, it's not nice. At least they're reasonably quick (74ms and 54ms on
> > one run here)...
>
> Oh, that's less bad than I expected.  The discussion in the other thread
> was pointing in the direction of needing hundreds of ms to make indexes
> that are big enough to hit all the code paths.

Well, the tests here really just try to hit the killtuples path, not some of
the paths discussed in [1]. They need just enough index entries to encounter a
page split (which is then avoided by pruning tuples).

Looks like the test in [1] could be made a lot cheaper by changing effective_cache_size
for just that test:

    /*
     * In 'auto' mode, check if the index has grown too large to fit in cache,
     * and switch to buffering mode if it has.
     *
     * To avoid excessive calls to smgrnblocks(), only check this every
     * BUFFERING_MODE_SWITCH_CHECK_STEP index tuples.
     *
     * In 'stats' state, switch as soon as we have seen enough tuples to have
     * some idea of the average tuple size.
     */
    if ((buildstate->buildMode == GIST_BUFFERING_AUTO &&
         buildstate->indtuples % BUFFERING_MODE_SWITCH_CHECK_STEP == 0 &&
         effective_cache_size < smgrnblocks(RelationGetSmgr(index),
                                            MAIN_FORKNUM)) ||
        (buildstate->buildMode == GIST_BUFFERING_STATS &&
         buildstate->indtuples >= BUFFERING_MODE_TUPLE_SIZE_STATS_TARGET))
    {
        /*
         * Index doesn't fit in effective cache anymore. Try to switch to
         * buffering build mode.
         */
        gistInitBuffering(buildstate);
    }
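
Based on that check, something like the following might suffice - note the
branch above is only reached with buffering = auto (an explicit buffering = on
bypasses it), and the table/index names here are made up:

SET effective_cache_size = '1MB';  -- 128 blocks, so the check trips early
CREATE INDEX some_gist_idx ON some_table USING gist (pt)
  WITH (buffering = auto);
RESET effective_cache_size;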



> >> Can we move them to some other portion of our test suite, preferably one
> >> that's not repeated four or more times in each buildfarm run?
>
> > Not trivially, at least. Right now 027_stream_regress.pl doesn't run other
> > tests, so we'd not cover the replay portion if we moved the tests to
> > contrib/btree_gist or such.
>
> Yeah, I was imagining teaching 027_stream_regress.pl to run additional
> scripts that aren't in the core tests.

Not opposed to that, but I'm not quite sure about what we'd use as
infrastructure. A second schedule?

I think the test patches I am proposing here are quite valuable to run
without replication involved as well; the killtuples path isn't trivial, so
having it covered by the normal regression tests makes sense to me.


> (I'm still quite unhappy that 027_stream_regress.pl uses the core tests at
> all, really, as they were never particularly designed to cover what it cares
> about.  The whole thing is extremely inefficient and it's no surprise that
> its coverage is scattershot.)

I don't think anybody would claim it's great as-is. But I still think that
having meaningful coverage of replay is a heck of a lot better than not
having any, even if it's not a pretty or all that fast design. And the fact
that 027_stream_regress.pl runs with a small shared_buffers actually shook out
a few bugs...

I don't think we'd want to use a completely separate set of tests for
027_stream_regress.pl, typical tests will provide coverage on both the primary
and the standby, I think, and would just end up being duplicated between the
main regression test and something specific for 027_stream_regress.pl. But I
could imagine that it's worth maintaining a distinct version of
parallel_schedule that removes tests that aren't likely to provide benefits
for 027_stream_regress.pl.


Btw, from what I can tell, the main bottleneck for the main regression test
right now is the granular use of groups. Because the parallel groups have a
fixed size limit, there's a stall waiting for the slowest test at the end of
each group. If we instead limited group concurrency solely in pg_regress,
instead of the schedule, a quick experiment suggests we could get a good bit
of speedup. It would also remove some indecision paralysis about which group
to add a new test to, as well as the annoyance of having to count the number
of tests in a group manually.

Greetings,

Andres Freund

[1] https://www.postgresql.org/message-id/flat/16329-7a6aa9b6fa1118a1%40postgresql.org



Re: regression coverage gaps for gist and hash indexes

From
Alexander Lakhin
Date:
Hi,

31.03.2023 17:00, Alexander Lakhin wrote:
> 31.03.2023 15:55, Tom Lane wrote:
>> See also the thread about bug #16329 [1]. Alexander promised to look
>> into improving the test coverage in this area, maybe he can keep an
>> eye on the WAL logic coverage too.
>
> Yes, I'm going to analyze that area too. Maybe it'll take more time
> (a week or two) if I encounter some bugs there (for now I observe anomalies
> with gist__int_ops), but I will definitely try to improve the gist testing.

After 2+ weeks of research, I'd like to summarize my findings.
1) The checking query proposed in [1] could be improved by adding
the restriction "tgk.v = brute.v" to the condition:
WHERE tgk.k >> point(brute.min - 1, 0) AND tgk.k << point(brute.max + 1, 0)
Otherwise that query gives a false positive after
insert into test_gist_killtuples values(point(505, 0));

A similar improved condition could be placed in hash_index_killtuples.sql.

Yet another improvement for the checking query could be done with the
replacement:
min(k <-> point(0, 0)), max(k <-> point(0, 0)) ->
min(k <-> point(0, k[1])), max(k <-> point(0, k[1])) ...

It doesn't change the query plan dramatically, but the query becomes more
universal (it would work for points with any non-negative integer x).
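
To spell out why: k <-> point(0, k[1]) is the distance to a point with the
same y coordinate, i.e. just the absolute value of x, regardless of where the
point sits vertically. For instance:

SELECT point(3, 7) <-> point(0, 7);  -- yields 3, the x coordinate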

2) I've checked clang's scan-build notices related to gist as I planned [2],
namely:
Logic error    Branch condition evaluates to a garbage value   src/backend/access/gist/gistutil.c          gistCompressValues               606
Logic error    Dereference of null pointer                     src/backend/access/gist/gist.c              gistFindCorrectParent            1099
Logic error    Dereference of null pointer                     src/backend/access/gist/gist.c              gistdoinsert                     671
Logic error    Dereference of null pointer                     src/backend/access/gist/gist.c              gistfinishsplit                  1339
Logic error    Dereference of null pointer                     src/backend/access/gist/gist.c              gistplacetopage                  340
Logic error    Dereference of null pointer                     src/backend/access/gist/gistbuildbuffers.c  gistPushItupToNodeBuffer         366
Logic error    Result of operation is garbage or undefined     src/backend/access/gist/gistbuildbuffers.c  gistRelocateBuildBuffersOnSplit  677
Logic error    Result of operation is garbage or undefined     src/backend/access/gist/gistutil.c          gistchoose                       463
Unused code    Dead assignment                                 src/backend/access/gist/gist.c              gistdoinsert                     843

And I found that all of them (except for the last one, which isn't worth
fixing, IMO) are false positives. (I can present detailed explanations if
that's of interest.) So I see no grounds here to build new tests on.

3) To date I've found other anomalies more or less related to gist:
fillfactor is ignored for the sorted index build mode, which is effectively the default now [3]
amcheck verification for gist is not yet ready to use [4] (and the latest patch doesn't apply to the current HEAD)
bug #17888: Incorrect memory access in gist__int_ops for an input array with many elements [5]

4) I've constructed some tests that provide full coverage for
gistFindCorrectParent(), reach the "very rare situation" code path, and
exercise gistfixsplit(), but all these tests execute concurrent queries, so they
can't be implemented as simple regression tests. Moreover, I could not find
any explicit problems when reaching those places (I used the checking query
from [1] in the absence of other means to check gist indexes), so I see no value
in developing (not to speak of committing) these tests for now. I'm going to
further explore the gist behavior in those dark corners, but it looks like
a long-term task, so I think it shouldn't delay the gist coverage improvement
already proposed.

5)
02.04.2023 20:50, Andres Freund wrote:
> Looks like the test in [1] could be made a lot cheaper by changing effective_cache_size
> for just that test:
effective_cache_size is taken into account only when buffering = auto, but in
that test we have buffering = on, so changing it wouldn't help there.

While looking at gist-related tests, I've noticed an incorrect comment
in index_including_gist.sql:
  * 1.1. test CREATE INDEX with buffered build

It's incorrect precisely because with the default effective_cache_size the
buffered build mode is not enabled for that index size (I've confirmed
this with an elog(LOG, ...) placed inside gistInitBuffering()).

So I'd like to propose the attached patch, which:
a) demonstrates bug #16329:
With 8e5eef50c reverted, I get:
**00:00:00:11.179 1587838** Valgrind detected 1 error(s) during execution of "CREATE INDEX tbl_gist_idx ON tbl_gist 
using gist (c4) INCLUDE (c1,c2,c3) WITH (buffering = on);"
b) makes the comment in index_including_gist.sql correct
c) increases the visible test coverage a little, in particular:
  Function 'gistBuffersReleaseBlock'
-Lines executed:66.67% of 9
+Lines executed:100.00% of 9
d) doesn't increase the test duration significantly:
without valgrind I see the difference 84 ms -> 93 ms, under valgrind: 13513 ms -> 14511 ms

Thus, I think it's worth splitting the gist testing improvement activity
into finalizing/accepting the already-emerging patches, and into background
research/anomaly finding, which could inspire further enhancements in this
area.

[1] https://www.postgresql.org/message-id/20230331231300.4kkrl44usvy2pmkv%40awork3.anarazel.de
[2] https://www.postgresql.org/message-id/cad7055f-0d76-cc31-71d5-f8b600ebb116%40gmail.com
[3] https://www.postgresql.org/message-id/fbbfe5dc-3dfa-d54a-3a94-e2bee37b85d8%40gmail.com
[4] https://www.postgresql.org/message-id/885cfb61-26e9-e7c1-49a8-02b3fb12b497%40gmail.com
[5] https://www.postgresql.org/message-id/17888-f72930e6b5ce8c14@postgresql.org

Best regards,
Alexander
Attachment