
From: Andres Freund
Subject: Re: Performance degradation on concurrent COPY into a single relation in PG16.
Date:
Msg-id: 20230916000011.2ugpkkkp7bpp4cfh@awork3.anarazel.de
In response to: Re: Performance degradation on concurrent COPY into a single relation in PG16. (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-hackers
Hi,

On 2023-09-06 18:01:53 -0400, Tom Lane wrote:
> It turns out that this patch is what's making buildfarm member
> chipmunk fail in contrib/pg_visibility [1].  That's easily reproduced
> by running the test with shared_buffers = 10MB.  I didn't dig further
> than the "git bisect" result:

At first I was a bit confounded by not being able to reproduce this. My test
environment had max_connections=110 for some other reason - and the problem
doesn't reproduce with that setting...


> $ git bisect bad
> 82a4edabd272f70d044faec8cf7fd1eab92d9991 is the first bad commit
> commit 82a4edabd272f70d044faec8cf7fd1eab92d9991
> Author: Andres Freund <andres@anarazel.de>
> Date:   Mon Aug 14 09:54:03 2023 -0700
>
>     hio: Take number of prior relation extensions into account
>
> but I imagine the problem is that the patch's more aggressive
> relation-extension heuristic is causing the table to have more
> pages than the test case expects.  Is it really a good idea
> for such a heuristic to depend on shared_buffers?

The heuristic doesn't directly depend on shared_buffers. However, the amount
we extend by is limited by needing to pin buffers covering all of the newly
extended pages.

That's what ends up limiting things here - shared_buffers = 10MB and
max_connections = 10 don't allow for a lot of buffers to be pinned
concurrently by each backend.  Although perhaps LimitAdditionalPins() is a bit
too conservative, due to not checking the private refcount array and just
assuming REFCOUNT_ARRAY_ENTRIES pins are already held.
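
To put rough numbers on it (assuming 8kB blocks and default values for the
various *_workers / *_senders GUCs - I haven't checked chipmunk's actual
config): shared_buffers = 10MB is only 1280 buffers, and IIRC
LimitAdditionalPins() caps a backend at about NBuffers / (MaxBackends + # of
aux processes) pins, minus the assumed REFCOUNT_ARRAY_ENTRIES already-held
pins. With max_connections = 10 that still leaves room to pin a couple dozen
pages in a single extension, enough to leave trailing empty pages behind; with
max_connections = 110 the share drops to roughly one page, which would explain
why I couldn't reproduce the failure at first.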


> If you don't want to change the heuristic then we'll have to find a way to
> tweak this test to avoid it.

We could tweak LimitAdditionalPins() by checking PrivateRefCountArray instead
of assuming the worst-case REFCOUNT_ARRAY_ENTRIES.
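
Roughly something like the following - an untested sketch, with the
surrounding formula written from memory of bufmgr.c, so details may be off:

/*
 * Count how many PrivateRefCountArray slots are actually in use, rather
 * than pessimistically assuming all REFCOUNT_ARRAY_ENTRIES are.
 */
static uint32
CountPrivateRefCountArrayPins(void)
{
    uint32      used = 0;

    for (int i = 0; i < REFCOUNT_ARRAY_ENTRIES; i++)
    {
        if (PrivateRefCountArray[i].buffer != InvalidBuffer)
            used++;
    }

    return used;
}

static void
LimitAdditionalPins(uint32 *additional_pins)
{
    uint32      max_backends;
    int         max_proportional_pins;

    if (*additional_pins <= 1)
        return;

    max_backends = MaxBackends + NUM_AUXILIARY_PROCS;
    max_proportional_pins = NBuffers / max_backends;

    /*
     * Subtract the buffers this backend already has pinned, counting the
     * array entries that are actually used instead of assuming the worst
     * case of REFCOUNT_ARRAY_ENTRIES.
     */
    max_proportional_pins -=
        PrivateRefCountOverflowed + CountPrivateRefCountArrayPins();

    if (max_proportional_pins <= 0)
        max_proportional_pins = 1;

    if (*additional_pins > max_proportional_pins)
        *additional_pins = max_proportional_pins;
}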

However, it seems that the logic in the test is pretty fragile independently
of this issue? A different alignment, block size, or an optimization of the
page layout could also break the test?

Unfortunately, a query that doesn't falsely alert in this case is a bit ugly,
because it needs to deal with the corner case of empty pages at the end:

select *
from pg_visibility_map('copyfreeze')
where
  (not all_visible or not all_frozen)
  -- deal with trailing empty pages due to potentially bulk-extending too aggressively
  and exists(SELECT * FROM copyfreeze WHERE ctid >= ('('||blkno||', 0)')::tid)
;
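
(The exists() is what handles the trailing pages: if no tuple has a ctid at or
beyond (blkno, 0), the block lies past the last tuple in the table and can
only be an empty page left over from bulk extension, so the test shouldn't
alert on its VM bits being unset.)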

Before 82a4edabd27 this situation was rare - you'd have needed contended
extensions. But since that commit it has become more common. I worry that that
might cause other issues :(. OTOH, I think we'll need to extend way more
aggressively at some point...

Greetings,

Andres Freund


