Re: [PoC] Improve dead tuple storage for lazy vacuum - Mailing list pgsql-hackers

From John Naylor
Subject Re: [PoC] Improve dead tuple storage for lazy vacuum
Date
Msg-id CAFBsxsEdAorbMfDGUgtyEBCyUoK=9J=D=UbJC1mT2grL1mSr1A@mail.gmail.com
In response to Re: [PoC] Improve dead tuple storage for lazy vacuum  (Masahiko Sawada <sawada.mshk@gmail.com>)
Responses Re: [PoC] Improve dead tuple storage for lazy vacuum  (Masahiko Sawada <sawada.mshk@gmail.com>)
List pgsql-hackers


On Thu, Dec 22, 2022 at 10:00 PM Masahiko Sawada <sawada.mshk@gmail.com> wrote:

> If the value is a power of 2, it seems to work perfectly fine. But if
> it's, for example, 700MB, the total memory exceeds the limit:
>
> 2*(1+2+4+8+16+32+64+128) = 510MB (72.8% of 700MB) -> keep going
> 510 + 256 = 766MB -> stop but it exceeds the limit.
>
> In a bigger case, if it's 11000MB,
>
> 2*(1+2+...+2048) = 8190MB (74.4%)
> 8190 + 4096 = 12286MB
>
> That being said, I don't think these are common cases. So the 75%
> threshold seems to work fine in most cases.
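
For what it's worth, if the allocation pattern behind those numbers really is two segments per size class with each class double the previous one (an assumption on my part, inferred from the figures above rather than from dsa.c), the overshoot case has a simple closed form:

2*(1 + 2 + ... + 2^k) + 2^(k+1) = 3*2^(k+1) - 2

That gives 3*256 - 2 = 766 for the 700MB case (k = 7) and 3*4096 - 2 = 12286 for the 11000MB case (k = 11), i.e. the final segment is roughly half of everything allocated before it.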

Thinking about it some more, I agree this doesn't carry much practical risk, but from the community's point of view, being loose with memory limits by roughly 10% is not a good precedent.

Perhaps we can be clever and use 75% when the limit is a power of two and 50% otherwise. I'm skeptical of trying to be clever, and I just thought of an additional concern: we're assuming a particular pattern for how new DSA segments grow in size, and that could change. Given how allocators are typically coded, though, it seems safe to assume they'll at most double in size.
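
To make the 75%-vs-50% comparison concrete, here is a small standalone sketch (plain C, not PostgreSQL code) that simulates the assumed growth pattern and reports where vacuum would stop for a given stop fraction. The 1MB starting size, the two-segments-per-size-class doubling, and the check-before-each-new-segment behavior are all assumptions taken from the numbers quoted above, not from reading dsa.c:

#include <stdio.h>

/*
 * Simulate allocating DSA-style segments against limit_mb, stopping once the
 * running total reaches stop_fraction * limit_mb.  The growth pattern (start
 * at 1MB, two segments per size class, each class double the last) is an
 * assumption based on the figures quoted above.
 */
static void
simulate(double limit_mb, double stop_fraction)
{
	double		total = 0;
	double		seg = 1;		/* first segment size in MB (assumed) */
	int			nth_at_size = 0;

	/* Check the threshold before allocating each new segment. */
	while (total < stop_fraction * limit_mb)
	{
		total += seg;
		if (++nth_at_size == 2)	/* two segments per size class (assumed) */
		{
			seg *= 2;
			nth_at_size = 0;
		}
	}

	printf("limit %5.0fMB, stop at %.0f%%: used %5.0fMB (%.1f%% of limit)\n",
		   limit_mb, stop_fraction * 100, total, total / limit_mb * 100);
}

int
main(void)
{
	simulate(700, 0.75);		/* 766MB, ~109% of the limit */
	simulate(700, 0.50);		/* stays under the limit */
	simulate(11000, 0.75);		/* 12286MB, ~112% of the limit */
	simulate(11000, 0.50);		/* stays under the limit */
	return 0;
}

Under those assumptions the 50% threshold always stays under the limit, while 75% overshoots by roughly 10% for the two non-power-of-two limits above.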

--
John Naylor
EDB: http://www.enterprisedb.com
