Hello Maxim!
On Thu, Sep 11, 2025 at 11:58 AM Maxim Orlov <orlovmg@gmail.com> wrote:
>
> Once again, @ 8191e0c16a
Thank you for your work on this subject. Multixact members can
indeed grow much faster than multixact offsets, so avoiding
wraparound for them alone might make sense. At the same time, making
multixact offsets 64-bit is a local change and doesn't require
reinterpreting tuple xmin/xmax.
I went through the patchset. The overall shape looks reasonable, but
I have a concern about the size of the multixact offsets. As I
understand it, this patchset doubles the on-disk size of the
multixact offsets: each offset grows from 32 bits to 64 bits. That
seems like quite a price to pay for avoiding multixact members
wraparound.
We could try to squeeze the multixact offsets, given that they form
an ascending sequence in which each entry increases by the size of
the corresponding multixact. But how many members can a multixact
contain at maximum? Looking at MultiXactIdExpand(), I see that we
keep locks from in-progress transactions plus committed non-lock
transactions (I guess there can be only one of the latter). The
number of transactions run by backends must fit within MAX_BACKENDS
(2^18 - 1), and the number of prepared transactions must also fit
within MAX_BACKENDS. So I guess we can cap the number of members in
a single multixact at 2^24.
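To illustrate the arithmetic, here is a rough sketch of the bound
under the assumptions above (the macro names are hypothetical, not
taken from the patchset):

    /* 2^18 - 1, the limit on the number of backends */
    #define MAX_BACKENDS_BOUND    ((1 << 18) - 1)

    /*
     * Lockers are bounded by in-progress backend transactions plus
     * prepared transactions, and at most one committed non-lock
     * updater is kept.
     */
    #define MAX_MEMBERS_BOUND     (2 * MAX_BACKENDS_BOUND + 1)

    /* 2 * (2^18 - 1) + 1 = 2^19 - 1: fits in 24 bits with headroom */
    _Static_assert(MAX_MEMBERS_BOUND < (1 << 24),
                   "multixact member count fits in 24 bits");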
Therefore, instead of storing each group of 8 offsets as eight
32-bit values (32 bytes), we could store one 64-bit offset plus
seven 24-bit offset increments (29 bytes). The actual multixact
offsets can be calculated on the fly; the overhead shouldn't be
significant. What do you think?
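For illustration, a minimal decoding sketch under that scheme
(hypothetical names and layout, not the patchset's code): each
29-byte group stores a 64-bit base followed by seven 24-bit deltas,
and the n-th offset is reconstructed by summing the deltas before
it.

    #include <stdint.h>
    #include <string.h>

    #define GROUP_SIZE   8                 /* offsets per group */
    #define GROUP_BYTES  (8 + 7 * 3)       /* 29 bytes per group */

    /* Return the n-th (0..7) offset stored in a 29-byte group. */
    static uint64_t
    group_get_offset(const uint8_t *group, int n)
    {
        uint64_t    offset;

        /* 64-bit base offset of the group's first entry */
        memcpy(&offset, group, sizeof(offset));

        /* add the little-endian 24-bit deltas preceding entry n */
        for (int i = 0; i < n; i++)
        {
            const uint8_t *d = group + 8 + i * 3;

            offset += (uint64_t) d[0] |
                      ((uint64_t) d[1] << 8) |
                      ((uint64_t) d[2] << 16);
        }
        return offset;
    }

Reading one offset then costs at most seven additions, which should
be negligible next to the SLRU page access itself.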
------
Regards,
Alexander Korotkov
Supabase