Re: Compression of bigger WAL records - Mailing list pgsql-hackers

From Andrey Borodin
Subject Re: Compression of bigger WAL records
Msg-id 4D8E3280-632B-4C6C-A728-835B7FDE6325@yandex-team.ru
In response to Re: Compression of bigger WAL records  (Andrey Borodin <x4mmm@yandex-team.ru>)
Responses Re: Compression of bigger WAL records
List pgsql-hackers

> On 16 Jan 2026, at 21:17, Andrey Borodin <x4mmm@yandex-team.ru> wrote:
>
> That's a very good idea! We don't need to replace current behavior, we can just complement it.
> I'll implement this idea!

Here's the implementation. The previously existing buffers are now combined
into a single allocation whose size is GUC-controlled (you can grant it more memory).

However, this buffer is now just large enough to accommodate most records...
So maybe we do not need a GUC at all, because keeping it minimal (the same
consumption as before the patch) is enough.

The patch now has essentially no extra memory footprint, yet saves 25% of
WAL on index creation (in the case of random data).

Users can force FPI-only compression by increasing wal_compression_threshold
to 1GB.
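For illustration, assuming the GUC keeps the name used above and accepts a memory-unit value (both assumptions; the patch may differ), that would look like:

```
# postgresql.conf -- hypothetical setting, name taken from this mail
# Records smaller than the threshold skip whole-record compression,
# so 1GB effectively leaves only FPI compression active.
wal_compression_threshold = '1GB'
```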

The decision chain is now a bit complicated:
- assemble the record without compressing FPIs
- try whole-record compression
- if compression enlarged the record, fall back to FPI compression
I think this can be simplified to "try only the one compression approach
that is expected to work; if it does not help, insert the record uncompressed".
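The decision chain above can be sketched roughly as follows. This is an illustrative model only, not the patch itself: the struct, the `mock_compress` stand-in (for pglz/lz4/zstd), and its compression ratios are all invented for the sketch.

```c
#include <stdbool.h>
#include <stddef.h>

/* Invented record shape: total assembled length, and how much of it
 * is full-page images (FPIs). */
typedef struct
{
    size_t      total_len;
    size_t      fpi_len;
} Record;

/* Stand-in for a real compressor; incompressible input can grow
 * slightly because of the compression header. */
static size_t
mock_compress(size_t len, bool compressible)
{
    return compressible ? len / 2 : len + 8;
}

/* Decision chain from the mail:
 *   1. the record is already assembled with uncompressed FPIs;
 *   2. try compressing the whole record;
 *   3. if that enlarged it, fall back to compressing only the FPIs
 *      (and if even that does not help, keep the FPIs as-is). */
static size_t
insert_len(Record r, bool compressible)
{
    size_t      whole = mock_compress(r.total_len, compressible);

    if (whole < r.total_len)
        return whole;           /* whole-record compression won */

    /* fall back to FPI-only compression */
    size_t      fpi = mock_compress(r.fpi_len, compressible);

    if (fpi < r.fpi_len)
        return r.total_len - r.fpi_len + fpi;

    return r.total_len;         /* insert uncompressed */
}
```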

What do you think?


Best regards, Andrey Borodin.


Attachment
