Re: wal_compression=zstd - Mailing list pgsql-hackers

From Michael Paquier
Subject Re: wal_compression=zstd
Date
Msg-id YihgIWj3kqAvfKj+@paquier.xyz
In response to Re: wal_compression=zstd  (Michael Paquier <michael@paquier.xyz>)
List pgsql-hackers
On Sat, Mar 05, 2022 at 07:26:39PM +0900, Michael Paquier wrote:
> Repeatability and randomness of data counts, we could have for example
> one case with a set of 5~7 int attributes, a second with text values
> that include random data, up to 10~12 bytes each to count on the tuple
> header to be able to compress some data, and a third with more
> repeatable data, like one attribute with an int column populated
> with generate_series().  Just to give an idea.

And that's what I did with the attached set of tests:
- Cluster on tmpfs.
- max_wal_size, min_wal_size at 2GB and shared_buffers at 1GB, aka
large enough to include the full data set in memory.
- Rather than using Justin's full patch set, I have just patched the
code in xloginsert.c to switch the level.
- One case with a table that uses a single int attribute, with rather
repetitive data, worth 484MB.
- One case with a table using (int, text), where the text data is made
of 10~11 bytes of random data, worth ~340MB.  (A sketch of both table
definitions is given just after this list.)
- Use pg_prewarm to load the data into shared buffers.  With the
cluster mounted on a tmpfs, that should not matter much, though.
- Both tables have a fillfactor at 50 to give room to the updates.
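
For reference, the two tables could be set up along these lines.  This
is only a sketch; the table names, row counts and random-text
generation are assumptions of mine, the actual script is in
test.tar.gz:

  -- Sketch only, see test.tar.gz for the real script.
  CREATE TABLE int_tab (a int) WITH (fillfactor = 50);
  INSERT INTO int_tab SELECT i FROM generate_series(1, 15000000) AS i;

  CREATE TABLE int_text_tab (a int, b text) WITH (fillfactor = 50);
  INSERT INTO int_text_tab
    SELECT i, substr(md5(random()::text), 1, 11)
    FROM generate_series(1, 8000000) AS i;

  -- Load the data into shared buffers before generating the FPIs.
  CREATE EXTENSION pg_prewarm;
  SELECT pg_prewarm('int_tab');
  SELECT pg_prewarm('int_text_tab');

  -- The FPIs come from a full-table rewrite done after a checkpoint.
  CHECKPOINT;
  UPDATE int_tab SET a = a;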

I have measured the CPU usage with a toy extension, also attached,
called pg_rusage, which is a simple wrapper around upstream's
pg_rusage.c.  It provides two SQL functions to initialize a rusage
snapshot and to print its data, and these are called just before and
after the FPIs are generated (aka the UPDATE query that rewrites the
whole table in the attached script).
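
In short, the extension is used like this (the function names below
are illustrative assumptions; the real ones are in pg_rusage.tar.gz):

  -- Names are assumptions, see pg_rusage.tar.gz for the actual SQL API.
  CREATE EXTENSION pg_rusage;
  SELECT pg_rusage_init();    -- take a rusage snapshot
  UPDATE int_tab SET a = a;   -- the FPI-generating workload
  SELECT pg_rusage_print();   -- report user/system CPU since the snapshot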

The quickly-hacked test script and the results are in test.tar.gz, for
reference.  The toy extension is pg_rusage.tar.gz.

Here are the results I compiled, as per results_format.sql in the
attached tarball:
             descr             | rel_size | fpi_size | time_s
-------------------------------+----------+----------+--------
 int column no compression     | 429 MB   | 727 MB   |  13.15
 int column zstd default level | 429 MB   | 523 MB   |  14.23
 int column zstd level 1       | 429 MB   | 524 MB   |  13.94
 int column zstd level 10      | 429 MB   | 523 MB   |  23.46
 int column zstd level 19      | 429 MB   | 523 MB   | 103.71
 int column lz4 default level  | 429 MB   | 575 MB   |  13.37
 int/text no compression       | 344 MB   | 558 MB   |  10.08
 int/text lz4 default level    | 344 MB   | 463 MB   |  10.29
 int/text zstd default level   | 344 MB   | 415 MB   |  11.48
 int/text zstd level 1         | 344 MB   | 418 MB   |  11.25
 int/text zstd level 10        | 344 MB   | 415 MB   |  20.59
 int/text zstd level 19        | 344 MB   | 413 MB   |  62.64
(12 rows)
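
As a side note, the WAL/FPI volume of a run can also be cross-checked
from pg_stat_wal with something like the query below (not necessarily
what results_format.sql does, just a possible sanity check):

  -- Reset the WAL stats, run the UPDATE, then look at the WAL activity.
  SELECT pg_stat_reset_shared('wal');
  UPDATE int_tab SET a = a;
  SELECT wal_records, wal_fpi, pg_size_pretty(wal_bytes) AS wal_size
    FROM pg_stat_wal;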

I did not expect zstd to be this slow at a level of ~10, actually.  The
runtime (elapsed CPU time) got severely impacted at level 19, which I
ran just for fun to see how it would compare to a level of 10.  There
is only a slight difference between the default level and a level of 1:
neither the compressed size nor the CPU usage changes much.

While on it, attached is an updated patch that I have tweaked before
running my own tests.

In the end, I still think that we'd better stick with the default
level for this parameter, which is also what upstream suggests.  So I
would like to move on with that for this patch.
--
Michael

