Re: libpq compression - Mailing list pgsql-hackers

From Daniil Zakhlystov
Subject Re: libpq compression
Date
Msg-id 119041606138877@mail.yandex-team.ru
In response to Re: libpq compression  (Andrey Borodin <x4mmm@yandex-team.ru>)
Responses Re: libpq compression  (Konstantin Knizhnik <k.knizhnik@postgrespro.ru>)
List pgsql-hackers
** this is a plaintext version of the previous HTML-formatted message **

Hi,

I’ve run a couple of pgbench benchmarks using this patch with the Odyssey connection pooler, with client-to-pooler
ZSTD compression turned on.

pgbench --builtin tpcb-like -t 75 --jobs=32 --client=1000

CPU utilization chart of the configuration above:
https://storage.yandexcloud.net/usernamedt/odyssey-compression.png

CPU overhead on average was about 10%.

pgbench -i -s 1500

CPU utilization chart of the configuration above:
https://storage.yandexcloud.net/usernamedt/odyssey-compression-i-s.png

As you can see, there was no noticeable difference in CPU utilization with ZSTD compression enabled or disabled.

Regarding replication, I've made a couple of fixes for this patch; you can find them in this pull request:
https://github.com/postgrespro/libpq_compression/pull/3

With these fixes applied, I've run some tests using this patch with streaming physical replication on some large
clusters. Here is the difference in network usage on the replica with ZSTD replication compression enabled, compared
to the replica without replication compression:

- on pgbench -i -s 1500 there was ~23x less network usage

- on pgbench -T 300 --jobs=32 --client=640 there was ~4.5x less network usage

- on pg_restore of the ~300 GB database there was ~5x less network usage

To sum up, I think that the current version of the patch (with per-connection compression) is OK from the protocol
point of view, except for the compression initialization part. As discussed, we can either do initialization before
the startup packet or move the compression setting to a _pq_ parameter to avoid issues on older backends.
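To make the _pq_ variant concrete, here is a minimal sketch of how such a parameter could travel in an ordinary v3.0 startup packet (int32 length including itself, int32 version 196608, NUL-terminated key/value pairs, closed by a trailing NUL). The parameter name `_pq_.compression` is only illustrative of the proposal under discussion, not a settled interface:

```python
import struct

def build_startup_packet(params):
    """Build a v3.0 startup packet: int32 length (which counts itself),
    int32 protocol version, NUL-terminated key/value pairs, final NUL."""
    body = struct.pack("!i", 196608)  # protocol 3.0 = 3 << 16
    for key, value in params.items():
        body += key.encode() + b"\x00" + value.encode() + b"\x00"
    body += b"\x00"  # terminator after the last pair
    return struct.pack("!i", len(body) + 4) + body

# Hypothetical negotiation: an older backend would simply report
# "_pq_.compression" as an unrecognized protocol parameter instead
# of erroring out on an unknown protocol version.
pkt = build_startup_packet({
    "user": "postgres",
    "database": "postgres",
    "_pq_.compression": "zstd",
})
```

The appeal of this route is exactly the graceful-degradation behavior: _pq_-prefixed names are reserved for protocol-level options, so an old server can ignore them without breaking the connection.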

Regarding switchable on-the-fly compression: although it introduces more flexibility, it seems that it would
significantly increase the implementation complexity of both the frontend and the backend. To support this approach
in the future, maybe we should add something like a compression mode to the protocol and name the current approach
“permanent”, while reserving the “switchable” compression type for a future implementation?
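One cost of the switchable mode, beyond protocol complexity, is compression efficiency: restarting the compressor each time compression is re-enabled throws away the history window that makes stream compression effective. A small sketch of that effect, using stdlib zlib in place of zstd purely for illustration (the ratio differs, the principle does not):

```python
import zlib

# 200 similar protocol-style messages, as a connection might carry.
messages = [f"INSERT INTO t VALUES ({i}, 'payload payload payload')".encode()
            for i in range(200)]

# "Permanent" mode: one compressor for the whole connection, so the
# history built from earlier messages keeps helping later ones.
c = zlib.compressobj(level=6)
continuous = sum(len(c.compress(m)) for m in messages) + len(c.flush())

# "Switchable" mode approximated as restarting the compressor per
# message: every restart discards the shared history.
restarted = 0
for m in messages:
    c = zlib.compressobj(level=6)
    restarted += len(c.compress(m)) + len(c.flush())

# The continuous stream compresses the same traffic into fewer bytes.
assert continuous < restarted
```

This lines up with the point quoted below that restarted compression is much less efficient, which is another argument for shipping the permanent mode first.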

Thanks,

Daniil Zakhlystov

06.11.2020, 11:58, "Andrey Borodin" <x4mmm@yandex-team.ru>:
>>  6 Nov 2020, at 00:22, Peter Eisentraut <peter.eisentraut@enterprisedb.com>:
>>
>>  On 2020-11-02 20:50, Andres Freund wrote:
>>>  On 2020-10-31 22:25:36 +0500, Andrey Borodin wrote:
>>>>  But the price of compression is 1 cpu for 500MB/s (zstd). With a
>>>>  20Gbps network adapters cost of recompressing all traffic is at most
>>>>  ~4 cores.
>>>  It's not quite that simple, because presumably each connection is going
>>>  to be handled by one core at a time in the pooler. So it's easy to slow
>>>  down peak throughput if you also have to deal with TLS etc.
>>
>>  Also, current deployments of connection poolers use rather small machine sizes. Telling users you need 4 more
>>  cores per instance now to decompress and recompress all the traffic doesn't seem very attractive. Also, it's not
>>  unheard of to have more than one layer of connection pooling. With that, this whole design sounds a bit like a
>>  heat-generation system. ;-)
>
> User should ensure good bandwidth between pooler and DB. At least they must be within one availability zone. This
> makes compression between pooler and DB unnecessary. Cross-datacenter traffic is many times more expensive.
>
> I agree that switching between compression levels (including turning it off) seems like nice feature. But
> 1. Scope of its usefulness is an order of magnitude smaller than compression of the whole connection.
> 2. Protocol for this feature is significantly more complicated.
> 3. Restarted compression is much less efficient and effective.
>
> Can we design a protocol so that this feature may be implemented in the future, while currently focusing on getting
> things compressed? Are there any drawbacks in this approach?
>
> Best regards, Andrey Borodin.


