Hi Tels!
Thanks for your interest in fast decompression.
> On 3 Nov 2019, at 12:24, Tels <nospam-pg-abuse@bloodgate.com> wrote:
>
> I wonder if you agree and what would happen if you try this variant on your corpus tests.
I've tried some different optimizations for literals, for example loop unrolling[0] and bulk-copying of literal runs (sketched below).
These approaches brought some performance improvement, but with noise: statistically they were better in some cases and
worse in others, a net win overall, but that "net win" depends on which data and which platforms we consider important.
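To make the bulk-copying idea concrete, here is a minimal sketch, not the actual patch: instead of emitting one literal byte per control-bit iteration, count the run of consecutive literal bits in the control byte and copy the whole run with a single memcpy(). It assumes the pglz convention that a zero control bit marks a literal byte; copy_literal_run() and its signature are hypothetical helpers for illustration, loosely following the sp/dp naming in pg_lzcompress.c.

#include <string.h>
#include <stdint.h>

static inline void
copy_literal_run(unsigned char **dpp, const unsigned char **spp,
                 uint8_t ctrl, int *ctrl_shift)
{
    int run = 0;

    /* In pglz a zero control bit means "one literal byte follows". */
    while (*ctrl_shift < 8 && ((ctrl >> *ctrl_shift) & 1) == 0)
    {
        run++;
        (*ctrl_shift)++;
    }

    if (run > 0)
    {
        /* One memcpy() instead of 'run' single-byte copies. */
        memcpy(*dpp, *spp, run);
        *dpp += run;
        *spp += run;
    }
}

Whether this wins depends on the typical literal run length: for short runs the memcpy() setup cost is comparable to the byte loop, which is consistent with the noisy results above.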
The proposed patch, by contrast, clearly makes decompression faster on any dataset and platform.
I believe improving pglz further is viable, but optimizations like a common data prefix seem more promising to me.
Also, I think we actually need real codecs like lz4, zstd and brotli instead of reinventing our own wheel.
If you have some spare time - Pull Requests to test_pglz are welcome; let's benchmark more micro-optimizations, it
brings a lot of fun :)
--
Andrey Borodin
Open source RDBMS development team leader
Yandex.Cloud
[0] https://github.com/x4m/test_pglz/blob/master/pg_lzcompress_hacked.c#L166