From: Николай Петров
Subject: [HACKERS][Proposal] LZ4 Compressed Storage Manager
Msg-id: 11996861554042351@iva4-dd95b404a60b.qloud-c.yandex.net
List: pgsql-hackers
Hello everyone!
Thank you for your interest in this topic.

I would like to propose Compressed Storage Manager for PostgreSQL.

The problem:
When you store log-like or time-series data in your tables, you may
face high disk space consumption simply because of the volume of
data. Compressing such tables is attractive, especially when the data
is highly compressible and the workload is OLAP / WORM (write once,
read many).

Current ways to solve this problem:
Today this can be solved with a compressing file system such as Btrfs
or ZFS. This approach has a mixed impact on performance and
complicates administration.

Other DBs' approaches:
Postgres Pro Enterprise ships an embedded CFS [1][2] for this purpose.
MySQL InnoDB offers two kinds of compression: table-level compression
(zlib only) [3] and transparent page compression (zlib, LZ4) [4],
implemented via hole punching [5].

My proposal:
Implement an LZ4 Compressed Storage Manager. It compresses pages when
writing them to block files and decompresses them on reading. I
suggest starting with LZ4 because it has low CPU overhead and is
available under the BSD 2-clause license.
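
For illustration only, here is a minimal sketch of the per-page
compress/decompress step using the plain liblz4 API
(LZ4_compress_default / LZ4_decompress_safe). BLCKSZ stands for
PostgreSQL's page size; the "store raw if incompressible" convention
is my assumption for the sketch, not a settled design decision:

    #include <stdbool.h>
    #include <lz4.h>

    #define BLCKSZ 8192             /* PostgreSQL's default page size */

    /*
     * Compress one page.  Returns the number of bytes written to dst,
     * or 0 if the result would not be smaller than the original (the
     * caller would then store the page raw).  dst must hold at least
     * LZ4_compressBound(BLCKSZ) bytes.
     */
    static int
    compress_page(const char *page, char *dst, int dstlen)
    {
        int         len = LZ4_compress_default(page, dst, BLCKSZ, dstlen);

        if (len <= 0 || len >= BLCKSZ)
            return 0;
        return len;
    }

    /* Decompress one page back into an 8 kB buffer. */
    static bool
    decompress_page(const char *src, int srclen, char *page)
    {
        return LZ4_decompress_safe(src, page, srclen, BLCKSZ) == BLCKSZ;
    }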

Compressed Storage Manager operation description (TL;DR: the algorithm
could be similar to MySQL table-level compression; a rough C sketch of
the read and write paths follows the diagram below):
    - It stores compressed pages in a block file, but because
      compressed pages vary in size, it needs an additional file
      holding the offset of each page.
    - On a read, it translates the file/offset request coming from the
      upper PostgreSQL layers into the actual page offset, reads the
      compressed page bytes, decompresses them, and fills the
      requested buffer with the decompressed page.
    - Writing a new page is quite simple: compress the page, write it
      to the block file, and append the page offset to the pointers
      file.
    - When a changed page has to be written back, it checks whether
      the compressed page is smaller than or equal to the previous
      version. If it is bigger, the page is written to the end of the
      block file and the page pointer is updated; the old page version
      becomes dead.
    - A free-space release mechanism could be added later; for
      instance, MySQL uses hole punching (which has a mixed impact on
      performance [6]). Initially, dead pages could be reclaimed via
      VACUUM FULL.

    pointers file
  +====+====+====+
  | p1 | p2 | p3 |
  +=|==+=|==+=|==+
    |    |    |__________________________________
    |    |_____________________                  |
    |                          |                 |         block file
  +=|======+=================+=|======+========+=|======+===========+
  | p1 len | p1 ####data#### | p2 len | p2 #d# | p3 len | p3 #data# |
  +========+=================+========+========+========+===========+
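
To make this concrete, below is a rough standalone sketch of the
two-file layout and the read/write paths described above, using plain
stdio instead of PostgreSQL's md.c/fd.c machinery. All names
(PagePointer, PageHeader, read_page, write_page) are illustrative, and
error handling is reduced to abort(); a real storage manager would go
through the smgr interface and report errors with ereport():

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <lz4.h>

    #define BLCKSZ 8192

    /* One fixed-width entry per block number in the pointers file. */
    typedef struct
    {
        uint64_t    offset;     /* where the page starts in the block file */
    } PagePointer;

    /* Each page in the block file is prefixed with its stored length. */
    typedef struct
    {
        uint32_t    len;        /* compressed size; BLCKSZ means "stored raw" */
    } PageHeader;

    /* Read path: pointers file -> offset -> length header -> page bytes. */
    static void
    read_page(FILE *ptrfile, FILE *blkfile, uint32_t blockno, char *out)
    {
        PagePointer ptr;
        PageHeader  hdr;
        char        buf[BLCKSZ];

        fseek(ptrfile, (long) (blockno * sizeof(PagePointer)), SEEK_SET);
        fread(&ptr, sizeof(ptr), 1, ptrfile);

        fseek(blkfile, (long) ptr.offset, SEEK_SET);
        fread(&hdr, sizeof(hdr), 1, blkfile);
        if (hdr.len > BLCKSZ)
            abort();                        /* corrupt length */
        fread(buf, 1, hdr.len, blkfile);

        if (hdr.len == BLCKSZ)
            memcpy(out, buf, BLCKSZ);       /* page was stored raw */
        else if (LZ4_decompress_safe(buf, out, (int) hdr.len, BLCKSZ) != BLCKSZ)
            abort();                        /* corrupt page */
    }

    /*
     * Write path: recompress the page; if it still fits into its old
     * slot, overwrite it in place, otherwise append it to the end of
     * the block file and repoint the entry (the old copy becomes dead).
     */
    static void
    write_page(FILE *ptrfile, FILE *blkfile, uint32_t blockno, const char *page)
    {
        PagePointer ptr;
        PageHeader  oldhdr, newhdr;
        char        buf[LZ4_COMPRESSBOUND(BLCKSZ)];
        int         len = LZ4_compress_default(page, buf, BLCKSZ,
                                               (int) sizeof(buf));
        bool        append = true;

        if (len <= 0 || len >= BLCKSZ)      /* incompressible: store raw */
        {
            memcpy(buf, page, BLCKSZ);
            len = BLCKSZ;
        }
        newhdr.len = (uint32_t) len;

        fseek(ptrfile, (long) (blockno * sizeof(PagePointer)), SEEK_SET);
        if (fread(&ptr, sizeof(ptr), 1, ptrfile) == 1)
        {
            fseek(blkfile, (long) ptr.offset, SEEK_SET);
            fread(&oldhdr, sizeof(oldhdr), 1, blkfile);
            if (newhdr.len <= oldhdr.len)
                append = false;             /* fits: overwrite in place */
        }
        if (append)
        {
            fseek(blkfile, 0, SEEK_END);
            ptr.offset = (uint64_t) ftell(blkfile);
        }
        else
            fseek(blkfile, (long) ptr.offset, SEEK_SET);

        fwrite(&newhdr, sizeof(newhdr), 1, blkfile);
        fwrite(buf, 1, (size_t) newhdr.len, blkfile);

        fseek(ptrfile, (long) (blockno * sizeof(PagePointer)), SEEK_SET);
        fwrite(&ptr, sizeof(ptr), 1, ptrfile);
    }

Note the trade-off the write path makes explicit: an update that
compresses worse than before relocates the page, which is exactly
where dead space, and hence the need for VACUUM FULL or hole punching,
comes from.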


Test of possible compression (database [7], table ticket_flights [8]):
    547M 47087                           <- uncompressed
    200M 47087.lz4.1.pages.compressed    <- pages compressed (37% of the original size)

Pros:
- decreases disk space usage
- decreases disk reads
Cons:
- possibly increases random-access I/O
- increases CPU usage
- possible conflicts with PostgreSQL's expectations
  of Storage Manager behaviour
- could conflict with the pg_basebackup and pg_upgrade utilities
- compression requires additional memory

Why should it be implemented at the Storage Manager level instead of
using the Pluggable storage API [9]?
  - From my point of view, a Storage Manager level implementation
    allows focusing on proper I/O operations and compression, which
    makes for a much simpler implementation, because the Pluggable
    storage API forces you to implement more complex interfaces. To
    be honest, I am really hesitant about this point, especially
    because the Pluggable storage API allows creating an extension
    without modifying core code, and it potentially allows using more
    effective compression algorithms (a Table Access Manager gives
    you more information about the data being stored).

I would like to implement a proof of concept
and have a couple of questions:
  - What is your opinion on the necessity of this feature
    (a Compressed Storage Manager)?
  - Is it a good idea to implement DB compression at the Storage
    Manager level? Perhaps it is better to use the Pluggable
    storage API.
  - Is there any reason to reject this proposal?
  - Are there any circumstances that would prevent implementing a
    Compressed Storage Manager?

Regards, 
Nikolay P.

[1] - https://postgrespro.com/docs/enterprise/9.6/cfs
[2] - https://afiskon.github.io/static/2017/postgresql-in-core-compression-pgconf2017.pdf (page 17)
[3] - https://dev.mysql.com/doc/refman/8.0/en/innodb-table-compression.html
[4] - https://dev.mysql.com/doc/refman/8.0/en/innodb-page-compression.html
[5] - https://lwn.net/Articles/415889/
[6] - https://www.percona.com/blog/2017/11/20/innodb-page-compression/
[7] - https://postgrespro.com/education/demodb
[8] - https://postgrespro.com/docs/postgrespro/10/apjs02
[9] - https://commitfest.postgresql.org/22/1283/



