Re: [GENERAL] tuple data size and compression - Mailing list pgsql-general

From Tom DalPozzo
Subject Re: [GENERAL] tuple data size and compression
Date
Msg-id CAK77FCTsSjNynyiWWoZA8CyPVFYRrEfGda5mHkEYWhLf2GLYdQ@mail.gmail.com
In response to Re: [GENERAL] tuple data size and compression  (Adrian Klaver <adrian.klaver@aklaver.com>)
Responses Re: [GENERAL] tuple data size and compression
List pgsql-general
https://www.postgresql.org/docs/9.5/static/storage-toast.html

"The TOAST management code is triggered only when a row value to be stored in a table is wider than TOAST_TUPLE_THRESHOLD bytes (normally 2 kB). The TOAST code will compress and/or move field values out-of-line until the row value is shorter than TOAST_TUPLE_TARGET bytes (also normally 2 kB) or no more gains can be had. During an UPDATE operation, values of unchanged fields are normally preserved as-is; so an UPDATE of a row with out-of-line values incurs no TOAST costs if none of the out-of-line values change."

Pupillo
-- 
Adrian Klaver
adrian.klaver@aklaver.com
 
I see. But in my case the rows don't reach that threshold (I didn't verify whether it is 2 kB on my system, but I haven't changed any settings). So I'm wondering whether there is any way other than TOAST to get the rows compressed.
I noticed that when I use constant data, the total I/O writes (measured with iostat) are roughly half the total I/O writes when using random or otherwise hard-to-compress data.
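One quick way to check whether compression is actually happening is to compare the logical length of a value with its stored size via pg_column_size(), which reports the on-disk (possibly compressed) size. A sketch, using a made-up table name, that contrasts a value below the ~2 kB threshold with compressible and incompressible values above it:

```sql
-- Hypothetical table for illustration.
CREATE TABLE toast_test (v text);

INSERT INTO toast_test VALUES (repeat('a', 1000));   -- below threshold: stored as-is
INSERT INTO toast_test VALUES (repeat('a', 10000));  -- above threshold, compressible
INSERT INTO toast_test VALUES (                      -- above threshold, hard to compress
  (SELECT string_agg(md5(random()::text), '') FROM generate_series(1, 313)));

-- stored_size well below logical_size indicates TOAST compression kicked in.
SELECT octet_length(v) AS logical_size,
       pg_column_size(v) AS stored_size
FROM toast_test;
```

For the small row, stored_size should be close to logical_size, consistent with TOAST not being triggered below the threshold.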

