Thread: space taken by a row & compressed data
With MySQL I know how much space a row will take based on the datatypes of
its columns, and I also know (approximately) the size of indexes. Is there
a way to know that in PostgreSQL?

Is there a way to pack (compress) data, as with myisampack for MySQL?

Thank you
Leonardo Francalanci wrote:
> With MySQL I know how much space a row will take based on the datatypes
> of its columns, and I also know (approximately) the size of indexes.
> Is there a way to know that in PostgreSQL?

We have an FAQ item about this.

> Is there a way to pack (compress) data, as with myisampack for MySQL?

Long data values are automatically compressed.

--
  Bruce Momjian                     |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us            |  (610) 359-1001
  +  If your life is a hard drive,  |  13 Roberts Road
  +  Christ can be your backup.     |  Newtown Square, Pennsylvania 19073
Hi Leonardo,

On Thu, 26.08.2004 at 15:51, Leonardo Francalanci wrote:
> With MySQL I know how much space a row will take based on the datatypes
> of its columns, and I also know (approximately) the size of indexes.
> Is there a way to know that in PostgreSQL?
>
> Is there a way to pack (compress) data, as with myisampack for MySQL?

In the contrib dir you will find something (dbsize or so) to check the
sizes of objects (at least tables). Because of MVCC you can have multiple
versions of an updated row around at the same time, so the numbers aren't
always exact.

Long values such as text columns are already stored in compressed form.

HTH
Tino Wildenhain
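Without the contrib module, a quick approximation can also be pulled
straight from the planner statistics in pg_class; a minimal sketch,
assuming the default 8K block size and a hypothetical table name:

    -- Figures are only as fresh as the last VACUUM/ANALYZE, and
    -- relpages includes dead row versions left behind by MVCC.
    SELECT relname,
           relpages * 8192 AS approx_bytes,   -- default 8K block size
           reltuples       AS approx_rows
    FROM   pg_class
    WHERE  relname = 'mytable';               -- hypothetical table name

The same query with relname set to an index's name gives that index's
size, since indexes have pg_class entries too.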
> We have an FAQ item about this.

Damn! I didn't see that one! Sorry...

> Long data values are automatically compressed.

The reason I'm asking is: we have a system that stores 200,000,000 rows
per month (other tables store 10,000,000 rows per month). Every row has
400 columns of integers + 2 columns (date + integer) used as the index.

Our current system compresses rows before writing them to a binary file
on disk. Data don't usually need to be updated or removed, and we usually
access all columns of a row (hence compression on a per-row basis makes
sense).

Is there any way to compress data on a per-row basis? Maybe with a
user-defined type?
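For scale, a back-of-the-envelope footprint for the big table, assuming
4-byte integers and dates and roughly 32 bytes of per-row tuple overhead
(the exact overhead varies by version):

    -- 400 ints + 1 int + 1 date = 402 four-byte fields per row:
    SELECT (402*4 + 32) * 200000000::bigint / 1024^3 AS approx_gb_per_month;
    -- roughly 305 GB/month before indexes, so compression clearly matters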
Leonardo Francalanci wrote:
> > We have an FAQ item about this.
>
> Damn! I didn't see that one! Sorry...
>
> > Long data values are automatically compressed.
>
> The reason I'm asking is: we have a system that stores 200,000,000 rows
> per month (other tables store 10,000,000 rows per month). Every row has
> 400 columns of integers + 2 columns (date + integer) used as the index.
>
> Our current system compresses rows before writing them to a binary file
> on disk. Data don't usually need to be updated or removed, and we usually
> access all columns of a row (hence compression on a per-row basis makes
> sense).
>
> Is there any way to compress data on a per-row basis? Maybe with a
> user-defined type?

Ah, we only compress long row values, which integers would not be. I don't
see any way to compress an entire row, even with a user-defined type,
unless you put multiple values into a single column and compress those as
a single value. In fact, if you used an array or some special data type,
it would become a long value and would be automatically compressed.
However, as integers, there would have to be a lot of duplicate values
before compression would be a win.

--
  Bruce Momjian                     |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us            |  (610) 359-1001
  +  If your life is a hard drive,  |  13 Roberts Road
  +  Christ can be your backup.     |  Newtown Square, Pennsylvania 19073
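A minimal sketch of the array idea Bruce describes, with hypothetical
table and column names: collapsing the 400 integer columns into a single
integer[] turns the bulk of the row into one long value that the
compression logic can see.

    -- Requires 7.4 or later for the ARRAY[...] constructor.
    CREATE TABLE samples (
        sample_date date    NOT NULL,
        sample_id   integer NOT NULL,
        readings    integer[]           -- all 400 values as one datum
    );

    INSERT INTO samples
        VALUES ('2004-08-26', 1, ARRAY[10, 11, 12]);  -- ... 400 values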
"Leonardo Francalanci" <lfrancalanci@simtel.ie> writes: > we have a system that stores 200,000,000 rows per month > (other tables store 10,000,000 rows per month) > Every row has 400 columns of integers + 2 columns (date+integer) as index. > Our system compresses rows before writing them to a binary file on disk. > Data don't usually need to be updated/removed. > We usually access all columns of a row (hence compression on a per-row basis > makes sense). > Is there any way to compress data on a per-row basis? Maybe with > a User-Defined type? If you just stuck all the integers into a single integer-array column, it would be 1600 bytes wide, which is ... hmm ... not quite wide enough to trigger the toast logic. Perhaps it would be worthwhile for you to run a custom build with TOAST_TUPLE_THRESHOLD/TOAST_TUPLE_TARGET set to half their standard values (see src/include/access/tuptoaster.h). You'd not need to write any specialized code that way. Note that if you sometimes search on the values of one of the non-index columns, this might be a bad idea. regards, tom lane
Bruce Momjian <pgman@candle.pha.pa.us> writes:
> However, as integers, there would have to be a lot of duplicate values
> before compression would be a win.

Not necessarily. If, for instance, most of the values fit in int2, then
the upper zero bytes would be fodder for compression. (If they *all* fit
in int2 then of course he's missing a trick...) The fact that they are
successfully using row compression on their old platform indicates that
there's some win available there.

			regards, tom lane
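Spelling out the trick Tom alludes to: if every value fits in 16 bits,
declaring the columns (or the array) as smallint halves the raw width
before compression even enters the picture. A sketch with hypothetical
names:

    -- smallint stores the same values in 2 bytes instead of 4,
    -- provided everything fits in -32768..32767.
    CREATE TABLE samples_narrow (
        sample_date date    NOT NULL,
        sample_id   integer NOT NULL,
        readings    smallint[]          -- 800 bytes instead of 1600
    );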
> If you just stuck all the integers into a single integer-array column,
> it would be 1600 bytes wide, which is ... hmm ... not quite wide enough
> to trigger the toast logic. Perhaps it would be worthwhile for you to
> run a custom build with TOAST_TUPLE_THRESHOLD/TOAST_TUPLE_TARGET set
> to half their standard values (see src/include/access/tuptoaster.h).

Could you point me to some docs on this TOAST mechanism? I only found
http://postgresql.zadnik.org/projects/devel-toast.html
but it seems very old. For instance: what kind of method is used to
compress the data?

> You'd not need to write any specialized code that way.

Great!

> Note that if you sometimes search on the values of one of the non-index
> columns, this might be a bad idea.

No, this never happens.
"Leonardo Francalanci" <lfrancalanci@simtel.ie> writes: > Could you point me to some docs on this TOAST-mechanism? > For instance: what kind of method is used to compress data? When in doubt, read the code ;-) src/backend/utils/adt/pg_lzcompress.c src/include/utils/pg_lzcompress.h regards, tom lane
Tom Lane <tgl@sss.pgh.pa.us> writes:
> "Leonardo Francalanci" <lfrancalanci@simtel.ie> writes:
> > Could you point me to some docs on this TOAST mechanism?
> > For instance: what kind of method is used to compress the data?
>
> When in doubt, read the code ;-)
> 	src/backend/utils/adt/pg_lzcompress.c
> 	src/include/utils/pg_lzcompress.h

Are toasted values stored in the table itself or in a separate table?

My understanding was that it was the latter, which leads me to wonder
whether he'll actually gain anything by having all the records in his
table be toasted. It'll mean every record lookup has to traverse two
indexes, and a sequential scan loses the sequential-read performance
boost.

Or am I wrong, and toasted values can be stored inline?

--
greg
Greg Stark wrote:
> Are toasted values stored in the table itself or in a separate table?

In a separate table, if they exceed a threshold.

> My understanding was that it was the latter, which leads me to wonder
> whether he'll actually gain anything by having all the records in his
> table be toasted. It'll mean every record lookup has to traverse two
> indexes, and a sequential scan loses the sequential-read performance
> boost.
>
> Or am I wrong, and toasted values can be stored inline?

They can be, but are not by default. See:
http://www.postgresql.org/docs/current/static/sql-altertable.html

  SET STORAGE
    This form sets the storage mode for a column. This controls whether
    this column is held inline or in a supplementary table, and whether
    the data should be compressed or not. PLAIN must be used for
    fixed-length values such as integer and is inline, uncompressed.
    MAIN is for inline, compressible data. EXTERNAL is for external,
    uncompressed data, and EXTENDED is for external, compressed data.
    EXTENDED is the default for all data types that support it. The use
    of EXTERNAL will, for example, make substring operations on a text
    column faster, at the penalty of increased storage space.

Joe
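Applied to the earlier array sketch (hypothetical names), forcing
compressed-but-inline storage would look like:

    -- MAIN keeps the column compressed but inline, avoiding the extra
    -- toast-table lookups Greg is worried about:
    ALTER TABLE samples ALTER COLUMN readings SET STORAGE MAIN;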
On 8/26/2004 4:13 PM, Greg Stark wrote:
> Tom Lane <tgl@sss.pgh.pa.us> writes:
>
>> "Leonardo Francalanci" <lfrancalanci@simtel.ie> writes:
>> > Could you point me to some docs on this TOAST mechanism?
>> > For instance: what kind of method is used to compress the data?
>>
>> When in doubt, read the code ;-)
>> 	src/backend/utils/adt/pg_lzcompress.c
>> 	src/include/utils/pg_lzcompress.h
>
> Are toasted values stored in the table itself or in a separate table?

Some here and some there. Toast tries by default to get a row under 2K in
size. As long as that isn't the case, it compresses the largest varlena
attribute. When there are no more uncompressed varlena attributes, it
takes the largest value and stores it in slices in the toast table, again
only until the main row is under 2K.

> My understanding was that it was the latter, which leads me to wonder
> whether he'll actually gain anything by having all the records in his
> table be toasted. It'll mean every record lookup has to traverse two
> indexes, and a sequential scan loses the sequential-read performance
> boost.
>
> Or am I wrong, and toasted values can be stored inline?

Depends on the type of query. Queries that really access a lot of toasted
values lose. Queries that shovel around a lot of rows but don't touch
most of the toasted values win.

Jan

--
#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.                                  #
#================================================== JanWieck@Yahoo.com #
Joe Conway <mail@joeconway.com> writes:
> Greg Stark wrote:
> > Are toasted values stored in the table itself or in a separate table?
>
> In a separate table, if they exceed a threshold.

How do you check to see how many records, or ideally which records, are
being toasted and/or stored externally?

--
greg
Greg Stark wrote:
> How do you check to see how many records, or ideally which records, are
> being toasted and/or stored externally?

I don't know of a built-in way to do that from SQL, but the attached
seems to work for me.

Joe
Joe Conway wrote:
> Greg Stark wrote:
>> How do you check to see how many records, or ideally which records, are
>> being toasted and/or stored externally?
>
> I don't know of a built-in way to do that from SQL, but the attached
> seems to work for me.

FWIW, this version has a bit more robust type checking. The last one
would crash if passed a non-varlena column.

regression=# \d foo
      Table "public.foo"
 Column |  Type   | Modifiers
--------+---------+-----------
 f1     | integer |
 f2     | text    |

regression=# select checktoast(f1), checktoast(f2) from foo;
     checktoast      |     checktoast
---------------------+---------------------
 inline,uncompressed | inline,uncompressed
 inline,uncompressed | inline,compressed
(2 rows)

Joe
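The attachments themselves are not reproduced in the archive. What
follows is a hypothetical reconstruction of such a function against the
8.0-era backend headers, not Joe's actual code; the varlena macros
changed in 8.3, so treat it purely as a sketch:

    #include "postgres.h"
    #include "fmgr.h"
    #include "utils/lsyscache.h"

    /* Suggested registration (library path is an assumption):
     *   CREATE FUNCTION checktoast(anyelement) RETURNS text
     *       AS '$libdir/checktoast', 'checktoast' LANGUAGE C;
     */
    PG_FUNCTION_INFO_V1(checktoast);

    Datum
    checktoast(PG_FUNCTION_ARGS)
    {
        Oid         argtype = get_fn_expr_argtype(fcinfo->flinfo, 0);
        const char *desc;
        text       *result;
        int         len;

        if (get_typlen(argtype) != -1)
        {
            /* fixed-length types (integer, date, ...) are never toasted */
            desc = "inline,uncompressed";
        }
        else
        {
            /* fetch the raw datum WITHOUT detoasting it, then inspect
             * the varlena header flag bits */
            struct varlena *attr = (struct varlena *) PG_GETARG_POINTER(0);

            if (VARATT_IS_EXTERNAL(attr))
                desc = VARATT_IS_COMPRESSED(attr) ? "external,compressed"
                                                  : "external,uncompressed";
            else
                desc = VARATT_IS_COMPRESSED(attr) ? "inline,compressed"
                                                  : "inline,uncompressed";
        }

        len = strlen(desc);
        result = (text *) palloc(VARHDRSZ + len);
        VARATT_SIZEP(result) = VARHDRSZ + len;  /* pre-8.3 length macro */
        memcpy(VARDATA(result), desc, len);
        PG_RETURN_TEXT_P(result);
    }

The get_typlen() check is what keeps a pass-by-value argument such as
integer from being dereferenced as a pointer, matching the "robust type
checking" note above.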
Joe Conway <mail@joeconway.com> writes:
> I don't know of a built-in way to do that from SQL, but the attached
> seems to work for me.

Cool. Thank you.

--
greg