Re: pg_lzcompress strategy parameters - Mailing list pgsql-hackers

From: Tom Lane
Subject: Re: pg_lzcompress strategy parameters
Date:
Msg-id: 26793.1186353032@sss.pgh.pa.us
In response to: Re: pg_lzcompress strategy parameters (Gregory Stark <stark@enterprisedb.com>)
Responses: Re: pg_lzcompress strategy parameters (Jan Wieck <JanWieck@Yahoo.com>)
List: pgsql-hackers
Gregory Stark <stark@enterprisedb.com> writes:
> (Incidentally, this means what I said earlier about uselessly trying to
> compress objects below 256 is even grosser than I realized. If you have a
> single large object which even after compressing will be over the toast target
> it will force *every* varlena to be considered for compression even though
> they mostly can't be compressed. Considering a varlena smaller than 256 for
> compression only costs a useless palloc, so it's not the end of the world but
> still. It does seem kind of strange that a tuple which otherwise wouldn't be
> toasted at all suddenly gets all its fields compressed if you add one more
> field which ends up being stored externally.)

Yeah.  It seems like we should modify the first and third loops so that
if (after compression if any) the largest attribute is *by itself*
larger than the target threshold, then we push it out to the toast table
immediately, rather than continuing to compress other fields that might
well not need to be touched.
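As a rough illustration of that idea, here is a minimal sketch in C. It models the toasting decision with simplified, hypothetical types and names (`Attr`, `largest_inline_attr`, `should_push_external`, `toast_target`) rather than the actual PostgreSQL `toast_insert_or_update` code; it only shows the proposed rule that the largest attribute, if by itself over the target, goes external before any further compression is attempted:

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical, simplified stand-in for a tuple's varlena attributes. */
typedef struct
{
    size_t size;        /* current (possibly compressed) size in bytes */
    bool   external;    /* already pushed to the toast table? */
} Attr;

/* Return the index of the largest still-inline attribute, or -1. */
static int
largest_inline_attr(const Attr *attrs, int natts)
{
    int    best = -1;
    size_t best_size = 0;

    for (int i = 0; i < natts; i++)
    {
        if (!attrs[i].external && attrs[i].size > best_size)
        {
            best = i;
            best_size = attrs[i].size;
        }
    }
    return best;
}

/*
 * Proposed rule: if the largest attribute alone exceeds the toast
 * target, store it externally immediately, rather than looping on and
 * compressing the remaining (probably already small) attributes.
 */
static bool
should_push_external(const Attr *attrs, int natts, size_t toast_target)
{
    int i = largest_inline_attr(attrs, natts);

    return (i >= 0 && attrs[i].size > toast_target);
}
```

In the real loops this check would come before each further compression pass, so a tuple whose excess size is entirely due to one large field leaves its small fields untouched.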
        regards, tom lane

