Re: QuickLZ compression algorithm (Re: Inclusion in the PostgreSQL backend for toasting rows) - Mailing list pgsql-hackers

From Mark Mielke
Subject Re: QuickLZ compression algorithm (Re: Inclusion in the PostgreSQL backend for toasting rows)
Date
Msg-id 49628C81.7030402@mark.mielke.cc
In response to Re: QuickLZ compression algorithm (Re: Inclusion in the PostgreSQL backend for toasting rows)  (Gregory Stark <stark@enterprisedb.com>)
Responses Re: QuickLZ compression algorithm (Re: Inclusion in the PostgreSQL backend for toasting rows)  (Gregory Stark <stark@enterprisedb.com>)
List pgsql-hackers
Guaranteed compression of large data fields is the responsibility of the 
client. The database should feel free to compress behind the scenes if 
it turns out to be desirable, but an expectation that it compresses is 
wrong in my opinion.

That said, I'm wondering why compression has to be a problem, or why >1 
Mbyte is a reasonable compromise? I missed the original thread that led 
to the 8.4 behaviour. It seems to me that transparent file system 
compression doesn't have limits like "files must be less than 1 Mbyte to 
be compressed", and such file systems don't exhibit poor performance. I 
remember back in the 386/486 days that I would always DriveSpace 
compress everything, because hard disks were so slow then that 
DriveSpace would actually increase performance. The toast tables already 
give a sort of block-addressable scheme. Compression could be applied 
per block or per set of blocks, allowing a seek into a block, or, if 
compression doesn't seem to be working for the first few blocks, the 
later blocks could be stored uncompressed? Or is that too complicated 
compared to what we have now? :-)
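
To make the idea concrete, here is a minimal sketch (not PostgreSQL code, 
and not how TOAST actually works) of per-block compression with an 
adaptive bail-out, using zlib for illustration. Each chunk gets a one-byte 
flag so a reader can seek to chunk i without decompressing everything 
before it, and if the early chunks don't shrink, the rest are stored raw. 
CHUNK_SIZE, SAMPLE_CHUNKS and MIN_SAVINGS are made-up tuning knobs.

/*
 * Sketch of per-chunk compression with adaptive fallback.
 * Build with: cc sketch.c -lz
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <zlib.h>

#define CHUNK_SIZE    8192                 /* hypothetical block size */
#define SAMPLE_CHUNKS 4                    /* stop trying after this many bad chunks */
#define MIN_SAVINGS   (CHUNK_SIZE / 8)     /* require at least ~12% savings */

/* Store one chunk: returns bytes written to 'out' (1 flag byte + payload). */
static size_t
store_chunk(const unsigned char *in, size_t in_len,
            unsigned char *out, int try_compress)
{
    if (try_compress)
    {
        uLongf clen = compressBound(in_len);
        unsigned char *cbuf = malloc(clen);

        if (cbuf != NULL &&
            compress2(cbuf, &clen, in, in_len, Z_DEFAULT_COMPRESSION) == Z_OK &&
            clen + MIN_SAVINGS <= in_len)
        {
            out[0] = 1;                    /* flag: compressed */
            memcpy(out + 1, cbuf, clen);
            free(cbuf);
            return clen + 1;
        }
        free(cbuf);
    }

    out[0] = 0;                            /* flag: stored raw */
    memcpy(out + 1, in, in_len);
    return in_len + 1;
}

int
main(void)
{
    /* Fake a 100 KB datum: half repetitive (compressible), half "random". */
    size_t datum_len = 100 * 1024;
    unsigned char *datum = malloc(datum_len);
    for (size_t i = 0; i < datum_len; i++)
        datum[i] = (i < datum_len / 2) ? 'x' : (unsigned char) rand();

    unsigned char *out = malloc(datum_len + datum_len / CHUNK_SIZE + CHUNK_SIZE);
    size_t out_len = 0;
    int bad_chunks = 0;
    int compressing = 1;

    for (size_t off = 0; off < datum_len; off += CHUNK_SIZE)
    {
        size_t len = datum_len - off < CHUNK_SIZE ? datum_len - off : CHUNK_SIZE;
        size_t stored = store_chunk(datum + off, len, out + out_len, compressing);

        /* Adaptive bail-out: if early chunks don't compress, stop trying. */
        if (compressing && out[out_len] == 0 && ++bad_chunks >= SAMPLE_CHUNKS)
            compressing = 0;

        out_len += stored;
    }

    printf("%zu bytes in, %zu bytes stored\n", datum_len, out_len);
    free(datum);
    free(out);
    return 0;
}

The per-chunk flag is what buys the random access: the reader only has to 
decompress the chunk it wants, which is roughly what a compressing file 
system does under the covers.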

Cheers,
mark

-- 
Mark Mielke <mark@mielke.cc>


