Re: page compression - Mailing list pgsql-hackers

From Jim Nasby
Subject Re: page compression
Date
Msg-id 40030A90-48A3-4AD5-AD19-350A90112181@nasby.net
In response to Re: page compression  (Simon Riggs <simon@2ndQuadrant.com>)
Responses Re: page compression  (Robert Haas <robertmhaas@gmail.com>)
List pgsql-hackers
On Jan 2, 2011, at 5:36 PM, Simon Riggs wrote:
> On Tue, 2010-12-28 at 09:10 -0600, Andy Colson wrote:
>
>> I know it's been discussed before, and one big problem is license and
>> patent problems.
>
> Would like to see a design for that. There are a few different ways we
> might want to do that, and I'm interested to see if it's possible to get
> compressed pages to be indexable as well.
>
> For example, if you compress 2 pages into 8Kb then you do one I/O and
> out pops 2 buffers. That would work nicely with ring buffers.
>
> Or you might try to have pages > 8Kb in one block, which would mean
> decompressing every time you access the page. That wouldn't be much of a
> problem if we were just seq scanning.
>
> Or you might want to compress the whole table at once, so it can only be
> read by seq scan. Efficient, but no indexes.

FWIW, last time I looked at how Oracle handled compression, it would only compress existing data. As soon as you
modified a row, it ended up un-compressed, presumably in a different page that was also un-compressed.

I wonder if it would be feasible to use a fork to store where a compressed page lives inside the heap... if we could do
that I don't see any reason why indexes wouldn't work. The changes required to support that might not be too horrific
either...
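A toy model of that mapping fork (the entry layout and names are entirely hypothetical, just to make the idea concrete): each logical page number indexes a fixed-size entry recording which physical block holds its compressed image, and at which slot within that block.

```python
import struct

# Hypothetical map-fork entry: physical block number (uint32) plus
# the slot within that block where the compressed page was packed.
ENTRY_FMT = "<IH"
ENTRY_SIZE = struct.calcsize(ENTRY_FMT)  # 6 bytes per heap page


def write_entry(fork, logical_pageno, physical_block, slot):
    """Record that logical page N now lives at (block, slot)."""
    off = logical_pageno * ENTRY_SIZE
    fork[off:off + ENTRY_SIZE] = struct.pack(ENTRY_FMT, physical_block, slot)


def lookup(fork, logical_pageno):
    """Translate a logical page number to its physical location."""
    off = logical_pageno * ENTRY_SIZE
    return struct.unpack_from(ENTRY_FMT, fork, off)
```

The reason indexes could keep working under this scheme is that index entries would continue to name logical page numbers; only the read path consults the fork to find the physical block to decompress, so nothing in the index structure itself has to change.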
--
Jim C. Nasby, Database Architect                   jim@nasby.net
512.569.9461 (cell)                         http://jim.nasby.net



