There has been some talk lately about compression. With a bit of lateral thinking, I guess this could be used to contain the bloat induced by updates. Of course, this is just my hypothesis.
Compression in indexes:
Instead of storing (value, tuple identifier) keys in the indexes, store
(value, [tuple identifier list]); i.e. all tuples which have the same
indexed value are referenced by the same index tuple, instead of having
one index tuple per actual table tuple.

The length of the list would of course be limited to the space actually
available on an index page; if many rows have the same indexed value,
several index tuples would be generated so that each index tuple fits on
an index page.

This would make the index smaller (more likely to fit in RAM) at the cost
of a little CPU overhead for index modifications, but index scans would
actually use less CPU (no need to compare the indexed value for each
table tuple).
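
To make this concrete, such a "posting list" style index entry could be laid
out roughly as below. This is only a minimal sketch in C, not PostgreSQL's
actual index tuple format; the struct names, the int4 key, and the size
accounting are assumptions made up for illustration.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical tuple identifier: (block number, line pointer offset). */
typedef struct {
    uint32_t block;
    uint16_t offset;
} ItemPointer;

/*
 * Hypothetical compressed index entry: one indexed value followed by the
 * list of all tuples carrying that value, instead of one entry per tuple.
 * The list is capped so the whole entry still fits on an index page;
 * overflow would simply start a new entry with the same value.
 */
typedef struct {
    int32_t     value;   /* indexed value (an int4 column here)     */
    uint16_t    ntids;   /* number of tuple identifiers that follow */
    ItemPointer tids[];  /* flexible array of tuple identifiers     */
} CompressedIndexEntry;

/* Space needed on the page for an entry with n tuple identifiers. */
static size_t entry_size(uint16_t n)
{
    return sizeof(CompressedIndexEntry) + n * sizeof(ItemPointer);
}

int main(void)
{
    /* 1000 rows sharing one indexed value:
       classic layout    : 1000 entries of (value, tuple identifier)
       compressed layout : one value plus 1000 tuple identifiers      */
    size_t classic    = 1000 * (sizeof(int32_t) + sizeof(ItemPointer));
    size_t compressed = entry_size(1000);
    printf("classic: %zu bytes, compressed: %zu bytes\n",
           classic, compressed);
    return 0;
}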
Compression in data pages:
The article that circulated on the list suggested several types of
compression (offset, dictionary, etc.). The point is that several
versions of the same row on the same page can be compressed well because
these versions probably have similar column values.
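
Just to illustrate the idea, here is a toy sketch of delta-encoding a new
row version against the previous version on the same page. The byte-level
run encoding and the assumption that rows fit in 256 bytes are inventions
of the sketch, not anything an actual page format would use.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/*
 * Toy byte-level delta: encode a new row version as a list of
 * (offset, length, bytes) runs that differ from the previous version.
 * Returns the number of bytes written to 'out'.
 */
static size_t delta_encode(const uint8_t *prev, const uint8_t *next,
                           size_t len, uint8_t *out)
{
    size_t pos = 0, outlen = 0;
    while (pos < len) {
        if (prev[pos] == next[pos]) { pos++; continue; }
        size_t start = pos;
        while (pos < len && prev[pos] != next[pos]) pos++;
        out[outlen++] = (uint8_t) start;         /* run offset (row < 256 B) */
        out[outlen++] = (uint8_t) (pos - start); /* run length               */
        memcpy(out + outlen, next + start, pos - start);
        outlen += pos - start;
    }
    return outlen;
}

int main(void)
{
    /* Two versions of the same row: only the "counter" field changed. */
    uint8_t v1[16] = "id=42 counter=1";
    uint8_t v2[16] = "id=42 counter=2";
    uint8_t delta[32];
    size_t n = delta_encode(v1, v2, sizeof v1, delta);
    printf("full version: %zu bytes, delta: %zu bytes\n", sizeof v2, n);
    return 0;
}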
Just a thought...