Thread: Faster compression, again

Faster compression, again

From: Daniel Farina
Date:
For 9.3 at a minimum.

The topic of LZO became mired in doubts about:

* Potential Patents
* The author's intention for the implementation to be GPL

Since then, Google released "Snappy," also an LZ77-class
implementation, and it has been ported to C (recently, and with some
quirks, like no LICENSE file...yet, although it is linked from the
original Snappy project).  The original Snappy (C++) has a BSD license
and a patent grant (which shields you from Google, at least).  Do we
want to investigate a very-fast compression algorithm inclusion again
in the 9.3 cycle?

I've been using the similar implementation "LZO" for WAL archiving and
it is a significant savings (not as much as pg_lesslog, but also less
invasive).  It is also fast enough that even if one were not to uproot
TOAST's compression that it would probably be very close to a complete
win for protocol traffic, whereas SSL's standardized zlib can
definitely be a drag in some cases.
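
To give a sense of how little code the compression step itself takes,
here is a rough sketch of what that path can look like against liblzo2
(lzo1x_1_compress and its work-memory requirement are liblzo2's; the
scaffolding around them is illustrative only):

    #include <stdlib.h>
    #include <lzo/lzo1x.h>

    /* Compress one in-memory buffer (say, a WAL segment already read from
     * disk) with LZO1X-1.  lzo_init() must have been called once at startup;
     * output sizing follows the worst case given in the LZO documentation. */
    static int
    compress_buffer(unsigned char *src, lzo_uint src_len,
                    unsigned char **dst_out, lzo_uint *dst_len_out)
    {
        static lzo_align_t wrkmem[(LZO1X_1_MEM_COMPRESS + sizeof(lzo_align_t) - 1)
                                  / sizeof(lzo_align_t)];
        lzo_uint        dst_len = src_len + src_len / 16 + 64 + 3;
        unsigned char  *dst = malloc(dst_len);

        if (dst == NULL)
            return -1;
        if (lzo1x_1_compress(src, src_len, dst, &dst_len, wrkmem) != LZO_E_OK)
        {
            free(dst);
            return -1;
        }
        *dst_out = dst;
        *dst_len_out = dst_len;
        return 0;
    }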

This idea resurfaces often, but the reason why I wrote in about it is
because I have a table which I categorized as "small" but was, in
fact, 1.5MB, which made transferring it somewhat slow over a remote
link.  zlib compression takes it down to about 550K and lzo (similar,
but not identical) 880K.  If we're curious how it affects replication
traffic, I could probably gather statistics on LZO-compressed WAL
traffic, of which we have a pretty huge amount captured.

-- 
fdr


Re: Faster compression, again

From: "ktm@rice.edu"
Date:
On Wed, Mar 14, 2012 at 11:06:16AM -0700, Daniel Farina wrote:
> For 9.3 at a minimum.
> 
> The topic of LZO became mired in doubts about:
> 
> * Potential Patents
> * The author's intention for the implementation to be GPL
> 
> Since then, Google released "Snappy," also an LZ77-class
> implementation, and it has been ported to C (recently, and with some
> quirks, like no LICENSE file...yet, although it is linked from the
> original Snappy project).  The original Snappy (C++) has a BSD license
> and a patent grant (which shields you from Google, at least).  Do we
> want to investigate a very-fast compression algorithm inclusion again
> in the 9.3 cycle?
> 

+1 for Snappy and a very fast compression algorithm.

Regards,
Ken


Re: Faster compression, again

From: Merlin Moncure
Date:
On Wed, Mar 14, 2012 at 1:06 PM, Daniel Farina <daniel@heroku.com> wrote:
> For 9.3 at a minimum.
>
> The topic of LZO became mired in doubts about:
>
> * Potential Patents
> * The author's intention for the implementation to be GPL
>
> Since then, Google released "Snappy," also an LZ77-class
> implementation, and it has been ported to C (recently, and with some
> quirks, like no LICENSE file...yet, although it is linked from the
> original Snappy project).  The original Snappy (C++) has a BSD license
> and a patent grant (which shields you from Google, at least).  Do we
> want to investigate a very-fast compression algorithm inclusion again
> in the 9.3 cycle?
>
> I've been using the similar implementation "LZO" for WAL archiving and
> it is a significant savings (not as much as pg_lesslog, but also less
> invasive).  It is also fast enough that even if one were not to uproot
> TOAST's compression that it would probably be very close to a complete
> win for protocol traffic, whereas SSL's standardized zlib can
> definitely be a drag in some cases.
>
> This idea resurfaces often, but the reason why I wrote in about it is
> because I have a table which I categorized as "small" but was, in
> fact, 1.5MB, which made transferring it somewhat slow over a remote
> link.  zlib compression takes it down to about 550K and lzo (similar,
> but not identical) 880K.  If we're curious how it affects replication
> traffic, I could probably gather statistics on LZO-compressed WAL
> traffic, of which we have a pretty huge amount captured.

there are plenty of on gpl lz based libraries out there (for example:
http://www.fastlz.org/) and always have been.  they are all much
faster than zlib.  the main issue is patents...you have to be careful
even though all the lz77/78 patents seem to have expired or apply to
specifics not relevant to general use.

see here for the last round of talks on this:
http://archives.postgresql.org/pgsql-performance/2009-08/msg00052.php

lzo is nearing its 20th birthday, so even if you are paranoid about
patents (admittedly, there is good reason to be), the window is
closing fast to have patent issues that aren't (a) expired or (b) covered
by prior art on that or the various copycat implementations, at least
in the US.

snappy looks amazing.

merlin


Re: Faster compression, again

From: Andrew Dunstan
Date:

On 03/14/2012 04:10 PM, Merlin Moncure wrote:
> there are plenty of on gpl lz based libraries out there (for example:
> http://www.fastlz.org/) and always have been.  they are all much
> faster than zlib.  the main issue is patents...you have to be careful
> even though all the lz77/78 patents seem to have expired or apply to
> specifics not relevant to general use.
>

We're not going to include GPL code in the backend. We have enough 
trouble with readline and that's only for psql. So the fact that there 
are GPL libraries can't help us, whether there are patent issues or not.

cheers

andrew


Re: Faster compression, again

From: "ktm@rice.edu"
Date:
On Wed, Mar 14, 2012 at 04:43:55PM -0400, Andrew Dunstan wrote:
> 
> 
> On 03/14/2012 04:10 PM, Merlin Moncure wrote:
> >there are plenty of on gpl lz based libraries out there (for example:
> >http://www.fastlz.org/) and always have been.  they are all much
> >faster than zlib.  the main issue is patents...you have to be careful
> >even though all the lz77/78 patents seem to have expired or apply to
> >specifics not relevant to general use.
> >
> 
> We're not going to include GPL code in the backend. We have enough
> trouble with readline and that's only for psql. So the fact that
> there are GPL libraries can't help us, whether there are patent
> issues or not.
> 
> cheers
> 
> andrew
> 

That is what makes Google's Snappy so appealing, a BSD license.

Regards,
Ken


Re: Faster compression, again

From: Merlin Moncure
Date:
On Wed, Mar 14, 2012 at 3:43 PM, Andrew Dunstan <andrew@dunslane.net> wrote:
>
>
> On 03/14/2012 04:10 PM, Merlin Moncure wrote:
>>
>> there are plenty of on gpl lz based libraries out there (for example:
>> http://www.fastlz.org/) and always have been.  they are all much
>> faster than zlib.  the main issue is patents...you have to be careful
>> even though all the lz77/78 patents seem to have expired or apply to
>> specifics not relevant to general use.
>>
>
> We're not going to include GPL code in the backend. We have enough trouble
> with readline and that's only for psql. So the fact that there are GPL
> libraries can't help us, whether there are patent issues or not.

er, typo: I meant to say: "*non-gpl* lz based..."  :-).

merlin


Re: Faster compression, again

From: Daniel Farina
Date:
On Wed, Mar 14, 2012 at 2:03 PM, Merlin Moncure <mmoncure@gmail.com> wrote:
> er, typo: I meant to say: "*non-gpl* lz based..."  :-).

Given that, few I would say have had the traction that LZO and Snappy
have had, even though in many respects they are interchangeable in the
general trade-off spectrum. The question is: what burden of proof is
required to convince the project that Snappy does not have exorbitant
patent issues, in proportion to the utility of having a compression
scheme of this type integrated?

One would think Google's lawyers did their homework to ensure they
would not be trolled for hideous sums of money by disclosing and
releasing the exact implementation of the compression used virtually
everywhere.  If anything, that may have been a more complicated issue
than writing and releasing yet-another-LZ77 implementation.

--
fdr


Re: Faster compression, again

From: Tom Lane
Date:
Daniel Farina <daniel@heroku.com> writes:
> Given that, few I would say have had the traction that LZO and Snappy
> have had, even though in many respects they are interchangeable in the
> general trade-off spectrum. The question is: what burden of proof is
> required to convince the project that Snappy does not have exorbitant
> patent issues, in proportion to the utility of having a compression
> scheme of this type integrated?

Another not-exactly-trivial requirement is to figure out how to not
break on-disk compatibility when installing an alternative compression
scheme.  In hindsight it might've been a good idea if pglz_compress had
wasted a little bit of space on some sort of version identifier ...
but it didn't.
        regards, tom lane


Re: Faster compression, again

From: "Kevin Grittner"
Date:
Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Another not-exactly-trivial requirement is to figure out how to
> not break on-disk compatibility when installing an alternative
> compression scheme.  In hindsight it might've been a good idea if
> pglz_compress had wasted a little bit of space on some sort of
> version identifier ... but it didn't.

Doesn't it always start with a header of two int32 values where the
first must be smaller than the second?  That seems like enough to
get traction for an identifiably different header for an alternative
compression technique.

-Kevin


Re: Faster compression, again

From: Daniel Farina
Date:
On Wed, Mar 14, 2012 at 2:58 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Daniel Farina <daniel@heroku.com> writes:
>> Given that, few I would say have had the traction that LZO and Snappy
>> have had, even though in many respects they are interchangeable in the
>> general trade-off spectrum. The question is: what burden of proof is
>> required to convince the project that Snappy does not have exorbitant
>> patent issues, in proportion to the utility of having a compression
>> scheme of this type integrated?
>
> Another not-exactly-trivial requirement is to figure out how to not
> break on-disk compatibility when installing an alternative compression
> scheme.  In hindsight it might've been a good idea if pglz_compress had
> wasted a little bit of space on some sort of version identifier ...
> but it didn't.

I was thinking more that the latency and throughput of LZ77 schemes
may be best applied first to protocol compression.  The downside is
that it requires more protocol wrangling, but at least terabytes of
on-disk format don't get into the picture, even though LZ77 on the
data itself may be attractive.

I'm interested in allowing transports to be layered below FEBE
(similar to how SSL is "below", except without the complexity of being
tied into auth & auth) in a couple of respects, and this is one of
them.

--
fdr


Re: Faster compression, again

From: Robert Haas
Date:
On Wed, Mar 14, 2012 at 6:08 PM, Kevin Grittner
<Kevin.Grittner@wicourts.gov> wrote:
> Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> Another not-exactly-trivial requirement is to figure out how to
>> not break on-disk compatibility when installing an alternative
>> compression scheme.  In hindsight it might've been a good idea if
>> pglz_compress had wasted a little bit of space on some sort of
>> version identifier ... but it didn't.
>
> Doesn't it always start with a header of two int32 values where the
> first must be smaller than the second?  That seems like enough to
> get traction for an identifiably different header for an alternative
> compression technique.

The first of those words is vl_len_, which we can't fiddle with too
much, but the second is rawsize, which we can definitely fiddle with.
Right now, rawsize < vl_len_ means it's compressed; and rawsize ==
vl_len_ means it's uncompressed.  As you point out, rawsize > vl_len_
is undefined; also, and maybe simpler, rawsize < 0 is undefined.
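
For reference, the header being talked about is just this (more or less
what pg_lzcompress.h declares today):

    typedef struct PGLZ_Header
    {
        int32       vl_len_;        /* varlena header (do not touch directly!) */
        int32       rawsize;
    } PGLZ_Header;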

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: Faster compression, again

From: Tom Lane
Date:
Robert Haas <robertmhaas@gmail.com> writes:
> On Wed, Mar 14, 2012 at 6:08 PM, Kevin Grittner
> <Kevin.Grittner@wicourts.gov> wrote:
>> Doesn't it always start with a header of two int32 values where the
>> first must be smaller than the second?  That seems like enough to
>> get traction for an identifiably different header for an alternative
>> compression technique.

> The first of those words is vl_len_, which we can't fiddle with too
> much, but the second is rawsize, which we can definitely fiddle with.
> Right now, rawsize < vl_len_ means it's compressed; and rawsize ==
> vl_len_ means it's uncompressed.  As you point out, rawsize > vl_len_
> is undefined; also, and maybe simpler, rawsize < 0 is undefined.

Well, let's please not make the same mistake again of assuming that
there will never again be any other ideas in this space.  IOW, let's
find a way to shoehorn in an actual compression-method ID value of some
sort.  I don't particularly care for trying to push that into rawsize,
because you don't really have more than about one bit to work with
there, unless you eat the entire word for ID purposes which seems
excessive.

After looking at pg_lzcompress.c for a bit, it appears to me that the
LSB of the first byte of compressed data must always be zero, because
the very first control bit has to say "copy a literal byte"; you can't
have a back-reference until there's some data in the output buffer.
So what I suggest is that we keep rawsize the same as it is, but peek at
the first byte after that to decide what we have: even means existing
compression method, an odd value is an ID byte selecting some new
method.  This gives us room for 128 new methods before we have trouble
again, while consuming only one byte which seems like acceptable
overhead for the purpose.
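
In backend-style pseudo-C, the decompression-side test would amount to
something like this (sketch only, names made up):

    /* Peek at the first payload byte after the two header words.  In the
     * existing pglz format its low-order bit -- the very first control
     * bit -- must be 0 (a literal), so any odd value is free to serve as
     * an ID byte selecting a new method. */
    static int
    compression_method_id(const unsigned char *datum)
    {
        unsigned char   first = datum[2 * sizeof(int32)];  /* skip vl_len_, rawsize */

        if ((first & 1) == 0)
            return 0;           /* existing pglz compression */
        return first;           /* odd: ID byte for some new method */
    }
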
        regards, tom lane


Re: Faster compression, again

From: Robert Haas
Date:
On Wed, Mar 14, 2012 at 9:44 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Well, let's please not make the same mistake again of assuming that
> there will never again be any other ideas in this space.  IOW, let's
> find a way to shoehorn in an actual compression-method ID value of some
> sort.  I don't particularly care for trying to push that into rawsize,
> because you don't really have more than about one bit to work with
> there, unless you eat the entire word for ID purposes which seems
> excessive.

Well, you don't have to go that far.  For example, you could dictate
that, when the value is negative, the most significant byte indicates
which compression algorithm is in use (128 possible compression
algorithms).  The remaining 3 bytes indicate the compressed length;
but when they're all zero, the compressed length is instead stored in
the following 4-byte word.  This consumes one additional 4-byte word
for values that take >= 16MB compressed, but that's presumably a
non-problem.
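
Spelled out, that layout might be read roughly like so (purely
illustrative, nothing here is settled):

    /* rawsize >= 0 keeps its historical pglz meaning.  When rawsize < 0,
     * the seven bits below the sign bit name the algorithm, and the low
     * three bytes carry the compressed length; zero there escapes to the
     * following 4-byte word for values that compress to >= 16MB. */
    static int
    extended_algorithm_id(int32 rawsize)
    {
        if (rawsize >= 0)
            return -1;                              /* not extended */
        return ((uint32) rawsize >> 24) & 0x7f;     /* 128 possible algorithms */
    }

    static uint32
    extended_compressed_length(int32 rawsize, uint32 next_word)
    {
        uint32      len = (uint32) rawsize & 0x00ffffff;

        return (len != 0) ? len : next_word;        /* 0 = look at next word */
    }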

> After looking at pg_lzcompress.c for a bit, it appears to me that the
> LSB of the first byte of compressed data must always be zero, because
> the very first control bit has to say "copy a literal byte"; you can't
> have a back-reference until there's some data in the output buffer.
> So what I suggest is that we keep rawsize the same as it is, but peek at
> the first byte after that to decide what we have: even means existing
> compression method, an odd value is an ID byte selecting some new
> method.  This gives us room for 128 new methods before we have trouble
> again, while consuming only one byte which seems like acceptable
> overhead for the purpose.

That would work, too.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Re: Faster compression, again

From: Daniel Farina
Date:
On Wed, Mar 14, 2012 at 11:06 AM, Daniel Farina <daniel@heroku.com> wrote:
>...and it has been ported to C (recently, and with some
> quirks, like no LICENSE file...yet, although it is linked from the
> original Snappy project).

I poked the author about the license and he fixed it in a jiffy.  Now
under BSD, with Intel's Copyright.  He seems to be committing a few
enhancements, but the snail's trail of the Internet suggests that this
code has made its way into Linux as well, including btrfs.

So now I guess we can have at it... https://github.com/andikleen/snappy-c/

--
fdr


Re: Faster compression, again

From: Simon Riggs
Date:
On Wed, Mar 14, 2012 at 6:06 PM, Daniel Farina <daniel@heroku.com> wrote:

> If we're curious how it affects replication
> traffic, I could probably gather statistics on LZO-compressed WAL
> traffic, of which we have a pretty huge amount captured.

What's the compression like for shorter chunks of data? Is it worth
considering using this for the libpq copy protocol and therefore
streaming replication also?

--
 Simon Riggs                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


Re: Faster compression, again

From: Daniel Farina
Date:
On Thu, Mar 15, 2012 at 3:14 PM, Simon Riggs <simon@2ndquadrant.com> wrote:
> On Wed, Mar 14, 2012 at 6:06 PM, Daniel Farina <daniel@heroku.com> wrote:
>
>> If we're curious how it affects replication
>> traffic, I could probably gather statistics on LZO-compressed WAL
>> traffic, of which we have a pretty huge amount captured.
>
> What's the compression like for shorter chunks of data? Is it worth
> considering using this for the libpq copy protocol and therefore
> streaming replication also?

The overhead is a length preamble of between 1 and 5 bytes, each
reserving the high bit as a continuation bit (so one byte for small
data), and then it's straight into the data.  So I think it could be
applied to most payloads that are only a few bytes wide.  Presumably
that limit could be lifted, but the format description only allows for
2**32 - 1 for the uncompressed size.
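
Concretely, that preamble is just the uncompressed length encoded as a
little-endian base-128 varint, per the Snappy format description; a
sketch of how it is produced:

    #include <stddef.h>
    #include <stdint.h>

    /* Emit a length the way Snappy's preamble does: 7 bits per byte, least
     * significant group first, high bit set on every byte but the last.
     * Lengths under 128 fit in one byte; the maximum is 5 bytes. */
    static size_t
    encode_varint32(uint32_t value, unsigned char *out)
    {
        size_t      n = 0;

        while (value >= 0x80)
        {
            out[n++] = (unsigned char) ((value & 0x7f) | 0x80);
            value >>= 7;
        }
        out[n++] = (unsigned char) value;
        return n;
    }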

I'd really like to find a way to layer both message-oblivious and
message-aware transport under FEBE with both backend and frontend
support without committing the project to new code for-ever-and-ever.
I guess I could investigate it in brief now, unless you've already
thought about/done some work in that area.

-- 
fdr


Re: Faster compression, again

From: "ktm@rice.edu"
Date:
On Thu, Mar 15, 2012 at 10:14:12PM +0000, Simon Riggs wrote:
> On Wed, Mar 14, 2012 at 6:06 PM, Daniel Farina <daniel@heroku.com> wrote:
>
> > If we're curious how it affects replication
> > traffic, I could probably gather statistics on LZO-compressed WAL
> > traffic, of which we have a pretty huge amount captured.
>
> What's the compression like for shorter chunks of data? Is it worth
> considering using this for the libpq copy protocol and therefore
> streaming replication also?
>
> --
>  Simon Riggs                   http://www.2ndQuadrant.com/
>  PostgreSQL Development, 24x7 Support, Training & Services

Here is a pointer to some tests with Snappy+CouchDB:

https://github.com/fdmanana/couchdb/blob/b8f806e41727ba18ed6143cee31a3242e024ab2c/snappy-couch-tests.txt

They checked compression on smaller chunks of data. I have extracted the
basic results. The first number is the original size in bytes, followed
by the compressed size in bytes, the compressed size as a percentage of
the original, and the compression ratio:

77 -> 60, 78% or 1.3:1
120 -> 104, 87% or 1.15:1
127 -> 80, 63% or 1.6:1
5942 -> 2930, 49% or 2:1

It looks like a good candidate for both the libpq copy protocol and
streaming replication. My two cents.
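
If anyone wants to reproduce this sort of measurement locally, a minimal
driver against the C bindings that ship with Google's Snappy (snappy-c.h)
would look roughly like this:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <snappy-c.h>

    int
    main(void)
    {
        const char *input = "a short row or protocol message goes here";
        size_t      in_len = strlen(input);
        size_t      out_len = snappy_max_compressed_length(in_len);
        char       *out = malloc(out_len);

        if (out != NULL &&
            snappy_compress(input, in_len, out, &out_len) == SNAPPY_OK)
            printf("%zu -> %zu bytes (%.0f%%)\n",
                   in_len, out_len, 100.0 * out_len / in_len);

        free(out);
        return 0;
    }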

Regards,
Ken


Re: Faster compression, again

From: Simon Riggs
Date:
On Thu, Mar 15, 2012 at 10:34 PM, Daniel Farina <daniel@heroku.com> wrote:

> I'd really like to find a way to layer both message-oblivious and
> message-aware transport under FEBE with both backend and frontend
> support without committing the project to new code for-ever-and-ever.
> I guess I could investigate it in brief now, unless you've already
> thought about/done some work in that area.

Not done anything in that area myself.

I think it's important that we have compression for the COPY protocol
within libpq, so I'll add that to my must-do list - but would be more
than happy if you wanted to tackle that yourself.

--
 Simon Riggs                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services


Re: Faster compression, again

From: Huchev
Date:
For a C implementation, it could be interesting to consider the LZ4 algorithm, since
it is written natively in this language. In contrast, Snappy has been ported
to C by Andy from the original C++ Google code, which also translates into
less extensive usage and tests.

http://code.google.com/p/lz4/

Furthermore, LZ4's license is BSD. And it has been reported in several tests
as being faster than Snappy/LZO, especially on decompression speed.

http://article.gmane.org/gmane.comp.file-systems.btrfs/15744

And one last point: there is a "high compression" mode, which could be useful
for data rarely written/modified but often read.

http://code.google.com/p/lz4hc/




Re: Faster compression, again

From: Daniel Farina
Date:
On Tue, Apr 3, 2012 at 7:29 AM, Huchev <hugochevrain@gmail.com> wrote:
> For a C implementation, it could be interesting to consider the LZ4 algorithm, since
> it is written natively in this language. In contrast, Snappy has been ported
> to C by Andy from the original C++ Google code, which also translates into
> less extensive usage and tests.

From what I can tell, the C implementation of snappy has more tests
than this LZ4 implementation, including a fuzz tester.  It's a
maintained part of Linux as well, and used for btrfs --- this is why
it was ported.  The high compression version of LZ4 is apparently
LGPL.  And, finally, there is the issue of patents: snappy has several
multi-billion dollar companies that can be held liable (originator
Google, as well as anyone connected to Linux), and to the best of my
knowledge, nobody has been held to extortion yet.

Consider me unconvinced as to this line of argument.

-- 
fdr


Re: Faster compression, again

From: Huchev
Date:
Well, the patent argument, used like this, looks like a wild card, which
can be freely interpreted as a mortal danger for some, and a non-issue
for others.  Perfect scaremongering.  Quite frankly, I don't buy that one
implementation is safer because there is "Google backing it".  I can't
think of any reason why a byte-aligned LZ77 algorithm could face any
risk.  And btw, just look at the number of companies which had to pay
protection money to Microsoft or face litigation with Apple because they
were using Google's Android.  It looks to me that Google is more a magnet
for such dangers than a protector.

Regarding test tools: yes, this is correct, Snappy C has more fuzzer
tools provided within the package.

Regarding integration into btrfs, and therefore into Linux, both
implementations look on equal terms.  I haven't seen anything which says
that one has more chances than the other of being part of Linux 3.5.
In fact, maybe both will be integrated at the same time.

However, a little-publicized fact is that quite a few people have tried
both implementations (Snappy C and LZ4), and there were more
failures/difficulties reported with Snappy C.  It doesn't mean that
Snappy C is bad, just more complex to use.  It seems that the LZ4
implementation is more straightforward: fewer dependencies, fewer risks,
less time spent properly optimizing it; well, in a word, simpler.

On 5 April 2012 01:11, Daniel Farina wrote:
> On Tue, Apr 3, 2012 at 7:29 AM, Huchev <hugochevrain@gmail.com> wrote:
>> For a C implementation, it could be interesting to consider the LZ4
>> algorithm, since it is written natively in this language. In contrast,
>> Snappy has been ported to C by Andy from the original C++ Google code,
>> which also translates into less extensive usage and tests.
>
> From what I can tell, the C implementation of snappy has more tests
> than this LZ4 implementation, including a fuzz tester.  It's a
> maintained part of Linux as well, and used for btrfs --- this is why
> it was ported.  The high compression version of LZ4 is apparently
> LGPL.  And, finally, there is the issue of patents: snappy has several
> multi-billion dollar companies that can be held liable (originator
> Google, as well as anyone connected to Linux), and to the best of my
> knowledge, nobody has been held to extortion yet.
>
> Consider me unconvinced as to this line of argument.
>
> --
> fdr