Re: Client/Server compression? - Mailing list pgsql-hackers

From Arguile
Subject Re: Client/Server compression?
Date
Msg-id LLENKEMIODLDJNHBEFBOKEFFEGAA.arguile@lucentstudios.com
Whole thread Raw
In response to Re: Client/Server compression?  (Bruce Momjian <pgman@candle.pha.pa.us>)
Responses Re: Client/Server compression?  (Greg Copeland <greg@CopelandConsulting.Net>)
List pgsql-hackers
Bruce Momjian wrote:
>
> Greg Copeland wrote:
> > Well, it occurred to me that if a large result set were to be identified
> > before transport between a client and server, a significant amount of
> > bandwidth may be saved by using a moderate level of compression.
> > Especially with something like result sets, which I tend to believe may
> > lend itself well to compression.
>
> I should have said compressing the HTTP protocol, not FTP.
>
> > This may be of value for users with low bandwidth connectivity to their
> > servers or where bandwidth may already be at a premium.
>
> But don't slow links do the compression themselves, like PPP over a
> modem?

Yes, but that's packet-level compression. You'll never get even close to the
ratio you can achieve by compressing the result set as a whole.
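To make the point concrete, here's a small sketch (in Python, with a made-up
result set) comparing compressing each ~1500-byte "packet" independently
against compressing the whole set in one pass. The row format is invented
for illustration:

```python
import zlib

# A toy "result set": many similar rows, as database output often is.
rows = [f"id={i}|name=user{i}|status=active\n".encode() for i in range(1000)]
whole = b"".join(rows)

# Packet-level compression: each packet is compressed on its own, so
# redundancy *across* packets is never exploited.
packet_size = 1500
packets = [whole[i:i + packet_size] for i in range(0, len(whole), packet_size)]
per_packet = sum(len(zlib.compress(p)) for p in packets)

# Whole-set compression: one pass over the entire result set.
whole_set = len(zlib.compress(whole))

print(f"original: {len(whole)}  per-packet: {per_packet}  whole: {whole_set}")
```

On data like this the whole-set figure comes out well below the per-packet
sum, because the compressor's window can see repetition that spans packet
boundaries.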

Speaking of HTTP, it's fairly common for web servers (Apache has mod_gzip)
to gzip content before sending it to the client (which decompresses it
transparently), especially when dealing with somewhat static content (so it
can be cached in compressed form). This can provide great bandwidth savings.
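The static-content case is where this really pays off: markup-heavy pages
compress dramatically, and the compressed bytes can be produced once and
cached. A quick sketch with a fabricated page:

```python
import gzip

# Toy "static page": repetitive markup compresses extremely well.
page = b"<html><body>" + b"<tr><td>row</td></tr>" * 500 + b"</body></html>"

compressed = gzip.compress(page)  # roughly what mod_gzip would send
ratio = len(compressed) / len(page)
print(f"{len(page)} -> {len(compressed)} bytes ({ratio:.1%})")
# Since the page never changes, this compressed form can be cached and
# reused for every client that advertises Accept-Encoding: gzip.
```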

I'm sceptical of the benefit such compression would provide in this setting,
though. We're dealing with result sets that would have to be compressed
every time (no caching), which might be a bit expensive on a database
server. Having it as a default-off option for psql might be nice, but I
wonder if it's worth the time, effort, and CPU cycles.
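If it were added as an option, the CPU cost could at least be tuned: zlib's
compression levels trade cycles for ratio. A sketch of that trade-off on
invented row data (the per-session "knob" is hypothetical, not anything
PostgreSQL actually exposes):

```python
import zlib

data = b"".join(f"({i}, 'customer {i}', {i * 3.14:.2f})\n".encode()
                for i in range(5000))

# Hypothetical per-session knob: level 1 spends few CPU cycles for a
# decent ratio; level 9 burns more CPU for a smaller wire payload.
fast = zlib.compress(data, 1)
best = zlib.compress(data, 9)

print(f"raw: {len(data)}  level 1: {len(fast)}  level 9: {len(best)}")
```

A low level like 1 would probably be the sane default for a busy server,
since most of the win comes from compressing at all, not from the level.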
