Re: Client/Server compression? - Mailing list pgsql-hackers

From Kyle
Subject Re: Client/Server compression?
Date
Msg-id 15506.41833.4054.850914@doppelbock.patentinvestor.com
In response to Re: Client/Server compression?  (Greg Copeland <greg@CopelandConsulting.Net>)
Responses Re: Client/Server compression?  (Greg Copeland <greg@CopelandConsulting.Net>)
List pgsql-hackers
Greg Copeland wrote:
> [cut]
> My current thoughts are to allow for enabled/disabled compression and
> variable compression settings (1-9) within a database configuration. 
> Worse case, it may be fun to implement and I'm thinking there may
> actually be some surprises as an end result if it's done properly.
> 
> [cut]
>
> Greg


Wouldn't Tom's suggestion of riding on top of ssh give similar
results?  Anyway, it'd probably be a good proof of concept of whether
or not it's worth the effort.  And that brings up the question: how
would you measure the benefit?  I'd assume you'd get a good cut in
network traffic, but you'd take a hit in CPU time.  What's an
acceptable tradeoff?
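
Just to put rough numbers on that tradeoff, the crudest test I can
think of is to dump a representative result set to a file and push it
through zlib at each level, comparing the size reduction against the
CPU time burned.  A throwaway sketch (the file name and buffer sizes
are arbitrary, nothing postgres-specific about it):

#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <zlib.h>

#define MAX_IN  (4 * 1024 * 1024)

int
main(int argc, char **argv)
{
    static Bytef in[MAX_IN];
    static Bytef out[MAX_IN + MAX_IN / 100 + 64];   /* worst-case growth */
    int          level = (argc > 2) ? atoi(argv[2]) : 6;
    FILE        *fp = fopen((argc > 1) ? argv[1] : "resultset.dat", "rb");
    uLong        inlen;
    uLongf       outlen = sizeof(out);
    clock_t      start;
    double       secs;

    if (fp == NULL)
    {
        perror("fopen");
        return 1;
    }
    inlen = (uLong) fread(in, 1, sizeof(in), fp);
    fclose(fp);

    start = clock();
    if (compress2(out, &outlen, in, inlen, level) != Z_OK)
    {
        fprintf(stderr, "compress2 failed\n");
        return 1;
    }
    secs = (double) (clock() - start) / CLOCKS_PER_SEC;

    printf("level %d: %lu -> %lu bytes (%.1f%% of original), %.3f s CPU\n",
           level, (unsigned long) inlen, (unsigned long) outlen,
           100.0 * (double) outlen / (double) inlen, secs);
    return 0;
}

Compile with -lz and run it against the same dump at levels 1 and 9;
that should show pretty quickly whether the higher settings buy enough
extra shrinkage to justify the CPU.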

That's one reason I was thinking about the toast stuff.  If the
backend could serve toast directly, you'd get an improvement in
server-to-client network traffic without the server spending CPU time
on compression, since the data has already been compressed.

Let me know if this is feasible (or slap me if this is how things
already are): when the backend detoasts data, keep both copies in
memory.  When it comes time to put data on the wire, instead of
putting the whole enchilada down, give the client the compressed toast
instead.  And yeah, I guess this would require a protocol change to
flag the compressed data.  But it seems like a way to leverage work
already done.
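
For what it's worth, here's the kind of branch I'm picturing at the
point where the backend writes an attribute into the output buffer.
The leading flag byte and the helper name are made up purely for
illustration (this is the protocol change mentioned above, and the
real spot is presumably wherever printtup builds the row message):

#include "postgres.h"
#include "fmgr.h"
#include "lib/stringinfo.h"
#include "libpq/pqformat.h"

/*
 * Hypothetical helper: put one varlena attribute on the wire, skipping
 * decompression when the stored value is already toast-compressed.
 * Ignores the externally-stored case, which would need a fetch first.
 */
static void
send_varlena_maybe_compressed(StringInfo buf, Datum value)
{
    struct varlena *attr = (struct varlena *) DatumGetPointer(value);

    if (VARATT_IS_COMPRESSED(attr))
    {
        /* invented flag byte: 1 = payload is still toast-compressed */
        pq_sendbyte(buf, 1);
        /*
         * Ship the stored representation as-is; the compression header
         * (which carries the raw size) goes along with it, so the
         * client has what it needs to decompress.
         */
        pq_sendint(buf, VARSIZE(attr), 4);
        pq_sendbytes(buf, (char *) attr, VARSIZE(attr));
    }
    else
    {
        /* fall back to the current behaviour: detoast and send plain */
        struct varlena *plain = (struct varlena *) PG_DETOAST_DATUM(value);

        pq_sendbyte(buf, 0);
        pq_sendint(buf, VARSIZE(plain) - VARHDRSZ, 4);
        pq_sendbytes(buf, VARDATA(plain), VARSIZE(plain) - VARHDRSZ);
    }
}

The catch, of course, is that the client then has to know how to undo
the toast compression, which ties libpq to whatever algorithm the
backend uses.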

-kf


