Re: Client/Server compression? - Mailing list pgsql-hackers

From Mark Pritchard
Subject Re: Client/Server compression?
Date
Msg-id EGECIAPHKLJFDEJBGGOBMELLHPAA.mark@tangent.net.au
In response to Re: Client/Server compression?  (Bruce Momjian <pgman@candle.pha.pa.us>)
List pgsql-hackers
You can get some tremendous gains by compressing HTTP sessions - mod_gzip
for Apache does this very well.

I believe Slashdot saves on the order of 30% of its bandwidth by using
compression, as do sites like http://www.whitepages.com.au/ and
http://www.yellowpages.com.au/.

The mod_gzip trick is effectively similar to what Greg is proposing. Of
course, how often would you connect to your database over anything less than
a fast (100 Mbit+) LAN connection?
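To give a feel for the kind of gains on the table, here is a minimal sketch (in Python, purely for illustration) compressing a synthetic tabular payload with zlib at a moderate level. The row layout is invented for the example and is not the actual PostgreSQL FE/BE encoding; the point is just that repetitive result-set-style data compresses very well.

```python
import zlib

# Synthetic "result set": repetitive tabular rows, roughly what a wire
# payload might look like. The layout is an assumption for illustration
# only -- it is NOT the real PostgreSQL FE/BE encoding.
rows = "".join(f"{i}\tAlice Smith\tSydney\tNSW\t2000\n" for i in range(10_000))
raw = rows.encode("ascii")

# Moderate compression level: cheap on CPU, still a large win on this data.
compressed = zlib.compress(raw, 6)
ratio = len(compressed) / len(raw)
print(f"raw={len(raw)} bytes, compressed={len(compressed)} bytes, ratio={ratio:.2%}")
```

On data like this the compressed size comes out at a small fraction of the original, which is why the mod_gzip numbers above are plausible for text-heavy traffic.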

In any case, the conversation about FE/BE protocol changes comes up
frequently, and this thread would certainly impact that protocol. Has any
thought ever been put into using an existing standard such as HTTP instead
of the current proprietary PostgreSQL protocol? There are a lot of advantages:

* You could leverage the existing client libraries (java.net.URL etc) to
make writing PG clients (JDBC/ODBC/custom) an absolute breeze.

* Result sets / server responses could be returned in XML.

* The protocol handles extensions well (X-* headers)

* Load balancing across a postgres cluster would be trivial with any number
of software/hardware http load balancers.

* The prepared statement work needs to hit the FE/BE protocol anyway...
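To make the idea concrete, here is a small hypothetical sketch of what an HTTP-framed result set might look like. Everything specific here is invented for illustration: the /query endpoint, the XML schema, and the X-PG-Rows header are assumptions, not anything PostgreSQL actually speaks. It just shows the points above in miniature: a stock client library fetches the response, the result set comes back as XML, and an X-* header carries a protocol extension.

```python
import threading
import urllib.request
import xml.etree.ElementTree as ET
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical XML result-set encoding -- schema invented for this sketch.
RESULT_XML = b"""<resultset>
  <row><id>1</id><name>alice</name></row>
  <row><id>2</id><name>bob</name></row>
</resultset>"""

class QueryHandler(BaseHTTPRequestHandler):
    """Stand-in for a server that answers queries over HTTP (hypothetical)."""
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/xml")
        self.send_header("X-PG-Rows", "2")  # protocol extension via X-* header
        self.end_headers()
        self.wfile.write(RESULT_XML)

    def log_message(self, *args):
        pass  # keep the demo quiet

# Serve on an ephemeral local port in a background thread.
server = HTTPServer(("127.0.0.1", 0), QueryHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The "client" is just the standard library's URL machinery.
url = f"http://127.0.0.1:{server.server_port}/query"
with urllib.request.urlopen(url) as resp:
    rows = ET.fromstring(resp.read()).findall("row")
    print(resp.headers["X-PG-Rows"], len(rows))

server.shutdown()
```

Note that the client side is a dozen lines of stock library code, which is the "writing PG clients becomes a breeze" point: any language with an HTTP client and an XML parser gets a working client for free.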

If the project gurus thought this was worthwhile, I would certainly like to
have a crack at it.

Regards,

Mark

> -----Original Message-----
> From: pgsql-hackers-owner@postgresql.org
> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Bruce Momjian
> Sent: Friday, 15 March 2002 6:36 AM
> To: Greg Copeland
> Cc: PostgreSQL Hackers Mailing List
> Subject: Re: [HACKERS] Client/Server compression?
>
>
> Greg Copeland wrote:
>
> > Well, it occurred to me that if a large result set were to be identified
> > before transport between a client and server, a significant amount of
> > bandwidth may be saved by using a moderate level of compression.
> > Especially with something like result sets, which I tend to believe may
> > lend themselves well toward compression.
> >
> > Unlike FTP, which may be (and often is) transferring previously
> > compressed data, raw result sets being transferred between the server and
> > a remote client, IMHO, would tend to compress rather well, as I doubt
> > much of it would be truly random data.
> >
>
> I should have said compressing the HTTP protocol, not FTP.
>
> > This may be of value for users with low bandwidth connectivity to their
> > servers or where bandwidth may already be at a premium.
>
> But don't slow links do the compression themselves, like PPP over a
> modem?
>
> --
>   Bruce Momjian                        |  http://candle.pha.pa.us
>   pgman@candle.pha.pa.us               |  (610) 853-3000
>   +  If your life is a hard drive,     |  830 Blythe Avenue
>   +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026
>
> ---------------------------(end of broadcast)---------------------------
> TIP 2: you can get off all lists at once with the unregister command
>     (send "unregister YourEmailAddressHere" to majordomo@postgresql.org)
>


