2009/6/23 Rob Sargent <robjsargent@gmail.com>
Not sure if this belongs here or on the admin or performance list; apologies if so. (This may also be a second posting, as the first was from an unregistered account. Further apologies.)
My assumption is that any de/compression done by postgres would be server-side.
We're considering minimizing bandwidth utilization
Why? Are you hitting some limit?
by using client-side compression on a column value that will typically be multi-megabyte in size. We would use ALTER TABLE SET STORAGE EXTERNAL to prevent the server from doing unnecessary compression of its own.
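A minimal sketch of the client-side compression being described, using Python's zlib as a stand-in for whatever compressor the app would actually use (the table/column names in the comments are hypothetical, and the database calls themselves are only indicated):

```python
import zlib

def compress_payload(payload: bytes, level: int = 9) -> bytes:
    # Compress on the client before the INSERT. If the target bytea
    # column has been set to STORAGE EXTERNAL (e.g. a hypothetical
    # "ALTER TABLE docs ALTER COLUMN body SET STORAGE EXTERNAL"),
    # the server stores these bytes as-is, without attempting its
    # own LZ compression on top.
    return zlib.compress(payload, level)

def decompress_payload(blob: bytes) -> bytes:
    # Every client that reads the column must undo the compression.
    return zlib.decompress(blob)

data = b"multi-megabyte document contents ... " * 1000
blob = compress_payload(data)
assert decompress_payload(blob) == data  # round-trip is lossless
```

For redundant data like the sample above, `len(blob)` will be far smaller than `len(data)`; for already-compressed formats (JPEG, gzip archives) there is little to gain.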
Is this generally worthwhile?
No general answer, sorry.
It depends on other conditions and on what you want to achieve.
1. Almost always you will get a better on-disk compression ratio, since client-side tools can use stronger algorithms. To quote the PostgreSQL docs:
"The compression technique used is a fairly simple and very fast member of the LZ family of compression techniques. See src/backend/utils/adt/pg_lzcompress.c for the details."
2. You will force all client apps to decompress data on their side.
3. To minimize bandwidth utilisation, there are other ways (think compressed tunnels, SSL and so on - but they add per-connection overhead).
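As one concrete example of such a tunnel, SSH can compress all forwarded traffic with -C (hostnames and the local port here are hypothetical):

```shell
# Forward local port 6543 to the server's PostgreSQL port, with
# SSH-level compression (-C) applied to everything in the tunnel.
# -N: no remote command, just forwarding.
ssh -C -N -L 6543:localhost:5432 user@db.example.com
# Clients then connect to localhost:6543 instead of the server directly.
```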
Is there a great penalty for a query which delves into the value, given that the server will not be aware it's compressed?
Not clear. Do you mean something like

SELECT ... FROM table WHERE decompress(compressed_data) LIKE 'whatever'

??? Of course that will carry a great penalty.
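To see why: the server now holds only opaque bytes, so any predicate on the content requires decompressing every candidate value, row by row, with no index help. A sketch of the work such a query implies (purely illustrative; zlib stands in for the app's compressor):

```python
import zlib

# Hypothetical stored rows: each value was compressed client-side
# before being inserted, so the server cannot inspect the contents.
rows = [
    zlib.compress(
        (f"document {i} whatever" if i % 2 else f"document {i}").encode()
    )
    for i in range(1000)
]

# Evaluating something like ... LIKE '%whatever%' means decompressing
# every single value just to test the predicate -- a full scan plus
# per-row decompression cost.
matches = [blob for blob in rows if b"whatever" in zlib.decompress(blob)]
```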
I assume we're pretty much on our own to prevent such actions (i.e. the app can never query against this column via SQL).
certainly.
--
Filip Rembiałkowski
JID/mailto: filip.rembialkowski@gmail.com
http://filip.rembialkowski.net/