On Thu, Aug 13, 2009 at 11:20 PM, Sam Mason<sam@samason.me.uk> wrote:
> On Thu, Aug 13, 2009 at 03:31:39PM -0400, Greg Smith wrote:
>> If your client app is coded correctly to handle large packets of data, it
>> should work up to the size limits documented at
>> http://www.postgresql.org/about/ , so you probably have nothing to worry
>> about here.
>
> Is it worth having a note about having enough memory floating around
> for those limits to actually be hit in practice? There would be no
> way of creating a row 1.6TB in size in one go, it would be ~800 UPDATE
> statements to get it up to that size as far as I can see.
That wouldn't work, actually. If you did something like "UPDATE tab SET
a = a || a", the first thing Postgres does when it executes the
concatenation operator is retrieve the original "a" and decompress it
(twice in this case, once per operand). Then it constructs the result
entirely in memory before toasting it. At the very least, one copy of
"a" and one copy of the compressed "a" have to fit in memory.
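As a minimal sketch of the pattern that runs out of memory (table and
column names here are hypothetical, not from the original thread):

```sql
-- Hypothetical table; each UPDATE doubles the stored value.
CREATE TABLE tab (a text);
INSERT INTO tab VALUES (repeat('x', 1000000));  -- roughly 1 MB to start

-- Each execution decompresses the old value of "a" (twice, once per
-- operand), builds the doubled result fully in memory, and only then
-- TOASTs it back out -- so peak memory use grows with the value size.
UPDATE tab SET a = a || a;
```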
To work with objects that don't fit comfortably in memory you really
have to use the lo (large object) interface. TOAST lets you get away
with it only for special cases like substr() or length(), not in
general.
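For reference, a sketch of chunked access through the lo interface
(the file path and the OID shown are hypothetical placeholders; 262144
is the INV_READ flag from libpq-fs.h, and descriptors from lo_open
only live for the enclosing transaction):

```sql
-- Load a server-side file into a large object; returns its OID.
SELECT lo_import('/tmp/bigdata.bin');

BEGIN;
SELECT lo_open(16385, 262144);   -- OID returned by lo_import; INV_READ
SELECT loread(0, 8192);          -- read an 8 kB chunk via descriptor 0
SELECT lo_lseek(0, 8192, 0);     -- advance the offset; 0 = SEEK_SET
SELECT loread(0, 8192);          -- next chunk, never the whole object
COMMIT;
```

Unlike a toasted value, the object is stored in fixed-size chunks, so
each loread() touches only the pages for that range.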
--
greg
http://mit.edu/~gsstark/resume.pdf