On Fri, Aug 14, 2009 at 12:03:37AM +0100, Greg Stark wrote:
> On Thu, Aug 13, 2009 at 11:20 PM, Sam Mason<sam@samason.me.uk> wrote:
> > Is it worth having a note about having enough memory floating around
> > for those limits to actually be hit in practice? There would be no
> > way of creating a row 1.6TB in size in one go; it would take ~800 UPDATE
> > statements to get it up to that size, as far as I can see.
>
> That wouldn't work actually. If you did something like "UPDATE tab set
> a = a || a" the first thing Postgres does when it executes the
> concatenation operator is retrieve the original a and decompress it
> (twice in this case). Then it constructs the result entirely in memory
> before toasting. At the very least one copy of "a" and one copy of the
> compressed "a" have to fit in memory.
Yup, that would indeed break; I was thinking of a single UPDATE per
column. The ~800 figure comes from the fact that I think you may just
about be able to squeeze two 1GB literals into memory at a time, and
hence fill two of your 1600 columns with each UPDATE statement.
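The arithmetic behind those figures can be checked with a quick
back-of-envelope sketch (these are the limits discussed in the thread,
not measurements):

```python
# Back-of-envelope check of the figures above: PostgreSQL's 1GB cap on
# a single field value, the 1600-column-per-table limit, and the
# resulting ~1.6TB theoretical maximum row size.
GB = 1024 ** 3

max_field_bytes = 1 * GB   # 1GB limit on a single (toastable) value
max_columns = 1600         # hard per-table column limit

# If each UPDATE can hold roughly two 1GB literals in memory at once,
# it can fill two columns, so filling all 1600 columns takes:
columns_per_update = 2
updates_needed = max_columns // columns_per_update

# Theoretical maximum row size if every column holds a 1GB value:
max_row_bytes = max_columns * max_field_bytes

print(updates_needed)             # 800
print(max_row_bytes / 1024 ** 4)  # 1.5625, i.e. the "~1.6TB" row
```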
> To work with objects which don't fit comfortably in memory you really
> have to use the lo interface. Toast lets you get away with it only for
> special cases like substr() or length() but not in general.
Yup, the lo interface is of course much better for this sort of thing.
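For anyone following along, a minimal sketch of the lo route (the table
name, column name, and file path are made up for illustration):

```sql
-- Large objects are stored out of line and referenced by oid, so the
-- whole value never has to sit in memory at once.
CREATE TABLE docs (id serial PRIMARY KEY, body oid);

-- Server-side import streams the file in chunks.
INSERT INTO docs (body) VALUES (lo_import('/tmp/big.bin'));

-- Read a 4kB slice without materialising the rest
-- (262144 = INV_READ; must run inside a transaction):
SELECT loread(lo_open(body, 262144), 4096)
  FROM docs WHERE id = 1;

-- Large objects are not reference counted, so unlink explicitly
-- before dropping the row:
SELECT lo_unlink(body) FROM docs WHERE id = 1;
```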
--
Sam http://samason.me.uk/