On Sat, 2006-01-21 at 22:30 +0100, Tino Wildenhain wrote:
> Rod Taylor wrote:
> > On Sat, 2006-01-21 at 12:48 -0300, Alvaro Herrera wrote:
> >
> >>Rod Taylor wrote:
> >>
> >>>Is there any way of bumping this limit or am I stuck breaking up the
> >>>transaction?
> >>
> >>Wow, I never heard of anyone reaching the limit :-( Sorry, you are
> >>stuck (short of changing CommandId to 64 bits, which would bloat your
> >>tables considerably ...)
> As a quick fix (as in quickly fixed, and quick to run ;)
> You could load your update data into a temp table via
> COPY and then update the large table in one sweep.
> I guess you don't have 2^31 tables to update? ;)
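For anyone following the thread, I read that suggestion as something along
these lines (the table and column names here are invented); the point is that
the big table is only touched by a single UPDATE, so the transaction consumes
a handful of CommandIds no matter how many rows change:

    -- stage the new values in a temp table
    CREATE TEMP TABLE upd (id integer, newval text);
    COPY upd FROM STDIN;      -- bulk-load the update data from the client

    -- one statement against the big table
    UPDATE bigtable
       SET val = upd.newval
      FROM upd
     WHERE bigtable.id = upd.id;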
I wouldn't have written a plpgsql function if I were doing the same thing
with every tuple, though. Running multiple statements across the 40GB heap
would take much longer to complete.
I've resorted to a client-side script that COPYs the data out of the various
partitions in the main structure, applies the logic, then COPYs it back into
a new partition. Once that completes, the original partition is dropped so
the data sets swap.
This is somewhat slower than my original method of doing it all on the
server side, but not by much, since the reduced server-side CPU load seems
to offset the increased network I/O.
--