Re: processing large amount of rows with plpgsql - Mailing list pgsql-general

From Geert Mak
Subject Re: processing large amount of rows with plpgsql
Msg-id 156BB7A4-1F36-4023-A885-77BDB2AF2699@verysmall.org
In response to Re: processing large amount of rows with plpgsql  (Merlin Moncure <mmoncure@gmail.com>)
Responses Re: processing large amount of rows with plpgsql  ("Marc Mamin" <M.Mamin@intershop.de>)
List pgsql-general
On 08.08.2012, at 22:04, Merlin Moncure wrote:

> What is the general structure of the procedure?  In particular, how
> are you browsing and updating the rows?

Here it is -

DECLARE
    statistics_row statistics%ROWTYPE;
BEGIN
    FOR statistics_row IN SELECT * FROM statistics ORDER BY time ASC
    LOOP
        -- ... here some very minimal transformation is done
        -- ... and the row is written into the second table
    END LOOP;
    RETURN 1;
END;

> There is (almost) no way to
> force commit inside a function --

So what you are saying is that this behavior is normal, and we should either equip ourselves with enough disk space
(which I am trying now; it is a cloud server, which I am resizing to gain more disk space, to see what happens) or
do it with an external (scripting) language?
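
If the extra disk space does not help, I imagine the batched variant would look roughly like this (an untested
sketch on my part: statistics_copy is a made-up destination table, and it assumes the time values are distinct;
run from psql outside an explicit transaction, so each INSERT commits on its own):

-- untested sketch: keyset batching on "time"; assumes distinct time values
-- and a statistics_copy table with a compatible schema (my assumptions)
INSERT INTO statistics_copy
SELECT *   -- the very minimal transformation would go here
FROM statistics
WHERE time > (SELECT coalesce(max(time), '-infinity') FROM statistics_copy)
ORDER BY time
LIMIT 100000;
-- repeated (e.g. from a shell loop) until it inserts zero rows,
-- with each batch committed before the next one starts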

> there has been some discussion about
> stored procedure and/or autonomous transaction feature in terms of
> getting there.
>
> I say 'almost' because you can emulate some aspects of autonomous
> transactions with dblink, but that may not be a very good fit for your
> particular case.

I have already seen dblink mentioned in this context somewhere... Though if plpgsql performs well with more disk
space, I'll leave it at that for now. It is a one-time operation, this one.
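
For the archives, my understanding of the dblink trick is roughly the following (an untested sketch on my part;
the connection string, table names and the cutoff are all made up, and the dblink module has to be installed).
Work sent through dblink runs over its own connection, so it commits independently of the calling function's
transaction:

-- untested sketch; requires the dblink module (CREATE EXTENSION dblink)
SELECT dblink_connect('loopback', 'dbname=mydb');
-- this INSERT commits in the remote session as soon as dblink_exec
-- returns, no matter what the calling transaction does afterwards
SELECT dblink_exec('loopback',
    'INSERT INTO statistics_copy
         SELECT * FROM statistics WHERE time < ''2012-06-01''');
SELECT dblink_disconnect('loopback');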

Thank you,
Geert
