Hi ...
I have been working on some server-side scripts in PL/pgSQL, and I'm quite
happy with them (a nice and fast way to work with large data sets) :-)
But if my code runs a FOR loop over a huge SELECT result (about
1.7 million rows), my function begins to consume a lot of memory. The code
looks like this:
FOR trec IN SELECT f1, f2 FROM large_table LOOP
    ...
END LOOP;
When I limit the result to a small number of rows (about 1000), the
function still consumes a lot of memory, but once the function finishes
successfully the memory is freed nicely again. With the full SELECT I run
out of memory (I have 2 GB in this machine, plus swap :-)).
Is there a way to release this memory, or would a cursor be the answer
(something like the sketch below)? Does PL/pgSQL not apply normal scope
rules inside FOR loops?
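
For reference, this is roughly the cursor-based version I have in mind;
the cursor loop would replace the FOR loop inside my function's body, and
the loop body shown is just a placeholder, not my real code:

DECLARE
    cur CURSOR FOR SELECT f1, f2 FROM large_table;
    trec RECORD;
BEGIN
    OPEN cur;
    LOOP
        FETCH cur INTO trec;
        EXIT WHEN NOT FOUND;
        -- the per-row work on trec.f1 and trec.f2 goes here
    END LOOP;
    CLOSE cur;
END;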
Anyway, I worked around the problem by removing the topmost FOR loop and
instead calling the function from a SELECT, once per row, like this:
SELECT myfunc( f1, f2 ) FROM large_table;
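
In case it helps, the restructured function now looks roughly like this
(the parameter types, the result_table, and the INSERT are placeholders
standing in for my real per-row work):

CREATE OR REPLACE FUNCTION myfunc(p_f1 integer, p_f2 integer)
RETURNS void AS $$
BEGIN
    -- the body that used to sit inside the FOR loop, now working
    -- on one row's values (the INSERT is placeholder work)
    INSERT INTO result_table(f1, f2) VALUES (p_f1, p_f2);
END;
$$ LANGUAGE plpgsql;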
Is this a known issue, is it a bug, or is there something I have
misunderstood?
Who maintains that part of PostgreSQL, anyway?
/BL