I have a table like:
CREATE TABLE testtbl
(
    somestr  varchar(20),
    someint  int4,     -- number of seconds to add to somedate
    somedate abstime
);
It has some 6 million records in it.
If I do the following:
BEGIN WORK;
DECLARE cursorname CURSOR FOR
    SELECT somestr, someint, somedate FROM testtbl;
FETCH FORWARD 1000 IN cursorname;
-- (skip out and check the size of the backend process)
FETCH FORWARD 1000 IN cursorname;
FETCH FORWARD 1000 IN cursorname;
FETCH FORWARD 1000 IN cursorname;
-- (skip out and check the size of the backend process)
CLOSE cursorname;
END WORK;
the backend process grows to a certain size, and then grows little or not at
all between the first and second checks.
If the SELECT is changed to:
SELECT somestr, somedate, someint, somedate + timespan(someint)
the backend starts to grow steadily, especially after fetching forward 10,000
or so records.
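To narrow down whether the per-row timespan(someint) call is the part that
grows, I suppose the same cursor could be opened with a constant argument
(assuming 6.5 doesn't just fold the constant call away, which is itself part
of the test):

BEGIN WORK;
-- same operator and function, but a constant argument; if the backend
-- still grows at the same rate, the growth is in evaluating the '+'
-- rather than in computing timespan() per row
DECLARE cursorname CURSOR FOR
    SELECT somestr, someint, somedate, somedate + timespan(3600)
    FROM testtbl;
FETCH FORWARD 1000 IN cursorname;
-- (skip out and check the size of the backend process, as above)
CLOSE cursorname;
END WORK;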
Is this a memory leak?
Is there another way to stage the query (basically, adding a per-record
number of seconds to a date)?
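One workaround I can imagine (testtbl_staged is just a scratch name here) is
to evaluate the expression once up front with SELECT ... INTO and run the
cursor over the staged table, so the FETCHes themselves do no per-row
arithmetic; whether the SELECT INTO leaks the same way, and whether
duplicating 6 million rows is acceptable, I don't know:

BEGIN WORK;
-- evaluate the date arithmetic once, into a scratch table
SELECT somestr, someint, somedate,
       somedate + timespan(someint) AS newdate
    INTO TABLE testtbl_staged
    FROM testtbl;
-- the cursor now reads precomputed values only
DECLARE cursorname CURSOR FOR
    SELECT somestr, someint, somedate, newdate FROM testtbl_staged;
FETCH FORWARD 1000 IN cursorname;
CLOSE cursorname;
END WORK;
DROP TABLE testtbl_staged;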
This is on PostgreSQL 6.5.3 on FreeBSD 3.x.
--
[ Jim Mercer jim@reptiles.org +1 416 506-0654 ]
[ Reptilian Research -- Longer Life through Colder Blood ]
[ Don't be fooled by cheap Finnish imitations; BSD is the One True Code. ]