I have a large table that I need to traverse in full. I currently
issue a single unrestricted SELECT and then fetch the rows one at a
time, on the assumption that fetching just one row at a time would
keep my memory consumption negligible.
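In outline, my code looks something like this (a simplified sketch;
I've written it in terms of DBI with DBD::Pg, and the connection
details and table name are just placeholders):

    use strict;
    use warnings;
    use DBI;

    # Placeholder connection string; in reality I connect with
    # AutoCommit off, since I run in SERIALIZABLE mode.
    my $dbh = DBI->connect('dbi:Pg:dbname=mydb', undef, undef,
                           { RaiseError => 1, AutoCommit => 0 });

    my $sth = $dbh->prepare('SELECT * FROM big_table');
    $sth->execute;    # memory use seems to jump here, before any fetch

    while (my @row = $sth->fetchrow_array) {
        # process one row at a time
    }

    $sth->finish;
    $dbh->disconnect;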
However, judging by the memory consumption of my front-end process,
it would seem that the SELECT pulls the entire result set into client
memory before I even fetch the first row! Can anyone confirm that
this is in fact what happens?
If so, is there any way to avoid it? The obvious workaround would be
to use LIMIT and OFFSET to grab just a few thousand rows at a time,
but won't each query then pay a time penalty while the backend skips
over the millions of rows in front of the ones it needs?
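For illustration, the paged version I have in mind would look roughly
like this (again a sketch; "id" stands in for whatever unique key the
table has, and 5000 is an arbitrary chunk size):

    my $chunk  = 5000;    # arbitrary page size
    my $offset = 0;

    while (1) {
        # An ORDER BY on a unique key is needed for the pages
        # to be stable from one query to the next.
        my $sth = $dbh->prepare(
            "SELECT * FROM big_table ORDER BY id LIMIT $chunk OFFSET $offset");
        $sth->execute;

        my $seen = 0;
        while (my @row = $sth->fetchrow_array) {
            $seen++;
            # process one row at a time
        }
        $sth->finish;

        last if $seen < $chunk;    # ran out of rows
        $offset += $chunk;         # the backend must skip this many rows
                                   # next time -- the overhead I'm asking about
    }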
Thanks for any clues anyone can provide!
Doug.
P.S. If it matters, I am using the Perl interface. I am also running
at the SERIALIZABLE isolation level...