Just a patch to clean up a bug in pg_dump whose sole purpose is to confuse
users. Why should -d crash pg_dump just because you have a big table? The
problem is that a plain SELECT * makes libpq buffer the entire result set in
client memory, so a sufficiently large table runs the client out of memory.
I couldn't find this listed anywhere, not even on the TODO list, so if some
change to the library has already fixed it, I apologise.
This patch replaces the simple SELECT * with a cursor that fetches 1,000 rows
at a time. The 1,000 was chosen because it was small enough to test with, but
realistically 10,000 wouldn't be too much.
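
For anyone curious, the heart of the change is a loop along these lines (just
a sketch, not the patch verbatim; the cursor and table names are placeholders
and error handling is elided):

    PGresult *res;

    PQexec(conn, "BEGIN");
    PQexec(conn, "DECLARE dump_cur CURSOR FOR SELECT * FROM mytable");
    for (;;)
    {
        res = PQexec(conn, "FETCH 1000 FROM dump_cur");
        if (PQresultStatus(res) != PGRES_TUPLES_OK || PQntuples(res) == 0)
        {
            PQclear(res);       /* no more rows, or an error */
            break;
        }
        /* ... emit one INSERT statement per returned row ... */
        PQclear(res);
    }
    PQexec(conn, "CLOSE dump_cur");
    PQexec(conn, "COMMIT");

This way client memory use is bounded by the fetch size rather than by the
table size.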
Also, it seems there is no regression test for pg_dump. Is this intentional,
or has no one come up with a good way to test it?
http://svana.org/kleptog/pgsql/pgsql-pg_dump.patch (also attached)
Please CC me on any replies.
P.S. For those people waiting for the timing patch, I'm just dealing with a
little issue involving getting a flag from ExplainOneQuery down to
ExecInitNode. I think I may have an answer, but it needs testing.
--
Martijn van Oosterhout <kleptog@svana.org>
http://svana.org/kleptog/
> It would be nice if someone came up with a certification system that
> actually separated those who can barely regurgitate what they crammed over
> the last few weeks from those who command secret ninja networking powers.