On Fri, Feb 03, 2006 at 19:38:04 +0100,
Patrick Rotsaert <patrick.rotsaert@arrowup.be> wrote:
>
> I have 5.1GB of free disk space. If this is the cause, I have a
> problem... or is there another way to extract (and remove) duplicate rows?
How about processing a subset of the ids in each pass, making multiple
passes until all of the ids have been checked? As long as you don't have
to use chunks that are too small, this might work for you.
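
Something along these lines might work (a rough sketch; "pix" and
"item_id" are hypothetical names for your table and for the column that
defines a duplicate, and item_id is assumed to be an integer):

    -- Pass r (for r = 0, 1, ..., 9): only look at one tenth of the ids.
    -- Because duplicate rows share the same item_id value, every
    -- duplicate group falls entirely within a single pass.
    SELECT item_id, count(*) AS dup_count
    FROM pix
    WHERE item_id % 10 = 0      -- substitute r for 0 on each pass
    GROUP BY item_id
    HAVING count(*) > 1;

Each pass only has to sort/hash a tenth of the table, so the GROUP BY
needs far less temporary disk space than doing the whole table at once.
You can adjust the modulus up or down depending on how much free space
you actually have.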