Hi all,
On Thu, 2010-04-08 at 07:45 -0400, Robert Haas wrote:
> >> 2010/4/8 Thom Brown <thombrown@gmail.com>:
> >> > So you could write:
> >> >
> >> > DELETE FROM massive_table WHERE id < 40000000 LIMIT 10000;
> I've certainly worked around the lack of this syntax more than once.
> And I bet it's not even that hard to implement.
The fact that it's not implemented has nothing to do with its
complexity (in fact it is probably just a matter of enabling it) -
you'll have a hard time convincing some old-time hackers on this list
that the non-determinism inherent in this kind of query is
acceptable ;-)
There is a workaround for it, which works quite well in practice:
DELETE FROM massive_table
 WHERE ctid = ANY (ARRAY(SELECT ctid
                           FROM massive_table
                          WHERE id < 40000000
                          LIMIT 10000));
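
If you want to check the plan, plain EXPLAIN works on a DELETE too and
does not actually execute it - same statement and values as above:

EXPLAIN
DELETE FROM massive_table
 WHERE ctid = ANY (ARRAY(SELECT ctid
                           FROM massive_table
                          WHERE id < 40000000
                          LIMIT 10000));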
Run an EXPLAIN on it and you'll see the plan is about as good as it
will get. But beware that it might be less optimal than you think: you
will most likely be sequentially scanning the table for each chunk
unless you put some selective WHERE conditions on it too - and even
then you will still scan the whole already-deleted part, not just the
next chunk. The deleted records won't get out of the way by
themselves, you need to vacuum, and that's probably a problem too on a
big table. So on a massive table it will most likely help you less
than you think: the run time per chunk will increase with each chunk
unless you're able to vacuum efficiently. In any case you need to
balance the chunk size against the scanned portion of the table, so
that you get a reasonable run time per chunk without too much overhead
from the chunking process itself...
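
Just to make the loop concrete, the driving script would roughly
repeat something like the following until the DELETE reports 0 rows
(the VACUUM in between is exactly the point above - note it cannot run
inside a transaction block, so it has to be issued as a separate
top-level statement):

-- one chunk: delete at most 10000 of the matching rows
DELETE FROM massive_table
 WHERE ctid = ANY (ARRAY(SELECT ctid
                           FROM massive_table
                          WHERE id < 40000000
                          LIMIT 10000));

-- reclaim the dead rows before the next chunk, otherwise every chunk
-- keeps re-scanning the already-deleted part
VACUUM massive_table;

-- repeat both statements until the DELETE affects 0 rows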
Cheers,
Csaba.