On Tue, 2007-07-24 at 19:06, Stephan Szabo wrote:
> > Unfortunately I don't think this will work. Multiple backends will happily
> > pick up the same ctid in their selects and then try to delete the same
> > records.
>
> I'm pretty sure he said that the batch processing (and the delete) would
> only be happening from one backend at a time, no concurrency on that
> portion, merely concurrency with the large volume of inserts.
Yes, it's exactly like that... only it also happened accidentally that two
batch processes started at the same time, and they must neither double-process
the data nor lose any of it. The above scheme is OK with that
too... but the array version from Tom is even better :-)
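For reference, the array version as I read it looks roughly like this (this
is only a sketch, not Tom's exact SQL; the table and column names and the
batch size are made up, and RETURNING assumes 8.2 or later):

  -- grab and delete one batch in a single statement; the ctid array is
  -- built from a subselect, and RETURNING hands the deleted rows back
  -- to the batch job for processing
  DELETE FROM event_queue
  WHERE ctid = ANY (ARRAY(SELECT ctid FROM event_queue LIMIT 1000))
  RETURNING *;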
Regarding the proposed mark/process/delete version: we have done it that
way, and we always ran into some corner case that lost data... so even if
it is possible to get it right, it is definitely not easy. The
delete/copy/process-private-data version is much safer, and it can actually
be done in one transaction to ensure crash safety.
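The single-transaction delete/copy version would be something along these
lines (again just a sketch with invented names, assuming processing happens
before COMMIT; if anything fails, the rollback puts the rows back in the
shared queue untouched):

  BEGIN;
  -- copy one batch into a private temp table that goes away at commit
  CREATE TEMP TABLE batch ON COMMIT DROP AS
    SELECT * FROM event_queue LIMIT 1000;
  -- delete exactly the copied rows from the shared queue; concurrent
  -- inserts only add rows we never touched, so they are unaffected
  DELETE FROM event_queue
  WHERE id IN (SELECT id FROM batch);
  -- ... process the rows in "batch" here ...
  COMMIT;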
Cheers,
Csaba.