Thread: Re: [pgadmin-hackers] Feature request: limited deletions

Re: [pgadmin-hackers] Feature request: limited deletions

From
Thom Brown
Date:
On 8 April 2010 11:55, Ian Barwick <barwick@gmail.com> wrote:
> 2010/4/8 Thom Brown <thombrown@gmail.com>:
> > I couldn't find any discussion on this, but the request is quite
> > straightforward.  Implement a LIMIT on DELETE statements like SELECT
> > statements.
> >
> > So you could write:
> >
> > DELETE FROM massive_table WHERE id < 40000000 LIMIT 10000;
> >
> > This would allow deletions in smaller batches rather than waiting
> > potentially hours for the server to mark all those rows as deleted and
> > commit it as one massive transaction.
>
> Is this a PgAdmin-specific question? If it is, apologies I am missing
> the context.
>
> If not, this is totally the wrong list, but why not use a subquery to
> control what is deleted?
>
> Ian Barwick

Erm... my mistake, I thought this was on the generic hackers list.  Moving
it over in this reply.

Thom

Re: [pgadmin-hackers] Feature request: limited deletions

From
Robert Haas
Date:
On Thu, Apr 8, 2010 at 7:05 AM, Thom Brown <thombrown@gmail.com> wrote:
> On 8 April 2010 11:55, Ian Barwick <barwick@gmail.com> wrote:
>>
>> 2010/4/8 Thom Brown <thombrown@gmail.com>:
>> > I couldn't find any discussion on this, but the request is quite
>> > straightforward.  Implement a LIMIT on DELETE statements like SELECT
>> > statements.
>> >
>> > So you could write:
>> >
>> > DELETE FROM massive_table WHERE id < 40000000 LIMIT 10000;
>> >
>> > This would allow deletions in smaller batches rather than waiting
>> > potentially hours for the server to mark all those rows as deleted and
>> > commit it as one massive transaction.
>>
>> Is this a PgAdmin-specific question? If it is, apologies I am missing
>> the context.
>>
>> If not, this is totally the wrong list, but why not use a subquery to
>> control what is deleted?
>
> Erm... my mistake, I thought this was on the generic hackers list.  Moving
> it over in this reply.

I've certainly worked around the lack of this syntax more than once.
And I bet it's not even that hard to implement.

...Robert
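
A common workaround of the kind Ian alludes to is to push the LIMIT into a
subquery on the key column. The following is a minimal sketch reusing the
massive_table/id names from the original request; it is illustrative only,
not code posted in this thread.

-- Hypothetical sketch: delete one batch of at most 10000 matching rows by
-- selecting their ids first.
DELETE FROM massive_table
WHERE id IN (SELECT id
             FROM massive_table
             WHERE id < 40000000
             LIMIT 10000);
-- Re-issue the statement, committing after each run, until it reports
-- DELETE 0, so the work is done in small transactions rather than one
-- massive one.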


Re: [pgadmin-hackers] Feature request: limited deletions

From
Csaba Nagy
Date:
Hi all,

On Thu, 2010-04-08 at 07:45 -0400, Robert Haas wrote:
> >> 2010/4/8 Thom Brown <thombrown@gmail.com>:
> >> > So you could write:
> >> >
> >> > DELETE FROM massive_table WHERE id < 40000000 LIMIT 10000;

> I've certainly worked around the lack of this syntax more than once.
> And I bet it's not even that hard to implement.

The fact that it's not implemented has nothing to do with its
complexity (in fact it is probably just a matter of enabling it) -
you'll have a hard time convincing some old-time hackers on this list
that the non-determinism inherent in this kind of query is
acceptable ;-)

There is a workaround, which actually works quite well:

DELETE FROM massive_table
WHERE ctid = ANY(ARRAY(SELECT ctid
                       FROM massive_table
                       WHERE id < 40000000
                       LIMIT 10000));

Just run an EXPLAIN on it and you'll see it won't get any better. But
beware that it might be less optimal than you think: you will likely be
sequentially scanning the table for each chunk unless you add some
selective WHERE conditions too, and even then you'll still scan the
whole already-deleted portion rather than just the next chunk. The
deleted records won't get out of the way by themselves; you need to
vacuum, and on a big table that's probably a problem too. So on a
massive table this will most likely help less than you think, and the
run time per chunk will increase with each chunk unless you're able to
vacuum efficiently. In any case you need to balance the chunk size
against the scanned portion of the table so that you get a reasonable
run time per chunk without too much overhead from the whole chunking
process...

Cheers,
Csaba.
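
To make the chunk-size/vacuum trade-off above concrete, here is a minimal
sketch of how the ctid workaround is typically driven. It reuses the
massive_table/id names from the thread and assumes each statement is sent
as its own transaction (for example from a client-side loop); it is not
code posted in this thread.

-- One chunk: delete up to 10000 of the target rows by ctid.  Issue this
-- repeatedly, committing after each run, until it reports DELETE 0.
DELETE FROM massive_table
WHERE ctid = ANY(ARRAY(SELECT ctid
                       FROM massive_table
                       WHERE id < 40000000
                       LIMIT 10000));

-- Every so often, reclaim the dead tuples so later chunks don't have to
-- scan over them; VACUUM must run outside a transaction block.
VACUUM massive_table;

Picking the LIMIT is then the balance Csaba describes: large enough that
the chunking overhead stays small, small enough that each chunk commits
quickly, with vacuuming frequent enough that the per-chunk scan does not
keep growing.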