Thread: time taking deletion on large tables

time taking deletion on large tables

From: Atul Kumar
Hi,

The feed_posts table has over 50 Million rows.

I am deleting all rows of a certain type that are over 60 days old.

When I try to do a delete like this, it hangs for an entire day, so I
need to kill it with pg_terminate_backend(pid):

DELETE FROM feed_posts
WHERE feed_definition_id = 'bf33573d-936e-4e55-8607-72b685d2cbae'
AND created_at > '2020-05-11 00:00:00'
AND created_at < '2020-05-12 00:00:00';

So I need help figuring out how to do large deletes on a
production database during normal hours.

The explain plan is given below:



"Delete on feed_posts (cost=1156.57..195748.88 rows=15534 width=6)"
" -> Bitmap Heap Scan on feed_posts (cost=1156.57..195748.88
rows=15534 width=6)"
" Recheck Cond: ((created_at >= '2020-05-11 00:00:00'::timestamp
without time zone) AND (created_at <= '2020-05-12 00:00:00'::timestamp
without time zone))"
" Filter: (feed_definition_id = 'bf33573d-936e-4e55-8607-72b685d2cbae'::uuid)"
" -> Bitmap Index Scan on feed_posts_created_at (cost=0.00..1152.68
rows=54812 width=0)"
" Index Cond: ((created_at >= '2020-05-11 00:00:00'::timestamp without
time zone) AND (created_at <= '2020-05-12 00:00:00'::timestamp without
time zone))"


Please help me with deleting these rows. Do I need to change anything
in the Postgres configuration, or in the table structure?





Regards,
Atul



Re: time taking deletion on large tables

From: hubert depesz lubaczewski
On Thu, Dec 03, 2020 at 08:15:06PM +0530, Atul Kumar wrote:
> Hi,
> 
> The feed_posts table has over 50 Million rows.
> 
> I am deleting all rows of a certain type that are over 60 days old.

The common solution to this problem is to partition the table, and
then, instead of deleting rows, drop the old partitions.
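
For example, a minimal sketch of declarative range partitioning on
created_at (the column list, partition names, and monthly granularity
are all illustrative, and migrating the existing data is not shown):

CREATE TABLE feed_posts_partitioned (
    feed_post_id       int,
    feed_definition_id uuid,
    created_at         timestamp
) PARTITION BY RANGE (created_at);

CREATE TABLE feed_posts_2020_05 PARTITION OF feed_posts_partitioned
    FOR VALUES FROM ('2020-05-01') TO ('2020-06-01');

/* Retiring a month of data is then a metadata operation rather than
   a row-by-row DELETE, and it leaves no dead tuples behind. */
DROP TABLE feed_posts_2020_05;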

depesz



Re: time taking deletion on large tables

From: Ron
On 12/3/20 8:45 AM, Atul Kumar wrote:
> Hi,
>
> The feed_posts table has over 50 Million rows.
>
> I am deleting all rows of a certain type that are over 60 days old.
>
> When I try to do a delete like this, it hangs for an entire day, so I
> need to kill it with pg_terminate_backend(pid):
>
> DELETE FROM feed_posts
> WHERE feed_definition_id = 'bf33573d-936e-4e55-8607-72b685d2cbae'
> AND created_at > '2020-05-11 00:00:00'
> AND created_at < '2020-05-12 00:00:00';
>
> So I need help figuring out how to do large deletes on a
> production database during normal hours.

Presumably there is an index on created_at?

What about feed_definition_id?
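
If there isn't one yet, a composite index matching the delete's
predicate might help; a sketch (the index name is just illustrative):

/* CONCURRENTLY avoids holding a long write-blocking lock on a busy
   production table. */
CREATE INDEX CONCURRENTLY feed_posts_feed_def_id_created_at_idx
    ON feed_posts (feed_definition_id, created_at);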

> The explain plan is given below:
>
> Delete on feed_posts  (cost=1156.57..195748.88 rows=15534 width=6)
>   ->  Bitmap Heap Scan on feed_posts  (cost=1156.57..195748.88 rows=15534 width=6)
>         Recheck Cond: ((created_at >= '2020-05-11 00:00:00'::timestamp without time zone) AND (created_at <= '2020-05-12 00:00:00'::timestamp without time zone))
>         Filter: (feed_definition_id = 'bf33573d-936e-4e55-8607-72b685d2cbae'::uuid)
>         ->  Bitmap Index Scan on feed_posts_created_at  (cost=0.00..1152.68 rows=54812 width=0)
>               Index Cond: ((created_at >= '2020-05-11 00:00:00'::timestamp without time zone) AND (created_at <= '2020-05-12 00:00:00'::timestamp without time zone))

Have you recently analyzed the table?
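
If not, it may be worth checking when the statistics were last
refreshed and re-analyzing, e.g.:

/* See when the planner's statistics for this table were last updated. */
SELECT relname, last_analyze, last_autoanalyze
FROM pg_stat_user_tables
WHERE relname = 'feed_posts';

ANALYZE feed_posts;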

-- 
Angular momentum makes the world go 'round.



Re: time taking deletion on large tables

From: Rui DeSousa


On Dec 3, 2020, at 9:45 AM, Atul Kumar <akumar14871@gmail.com> wrote:

> The feed_posts table has over 50 Million rows.
>
> I am deleting all rows of a certain type that are over 60 days old.
>
> When I try to do a delete like this, it hangs for an entire day, so I
> need to kill it with pg_terminate_backend(pid).

Delete the records in batches. I have used this approach many times
successfully for large tables that are highly active on live
production systems.

You'll have to find the right batch size for your dataset while
keeping each run short, e.g. 30 seconds. Then call the function
repeatedly from a script; I've used a Perl script with the DBI module
to accomplish it.

For example:

create or replace function purge_feed_post (_purge_date date, _limit int default 5000)
  returns int 
as 
$$
  declare 
    _rowcnt int;
  begin
    create temp table if not exists purge_feed_post_set (
      feed_post_id int
      )
    ;

    /* Identify records to be purged */
    insert into purge_feed_post_set (
      feed_post_id
      )
      select feed_post_id
      from feed_posts 
      where created_at < _purge_date
      order by created_at
      limit _limit
    ;
    
    /* Remove old records */
    delete from feed_posts using purge_feed_post_set 
      where feed_posts.feed_post_id = purge_feed_post_set.feed_post_id
    ;

    get diagnostics _rowcnt = ROW_COUNT;

    delete from purge_feed_post_set;

    return _rowcnt;
  end;
$$ language plpgsql
  set search_path = public
;
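
A sketch of how it might be driven (the cutoff date and batch size
below are only examples):

/* One batch: purge up to 10,000 rows created before the cutoff date. */
SELECT purge_feed_post('2020-10-04'::date, 10000);

Repeat the call from an external script (psql in a shell loop, Perl
DBI, etc.) until it returns 0, so that each batch commits on its own
and vacuum can reclaim the dead tuples while the purge is still
running.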