Re: time taking deletion on large tables - Mailing list pgsql-admin

From Ron
Subject Re: time taking deletion on large tables
Date
Msg-id 5f37b65c-4b3a-9f5f-85fb-2fc3c7a6f530@gmail.com
In response to time taking deletion on large tables  (Atul Kumar <akumar14871@gmail.com>)
List pgsql-admin
On 12/3/20 8:45 AM, Atul Kumar wrote:
> Hi,
>
> The feed_posts table has over 50 Million rows.
>
> I'm deleting all rows of a certain type that are over 60 days old.
>
> When I try a delete like this, it hangs for an entire day, so I
> need to kill it with pg_terminate_backend(pid).
>
> DELETE FROM feed_posts
> WHERE feed_definition_id = 'bf33573d-936e-4e55-8607-72b685d2cbae'
> AND created_at > '2020-05-11 00:00:00'
> AND created_at < '2020-05-12 00:00:00';
>
> So, I need help in figuring out how to do large deletes on a
> production database during normal hours.

Presumably there is an index on created_at?

What about feed_definition_id?
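
If feed_definition_id is selective, a composite index covering both predicates would let the planner satisfy the whole WHERE clause from one index instead of rechecking and filtering heap pages. A sketch (the index name is made up; column names are from the query above):

```sql
-- Hypothetical composite index; CONCURRENTLY avoids taking a lock
-- that would block writes on a busy production table.
CREATE INDEX CONCURRENTLY feed_posts_feed_def_created_at_idx
    ON feed_posts (feed_definition_id, created_at);
```

With this in place, the Filter step on feed_definition_id in the plan below should disappear, since both conditions become index conditions.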

> The explain plan is given below:
>
> Delete on feed_posts  (cost=1156.57..195748.88 rows=15534 width=6)
>   ->  Bitmap Heap Scan on feed_posts  (cost=1156.57..195748.88 rows=15534 width=6)
>         Recheck Cond: ((created_at >= '2020-05-11 00:00:00'::timestamp without time zone) AND (created_at <= '2020-05-12 00:00:00'::timestamp without time zone))
>         Filter: (feed_definition_id = 'bf33573d-936e-4e55-8607-72b685d2cbae'::uuid)
>         ->  Bitmap Index Scan on feed_posts_created_at  (cost=0.00..1152.68 rows=54812 width=0)
>               Index Cond: ((created_at >= '2020-05-11 00:00:00'::timestamp without time zone) AND (created_at <= '2020-05-12 00:00:00'::timestamp without time zone))

Have you recently analyzed the table?
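
If not, refreshing statistics is cheap. And for the original question of running large deletes during normal hours, one widely used pattern (a sketch, not specific to this thread; the batch size of 10000 is an assumption to tune) is to delete in small batches so each transaction stays short and locks are released quickly:

```sql
-- Refresh planner statistics so row estimates are accurate.
ANALYZE feed_posts;

-- Batched delete: remove at most 10000 matching rows per statement.
-- Re-run this statement until it reports "DELETE 0".
DELETE FROM feed_posts
WHERE ctid IN (
    SELECT ctid
    FROM feed_posts
    WHERE feed_definition_id = 'bf33573d-936e-4e55-8607-72b685d2cbae'
      AND created_at > '2020-05-11 00:00:00'
      AND created_at < '2020-05-12 00:00:00'
    LIMIT 10000
);
```

Each batch commits independently, so a long-running delete no longer holds row locks (or bloats the transaction) for an entire day.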

-- 
Angular momentum makes the world go 'round.


