Re: Help with large delete - Mailing list pgsql-general

From: Perry Smith
Subject: Re: Help with large delete
Date:
Msg-id: 1BE8F4F8-5A34-40A0-9FC4-138BA91C00AD@easesoftware.com
In response to: Re: Help with large delete (Jan Wieck <jan@wi3ck.info>)
Responses: Re: Help with large delete (Rob Sargent <robjsargent@gmail.com>)
List: pgsql-general


On Apr 16, 2022, at 12:57, Jan Wieck <jan@wi3ck.info> wrote:

Make your connection immune to disconnects by using something like the screen utility.

Exactly… I’m running emacs in server (daemon) mode so it stays alive.  Then I use “shell” within it.


On Sat, Apr 16, 2022, 09:26 Perry Smith <pedz@easesoftware.com> wrote:
Currently I have one table that mimics a file system.  Each entry has a parent_id and a base name, where parent_id either references an existing id in the same table or is null, with cascade on delete on the foreign key.
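
For illustration, a minimal sketch of the kind of table I mean; the table and column names here are placeholders, not my actual schema:

  -- Self-referential table mimicking a file system.  A null parent_id
  -- marks a root; the foreign key cascades deletes down the tree.
  CREATE TABLE entries (
      id        bigserial PRIMARY KEY,
      parent_id bigint REFERENCES entries (id) ON DELETE CASCADE,
      basename  text NOT NULL
  );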

I’ve started a delete of a root entry with about 300,000 descendants.  The table currently has about 22M entries, and I’m still adding about 1,600 entries per minute.  Eventually entries will no longer be added in bulk and the table will be mostly static.
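
The delete itself is a single statement against the root row; the cascade takes care of the descendants (the id value below is only a placeholder):

  -- One statement deletes the root; ON DELETE CASCADE then removes
  -- all ~300,000 descendants within the same transaction.
  DELETE FROM entries WHERE id = 42;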

I started the delete earlier from a terminal that got detached, so I killed that process and started it again from a terminal less likely to get detached.

My question is basically: how can I make life easier for Postgres?  I believe (hope) the deletes will be few and far between, but they will happen from time to time.  In this case it was Dropbox; it’s a long story that isn’t really pertinent.  The point is that @#$% happens.

“What can I do” includes starting completely over if necessary.  I’ve only got about a week invested in this, and it’s just machine time at zero cost.  I could stop the other processes that are adding entries and let the delete finish, if that would help, etc.
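
For example (a sketch using the placeholder names above, and only a guess on my part): I could make sure the referencing column is indexed, since each step of the cascade has to find the children of every deleted row:

  -- Without an index on parent_id, each cascade step has to scan the
  -- whole 22M-row table looking for children.  Names are placeholders.
  CREATE INDEX IF NOT EXISTS entries_parent_id_idx ON entries (parent_id);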

Thank you for your time,
Perry


