Re: Deleting millions of rows - Mailing list pgsql-performance

From: Brian Cox
Subject: Re: Deleting millions of rows
Msg-id: 49876D28.5080403@ca.com
In response to: Deleting millions of rows (Brian Cox <brian.cox@ca.com>)
Responses: Re: Deleting millions of rows (Scott Marlowe <scott.marlowe@gmail.com>)
List: pgsql-performance
Tom Lane [tgl@sss.pgh.pa.us] wrote:
> It's the pending trigger list.  He's got two trigger events per row,
> which at 40 bytes apiece would approach 4GB of memory.  Apparently
> it's a 32-bit build of Postgres, so he's running out of process address
> space.
Yes, this is a 32-bit Postgres running on 32-bit Linux. I assume that
the two trigger events per row are due to the two "on delete cascade"
FKs. Thanks for explaining this bit of a mystery.
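For the archives: the cascades are enforced by AFTER ROW triggers, so each
deleted row queues one pending trigger event per FK. A minimal sketch of the
kind of schema that produces this (table and column names here are made up
for illustration, not our actual ones):

    CREATE TABLE parent (
        id bigint PRIMARY KEY
    );

    -- Two child tables reference the parent with ON DELETE CASCADE, so
    -- deleting one parent row queues two AFTER trigger events.
    CREATE TABLE child_a (
        id        bigint PRIMARY KEY,
        parent_id bigint REFERENCES parent(id) ON DELETE CASCADE
    );
    CREATE TABLE child_b (
        id        bigint PRIMARY KEY,
        parent_id bigint REFERENCES parent(id) ON DELETE CASCADE
    );

    -- An unqualified DELETE FROM parent; must then hold
    -- (rows deleted) x 2 pending events in memory at once.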

> TRUNCATE is the best solution if you want to get rid of the whole table
> contents.  If you're deleting very many but not all rows, people tend
> to drop the FK constraints and re-establish them afterwards.  Retail
> checking is just too slow.
Thanks also to you (and several others) for reminding me of TRUNCATE.
This will definitely work for what I was trying to do: reset this table
for more testing.
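In case it's useful to anyone searching the archives later, the two options
Tom describes would look roughly like this (table names, constraint name,
and the date predicate are hypothetical):

    -- Full reset: truncate the referencing tables in the same command
    -- (newer releases also accept TRUNCATE ... CASCADE).
    TRUNCATE TABLE parent, child_a, child_b;

    -- Partial purge without per-row FK checks: drop the constraint,
    -- delete the dependent rows by hand, then re-add the FK, which
    -- revalidates with one pass over the child table.
    BEGIN;
    ALTER TABLE child_a DROP CONSTRAINT child_a_parent_id_fkey;
    DELETE FROM child_a
     WHERE parent_id IN (SELECT id FROM parent
                          WHERE occurred_at < now() - interval '90 days');
    DELETE FROM parent
     WHERE occurred_at < now() - interval '90 days';
    ALTER TABLE child_a ADD CONSTRAINT child_a_parent_id_fkey
        FOREIGN KEY (parent_id) REFERENCES parent(id) ON DELETE CASCADE;
    COMMIT;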

In production, the table I ran DELETE FROM on grows constantly, with old
data removed in batches periodically (up to a few hundred thousand rows,
out of several million, per batch). I'm assuming that autovacuum/analyze
will let Postgres maintain reasonable performance for INSERTs and SELECTs
on it; do you think that's a reasonable assumption?
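For the record, the sort of batched purge I have in mind looks roughly like
this (column name, retention window, and batch size are made-up examples):

    -- Purge in moderate batches so each transaction's pending trigger
    -- list stays small; repeat until no rows are deleted.
    DELETE FROM parent
     WHERE id IN (SELECT id
                    FROM parent
                   WHERE occurred_at < now() - interval '90 days'
                   LIMIT 50000);

    -- After a large purge, an explicit VACUUM ANALYZE reclaims the dead
    -- space and refreshes planner stats sooner than waiting for
    -- autovacuum to get around to it.
    VACUUM ANALYZE parent;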

Thanks,
Brian

