Re: Deleting millions of rows - Mailing list pgsql-performance

From Tom Lane
Subject Re: Deleting millions of rows
Msg-id 2239.1233608323@sss.pgh.pa.us
In response to Re: Deleting millions of rows  (Robert Haas <robertmhaas@gmail.com>)
Responses Re: Deleting millions of rows  (Robert Haas <robertmhaas@gmail.com>)
Re: Deleting millions of rows  (Andrew Lazarus <andrew@pillette.com>)
List pgsql-performance
Robert Haas <robertmhaas@gmail.com> writes:
> On Mon, Feb 2, 2009 at 3:01 PM, Brian Cox <brian.cox@ca.com> wrote:
>>> How much memory do you have in your machine? What is work_mem set to?
>>
>> 32G; work_mem=64M

> Hmm.  Well then I'm not sure why you're running out of memory,

It's the pending trigger list.  He's got two trigger events per row,
which at 40 bytes apiece would approach 4GB of memory.  Apparently
it's a 32-bit build of Postgres, so he's running out of process address
space.
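
As a back-of-the-envelope check on those numbers, the arithmetic can be run as a query. The 50-million-row figure is an assumption derived from the ~4GB estimate, not a number stated in the thread:

```sql
-- Hypothetical sanity check: rows × events-per-row × bytes-per-event.
SELECT 50 * 1000000                                   AS rows_deleted,
       2                                              AS trigger_events_per_row,
       40                                             AS bytes_per_event,
       50 * 1000000 * 2 * 40.0 / (1024 * 1024 * 1024) AS approx_gib;
-- approx_gib ≈ 3.7, approaching the ~4GB address-space
-- limit of a 32-bit process.
```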

There's a TODO item to spill that list to disk when it gets too large,
but the reason nobody's done it yet is that actually executing that many
FK check trigger events would take longer than you want to wait anyway.

TRUNCATE is the best solution if you want to get rid of the whole table
contents.  If you're deleting very many but not all rows, people tend
to drop the FK constraints and re-establish them afterwards.  Retail
checking is just too slow.
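
A minimal sketch of the two approaches described above; all table, column, and constraint names here are hypothetical, not taken from the thread:

```sql
-- 1. Removing ALL rows: TRUNCATE reclaims the table in one shot and
--    fires no per-row triggers, so no pending-trigger list is built.
TRUNCATE TABLE parent_table;

-- 2. Removing many (but not all) rows: drop the FK first, delete,
--    then re-add it.  Re-adding revalidates the constraint in a single
--    pass over the table instead of one trigger event per deleted row.
ALTER TABLE child_table DROP CONSTRAINT child_parent_fk;

DELETE FROM parent_table WHERE created_at < '2009-01-01';

ALTER TABLE child_table
  ADD CONSTRAINT child_parent_fk
  FOREIGN KEY (parent_id) REFERENCES parent_table (id);
```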

            regards, tom lane
