Re: Memory exhausted on DELETE. - Mailing list pgsql-general

From Tom Lane
Subject Re: Memory exhausted on DELETE.
Date
Msg-id 29791.1098666317@sss.pgh.pa.us
In response to Memory exhausted on DELETE.  (<jbi130@yahoo.com>)
List pgsql-general
<jbi130@yahoo.com> writes:
> I have a table with about 1,400,000 rows in it.  Each DELETE cascades to
> about 7 tables.  When I do a 'DELETE FROM events' I get the following
> error:

>   ERROR:  Memory exhausted in AllocSetAlloc(84)

> I'm running a default install.  What postgres options do I need
> to tweak to get this delete to work?

It isn't a Postgres tweak.  The only way you can fix it is to allow the
backend process to grow larger, which means increasing the kernel limits
on process data size.  This might be as easy as tweaking "ulimit" in the
postmaster's environment, or it might be painful, depending on your OS.
You might also have to increase swap space.
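
For example, you might raise the data-segment limit in whatever script
launches the postmaster.  A rough sketch (the exact command, value, and
data directory path vary by OS and shell, so treat these as
placeholders):

    # raise the per-process data segment limit, then start the server
    ulimit -d unlimited          # or a large value in kB, e.g. 2097152
    pg_ctl start -D /usr/local/pgsql/data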

There's a TODO item to allow the list of pending trigger events (which
is, I believe, what's killing you) to be pushed out to temp files when
it gets too big.  However, that will have negative performance
implications of its own, so...

> Also, if my table grows to 30,000,000 rows will the same tweaks
> still work?  Or do I have to use a different delete strategy, such
> as deleting 1000 rows at a time.

On the whole, deleting a few thousand rows at a time might be your best
bet.
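
For instance, assuming "events" has a primary key column named "id"
(purely illustrative; use whatever key you actually have), you could
repeat something like this until it deletes zero rows, with an
occasional VACUUM between batches:

    DELETE FROM events
     WHERE id IN (SELECT id FROM events ORDER BY id LIMIT 5000);
    VACUUM events;

In autocommit mode each batch is its own transaction, so the list of
pending trigger events stays proportional to the batch size rather
than to the whole table.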

BTW: make sure you have indexes on the referencing columns, or this will
take a REALLY long time.
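
That is, for each table whose foreign key REFERENCES events, something
along these lines (the table and column names here are made up):

    CREATE INDEX child_table_event_id_idx ON child_table (event_id);

Without such an index, every cascaded delete has to scan the whole
referencing table to find the matching rows.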

            regards, tom lane
