Re: Deleting millions of rows - Mailing list pgsql-performance

From Robert Haas
Subject Re: Deleting millions of rows
Date
Msg-id 603c8f070902021526x67d34095gff54c36295f504e0@mail.gmail.com
In response to Re: Deleting millions of rows  (Tom Lane <tgl@sss.pgh.pa.us>)
Responses Re: Deleting millions of rows
List pgsql-performance
> It's the pending trigger list.  He's got two trigger events per row,
> which at 40 bytes apiece would approach 4GB of memory.  Apparently
> it's a 32-bit build of Postgres, so he's running out of process address
> space.
>
> There's a TODO item to spill that list to disk when it gets too large,
> but the reason nobody's done it yet is that actually executing that many
> FK check trigger events would take longer than you want to wait anyway.

Have you ever given any thought to whether it would be possible to
implement referential integrity constraints with statement-level
triggers instead of row-level triggers?  IOW, instead of planning this
and executing it N times:

DELETE FROM ONLY <fktable> WHERE $1 = fkatt1 [AND ...]

...we could join the original query against fktable with join clauses
on the correct pairs of attributes and then execute it once.

Is this insanely difficult to implement?

...Robert
