
From: David Kerr
Subject: Re: LONG delete with LOTS of FK's
Date:
Msg-id: 20130516225224.GA92863@mr-paradox.net
In response to: Re: LONG delete with LOTS of FK's (Larry Rosenman <ler@lerctr.org>)
List: pgsql-general
On Fri, May 10, 2013 at 11:01:15AM -0500, Larry Rosenman wrote:
> On 2013-05-10 10:57, Tom Lane wrote:
> > Larry Rosenman <ler@lerctr.org> writes:
> > > On 2013-05-10 09:14, Tom Lane wrote:
> > > > ... and verify you get a cheap plan for each referencing table.
> >
> > > We don't :(
> >
> > Ugh.  I bet the problem is that in some of these tables, there are lots
> > and lots of duplicate account ids, such that seqscans look like a good
> > bet when searching for an otherwise-unknown id.  You don't see this
> > with a handwritten test for a specific id because then the planner can
> > see it's not any of the common values.
> >
> > 9.2 would fix this for you --- any chance of updating?
> >
> >             regards, tom lane
> I'll see what we can do.  I was looking for a reason, this may be it.
>
> Thanks for all your help.

I haven't seen an EXPLAIN for this bad boy, maybe I missed it (even just a
plain EXPLAIN might be useful), but you may be running into a situation where
the planner is trying to materialize or hash two big tables.
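
To make that concrete, here's a rough sketch (the table and column names are
placeholders for whatever you're actually deleting from):

-- Plain EXPLAIN shows the plan without running the delete.
EXPLAIN
DELETE FROM accounts WHERE account_id = 12345;

-- EXPLAIN ANALYZE actually runs the statement, so wrap it in a
-- transaction and roll back; the "Trigger for constraint ..." lines
-- in its output show how much time each FK check is taking.
BEGIN;
EXPLAIN ANALYZE
DELETE FROM accounts WHERE account_id = 12345;
ROLLBACK;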

I've actually run into that in the past and had some success in PG9.1 running
with enable_material=false for some queries.

It might be worth a shot to play with that and with enable_hashagg/enable_hashjoin=false.
(If you get a speedup, it points to some tuning/refactoring that could happen.)
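
If you try it, something along these lines keeps the change scoped to your
session so nothing sticks globally (just a sketch; SET LOCAL inside a
transaction works too):

-- Turn the planner options off for this session only.
SET enable_material = off;
SET enable_hashagg = off;
SET enable_hashjoin = off;

-- Re-run the EXPLAIN / DELETE here and compare plans and timings.

-- Put everything back when you're done.
RESET enable_material;
RESET enable_hashagg;
RESET enable_hashjoin;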

Dave

