Re: poor performing plan from analyze vs. fast default plan pre-analyze on new database

From: Tom Lane
Subject: Re: poor performing plan from analyze vs. fast default plan pre-analyze on new database
Date:
Msg-id: 6499.1244046477@sss.pgh.pa.us
In response to: poor performing plan from analyze vs. fast default plan pre-analyze on new database  (Davin Potts)
List: pgsql-performance


Davin Potts <> writes:
> How to approach manipulating the execution plan back to something more
> efficient?  What characteristics of the table could have induced
> analyze to suggest the much slower query plan?

What's evidently happening is that the planner is backing off from using
a hashed subplan because it thinks the hashtable will require more than
work_mem.  Is 646400 a reasonably good estimate of the number of rows
that the sub-select will produce?  If it's a large overestimate, then
perhaps increasing the stats target for content.hash will help.  If
it's good, then what you want to do is increase work_mem to allow the
planner to use the better plan.
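As a minimal sketch of both options (the statistics target and the
work_mem setting below are only illustrative values; pick numbers that
fit your data distribution and available RAM):

    -- Raise the stats target for the column feeding the sub-select,
    -- then re-analyze so the planner works from a better row estimate:
    ALTER TABLE content ALTER COLUMN hash SET STATISTICS 1000;
    ANALYZE content;

    -- Or, if the 646400-row estimate is already about right, give the
    -- hash table more room; this can be set per-session before the query:
    SET work_mem = '64MB';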

            regards, tom lane

