On 2013-05-16 17:52, David Kerr wrote:
> On Fri, May 10, 2013 at 11:01:15AM -0500, Larry Rosenman wrote:
> On 2013-05-10 10:57, Tom Lane wrote:
>> Larry Rosenman <ler@lerctr.org> writes:
>>> On 2013-05-10 09:14, Tom Lane wrote:
>>>> ... and verify you get a cheap plan for each referencing table.
>>>
>>> We don't :(
>>
>> Ugh. I bet the problem is that in some of these tables, there are lots
>> and lots of duplicate account ids, such that seqscans look like a good
>> bet when searching for an otherwise-unknown id. You don't see this
>> with a handwritten test for a specific id because then the planner can
>> see it's not any of the common values.
>>
>> 9.2 would fix this for you --- any chance of updating?
>>
>> regards, tom lane
>
> I'll see what we can do. I was looking for a reason, this may be it.
>
> Thanks for all your help.
>
> I haven't seen an explain for this badboy, maybe I missed it (even just a
> plain explain might be useful) but you may be running into a situation where
> the planner is trying to materialize or hash 2 big tables.
>
> I've actually run into that in the past and had some success in PG9.1 running
> with enable_material=false for some queries.
>
> It might be worth a shot to play with that and enable_hashagg/enable_hashjoin=false
> (If you get a speedup, it points to some tuning/refactoring that could happen)
>
> Dave
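If I do end up trying those planner toggles, my test would look something like
this sketch (the DELETE and the "accounts" table are just stand-ins for our
actual statement and table):

BEGIN;
-- SET LOCAL keeps the toggles scoped to this transaction:
SET LOCAL enable_material = off;
SET LOCAL enable_hashjoin = off;
SET LOCAL enable_hashagg = off;
-- EXPLAIN ANALYZE actually runs the DELETE, so the ROLLBACK below
-- undoes it; substitute the real slow statement here:
EXPLAIN ANALYZE DELETE FROM accounts WHERE account_id = 12345;
ROLLBACK;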
I'll take a look tomorrow, but we WERE seeing Seq Scans against multi-million-row
tables, so I suspect Tom is right that the replanning that's in 9.2 fixes it, and
I'm in the process of validating that.
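Here's roughly how I'm checking it (a sketch; "child_table" and "account_id"
stand in for our real referencing tables and FK column):

-- Mimic the parameterized lookup the planner has to handle for a
-- referencing table, where the id isn't known at plan time:
PREPARE fk_probe(bigint) AS
  SELECT 1 FROM child_table WHERE account_id = $1;

-- 9.1 uses a generic plan, so lots of duplicate ids make a seq scan
-- look like a good bet; 9.2 can replan with the actual value and
-- pick the index scan for an uncommon id.
EXPLAIN EXECUTE fk_probe(12345);
DEALLOCATE fk_probe;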
I'll have that news tomorrow as well. (I updated a test box with the production
DB to 9.2.4 and am running a VACUUM ANALYZE, and will then redo this test with
that DB/engine.)
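For completeness, the stats refresh on the restored copy is just:

VACUUM ANALYZE;  -- rebuild planner statistics before rerunning the test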
--
Larry Rosenman http://www.lerctr.org/~ler
Phone: +1 214-642-9640 (c) E-Mail: ler@lerctr.org
US Mail: 430 Valona Loop, Round Rock, TX 78681-3893