Re: 'Interesting' prepared statement slowdown on large table join - Mailing list pgsql-performance

From Tom Lane
Subject Re: 'Interesting' prepared statement slowdown on large table join
Date
Msg-id 217.1305131881@sss.pgh.pa.us
In response to Re: 'Interesting' prepared statement slowdown on large table join  (Shaun Thomas <sthomas@peak6.com>)
List pgsql-performance
Shaun Thomas <sthomas@peak6.com> writes:
> On 05/11/2011 06:08 AM, Prodan, Andrei wrote:
>> Index Scan using attr_name_value on big_table  (cost=0.00..22.85
>> rows=4 width=7) (actual time=0.176..757.646 rows=914786 loops=1)

> Holy inaccurate statistics, Batman!

> Try increasing your statistics target for attr_name and attr_value in
> your big table.

Actually, the big problem here is probably not lack of statistics, but
the insistence on using a parameterized prepared plan in the first
place.  If you're going to be doing queries where the number of selected
rows varies that much, using a generic parameterized plan is just a
recipe for shooting yourself in the foot.  The planner cannot know what
the actual search values will be, and thus has no way of adapting the
plan based on how common those search values are.  Having more stats
won't help in that situation.
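The failure mode described above can be sketched roughly as follows. The statement name `attr_lookup` is hypothetical; `big_table` and `attr_value` come from the quoted plan. The point is that one generic plan must serve every possible parameter value:

```sql
-- Hedged sketch: a parameterized prepared statement. The planner
-- cannot see the actual value of $1, so it picks one generic plan.
PREPARE attr_lookup (text) AS
  SELECT * FROM big_table WHERE attr_value = $1;

EXECUTE attr_lookup('rare_value');    -- an index scan would be ideal here
EXECUTE attr_lookup('common_value');  -- a seq scan would be ideal here,
                                      -- but the same generic plan is reused
```

For a rare value matching 4 rows an index scan is cheap; for a value matching ~900k rows (as in the quoted plan) the same index scan is a disaster. No statistics target can fix a plan that was chosen before the value was known.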

Forget the prepared plan and just issue the query the old-fashioned way.
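That is, embed the literal value in the query text so the planner can consult the per-value statistics (the most-common-values list) when choosing the plan. A minimal sketch, reusing the column names from the quoted plan:

```sql
-- With the literal visible, the planner estimates selectivity for
-- this specific value and can pick an index scan or seq scan per query.
SELECT * FROM big_table WHERE attr_value = 'common_value';
```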

I do suspect that the reason the plan is flipping back and forth is
instability of the collected statistics, which might be improved by
increasing the stats target, or then again maybe not.  But that's really
rather irrelevant.
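For completeness, raising the per-column statistics target looks something like the sketch below (column names taken from the quoted plan; the target value 1000 is an arbitrary illustration):

```sql
-- Hedged sketch: collect a larger sample for these columns, then
-- re-analyze so the new statistics take effect.
ALTER TABLE big_table ALTER COLUMN attr_name  SET STATISTICS 1000;
ALTER TABLE big_table ALTER COLUMN attr_value SET STATISTICS 1000;
ANALYZE big_table;
```

As noted above, this may stabilize the collected statistics between ANALYZE runs, but it does not address the underlying generic-plan problem.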

            regards, tom lane
