Re: Really dumb planner decision - Mailing list pgsql-performance

From Merlin Moncure
Subject Re: Really dumb planner decision
Date
Msg-id b42b73150904160744lf4b93f4j6bb8c44b8322e31@mail.gmail.com
In response to Re: Really dumb planner decision  ("Kevin Grittner" <Kevin.Grittner@wicourts.gov>)
List pgsql-performance
On Thu, Apr 16, 2009 at 10:11 AM, Kevin Grittner
<Kevin.Grittner@wicourts.gov> wrote:
> Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> Bear in mind that those limits exist to keep you from running into
>> exponentially increasing planning time when the size of a planning
>> problem gets big.  "Raise 'em to the moon" isn't really a sane
>> strategy.  It might be that we could get away with raising them by
>> one or two given the general improvement in hardware since the
>> values were last looked at; but I'd be hesitant to push the
>> defaults further than that.
>
> I also think that there was a change somewhere in the 8.2 or 8.3 time
> frame which mitigated this.  (Perhaps a change in how statistics were
> scanned?)  The combination of a large statistics target and higher
> limits used to drive plan time through the roof, but I'm now seeing
> plan times around 50 ms for limits of 20 and statistics targets of
> 100.  Given the savings from the better plans, it's worth it, at least
> in our case.
>
> I wonder what sort of testing would be required to determine a safe
> installation default with the current code.
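
For reference, a minimal way to reproduce that kind of measurement (the
GUCs here are the stock collapse-limit and statistics settings; the
many-join query is just a stand-in for a real workload):

    -- session-local experiment: raise the collapse limits (default 8)
    -- and the statistics target, then time the planner alone
    SET from_collapse_limit = 20;
    SET join_collapse_limit = 20;
    SET default_statistics_target = 100;
    ANALYZE;     -- re-gather statistics at the new target
    \timing
    -- EXPLAIN without ANALYZE plans but does not execute, so the
    -- elapsed time psql reports is roughly pure plan time
    EXPLAIN SELECT *
      FROM pg_class a
      JOIN pg_class b ON a.oid = b.oid
      JOIN pg_class c ON b.oid = c.oid
      JOIN pg_class d ON c.oid = d.oid;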

Well, given all the variables, maybe we should instead be targeting
plan time, either indirectly via estimated values, or directly by
allowing a configurable planning timeout, jumping off to an alternate
approach (nested-loop style, or GEQO) if one is available.
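
FWIW, the join-order search already has a crude form of that jump-off
in GEQO, though it triggers on FROM-item count rather than on estimated
or elapsed plan time, e.g.:

    SET geqo = on;            -- on by default
    SET geqo_threshold = 12;  -- switch to the genetic (non-exhaustive)
                              -- join search once a query has this many
                              -- FROM items (default 12)
    SET geqo_effort = 5;      -- 1..10; trades planning time against
                              -- plan quality

A time-based trigger would be the new part.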

merlin
