On Mon, 2006-08-07 at 16:54 +0100, Simon Riggs wrote:
> On Fri, 2006-08-04 at 14:40 -0400, Tom Lane wrote:
> > I was just looking at Martin Lesser's gripe here:
> > http://archives.postgresql.org/pgsql-performance/2006-08/msg00053.php
> > about how the planner is not real bright about the filter conditions
> > it generates for a simple partitioning layout. In particular it's
> > generating scans involving self-contradictory conditions:
> >
> > Result  (cost=0.00..33.20 rows=6 width=36)
> >   ->  Append  (cost=0.00..33.20 rows=6 width=36)
> >         ->  Seq Scan on t_parted  (cost=0.00..33.20 rows=6 width=36)
> >               Filter: ((id1 >= 0) AND (id1 < 100) AND (id1 >= 900) AND (id1 < 1000))
> >
> > which it seems we ought to be bright enough to notice. In particular
> > I would argue that turning on constraint_exclusion ought to instruct
> > the planner to catch this sort of thing, whereas when it's off we
> > ought not expend the cycles. I have a preliminary patch (below)
> > that seems to fix it.
> >
> > The problem I'm having is that this isn't "constraint exclusion" anymore
> > --- it will in fact make useful deductions without a table constraint
> > anywhere in sight. Should we rename the GUC variable, and if so to what?
> > Or just live with the misnomer? I guess plan C would be to invent a
> > separate GUC variable for the other kind of test, but I can't see that
> > it's worth having two. Thoughts?
>
> In general, I'd prefer a control that allowed "amount of planning" to be
> specified, much in the same way we rate error messages. We really want
> just one simple knob that can be turned up or down, no matter how many
> new optimizations we add.
>
> planning_effort = LOW | MEDIUM | HIGH | VERYHIGH | EXHAUSTIVE
A simple way of doing this might be to use a minimum cost threshold,
e.g.

# only apply the exclusion tests when the query's estimated cost
# exceeds 100
mutual_exclusion = 100

(Under that threshold the quoted plan, with a top-level estimate of
33.20, would skip the extra tests entirely.) Once the tests are
applied, if they accomplish something, the query is replanned or
adjusted to take that change into account.
If there were a large number of constraints on t_parted, it might well
take longer to plan the query than to execute it over the 6 rows. If
there were 1M rows in the structure, the extra effort would be well
worth it.
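
For concreteness, the quoted plan suggests an inheritance-partitioned
layout along these lines (an assumed schema, not necessarily Martin's
exact one):

  CREATE TABLE t_parted (id1 integer, val text);
  CREATE TABLE t_parted_000 (CHECK (id1 >= 0 AND id1 < 100))
    INHERITS (t_parted);
  CREATE TABLE t_parted_900 (CHECK (id1 >= 900 AND id1 < 1000))
    INHERITS (t_parted);

  SET constraint_exclusion = on;
  EXPLAIN SELECT * FROM t_parted WHERE id1 >= 900 AND id1 < 1000;

Each additional child table adds constraint-proof work at plan time,
which is exactly the effort that only pays off once the tables are big
enough.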
Ideally we could set the planning time as a percentage of estimated
total execution time and let PostgreSQL figure out which optimizations
to try and when, but that means assigning a cost to each piece of
planner functionality and having PostgreSQL plan how to plan.
planning_effort = 5%
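
To make the percentage concrete (treating planner cost units as a
rough proxy for time, which is itself an assumption): 5% of the quoted
plan's 33.20 estimate allows only about 1.7 units of optional planner
work, so a cheap query gets cheap planning, while a 100,000-unit plan
would earn a 5,000-unit budget that could easily cover the exclusion
proofs.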
--