Simon Riggs <simon@2ndQuadrant.com> wrote:
> Kevin Grittner <Kevin.Grittner@wicourts.gov> wrote:
>
>>> ... but I share Simon's desire to see some proof before anything
>>> gets committed.
>>
>> And we agree there. In fact, I can't think of anyone in the
>> community who doesn't want to see that for *any* purported
>> performance enhancement.
>
> I'm not talking about eventual commit, I'm talking about the whole
> process of development.

I'm confused -- do you want to see proof that the concept works well
in PostgreSQL before development effort on it begins? Or is there
some alternative you would like to see pursued instead? Something
else?

> From what has been said so far, the use case for this is related
> to the practice of using "covered indexes", which makes me nervous
> because that is an expert-level tuning task on other DBMSs.

What? On the versions of MS SQL Server and Sybase ASE I've used,
the optimizer costs covered-index plans against all the other plans
automatically, and picks that type of plan if the cost looks lower.
Sure, DBAs sometimes add indexes, or add columns to indexes, in
hopes that such a plan will be chosen -- but what's new and
different there?
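
To make that concrete, here is a minimal sketch -- the table,
column, and index names are invented for illustration -- of the
kind of index that covers a query, so that an optimizer which costs
such plans can answer the query from the index alone:

CREATE TABLE orders (
    order_id   integer PRIMARY KEY,
    cust_id    integer,
    order_date date,
    amount     numeric
);

-- An index whose columns include everything the query below
-- touches, so the heap need not be visited for column values:
CREATE INDEX orders_cust_date_amount
    ON orders (cust_id, order_date, amount);

-- A query answerable entirely from that index:
SELECT order_date, amount
  FROM orders
 WHERE cust_id = 42;

If PostgreSQL grew this capability, EXPLAIN on that last query
would simply show whether the planner judged the index-only path
cheaper; nothing new is asked of the DBA beyond creating the index.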

> The typical speed-up for non-covered indexes will come when we
> access a very large table (not in cache) via an index scan that
> is smaller than a bitmap index scan. Will we be able to gauge
> selectivities sufficiently accurately to be able to pinpoint that
> during optimization? How will we know that the table is not in
> cache? Or is this an optimisation in the executor for a bitmap
> heap scan?

I would continue to object to using current cache contents for plan
choice, because of plan instability and because an odd initial
cache load could skew plans in a bad direction indefinitely. I do
agree (and have already posted) that the hardest part of this might
be developing a good cost model. I doubt that's an insoluble
problem, especially since it is something we can refine over time
as we gain experience with the edge cases.
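
Purely as a sketch of the shape such a model might take -- the
symbols below are illustrative, not existing planner variables --
the cost of an index-only-style scan might be charged as:

    cost ~= index_pages_fetched * random_page_cost
            + ntuples * cpu_index_tuple_cost
            + heap_check_fraction * heap_pages_fetched
              * random_page_cost

where heap_check_fraction estimates the portion of tuples whose
visibility can't be established without touching the heap. The
interesting, and refinable, part is deriving that fraction from
durable statistics rather than from whatever happens to be in cache
at plan time.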

-Kevin