Re: slow bitmap heap scans on pg 9.2 - Mailing list pgsql-performance

From Steve Singer
Subject Re: slow bitmap heap scans on pg 9.2
Date
Msg-id 516B4492.7090204@ca.afilias.info
In response to Re: slow bitmap heap scans on pg 9.2  (Jeff Janes <jeff.janes@gmail.com>)
Responses Re: slow bitmap heap scans on pg 9.2
List pgsql-performance
On 13-04-13 04:54 PM, Jeff Janes wrote:
> On Sat, Apr 13, 2013 at 9:14 AM, Steve Singer <ssinger@ca.afilias.info
> <mailto:ssinger@ca.afilias.info>> wrote:
>
>
>     indexTotalCost += index->pages * spc_random_page_cost / 100000.0;
>
>     is driving the high costs on the inner loop. The index has 2-5
>     million pages depending on the partition.  If I run this against
>     9.2.2 with / 10000.0 the estimate is even higher.
>
>     If I try this with the
>
>     *indexTotalCost += log(1.0 + index->pages / 10000.0) *
>     spc_random_page_cost;
>
>     from 9.3 and play with it, I can make it pick the plan on
>     some partitions with product_id=2 but not product_id=1.  If I
>     remove the fudge-factor cost adjustment line I always get the
>     nested-loop plan.
>
>
> That was only temporarily the formula during 9.3dev.  Tom re-did that
> entire part of the code rather substantially in the current tip of 9.3
> (commit 31f38f28b00cbe2b).  Now it is based on the number of tuples, and
> the height, rather than pages, and is multiplied by the
> cpu_operator_cost not the random_page_cost.
>
> descentCost = ceil(log(index->tuples) / log(2.0)) * cpu_operator_cost;
>
> ...
>
> descentCost = (index->tree_height + 1) * 50.0 * cpu_operator_cost;
>
>
>     Breaking the index into smaller partial indexes for each year seems
>     to be giving me the plans I want with random_page_cost=2 (I might
>     also try partial indexes on the month).
>
>     Even with the 9.3 log-based fudge-factor, the fudge-factor comes
>     out big enough that the planner picks a table scan over the
>     index.
>
>
> Have you tried it under 9.3 HEAD, rather than just back-porting the
> temporary
> *indexTotalCost += log(1.0 + index->pages / 10000.0) * spc_random_page_cost;
> code into 9.2?
>
> If you are trying to make your own private copy of 9.2, then removing
> the fudge factor altogether is probably the way to go.  But if you want
> to help improve future versions, you probably need to test with the most
> up-to-date dev version.
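To compare the relative magnitudes of the three formulas quoted above, here is a small standalone Python sketch (not PostgreSQL code). The page count is in the 2-5 million range mentioned in the thread; the tuple count and tree height are purely illustrative assumptions, and cpu_operator_cost uses the PostgreSQL default.

```python
import math

# Illustrative inputs -- pages matches the thread; tuples and
# tree_height are hypothetical, not measured:
pages = 3_000_000           # index pages (thread: 2-5 million per partition)
tuples = 500_000_000        # index tuples (assumption)
tree_height = 3             # btree height (assumption)
random_page_cost = 2.0      # the poster's setting
cpu_operator_cost = 0.0025  # PostgreSQL default

# 9.2 page-based fudge factor, charged on each inner-loop iteration:
cost_92 = pages * random_page_cost / 100000.0

# Temporary 9.3dev formula (log-dampened page count):
cost_93dev = math.log(1.0 + pages / 10000.0) * random_page_cost

# Final 9.3 (commit 31f38f28b00cbe2b): tuple count plus tree height,
# multiplied by cpu_operator_cost rather than random_page_cost:
descent = math.ceil(math.log(tuples) / math.log(2.0)) * cpu_operator_cost
descent += (tree_height + 1) * 50.0 * cpu_operator_cost

print(cost_92)    # 60.0
print(cost_93dev) # ~11.4
print(descent)    # ~0.57
```

With these assumed inputs the 9.2 formula charges roughly 100x more than the final 9.3 descent cost, which is consistent with the nested-loop plan being priced out on 9.2.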

I will do that in a few days.  I don't have enough disk space on this
dev server for both a 9.2 datadir and a 9.3 one for this database.  Once
I have firmed up a solution that I can use with 9.2, I can upgrade the
datadir to 9.3 and test this.  I am hoping I can get a set of partial
indexes that will give good results with an unmodified 9.2; so far that
looks promising, but I still have more cases to verify (these indexes
take a while to build).
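One way to see why per-year partial indexes help under the 9.2 formula: the fudge factor scales linearly with index pages, so each smaller partial index pays proportionally less. A rough sketch, assuming a hypothetical 3-million-page index split across five years:

```python
# Hypothetical sizes -- the split into five equal years is an assumption:
pages_full = 3_000_000
years = 5
pages_partial = pages_full // years
random_page_cost = 2.0

def fudge(pages):
    # 9.2 page-based fudge factor from the thread
    return pages * random_page_cost / 100000.0

print(fudge(pages_full))     # 60.0
print(fudge(pages_partial))  # 12.0
```

Each partial index's descent is charged a fifth of the full index's penalty, which can be enough to tip the planner back to the index plan.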

>
>       A lot of loop iterations can be satisfied by cached pages of the
>     index; the fudge-factor doesn't really account for this.
>
>
>
> Setting random_page_cost to 2 is already telling it that most fetches
> are coming from the cache.  Of course for the upper blocks of an index
> even more than "most" are likely to be, but the latest dev code takes
> care of that.
>
> Cheers,
>
> Jeff
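Jeff's point about random_page_cost=2 can be made concrete with a back-of-the-envelope calculation: if a cached page costs about as much as a sequential read (1.0) and an uncached one costs the default random_page_cost of 4.0, then setting it to 2 implicitly assumes roughly two thirds of random fetches are cached. A sketch (the linear-mix model here is an interpretive assumption, not how the planner works internally):

```python
seq_cost = 1.0          # cost of a cached / sequential page read
uncached_random = 4.0   # PostgreSQL default random_page_cost
effective = 2.0         # the poster's setting

# Solve effective = f*seq_cost + (1-f)*uncached_random for the
# implied cached fraction f:
f = (uncached_random - effective) / (uncached_random - seq_cost)
print(f)  # ~0.667
```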


