Re: Looks like merge join planning time is too big, 55 seconds - Mailing list pgsql-performance

From Jeff Janes
Subject Re: Looks like merge join planning time is too big, 55 seconds
Date
Msg-id CAMkU=1x51iVmUcLewMUBLB3fKW9tkpfsL0iYQuXp33aTAiQVPA@mail.gmail.com
In response to Re: Looks like merge join planning time is too big, 55 seconds  (Sergey Burladyan <eshkinkot@gmail.com>)
Responses Re: Looks like merge join planning time is too big, 55 seconds
List pgsql-performance
On Thu, Aug 1, 2013 at 5:16 PM, Sergey Burladyan <eshkinkot@gmail.com> wrote:
> I also found this trace for another query:
> explain select * from xview.user_items_v v where ( v.item_id = 132358330 );
>
>
> If I'm not mistaken, there may be two code paths like this here:
> (1) mergejoinscansel -> scalarineqsel-> ineq_histogram_selectivity -> get_actual_variable_range -> index_getnext
> (2) scalargtsel -> scalarineqsel -> ineq_histogram_selectivity -> get_actual_variable_range -> index_getnext

Yeah, I think you are correct.

> And maybe the get_actual_variable_range() function is too expensive to
> call with my bloated table items and its bloated index items_user_id_idx?

But why is it bloated in this way?  It must be visiting many thousands
of dead/invisible rows before finding the first visible one.  But
btree indexes have a mechanism to remove dead tuples from indexes, so
they aren't followed over and over again (see "kill_prior_tuple").  So
is that mechanism not working, or are the tuples not dead but just
invisible (i.e. inserted by a still-open transaction)?
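[Editor's note: a toy Python model of the kill_prior_tuple idea discussed above, to make the cost difference concrete. All names here are illustrative; this is not PostgreSQL's actual code, just a sketch of why a scan over a bloated index is expensive the first time but cheap afterwards when dead entries can be killed.]

```python
# Toy model: an index scan that wades through dead entries marks them as
# "killed" so the next scan can skip them without re-checking visibility.

class IndexEntry:
    def __init__(self, key, visible):
        self.key = key
        self.visible = visible   # False = dead/invisible heap tuple
        self.killed = False      # set once a scan has proven it dead

def first_visible(index):
    """Return (key, entries_examined); mark dead entries as killed."""
    examined = 0
    for e in index:
        if e.killed:
            continue             # killed entries are skipped cheaply
        examined += 1            # models a heap visit to check visibility
        if e.visible:
            return e.key, examined
        e.killed = True          # dead: remember, so later scans skip it
    return None, examined

# A bloated index endpoint: 10,000 dead entries before the first live one.
index = [IndexEntry(k, visible=False) for k in range(10_000)]
index.append(IndexEntry(10_000, visible=True))

key1, cost1 = first_visible(index)   # pays the full cost once
key2, cost2 = first_visible(index)   # dead entries already killed
print(cost1, cost2)                  # 10001 1
```

Note the catch this models only partially: if the entries are merely invisible because the inserting transaction is still open, the scan cannot kill them, and every planning-time probe of the index endpoint pays the full cost again.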

Cheers,

Jeff
