Re: [HACKERS] Proposal: Improve bitmap costing for lossy pages - Mailing list pgsql-hackers

From Dilip Kumar
Subject Re: [HACKERS] Proposal: Improve bitmap costing for lossy pages
Msg-id CAFiTN-toFL3kN8hT1NDygqPU8H_dtDUugSy_CZqg9nSD2m=vFQ@mail.gmail.com
In response to Re: [HACKERS] Proposal: Improve bitmap costing for lossy pages  (Robert Haas <robertmhaas@gmail.com>)
Responses Re: [HACKERS] Proposal: Improve bitmap costing for lossy pages  (Robert Haas <robertmhaas@gmail.com>)
List pgsql-hackers
On Thu, May 18, 2017 at 8:07 PM, Robert Haas <robertmhaas@gmail.com> wrote:

Thanks for the feedback and sorry for the delayed response.

> You might need to adjust effective_cache_size.

You are right, but effective_cache_size affects the number of
pages_fetched only when it's used for a parameterized path (i.e.,
the inner side of a nested loop). In our case, where we see the
wrong number of pages estimated (Q10), it was a non-parameterized
path. I have also tested with a high effective_cache_size but did
not observe any change.
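
For reference, here is a condensed, standalone sketch of the Mackert
and Lohman estimate roughly as index_pages_fetched() in costsize.c
documents it. Treat it as an approximation for illustration, not the
actual source: the pro-rating of effective_cache_size across
total_table_pages is folded into the parameter b, and
ml_pages_fetched is just an illustrative name.

    #include <math.h>

    /*
     * Condensed sketch (illustrative, not the exact source) of the
     * Mackert-Lohman pages-fetched estimate, after
     * index_pages_fetched() in optimizer/path/costsize.c.
     *
     * tuples_fetched: expected tuple fetches (N * selectivity)
     * T: number of heap pages in the relation
     * b: this table's pro-rated share of effective_cache_size, in
     *    pages (assumed precomputed here; the real function derives
     *    it from root->total_table_pages)
     */
    double
    ml_pages_fetched(double tuples_fetched, double T, double b)
    {
        double pages_fetched;

        if (T <= b)
        {
            /* whole table fits in cache: no page is fetched twice */
            pages_fetched =
                (2.0 * T * tuples_fetched) / (2.0 * T + tuples_fetched);
            pages_fetched = (pages_fetched >= T) ? T : ceil(pages_fetched);
        }
        else
        {
            /* cache smaller than table: past lim, evicted pages get
             * fetched again, hence the steeper second term */
            double lim = (2.0 * T * b) / (2.0 * T - b);

            if (tuples_fetched <= lim)
                pages_fetched =
                    (2.0 * T * tuples_fetched) / (2.0 * T + tuples_fetched);
            else
                pages_fetched = b + (tuples_fetched - lim) * (T - b) / T;
            pages_fetched = ceil(pages_fetched);
        }
        return pages_fetched;
    }

As the sketch shows, effective_cache_size only changes the answer
through b, i.e., once the table no longer fits in its cache share.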

> The Mackert and Lohman
> formula isn't exactly counting unique pages fetched.

Right.

> It will count
> the same page twice if it thinks the page will be evicted from the
> cache after the first fetch and before the second one.

And that only happens when loop_count > 1; if loop_count = 1, it
seems the formula doesn't consider effective_cache_size at all. But,
actually, multiple tuples can fall on the same page.
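
To make the loop_count point concrete, here is a simplified sketch
(again with illustrative names, not the actual source) of how
cost_bitmap_heap_scan() chooses pages_fetched, reusing
ml_pages_fetched() from the sketch above:

    /*
     * Sketch of the pages_fetched choice in cost_bitmap_heap_scan().
     * Only the loop_count > 1 branch sees effective_cache_size
     * (through b); the single-scan branch uses the cache-independent
     * approximation.
     */
    double
    bitmap_pages_fetched(double tuples_fetched, double T, double b,
                         double loop_count)
    {
        double pages_fetched;

        if (loop_count > 1)
        {
            /* repeated scans: apply the formula to the total fetches
             * across all loops, then average per loop */
            pages_fetched =
                ml_pages_fetched(tuples_fetched * loop_count, T, b);
            pages_fetched /= loop_count;
        }
        else
        {
            /* single scan: effective_cache_size is never consulted */
            pages_fetched =
                (2.0 * T * tuples_fetched) / (2.0 * T + tuples_fetched);
        }

        if (pages_fetched > T)
            pages_fetched = T;
        return ceil(pages_fetched);
    }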

-- 
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com


