
From: Robert Haas
Subject: Re: [HACKERS] Proposal: Improve bitmap costing for lossy pages
Date:
Msg-id: CA+TgmoaJOTG+eP5KYP+tK-1XW=6c+WzA_UgA2_P6MnWGTA04-A@mail.gmail.com
In response to: [HACKERS] Proposal: Improve bitmap costing for lossy pages (Dilip Kumar <dilipbalaut@gmail.com>)
Responses: Re: [HACKERS] Proposal: Improve bitmap costing for lossy pages (Dilip Kumar <dilipbalaut@gmail.com>)
List: pgsql-hackers
On Thu, May 18, 2017 at 2:52 AM, Dilip Kumar <dilipbalaut@gmail.com> wrote:
> Most of the queries show decent improvement; however, Q14 shows a
> regression at work_mem = 4MB. On analysing this case, I found that the
> number of pages_fetched calculated by the "Mackert and Lohman formula"
> is very high (1112817) compared to the actual number of unique heap
> pages fetched (293314). Therefore, while costing the bitmap scan using
> 1112817 pages and 4MB of work_mem, we predicted that even after we
> lossify all the pages the bitmap cannot fit into work_mem, hence the
> bitmap scan was not selected.

You might need to adjust effective_cache_size.  The Mackert and Lohman
formula isn't exactly counting unique pages fetched.  It will count
the same page twice if it thinks the page will be evicted from the
cache after the first fetch and before the second one.
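
For reference, here is a rough standalone sketch of the Mackert and Lohman
estimate in the shape costsize.c's index_pages_fetched() applies it.  The
variable names (T for heap pages, N for tuples fetched, b for the cache
size in pages) follow that function; treating b as a simple input, rather
than deriving it as a pro-rated share of effective_cache_size, is a
simplification for illustration:

    #include <math.h>

    /*
     * Sketch of the Mackert-Lohman pages_fetched estimate.  T is the
     * number of heap pages, N the number of tuples fetched, and b the
     * cache size (in pages) available to the scan.  In PostgreSQL, b
     * is derived from effective_cache_size, so raising that GUC
     * raises b and lowers the estimate for large scans.
     */
    static double
    mackert_lohman_pages_fetched(double N, double T, double b)
    {
        double      pages_fetched;

        if (T <= b)
        {
            /* Whole table fits in cache: no page is counted twice. */
            pages_fetched = (2.0 * T * N) / (2.0 * T + N);
            if (pages_fetched >= T)
                pages_fetched = T;
            else
                pages_fetched = ceil(pages_fetched);
        }
        else
        {
            /*
             * Table exceeds cache: past the crossover point "lim",
             * each additional tuple may hit a page that has already
             * been fetched and evicted, so the same heap page gets
             * counted more than once.
             */
            double      lim = (2.0 * T * b) / (2.0 * T - b);

            if (N <= lim)
                pages_fetched = (2.0 * T * N) / (2.0 * T + N);
            else
                pages_fetched = b + (N - lim) * (T - b) / T;
            pages_fetched = ceil(pages_fetched);
        }
        return pages_fetched;
    }

With the numbers quoted above, getting the estimate down from ~1.1M
toward the ~293K actual unique pages would mean making b large enough
that far fewer of those fetches fall into the eviction branch, which is
what raising effective_cache_size accomplishes.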

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


