Re: [HACKERS] Proposal: Improve bitmap costing for lossy pages

From: Dilip Kumar
Subject: Re: [HACKERS] Proposal: Improve bitmap costing for lossy pages
Msg-id: CAFiTN-sNOay1LDwq5w9=m5_exA49bwnO9HZY4OehZbZVb0nCuQ@mail.gmail.com
In response to: Re: [HACKERS] Proposal: Improve bitmap costing for lossy pages (Robert Haas <robertmhaas@gmail.com>)
List: pgsql-hackers
On Thu, Aug 31, 2017 at 11:27 PM, Robert Haas <robertmhaas@gmail.com> wrote:

I have repeated one of the tests after fixing the problems you pointed
out, but this time the results are not that impressive.  It seems the
check below was the problem in the previous patch:
    if (tbm->nentries > tbm->maxentries / 2)
        tbm->maxentries = Min(tbm->nentries, (INT_MAX - 1) / 2) * 2;

We were lossifying only until tbm->nentries dropped to 90% of
tbm->maxentries, but this later check was then always true, so
tbm->maxentries was doubled on every pass.  That was the main reason
for the huge reduction in lossy pages: effectively, we started using
more work_mem in all cases.
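
To make the interaction concrete, here is a rough sketch of the
relevant tbm_lossify() logic (illustrative only, not the exact patch;
TBM_FILLFACTOR stands for the 90% stop threshold mentioned above):

    /* Lossify until we drop below the fill-factor threshold. */
    while (tbm->nentries > TBM_FILLFACTOR * tbm->maxentries)
    {
        /* ... convert exact page entries into lossy chunks ... */
    }

    /*
     * Problem: having stopped at ~90% of maxentries, nentries is still
     * well above maxentries / 2, so the check below always fires and
     * doubles the budget, quietly letting the bitmap use more work_mem
     * on every lossify cycle.
     */
    if (tbm->nentries > tbm->maxentries / 2)
        tbm->maxentries = Min(tbm->nentries, (INT_MAX - 1) / 2) * 2;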

I have taken one reading, just to see the impact of the patch after
fixing this problem.
work_mem: 40 MB
(lossy pages count)

Query    head        patch
6        995223      733087
14       337894      206824
15       995417      798817
20       1654016     1588498

Still, we see a good reduction in the lossy pages count (roughly 26%,
39%, 20%, and 4% for queries 6, 14, 15, and 20 respectively).  I will
repeat the test at different work_mem settings and for different
values of TBM_FILLFACTOR and share the numbers soon.

-- 
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com


