On Thu, Aug 31, 2017 at 11:27 PM, Robert Haas <robertmhaas@gmail.com> wrote:
I have repeated one of the tests after fixing the problems you pointed
out, but this time the results are not that impressive. It seems the
check below was the problem in the previous patch:
    if (tbm->nentries > tbm->maxentries / 2)
        tbm->maxentries = Min(tbm->nentries, (INT_MAX - 1) / 2) * 2;
Because the patch was lossifying only until tbm->nentries dropped to
90% of tbm->maxentries, this later check was always true, so
tbm->maxentries was doubled on every call. That was the main reason for
the huge reduction in lossy pages: basically, we started using more
work_mem in all the cases.
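To make the interaction concrete, here is a minimal standalone sketch
(not the real tidbitmap.c code; the starting values and the
TBM_FILLFACTOR constant of 0.9 are just assumptions for illustration)
showing how maxentries keeps doubling once lossification stops at 90%
of the limit:

/*
 * Minimal standalone sketch, not the actual tidbitmap.c code.  Each
 * simulated tbm_lossify() call stops once nentries is down to
 * TBM_FILLFACTOR (assumed 0.9) of maxentries, so the pre-existing
 * "nentries > maxentries / 2" test always fires and maxentries is
 * doubled on every call.
 */
#include <stdio.h>
#include <limits.h>

#define TBM_FILLFACTOR 0.9
#define Min(a, b) ((a) < (b) ? (a) : (b))

int
main(void)
{
    long    maxentries = 1000;    /* hypothetical work_mem-derived limit */
    long    nentries;

    for (int call = 1; call <= 5; call++)
    {
        /* pretend enough new tuples arrived to exceed the limit */
        nentries = maxentries + 1;

        /* lossify, but stop at TBM_FILLFACTOR of the limit (the patch) */
        nentries = (long) (maxentries * TBM_FILLFACTOR);

        /* existing check: 90% > 50%, so maxentries doubles every call */
        if (nentries > maxentries / 2)
            maxentries = Min(nentries, (INT_MAX - 1) / 2) * 2;

        printf("call %d: nentries = %ld, maxentries = %ld\n",
               call, nentries, maxentries);
    }
    return 0;
}

Each simulated call ends by doubling maxentries, so the effective limit
grows geometrically, which is why the earlier numbers showed such a
large drop in lossy pages.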
I have taken one reading just to see the impact after fixing this
problem in the patch.
Work_mem: 40 MB
(Lossy Pages count)

Query      head        patch
6          995223      733087
14         337894      206824
15         995417      798817
20         1654016     1588498
Still, we see a good reduction in the lossy pages count. I will repeat
the test at different work_mem settings and for different values of
TBM_FILLFACTOR and share the numbers soon.
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com