Re: [HACKERS] Proposal: Improve bitmap costing for lossy pages

From: Dilip Kumar
Subject: Re: [HACKERS] Proposal: Improve bitmap costing for lossy pages
Msg-id: CAFiTN-uL=rQtvt9zFnLV9khXODhEyJTvC4TB135HSK1=YdFAxQ@mail.gmail.com
In response to: Re: [HACKERS] Proposal: Improve bitmap costing for lossy pages (Dilip Kumar <dilipbalaut@gmail.com>)
List: pgsql-hackers

On Mon, Sep 4, 2017 at 11:18 AM, Dilip Kumar <dilipbalaut@gmail.com> wrote:
> On Thu, Aug 31, 2017 at 11:27 PM, Robert Haas <robertmhaas@gmail.com> wrote:
>
> I have repeated one of the tests after fixing the problems you pointed
> out, but this time the results are not as impressive.  It seems the
> check below was the problem in the previous patch:
>
>    /* At the end of tbm_lossify(): if the bitmap is still more than
>     * half full, double maxentries (capped to avoid overflow). */
>    if (tbm->nentries > tbm->maxentries / 2)
>         tbm->maxentries = Min(tbm->nentries, (INT_MAX - 1) / 2) * 2;
>
> We were lossifying only until tbm->nentries came down to 90% of
> tbm->maxentries, but after that this check was always true, so
> tbm->maxentries kept getting doubled.  That was the main reason for
> the huge reduction in lossy pages: we effectively started using more
> work_mem in all cases.  (A standalone sketch of this interaction
> appears after the quoted text below.)
>
> I have taken one reading just to see the impact after fixing this
> problem in the patch.
>
> work_mem: 40 MB
> (lossy page count)
>
> Query       Head      Patch
> 6         995223     733087
> 14        337894     206824
> 15        995417     798817
> 20       1654016    1588498
>
> Still, we see a good reduction in the lossy page count.  I will
> repeat the test at different work_mem settings and for different
> values of TBM_FILLFACTOR and share the numbers soon.
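
For illustration, here is a minimal standalone sketch of the interaction
described in the quoted text.  This is not the actual tidbitmap.c code:
FakeTBM and lossify_sketch are invented names, and TBM_FILLFACTOR is
assumed to be 0.9 as proposed.

    #include <limits.h>

    #define Min(a, b)       ((a) < (b) ? (a) : (b))
    #define TBM_FILLFACTOR  0.9     /* proposed: stop lossifying at 90% full */

    typedef struct
    {
        int         nentries;       /* current number of bitmap entries */
        int         maxentries;     /* limit derived from work_mem */
    } FakeTBM;

    static void
    lossify_sketch(FakeTBM *tbm)
    {
        /* The patched loop: lossify until the bitmap is only 90% full. */
        while (tbm->nentries > tbm->maxentries * TBM_FILLFACTOR)
            tbm->nentries--;        /* stand-in for lossifying one chunk */

        /*
         * The pre-existing escape hatch.  Lossification now stops at
         * roughly 0.9 * maxentries, which is always above maxentries / 2,
         * so this condition fires on every call and doubles maxentries,
         * quietly letting the bitmap use more memory than work_mem
         * intended.
         */
        if (tbm->nentries > tbm->maxentries / 2)
            tbm->maxentries = Min(tbm->nentries, (INT_MAX - 1) / 2) * 2;
    }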

I haven't yet finished measuring the performance of the executor
lossification change.  Meanwhile, I have addressed some of the comments
on the optimiser change and measured performance again.  I still see a
good improvement (almost 2x for some of the queries), and with the new
method of calculating lossy pages I no longer see a regression in Q14
(Q14 no longer changes its plan).

I used lossy_pages = max(0, total_pages - maxentries / 2), as
suggested by Alexander.
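
For reference, a standalone sketch of this calculation
(estimate_lossy_pages is a made-up name for this illustration; in the
patch the computation sits in the optimiser costing change):

    #include <math.h>

    /*
     * lossy_pages = max(0, total_pages - maxentries / 2)
     *
     * total_pages is the estimated number of heap pages the bitmap scan
     * touches; maxentries is how many exact entries fit in work_mem.
     * Roughly half of maxentries is assumed to survive as exact entries,
     * and everything beyond that is counted as lossy.
     */
    static double
    estimate_lossy_pages(double total_pages, double maxentries)
    {
        return fmax(0.0, total_pages - maxentries / 2.0);
    }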


Performance Results:

Machine: Intel, 56 cores (2 NUMA nodes)
work_mem: varies
TPC-H scale factor: 20
Median of 3 runs.

Plan abbreviations: BHS = Bitmap Heap Scan, PBHS = Parallel Bitmap Heap
Scan, SS = Seq Scan, PSS = Parallel Seq Scan.

work_mem = 4MB

Query    Patch (ms)     Head (ms)     Change in plan
4          4686.186      5039.295     PBHS -> PSS
5         26772.192     27500.800     BHS -> SS
6          6615.916      7760.005     PBHS -> PSS
8          6370.611     12407.731     PBHS -> PSS
15        17493.564     24242.256     BHS -> SS

work_mem = 20MB

Query    Patch (ms)     Head (ms)     Change in plan
6          6656.467      7469.961     PBHS -> PSS
8          6116.526     12300.784     PBHS -> PSS
15        17873.726     22913.421     BHS -> PSS

work_mem = 64MB

Query    Patch (ms)     Head (ms)     Change in plan
15        14900.881     27460.093     BHS -> PBHS


-- 
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com


