Re: Disk-based hash aggregate's cost model - Mailing list pgsql-hackers

From Tomas Vondra
Subject Re: Disk-based hash aggregate's cost model
Msg-id 20200906212112.nzoy5ytrzjjodpfh@development
In response to Re: Disk-based hash aggregate's cost model  (Jeff Davis <pgsql@j-davis.com>)
Responses Re: Disk-based hash aggregate's cost model  (Jeff Davis <pgsql@j-davis.com>)
List pgsql-hackers
Hi,

I've tested the costing changes on the simplified TPC-H query, on two
different machines, and it seems like a clear improvement.

This is using the same cost/duration measure, which I think is a pretty
neat way to look at this. Sure, it's imperfect (it depends on which cost
and duration you actually take, etc.), but it makes the comparisons
easier, and for simple queries it does not matter that much.
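
To be a bit more concrete, the per-case cost and duration can be
collected roughly like this (the GROUP BY query below is only a
stand-in, not the exact simplified TPC-H query I ran):

     SET work_mem = '4MB';

     -- planner's total cost estimate for the chosen plan
     EXPLAIN
     SELECT l_partkey, sum(l_quantity) FROM lineitem GROUP BY l_partkey;

     -- actual execution time of the same plan
     EXPLAIN (ANALYZE, TIMING OFF)
     SELECT l_partkey, sum(l_quantity) FROM lineitem GROUP BY l_partkey;

     -- cost/duration = total cost from the first EXPLAIN, divided by
     -- the execution time reported by the second one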

The costing clearly depends on parameters like random_page_cost and how
it matches the hardware, but for the machine with SSD and default
random_page_cost the effect looks like this:

      work_mem    sort    master    patched
     ---------------------------------------
           1MB     249        95        215
           2MB     256        89        187
           4MB     233        90        192
           8MB     227        70        124
          16MB     245        67        118
          32MB     261        63        111
          64MB     256        59        104
         256MB     266        55        102

and with random_page_cost reduced to 1.5 it looks like this:

      work_mem       sort    master    patched
     ------------------------------------------
           1MB        221        63        150
           2MB        227        64        133
           4MB        214        65        137
           8MB        214        57         95
          16MB        232        53         90
          32MB        247        50         85
          64MB        249        47         80
         256MB        258        46         77

And on a machine with SATA RAID storage it looks like this:

      work_mem       sort    master   patched
     -----------------------------------------
           1MB        102        41        94
           2MB        101        34        77
           4MB         99        35        78
           8MB         98        35        79
          16MB         98        25        50
          32MB        106        26        51
          64MB        106        26        51
         256MB        105        29        50

So yeah, the patched costing is much closer to sort (from the point of
view of this cost/duration metric), although for higher work_mem values
there's still a clear gap where the hashing seems to be under-costed by
a factor of ~2 or more.

I think this is simply showing that the effect of increasing work_mem
is much more pronounced for sort/groupagg than for hashagg. For example,
on the SSD machine the duration changes like this:

     work_mem    hashagg    groupagg
    ---------------------------------
          1MB        217         201
          2MB        178         195
          4MB        176         186
          8MB        160         176
         16MB        168         163
         32MB        180         153
         64MB        189         143
        256MB        204         138

and the machine with SATA RAID storage seems to behave in a similar way
(although the difference is smaller).
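
In case someone wants to reproduce this part, the hashagg vs. groupagg
durations can be collected by toggling enable_hashagg for each work_mem
value, roughly like this (again, the query is only a stand-in):

     SET work_mem = '1MB';

     -- hashagg duration
     SET enable_hashagg = on;
     EXPLAIN (ANALYZE, TIMING OFF)
     SELECT l_partkey, sum(l_quantity) FROM lineitem GROUP BY l_partkey;

     -- groupagg duration (planner falls back to Sort + GroupAggregate)
     SET enable_hashagg = off;
     EXPLAIN (ANALYZE, TIMING OFF)
     SELECT l_partkey, sum(l_quantity) FROM lineitem GROUP BY l_partkey;

     -- and the same pair repeated for 2MB, 4MB, ..., 256MB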

So in general I think this costing change is reasonable. It might not go
far enough, but it certainly makes it easier to tweak the rest by
changing random_page_cost etc. I'm not sure if we need/should tweak the
costing to reduce the effect of work_mem (on hashagg).

regards

-- 
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services


