Re: Adjust tuples estimate for appendrels

From: Tender Wang


Richard Guo <guofenglinux@gmail.com> wrote on Mon, Jan 27, 2025 at 16:06:
(This was briefly discussed in [1], which primarily focuses on the
incremental sort regression.  So start a new thread for this topic.)

In set_append_rel_size(), we currently set rel->tuples to rel->rows
for an appendrel.  Generally, rel->tuples is the raw number of tuples
in the relation and rel->rows is the estimated number of tuples after
the relation's restriction clauses have been applied.  Although an
appendrel itself doesn't directly enforce any quals today, its child
relations may.  Therefore, setting rel->tuples equal to rel->rows for
an appendrel isn't always appropriate.

AFAICS, doing so can lead to issues in cost estimates.  For instance,
when estimating the number of distinct values from an appendrel, we
would not be able to adjust the estimate based on the restriction
selectivity (see estimate_num_groups()).
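
To make that concrete, here is a simplified sketch of the kind of
per-relation adjustment estimate_num_groups() applies (the real logic
in selfuncs.c is more elaborate, but the guard condition is the point
here):

    /*
     * Simplified sketch (not the literal selfuncs.c code): scale the
     * per-relation ndistinct estimate down when the restriction
     * clauses are expected to filter out some of the tuples.
     */
    if (reldistinct > 0 && rel->rows < rel->tuples)
        reldistinct *= rel->rows / rel->tuples;

    /*
     * If rel->tuples has been set equal to rel->rows, as happens for
     * an appendrel today, rel->rows < rel->tuples can never hold, so
     * the estimate is never scaled down by the child quals'
     * selectivity.
     */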

Attached is a patch that sets an appendrel's tuples to the total
number of tuples accumulated from each live child, which I believe
aligns better with reality.
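
In code terms the idea is roughly the following; this is a minimal
sketch of the accumulation in set_append_rel_size(), not the literal
patch:

    double      parent_rows = 0;
    double      parent_tuples = 0;

    /* within the loop over the appendrel's live children */
    parent_rows += childrel->rows;
    parent_tuples += childrel->tuples;

    rel->rows = parent_rows;
    /* was: rel->tuples = rel->rows */
    rel->tuples = parent_tuples;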

I basically agree with you.  I have one small suggestion: could we add
a comment explaining why we make this adjustment?  For example, by
adding the sentence you wrote above:
"doing so can lead to issues in cost estimates.  For instance,
when estimating the number of distinct values from an appendrel, we
would not be able to adjust the estimate based on the restriction
selectivity (see estimate_num_groups())."

The rest looks good to me.

Here's a simple example that demonstrates how this change improves
cost estimates in certain cases.

create table p (a int, b int, c float) partition by range(a);
create table p1 partition of p for values from (0) to (1000);
create table p2 partition of p for values from (1000) to (2000);

insert into p select i%2000, random(1, 100000), random(1, 100000) from
generate_series(1, 1000000)i;

analyze p;

explain analyze select b from p where c < 10000 group by b;

-- without this patch
 HashAggregate  (cost=18651.38..19568.54 rows=91716 width=4)
                (actual time=467.859..487.227 rows=63346 loops=1)

-- with this patch
 HashAggregate  (cost=18651.38..19275.60 rows=62422 width=4)
                (actual time=447.383..466.351 rows=63346 loops=1)
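
As a rough sanity check: about 10% of the 1,000,000 rows pass
c < 10000, so we expect roughly p = 100,000 rows drawn from
n = 100,000 possible b values, giving about
n * (1 - (1 - 1/n)^p) ≈ 100,000 * (1 - 1/e) ≈ 63,212 distinct values,
which is close to both the actual rows=63346 and the patched estimate.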

Unfortunately, I failed to come up with a stable test case that shows
a plan diff with this change.  So the attached patch does not include
a test case for now.

Yeah, it doesn't seem easy to write a test case that reflects this
change, since we usually run EXPLAIN with costs off in the regression
tests.  I think it's OK to go without a test case.


BTW, the comment within set_append_rel_size() says that:

   /*
    * Set "raw tuples" count equal to "rows" for the appendrel; needed
    * because some places assume rel->tuples is valid for any baserel.
    */

I wonder whether this assumption still holds.  If we create an empty
table, analyze it, and then use it in a query, the table ends up with
rel->tuples set to zero and rel->rows set to one, and that doesn't
cause any issues today.

[1] https://postgr.es/m/CAMbWs4-ocromEKMtVDH3RBMuAJQaQDK0qi4k6zOuvpOnMWZauw@mail.gmail.com

Thanks
Richard


--
Thanks,
Tender Wang