Re: Possible incorrect row estimation for Gather paths

From: Anthonin Bonnefoy
Subject: Re: Possible incorrect row estimation for Gather paths
Msg-id: CAO6_XqqQRSfZms8JRmOnikC_e8zJZuWPqzEezw_9HVyUw5bhUw@mail.gmail.com
In response to: Re: Possible incorrect row estimation for Gather paths (Rafia Sabih <rafia.pghackers@gmail.com>)
Responses: Re: Possible incorrect row estimation for Gather paths
List: pgsql-hackers
Hi Rafia,

Thanks for reviewing.

On Wed, Jul 10, 2024 at 4:54 PM Rafia Sabih <rafia.pghackers@gmail.com> wrote:
> But I don't quite understood the purpose of this,
> + total_groups = input_rel->rows;
> +
> + /*
> + * If the number of rows is unknown, fallback to gather rows
> + * estimation
> + */
> + if (total_groups == 0)
> + total_groups = gather_rows_estimate(input_path);
>
> why not just use total_groups = gather_rows_estimate(input_path), what
> is the importance of having total_groups = input_rel->rows?

The initial goal was to use the source tuples if available and avoid
possible rounding errors, though I realise that the difference would be
minimal. For example, 200K tuples and 3 workers would yield
int(int(200000 / 2.4) * 2.4) = 199999. That is probably not worth the
additional complexity, so I've updated the patch to just use
gather_rows_estimate.

> With respect to the change introduced by the patch in the regression
> test, I wonder if we should test it on the tables of a larger scale
> and check for performance issues.
> Imagine the case when Parallel Seq Scan on extremely_skewed s
> (cost=0.00..167.01 rows=1 width=4) returns 1000 rows instead of 1 ...
> I wonder which plan would perform better then or will there be a
> totally different plan.

For the extremely_skewed test, having the parallel seq scan return more
rows won't impact the Gather, since the Hash Join will reduce the number
of rows to 1. I've found an example where the plan changes with the
default settings:

CREATE TABLE simple (id SERIAL PRIMARY KEY, other bigint);
INSERT INTO simple SELECT generate_series(1, 500000), ceil(random() * 100);
ANALYZE simple;
EXPLAIN SELECT * FROM simple WHERE other < 10 ORDER BY id;

Unpatched:

 Gather Merge  (cost=9377.85..12498.60 rows=27137 width=12)
   Workers Planned: 1
   ->  Sort  (cost=8377.84..8445.68 rows=27137 width=12)
         Sort Key: id
         ->  Parallel Seq Scan on simple  (cost=0.00..6379.47 rows=27137 width=12)
               Filter: (other < 10)

Patched:

 Sort  (cost=12381.73..12492.77 rows=44417 width=12)
   Sort Key: id
   ->  Seq Scan on simple  (cost=0.00..8953.00 rows=44417 width=12)
         Filter: (other < 10)

Looking at the candidates, the previous Gather Merge now has an
estimated number of rows of 44418. The difference of 1 compared to the
other Gather Merge plan is due to rounding (26128 * 1.7 = 44417.6).

Plan 3
  ->  Gather Merge  (cost=9296.40..14358.75 rows=44418 width=12)
        Workers Planned: 1
        ->  Sort  (cost=8296.39..8361.71 rows=26128 width=12)
              Sort Key: id
              ->  Parallel Seq Scan on simple  (cost=0.00..6379.47 rows=26128 width=12)
                    Filter: (other < 10)
Plan 4
  ->  Gather Merge  (cost=9296.40..14358.63 rows=44417 width=12)
        Workers Planned: 1
        ->  Sort  (cost=8296.39..8361.71 rows=26128 width=12)
              Sort Key: id
              ->  Parallel Seq Scan on simple  (cost=0.00..6379.47 rows=26128 width=12)
                    Filter: (other < 10)
Plan 5
  ->  Sort  (cost=12381.73..12492.77 rows=44417 width=12)
        Sort Key: id
        ->  Seq Scan on simple  (cost=0.00..8953.00 rows=44417 width=12)
              Filter: (other < 10)

The Sort plan is slightly slower than the Gather Merge plan, 100ms on
average against 83ms, but the Gather Merge comes at the additional cost
of creating and using a parallel worker. The unpatched row estimation
makes the parallel plan look cheaper, and running a parallel query for a
17ms improvement doesn't seem like a good trade.

I've also realised from the comments in optimizer.h that
nodes/pathnodes.h should not be included there, and fixed that.

Regards,
Anthonin
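To make the rounding round trip above concrete, here is a minimal
standalone C sketch (not part of the patch). The 2.4 divisor is an
assumption matching what get_parallel_divisor() in costsize.c computes
for two parallel workers plus a leader contribution of 0.4:

#include <stdio.h>

int
main(void)
{
	double		parallel_divisor = 2.4;	/* 2 workers + 0.4 leader share */
	double		source_rows = 200000;

	/* per-worker estimate, truncated to an integer as in the example */
	int			per_worker = (int) (source_rows / parallel_divisor);

	/* scaling back up loses one row: int(83333 * 2.4) = 199999 */
	printf("per worker: %d, gathered: %d\n",
		   per_worker, (int) (per_worker * parallel_divisor));
	return 0;
}

This prints "per worker: 83333, gathered: 199999", showing the one-row
loss from the truncate-then-rescale round trip being discussed.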
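For reference, a rough sketch of what the gather_rows_estimate()
fallback amounts to, based on the discussion in this thread rather than
on the committed code; it assumes clamp_row_est() and
get_parallel_divisor() are visible to the caller (the latter is a
costsize.c helper):

static double
gather_rows_estimate_sketch(Path *partial_path)
{
	/* scale the per-worker estimate back up to a total row count */
	double		rows = partial_path->rows * get_parallel_divisor(partial_path);

	return clamp_row_est(rows);
}

With one planned worker the divisor is 1 + (1 - 0.3 * 1) = 1.7, so
26128 * 1.7 = 44417.6, which matches the 44417/44418 estimates in the
candidate plans above; the off-by-one between them is just the rounding
mode applied to that product.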