Re: Todo: Teach planner to evaluate multiple windows in the optimal order - Mailing list pgsql-hackers

From Ankit Kumar Pandey
Subject Re: Todo: Teach planner to evaluate multiple windows in the optimal order
Date
Msg-id 5314d2fc-76b6-c4ea-c083-1ef11411e5be@gmail.com
In response to Re: Todo: Teach planner to evaluate multiple windows in the optimal order  (Ankit Kumar Pandey <itsankitkp@gmail.com>)
Responses Re: Todo: Teach planner to evaluate multiple windows in the optimal order
List pgsql-hackers
> I think more benchmarking is required
> so we can figure out if this is a corner case or a common case

I did some more benchmarks:

#1. AIM: Pushdown column whose size is very high

create table test(a int, b int, c text);
insert into test select a,b,c from generate_series(1,1000)a, generate_series(1,1000)b, repeat(md5(random()::text),
999)c;

explain (analyze, costs off) select count(*) over (order by a), row_number() over (order by a, b) from test order by
a,b,c;
                                                 QUERY PLAN
----------------------------------------------------------------------------------------------------------
  Incremental Sort (actual time=1161.605..6577.141 rows=1000000 loops=1)
    Sort Key: a, b, c
    Presorted Key: a, b
    Full-sort Groups: 31250  Sort Method: quicksort  Average Memory: 39kB  Peak Memory: 39kB
    ->  WindowAgg (actual time=1158.896..5819.460 rows=1000000 loops=1)
          ->  WindowAgg (actual time=1154.614..3391.537 rows=1000000 loops=1)
                ->  Gather Merge (actual time=1154.602..2404.125 rows=1000000 loops=1)
                      Workers Planned: 2
                      Workers Launched: 2
                      ->  Sort (actual time=1118.326..1295.743 rows=333333 loops=3)
                            Sort Key: a, b
                            Sort Method: external merge  Disk: 145648kB
                            Worker 0:  Sort Method: external merge  Disk: 140608kB
                            Worker 1:  Sort Method: external merge  Disk: 132792kB
                            ->  Parallel Seq Scan on test (actual time=0.018..169.319 rows=333333 loops=3)
  Planning Time: 0.091 ms
  Execution Time: 6816.616 ms
(17 rows)

The planner correctly chose the faster path (which was not the path with the pushed-down column).
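For reference, the planner's choice can be cross-checked by forcing the alternative plan with the standard enable_incremental_sort GUC and comparing timings (a sketch; the query is the one from #1 above):

```sql
-- Steer the planner away from the incremental-sort path so the
-- alternative (pushed-down column) plan can be timed for comparison.
set enable_incremental_sort = off;

explain (analyze, costs off)
select count(*) over (order by a),
       row_number() over (order by a, b)
from test
order by a, b, c;

reset enable_incremental_sort;
```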

#2. AIM: Compare strict vs incremental sorts with respect to large data sizes

The patched version is faster: with an external merge sort, disk I/O is the main
bottleneck, so sorting an extra column has no major impact. This holds when work_mem
is very small.

With larger work_mem, the difference between the patched version and master is
minimal; both provide comparable performance.
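The work_mem dependence can be reproduced by re-running the same query at the two extremes and watching the Sort Method in the plan switch between external merge (disk) and quicksort (memory) (a sketch, reusing the query from #1; the specific work_mem values are illustrative):

```sql
-- Small work_mem: expect "Sort Method: external merge  Disk: ...kB"
set work_mem = '64kB';
explain (analyze, costs off)
select count(*) over (order by a),
       row_number() over (order by a, b)
from test
order by a, b, c;

-- Large work_mem: expect the sort to fit in memory (quicksort)
set work_mem = '1GB';
explain (analyze, costs off)
select count(*) over (order by a),
       row_number() over (order by a, b)
from test
order by a, b, c;

reset work_mem;
```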

I tried permutations of a few cases we have already covered, but did not see anything alarming in those.


> I'm just unsure if we should write this off as the expected behaviour
> of Sort and continue with the patch or delay the whole thing until we
> make some improvements to sort.

I am not seeing any other cases where the patched version is consistently slower.


Thanks,
Ankit




