From: Tom Lane
Subject: Re: PseudoPartitioning and agregates
Date:
Msg-id: 18502.1116802011@sss.pgh.pa.us
In response to: PseudoPartitioning and agregates (Sokolov Yura <falcon@intercable.ru>)
Responses: Re: PseudoPartitioning and agregates (Greg Stark <gsstark@mit.edu>)
List: pgsql-hackers

Sokolov Yura <falcon@intercable.ru> writes:
> I think, postgres can perfoms aggregate on each table in union first,
> and then merge results.

I looked into this because it did not make a lot of sense to me.  The
aggregates you are testing with (count, sum, max) are all perfectly
linear in the number of input values, so it absolutely should not save
any time to divide up and then recombine the input, especially not if
the recombination takes some actual work (like a join).
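
For reference, the shape of the test query, reconstructed from the plans quoted
below (table, view, and column names are read off the EXPLAIN output and may not
match the original setup exactly), was a single aggregation over a UNION ALL of
two like tables:

    SELECT grp, count(*) AS c, sum(amount) AS s, max(amount) AS m
    FROM (SELECT grp, amount FROM t
          UNION ALL
          SELECT grp, amount FROM t1) AS t_union
    GROUP BY grp;

The "pseudo-partitioned" variant aggregates each table separately and then merges
the partial results; a sketch of that form follows the second plan further down.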

What I found was that the reason for the difference was the overhead of
SubqueryScan and Append nodes:

> HashAggregate  (cost=23830.52..23832.02 rows=200 width=8) (actual time=22547.272..22586.130 rows=5243 loops=1)
>   ->  Subquery Scan t_union  (cost=0.00..18587.64 rows=524288 width=8) (actual time=0.204..17863.444 rows=524288 loops=1)
>         ->  Append  (cost=0.00..13344.76 rows=524288 width=12) (actual time=0.193..12990.177 rows=524288 loops=1)
>               ->  Subquery Scan "*SELECT* 1"  (cost=0.00..6684.88 rows=262144 width=12) (actual time=0.186..4488.981 rows=262144 loops=1)
>                     ->  Seq Scan on t  (cost=0.00..4063.44 rows=262144 width=12) (actual time=0.163..1915.213 rows=262144 loops=1)
>               ->  Subquery Scan "*SELECT* 2"  (cost=0.00..6659.88 rows=262144 width=12) (actual time=0.225..4558.788 rows=262144 loops=1)
>                     ->  Seq Scan on t1  (cost=0.00..4038.44 rows=262144 width=12) (actual time=0.208..1798.410 rows=262144 loops=1)
> Total runtime: 22634.454 ms
> (well, actual time is 2375 ms  Postgres 8.0.1 Slackware 10.0)

EXPLAIN ANALYZE overstates the penalty because its per-plan-node
instrumentation overhead is pretty high, but nonetheless it's
clear that the actually useful work (the two seqscans and the
HashAggregate) is only accounting for a portion of the runtime.
The reason your join query wins is that only a much smaller number of
tuples have to pass through multiple levels of plan nodes.
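
(A generic way to gauge that overhead, not something from the original report, is to
time the same statement with and without EXPLAIN ANALYZE in psql; the gap between the
two elapsed times is mostly per-node instrumentation cost.  Using the union_table
query from the run further down:)

    \timing
    EXPLAIN ANALYZE SELECT grp, count(*) AS c, sum(amount) AS s, max(amount) AS m
        FROM union_table GROUP BY grp;
    SELECT grp, count(*) AS c, sum(amount) AS s, max(amount) AS m
        FROM union_table GROUP BY grp;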

In the above example the Subquery Scan nodes aren't really doing
anything useful at all: they have neither any selection (filter
conditions) nor any projection (the output columns are the same
as the input, though this is not shown by EXPLAIN).  They are put there
by the planner because there are cases where they *are* needed,
eg to do type conversion when UNION'ing unlike column types.
But we could try harder to optimize them out.
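
A hypothetical illustration of the case where the node is genuinely needed (not taken
from the original thread): if the arms of the UNION yield different column types, the
planner has to coerce one arm to the union's common type, which is exactly the sort of
projection described above.

    CREATE TABLE a (x integer);
    CREATE TABLE b (y numeric);

    -- x must be coerced to numeric so both arms produce the same output type,
    -- so the scan over "a" cannot simply be pulled up unchanged.
    SELECT x FROM a
    UNION ALL
    SELECT y FROM b;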

I have committed some changes in CVS tip to get rid of useless
Subquery Scan nodes.  Your example now looks like

save=# explain analyze SELECT grp,count(*) AS c,sum(amount) AS s,max(amount) AS m FROM union_table GROUP BY grp;
                                                      QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------
 HashAggregate  (cost=24311.52..24313.02 rows=200 width=8) (actual time=15201.186..15261.184 rows=5243 loops=1)
   ->  Append  (cost=0.00..13825.76 rows=524288 width=12) (actual time=0.236..7519.033 rows=524288 loops=1)
         ->  Seq Scan on t1  (cost=0.00..4291.44 rows=262144 width=12) (actual time=0.205..2071.102 rows=262144 loops=1)
         ->  Seq Scan on t2  (cost=0.00..4291.44 rows=262144 width=12) (actual time=0.095..1743.434 rows=262144 loops=1)
 Total runtime: 15292.082 ms
(5 rows)

The Subquery Scans also disappear from your more complex query,
but since they weren't processing nearly as many tuples, there's
not much improvement there:
                                                          QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------
 Merge Full Join  (cost=14207.24..14964.22 rows=33081 width=40) (actual time=11339.442..11457.087 rows=5243 loops=1)
   Merge Cond: ("outer".grp = "inner".grp)
   ->  Sort  (cost=7100.32..7106.65 rows=2532 width=20) (actual time=5672.726..5682.937 rows=2622 loops=1)
         Sort Key: t1.grp
         ->  HashAggregate  (cost=6912.88..6931.87 rows=2532 width=8) (actual time=5591.574..5624.523 rows=2622 loops=1)
               ->  Seq Scan on t1  (cost=0.00..4291.44 rows=262144 width=8) (actual time=0.212..1707.122 rows=262144 loops=1)
   ->  Sort  (cost=7106.91..7113.45 rows=2613 width=20) (actual time=5666.598..5676.735 rows=2622 loops=1)
         Sort Key: t2.grp
         ->  HashAggregate  (cost=6912.88..6932.48 rows=2613 width=8) (actual time=5584.098..5616.810 rows=2622 loops=1)
               ->  Seq Scan on t2  (cost=0.00..4291.44 rows=262144 width=8) (actual time=0.139..1707.351 rows=262144 loops=1)
 Total runtime: 11501.805 ms
(11 rows)
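
For completeness, here is a sketch of what that divide-and-recombine query presumably
looked like, reconstructed from the plan above (again, names and details are guesses
from the EXPLAIN output, not the poster's exact SQL): aggregate each table separately,
then merge the per-table results with a FULL JOIN, which is the "actual work" the
recombination costs.

    SELECT COALESCE(a.grp, b.grp)              AS grp,
           COALESCE(a.c, 0) + COALESCE(b.c, 0) AS c,   -- partial counts add
           COALESCE(a.s, 0) + COALESCE(b.s, 0) AS s,   -- partial sums add
           GREATEST(a.m, b.m)                  AS m    -- max of per-table maxes (GREATEST skips NULLs)
    FROM (SELECT grp, count(*) AS c, sum(amount) AS s, max(amount) AS m
          FROM t1 GROUP BY grp) a
    FULL JOIN
         (SELECT grp, count(*) AS c, sum(amount) AS s, max(amount) AS m
          FROM t2 GROUP BY grp) b
      ON a.grp = b.grp;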

The EXPLAIN ANALYZE overhead for the Append is still pretty heavy,
but when comparing actual runtimes for the two queries, they are
now very nearly the same.
        regards, tom lane

