Thread: Statistical aggregate functions are not working with partitionwise aggregate

Statistical aggregate functions are not working with partitionwise aggregate

From
Rajkumar Raghuwanshi
Date:
Hi,

On PG head, some statistical aggregate functions are not giving correct output when partitionwise aggregate is enabled, while the same queries work correctly on v11.

Below are some examples.

CREATE TABLE tbl(a int2,b float4) partition by range(a);
create table tbl_p1 partition of tbl for values from (minvalue) to (0);
create table tbl_p2 partition of tbl for values from (0) to (maxvalue);
insert into tbl values (-1,-1),(0,0),(1,1),(2,2);

--when partitionwise aggregate is off
postgres=# SELECT regr_count(b, a) FROM tbl;
 regr_count
------------
          4
(1 row)
postgres=# SELECT regr_avgx(b, a), regr_avgy(b, a) FROM tbl;
 regr_avgx | regr_avgy
-----------+-----------
       0.5 |       0.5
(1 row)
postgres=# SELECT corr(b, a) FROM tbl;
 corr
------
    1
(1 row)

--when partitionwise aggregate is on
postgres=# SET enable_partitionwise_aggregate = true;
SET
postgres=# SELECT regr_count(b, a) FROM tbl;
 regr_count
------------
          0
(1 row)
postgres=# SELECT regr_avgx(b, a), regr_avgy(b, a) FROM tbl;
 regr_avgx | regr_avgy
-----------+-----------
           |         
(1 row)
postgres=# SELECT corr(b, a) FROM tbl;
 corr
------
    
(1 row)

Thanks & Regards,
Rajkumar Raghuwanshi
QMG, EnterpriseDB Corporation


On Fri, May 3, 2019 at 2:56 PM Rajkumar Raghuwanshi <rajkumar.raghuwanshi@enterprisedb.com> wrote:
Hi,

On PG head, some statistical aggregate functions are not giving correct output when partitionwise aggregate is enabled, while the same queries work correctly on v11.

I had a quick look at this and observed that something is broken with PARTIAL aggregation.

I can reproduce the same issue with a larger dataset, which results in a parallel scan.

CREATE TABLE tbl1(a int2,b float4) partition by range(a);
create table tbl1_p1 partition of tbl1 for values from (minvalue) to (0);
create table tbl1_p2 partition of tbl1 for values from (0) to (maxvalue);
insert into tbl1 select i%2, i from generate_series(1, 1000000) i;

# SELECT regr_count(b, a) FROM tbl1;
 regr_count
------------
          0
(1 row)

postgres:5432 [120536]=# explain SELECT regr_count(b, a) FROM tbl1;
                                           QUERY PLAN                                          
------------------------------------------------------------------------------------------------
 Finalize Aggregate  (cost=15418.08..15418.09 rows=1 width=8)
   ->  Gather  (cost=15417.87..15418.08 rows=2 width=8)
         Workers Planned: 2
         ->  Partial Aggregate  (cost=14417.87..14417.88 rows=1 width=8)
               ->  Parallel Append  (cost=0.00..11091.62 rows=443500 width=6)
                     ->  Parallel Seq Scan on tbl1_p2  (cost=0.00..8850.00 rows=442500 width=6)
                     ->  Parallel Seq Scan on tbl1_p1  (cost=0.00..24.12 rows=1412 width=6)
(7 rows)

postgres:5432 [120536]=# set max_parallel_workers_per_gather to 0;
SET
postgres:5432 [120536]=# SELECT regr_count(b, a) FROM tbl1;
 regr_count
------------
    1000000
(1 row)

After looking further, it seems that it got broken by the following commit:

commit a9c35cf85ca1ff72f16f0f10d7ddee6e582b62b8
Author: Andres Freund <andres@anarazel.de>
Date:   Sat Jan 26 14:17:52 2019 -0800

    Change function call information to be variable length.


This commit is quite large, so I could not narrow down the exact cause.

Thanks




--
Jeevan Chalke
Technical Architect, Product Development
EnterpriseDB Corporation
The Enterprise PostgreSQL Company

Statistical aggregate functions are not working with PARTIAL aggregation

From
Rajkumar Raghuwanshi
Date:
Hi,
As this issue is reproducible without partitionwise aggregate as well, I am changing the email subject from "Statistical aggregate functions are not working with partitionwise aggregate" to "Statistical aggregate functions are not working with PARTIAL aggregation".

The originally reported test case and discussion can be found at the link below.

Thanks & Regards,
Rajkumar Raghuwanshi
QMG, EnterpriseDB Corporation


On Fri, May 3, 2019 at 5:26 PM Jeevan Chalke <jeevan.chalke@enterprisedb.com> wrote:



Re: Statistical aggregate functions are not working with PARTIAL aggregation

From
Kyotaro HORIGUCHI
Date:
Hello.

At Tue, 7 May 2019 14:39:55 +0530, Rajkumar Raghuwanshi <rajkumar.raghuwanshi@enterprisedb.com> wrote in
<CAKcux6=YBMCntcafSs_22dS1ab6mGay_QUaHx-nvg+_FVPMg3Q@mail.gmail.com>
> Hi,
> As this issue is reproducible without partition-wise aggregate also,
> changing email subject from "Statistical aggregate functions are not
> working with partitionwise aggregate " to "Statistical aggregate functions
> are not working with PARTIAL aggregation".
> 
> original reported test case and discussion can be found at below link.
> https://www.postgresql.org/message-id/flat/CAKcux6%3DuZEyWyLw0N7HtR9OBc-sWEFeByEZC7t-KDf15FKxVew%40mail.gmail.com

The immediate reason for the behavior seems to be that
EEOP_AGG_STRICT_INPUT_CHECK_ARGS regards the non-existent second
argument as null.

In the invalid-deserialfn_oid case in ExecBuildAggTrans, args[1] is
initialized using the second argument of the function (int8pl() in
this case), so the correct numTransInputs here is 1, not 2.

The attached patch makes at least the test case work correctly, and
this seems to be the only instance of this issue.

regards.

-- 
Kyotaro Horiguchi
NTT Open Source Software Center


diff --git a/src/backend/executor/execExpr.c b/src/backend/executor/execExpr.c
index 0fb31f5c3d..9f83dbe51e 100644
--- a/src/backend/executor/execExpr.c
+++ b/src/backend/executor/execExpr.c
@@ -3009,8 +3009,9 @@ ExecBuildAggTrans(AggState *aggstate, AggStatePerPhase phase,
             {
                 /*
                  * Start from 1, since the 0th arg will be the transition
-                 * value
+                 * value. Exclude it from numTransInputs.
                  */
+                pertrans->numTransInputs--;
                 ExecInitExprRec(source_tle->expr, state,
                                 &trans_fcinfo->args[argno + 1].value,
                                 &trans_fcinfo->args[argno + 1].isnull);

Re: Statistical aggregate functions are not working with PARTIAL aggregation

From
Kyotaro HORIGUCHI
Date:
At Tue, 07 May 2019 20:47:28 +0900 (Tokyo Standard Time), Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp> wrote in
<20190507.204728.233299873.horiguchi.kyotaro@lab.ntt.co.jp>

This behavior was introduced by 69c3936a14 (in v11).  At that time
FunctionCallInfoData was palloc0'ed and had the fixed-length members
arg[6] and argnull[7], so nulls[1] was always false even when nargs
= 1, and the issue was not revealed.

After a9c35cf85c (in v12), the same check is done on a
FunctionCallInfoData that has a NullableDatum args[] with only the
required number of elements.  In that case args[1] is outside the
palloc'ed memory, so the issue is revealed.

On a second look, it seems to me that the right thing to do here is
to set numTransInputs to numInputs instead of numArguments in the
combining step.

regards.

-- 
Kyotaro Horiguchi
NTT Open Source Software Center
From c5dcbc32646e8e4865307bb0525a143da573e240 Mon Sep 17 00:00:00 2001
From: Kyotaro Horiguchi <horiguchi.kyotaro@lab.ntt.co.jp>
Date: Wed, 8 May 2019 13:01:01 +0900
Subject: [PATCH] Give correct number of arguments to combine function.

A combine function's first parameter is used as the transition value,
so the correct number of transition inputs for the function is the
number of inputs, not the number of arguments.
---
 src/backend/executor/nodeAgg.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/src/backend/executor/nodeAgg.c b/src/backend/executor/nodeAgg.c
index d01fc4f52e..e13a8cb304 100644
--- a/src/backend/executor/nodeAgg.c
+++ b/src/backend/executor/nodeAgg.c
@@ -2912,7 +2912,8 @@ build_pertrans_for_aggref(AggStatePerTrans pertrans,
     pertrans->aggtranstype = aggtranstype;
 
     /* Detect how many arguments to pass to the transfn */
-    if (AGGKIND_IS_ORDERED_SET(aggref->aggkind))
+    if (AGGKIND_IS_ORDERED_SET(aggref->aggkind) ||
+        DO_AGGSPLIT_COMBINE(aggstate->aggsplit))
         pertrans->numTransInputs = numInputs;
     else
         pertrans->numTransInputs = numArguments;
-- 
2.16.3


Re: Statistical aggregate functions are not working with PARTIAL aggregation

From
Kyotaro HORIGUCHI
Date:
At Wed, 08 May 2019 13:06:36 +0900 (Tokyo Standard Time), Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp> wrote in
<20190508.130636.184826233.horiguchi.kyotaro@lab.ntt.co.jp>

By the way, as mentioned above, this issue has existed since 11 but
only causes harm in 12.  Is this an open item, or an older bug?

regards.

-- 
Kyotaro Horiguchi
NTT Open Source Software Center




On Wed, May 8, 2019 at 12:09 AM Kyotaro HORIGUCHI
<horiguchi.kyotaro@lab.ntt.co.jp> wrote:
> By the way, as mentioned above, this issue has existed since 11 but
> only causes harm in 12.  Is this an open item, or an older bug?

Sounds more like an open item to me.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



On Wed, May 08, 2019 at 01:09:23PM +0900, Kyotaro HORIGUCHI wrote:
>By the way, as mentioned above, this issue has existed since 11 but
>only causes harm in 12. Is this an open item, or an older bug?
>

It is an open item - there's a section for older bugs, but considering
it's harmless in 11 (at least, that's my understanding from the
discussion above), I've added it as a regular open item.

I've linked it to a9c35cf85c, which seems to be the culprit commit.

regards

-- 
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services 



Don't we have a build farm animal that runs under valgrind that would
have caught this?



Re: Statistical aggregate functions are not working with PARTIAL aggregation

From
Andrew Dunstan
Date:
On 5/8/19 12:41 PM, Greg Stark wrote:
> Don't we have a build farm animal that runs under valgrind that would
> have caught this?
>
>

There are two animals running under valgrind: lousyjack and skink.


cheers


andrew


-- 
Andrew Dunstan                https://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: Statistical aggregate functions are not working with PARTIAL aggregation

From
Kyotaro HORIGUCHI
Date:
Hello. There is an unfortunate story behind this issue.

At Wed, 8 May 2019 14:56:25 -0400, Andrew Dunstan <andrew.dunstan@2ndquadrant.com> wrote in
<7969b496-096a-bf9b-2a03-4706baa4c48e@2ndQuadrant.com>
> 
> On 5/8/19 12:41 PM, Greg Stark wrote:
> > Don't we have a build farm animal that runs under valgrind that would
> > have caught this?
> >
> >
> 
> There are two animals running under valgrind: lousyjack and skink.

Valgrind doesn't detect the overrunning read because the block has no
'MEMNOACCESS' region, since the requested size is exactly 64 bytes.

The attached patch lets valgrind detect the overrun.

==00:00:00:22.959 20254== VALGRINDERROR-BEGIN
==00:00:00:22.959 20254== Conditional jump or move depends on uninitialised value(s)
==00:00:00:22.959 20254==    at 0x88A838: ExecInterpExpr (execExprInterp.c:1553)
==00:00:00:22.959 20254==    by 0x88AFD5: ExecInterpExprStillValid (execExprInterp.c:1769)
==00:00:00:22.959 20254==    by 0x8C3503: ExecEvalExprSwitchContext (executor.h:307)
==00:00:00:22.959 20254==    by 0x8C4653: advance_aggregates (nodeAgg.c:679)

regards.

-- 
Kyotaro Horiguchi
NTT Open Source Software Center

diff --git a/src/backend/executor/nodeAgg.c b/src/backend/executor/nodeAgg.c
index d01fc4f52e..7c6eab6d94 100644
--- a/src/backend/executor/nodeAgg.c
+++ b/src/backend/executor/nodeAgg.c
@@ -2935,7 +2935,7 @@ build_pertrans_for_aggref(AggStatePerTrans pertrans,
         fmgr_info_set_expr((Node *) combinefnexpr, &pertrans->transfn);
 
         pertrans->transfn_fcinfo =
-            (FunctionCallInfo) palloc(SizeForFunctionCallInfo(2));
+            (FunctionCallInfo) palloc(SizeForFunctionCallInfo(2) + 1);
         InitFunctionCallInfoData(*pertrans->transfn_fcinfo,
                                  &pertrans->transfn,
                                  2,

Re: Statistical aggregate functions are not working with PARTIALaggregation

From
Kyotaro HORIGUCHI
Date:
At Thu, 09 May 2019 11:17:46 +0900 (Tokyo Standard Time), Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp> wrote in
<20190509.111746.217492977.horiguchi.kyotaro@lab.ntt.co.jp>
> Valgrind doesn't detect the overruning read since the block
> doesn't has 'MEMNOACCESS' region, since the requested size is
> just 64 bytes.
> 
> Thus the attached patch let valgrind detect the overrun.

So the attached patch makes palloc always attach the MEMNOACCESS
region and a sentinel byte.  The issue under discussion is detected
with this patch as well.  (But in return, memory usage gets larger.)

regards.

-- 
Kyotaro Horiguchi
NTT Open Source Software Center
From 5c25b69822fdd86d05871f61cf30e47f514853ae Mon Sep 17 00:00:00 2001
From: Kyotaro Horiguchi <horiguchi.kyotaro@lab.ntt.co.jp>
Date: Thu, 9 May 2019 13:18:33 +0900
Subject: [PATCH] Always put sentinel in allocated memory blocks.

Currently, allocated memory chunks don't get a sentinel byte and a
valgrind NOACCESS region if the requested size is equal to the chunk
size.  This behavior largely diminishes the chance of overrun
detection.  This patch ensures allocated memory chunks are always
armed with both when MEMORY_CONTEXT_CHECKING is defined.
---
 src/backend/utils/mmgr/aset.c       | 51 +++++++++++++++++++++----------------
 src/backend/utils/mmgr/generation.c | 35 +++++++++++++++----------
 src/backend/utils/mmgr/slab.c       | 23 ++++++++++-------
 3 files changed, 64 insertions(+), 45 deletions(-)

diff --git a/src/backend/utils/mmgr/aset.c b/src/backend/utils/mmgr/aset.c
index 08aff333a4..15ef5c5afa 100644
--- a/src/backend/utils/mmgr/aset.c
+++ b/src/backend/utils/mmgr/aset.c
@@ -297,6 +297,13 @@ static const MemoryContextMethods AllocSetMethods = {
 #endif
 };
 
+/* to keep room for sentinel in allocated chunks */
+#ifdef MEMORY_CONTEXT_CHECKING
+#define REQUIRED_SIZE(x) ((x) + 1)
+#else
+#define REQUIRED_SIZE(x) (x)
+#endif
+
 /*
  * Table for AllocSetFreeIndex
  */
@@ -726,9 +733,9 @@ AllocSetAlloc(MemoryContext context, Size size)
      * If requested size exceeds maximum for chunks, allocate an entire block
      * for this request.
      */
-    if (size > set->allocChunkLimit)
+    if (REQUIRED_SIZE(size) > set->allocChunkLimit)
     {
-        chunk_size = MAXALIGN(size);
+        chunk_size = MAXALIGN(REQUIRED_SIZE(size));
         blksize = chunk_size + ALLOC_BLOCKHDRSZ + ALLOC_CHUNKHDRSZ;
         block = (AllocBlock) malloc(blksize);
         if (block == NULL)
@@ -742,8 +749,8 @@ AllocSetAlloc(MemoryContext context, Size size)
 #ifdef MEMORY_CONTEXT_CHECKING
         chunk->requested_size = size;
         /* set mark to catch clobber of "unused" space */
-        if (size < chunk_size)
-            set_sentinel(AllocChunkGetPointer(chunk), size);
+        Assert (size < chunk_size);
+        set_sentinel(AllocChunkGetPointer(chunk), size);
 #endif
 #ifdef RANDOMIZE_ALLOCATED_MEMORY
         /* fill the allocated space with junk */
@@ -787,7 +794,7 @@ AllocSetAlloc(MemoryContext context, Size size)
      * If one is found, remove it from the free list, make it again a member
      * of the alloc set and return its data address.
      */
-    fidx = AllocSetFreeIndex(size);
+    fidx = AllocSetFreeIndex(REQUIRED_SIZE(size));
     chunk = set->freelist[fidx];
     if (chunk != NULL)
     {
@@ -800,8 +807,8 @@ AllocSetAlloc(MemoryContext context, Size size)
 #ifdef MEMORY_CONTEXT_CHECKING
         chunk->requested_size = size;
         /* set mark to catch clobber of "unused" space */
-        if (size < chunk->size)
-            set_sentinel(AllocChunkGetPointer(chunk), size);
+        Assert (size < chunk->size);
+        set_sentinel(AllocChunkGetPointer(chunk), size);
 #endif
 #ifdef RANDOMIZE_ALLOCATED_MEMORY
         /* fill the allocated space with junk */
@@ -824,7 +831,7 @@ AllocSetAlloc(MemoryContext context, Size size)
      * Choose the actual chunk size to allocate.
      */
     chunk_size = (1 << ALLOC_MINBITS) << fidx;
-    Assert(chunk_size >= size);
+    Assert(chunk_size >= REQUIRED_SIZE(size));
 
     /*
      * If there is enough room in the active allocation block, we will put the
@@ -959,8 +966,8 @@ AllocSetAlloc(MemoryContext context, Size size)
 #ifdef MEMORY_CONTEXT_CHECKING
     chunk->requested_size = size;
     /* set mark to catch clobber of "unused" space */
-    if (size < chunk->size)
-        set_sentinel(AllocChunkGetPointer(chunk), size);
+    Assert (size < chunk->size);
+    set_sentinel(AllocChunkGetPointer(chunk), size);
 #endif
 #ifdef RANDOMIZE_ALLOCATED_MEMORY
     /* fill the allocated space with junk */
@@ -996,10 +1003,10 @@ AllocSetFree(MemoryContext context, void *pointer)
 
 #ifdef MEMORY_CONTEXT_CHECKING
     /* Test for someone scribbling on unused space in chunk */
-    if (chunk->requested_size < chunk->size)
-        if (!sentinel_ok(pointer, chunk->requested_size))
-            elog(WARNING, "detected write past chunk end in %s %p",
-                 set->header.name, chunk);
+    Assert (chunk->requested_size < chunk->size);
+    if (!sentinel_ok(pointer, chunk->requested_size))
+        elog(WARNING, "detected write past chunk end in %s %p",
+             set->header.name, chunk);
 #endif
 
     if (chunk->size > set->allocChunkLimit)
@@ -1078,10 +1085,10 @@ AllocSetRealloc(MemoryContext context, void *pointer, Size size)
 
 #ifdef MEMORY_CONTEXT_CHECKING
     /* Test for someone scribbling on unused space in chunk */
-    if (chunk->requested_size < oldsize)
-        if (!sentinel_ok(pointer, chunk->requested_size))
-            elog(WARNING, "detected write past chunk end in %s %p",
-                 set->header.name, chunk);
+    Assert (chunk->requested_size < oldsize);
+    if (!sentinel_ok(pointer, chunk->requested_size))
+        elog(WARNING, "detected write past chunk end in %s %p",
+             set->header.name, chunk);
 #endif
 
     /*
@@ -1089,7 +1096,7 @@ AllocSetRealloc(MemoryContext context, void *pointer, Size size)
      * allocated area already is >= the new size.  (In particular, we always
      * fall out here if the requested size is a decrease.)
      */
-    if (oldsize >= size)
+    if (oldsize >= REQUIRED_SIZE(size))
     {
 #ifdef MEMORY_CONTEXT_CHECKING
         Size        oldrequest = chunk->requested_size;
@@ -1157,7 +1164,7 @@ AllocSetRealloc(MemoryContext context, void *pointer, Size size)
             elog(ERROR, "could not find block containing chunk %p", chunk);
 
         /* Do the realloc */
-        chksize = MAXALIGN(size);
+        chksize = MAXALIGN(REQUIRED_SIZE(size));
         blksize = chksize + ALLOC_BLOCKHDRSZ + ALLOC_CHUNKHDRSZ;
         block = (AllocBlock) realloc(block, blksize);
         if (block == NULL)
@@ -1197,8 +1204,8 @@ AllocSetRealloc(MemoryContext context, void *pointer, Size size)
         chunk->requested_size = size;
 
         /* set mark to catch clobber of "unused" space */
-        if (size < chunk->size)
-            set_sentinel(pointer, size);
+        Assert (size < chunk->size);
+        set_sentinel(pointer, size);
 #else                            /* !MEMORY_CONTEXT_CHECKING */
 
         /*
diff --git a/src/backend/utils/mmgr/generation.c b/src/backend/utils/mmgr/generation.c
index 02a23694cb..344f634ecb 100644
--- a/src/backend/utils/mmgr/generation.c
+++ b/src/backend/utils/mmgr/generation.c
@@ -178,6 +178,13 @@ static const MemoryContextMethods GenerationMethods = {
 #endif
 };
 
+/* to keep room for sentinel in allocated chunks */
+#ifdef MEMORY_CONTEXT_CHECKING
+#define REQUIRED_SIZE(x) ((x) + 1)
+#else
+#define REQUIRED_SIZE(x) (x)
+#endif
+
 /* ----------
  * Debug macros
  * ----------
@@ -341,7 +348,7 @@ GenerationAlloc(MemoryContext context, Size size)
     GenerationContext *set = (GenerationContext *) context;
     GenerationBlock *block;
     GenerationChunk *chunk;
-    Size        chunk_size = MAXALIGN(size);
+    Size        chunk_size = MAXALIGN(REQUIRED_SIZE(size));
 
     /* is it an over-sized chunk? if yes, allocate special block */
     if (chunk_size > set->blockSize / 8)
@@ -368,8 +375,8 @@ GenerationAlloc(MemoryContext context, Size size)
 #ifdef MEMORY_CONTEXT_CHECKING
         chunk->requested_size = size;
         /* set mark to catch clobber of "unused" space */
-        if (size < chunk_size)
-            set_sentinel(GenerationChunkGetPointer(chunk), size);
+        Assert (size < chunk_size);
+        set_sentinel(GenerationChunkGetPointer(chunk), size);
 #endif
 #ifdef RANDOMIZE_ALLOCATED_MEMORY
         /* fill the allocated space with junk */
@@ -446,8 +453,8 @@ GenerationAlloc(MemoryContext context, Size size)
 #ifdef MEMORY_CONTEXT_CHECKING
     chunk->requested_size = size;
     /* set mark to catch clobber of "unused" space */
-    if (size < chunk->size)
-        set_sentinel(GenerationChunkGetPointer(chunk), size);
+    Assert (size < chunk->size);
+    set_sentinel(GenerationChunkGetPointer(chunk), size);
 #endif
 #ifdef RANDOMIZE_ALLOCATED_MEMORY
     /* fill the allocated space with junk */
@@ -485,10 +492,10 @@ GenerationFree(MemoryContext context, void *pointer)
 
 #ifdef MEMORY_CONTEXT_CHECKING
     /* Test for someone scribbling on unused space in chunk */
-    if (chunk->requested_size < chunk->size)
-        if (!sentinel_ok(pointer, chunk->requested_size))
-            elog(WARNING, "detected write past chunk end in %s %p",
-                 ((MemoryContext) set)->name, chunk);
+    Assert (chunk->requested_size < chunk->size);
+    if (!sentinel_ok(pointer, chunk->requested_size))
+        elog(WARNING, "detected write past chunk end in %s %p",
+             ((MemoryContext) set)->name, chunk);
 #endif
 
 #ifdef CLOBBER_FREED_MEMORY
@@ -546,10 +553,10 @@ GenerationRealloc(MemoryContext context, void *pointer, Size size)
 
 #ifdef MEMORY_CONTEXT_CHECKING
     /* Test for someone scribbling on unused space in chunk */
-    if (chunk->requested_size < oldsize)
-        if (!sentinel_ok(pointer, chunk->requested_size))
-            elog(WARNING, "detected write past chunk end in %s %p",
-                 ((MemoryContext) set)->name, chunk);
+    Assert (chunk->requested_size < oldsize);
+    if (!sentinel_ok(pointer, chunk->requested_size))
+        elog(WARNING, "detected write past chunk end in %s %p",
+             ((MemoryContext) set)->name, chunk);
 #endif
 
     /*
@@ -562,7 +569,7 @@ GenerationRealloc(MemoryContext context, void *pointer, Size size)
      *
      * XXX Perhaps we should annotate this condition with unlikely()?
      */
-    if (oldsize >= size)
+    if (oldsize >= REQUIRED_SIZE(size))
     {
 #ifdef MEMORY_CONTEXT_CHECKING
         Size        oldrequest = chunk->requested_size;
diff --git a/src/backend/utils/mmgr/slab.c b/src/backend/utils/mmgr/slab.c
index 33d69239d9..97dc07e73a 100644
--- a/src/backend/utils/mmgr/slab.c
+++ b/src/backend/utils/mmgr/slab.c
@@ -155,6 +155,13 @@ static const MemoryContextMethods SlabMethods = {
 #endif
 };
 
+/* to keep room for sentinel in allocated chunks */
+#ifdef MEMORY_CONTEXT_CHECKING
+#define REQUIRED_SIZE(x) ((x) + 1)
+#else
+#define REQUIRED_SIZE(x) (x)
+#endif
+
 /* ----------
  * Debug macros
  * ----------
@@ -209,7 +216,7 @@ SlabContextCreate(MemoryContext parent,
         chunkSize = sizeof(int);
 
     /* chunk, including SLAB header (both addresses nicely aligned) */
-    fullChunkSize = sizeof(SlabChunk) + MAXALIGN(chunkSize);
+    fullChunkSize = sizeof(SlabChunk) + MAXALIGN(REQUIRED_SIZE(chunkSize));
 
     /* Make sure the block can store at least one chunk. */
     if (blockSize < fullChunkSize + sizeof(SlabBlock))
@@ -465,14 +472,12 @@ SlabAlloc(MemoryContext context, Size size)
 
 #ifdef MEMORY_CONTEXT_CHECKING
     /* slab mark to catch clobber of "unused" space */
-    if (slab->chunkSize < (slab->fullChunkSize - sizeof(SlabChunk)))
-    {
-        set_sentinel(SlabChunkGetPointer(chunk), size);
-        VALGRIND_MAKE_MEM_NOACCESS(((char *) chunk) +
-                                   sizeof(SlabChunk) + slab->chunkSize,
-                                   slab->fullChunkSize -
-                                   (slab->chunkSize + sizeof(SlabChunk)));
-    }
+    Assert (slab->chunkSize < (slab->fullChunkSize - sizeof(SlabChunk)));
+    set_sentinel(SlabChunkGetPointer(chunk), size);
+    VALGRIND_MAKE_MEM_NOACCESS(((char *) chunk) +
+                               sizeof(SlabChunk) + slab->chunkSize,
+                               slab->fullChunkSize -
+                               (slab->chunkSize + sizeof(SlabChunk)));
 #endif
 #ifdef RANDOMIZE_ALLOCATED_MEMORY
     /* fill the allocated space with junk */
-- 
2.16.3
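The sentinel mechanism that the patch above makes unconditional can be sketched in isolation (a simplified stand-in for generation.c/slab.c's set_sentinel/sentinel_ok; the 0x7E marker value and the helper names are illustrative, not PostgreSQL's actual implementation):

```c
#include <stdbool.h>
#include <stdlib.h>

/* Arbitrary marker value for this sketch; PostgreSQL uses its own. */
#define SENTINEL_BYTE 0x7E

/*
 * Reserve one extra byte past the requested size and mark it, mirroring the
 * patch's REQUIRED_SIZE(x) ((x) + 1) under MEMORY_CONTEXT_CHECKING: because
 * room for the sentinel is now always present, the "if (size < chunk_size)"
 * guard can become an Assert.
 */
static void *alloc_with_sentinel(size_t size)
{
    char *p = malloc(size + 1);      /* REQUIRED_SIZE(size) */
    if (p != NULL)
        p[size] = (char) SENTINEL_BYTE;
    return p;
}

/*
 * Check the marker on free/realloc; a clobbered sentinel means the caller
 * wrote past the end of its chunk.
 */
static bool check_sentinel(const void *p, size_t requested_size)
{
    return ((const char *) p)[requested_size] == (char) SENTINEL_BYTE;
}
```

On free, the real code emits the "detected write past chunk end" warning when this check fails; here a false return plays that role.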


Re: Statistical aggregate functions are not working with PARTIAL aggregation

From
Andres Freund
Date:
Hi,

On 2019-05-08 13:06:36 +0900, Kyotaro HORIGUCHI wrote:
> This behavior was introduced by 69c3936a14 (in v11).  At that time
> FunctionCallInfoData was palloc0'ed and had fixed-length members
> arg[6] and argnull[7], so nulls[1] was always false even when nargs
> = 1, and the issue was not revealed.

> After a9c35cf85c (in v12) the same check is done on a
> FunctionCallInfoData that has a NullableDatum args[] of the required
> number of elements. In that case args[1] is outside the palloc'ed
> memory, so the issue became visible.

> On a second look, it seems to me that the right thing to do here
> is to set numTransInputs to numInputs instead of numArguments in
> the combining step.

Yea, to me this just seems a consequence of the wrong
numTransInputs. Arguably this is a bug going back to 9.6, where
combining aggregates were introduced. It's just that numTransInputs
wasn't used anywhere for combining aggregates before 11.

Its documentation says:

    /*
     * Number of aggregated input columns to pass to the transfn.  This
     * includes the ORDER BY columns for ordered-set aggs, but not for plain
     * aggs.  (This doesn't count the transition state value!)
     */
    int            numTransInputs;

which IMO is violated by having it set to the plain aggregate's value,
rather than the combine func.

While I agree that fixing numTransInputs is the right way, I'm not
convinced the way you did it is the right approach. I'm somewhat
inclined to think that it's wrong that ExecInitAgg() calls
build_pertrans_for_aggref() with a numArguments that's not actually
right? Alternatively I think we should just move the numTransInputs
computation into the existing branch around DO_AGGSPLIT_COMBINE.

It seems pretty clear that this needs to be fixed for v11, it seems too
fragile to rely on trans_fcinfo->argnull[2] being zero initialized.

I'm less sure about fixing it for 9.6/10. There's no use of
numTransInputs for combining back then.

David, I assume you didn't adjust numTransInputs simply because it
wasn't needed / you didn't notice? Do you have a preference for a fix?



Independent of these changes, some of the code around partial, ordered
set and polymorphic aggregates really make it hard to understand things:

        /* Detect how many arguments to pass to the finalfn */
        if (aggform->aggfinalextra)
            peragg->numFinalArgs = numArguments + 1;
        else
            peragg->numFinalArgs = numDirectArgs + 1;

What on earth is that supposed to mean? Sure, the +1 is obvious, but why
the different sources for arguments are needed isn't - especially
because numArguments was just calculated with the actual aggregate
inputs. Nor is aggfinalextra's documentation particularly elucidating:
    /* true to pass extra dummy arguments to aggfinalfn */
    bool        aggfinalextra BKI_DEFAULT(f);

especially not why aggfinalextra means we have to ignore direct
args. Presumably because aggfinalextra just emulates what direct args
does for ordered set args, but we allow both to be set.

Similarly

    /* Detect how many arguments to pass to the transfn */
    if (AGGKIND_IS_ORDERED_SET(aggref->aggkind))
        pertrans->numTransInputs = numInputs;
    else
        pertrans->numTransInputs = numArguments;

is hard to understand, without additional comments. One can, looking
around, infer that it's because ordered set aggs need sort columns
included. But that should just have been mentioned.

And to make sense of build_aggregate_transfn_expr()'s treatment of
direct args, one has to know that direct args are only possible for
ordered set aggregates. Which IMO is not obvious in nodeAgg.c.

...

I feel this code has become quite creaky in the last few years.

Greetings,

Andres Freund



Re: Statistical aggregate functions are not working with PARTIAL aggregation

From
Andres Freund
Date:
Hi,

David, anyone, any comments?

On 2019-05-16 20:04:04 -0700, Andres Freund wrote:
> On 2019-05-08 13:06:36 +0900, Kyotaro HORIGUCHI wrote:
> > This behavior was introduced by 69c3936a14 (in v11).  At that time
> > FunctionCallInfoData was palloc0'ed and had fixed-length members
> > arg[6] and argnull[7], so nulls[1] was always false even when nargs
> > = 1, and the issue was not revealed.
> 
> > After a9c35cf85c (in v12) the same check is done on a
> > FunctionCallInfoData that has a NullableDatum args[] of the required
> > number of elements. In that case args[1] is outside the palloc'ed
> > memory, so the issue became visible.
> 
> > On a second look, it seems to me that the right thing to do here
> > is to set numTransInputs to numInputs instead of numArguments in
> > the combining step.
> 
> Yea, to me this just seems a consequence of the wrong
> numTransInputs. Arguably this is a bug going back to 9.6, where
> combining aggregates were introduced. It's just that numTransInputs
> wasn't used anywhere for combining aggregates before 11.
> 
> Its documentation says:
> 
>     /*
>      * Number of aggregated input columns to pass to the transfn.  This
>      * includes the ORDER BY columns for ordered-set aggs, but not for plain
>      * aggs.  (This doesn't count the transition state value!)
>      */
>     int            numTransInputs;
> 
> which IMO is violated by having it set to the plain aggregate's value,
> rather than the combine func.
> 
> While I agree that fixing numTransInputs is the right way, I'm not
> convinced the way you did it is the right approach. I'm somewhat
> inclined to think that it's wrong that ExecInitAgg() calls
> build_pertrans_for_aggref() with a numArguments that's not actually
> right? Alternatively I think we should just move the numTransInputs
> computation into the existing branch around DO_AGGSPLIT_COMBINE.
> 
> It seems pretty clear that this needs to be fixed for v11, it seems too
> fragile to rely on trans_fcinfo->argnull[2] being zero initialized.
> 
> I'm less sure about fixing it for 9.6/10. There's no use of
> numTransInputs for combining back then.
> 
> David, I assume you didn't adjust numTransInputs simply because it
> wasn't needed / you didn't notice? Do you have a preference for a fix?

Unless somebody comments, I'm later today going to move the numTransInputs
computation into the DO_AGGSPLIT_COMBINE branch in
build_pertrans_for_aggref(), add a small test (using
enable_partitionwise_aggregate), and backpatch to 11.

Greetings,

Andres Freund



On Sun, 19 May 2019 at 07:37, Andres Freund <andres@anarazel.de> wrote:
> David, anyone, any comments?

Looking at this now.

-- 
 David Rowley                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services



On Fri, 17 May 2019 at 15:04, Andres Freund <andres@anarazel.de> wrote:
>
> On 2019-05-08 13:06:36 +0900, Kyotaro HORIGUCHI wrote:
> > On a second look, it seems to me that the right thing to do here
> > is to set numTransInputs to numInputs instead of numArguments in
> > the combining step.
>
> Yea, to me this just seems a consequence of the wrong
> numTransInputs. Arguably this is a bug going back to 9.6, where
> combining aggregates were introduced. It's just that numTransInputs
> wasn't used anywhere for combining aggregates before 11.

Isn't it more due to the lack of any aggregates with > 1 arg having a
combine function?

> While I agree that fixing numTransInputs is the right way, I'm not
> convinced the way you did it is the right approach. I'm somewhat
> inclined to think that it's wrong that ExecInitAgg() calls
> build_pertrans_for_aggref() with a numArguments that's not actually
> right? Alternatively I think we should just move the numTransInputs
> computation into the existing branch around DO_AGGSPLIT_COMBINE.

Yeah, probably we should be passing in the correct arg count for the
combinefn to build_pertrans_for_aggref(). However, I see that we also
pass in the inputTypes from the transfn, just we don't use them when
working with the combinefn.

You'll notice that I've just hardcoded the numTransArgs to set it to 1
when we're working with a combinefn.  The combinefn always requires 2
args of trans type, so this seems pretty valid to me.  I think
Kyotaro's patch setting of numInputs is wrong. It just happens to
accidentally match. I also added a regression test to exercise
regr_count.  I tagged it onto an existing query so as to minimise the
overhead.  It seems worth doing since most other aggs have a single
argument and this one wasn't working because it had two args.
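The rule being applied can be sketched in isolation (simplified stand-ins for AggSplit and the real branching on DO_AGGSPLIT_COMBINE / AGGKIND_IS_ORDERED_SET; the enum and function names here are illustrative, not the actual nodeAgg.c code):

```c
#include <stdbool.h>

/* Simplified stand-in for PostgreSQL's AggSplit: either we run the ordinary
 * transition function, or we merge partial states with the combinefn. */
typedef enum { SPLIT_SIMPLE, SPLIT_COMBINE } AggSplitMode;

/*
 * Hypothetical helper mirroring the fix discussed in the thread: a combine
 * function always takes exactly one aggregated input (the partial state
 * being merged) regardless of how many arguments the aggregate declares,
 * and the trans state itself is never counted.  Ordered-set aggs count
 * their ORDER BY columns (numInputs); plain aggs count their declared
 * arguments (numArguments).
 */
static int num_trans_inputs(AggSplitMode split, bool ordered_set,
                            int numInputs, int numArguments)
{
    if (split == SPLIT_COMBINE)
        return 1;
    return ordered_set ? numInputs : numArguments;
}
```

For regr_count(b, a) this yields 2 in the normal path but 1 in the combine path, which is what keeps the strict-argument NULL check from inspecting a third, nonexistent argument slot.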

I also noticed that the code seemed to work in af025eed536d, so I
guess the new expression evaluation code is highlighting the existing
issue.

> I feel this code has become quite creaky in the last few years.

You're not kidding!

Patch attached of how I think we should fix it.

-- 
 David Rowley                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services

Attachment

Re: Statistical aggregate functions are not working with PARTIAL aggregation

From
Andres Freund
Date:
Hi,

On 2019-05-19 20:18:38 +1200, David Rowley wrote:
> On Fri, 17 May 2019 at 15:04, Andres Freund <andres@anarazel.de> wrote:
> >
> > On 2019-05-08 13:06:36 +0900, Kyotaro HORIGUCHI wrote:
> > > On a second look, it seems to me that the right thing to do here
> > > is to set numTransInputs to numInputs instead of numArguments in
> > > the combining step.
> >
> > Yea, to me this just seems a consequence of the wrong
> > numTransInputs. Arguably this is a bug going back to 9.6, where
> > combining aggregates were introduced. It's just that numTransInputs
> > wasn't used anywhere for combining aggregates before 11.
> 
> Isn't it more due to the lack of any aggregates with > 1 arg having a
> combine function?

I'm not sure I follow? regr_count() already was in 9.6? Including a
combine function?

postgres[1490][1]=# SELECT version();
┌──────────────────────────────────────────────────────────────────────────────────────────┐
│                                         version                                          │
├──────────────────────────────────────────────────────────────────────────────────────────┤
│ PostgreSQL 9.6.13 on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-7) 8.3.0, 64-bit │
└──────────────────────────────────────────────────────────────────────────────────────────┘
(1 row)

postgres[1490][1]=# SELECT aggfnoid::regprocedure FROM pg_aggregate pa JOIN pg_proc pptrans ON (pa.aggtransfn =
pptrans.oid) AND pptrans.pronargs > 2 AND aggcombinefn <> 0;
 
┌───────────────────────────────────────────────────┐
│                     aggfnoid                      │
├───────────────────────────────────────────────────┤
│ regr_count(double precision,double precision)     │
│ regr_sxx(double precision,double precision)       │
│ regr_syy(double precision,double precision)       │
│ regr_sxy(double precision,double precision)       │
│ regr_avgx(double precision,double precision)      │
│ regr_avgy(double precision,double precision)      │
│ regr_r2(double precision,double precision)        │
│ regr_slope(double precision,double precision)     │
│ regr_intercept(double precision,double precision) │
│ covar_pop(double precision,double precision)      │
│ covar_samp(double precision,double precision)     │
│ corr(double precision,double precision)           │
└───────────────────────────────────────────────────┘


But it's not an active problem in 9.6, because numTransInputs wasn't
used at all for combine functions: Before c253b722f6 there simply was no
NULL check for strict trans functions, and after that the check was
simply hardcoded for the right offset in fcinfo, as it's done by code
specific to aggsplit combine.

In bf6c614a2f2 that was generalized, so the strictness check was done by
common code doing the strictness checks, based on numTransInputs. But
due to the fact that the relevant fcinfo->isnull[2..] was always
zero-initialized (more or less by accident, by being part of the
AggStatePerTrans struct, which is palloc0'ed), there was no observable
damage, we just checked too many array elements. And then finally in
a9c35cf85ca1f, that broke, because the fcinfo is a) dynamically
allocated without being zeroed and b) exactly the right length.
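The masking effect described above can be demonstrated standalone (a simplified model of the fcinfo argument array and the generalized strictness check; the struct and function names are illustrative, not PostgreSQL's):

```c
#include <stdbool.h>
#include <string.h>

/* Simplified stand-ins for NullableDatum / FunctionCallInfoData. */
typedef struct { long value; bool isnull; } NullableDatum;
typedef struct { int nargs; NullableDatum args[4]; } FcInfo;

/*
 * The generalized strictness check: skip the transition call if any of the
 * numTransInputs aggregated inputs (args[1..numTransInputs]; args[0] is the
 * trans state) is NULL.  With the buggy numTransInputs = 2 for a combine
 * call that only has args[0..1], this inspects args[2] -- a slot the real
 * code never initialized.
 */
static bool strict_inputs_ok(const FcInfo *fci, int numTransInputs)
{
    for (int i = 1; i <= numTransInputs; i++)
        if (fci->args[i].isnull)
            return false;
    return true;
}
```

When the whole struct happens to be zero-initialized (the pre-v12 palloc0 situation), the stray args[2].isnull read still sees false and nothing observable breaks; with junk in that slot (the exactly-sized, unzeroed v12 allocation), the check wrongly reports a NULL input and the combine step is skipped, which is why regr_count() returned 0.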


> > While I agree that fixing numTransInputs is the right way, I'm not
> > convinced the way you did it is the right approach. I'm somewhat
> > inclined to think that it's wrong that ExecInitAgg() calls
> > build_pertrans_for_aggref() with a numArguments that's not actually
> > right? Alternatively I think we should just move the numTransInputs
> > computation into the existing branch around DO_AGGSPLIT_COMBINE.
> 
> Yeah, probably we should be passing in the correct arg count for the
> combinefn to build_pertrans_for_aggref(). However, I see that we also
> pass in the inputTypes from the transfn, just we don't use them when
> working with the combinefn.

Not sure what you mean by that "however"?


> You'll notice that I've just hardcoded the numTransArgs to set it to 1
> when we're working with a combinefn.  The combinefn always requires 2
> args of trans type, so this seems pretty valid to me.

> I think Kyotaro's patch setting of numInputs is wrong.

Yea, my proposal was to simply hardcode it to 2 in the
DO_AGGSPLIT_COMBINE path.


> Patch attached of how I think we should fix it.

Thanks.



> diff --git a/src/backend/executor/nodeAgg.c b/src/backend/executor/nodeAgg.c
> index d01fc4f52e..b061162961 100644
> --- a/src/backend/executor/nodeAgg.c
> +++ b/src/backend/executor/nodeAgg.c
> @@ -2522,8 +2522,9 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
>          int            existing_aggno;
>          int            existing_transno;
>          List       *same_input_transnos;
> -        Oid            inputTypes[FUNC_MAX_ARGS];
> +        Oid            transFnInputTypes[FUNC_MAX_ARGS];
>          int            numArguments;
> +        int            numTransFnArgs;
>          int            numDirectArgs;
>          HeapTuple    aggTuple;
>          Form_pg_aggregate aggform;
> @@ -2701,14 +2702,23 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
>           * could be different from the agg's declared input types, when the
>           * agg accepts ANY or a polymorphic type.
>           */
> -        numArguments = get_aggregate_argtypes(aggref, inputTypes);
> +        numTransFnArgs = get_aggregate_argtypes(aggref, transFnInputTypes);

Not sure I understand the distinction you're trying to make with the
variable renaming. The combine function is also a transition function,
no?


>          /* Count the "direct" arguments, if any */
>          numDirectArgs = list_length(aggref->aggdirectargs);
>  
> +        /*
> +         * Combine functions always have a 2 trans state type input params, so
> +         * this is always set to 1 (we don't count the first trans state).
> +         */

Perhaps the parenthetical should instead be something like "to 1 (the
trans type is not counted as an arg, just like with non-combine trans
function)" or similar?


> @@ -2781,7 +2791,7 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
>                                        aggref, transfn_oid, aggtranstype,
>                                        serialfn_oid, deserialfn_oid,
>                                        initValue, initValueIsNull,
> -                                      inputTypes, numArguments);
> +                                      transFnInputTypes, numArguments);

That means we pass in the wrong input types? Seems like it'd be better
to either pass an empty list, or just create the argument list here.


I'm inclined to push a minimal fix now, and then a slightly more evolved
version of this after beta1.


> diff --git a/src/test/regress/sql/aggregates.sql b/src/test/regress/sql/aggregates.sql
> index d4fd657188..bd8b9e8b4f 100644
> --- a/src/test/regress/sql/aggregates.sql
> +++ b/src/test/regress/sql/aggregates.sql
> @@ -963,10 +963,11 @@ SET enable_indexonlyscan = off;
>  
>  -- variance(int4) covers numeric_poly_combine
>  -- sum(int8) covers int8_avg_combine
> +-- regr_cocunt(float8, float8) covers int8inc_float8_float8 and aggregates with > 1 arg

typo...

>  EXPLAIN (COSTS OFF)
> -  SELECT variance(unique1::int4), sum(unique1::int8) FROM tenk1;
> +  SELECT variance(unique1::int4), sum(unique1::int8),regr_count(unique1::float8, unique1::float8) FROM tenk1;
>  
> -SELECT variance(unique1::int4), sum(unique1::int8) FROM tenk1;
> +SELECT variance(unique1::int4), sum(unique1::int8),regr_count(unique1::float8, unique1::float8) FROM tenk1;
>  
>  ROLLBACK;

Does this actually cover the bug at issue here? The non-combine case
wasn't broken?

Greetings,

Andres Freund



On Mon, 20 May 2019 at 06:36, Andres Freund <andres@anarazel.de> wrote:
> > Isn't it more due to the lack of any aggregates with > 1 arg having a
> > combine function?
>
> I'm not sure I follow? regr_count() already was in 9.6? Including a
> combine function?

Oops, that line I meant to delete before sending.

> > Yeah, probably we should be passing in the correct arg count for the
> > combinefn to build_pertrans_for_aggref(). However, I see that we also
> > pass in the inputTypes from the transfn, just we don't use them when
> > working with the combinefn.
>
> Not sure what you mean by that "however"?

Well, previously those two arguments were always for the function in
pg_aggregate.aggtransfn. I only changed one of them to mean the trans
func that's being used, which detracted slightly from my ambition to
change just what numArguments means.

> > You'll notice that I've just hardcoded the numTransArgs to set it to 1
> > when we're working with a combinefn.  The combinefn always requires 2
> > args of trans type, so this seems pretty valid to me.
>
> > I think Kyotaro's patch setting of numInputs is wrong.
>
> Yea, my proposal was to simply hardcode it to 2 in the
> DO_AGGSPLIT_COMBINE path.

ok.

> > diff --git a/src/backend/executor/nodeAgg.c b/src/backend/executor/nodeAgg.c
> > index d01fc4f52e..b061162961 100644
> > --- a/src/backend/executor/nodeAgg.c
> > +++ b/src/backend/executor/nodeAgg.c
> > @@ -2522,8 +2522,9 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
> >               int                     existing_aggno;
> >               int                     existing_transno;
> >               List       *same_input_transnos;
> > -             Oid                     inputTypes[FUNC_MAX_ARGS];
> > +             Oid                     transFnInputTypes[FUNC_MAX_ARGS];
> >               int                     numArguments;
> > +             int                     numTransFnArgs;
> >               int                     numDirectArgs;
> >               HeapTuple       aggTuple;
> >               Form_pg_aggregate aggform;
> > @@ -2701,14 +2702,23 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
> >                * could be different from the agg's declared input types, when the
> >                * agg accepts ANY or a polymorphic type.
> >                */
> > -             numArguments = get_aggregate_argtypes(aggref, inputTypes);
> > +             numTransFnArgs = get_aggregate_argtypes(aggref, transFnInputTypes);
>
> Not sure I understand the distinction you're trying to make with the
> variable renaming. The combine function is also a transition function,
> no?

I was trying to make it more clear what each variable is for. It's
true that the combine function is used as a transition function in
this case, but I'd hoped it would be more easy to understand that the
input arguments listed in a variable named transFnInputTypes would be
for the function mentioned in pg_aggregate.aggtransfn rather than the
transfn we're using.  If that's not any more clear then maybe another
fix is better, or we can leave it...   I had to make sense of all this
code last night and I was just having a go at making it easier to
follow for the next person who has to.

> >               /* Count the "direct" arguments, if any */
> >               numDirectArgs = list_length(aggref->aggdirectargs);
> >
> > +             /*
> > +              * Combine functions always have a 2 trans state type input params, so
> > +              * this is always set to 1 (we don't count the first trans state).
> > +              */
>
> Perhaps the parenthetical should instead be something like "to 1 (the
> trans type is not counted as an arg, just like with non-combine trans
> function)" or similar?

Yeah, that's better.

>
> > @@ -2781,7 +2791,7 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
> >                                                                         aggref, transfn_oid, aggtranstype,
> >                                                                         serialfn_oid, deserialfn_oid,
> >                                                                         initValue, initValueIsNull,
> > -                                                                       inputTypes, numArguments);
> > +                                                                       transFnInputTypes, numArguments);
>
> That means we pass in the wrong input types? Seems like it'd be better
> to either pass an empty list, or just create the argument list here.

What do you mean "here"?  Did you mean to quote this fragment?

@@ -2880,7 +2895,7 @@ build_pertrans_for_aggref(AggStatePerTrans pertrans,
    Oid aggtransfn, Oid aggtranstype,
    Oid aggserialfn, Oid aggdeserialfn,
    Datum initValue, bool initValueIsNull,
-   Oid *inputTypes, int numArguments)
+   Oid *transFnInputTypes, int numArguments)

I had hoped the rename would make it more clear that these are the
args for the function in pg_aggregate.aggtransfn.  We could pass NULL
instead when it's the combine func, but I didn't really see the
advantage of it.

> I'm inclined to push a minimal fix now, and then a slightly more evolved
> version of this after beta1.

Ok

> > diff --git a/src/test/regress/sql/aggregates.sql b/src/test/regress/sql/aggregates.sql
> > index d4fd657188..bd8b9e8b4f 100644
> > --- a/src/test/regress/sql/aggregates.sql
> > +++ b/src/test/regress/sql/aggregates.sql
> > @@ -963,10 +963,11 @@ SET enable_indexonlyscan = off;
> >
> >  -- variance(int4) covers numeric_poly_combine
> >  -- sum(int8) covers int8_avg_combine
> > +-- regr_cocunt(float8, float8) covers int8inc_float8_float8 and aggregates with > 1 arg
>
> typo...

oops. I spelt coconut wrong. :)

>
> >  EXPLAIN (COSTS OFF)
> > -  SELECT variance(unique1::int4), sum(unique1::int8) FROM tenk1;
> > +  SELECT variance(unique1::int4), sum(unique1::int8),regr_count(unique1::float8, unique1::float8) FROM tenk1;
> >
> > -SELECT variance(unique1::int4), sum(unique1::int8) FROM tenk1;
> > +SELECT variance(unique1::int4), sum(unique1::int8),regr_count(unique1::float8, unique1::float8) FROM tenk1;
> >
> >  ROLLBACK;
>
> Does this actually cover the bug at issue here? The non-combine case
> wasn't broken?

The EXPLAIN shows the plan is:

                  QUERY PLAN
----------------------------------------------
 Finalize Aggregate
   ->  Gather
         Workers Planned: 4
         ->  Partial Aggregate
               ->  Parallel Seq Scan on tenk1
(5 rows)

so it is exercising the combine functions.

-- 
 David Rowley                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services



Re: Statistical aggregate functions are not working with PARTIAL aggregation

From
Andres Freund
Date:
Hi,

Thanks to all for reporting, helping to identify and finally patch the
problem!

On 2019-05-20 10:36:43 +1200, David Rowley wrote:
> On Mon, 20 May 2019 at 06:36, Andres Freund <andres@anarazel.de> wrote:
> > > diff --git a/src/backend/executor/nodeAgg.c b/src/backend/executor/nodeAgg.c
> > > index d01fc4f52e..b061162961 100644
> > > --- a/src/backend/executor/nodeAgg.c
> > > +++ b/src/backend/executor/nodeAgg.c
> > > @@ -2522,8 +2522,9 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
> > >               int                     existing_aggno;
> > >               int                     existing_transno;
> > >               List       *same_input_transnos;
> > > -             Oid                     inputTypes[FUNC_MAX_ARGS];
> > > +             Oid                     transFnInputTypes[FUNC_MAX_ARGS];
> > >               int                     numArguments;
> > > +             int                     numTransFnArgs;
> > >               int                     numDirectArgs;
> > >               HeapTuple       aggTuple;
> > >               Form_pg_aggregate aggform;
> > > @@ -2701,14 +2702,23 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
> > >                * could be different from the agg's declared input types, when the
> > >                * agg accepts ANY or a polymorphic type.
> > >                */
> > > -             numArguments = get_aggregate_argtypes(aggref, inputTypes);
> > > +             numTransFnArgs = get_aggregate_argtypes(aggref, transFnInputTypes);
> >
> > Not sure I understand the distinction you're trying to make with the
> > variable renaming. The combine function is also a transition function,
> > no?
> 
> I was trying to make it more clear what each variable is for. It's
> true that the combine function is used as a transition function in
> this case, but I'd hoped it would be more easy to understand that the
> input arguments listed in a variable named transFnInputTypes would be
> for the function mentioned in pg_aggregate.aggtransfn rather than the
> transfn we're using.  If that's not any more clear then maybe another
> fix is better, or we can leave it...   I had to make sense of all this
> code last night and I was just having a go at making it easier to
> follow for the next person who has to.

That's what I guessed, but I'm not sure it really achieves that. How
about we have something roughly like:

int                     numTransFnArgs = -1;
int                     numCombineFnArgs = -1;
Oid                     transFnInputTypes[FUNC_MAX_ARGS];
Oid                     combineFnInputTypes[2];

if (DO_AGGSPLIT_COMBINE(...))
   numCombineFnArgs = 1;
   combineFnInputTypes = list_make2(aggtranstype, aggtranstype);
else
   numTransFnArgs = get_aggregate_argtypes(aggref, transFnInputTypes);

...

if (DO_AGGSPLIT_COMBINE(...))
    build_pertrans_for_aggref(pertrans, aggstate, estate,
                              aggref, combinefn_oid, aggtranstype,
                              serialfn_oid, deserialfn_oid,
                              initValue, initValueIsNull,
                              combineFnInputTypes, numCombineFnArgs);
else
    build_pertrans_for_aggref(pertrans, aggstate, estate,
                              aggref, transfn_oid, aggtranstype,
                              serialfn_oid, deserialfn_oid,
                              initValue, initValueIsNull,
                              transFnInputTypes, numTransFnArgs);

seems like that'd make the code clearer?  I wonder if we shouldn't
strive to have *no* DO_AGGSPLIT_COMBINE specific logic in
build_pertrans_for_aggref (except perhaps for an error check or two).

Istm we shouldn't even need a separate build_aggregate_combinefn_expr()
from build_aggregate_transfn_expr().


> > > @@ -2781,7 +2791,7 @@ ExecInitAgg(Agg *node, EState *estate, int eflags)
> > >                                                                         aggref, transfn_oid, aggtranstype,
> > >                                                                         serialfn_oid, deserialfn_oid,
> > >                                                                         initValue, initValueIsNull,
> > > -                                                                       inputTypes, numArguments);
> > > +                                                                       transFnInputTypes, numArguments);
> >
> > That means we pass in the wrong input types? Seems like it'd be better
> > to either pass an empty list, or just create the argument list here.
> 
> What do you mean "here"?  Did you mean to quote this fragment?
> 
> @@ -2880,7 +2895,7 @@ build_pertrans_for_aggref(AggStatePerTrans pertrans,
>     Oid aggtransfn, Oid aggtranstype,
>     Oid aggserialfn, Oid aggdeserialfn,
>     Datum initValue, bool initValueIsNull,
> -   Oid *inputTypes, int numArguments)
> +   Oid *transFnInputTypes, int numArguments)
> 
> I had hoped the rename would make it more clear that these are the
> args for the function in pg_aggregate.aggtransfn.  We could pass NULL
> instead when it's the combine func, but I didn't really see the
> advantage of it.

The advantage is that if somebody starts to use the wrong list in
the wrong context, we'd be more likely to get an error than something
that works in the common cases, but not in the more complicated
situations.


> > I'm inclined to push a minimal fix now, and then a slightly more evolved
> > version fo this after beta1.
> 
> Ok

Done that now.


> > >  EXPLAIN (COSTS OFF)
> > > -  SELECT variance(unique1::int4), sum(unique1::int8) FROM tenk1;
> > > +  SELECT variance(unique1::int4), sum(unique1::int8),regr_count(unique1::float8, unique1::float8) FROM tenk1;
> > >
> > > -SELECT variance(unique1::int4), sum(unique1::int8) FROM tenk1;
> > > +SELECT variance(unique1::int4), sum(unique1::int8),regr_count(unique1::float8, unique1::float8) FROM tenk1;
> > >
> > >  ROLLBACK;
> >
> > Does this actually cover the bug at issue here? The non-combine case
> > wasn't broken?
> 
> The EXPLAIN shows the plan is:

Err, comes from only looking at the diff :(. Missed the previous SETs,
and the explain wasn't included in the context either...


Ugh, I just noticed - as you did before - that numInputs is declared at
the top-level in build_pertrans_for_aggref, and then *again* in the
!DO_AGGSPLIT_COMBINE branch. Why, oh, why (yes, I'm aware that that's in
one of my commits :().  I've renamed your numTransInputs variable to
numTransArgs, as it seems confusing to have different values in
pertrans->numTransInputs and a local numTransInputs variable.

Btw, the zero input case appears to also be affected by this bug: We
quite reasonably don't emit a strict input check expression step for the
combine function when numTransInputs = 0. But the only zero length agg
is count(*), and while it has strict trans & combine functions, it does
have an initval of 0. So I don't think it's reachable with builtin
aggregates, and I can't imagine another zero argument aggregate.

Greetings,

Andres Freund



On Mon, 20 May 2019 at 13:20, Andres Freund <andres@anarazel.de> wrote:
> How
> about we have something roughly like:
>
> int                     numTransFnArgs = -1;
> int                     numCombineFnArgs = -1;
> Oid                     transFnInputTypes[FUNC_MAX_ARGS];
> Oid                     combineFnInputTypes[2];
>
> if (DO_AGGSPLIT_COMBINE(...)
>    numCombineFnArgs = 1;
>    combineFnInputTypes = list_make2(aggtranstype, aggtranstype);
> else
>    numTransFnArgs = get_aggregate_argtypes(aggref, transFnInputTypes);
>
> ...
>
> if (DO_AGGSPLIT_COMBINE(...))
>     build_pertrans_for_aggref(pertrans, aggstate, estate,
>                               aggref, combinefn_oid, aggtranstype,
>                               serialfn_oid, deserialfn_oid,
>                               initValue, initValueIsNull,
>                               combineFnInputTypes, numCombineFnArgs);
> else
>     build_pertrans_for_aggref(pertrans, aggstate, estate,
>                               aggref, transfn_oid, aggtranstype,
>                               serialfn_oid, deserialfn_oid,
>                               initValue, initValueIsNull,
>                               transFnInputTypes, numTransFnArgs);
>
> seems like that'd make the code clearer?

I think that might be a good idea... I mean apart from trying to
assign a List to an array :)  We still must call
get_aggregate_argtypes() in order to determine the final function, so
the code can't look exactly like you've written.

>  I wonder if we shouldn't
> strive to have *no* DO_AGGSPLIT_COMBINE specific logic in
> build_pertrans_for_aggref (except perhaps for an error check or two).

Just so we have a hard copy to review and discuss, I think this would
look something like the attached.

We do miss out on a few very small optimisations, but I don't think
they'll be anything we could measure. Namely
build_aggregate_combinefn_expr() called make_agg_arg() once and used
it twice instead of calling it once for each arg.  I don't think
that's anything we could measure, especially in a situation where
two-stage aggregation is being used.

I ended up also renaming aggtransfn to transfn_oid in
build_pertrans_for_aggref(). Having it called aggtransfn seems a bit
too close to the pg_aggregate.aggtransfn column, which is confusing
given that we might pass it the value of the aggcombinefn column.

-- 
 David Rowley                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services

Attachment

Re: Statistical aggregate functions are not working with PARTIAL aggregation

From
Kyotaro HORIGUCHI
Date:
Hello.

I couldn't understand the multiple argument lists with confidence,
so the patch was born from a guess^^; Sorry for the confusion, but
I'm relieved to know that it was not so easy to understand.

At Mon, 20 May 2019 17:27:10 +1200, David Rowley <david.rowley@2ndquadrant.com> wrote in
<CAKJS1f_1xfn8navZP05U8BszsG=+CNck_99f_+0j2ccBSrBDkQ@mail.gmail.com>
> On Mon, 20 May 2019 at 13:20, Andres Freund <andres@anarazel.de> wrote:
> > How
> > about we have something roughly like:
> >
> > int                     numTransFnArgs = -1;
> > int                     numCombineFnArgs = -1;
> > Oid                     transFnInputTypes[FUNC_MAX_ARGS];
> > Oid                     combineFnInputTypes[2];
> >
> > if (DO_AGGSPLIT_COMBINE(...)
> >    numCombineFnArgs = 1;
> >    combineFnInputTypes = list_make2(aggtranstype, aggtranstype);
> > else
> >    numTransFnArgs = get_aggregate_argtypes(aggref, transFnInputTypes);
> >
> > ...
> >
> > if (DO_AGGSPLIT_COMBINE(...))
> >     build_pertrans_for_aggref(pertrans, aggstate, estate,
> >                               aggref, combinefn_oid, aggtranstype,
> >                               serialfn_oid, deserialfn_oid,
> >                               initValue, initValueIsNull,
> >                               combineFnInputTypes, numCombineFnArgs);
> > else
> >     build_pertrans_for_aggref(pertrans, aggstate, estate,
> >                               aggref, transfn_oid, aggtranstype,
> >                               serialfn_oid, deserialfn_oid,
> >                               initValue, initValueIsNull,
> >                               transFnInputTypes, numTransFnArgs);
> >
> > seems like that'd make the code clearer?
> 
> I think that might be a good idea... I mean apart from trying to
> assign a List to an array :)  We still must call
> get_aggregate_argtypes() in order to determine the final function, so
> the code can't look exactly like you've written.
> 
> >  I wonder if we shouldn't
> > strive to have *no* DO_AGGSPLIT_COMBINE specific logic in
> > build_pertrans_for_aggref (except perhaps for an error check or two).
> 
> Just so we have a hard copy to review and discuss, I think this would
> look something like the attached.

May I give some comments? They might make me look stupid but I
can't help asking.

-        numArguments = get_aggregate_argtypes(aggref, inputTypes);
+        numTransFnArgs = get_aggregate_argtypes(aggref, transFnInputTypes);

If the function retrieves the argument types of the transition
function, it would be better for it to be named
get_aggregate_transargtypes(), and for Aggref.aggargtypes to have a
name like aggtransargtypes.

         /* Detect how many arguments to pass to the finalfn */
         if (aggform->aggfinalextra)
-            peragg->numFinalArgs = numArguments + 1;
+            peragg->numFinalArgs = numTransFnArgs + 1;
         else
             peragg->numFinalArgs = numDirectArgs + 1;

I can understand the aggfinalextra case, but cannot understand the
other. As Andres said, I think the code requires an explanation of
why numFinalArgs is not numTransFnArgs but *numDirectArgs plus 1*.

+                /*
+                 * When combining there's only one input, the to-be-combined
+                 * added transition value from below (this node's transition
+                 * value is counted separately).
+                 */
+                pertrans->numTransInputs = 1;

I believe this works, but why hasn't the member been set correctly
by the creator of the aggsplit?


+                /* Detect how many arguments to pass to the transfn */

I want a comment there that explains why what ordered-set aggregates
require is not numTransFnArgs + (#sort cols?) but
"list_length(aggref->args)", or a comment that explains why the two
are compatible.

> We do miss out on a few very small optimisations, but I don't think
> they'll be anything we could measure. Namely
> build_aggregate_combinefn_expr() called make_agg_arg() once and used
> it twice instead of calling it once for each arg.  I don't think
> that's anything we could measure, especially in a situation where
> two-stage aggregation is being used.
> 
> I ended up also renaming aggtransfn to transfn_oid in
> build_pertrans_for_aggref(). Having it called aggtransfn seems a bit
> too close to the pg_aggregate.aggtransfn column, which is confusing
> given that we might pass it the value of the aggcombinefn column.

Is Form_pg_aggregate->aggtransfn a different thing from
transfn_oid? It seems very confusing to me, apart from the naming.


regards.

-- 
Kyotaro Horiguchi
NTT Open Source Software Center




On Mon, 20 May 2019 at 19:59, Kyotaro HORIGUCHI
<horiguchi.kyotaro@lab.ntt.co.jp> wrote:
> -        numArguments = get_aggregate_argtypes(aggref, inputTypes);
> +        numTransFnArgs = get_aggregate_argtypes(aggref, transFnInputTypes);
>
> If the function retrieves the argument types of the transition
> function, it would be better for it to be named
> get_aggregate_transargtypes(), and for Aggref.aggargtypes to have a
> name like aggtransargtypes.

Probably that would be a better name.

>          /* Detect how many arguments to pass to the finalfn */
>          if (aggform->aggfinalextra)
> -            peragg->numFinalArgs = numArguments + 1;
> +            peragg->numFinalArgs = numTransFnArgs + 1;
>          else
>              peragg->numFinalArgs = numDirectArgs + 1;
>
> I can understand the aggfinalextra case, but cannot understand the
> other. As Andres said, I think the code requires an explanation of
> why numFinalArgs is not numTransFnArgs but *numDirectArgs plus 1*.

numDirectArgs will be 0 for anything apart from ordered-set
aggregates, so in this case numFinalArgs will become 1, since the
final function just accepts the aggregate state.
For ordered-set aggregates like, say, percentile_cont, the finalfn also
needs the arguments that were passed directly into the aggregate, e.g.
for percentile_cont(0.5) the final function needs to know about 0.5.
In this case 0.5 is the direct arg and the aggregated args are what's
in the ORDER BY clause, i.e. x in WITHIN GROUP (ORDER BY x).  At least
that's my understanding.

> +                /*
> +                 * When combining there's only one input, the to-be-combined
> +                 * added transition value from below (this node's transition
> +                 * value is counted separately).
> +                 */
> +                pertrans->numTransInputs = 1;
>
> I believe this works, but why hasn't the member been set correctly
> by the creator of the aggsplit?

Not quite sure what you mean here. We need to set this based on if
we're dealing with a combine function or a trans function.

> +                /* Detect how many arguments to pass to the transfn */
>
> I want a comment there that explains why what ordered-set aggregates
> require is not numTransFnArgs + (#sort cols?) but
> "list_length(aggref->args)", or a comment that explains why the two
> are compatible.

Normal aggregate ORDER BY clauses are handled in nodeAgg.c, but an
ordered-set aggregate's WITHIN GROUP (ORDER BY ..) args are handled in
the aggregate's transition function.

> > I ended up also renaming aggtransfn to transfn_oid in
> > build_pertrans_for_aggref(). Having it called aggtranfn seems a bit
> > too close to the pg_aggregate.aggtransfn column which is confusion
> > given that we might pass it the value of the aggcombinefn column.
>
> Is Form_pg_aggregate->aggtransfn a different thing from
> transfn_oid? It seems very confusing to me, apart from the naming.

I don't think this is explained very well in the code, so I understand
your confusion.  Some of the renaming work I've been trying to do is
to try to make this more clear.

Basically when we're doing the "Finalize Aggregate" stage after having
performed a previous "Partial Aggregate", we must re-aggregate all the
aggregate states that were made in the Partial Aggregate stage.  Some
of these states might need to be combined together if they both belong
to the same group. That's done with the function mentioned in
pg_aggregate.aggcombinefn.  Whether we're doing a normal "Aggregate"
or a "Finalize Aggregate" the actual work to do is not all that
different, the only real difference is that we're aggregating
previously aggregated states rather than normal values. Since the rest
of the work is the same, we run it through the same code in nodeAgg.c,
only we use the pg_aggregate.aggcombinefn for "Finalize Aggregate" and
use pg_aggregate.aggtransfn for nodes shown as "Aggregate" and
"Partial Aggregate" in explain.

That's somewhat simplified: really, a node shown as "Partial
Aggregate" in EXPLAIN is just not calling the aggfinalfn, and "Finalize
Aggregate" is.  The code in nodeAgg.c technically supports
re-combining previously aggregated states without finalizing them.
You might wonder why you'd do that.  It was partially a by-product
of how the code was written, but I also had in mind clustered
servers aggregating large datasets: remote servers running parallel
aggregates, each sending its partially aggregated states back to the
main server to be re-combined and finalized -- 3-stage aggregation.
Changes were made after the initial partial aggregate commit to
partially remove the ability to form paths of this shape in the
planner, but that's mostly just removed support in explain.c and
changing bool flags in favour of the AggSplit enum, which lacks a
combination of values to do that. We'd need an
AGGSPLIT_COMBINE_SKIPFINAL_DESERIAL_SERIAL... or something, to make
that work.  (I'm surprised not to see more AggSplit values for
partition-wise aggregate, since that shouldn't need to serialize and
deserialize the states...)

-- 
 David Rowley                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services



On Sun, May 19, 2019 at 2:36 PM Andres Freund <andres@anarazel.de> wrote:
> Not sure I understand the distinction you're trying to make with the
> variable renaming. The combine function is also a transition function,
> no?

Not in my mental model.  It's true that a combine function is used in
a similar manner to a transition function, but they are not the same
thing.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Hi,

On May 20, 2019 6:23:46 AM PDT, Robert Haas <robertmhaas@gmail.com> wrote:
>On Sun, May 19, 2019 at 2:36 PM Andres Freund <andres@anarazel.de>
>wrote:
>> Not sure I understand the distinction you're trying to make with the
>> variable renaming. The combine function is also a transition
>function,
>> no?
>
>Not in my mental model.  It's true that a combine function is used in
>a similar manner to a transition function, but they are not the same
>thing.

Well, the context here is precisely that. We're still calling functions that have trans* in the name, and we pass them
transfn-style named parameters. If you read my suggestion, it is essentially going *further* than David's renaming.

Andres
--
Sent from my Android device with K-9 Mail. Please excuse my brevity.



Re: Statistical aggregate functions are not working with PARTIAL aggregation

From
Andres Freund
Date:
Hi,

On 2019-05-20 17:27:10 +1200, David Rowley wrote:
> On Mon, 20 May 2019 at 13:20, Andres Freund <andres@anarazel.de> wrote:
> > How
> > about we have something roughly like:
> >
> > int                     numTransFnArgs = -1;
> > int                     numCombineFnArgs = -1;
> > Oid                     transFnInputTypes[FUNC_MAX_ARGS];
> > Oid                     combineFnInputTypes[2];
> >
> > if (DO_AGGSPLIT_COMBINE(...)
> >    numCombineFnArgs = 1;
> >    combineFnInputTypes = list_make2(aggtranstype, aggtranstype);
> > else
> >    numTransFnArgs = get_aggregate_argtypes(aggref, transFnInputTypes);
> >
> > ...
> >
> > if (DO_AGGSPLIT_COMBINE(...))
> >     build_pertrans_for_aggref(pertrans, aggstate, estate,
> >                               aggref, combinefn_oid, aggtranstype,
> >                               serialfn_oid, deserialfn_oid,
> >                               initValue, initValueIsNull,
> >                               combineFnInputTypes, numCombineFnArgs);
> > else
> >     build_pertrans_for_aggref(pertrans, aggstate, estate,
> >                               aggref, transfn_oid, aggtranstype,
> >                               serialfn_oid, deserialfn_oid,
> >                               initValue, initValueIsNull,
> >                               transFnInputTypes, numTransFnArgs);
> >
> > seems like that'd make the code clearer?
> 
> I think that might be a good idea... I mean apart from trying to
> assign a List to an array :)  We still must call
> get_aggregate_argtypes() in order to determine the final function, so
> the code can't look exactly like you've written.
> 
> >  I wonder if we shouldn't
> > strive to have *no* DO_AGGSPLIT_COMBINE specific logic in
> > build_pertrans_for_aggref (except perhaps for an error check or two).
> 
> Just so we have a hard copy to review and discuss, I think this would
> look something like the attached.
> 
> We do miss out on a few very small optimisations, but I don't think
> they'll be anything we could measure. Namely
> build_aggregate_combinefn_expr() called make_agg_arg() once and used
> it twice instead of calling it once for each arg.  I don't think
> that's anything we could measure, especially in a situation where
> two-stage aggregation is being used.
> 
> I ended up also renaming aggtransfn to transfn_oid in
> build_pertrans_for_aggref(). Having it called aggtransfn seems a bit
> too close to the pg_aggregate.aggtransfn column, which is confusing
> given that we might pass it the value of the aggcombinefn column.

Now that master is open for development, and you have a commit bit, are
you planning to go forward with this on your own?

Greetings,

Andres Freund



On Thu, 25 Jul 2019 at 06:52, Andres Freund <andres@anarazel.de> wrote:
> Now that master is open for development, and you have a commit bit, are
> you planning to go forward with this on your own?

I plan to, but it's not a high priority at the moment.  I'd like to do
much more in nodeAgg.c, TBH. It would be good to remove some code from
nodeAgg.c and put it in the planner.

I'd like to see:

1) Planner doing the Aggref merging for aggregates with the same transfn etc.
2) Planner trying to give nodeAgg.c a sorted path to work with on
DISTINCT / ORDER BY aggs
3) Planner providing nodeAgg.c with the order that the aggregates
should be evaluated in order to minimise sorting for DISTINCT  / ORDER
BY aggs.

I'd take all those up on a separate thread though.

If you're in a rush to see the cleanup proposed a few months ago then
please feel free to take it up. It might be a while before I can get a
chance to look at it again.

-- 
 David Rowley                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services



Re: Statistical aggregate functions are not working with PARTIAL aggregation

From
Andres Freund
Date:
Hi,

On 2019-07-25 10:36:26 +1200, David Rowley wrote:
> I'd like to do much more in nodeAgg.c, TBH. It would be good to
> remove some code from nodeAgg.c and put it in the planner.

Indeed!


> I'd like to see:
> 
> 1) Planner doing the Aggref merging for aggregates with the same
> transfn etc.

Makes sense.

I assume this would entail associating T_Aggref expressions with the
corresponding Agg at an earlier stage? The whole business of having to
prepare expression evaluation just so ExecInitAgg() can figure out
which aggregates it has to compute has always struck me as
architecturally bad.


> 2) Planner trying to give nodeAgg.c a sorted path to work with on
> DISTINCT / ORDER BY aggs

That'll have to be a best effort thing though, i.e. there'll always be
cases where we'll have to retain the current logic (or just regress
performance really badly)?

Greetings,

Andres Freund



On Thu, 25 Jul 2019 at 11:33, Andres Freund <andres@anarazel.de> wrote:
>
> On 2019-07-25 10:36:26 +1200, David Rowley wrote:
> > 2) Planner trying to give nodeAgg.c a sorted path to work with on
> > DISTINCT / ORDER BY aggs
>
> That'll have to be a best effort thing though, i.e. there'll always be
> cases where we'll have to retain the current logic (or just regress
> performance really badly)?

It's something we already do for windowing functions. We just don't do
it for aggregates. It's slightly different, since windowing functions
just chain nodes together to evaluate multiple window definitions.
Aggregates can't/don't do that since aggregates... well, aggregate
(i.e. the input to the 2nd one can't be aggregated by the 1st one), but
there's likely not much of a reason why standard_qp_callback couldn't
choose some pathkeys for the first Aggref with an ORDER BY / DISTINCT
clause. nodeAgg.c would still need to know how to change the sort
order in order to evaluate other Aggrefs in some different order.

I'm not quite sure where the regression would be. nodeAgg.c must
perform the sort, or if we give the planner some pathkeys, then worst
case the planner adds a Sort node. That seems equivalent to me.
However, in the best case, there's a suitable index and no sorting is
required anywhere. Probably then we can add combine function support
for the remaining built-in aggregates. There was trouble doing that in
[1] due to some concerns about messing up results for people who rely
on the order of an aggregate without actually writing an ORDER BY.

[1] https://www.postgresql.org/message-id/CAKJS1f9sx_6GTcvd6TMuZnNtCh0VhBzhX6FZqw17TgVFH-ga_A@mail.gmail.com


-- 
 David Rowley                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services