Re: BUG #15486: PG11 jit on 50x slower than jit off - Mailing list pgsql-bugs

From Andres Freund
Subject Re: BUG #15486: PG11 jit on 50x slower than jit off
Date
Msg-id 20181122015409.i4jd6e2ypwmbkyu5@alap3.anarazel.de
In response to BUG #15486: PG11 jit on 50x slower than jit off  (PG Bug reporting form <noreply@postgresql.org>)
Responses Re: BUG #15486: PG11 jit on 50x slower than jit off  (Andrew Gierth <andrew@tao11.riddles.org.uk>)
List pgsql-bugs
Hi,

On 2018-11-05 08:10:17 +0000, PG Bug reporting form wrote:
> we have a problem with PG11 and JIT. Execution time with JIT ON is slower
> than JIT OFF. PostgreSQL is running on Debian in LXC Container on Proxmox
> VE.
> 
> Here is some example:
> 
> create table t(a int);
> insert into t select generate_series from generate_series(1, 100000);
> analyze t;
> set jit to off;
> 
> explain analyze SELECT
>     ARRAY( SELECT DISTINCT unnest(array[]::integer[] || t.a) AS unnest) AS c1,
>     ARRAY( SELECT DISTINCT unnest(((((array[]::integer[]))) || t.a)) AS unnest) AS c2
>    FROM t
> ;
> 
> set jit to on;
> 
> explain analyze SELECT
>     ARRAY( SELECT DISTINCT unnest(array[]::integer[] || t.a) AS unnest) AS c1,
>     ARRAY( SELECT DISTINCT unnest(((((array[]::integer[]))) || t.a)) AS unnest) AS c2
>    FROM t
> ;
> 

> Here are results:

>    Timing: Generation 55262.932 ms, Inlining 0.000 ms, Optimization 0.481 ms, Emission 12.218 ms, Total 55275.631 ms
>  Execution Time: 70522.785 ms
> (17 rows)
> ------------------

Thanks for the handy reproducer!

The relevant call stack out of which most of the JIT compilation happens
is:
#11 0x0000560b5baa312a in jit_compile_expr (state=0x560b5d5d25c0)
    at /home/andres/src/postgresql/src/backend/jit/jit.c:180
#12 0x0000560b5b6ca191 in ExecReadyExpr (state=0x560b5d5d25c0)
    at /home/andres/src/postgresql/src/backend/executor/execExpr.c:628
#13 0x0000560b5b6cfe43 in ExecBuildGroupingEqual (ldesc=0x560b5d5dffa8, rdesc=0x560b5d5dffa8, 
    lops=0x560b5bd7d9c0 <TTSOpsMinimalTuple>, rops=0x560b5bd7d9c0 <TTSOpsMinimalTuple>, numCols=1, 
    keyColIdx=0x560b5d5dff28, eqfunctions=0x560b5d5e00c0, parent=0x560b5d5b7e90)
    at /home/andres/src/postgresql/src/backend/executor/execExpr.c:3449
#14 0x0000560b5b6d8b10 in BuildTupleHashTable (parent=0x560b5d5b7e90, inputDesc=0x560b5d5dffa8, 
    numCols=1, keyColIdx=0x560b5d5dff28, eqfuncoids=0x560b5d5e00c0, hashfunctions=0x560b5d5e00e0, 
    nbuckets=100, additionalsize=0, tablecxt=0x560b5d5d11f0, tempcxt=0x560b5d5cd1d0, 
    use_variable_hash_iv=false) at /home/andres/src/postgresql/src/backend/executor/execGrouping.c:211
#15 0x0000560b5b6f6155 in build_hash_table (aggstate=0x560b5d5b7e90)
    at /home/andres/src/postgresql/src/backend/executor/nodeAgg.c:1280
#16 0x0000560b5b6fa1f1 in ExecReScanAgg (node=0x560b5d5b7e90)
    at /home/andres/src/postgresql/src/backend/executor/nodeAgg.c:3444
#17 0x0000560b5b6c8491 in ExecReScan (node=0x560b5d5b7e90)
    at /home/andres/src/postgresql/src/backend/executor/execAmi.c:261
#18 0x0000560b5b71ec64 in ExecScanSubPlan (node=0x560b5d5e1e28, econtext=0x560b5d5e0a58, 
    isNull=0x560b5d5e0ed5) at /home/andres/src/postgresql/src/backend/executor/nodeSubplan.c:298
#19 0x0000560b5b71e72c in ExecSubPlan (node=0x560b5d5e1e28, econtext=0x560b5d5e0a58, 
    isNull=0x560b5d5e0ed5) at /home/andres/src/postgresql/src/backend/executor/nodeSubplan.c:90
#20 0x0000560b5b6d6850 in ExecEvalSubPlan (state=0x560b5d5e0ed0, op=0x560b5d5e1020, 
    econtext=0x560b5d5e0a58) at /home/andres/src/postgresql/src/backend/executor/execExprInterp.c:3770

So what happens is that we currently generate an expression whenever
BuildTupleHashTable() is called - which generally sounds sane. But in
this specific case BuildTupleHashTable() is called many, many times,
because we reset the subplans (the correlated subqueries) for every
input row.

This isn't great even outside of JIT compilation - we waste a
substantial portion of time rebuilding the expression every time, and
the hashtable has to be re-allocated and re-grown every time as well.

I see three possible solutions:

1) Move the ExecBuildGroupingEqual() call to BuildTupleHashTable()'s
   callers and pass the resulting expression in. That way we don't need
   to rebuild the expression, but would still pay the price of
   re-allocating the hashtable itself.
2) Create a new ResetTupleHashTable() that reuses the TupleHashTable,
   but recreates hashtable->hashtab afresh (rough sketch below).
3) Create a new ResetTupleHashTable() that iterates over the hashtable
   and resets the fields.
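
For 2), something roughly along these lines ought to suffice (untested
sketch, just to make the idea concrete; the simplehash call signatures
are from memory, and the explicit nbuckets parameter is only one
possible way of picking the new initial size):

/* in src/backend/executor/execGrouping.c */

/*
 * Reuse an existing TupleHashTable - including the grouping-equal
 * expression built (and possibly JIT compiled) by ExecBuildGroupingEqual()
 * - but throw the underlying simplehash table away and create an empty one
 * in its place.  Tuples previously stored via hashtable->tablecxt are
 * assumed to be released separately, by resetting that context.
 */
void
ResetTupleHashTable(TupleHashTable hashtable, long nbuckets)
{
	tuplehash_destroy(hashtable->hashtab);
	hashtable->hashtab = tuplehash_create(hashtable->tablecxt,
										  (uint32) nbuckets,
										  hashtable);
}

ExecReScanAgg() etc. could then call that from their rescan paths instead
of going through BuildTupleHashTable() (and hence JIT compilation) again.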

I'm somewhat opposed to 1) as that'll require breaking existing APIs in
11, and it exposes more details to the outside than necessary.  I don't
really have an opinion between 2) and 3) - 2) is better if the input
sizes are very irregular, with only a few large hashtables between
resets; 3) is much better when the predicted size is correct or
commonly too small.

Comments?

Greetings,

Andres Freund

