[PERFORM] Re: join estimate of subqueries with range conditions and constraint exclusion - Mailing list pgsql-performance

From Justin Pryzby
Subject [PERFORM] Re: join estimate of subqueries with range conditions and constraint exclusion
Date 2017-05-30
Msg-id 20170530105215.GA13459@telsasoft.com
In response to [PERFORM] join estimate of subqueries with range conditions and constraint exclusion  (Justin Pryzby <pryzby@telsasoft.com>)
Responses Re: [PERFORM] Re: join estimate of subqueries with range conditions and constraint exclusion  (Tom Lane <tgl@sss.pgh.pa.us>)
List pgsql-performance
On Wed, May 24, 2017 at 04:17:30PM -0500, Justin Pryzby wrote:
> We got bitten again by what appears to be the same issue I reported (perhaps
> poorly) here:
> https://www.postgresql.org/message-id/20170326193344.GS31628%40telsasoft.com

> I'm diagnosing a bad estimate/plan due to excessively high n_distinct leading
> to underestimated rowcount when selecting from a small fraction of the table
> hierarchy.  This leads intermittently to bad things, specifically a cascade of
> misestimates and associated nested loops around millions of rows.

I dug into this some more; I can mitigate the issue with this change:

diff --git a/src/backend/utils/adt/selfuncs.c b/src/backend/utils/adt/selfuncs.c
index 6a4f7b1..962a5b4 100644
--- a/src/backend/utils/adt/selfuncs.c
+++ b/src/backend/utils/adt/selfuncs.c
@@ -2279,6 +2279,22 @@ eqjoinsel_inner(Oid operator,

        nd1 = get_variable_numdistinct(vardata1, &isdefault1);
        nd2 = get_variable_numdistinct(vardata2, &isdefault2);
+       elog(DEBUG4, "nd %lf %lf", nd1, nd2);
+       /* Clamp ndistinct to the relation's estimated rowcount: the joined
+        * rel cannot have more distinct values than it has rows. */
+       if (nd1 > vardata1->rel->rows) nd1 = vardata1->rel->rows;
+       if (nd2 > vardata2->rel->rows) nd2 = vardata2->rel->rows;
+
+       elog(DEBUG4, "nd %lf %lf", nd1, nd2);
+       elog(DEBUG4, "rows %lf %lf", vardata1->rel->rows, vardata2->rel->rows);
+       elog(DEBUG4, "tuples %lf %lf", vardata1->rel->tuples, vardata2->rel->tuples);

original estimate:

DEBUG:  nd 35206.000000 35206.000000
DEBUG:  nd 35206.000000 35206.000000
DEBUG:  rows 5031.000000 5031.000000
DEBUG:  tuples 5031.000000 5031.000000
                                                                           QUERY PLAN

----------------------------------------------------------------------------------------------------------------------------------------------------------------
 Hash Join  (cost=1294.56..2558.62 rows=723 width=750) (actual time=103.273..490.984 rows=50300 loops=1)
   Hash Cond: (eric_enodeb_metrics.start_time = eric_enodeb_metrics_1.start_time)

patched estimate/plan:

postgres=# explain ANALYZE SELECT * FROM (SELECT * FROM eric_enodeb_metrics WHERE start_time>'2017-04-25 18:00') x JOIN
(SELECT * FROM eric_enodeb_metrics WHERE start_time>'2017-04-25 18:00') y USING (start_time);

DEBUG:  nd 35206.000000 35206.000000
DEBUG:  nd 5031.000000 5031.000000
DEBUG:  rows 5031.000000 5031.000000
DEBUG:  tuples 5031.000000 5031.000000

| Hash Join  (cost=1294.56..2602.14 rows=5075 width=750) (actual time=90.445..477.712 rows=50300 loops=1)
|   Hash Cond: (eric_enodeb_metrics.start_time = eric_enodeb_metrics_1.start_time)
|   ->  Append  (cost=0.00..1231.67 rows=5031 width=379) (actual time=16.424..46.899 rows=5030 loops=1)
|         ->  Seq Scan on eric_enodeb_metrics  (cost=0.00..0.00 rows=1 width=378) (actual time=0.012..0.012 rows=0 loops=1)
|               Filter: (start_time > '2017-04-25 18:00:00-05'::timestamp with time zone)
|         ->  Seq Scan on eric_enodeb_201704  (cost=0.00..1231.67 rows=5030 width=379) (actual time=16.408..45.634 rows=5030 loops=1)
|               Filter: (start_time > '2017-04-25 18:00:00-05'::timestamp with time zone)
|               Rows Removed by Filter: 23744
|   ->  Hash  (cost=1231.67..1231.67 rows=5031 width=379) (actual time=73.801..73.801 rows=5030 loops=1)
|         Buckets: 8192  Batches: 1  Memory Usage: 1283kB
|         ->  Append  (cost=0.00..1231.67 rows=5031 width=379) (actual time=14.607..47.395 rows=5030 loops=1)
|               ->  Seq Scan on eric_enodeb_metrics eric_enodeb_metrics_1  (cost=0.00..0.00 rows=1 width=378) (actual time=0.009..0.009 rows=0 loops=1)
|                     Filter: (start_time > '2017-04-25 18:00:00-05'::timestamp with time zone)
|               ->  Seq Scan on eric_enodeb_201704 eric_enodeb_201704_1  (cost=0.00..1231.67 rows=5030 width=379) (actual time=14.594..46.091 rows=5030 loops=1)
|                     Filter: (start_time > '2017-04-25 18:00:00-05'::timestamp with time zone)
|                     Rows Removed by Filter: 23744
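
For reference, when no MCV lists are usable, eqjoinsel_inner's fallback
selectivity is roughly 1/max(nd1, nd2) (ignoring null fractions), so the two
estimates above can be sanity-checked with back-of-envelope SQL (approximate,
not the planner's exact arithmetic):

-- unpatched: nd keeps the whole-hierarchy value of 35206
SELECT round(5031 * 5031 / 35206.0);  -- ~719; the planner shows 723
-- patched: nd clamped to rel->rows = 5031
SELECT round(5031 * 5031 / 5031.0);   -- 5031; the planner shows 5075
-- either way the actual join produces 50300 rows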

.. which gets progressively more extreme as the condition becomes more
restrictive, since the rows estimate diverges further from ndistinct.

There's still a second issue which this doesn't address, having to do with
joins of tables with full/complete MCV lists and selective queries on those
tables, as demonstrated by this artificial test:

> postgres=# CREATE TABLE t(i INT);
> postgres=# TRUNCATE t; INSERT INTO t SELECT i FROM generate_series(1,99) i, generate_series(1,99); ANALYZE t;
> postgres=# SELECT (SELECT sum(x) FROM unnest(most_common_freqs) x) frac_MCV, tablename, attname, n_distinct,
>     array_length(most_common_vals,1) n_mcv, array_length(histogram_bounds,1) n_hist,
>     (SELECT MAX(x) FROM unnest(most_common_vals::text::text[]) x) maxmcv,
>     (histogram_bounds::text::text[])[array_length(histogram_bounds,1)] maxhist
>   FROM pg_stats WHERE attname~'i' AND tablename='t' GROUP BY 1,2,3,4,5,6,7,8 ORDER BY 1 DESC;
> -[ RECORD 1 ]--
> frac_mcv   | 1
> tablename  | t
> attname    | i
> n_distinct | 99
> n_mcv      | 99
> n_hist     |
> maxmcv     | 99
> maxhist    |
>
> a range query (which could use constraint exclusion), but with a bad estimate:
> postgres=# explain ANALYZE SELECT * FROM (SELECT * FROM t WHERE i<2) AS a JOIN (SELECT * FROM t WHERE i<2) AS b USING (i);
>  Merge Join  (cost=339.59..341.57 rows=99 width=4) (actual time=8.272..16.892 rows=9801 loops=1)


> My understanding:
> Postgres estimates join selectivity using the number of distinct values of the
> underlying table.  For the subqueries "a" and "b", the estimate is the same as
> for the underlying table "t", even when selecting only a small fraction of the table...
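
To spell out the arithmetic behind that rows=99 estimate (this is my sketch of
the MCV-matching path, not the exact code): eqjoinsel_inner sums f1*f2 over the
matching MCV entries of the *unrestricted* statistics, and that selectivity is
then multiplied by the post-restriction rowcounts:

-- 99 matching MCVs, each with frequency 1/99 => selectivity 99*(1/99)^2 = 1/99;
-- with 99 rows on each side after "i<2", the join estimate is 99*99*(1/99):
SELECT round(99 * 99 * (99 * pow(1/99.0, 2)));  -- 99, matching the plan
-- but every surviving row has i=1, so the true selectivity is 1:
SELECT 99 * 99;                                 -- 9801, the actual rowcount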

It seems to me that: 1) tables whose MCV lists cover every value of a column
should get much better estimates than that, and hopefully the estimate of
(t WHERE) JOIN (t WHERE) USING (c) can be made as good as that of
t JOIN t USING (c) WHERE; 2) the postgres estimator doesn't have everything it
needs to apply its existing knowledge without also invoking the executor
(testing the MCVs against the qual conditions); 3) the frequency values
(eqjoinsel's numbers[]) should be scaled up by something resembling
tuples/rows, but my existing test showed that can be too strong a correction.
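
For what it's worth, applying the scaling idea from (3) to the artificial test
above (the tuples/rows factor and the clamp are my guesses at what such a
correction might look like, not anything the planner does today): rescaling
each MCV frequency to the restricted population gives (1/99)*(9801/99) = 1, and
summing f1*f2 over all 99 MCVs then yields a "selectivity" of 99, which has to
be clamped:

-- raw corrected selectivity: 99 * (scaled frequency 1)^2 = 99, clamped to 1.0
SELECT least(1.0, 99 * pow((1/99.0) * (9801/99.0), 2));
-- the clamp happens to give the right answer here (99*99*1.0 = 9801), but
-- needing to clamp at all shows how the correction can overshoot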

Justin

