Re: Problems with "pg.dropped" column after upgrade 9.5 to 9.6 - Mailing list pgsql-bugs
| From | Tom Lane |
|---|---|
| Subject | Re: Problems with "pg.dropped" column after upgrade 9.5 to 9.6 |
| Date | |
| Msg-id | 27650.1478106537@sss.pgh.pa.us |
| In response to | Re: Problems with "pg.dropped" column after upgrade 9.5 to 9.6 (Tom Lane <tgl@sss.pgh.pa.us>) |
| Responses | Re: Problems with "pg.dropped" column after upgrade 9.5 to 9.6 |
| List | pgsql-bugs |
I wrote:
> So I'm pretty tempted to fix it as per attached, that is, teach setrefs.c
> that it's silly to try to convert a Const into a Var. It is sane to
> replace more-complicated expressions with Vars, because that means we save
> re-evaluating those expressions. But a quick look at ExecEvalConst and
> ExecEvalVar shows that a Const is strictly cheaper to execute than a Var,
> so doing such a conversion is actually a pessimization.
After further poking around, I concluded that it'd be a good idea to make
a similar change in set_dummy_tlist_references(), to prevent a failure
if the top plan node below ModifyTable chances to be a Sort or other
non-projecting plan node. I'm not sure such a case can arise today, but
I'm not sure it can't either.
With that, there are a couple more changes in regression test EXPLAIN
output. A full patch that passes "make check-world" is attached.
I'm not sure if it's worth memorializing the specific test case discussed
in this thread as a regression test. It depends enough on the behavior
of SQL function planning that I wouldn't have much confidence in it
continuing to test what we meant it to test going forward.
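For reference, the plan-output change can be seen with a query along these lines (an illustrative sketch patterned on the inherit.out regression case below, not an exact reproduction; it assumes the standard regression database with table tenk1 loaded):

```sql
-- A UNION ALL branch whose target list contains bare constants must
-- pass those constants up through a Sort/MergeAppend.  Before the
-- patch, setrefs.c replaced each Const with a Var referencing it, so
-- EXPLAIN printed the constants parenthesized ("Sort Key: (42), (42)");
-- with the patch the Consts are kept as-is ("Sort Key: 42, 42").
EXPLAIN (VERBOSE, COSTS OFF)
SELECT thousand, tenthous FROM tenk1
UNION ALL
SELECT 42, 42 FROM tenk1
ORDER BY thousand, tenthous;
```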
regards, tom lane
diff --git a/contrib/postgres_fdw/expected/postgres_fdw.out b/contrib/postgres_fdw/expected/postgres_fdw.out
index 2745ad5..7339b58 100644
*** a/contrib/postgres_fdw/expected/postgres_fdw.out
--- b/contrib/postgres_fdw/expected/postgres_fdw.out
*************** select count(c2) w, c2 x, 5 y, 7.0 z fro
*** 2447,2456 ****
QUERY PLAN
----------------------------------------------------------------------------------------------------------
Sort
! Output: (count(c2)), c2, (5), (7.0), (9)
Sort Key: ft1.c2
-> Foreign Scan
! Output: (count(c2)), c2, (5), (7.0), (9)
Relations: Aggregate on (public.ft1)
Remote SQL: SELECT count(c2), c2, 5, 7.0, 9 FROM "S 1"."T 1" GROUP BY c2, 5::integer, 9::integer
(7 rows)
--- 2447,2456 ----
QUERY PLAN
----------------------------------------------------------------------------------------------------------
Sort
! Output: (count(c2)), c2, 5, 7.0, 9
Sort Key: ft1.c2
-> Foreign Scan
! Output: (count(c2)), c2, 5, 7.0, 9
Relations: Aggregate on (public.ft1)
Remote SQL: SELECT count(c2), c2, 5, 7.0, 9 FROM "S 1"."T 1" GROUP BY c2, 5::integer, 9::integer
(7 rows)
*************** select count(*) from (select c5, count(c
*** 2499,2505 ****
Aggregate
Output: count(*)
-> Foreign Scan
! Output: ft1.c5, (NULL::bigint), (sqrt((ft1.c2)::double precision))
Filter: (((((avg(ft1.c1)) / (avg(ft1.c1))))::double precision * random()) <= '1'::double precision)
Relations: Aggregate on (public.ft1)
Remote SQL: SELECT c5, NULL::bigint, sqrt(c2), avg("C 1") FROM "S 1"."T 1" GROUP BY c5, (sqrt(c2)) HAVING ((avg("C 1") < 500::numeric))
--- 2499,2505 ----
Aggregate
Output: count(*)
-> Foreign Scan
! Output: ft1.c5, NULL::bigint, (sqrt((ft1.c2)::double precision))
Filter: (((((avg(ft1.c1)) / (avg(ft1.c1))))::double precision * random()) <= '1'::double precision)
Relations: Aggregate on (public.ft1)
Remote SQL: SELECT c5, NULL::bigint, sqrt(c2), avg("C 1") FROM "S 1"."T 1" GROUP BY c5, (sqrt(c2)) HAVING ((avg("C 1") < 500::numeric))
*************** select sum(q.a), count(q.b) from ft4 lef
*** 3139,3145 ****
-> Subquery Scan on q
Output: q.a, q.b
-> Foreign Scan
! Output: (13), (avg(ft1.c1)), (NULL::bigint)
Relations: Aggregate on ((public.ft2) LEFT JOIN (public.ft1))
Remote SQL: SELECT 13, avg(r1."C 1"), NULL::bigint FROM ("S 1"."T 1" r2 LEFT JOIN "S 1"."T 1" r1 ON (((r1."C 1" = r2."C 1"))))
(16 rows)
--- 3139,3145 ----
-> Subquery Scan on q
Output: q.a, q.b
-> Foreign Scan
! Output: 13, (avg(ft1.c1)), NULL::bigint
Relations: Aggregate on ((public.ft2) LEFT JOIN (public.ft1))
Remote SQL: SELECT 13, avg(r1."C 1"), NULL::bigint FROM ("S 1"."T 1" r2 LEFT JOIN "S 1"."T 1" r1 ON (((r1."C 1" = r2."C 1"))))
(16 rows)
diff --git a/src/backend/optimizer/plan/setrefs.c b/src/backend/optimizer/plan/setrefs.c
index d10a983..d91bc3b 100644
*** a/src/backend/optimizer/plan/setrefs.c
--- b/src/backend/optimizer/plan/setrefs.c
*************** set_dummy_tlist_references(Plan *plan, i
*** 1823,1828 ****
--- 1823,1841 ----
Var *oldvar = (Var *) tle->expr;
Var *newvar;
+ /*
+ * As in search_indexed_tlist_for_non_var(), we prefer to keep Consts
+ * as Consts, not Vars referencing Consts. Here, there's no speed
+ * advantage to be had, but it makes EXPLAIN output look cleaner, and
+ * again it avoids confusing the executor.
+ */
+ if (IsA(oldvar, Const))
+ {
+ /* just reuse the existing TLE node */
+ output_targetlist = lappend(output_targetlist, tle);
+ continue;
+ }
+
newvar = makeVar(OUTER_VAR,
tle->resno,
exprType((Node *) oldvar),
*************** search_indexed_tlist_for_non_var(Node *n
*** 2010,2015 ****
--- 2023,2038 ----
{
TargetEntry *tle;
+ /*
+ * If it's a simple Const, replacing it with a Var is silly, even if there
+ * happens to be an identical Const below; a Var is more expensive to
+ * execute than a Const. What's more, replacing it could confuse some
+ * places in the executor that expect to see simple Consts for, eg,
+ * dropped columns.
+ */
+ if (IsA(node, Const))
+ return NULL;
+
tle = tlist_member(node, itlist->tlist);
if (tle)
{
diff --git a/src/test/regress/expected/inherit.out b/src/test/regress/expected/inherit.out
index df7cba6..79e9969 100644
*** a/src/test/regress/expected/inherit.out
--- b/src/test/regress/expected/inherit.out
*************** ORDER BY thousand, tenthous;
*** 1426,1432 ****
Sort Key: tenk1.thousand, tenk1.tenthous
-> Index Only Scan using tenk1_thous_tenthous on tenk1
-> Sort
! Sort Key: (42), (42)
-> Index Only Scan using tenk1_hundred on tenk1 tenk1_1
(6 rows)
--- 1426,1432 ----
Sort Key: tenk1.thousand, tenk1.tenthous
-> Index Only Scan using tenk1_thous_tenthous on tenk1
-> Sort
! Sort Key: 42, 42
-> Index Only Scan using tenk1_hundred on tenk1 tenk1_1
(6 rows)