From: Tomas Vondra
Subject: Re: Showing applied extended statistics in explain Part 2
Msg-id: 70c461b0-3111-471c-9532-af057d22c59e@enterprisedb.com
In response to: Re: Showing applied extended statistics in explain Part 2 (Tomas Vondra <tomas.vondra@enterprisedb.com>)
List: pgsql-hackers

Hello Yamada-san,

I finally got to look at this patch again - apologies for taking so
long; I'm well aware it's rather frustrating to wait for feedback. I'll
try to pay more attention to this patch, and don't hesitate to ping me
off-list if you need immediate input.

I looked at the patch from March 1 [1], which mostly still applies -
there's some minor bitrot, but no major conflicts. A couple of review
comments:


1) The patch is not in the CF app, which I think is a mistake. Can you
please add it to the 2024-07 commitfest? Otherwise people may not be
aware of it, won't do reviews, etc. It'll require posting a rebased
patch, but that should not be a big deal.


2) Not having the patch in a CF also means cfbot is not running tests
on it. Which is unfortunate, because the patch actually has a bug cfbot
would find - I noticed it after running the tests through the GitHub
CI, see [2].

FWIW I very much recommend setting up this CI and using it during
development - it turned out to be very valuable for me, as it tests on
a range of systems and is more convenient than the rpi5 machines I used
for that purpose before. See src/tools/ci/README for details.


3) The bug has this symptom:

  ERROR:  unrecognized node type: 268
  CONTEXT:  PL/pgSQL function check_estimated_rows(text) line 7 ...
  STATEMENT:  SELECT * FROM check_estimated_rows('SELECT a, b FROM ...

but it only shows up on the FreeBSD machine in the CI. That's simply
because that machine runs the tests with "-DCOPY_PARSE_PLAN_TREES",
which always copies the query plan, to make sure all the nodes can be
copied. And we don't have a copy function for the StatisticExtInfo node
(that's node 268), so it fails.

FWIW you can hit this failure even on a regular build - you just need
to run EXPLAIN on a prepared statement (on a table with an extended
statistics object):

  CREATE TABLE t (a int, b int);

  INSERT INTO t SELECT (i/100), (i/100)
    FROM generate_series(1,1000) s(i);

  CREATE STATISTICS ON a,b FROM t;

  ANALYZE t;

  PREPARE ps (INT, INT) AS SELECT * FROM t WHERE a = $1 AND b = $2;

  EXPLAIN EXECUTE ps(5,5);

  ERROR:  unrecognized node type: 268


4) I can think of two basic ways to fix this issue - either allow
copying of the StatisticExtInfo node, or represent the information in a
different way (e.g. add a new node for that purpose, or use existing
nodes to do that).

I don't think we should do the former - StatisticExtInfo lives in
pathnodes.h, is designed to be used during path construction, and has a
lot of fields we likely don't need for showing stuff in explain - but
we'd have to copy those too, and that seems useless / expensive.

So I suggest we invent a new (much simpler) node, tracking only the
bits we actually need for the explain part. Or alternatively, if you
think adding a separate node is overkill, maybe we could keep just the
OID of the applied statistics, and have explain look up the name?
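
(Just to illustrate that second option in SQL terms - in explain this
would go through the syscache rather than running SQL, and the OID
below is a placeholder for whatever the plan would remember:

  SELECT stxname, stxnamespace::regnamespace AS nsp
    FROM pg_statistic_ext
   WHERE oid = 12345;  -- OID of the applied statistics object

so the lookup itself is cheap.)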

But I think having a new node might also make the patch simpler, as
the new struct could combine the information the patch currently keeps
in three separate lists. Instead, there'd be just one new list in Plan,
with members of the new node type, and each element would be

  (statistics OID, list of clauses, flag is_or)

or something like that.


5) In [3] Tom raised two possible issues with doing this - the cost of
copying the information, and locking. For the performance concern, I
think the first thing we should do is measure how expensive it actually
is. I suggest measuring the overhead for three basic cases:

 - table with no extended stats
 - table with a couple (1-10?) extended stats
 - table with a lot of (100?) extended stats

And see the impact of the patch, i.e. measure the planning time on
master and with the patch applied. The table size should not matter
much, I think - we only need to measure the planning, not execute the
query. In practice the extra cost will become negligible as the
execution time grows, but here we're measuring the worst case, so
that's fine.
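
For example, the "lots of stats" case could be set up with something
like this sketch (the table/statistics names and counts are made up,
adjust as needed), and EXPLAIN (SUMMARY ON) already reports the
planning time:

  CREATE TABLE t_many (c1 int, c2 int, c3 int, c4 int, c5 int,
                       c6 int, c7 int, c8 int, c9 int, c10 int);

  INSERT INTO t_many
    SELECT i%10, i%10, i%10, i%10, i%10, i%10, i%10, i%10, i%10, i%10
      FROM generate_series(1,10000) s(i);

  -- one statistics object per column pair (45 objects for 10 columns)
  DO $$
  BEGIN
    FOR i IN 1..9 LOOP
      FOR j IN i+1..10 LOOP
        EXECUTE format('CREATE STATISTICS st_%s_%s ON c%s, c%s FROM t_many',
                       i, j, i, j);
      END LOOP;
    END LOOP;
  END $$;

  ANALYZE t_many;

  -- compare "Planning Time" on master vs. the patched build
  EXPLAIN (SUMMARY ON) SELECT * FROM t_many WHERE c1 = 1 AND c2 = 1;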

For the locking, I agree with Robert [4] that this probably should not
be an issue - I don't see why this would be different from indexes etc.
But I haven't thought about it too much, so maybe investigate and test
it a bit more (that plans get invalidated if the statistics change,
and so on).
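
The invalidation part might be checked with something like this (fresh
table, names invented for the example) - after the DROP STATISTICS the
re-planned EXECUTE should no longer reference the statistics object:

  CREATE TABLE t2 (a int, b int);

  INSERT INTO t2 SELECT (i/100), (i/100)
    FROM generate_series(1,1000) s(i);

  CREATE STATISTICS t2_ab_stats ON a, b FROM t2;

  ANALYZE t2;

  PREPARE ps2 (int, int) AS SELECT * FROM t2 WHERE a = $1 AND b = $2;

  EXPLAIN EXECUTE ps2(5,5);    -- should show t2_ab_stats being applied

  DROP STATISTICS t2_ab_stats;

  EXPLAIN EXECUTE ps2(5,5);    -- the reference should be gone after replanning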


6) I'm not sure we want to have this under EXPLAIN (VERBOSE). That's
what I did in the initial PoC patch, but maybe we should invent a new
flag for this purpose, otherwise VERBOSE will cover too much stuff? I'm
thinking of "STATS", for example.

This would probably mean the patch should also add a new auto_explain
"log_" flag to enable/disable this.


7) The patch really needs some docs - I'd mention this in the EXPLAIN
docs, probably. There's also a chapter about estimates - maybe that
should mention this too? Try searching for places in the SGML docs
mentioning extended stats and/or explain, I guess.

For tests, I guess stats_ext is the right place to test this. I'm not
sure what the best way to do that is, though. If it's covered by
VERBOSE, it seems it might be unstable - and that would be an issue.
But maybe we could add a function similar to check_estimated_rows(),
verifying that the query used the expected statistics to estimate the
expected clauses.

But maybe with the new explain "STATS" flag it would be easier, because
we could do EXPLAIN (COSTS OFF, STATS ON) and that would be stable.
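
A rough sketch of such a helper, assuming the patched EXPLAIN prints
the applied statistics on lines containing "Statistics:" (both the
option name and the output format are placeholders, to be adjusted to
whatever the patch actually produces):

  CREATE FUNCTION check_applied_stats(query text) RETURNS SETOF text
  LANGUAGE plpgsql AS $$
  DECLARE
    ln text;
  BEGIN
    FOR ln IN EXECUTE format('EXPLAIN (COSTS OFF, STATS) %s', query)
    LOOP
      -- keep only the lines listing the applied statistics
      IF ln LIKE '%Statistics:%' THEN
        RETURN NEXT ln;
      END IF;
    END LOOP;
  END; $$;

  SELECT * FROM check_applied_stats(
    'SELECT * FROM t WHERE a = 1 AND b = 1');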

As for what needs to be tested, I don't think we need to test how we
match queries/clauses to statistics - that's already tested. It's fine
to focus just on displaying the expected stuff. I'd take a couple of
the existing tests and check those. And then also add a couple of tests
for prepared statements, and for invalidation of the plan after an
extended statistics object gets dropped, etc.


So there's stuff to do to make this committable, but hopefully this
review gives you some guidance regarding what/how ;-)


regards


[1] https://www.postgresql.org/message-id/TYYPR01MB82310B308BA8770838F681619E5E2%40TYYPR01MB8231.jpnprd01.prod.outlook.com

[2] https://cirrus-ci.com/build/6436352672137216

[3] https://www.postgresql.org/message-id/459863.1627419001%40sss.pgh.pa.us

[4] https://www.postgresql.org/message-id/CA%2BTgmoZU34zo4%3DhyqgLH16iGpHQ6%2BQAesp7k5a1cfZB%3D%2B9xtsw%40mail.gmail.com

-- 
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


