Thread: explain analyze rows=%.0f

explain analyze rows=%.0f

From
Robert Haas
Date:
I have always assumed that there is some very good reason why EXPLAIN
ANALYZE reports the number of rows as an integer rather than a
floating point value, but in reading explain.c it seems that the
reason is just that we decided to round to zero decimal places.  Any
chance we could reconsider this decision?  I often find myself wanting
to know the value that is here called ntuples, but rounding
ntuples/nloops off to the nearest integer loses too much precision.
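For illustration, a minimal standalone C sketch of the rounding in question (this is not the
explain.c source itself, only the %.0f behavior it uses; the numbers are invented):

#include <stdio.h>

int
main(void)
{
    double ntuples = 3;          /* total rows the node returned          */
    double nloops  = 2;          /* number of times the node was executed */
    double rows    = ntuples / nloops;

    printf("rows=%.0f loops=%.0f\n", rows, nloops);  /* prints rows=2 (rounded) */
    printf("rows=%.2f loops=%.0f\n", rows, nloops);  /* prints rows=1.50        */
    return 0;
}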

(Before someone mentions it, yes that would be a good thing to include
in XML-formatted explain output.  But I don't see that including a
couple of decimal places would hurt the text output format either.)

...Robert


Re: explain analyze rows=%.0f

From
Euler Taveira de Oliveira
Date:
Robert Haas escreveu:
> I have always assumed that there is some very good reason why EXPLAIN
> ANALYZE reports the number of rows as an integer rather than a
> floating point value, but in reading explain.c it seems that the
> reason is just that we decided to round to zero decimal places.  Any
> chance we could reconsider this decision?  I often find myself wanting
> to know the value that is here called ntuples, but rounding
> ntuples/nloops off to the nearest integer loses too much precision.
> 
Don't you think it is too strange having, for example, 6.67 rows? It would confuse
users and programs that parse the EXPLAIN output. However, I wouldn't object
to adding ntuples to an extended explain output (as discussed in the other thread).


-- 
Euler Taveira de Oliveira
http://www.timbira.com/


Re: explain analyze rows=%.0f

From
Robert Haas
Date:
On Thu, May 28, 2009 at 11:00 PM, Euler Taveira de Oliveira
<euler@timbira.com> wrote:
> Robert Haas escreveu:
>> I have always assumed that there is some very good reason why EXPLAIN
>> ANALYZE reports the number of rows as an integer rather than a
>> floating point value, but in reading explain.c it seems that the
>> reason is just that we decided to round to zero decimal places.  Any
>> chance we could reconsider this decision?  I often find myself wanting
>> to know the value that is here called ntuples, but rounding
>> ntuples/nloops off to the nearest integer loses too much precision.
>>
> Don't you think is too strange having, for example, 6.67 rows?

No stranger than having it say 7 when it's really not.  Actually mine
mostly come out 1 when the real value is somewhere between 0.5 and
1.49.  :-(

...Robert


Re: explain analyze rows=%.0f

From
Joshua Tolley
Date:
On Thu, May 28, 2009 at 11:12:42PM -0400, Robert Haas wrote:
> On Thu, May 28, 2009 at 11:00 PM, Euler Taveira de Oliveira
> > Don't you think is too strange having, for example, 6.67 rows?
>
> No stranger than having it say 7 when it's really not.  Actually mine
> mostly come out 1 when the real value is somewhere between 0.5 and
> 1.49.  :-(

+1. It would help users realize more quickly that some of the values in the
EXPLAIN output are, for instance, *average* number of rows *per iteration* of a
nested loop, say, rather than total rows found in all loops. That's an
important distinction that isn't immediately clear to the novice EXPLAIN
reader, but would become so very quickly as users tried to figure out how a
scan could come up with a fractional row.

- Josh / eggyknap

Re: explain analyze rows=%.0f

From
Tom Lane
Date:
Joshua Tolley <eggyknap@gmail.com> writes:
> On Thu, May 28, 2009 at 11:12:42PM -0400, Robert Haas wrote:
>> On Thu, May 28, 2009 at 11:00 PM, Euler Taveira de Oliveira
>>> Don't you think is too strange having, for example, 6.67 rows?
>> 
>> No stranger than having it say 7 when it's really not.  Actually mine
>> mostly come out 1 when the real value is somewhere between 0.5 and
>> 1.49.  :-(

> +1. It would help users realize more quickly that some of the values in the
> EXPLAIN output are, for instance, *average* number of rows *per iteration* of a
> nested loop, say, rather than total rows found in all loops.

I think it would only be sensible to show fractional digits if nloops is
greater than 1.  Otherwise the value must in fact be an integer, and
you're just going to confuse people more by suggesting that it might not
be.
        regards, tom lane


Re: explain analyze rows=%.0f

From
Robert Haas
Date:
On Fri, May 29, 2009 at 1:30 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Joshua Tolley <eggyknap@gmail.com> writes:
>> On Thu, May 28, 2009 at 11:12:42PM -0400, Robert Haas wrote:
>>> On Thu, May 28, 2009 at 11:00 PM, Euler Taveira de Oliveira
>>>> Don't you think is too strange having, for example, 6.67 rows?
>>>
>>> No stranger than having it say 7 when it's really not.  Actually mine
>>> mostly come out 1 when the real value is somewhere between 0.5 and
>>> 1.49.  :-(
>
>> +1. It would help users realize more quickly that some of the values in the
>> EXPLAIN output are, for instance, *average* number of rows *per iteration* of a
>> nested loop, say, rather than total rows found in all loops.
>
> I think it would only be sensible to show fractional digits if nloops is
> greater than 1.  Otherwise the value must in fact be an integer, and
> you're just going to confuse people more by suggesting that it might not
> be.

That might be over-engineering, but I'll take it.

...Robert


Re: explain analyze rows=%.0f

From
Ron Mayer
Date:
Euler Taveira de Oliveira wrote:
> Robert Haas escreveu:
>> ...EXPLAIN ANALYZE reports the number of rows as an integer...  Any
>> chance we could reconsider this decision?  I often find myself wanting
>> to know the value that is here called ntuples, but rounding
>> ntuples/nloops off to the nearest integer loses too much precision.
>>
> Don't you think is too strange having, for example, 6.67 rows? I would confuse
> users and programs that parses the EXPLAIN output. However, I wouldn't object

I don't think it's that confusing.   If it says "0.1 rows", I imagine most
people would infer that this means "typically 0, but sometimes 1 or a few" rows.

What I'd find strange about "6.67 rows" in your example is more that on
the estimated rows side, it seems to imply an unrealistically precise estimate
in the same way that "667 rows" would seem unrealistically precise to me.
Maybe rounding to 2 significant digits would reduce confusion?


Re: explain analyze rows=%.0f

From
Simon Riggs
Date:
On Mon, 2009-06-01 at 20:30 -0700, Ron Mayer wrote:

> What I'd find strange about "6.67 rows" in your example is more that on
> the estimated rows side, it seems to imply an unrealistically precise estimate
> in the same way that "667 rows" would seem unrealistically precise to me.
> Maybe rounding to 2 significant digits would reduce confusion?

You're right that the number of significant digits already exceeds the
true accuracy of the computation. I think what Robert wants to see is
the exact value used in the calc, so the estimates can be checked more
thoroughly than is currently possible.

-- 
Simon Riggs           www.2ndQuadrant.com
PostgreSQL Training, Services and Support



Re: explain analyze rows=%.0f

From
Robert Haas
Date:
On Jun 2, 2009, at 9:41 AM, Simon Riggs <simon@2ndQuadrant.com> wrote:

>
> On Mon, 2009-06-01 at 20:30 -0700, Ron Mayer wrote:
>
>> What I'd find strange about "6.67 rows" in your example is more  
>> that on
>> the estimated rows side, it seems to imply an unrealistically  
>> precise estimate
>> in the same way that "667 rows" would seem unrealistically precise  
>> to me.
>> Maybe rounding to 2 significant digits would reduce confusion?
>
> You're right that the number of significant digits already exceeds the
> true accuracy of the computation. I think what Robert wants to see is
> the exact value used in the calc, so the estimates can be checked more
> thoroughly than is currently possible.

Bingo.

...Robert


Re: explain analyze rows=%.0f

From
Tom Lane
Date:
Robert Haas <robertmhaas@gmail.com> writes:
> On Jun 2, 2009, at 9:41 AM, Simon Riggs <simon@2ndQuadrant.com> wrote:
>> You're right that the number of significant digits already exceeds the
>> true accuracy of the computation. I think what Robert wants to see is
>> the exact value used in the calc, so the estimates can be checked more
>> thoroughly than is currently possible.

> Bingo.

Uh, the planner's estimate *is* an integer.  What was under discussion
(I thought) was showing some fractional digits in the case where EXPLAIN
ANALYZE is outputting a measured row count that is an average over
multiple loops, and therefore isn't necessarily an integer.  In that
case the measured value can be considered arbitrarily precise --- though
I think in practice one or two fractional digits would be plenty.
        regards, tom lane


Re: explain analyze rows=%.0f

From
Robert Haas
Date:

...Robert

On Jun 2, 2009, at 10:38 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:

> Robert Haas <robertmhaas@gmail.com> writes:
>> On Jun 2, 2009, at 9:41 AM, Simon Riggs <simon@2ndQuadrant.com>  
>> wrote:
>>> You're right that the number of significant digits already exceeds  
>>> the
>>> true accuracy of the computation. I think what Robert wants to see  
>>> is
>>> the exact value used in the calc, so the estimates can be checked  
>>> more
>>> thoroughly than is currently possible.
>
>> Bingo.
>
> Uh, the planner's estimate *is* an integer.  What was under discussion
> (I thought) was showing some fractional digits in the case where  
> EXPLAIN
> ANALYZE is outputting a measured row count that is an average over
> multiple loops, and therefore isn't necessarily an integer.  In that
> case the measured value can be considered arbitrarily precise ---  
> though
> I think in practice one or two fractional digits would be plenty.

We're in violent agreement here.

...Robert


Re: explain analyze rows=%.0f

From
Ibrar Ahmed
Date:
On Thu, Jun 23, 2022 at 12:01 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:
Robert Haas <robertmhaas@gmail.com> writes:
> On Jun 2, 2009, at 9:41 AM, Simon Riggs <simon@2ndQuadrant.com> wrote:
>> You're right that the number of significant digits already exceeds the
>> true accuracy of the computation. I think what Robert wants to see is
>> the exact value used in the calc, so the estimates can be checked more
>> thoroughly than is currently possible.

> Bingo.

Uh, the planner's estimate *is* an integer.  What was under discussion
(I thought) was showing some fractional digits in the case where EXPLAIN
ANALYZE is outputting a measured row count that is an average over
multiple loops, and therefore isn't necessarily an integer.  In that
case the measured value can be considered arbitrarily precise --- though
I think in practice one or two fractional digits would be plenty.

                        regards, tom lane


Hi,
I was looking at the TODO list and found that the issue requires only
a quick fix. Attached is a patch which shows output like this. It shows
the fractional digits in the case of loops > 1.

postgres=# explain analyze select * from foo;
                                                  QUERY PLAN                                                  
--------------------------------------------------------------------------------------------------------------
 Seq Scan on foo  (cost=0.00..64414.79 rows=2326379 width=8) (actual time=0.025..277.096 rows=2344671 loops=1)
 Planning Time: 0.516 ms
 Execution Time: 356.993 ms
(3 rows)
 
postgres=# explain analyze select * from foo where b = (select c from bar where c = 1);
                                                         QUERY PLAN                                                         
----------------------------------------------------------------------------------------------------------------------------
 Seq Scan on foo  (cost=8094.37..78325.11 rows=2326379 width=8) (actual time=72.352..519.159 rows=2344671 loops=1)
   Filter: (b = $1)
   InitPlan 1 (returns $1)
     ->  Gather  (cost=1000.00..8094.37 rows=1 width=4) (actual time=0.872..72.434 rows=1 loops=1)
           Workers Planned: 2
           Workers Launched: 2
           ->  Parallel Seq Scan on bar  (cost=0.00..7094.27 rows=1 width=4) (actual time=41.931..65.382 rows=0.33 loops=3)
                 Filter: (c = 1)
                 Rows Removed by Filter: 245457
 Planning Time: 0.277 ms
 Execution Time: 597.795 ms
(11 rows)


--
Ibrar Ahmed
Attachment

Re: explain analyze rows=%.0f

From
"David G. Johnston"
Date:
On Wed, Jun 22, 2022 at 12:11 PM Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:
On Thu, Jun 23, 2022 at 12:01 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:
Robert Haas <robertmhaas@gmail.com> writes:
> On Jun 2, 2009, at 9:41 AM, Simon Riggs <simon@2ndQuadrant.com> wrote:
>> You're right that the number of significant digits already exceeds the
>> true accuracy of the computation. I think what Robert wants to see is
>> the exact value used in the calc, so the estimates can be checked more
>> thoroughly than is currently possible.

> Bingo.

Uh, the planner's estimate *is* an integer.  What was under discussion
(I thought) was showing some fractional digits in the case where EXPLAIN
ANALYZE is outputting a measured row count that is an average over
multiple loops, and therefore isn't necessarily an integer.  In that
case the measured value can be considered arbitrarily precise --- though
I think in practice one or two fractional digits would be plenty.

                        regards, tom lane


Hi,
I was looking at the TODO list and found that the issue requires
a quick fix. Attached is a patch which shows output like this.

Quick code review:

+ "actual rows=%.0f loops=%.0f": " rows=%.2f loops=%.0f",

The leading space before the else block "rows" does not belong.

There should be a space after the colon.

The word "actual" that you are dropping in the else block seems like it should belong - it is a header for the entire section not just a modifier for the word "rows".  This is evidenced by the timing block verbiage where rows is standalone and the word actual comes before time.  In short, only the format specifier should change under the current scheme.  Both sections.

- WRITE_FLOAT_FIELD(rows, "%.0f");
+ WRITE_FLOAT_FIELD(rows, "%.2f");

This one looks suspicious, though I haven't dug into the code to see exactly what all is being touched.  That it doesn't have an nloops condition like everything else stands out.

Tooling that expects an integer is the only downside I see here, but I concur that the usability of always showing two decimal places when nloops > 1 overcomes any objection I have on those grounds.

David J.

Re: explain analyze rows=%.0f

From
Ibrar Ahmed
Date:


On Thu, Jun 23, 2022 at 1:04 AM David G. Johnston <david.g.johnston@gmail.com> wrote:
On Wed, Jun 22, 2022 at 12:11 PM Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:
On Thu, Jun 23, 2022 at 12:01 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:
Robert Haas <robertmhaas@gmail.com> writes:
> On Jun 2, 2009, at 9:41 AM, Simon Riggs <simon@2ndQuadrant.com> wrote:
>> You're right that the number of significant digits already exceeds the
>> true accuracy of the computation. I think what Robert wants to see is
>> the exact value used in the calc, so the estimates can be checked more
>> thoroughly than is currently possible.

> Bingo.

Uh, the planner's estimate *is* an integer.  What was under discussion
(I thought) was showing some fractional digits in the case where EXPLAIN
ANALYZE is outputting a measured row count that is an average over
multiple loops, and therefore isn't necessarily an integer.  In that
case the measured value can be considered arbitrarily precise --- though
I think in practice one or two fractional digits would be plenty.

                        regards, tom lane


Hi,
I was looking at the TODO list and found that the issue requires
a quick fix. Attached is a patch which shows output like this.

Quick code review:

+ "actual rows=%.0f loops=%.0f": " rows=%.2f loops=%.0f",

The leading space before the else block "rows" does not belong.

There should be a space after the colon.

Thanks, David for your quick response. I have updated the patch. 
 
The word "actual" that you are dropping in the else block seems like it should belong - it is a header for the entire section not just a modifier for the word "rows".  This is evidenced by the timing block verbiage where rows is standalone and the word actual comes before time.  In short, only the format specifier should change under the current scheme.  Both sections.

- WRITE_FLOAT_FIELD(rows, "%.0f");
+ WRITE_FLOAT_FIELD(rows, "%.2f");

This one looks suspicious, though I haven't dug into the code to see exactly what all is being touched.  That it doesn't have an nloops condition like everything else stands out.

I was also thinking about that, but I don't see any harm since we ultimately truncate
that decimal at a later stage of the code in the case of loops = 1.
 
Tooling that expects an integer is the only downside I see here, but I concur that the usability of always showing two decimal places when nloops > 1 overcomes any objection I have on those grounds.

David J.



--
Ibrar Ahmed
Attachment

Re: explain analyze rows=%.0f

From
Amit Kapila
Date:
On Thu, Jun 23, 2022 at 2:25 AM Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:
>
> On Thu, Jun 23, 2022 at 1:04 AM David G. Johnston <david.g.johnston@gmail.com> wrote:
>>
>> - WRITE_FLOAT_FIELD(rows, "%.0f");
>> + WRITE_FLOAT_FIELD(rows, "%.2f");
>>
>> This one looks suspicious, though I haven't dug into the code to see exactly what all is being touched.  That it
doesn't have an nloops condition like everything else stands out.
 
>>
> I was also thinking about that, but I don't see any harm when we ultimately truncating that decimal
> at a latter stage of code in case of loop = 1.
>

That change is in the path node, which we are anyway not going to target as
part of this change. We only want to change the display for actual
rows in EXPLAIN ANALYZE. So, I can't see how the quoted change can
help in any way.

Few miscellaneous comments:
========================
*
 static FullTransactionId XactTopFullTransactionId = {InvalidTransactionId};
-static int nParallelCurrentXids = 0;
+static int nParallelCurrentXids = 0;

I don't see why this change is required.

* Can you please add a comment explaining why we are making this
change for actual rows?

* Can you please write a test case, unless there is some existing test
that covers the change by displaying actual rows values in decimal? In
that case the patch should include that changed test output. If you don't
think we can reliably write such a test, then please let me know the
reason.

-- 
With Regards,
Amit Kapila.



Re: explain analyze rows=%.0f

From
vignesh C
Date:
On Thu, Jun 23, 2022 at 2:25 AM Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:
>
>
>
> On Thu, Jun 23, 2022 at 1:04 AM David G. Johnston <david.g.johnston@gmail.com> wrote:
>>
>> On Wed, Jun 22, 2022 at 12:11 PM Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:
>>>
>>> On Thu, Jun 23, 2022 at 12:01 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:
>>>>
>>>> Robert Haas <robertmhaas@gmail.com> writes:
>>>> > On Jun 2, 2009, at 9:41 AM, Simon Riggs <simon@2ndQuadrant.com> wrote:
>>>> >> You're right that the number of significant digits already exceeds the
>>>> >> true accuracy of the computation. I think what Robert wants to see is
>>>> >> the exact value used in the calc, so the estimates can be checked more
>>>> >> thoroughly than is currently possible.
>>>>
>>>> > Bingo.
>>>>
>>>> Uh, the planner's estimate *is* an integer.  What was under discussion
>>>> (I thought) was showing some fractional digits in the case where EXPLAIN
>>>> ANALYZE is outputting a measured row count that is an average over
>>>> multiple loops, and therefore isn't necessarily an integer.  In that
>>>> case the measured value can be considered arbitrarily precise --- though
>>>> I think in practice one or two fractional digits would be plenty.
>>>>
>>>>                         regards, tom lane
>>>>
>>>>
>>> Hi,
>>> I was looking at the TODO list and found that the issue requires
>>> a quick fix. Attached is a patch which shows output like this.
>>
>>
>> Quick code review:
>>
>> + "actual rows=%.0f loops=%.0f": " rows=%.2f loops=%.0f",
>>
>> The leading space before the else block "rows" does not belong.
>>
>> There should be a space after the colon.
>>
> Thanks, David for your quick response. I have updated the patch.
>
>>
>> The word "actual" that you are dropping in the else block seems like it should belong - it is a header for the
entire section not just a modifier for the word "rows".  This is evidenced by the timing block verbiage where rows is
standalone and the word actual comes before time.  In short, only the format specifier should change under the current
scheme. Both sections. 
>>
>> - WRITE_FLOAT_FIELD(rows, "%.0f");
>> + WRITE_FLOAT_FIELD(rows, "%.2f");
>>
>> This one looks suspicious, though I haven't dug into the code to see exactly what all is being touched.  That it
doesn't have an nloops condition like everything else stands out.
>>
> I was also thinking about that, but I don't see any harm when we ultimately truncating that decimal
> at a latter stage of code in case of loop = 1.

Thanks for the patch.

1) There are some existing regression tests that are failing; you
should update the expected files accordingly:
--- /home/vignesh/postgres/src/test/regress/expected/select_parallel.out
       2022-05-18 20:51:46.874818044 +0530
+++ /home/vignesh/postgres/src/test/regress/results/select_parallel.out
2022-07-07 15:27:34.450440922 +0530
@@ -545,17 +545,17 @@
 explain (analyze, timing off, summary off, costs off)
    select count(*) from tenk1, tenk2 where tenk1.hundred > 1
         and tenk2.thousand=0;
-                                QUERY PLAN
---------------------------------------------------------------------------
+                                 QUERY PLAN
+-----------------------------------------------------------------------------
  Aggregate (actual rows=1 loops=1)
    ->  Nested Loop (actual rows=98000 loops=1)
          ->  Seq Scan on tenk2 (actual rows=10 loops=1)
                Filter: (thousand = 0)
                Rows Removed by Filter: 9990
-         ->  Gather (actual rows=9800 loops=10)
+         ->  Gather (actual rows=9800.00 loops=10)
                Workers Planned: 4
                Workers Launched: 4
-               ->  Parallel Seq Scan on tenk1 (actual rows=1960 loops=50)
+               ->  Parallel Seq Scan on tenk1 (actual rows=1960.00 loops=50)
                      Filter: (hundred > 1)

    test select_parallel              ... FAILED      744 ms
     partition_prune              ... FAILED      861 ms
     explain                      ... FAILED      134 ms
     memoize                      ... FAILED      250 ms

2) This change is not required as part of this patch:
--- a/src/backend/access/transam/xact.c
+++ b/src/backend/access/transam/xact.c
@@ -122,7 +122,7 @@ bool                bsysscan = false;
  * lookups as fast as possible.
  */
 static FullTransactionId XactTopFullTransactionId = {InvalidTransactionId};
-static int nParallelCurrentXids = 0;
+static int     nParallelCurrentXids = 0;
 static TransactionId *ParallelCurrentXids;

Regards,
Vignesh



Re: explain analyze rows=%.0f

From
Greg Stark
Date:
> -               ->  Parallel Seq Scan on tenk1 (actual rows=1960 loops=50)
> +               ->  Parallel Seq Scan on tenk1 (actual rows=1960.00

At the not inconsiderable risk of bike-shedding....

I'm wondering if printing something like 0.00 will be somewhat
deceptive when the real value is non-zero but less than 1 row per 200
loops. I wonder if the number of decimal places should be calculated
to produce a minimum of one non-zero digit for non-zero values.
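
A rough standalone sketch of that idea follows; the precision_for() helper is hypothetical
and not from any posted patch:

#include <stdio.h>
#include <math.h>

/* Widen the precision until a non-zero per-loop average shows at least
 * one non-zero digit; zero and "ordinary" values keep two decimals. */
static int
precision_for(double rows)
{
    int     prec = 2;

    while (rows > 0.0 && prec < 10 && round(rows * pow(10.0, prec)) < 1.0)
        prec++;
    return prec;
}

int
main(void)
{
    double  samples[] = {1960.0, 0.33, 1.0 / 250.0, 0.0};

    for (int i = 0; i < 4; i++)
        printf("rows=%.*f\n", precision_for(samples[i]), samples[i]);
    /* prints rows=1960.00, rows=0.33, rows=0.004, rows=0.00 */
    return 0;
}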

-- 
greg



Re: explain analyze rows=%.0f

From
Robert Haas
Date:
On Thu, Jul 7, 2022 at 1:53 PM Greg Stark <stark@mit.edu> wrote:
> > -               ->  Parallel Seq Scan on tenk1 (actual rows=1960 loops=50)
> > +               ->  Parallel Seq Scan on tenk1 (actual rows=1960.00
>
> At the not inconsiderable risk of bike-shedding....
>
> I'm wondering if printing something like 0.00 will be somewhat
> deceptive when the real value is non-zero but less than 1 row per 200
> loops. I wonder if the number of decimal places should be calculated
> to produce a minimum of one non-zero digit for non-zero values.

I mean, what I really want here if I'm honest is to not have the
system divide the number of rows by the loop count. And it sort of
sounds like maybe that's what you want, too. You want to know whether
the loop count is actually zero, not whether it's close to zero when
you divide it by some number that might be gigantic.

Parallel query's treatment of this topic has come in for some
criticism, but I don't know what else it could really do: there could
be any number of loops in each backend, and it need not be the same
across all backends, so all it can do is add up the loop counts just
like it adds up the row counts and times. And if we just printed out
those totals, the result would be understandable by everyone. But we
insist on dividing it by the loop count, and then things get really
obscure. Consider this example, which does not involve parallel query:

Nested Loop (actual time=TIME FOR THIS AND ALL CHILDREN rows=THE REAL
ROW COUNT loops=1)
-> Seq Scan on something (actual time=THE TIME IT REALLY TOOK rows=THE
REAL ROW COUNT loops=1)
-> Index Scan using someidx on somethingelse (actual time=NOT REALLY
HOW LONG IT TOOK rows=NOT REALLY HOW MANY ROWS WE GOT loops=HUGE
NUMBER)

If I'm looking at this plan and trying to find out what's gone wrong,
I want to know how much time got spent in the nested loop, how much
time got spent in the Seq Scan, and how much time got spent in the
Index Scan. It's easy to figure out how much time got spent in the Seq
Scan, but to find out how much time got spent in the Index Scan, I
have to multiply the time by the loop count. Then, I have to add that
number to the time spent in the Seq Scan and subtract that from the
time from the nested loop to find the time spent on the nested loop
itself. This is quite a lot of computation, especially if the plan
involves a dozen or two different nested loops, and if we didn't
insist on dividing the time by the loop count, it would be MUCH EASIER
to figure out whether the time spent in the Index Scan is a
significant percentage of the total time or not.

And likewise, if you're trying to understand the row count for the
nested loop, it would be a heck of a lot simpler if you could see the
*raw* row count for the index scan. It's unclear to me what value
there ever is in knowing that the number of rows per iteration was
about 0 or about 1 or about 2. The only thing I'm ever going to do
with the row count that gets printed here is multiply it by the loop
count and then try to figure out how much precision I've lost because
of limits on the number of decimal places. Right now that's basically
all of it because nearly every case ends up with the index scan having
rows=1, so even just adding 2 decimal places will help a lot. But I'm
still just going to be reverse engineering what I really want to know,
which is the original number, from what the system gives me, which is
a needlessly-obfuscated version of that value.
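
To make that arithmetic concrete, here is a small standalone sketch of the
reverse-engineering being described; the plan numbers below are invented for the example:

#include <stdio.h>

int
main(void)
{
    /* Invented numbers for a plan shaped like the one above:
     *   Nested Loop   (actual time=..480.000 rows=10000 loops=1)
     *     -> Seq Scan   (actual time=..120.000 rows=10000 loops=1)
     *     -> Index Scan (actual time=..0.030 rows=1 loops=10000)
     */
    double  nl_time  = 480.0;           /* already a total: loops=1      */
    double  seq_time = 120.0 * 1;       /* avg per loop * loops          */
    double  idx_time = 0.030 * 10000;   /* avg per loop * loops = 300 ms */
    double  idx_rows = 1 * 10000;       /* raw row count, un-averaged    */

    printf("index scan total: %.0f ms, %.0f rows\n", idx_time, idx_rows);
    printf("nested loop self time: %.0f ms\n", nl_time - seq_time - idx_time);
    return 0;
}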

Grumble, grumble. It's sad that it's been 13 years and we haven't done
anything about this.

-- 
Robert Haas
EDB: http://www.enterprisedb.com



Re: explain analyze rows=%.0f

From
Peter Geoghegan
Date:
On Thu, Jul 7, 2022 at 1:21 PM Robert Haas <robertmhaas@gmail.com> wrote:
> Nested Loop (actual time=TIME FOR THIS AND ALL CHILDREN rows=THE REAL
> ROW COUNT loops=1)
> -> Seq Scan on something (actual time=THE TIME IT REALLY TOOK rows=THE
> REAL ROW COUNT loops=1)
> -> Index Scan using someidx on somethingelse (actual time=NOT REALLY
> HOW LONG IT TOOK rows=NOT REALLY HOW MANY ROWS WE GOT loops=HUGE
> NUMBER)
>
> If I'm looking at this plan and trying to find out what's gone wrong,
> I want to know how much time got spent in the nested loop, how much
> time got spent in the Seq Scan, and how much time got spent in the
> Index Scan. It's easy to figure out how much time got spent in the Seq
> Scan, but to find out how much time got spent in the Index Scan, I
> have to multiply the time by the loop count.

I agree that this general state of affairs is very confusing, and
seems like something that we should still improve. Just because it's a
very old way of presenting the information doesn't mean that it's the
best one, or even a particularly good one. Plus you could probably
make some kind of concession in the direction of maintaining
compatibility with the current approach if you had to. Right?

-- 
Peter Geoghegan



Re: explain analyze rows=%.0f

From
Justin Pryzby
Date:
On Thu, Jul 07, 2022 at 04:21:37PM -0400, Robert Haas wrote:
> I mean, what I really want here if I'm honest is to not have the
> system divide the number of rows by the loop count. And it sort of
> sounds like maybe that's what you want, too. You want to know whether
> the loop count is actually zero, not whether it's close to zero when
> you divide it by some number that might be gigantic.
...
> involves a dozen or two different nested loops, and if we didn't
> insist on dividing the time by the loop count, it would be MUCH EASIER
> to figure out whether the time spent in the Index Scan is a
> significant percentage of the total time or not.

I think the guiding principle for what to do should be to reduce how much
needs to be explained about how to interpret what explain is showing...

The docs say this:
| In such cases, the loops value reports the total number of executions of the
| node, and the actual time and rows values shown are averages per-execution.
| This is done to make the numbers comparable with the way that the cost
| estimates are shown.  Multiply by the loops value to get the total time
| actually spent in the node.

On Thu, Jul 07, 2022 at 01:45:19PM -0700, Peter Geoghegan wrote:
> Plus you could probably
> make some kind of concession in the direction of maintaining
> compatibility with the current approach if you had to. Right?

The minimum would be to show the information in a way that makes it clear that
it's "new style" output showing a total and not an average, so that a person
who sees it knows how to interpret it (same for the web "explain tools")

A concession would be to show the current information *plus* total/raw values.

This thread is about how to display the existing values.  But note that there's
a CF entry for also collecting more values to show things like min/max rows per
loop.

https://commitfest.postgresql.org/38/2765/
Add extra statistics to explain for Nested Loop

-- 
Justin



Re: explain analyze rows=%.0f

From
Ibrar Ahmed
Date:


On Thu, Jul 7, 2022 at 2:41 PM Amit Kapila <amit.kapila16@gmail.com> wrote:
On Thu, Jun 23, 2022 at 2:25 AM Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:
>
> On Thu, Jun 23, 2022 at 1:04 AM David G. Johnston <david.g.johnston@gmail.com> wrote:
>>
>> - WRITE_FLOAT_FIELD(rows, "%.0f");
>> + WRITE_FLOAT_FIELD(rows, "%.2f");
>>
>> This one looks suspicious, though I haven't dug into the code to see exactly what all is being touched.  That it doesn't have an nloops condition like everything else stands out.
>>
> I was also thinking about that, but I don't see any harm when we ultimately truncating that decimal
> at a latter stage of code in case of loop = 1.
>

That change is in the path node which we anyway not going to target as
part of this change. We only want to change the display for actual
rows in Explain Analyze. So, I can't see how the quoted change can
help in any way.

Agreed, removed.
 
Few miscellaneous comments:
========================
*
 static FullTransactionId XactTopFullTransactionId = {InvalidTransactionId};
-static int nParallelCurrentXids = 0;
+static int nParallelCurrentXids = 0;

Removed.
 
I don't see why this change is required.

* Can you please add a comment explaining why we are making this
change for actual rows?

Done

* Can you please write a test case unless there is some existing test
that covers the change by displaying actual rows values in decimal but
in that case patch should have that changed output test? If you don't
think we can reliably write such a test then please let me know the
reason?

I think there are tests, and I have updated the results accordingly. 
--
With Regards,
Amit Kapila.


--
Ibrar Ahmed
Attachment

Re: explain analyze rows=%.0f

From
Ibrar Ahmed
Date:


On Thu, Jul 7, 2022 at 3:14 PM vignesh C <vignesh21@gmail.com> wrote:
On Thu, Jun 23, 2022 at 2:25 AM Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:
>
>
>
> On Thu, Jun 23, 2022 at 1:04 AM David G. Johnston <david.g.johnston@gmail.com> wrote:
>>
>> On Wed, Jun 22, 2022 at 12:11 PM Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:
>>>
>>> On Thu, Jun 23, 2022 at 12:01 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:
>>>>
>>>> Robert Haas <robertmhaas@gmail.com> writes:
>>>> > On Jun 2, 2009, at 9:41 AM, Simon Riggs <simon@2ndQuadrant.com> wrote:
>>>> >> You're right that the number of significant digits already exceeds the
>>>> >> true accuracy of the computation. I think what Robert wants to see is
>>>> >> the exact value used in the calc, so the estimates can be checked more
>>>> >> thoroughly than is currently possible.
>>>>
>>>> > Bingo.
>>>>
>>>> Uh, the planner's estimate *is* an integer.  What was under discussion
>>>> (I thought) was showing some fractional digits in the case where EXPLAIN
>>>> ANALYZE is outputting a measured row count that is an average over
>>>> multiple loops, and therefore isn't necessarily an integer.  In that
>>>> case the measured value can be considered arbitrarily precise --- though
>>>> I think in practice one or two fractional digits would be plenty.
>>>>
>>>>                         regards, tom lane
>>>>
>>>>
>>> Hi,
>>> I was looking at the TODO list and found that the issue requires
>>> a quick fix. Attached is a patch which shows output like this.
>>
>>
>> Quick code review:
>>
>> + "actual rows=%.0f loops=%.0f": " rows=%.2f loops=%.0f",
>>
>> The leading space before the else block "rows" does not belong.
>>
>> There should be a space after the colon.
>>
> Thanks, David for your quick response. I have updated the patch.
>
>>
>> The word "actual" that you are dropping in the else block seems like it should belong - it is a header for the entire section not just a modifier for the word "rows".  This is evidenced by the timing block verbiage where rows is standalone and the word actual comes before time.  In short, only the format specifier should change under the current scheme.  Both sections.
>>
>> - WRITE_FLOAT_FIELD(rows, "%.0f");
>> + WRITE_FLOAT_FIELD(rows, "%.2f");
>>
>> This one looks suspicious, though I haven't dug into the code to see exactly what all is being touched.  That it doesn't have an nloops condition like everything else stands out.
>>
> I was also thinking about that, but I don't see any harm when we ultimately truncating that decimal
> at a latter stage of code in case of loop = 1.

Thanks for the patch.

Thanks for the review.

1) There are some existing regression tests that are failing, you
should update the expect files accordingly for the same:
--- /home/vignesh/postgres/src/test/regress/expected/select_parallel.out
       2022-05-18 20:51:46.874818044 +0530
+++ /home/vignesh/postgres/src/test/regress/results/select_parallel.out
2022-07-07 15:27:34.450440922 +0530
@@ -545,17 +545,17 @@
 explain (analyze, timing off, summary off, costs off)
    select count(*) from tenk1, tenk2 where tenk1.hundred > 1
         and tenk2.thousand=0;
-                                QUERY PLAN
---------------------------------------------------------------------------
+                                 QUERY PLAN
+-----------------------------------------------------------------------------
  Aggregate (actual rows=1 loops=1)
    ->  Nested Loop (actual rows=98000 loops=1)
          ->  Seq Scan on tenk2 (actual rows=10 loops=1)
                Filter: (thousand = 0)
                Rows Removed by Filter: 9990
-         ->  Gather (actual rows=9800 loops=10)
+         ->  Gather (actual rows=9800.00 loops=10)
                Workers Planned: 4
                Workers Launched: 4
-               ->  Parallel Seq Scan on tenk1 (actual rows=1960 loops=50)
+               ->  Parallel Seq Scan on tenk1 (actual rows=1960.00 loops=50)
                      Filter: (hundred > 1)

    test select_parallel              ... FAILED      744 ms
     partition_prune              ... FAILED      861 ms
     explain                      ... FAILED      134 ms
     memoize                      ... FAILED      250 ms

2) This change is not required as part of this patch:
--- a/src/backend/access/transam/xact.c
+++ b/src/backend/access/transam/xact.c
@@ -122,7 +122,7 @@ bool                bsysscan = false;
  * lookups as fast as possible.
  */
 static FullTransactionId XactTopFullTransactionId = {InvalidTransactionId};
-static int nParallelCurrentXids = 0;
+static int     nParallelCurrentXids = 0;
 static TransactionId *ParallelCurrentXids;


I have fixed the regression tests and removed the unrelated code.
Regards,
Vignesh


--
Ibrar Ahmed

Re: explain analyze rows=%.0f

From
Ibrar Ahmed
Date:


On Thu, Jul 7, 2022 at 10:53 PM Greg Stark <stark@mit.edu> wrote:
> -               ->  Parallel Seq Scan on tenk1 (actual rows=1960 loops=50)
> +               ->  Parallel Seq Scan on tenk1 (actual rows=1960.00

At the not inconsiderable risk of bike-shedding....

I'm wondering if printing something like 0.00 will be somewhat
deceptive when the real value is non-zero but less than 1 row per 200
loops. I wonder if the number of decimal places should be calculated
to produce a minimum of one non-zero digit for non-zero values.

--
greg

+               ->  Parallel Seq Scan on tenk1 (actual rows=1960.00

I have added a new check to remove any ".00" from the output, because in
the case of parallel queries we were getting that. Secondly, it was disturbing many test case outputs.

--
Ibrar Ahmed

Re: explain analyze rows=%.0f

From
Amit Kapila
Date:
On Fri, Jul 8, 2022 at 3:50 AM Justin Pryzby <pryzby@telsasoft.com> wrote:
>
> On Thu, Jul 07, 2022 at 04:21:37PM -0400, Robert Haas wrote:
> > I mean, what I really want here if I'm honest is to not have the
> > system divide the number of rows by the loop count. And it sort of
> > sounds like maybe that's what you want, too. You want to know whether
> > the loop count is actually zero, not whether it's close to zero when
> > you divide it by some number that might be gigantic.
> ...
> > involves a dozen or two different nested loops, and if we didn't
> > insist on dividing the time by the loop count, it would be MUCH EASIER
> > to figure out whether the time spent in the Index Scan is a
> > significant percentage of the total time or not.
>
> I think the guiding princible for what to do should be to reduce how much is
> needed to explain about how to interpret what explain is showing...
>
> The docs say this:
> | In such cases, the loops value reports the total number of executions of the
> | node, and the actual time and rows values shown are averages per-execution.
> | This is done to make the numbers comparable with the way that the cost
> | estimates are shown.  Multiply by the loops value to get the total time
> | actually spent in the node.
>
> On Thu, Jul 07, 2022 at 01:45:19PM -0700, Peter Geoghegan wrote:
> > Plus you could probably
> > make some kind of concession in the direction of maintaining
> > compatibility with the current approach if you had to. Right?
>
> The minimum would be to show the information in a way that makes it clear that
> it's "new style" output showing a total and not an average, so that a person
> who sees it knows how to interpret it (same for the web "explain tools")
>
> A concession would be to show the current information *plus* total/raw values.
>
> This thread is about how to display the existing values.
>

I feel the discussion has slightly deviated which makes it unclear
whether this patch is required or not?

-- 
With Regards,
Amit Kapila.



Re: explain analyze rows=%.0f

From
Robert Haas
Date:
On Fri, Jul 22, 2022 at 6:47 AM Amit Kapila <amit.kapila16@gmail.com> wrote:
> I feel the discussion has slightly deviated which makes it unclear
> whether this patch is required or not?

My opinion is that showing some fractional digits at least when
loops>1 would be better than what we have now. It might not be the
best thing we could do, but it would be better than doing nothing.

-- 
Robert Haas
EDB: http://www.enterprisedb.com



Re: explain analyze rows=%.0f

From
Naeem Akhter
Date:
The following review has been posted through the commitfest application:
make installcheck-world:  tested, passed
Implements feature:       tested, passed
Spec compliant:           not tested
Documentation:            not tested

Verified patch 'explain_float_row_v3.patch' on master & REL_15_STABLE branches.

The new status of this patch is: Ready for Committer

Re: explain analyze rows=%.0f

From
Hamid Akhtar
Date:
The following review has been posted through the commitfest application:
make installcheck-world:  tested, failed
Implements feature:       tested, failed
Spec compliant:           not tested
Documentation:            not tested

LGTM

Re: explain analyze rows=%.0f

From
Hamid Akhtar
Date:
The following review has been posted through the commitfest application:
make installcheck-world:  tested, passed
Implements feature:       tested, passed
Spec compliant:           not tested
Documentation:            not tested

The previous review was incorrectly posted. Updating the patch review.

Re: explain analyze rows=%.0f

From
Tom Lane
Date:
Robert Haas <robertmhaas@gmail.com> writes:
> On Fri, Jul 22, 2022 at 6:47 AM Amit Kapila <amit.kapila16@gmail.com> wrote:
>> I feel the discussion has slightly deviated which makes it unclear
>> whether this patch is required or not?

> My opinion is that showing some fractional digits at least when
> loops>1 would be better than what we have now. It might not be the
> best thing we could do, but it would be better than doing nothing.

Yeah, I think that is a reasonable compromise.

I took a brief look through the patch, and I have some review
comments:

* Code like this is pretty awful:

                appendStringInfo(es->str,
                                 (nloops == 1 || !HAS_DECIMAL(rows)) ?
                                 " rows=%.0f loops=%.0f)" : " rows=%.2f loops=%.0f)",
                                 rows, nloops);

Don't use variable format strings.  They're hard to read and they
probably defeat compile-time checks that the arguments match the
format string.  You could use a "*" field width instead, ie

                appendStringInfo(es->str,
                                 " rows=%.*f loops=%.0f)",
                                 (nloops == 1 || !HAS_DECIMAL(rows)) ? 2 : 0,
                                 rows, nloops);

That'd also allow you to reduce the code churn you've added by
splitting some appendStringInfo calls.

* I'm fairly concerned about how stable this'll be in the buildfarm,
in particular I fear HAS_DECIMAL() is not likely to give consistent
results across platforms.  I gather that an earlier version of the patch
tried to check whether the fractional part would be zero to two decimal
places, rather than whether it's exactly zero.  Probably want to put
back something like that.

* Another thought is that the non-text formats tend to prize output
consistency over readability, so maybe we should just always use 2
fractional digits there, rather than trying to minimize visible changes.

* We need some doc adjustments, surely, to explain what the heck this
means.

            regards, tom lane
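
As one reading of the HAS_DECIMAL() stability concern above, a standalone sketch of a
rounding-aware check; the HAS_DECIMAL_2DP macro is hypothetical and not taken from the patch:

#include <stdio.h>
#include <math.h>

/* Ask whether the value would still show a fractional part when printed
 * with two decimal places, instead of testing the fraction for exact zero. */
#define HAS_DECIMAL_2DP(x)  (rint((x) * 100.0) != rint(x) * 100.0)

int
main(void)
{
    double  values[] = {1960.0, 1960.3333333333, 1960.0000001};

    for (int i = 0; i < 3; i++)
        printf("%.10f -> %s\n", values[i],
               HAS_DECIMAL_2DP(values[i]) ? "show decimals" : "show as integer");
    /* 1960.0 and 1960.0000001 count as integers; 1960.3333... does not */
    return 0;
}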



Re: explain analyze rows=%.0f

From
Andrey Lepikhov
Date:
On 22/7/2022 16:47, Amit Kapila wrote:
> I feel the discussion has slightly deviated which makes it unclear
> whether this patch is required or not?

After a quick review I want to express my thoughts.
First of all, we have been waiting for this feature for years. Often clients 
send us an EXPLAIN output where we see something like:
"rows=0, loops=1000000".
Without verbose mode, I can't even tell whether this node produces 
any rows or not.
So, I think this feature is useful mostly for parameterized plans.
Also, printing two decimal digits or even three isn't meaningful - 
sometimes we have a plan where the number of loops is about 1E6 or even 1E7, 
but the number of real rows is 100 or 1000.
To overcome this issue, I see two options:
1. Show the exact number of tuples without division by loops (the fair option, 
but invasive and bad for automation tools).
2. Show rows in scientific format like X.XXEXX.
I vote for the second option.
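
A minimal sketch of option 2, purely to show what the formatting could look like (not a patch):

#include <stdio.h>

int
main(void)
{
    double  ntuples = 1000.0;
    double  nloops  = 1000000.0;       /* 1E6 loops, 1000 rows in total */
    double  rows    = ntuples / nloops;

    printf("rows=%.2f loops=%.0f\n", rows, nloops);   /* rows=0.00 loops=1000000     */
    printf("rows=%.2e loops=%.0f\n", rows, nloops);   /* rows=1.00e-03 loops=1000000 */
    return 0;
}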

-- 
regards,
Andrey Lepikhov
Postgres Professional




Re: explain analyze rows=%.0f

From
Ibrar Ahmed
Date:


On Sun, Nov 6, 2022 at 10:12 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:
Robert Haas <robertmhaas@gmail.com> writes:
> On Fri, Jul 22, 2022 at 6:47 AM Amit Kapila <amit.kapila16@gmail.com> wrote:
>> I feel the discussion has slightly deviated which makes it unclear
>> whether this patch is required or not?

> My opinion is that showing some fractional digits at least when
> loops>1 would be better than what we have now. It might not be the
> best thing we could do, but it would be better than doing nothing.

Yeah, I think that is a reasonable compromise.


Thanks, I have modified everything as suggested, except one point
 
I took a brief look through the patch, and I have some review
comments:

* Code like this is pretty awful:

                appendStringInfo(es->str,
                                 (nloops == 1 || !HAS_DECIMAL(rows)) ?
                                 " rows=%.0f loops=%.0f)" : " rows=%.2f loops=%.0f)",
                                 rows, nloops);

Don't use variable format strings.  They're hard to read and they
probably defeat compile-time checks that the arguments match the
format string.  You could use a "*" field width instead, ie

                appendStringInfo(es->str,
                                 " rows=%.*f loops=%.0f)",
                                 (nloops == 1 || !HAS_DECIMAL(rows)) ? 2 : 0,
                                 rows, nloops);

That'd also allow you to reduce the code churn you've added by
splitting some appendStringInfo calls.

* I'm fairly concerned about how stable this'll be in the buildfarm,
in particular I fear HAS_DECIMAL() is not likely to give consistent
results across platforms.  I gather that an earlier version of the patch
tried to check whether the fractional part would be zero to two decimal
places, rather than whether it's exactly zero.  Probably want to put
back something like that.

* Another thought is that the non-text formats tend to prize output
consistency over readability, so maybe we should just always use 2
fractional digits there, rather than trying to minimize visible changes.

In that case, we would need to adjust a lot of test case outputs.

* We need some doc adjustments, surely, to explain what the heck this
means.
 


                        regards, tom lane


--
Ibrar Ahmed

Re: explain analyze rows=%.0f

From
"Gregory Stark (as CFM)"
Date:
On Wed, 4 Jan 2023 at 10:05, Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:
>
> Thanks, I have modified everything as suggested, except one point
>
> > Don't use variable format strings. They're hard to read and they
> > probably defeat compile-time checks that the arguments match the
> > format string. You could use a "*" field width instead, ie
...
> > * Another thought is that the non-text formats tend to prize output
> > consistency over readability, so maybe we should just always use 2
> > fractional digits there, rather than trying to minimize visible changes.
>
> In that, we need to adjust a lot of test case outputs.

> > * We need some doc adjustments, surely, to explain what the heck this
> > means.

That sounds like three points :) But they seem like pretty good
arguments to me and straightforward if a little tedious to adjust.

This patch was marked Returned with Feedback and then later Waiting on
Author. And it hasn't had any updates since January. What is the state
on this feedback? If it's already done we can set the patch to Ready
for Commit and if not do you disagree with the proposed changes?

It's actually the kind of code cleanup changes I'm reluctant to bump a
patch for. It's not like a committer can't make these kinds of changes
when committing. But there are so many patches they're likely to just
focus on a different patch when there are adjustments like this
pending.

-- 
Gregory Stark
As Commitfest Manager



Re: explain analyze rows=%.0f

From
Ibrar Ahmed
Date:


On Mon, Mar 20, 2023 at 7:56 PM Gregory Stark (as CFM) <stark.cfm@gmail.com> wrote:
On Wed, 4 Jan 2023 at 10:05, Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:
>
> Thanks, I have modified everything as suggested, except one point
>
> > Don't use variable format strings. They're hard to read and they
> > probably defeat compile-time checks that the arguments match the
> > format string. You could use a "*" field width instead, ie
...
> > * Another thought is that the non-text formats tend to prize output
> > consistency over readability, so maybe we should just always use 2
> > fractional digits there, rather than trying to minimize visible changes.
>
> In that, we need to adjust a lot of test case outputs.

> > * We need some doc adjustments, surely, to explain what the heck this
> > means.

That sounds like three points :) But they seem like pretty good
arguments to me and straightforward if a little tedious to adjust.

This patch was marked Returned with Feedback and then later Waiting on
Author. And it hasn't had any updates since January. What is the state
on this feedback? If it's already done we can set the patch to Ready
for Commit and if not do you disagree with the proposed changes?

If there is a consensus to modify the test cases' output, I am willing to
make the necessary changes and adjust the patch accordingly. However,
if there is a preference to keep the output of certain test cases unchanged,
I can rebase and modify the patch accordingly to accommodate those preferences.


 
It's actually the kind of code cleanup changes I'm reluctant to bump a
patch for. It's not like a committer can't make these kinds of changes
when committing. But there are so many patches they're likely to just
focus on a different patch when there are adjustments like this
pending.

--
Gregory Stark
As Commitfest Manager


--
Ibrar Ahmed

Re: explain analyze rows=%.0f

From
Daniel Gustafsson
Date:
> On 8 Jun 2023, at 19:49, Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:
> On Mon, Mar 20, 2023 at 7:56 PM Gregory Stark (as CFM) <stark.cfm@gmail.com <mailto:stark.cfm@gmail.com>> wrote:

> This patch was marked Returned with Feedback and then later Waiting on
> Author. And it hasn't had any updates since January. What is the state
> on this feedback? If it's already done we can set the patch to Ready
> for Commit and if not do you disagree with the proposed changes?
>
> If there is a consensus to modify the test cases' output, I am willing to
> make the necessary changes and adjust the patch accordingly. However,
> if there is a preference to keep the output of certain test cases unchanged,
> I can rebase and modify the patch accordingly to accommodate those preferences.

As there haven't been any other comments, I suggest updating your patch to
address Tom's comments to see if we can make progress here.

--
Daniel Gustafsson




Re: explain analyze rows=%.0f

From
Daniel Gustafsson
Date:
> On 3 Jul 2023, at 18:34, Daniel Gustafsson <daniel@yesql.se> wrote:
>
>> On 8 Jun 2023, at 19:49, Ibrar Ahmed <ibrar.ahmad@gmail.com> wrote:
>> On Mon, Mar 20, 2023 at 7:56 PM Gregory Stark (as CFM) <stark.cfm@gmail.com <mailto:stark.cfm@gmail.com>> wrote:
>
>> This patch was marked Returned with Feedback and then later Waiting on
>> Author. And it hasn't had any updates since January. What is the state
>> on this feedback? If it's already done we can set the patch to Ready
>> for Commit and if not do you disagree with the proposed changes?
>>
>> If there is a consensus to modify the test cases' output, I am willing to
>> make the necessary changes and adjust the patch accordingly. However,
>> if there is a preference to keep the output of certain test cases unchanged,
>> I can rebase and modify the patch accordingly to accommodate those preferences.
>
> As there hasn't been any other comments I suggest updating your patch to
> address Tom's comments to see if we can make progress here.

Since there haven't been any updates here and the thread has stalled, I'm
marking this Returned with Feedback.  Please feel free to resubmit a version of
the patch addressing the comments to a future CF.

--
Daniel Gustafsson




Re: explain analyze rows=%.0f

From
Matheus Alcantara
Date:
Hi

On Mon, Jan 13, 2025 at 17:18, Ilia Evdokimov
<ilya.evdokimov@tantorlabs.com> wrote:
> I guess, it's not ideal to modify the existing example in the documentation of the v5 patch because readers wouldn't
immediately understand why decimal fractions appear there. Instead, I'll add a brief note in the documentation
clarifying how rows and loops are displayed when the average row count is below one. 
>
> The changes the of documentation are attached v6 patch.
>
v6 is not applying on master, could you please rebase?

git apply v6-0001-Clarify-display-of-rows-and-loops-as-decimal-fraction.patch
error: patch failed: src/test/regress/expected/partition_prune.out:3041
error: src/test/regress/expected/partition_prune.out: patch does not apply

--
Matheus Alcantara



Re: explain analyze rows=%.0f

From
Ilia Evdokimov
Date:
On 07.02.2025 22:59, Matheus Alcantara wrote:
> Hi
> v6 is not applying on master, could you please rebase?
>
> git apply v6-0001-Clarify-display-of-rows-and-loops-as-decimal-fraction.patch
> error: patch failed: src/test/regress/expected/partition_prune.out:3041
> error: src/test/regress/expected/partition_prune.out: patch does not apply
>
Hi

Strange, I don't have any problems applying it on master.

postgres$ git branch
     * master
postgres$ git pull
     Already up to date.
postgres$ git apply 
v6-0001-Clarify-display-of-rows-and-loops-as-decimal-fraction.patch
postgres$

--
Best regards,
Ilia Evdokimov,
Tantor Labs LLC.





Re: explain analyze rows=%.0f

From
Matheus Alcantara
Date:
On Fri, Feb 7, 2025 at 17:41, Ilia Evdokimov
<ilya.evdokimov@tantorlabs.com> wrote:
> Strange, I don't have any problems to apply it on master.
>
> postgres$ git branch
>      * master
> postgres$ git pull
>      Already up to date.
> postgres$ git apply
> v6-0001-Clarify-display-of-rows-and-loops-as-decimal-fraction.patch
> postgres$

Just for reference I'm trying to apply based on commit fb056564ec5.
https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=fb056564ec5bc1c18dd670c963c893cdb6de927e

--
Matheus Alcantara



Re: explain analyze rows=%.0f

From
Matheus Alcantara
Date:
Thanks for the new patch version.

-- v7-0001-Clarify-display-of-rows-and-loops-as-decimal-fraction.patch
> +                if (nloops > 1 && planstate->instrument->ntuples < nloops)
> +                    appendStringInfo(es->str," rows=%.2f loops=%.2f)", rows, nloops);
>
Sorry, but why do the 'loops' need to be changed? IIUC the issue is
only with the
rows field? When testing this patch I got some results similar to this:
    Index Scan using t_a_idx on t  (cost=0.14..0.28 rows=1 width=8) (actual time=0.049..0.049 rows=0.50 loops=2.00)

> When the total number of returned tuples is less than the number of
> loops, EXPLAIN currently shows 'rows = 0'. This can mislead users into thinking
> that no rows were returned at all, even though some might have appeared
> occasionally.
>
I think that this can happen when the returned rows and the loops are small
enough to result in a 'rows' value like 0.00045? I'm not sure whether this
would also hold for "bigger" values (e.g. 1074 (ntuples) / 117 (nloops), which
would result in 9.17 rows); what do you think? If you could provide
an example of this case, that would be great!

-    executing the index scans on <literal>tenk2</literal>.
+    executing the index scans on <literal>tenk2</literal>.   If a subplan node
+    is executed multiple times and the average number of rows is less than one,
+    the rows and <literal>loops</literal> values are shown as a decimal fraction
+    (with two digits after the decimal point) to indicate that some rows
+    were actually processed rather than simply rounding down to zero.

* I think that it would be good to mention what a 'rows' value in decimal
 means. For example, if it says "0.1 rows" the user should assume that typically
 0 rows will be returned but sometimes it can return 1 or more.

* There are more spaces than necessary before "If a subplan node ..."

* Maybe wrap 'rows' with <literal> </literal>?

-- 
Matheus Alcantara



Re: explain analyze rows=%.0f

From
Ilia Evdokimov
Date:
On 10.02.2025 18:32, Matheus Alcantara wrote:
> Thanks for the new patch version.
>
> -- v7-0001-Clarify-display-of-rows-and-loops-as-decimal-fraction.patch
>> +                if (nloops > 1 && planstate->instrument->ntuples < nloops)
>> +                    appendStringInfo(es->str," rows=%.2f loops=%.2f)", rows, nloops);
>>
> Sorry, but why do the 'loops' need to be changed? IIUC the issue is
> only with the
> rows field? When testing this patch I got some results similar with this:
>      Index Scan using t_a_idx on t  (cost=0.14..0.28 rows=1 width=8)
> (actual time=0.049..0.049 rows=0.50 loops=2.00)


The only reason I added this was to make it clear to the user that loops > 1,
along with the 'rows' value. If no one finds it useful and it only raises more
questions, I can remove it.



>
>> When the total number of returned tuples is less than the number of
>> loops currently shows 'rows = 0'. This can mislead users into thinking
>> that no rows were returned at all, even though some might have appeared
>> occasionally.
>>
> I think that this can happen when the returned rows and the loops are small
> enough to result in a 'row' value like 0.00045? I'm not sure if we have
> "bigger" values (e.g 1074(ntuples) / 117(nloops) which would result in 9.17
> rows) this would also be true, what do you think? If you could provide
> an example of this case would be great!


Based on what was discussed earlier in the thread, there are cases with 
large loops [0]. However, I believe it's better not to display average 
rows with excessively long digits or in scientific notation. And, of 
course, I agree with you regarding small values. I think we should also 
add a check to ensure that the total rows is actually greater than zero. 
When the total rows is zero, we could simply display it as an integer 
without decimals. It could help users when the average rows is very small but
not zero. What do you think about this approach?


>
> -    executing the index scans on <literal>tenk2</literal>.
> +    executing the index scans on <literal>tenk2</literal>.   If a subplan node
> +    is executed multiple times and the average number of rows is less than one,
> +    the rows and <literal>loops</literal> values are shown as a
> decimal fraction
> +    (with two digits after the decimal point) to indicate that some rows
> +    were actually processed rather than simply rounding down to zero.
>
> * I think that it would be good to mention what a 'row' value in
> decimal means. For
>   example, if its says "0.1 rows" the user should assume that typically 0 rows
>   will be returned but sometimes it can return 1 or more.
>
> * There are more spaces than necessary before "If a subplan node ..."
>
> * Maybe wrap 'rows' with <literal> </literal>?
>

I agree with the last two points. As for the first one—maybe we could 
simply state that the average rows value can be decimal, especially for 
very small values?


[0]: 
https://www.postgresql.org/message-id/a9393107-6bb9-c835-50b7-c0f453a514b8%40postgrespro.ru

--
Best regards,
Ilia Evdokimov,
Tantor Labs LLC.




Re: explain analyze rows=%.0f

From
Matheus Alcantara
Date:
Em seg., 10 de fev. de 2025 às 13:38, Ilia Evdokimov
<ilya.evdokimov@tantorlabs.com> escreveu:
> > -- v7-0001-Clarify-display-of-rows-and-loops-as-decimal-fraction.patch
> >> +                if (nloops > 1 && planstate->instrument->ntuples < nloops)
> >> +                    appendStringInfo(es->str," rows=%.2f loops=%.2f)", rows, nloops);
> >>
> > Sorry, but why do the 'loops' need to be changed? IIUC the issue is
> > only with the
> > rows field? When testing this patch I got some results similar with this:
> >      Index Scan using t_a_idx on t  (cost=0.14..0.28 rows=1 width=8)
> > (actual time=0.049..0.049 rows=0.50 loops=2.00)
>
>
> The only reason I added this was to make it clear to the user that loops
>  > 1, along with the 'rows' value. If no one finds it useful and it only
> raises more questions, I can remove it.
>
Ok, I get it. IMHO the current behaviour of using %.0f for 'loops' would be
better, but let's see if anyone else has opinions about it.

> >> When the total number of returned tuples is less than the number of
> >> loops currently shows 'rows = 0'. This can mislead users into thinking
> >> that no rows were returned at all, even though some might have appeared
> >> occasionally.
> >>
> > I think that this can happen when the returned rows and the loops are small
> > enough to result in a 'row' value like 0.00045? I'm not sure if we have
> > "bigger" values (e.g 1074(ntuples) / 117(nloops) which would result in 9.17
> > rows) this would also be true, what do you think? If you could provide
> > an example of this case would be great!
>
>
> Based on what was discussed earlier in the thread, there are cases with
> large loops [0]. However, I believe it's better not to display average
> rows with excessively long digits or in scientific notation. And, of
> course, I agree with you regarding small values. I think we should also
> add a check to ensure that the total rows is actually greater than zero.
> When the total rows is zero, we could simply display it as an integer
> without decimals. It could help users average rows is very small but not
> zero. What do you think about this approach?
>
Yeah, I agree with you about the long digits. My question is more about why we
need the planstate->instrument->ntuples < nloops check. I tried to remove
this check and I got a lot of EXPLAIN output that shows 'rows' values with .00,
so I'm just trying to understand the reason. From what I've understood of
this thread, just avoiding .00 decimals for 'rows' values that could be
just integers would be enough; is that right, or am I missing something here? I'm
just worried that we could have a scenario where nloops > 1 &&
planstate->instrument->ntuples < nloops which would make the 'rows' not be
formatted correctly.

>
> >
> > -    executing the index scans on <literal>tenk2</literal>.
> > +    executing the index scans on <literal>tenk2</literal>.   If a subplan node
> > +    is executed multiple times and the average number of rows is less than one,
> > +    the rows and <literal>loops</literal> values are shown as a
> > decimal fraction
> > +    (with two digits after the decimal point) to indicate that some rows
> > +    were actually processed rather than simply rounding down to zero.
> >
> > * I think that it would be good to mention what a 'row' value in
> > decimal means. For
> >   example, if its says "0.1 rows" the user should assume that typically 0 rows
> >   will be returned but sometimes it can return 1 or more.
> >
> > * There are more spaces than necessary before "If a subplan node ..."
> >
> > * Maybe wrap 'rows' with <literal> </literal>?
> >
>
> I agree with the last two points. As for the first one—maybe we could
> simply state that the average rows value can be decimal, especially for
> very small values?
>
I'm just not sure about the "small values"; will a decimal 'rows' value only
happen with small values? What would be a "small value" in this context? My main
point here is that it would be good to mention *why* the 'rows' can be decimal,
not just describe that it could be decimal.

--
Matheus Alcantara



Re: explain analyze rows=%.0f

From
Ilia Evdokimov
Date:


On 10.02.2025 23:43, Matheus Alcantara wrote:
>>>> When the total number of returned tuples is less than the number of
>>>> loops currently shows 'rows = 0'. This can mislead users into thinking
>>>> that no rows were returned at all, even though some might have appeared
>>>> occasionally.
>>>>
>>> I think that this can happen when the returned rows and the loops are small
>>> enough to result in a 'row' value like 0.00045? I'm not sure if we have
>>> "bigger" values (e.g 1074(ntuples) / 117(nloops) which would result in 9.17
>>> rows) this would also be true, what do you think? If you could provide
>>> an example of this case would be great!
>>>
>> Based on what was discussed earlier in the thread, there are cases with
>> large loops [0]. However, I believe it's better not to display average
>> rows with excessively long digits or in scientific notation. And, of
>> course, I agree with you regarding small values. I think we should also
>> add a check to ensure that the total rows is actually greater than zero.
>> When the total rows is zero, we could simply display it as an integer
>> without decimals. It could help users average rows is very small but not
>> zero. What do you think about this approach?
>>
> Yeah, I agree with you about the long digits. My question is more about why do
> we need the planstate->instrument->ntuples < nloops check? I tried to remove
> this check and I got a lot of EXPLAIN output that shows 'rows' values with .00,
> so I'm just trying to understand the reason. From what I've understood about
> this thread is that just avoiding .00 decimals of 'rows' values that could be
> just integers would be enough, is that right or I'm missing something here? I'm
> just worried if we could have a scenario where nloops > 1 &&
> planstate->instrument->ntuples < nloops which would make the 'rows' not be
> formatted correctly.


Sorry for missing your question earlier. If you notice in the code above, the variable (average) 'rows' is defined as:

double rows = planstate->instrument->ntuples / nloops;

This represents the total rows divided by the number of loops. The condition
means that the variable 'rows' will always be between zero and one. Therefore,
the average rows under such conditions cannot be greater than or even equal to
one. I wrote this condition specifically to avoid the verbose expression
'rows > 0 && rows < 1'. However, since this might not be obvious to everyone,
perhaps it'd be better to write it using 'rows' directly or add a comment
explaining this logic.
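
To make the arithmetic concrete, here is a small standalone sketch of the check
being discussed; the show() helper and the sample values are only illustrative,
not the patch itself:

#include <stdio.h>

/* rows = ntuples / nloops; requiring ntuples < nloops while nloops > 1 is the
 * same as requiring the average to stay below one. */
static void
show(double ntuples, double nloops)
{
    double rows = ntuples / nloops;

    if (nloops > 1 && ntuples < nloops)
        printf("rows=%.2f loops=%.0f\n", rows, nloops);   /* fractional case */
    else
        printf("rows=%.0f loops=%.0f\n", rows, nloops);   /* rounded case */
}

int
main(void)
{
    show(1, 2);        /* rows=0.50 loops=2 */
    show(9, 5);        /* rows=2 loops=5  (average above one, still rounded) */
    show(1074, 117);   /* rows=9 loops=117 */
    return 0;
}

Compiled on its own, it prints the three lines noted in the comments, which is
exactly the distinction between the fractional and the rounded cases.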


>>>> -    executing the index scans on <literal>tenk2</literal>.
>>>> +    executing the index scans on <literal>tenk2</literal>.   If a subplan node
>>>> +    is executed multiple times and the average number of rows is less than one,
>>>> +    the rows and <literal>loops</literal> values are shown as a
>>>> decimal fraction
>>>> +    (with two digits after the decimal point) to indicate that some rows
>>>> +    were actually processed rather than simply rounding down to zero.
>>>>
>>> * I think that it would be good to mention what a 'row' value in
>>> decimal means. For example, if its says "0.1 rows" the user should assume
>>> that typically 0 rows will be returned but sometimes it can return 1 or more.
>>>
>>> * There are more spaces than necessary before "If a subplan node ..."
>>>
>>> * Maybe wrap 'rows' with <literal> </literal>?
>>>
>> I agree with the last two points. As for the first one—maybe we could
>> simply state that the average rows value can be decimal, especially for
>> very small values?
>>
> I'm just not sure about the "small values"; the 'rows' in decimal will only
> happen with small values? What would be a "small value" in this context? My main
> point here is more that I think that it would be good to mention *why* the
> 'rows' can be decimal, not just describe that it could be decimal.


As for 'small values', it means that the average rows is between zero and one, to avoid rounding errors and misunderstanding. I think this would be ideal.

--
Best regards,
Ilia Evdokimov,
Tantor Labs LLC.

Re: explain analyze rows=%.0f

From
Matheus Alcantara
Date:
Em seg., 10 de fev. de 2025 às 18:14, Ilia Evdokimov
<ilya.evdokimov@tantorlabs.com> escreveu:
> Sorry for missing your question earlier. If you notice in the code above, the variable(average) 'rows' is defined as:
>
> double rows = planstate->instrument->ntuples / nloops;
>
> This represents the total rows divided by the number of loops. The condition
> means that variable 'rows' will always  between zero and one. Therefore, the
> average rows under such conditions cannot be greater than or even equal to
> one. I wrote this condition specifically to avoid the verbose expression
> 'rows > 0 && rows < 1'. However, since this might not be obvious to everyone,
> perhaps it'd be better to write is using 'rows' directly or add a comment
> explaining this logic.
>
Thanks for the details! It makes sense to me now. I think that adding a comment
could be a good idea

> I agree with the last two points. As for the first one—maybe we could
> simply state that the average rows value can be decimal, especially for
> very small values?
>
> I'm just not sure about the "small values"; the 'rows' in decimal will only
> happen with small values? What would be a "small value" in this context? My main
> point here is more that I think that it would be good to mention *why* the
> 'rows' can be decimal, not just describe that it could be decimal.
>
>
> As for 'small values', it means that the average rows is between zero and
> one, to avoid rounding errors and misunderstanding. I think this would be
> ideal.
>
Get it, sounds reasonable to me.

--
Matheus Alcantara



Re: explain analyze rows=%.0f

From
Andrei Lepikhov
Date:
On 8/2/2025 04:28, Ilia Evdokimov wrote:
> On 08.02.2025 00:01, Matheus Alcantara wrote:
>> Just for reference I'm trying to apply based on commit fb056564ec5.
> You are right, because two commits were appeared after creating v6-patch 
> on partition_prune.out and patch v6 must not have applied on master. 
> Then I created v7 patch rebased on fb056564ec5 . Thank for your remark!
I support the idea in general, but I believe it should be expanded to 
cover all cases of parameterised plan nodes. Each rescan iteration may 
produce a different number of tuples, and rounding can obscure important 
data.

For example, consider five loops of a scan node: the first loop returns nine
tuples, and each of the others returns zero tuples. When we calculate the
average, 9 divided by 5 equals 1.8. This results in an explanation that
indicates "rows = 1", masking almost 40% of the data.

Now, if we apply the same two loops but have a total of 900,000 tuples, 
then 400,000 masked tuples represent a significant portion of the data.

Moreover, switching to a floating-point type for row explanations in 
each parameterised node would provide a more comprehensive view and add 
valuable information about the parameterisation of the node, which may 
not be immediately apparent.

-- 
regards, Andrei Lepikhov



Re: explain analyze rows=%.0f

From
Robert Haas
Date:
On Tue, Feb 11, 2025 at 12:14 PM Andrei Lepikhov <lepihov@gmail.com> wrote:
> I support the idea in general, but I believe it should be expanded to
> cover all cases of parameterised plan nodes. Each rescan iteration may
> produce a different number of tuples, and rounding can obscure important
> data.
>
> For example, consider five loops of a scan node: the first loop returns
> nine tuples, and each other - zero tuples. When we calculate the
> average, 9 divided by 5 equals 1.8. This results in an explanation that
> indicates "rows = 1," masking almost 40% of the data.
>
> Now, if we apply the same two loops but have a total of 900,000 tuples,
> then 400,000 masked tuples represent a significant portion of the data.
>
> Moreover, switching to a floating-point type for row explanations in
> each parameterised node would provide a more comprehensive view and add
> valuable information about the parameterisation of the node, which may
> not be immediately apparent.

I agree strongly with all of this. I believe we should just implement
what was agreed here:

https://www.postgresql.org/message-id/21013.1243618236%40sss.pgh.pa.us

Let's just display 2 fractional digits when nloops>1, else 0, and call it good.

--
Robert Haas
EDB: http://www.enterprisedb.com



Re: explain analyze rows=%.0f

From
Alena Rybakina
Date:
Hi! Thank you for your valuable work on this!

On 11.02.2025 22:18, Ilia Evdokimov wrote:
>
> On 11.02.2025 20:41, Robert Haas wrote:
>> On Tue, Feb 11, 2025 at 12:14 PM Andrei Lepikhov <lepihov@gmail.com> 
>> wrote:
>>> I support the idea in general, but I believe it should be expanded to
>>> cover all cases of parameterised plan nodes. Each rescan iteration may
>>> produce a different number of tuples, and rounding can obscure 
>>> important
>>> data.
>>>
>>> For example, consider five loops of a scan node: the first loop returns
>>> nine tuples, and each other - zero tuples. When we calculate the
>>> average, 9 divided by 5 equals 1.8. This results in an explanation that
>>> indicates "rows = 1," masking almost 40% of the data.
>>>
>>> Now, if we apply the same two loops but have a total of 900,000 tuples,
>>> then 400,000 masked tuples represent a significant portion of the data.
>>>
>>> Moreover, switching to a floating-point type for row explanations in
>>> each parameterised node would provide a more comprehensive view and add
>>> valuable information about the parameterisation of the node, which may
>>> not be immediately apparent.
>> I agree strongly with all of this. I believe we should just implement
>> what was agreed here:
>>
>> https://www.postgresql.org/message-id/21013.1243618236%40sss.pgh.pa.us
>>
>> Let's just display 2 fractional digits when nloops>1, else 0, and 
>> call it good.
>>
>
> Thank you for your review!
>
> With such example, it's hard to disagree with it. This would really 
> add valuable information. Taking all opinions into account, I have 
> updated the patch v8. I have also included a check for the case where 
> there are only zeros after the decimal point. We do not want to 
> clutter the rows with unnecessary zeros.
>
I looked at the patch and agree with them. I would suggest adding a 
description of how this can help in analyzing the query plans -
I think there is a lack of a description of the reason why this is done 
in the commit message.
I would also add the same to the documentation with an example.

-- 
Regards,
Alena Rybakina
Postgres Professional




Re: explain analyze rows=%.0f

From
Andrei Lepikhov
Date:
On 12/2/2025 00:41, Robert Haas wrote:
> On Tue, Feb 11, 2025 at 12:14 PM Andrei Lepikhov <lepihov@gmail.com> wrote:
>> Moreover, switching to a floating-point type for row explanations in
>> each parameterised node would provide a more comprehensive view and add
>> valuable information about the parameterisation of the node, which may
>> not be immediately apparent.
> 
> I agree strongly with all of this. I believe we should just implement
> what was agreed here:
> 
> https://www.postgresql.org/message-id/21013.1243618236%40sss.pgh.pa.us
> 
> Let's just display 2 fractional digits when nloops>1, else 0, and call it good.
Why are there only two fractional digits?

I reviewed the user reports where we identified issues without 
sufficient data, based on explains only, and typical examples included 
loop numbers ranging from 1E5 to 1E7 and tuples from 1E2 to 1E5. 
Therefore, it may make sense to display fractional digits up to two 
meaningful (non-zero) digits.

-- 
regards, Andrei Lepikhov



Re: explain analyze rows=%.0f

From
Tom Lane
Date:
Robert Haas <robertmhaas@gmail.com> writes:
> On Tue, Feb 11, 2025 at 12:14 PM Andrei Lepikhov <lepihov@gmail.com> wrote:
>> I support the idea in general, but I believe it should be expanded to
>> cover all cases of parameterised plan nodes. Each rescan iteration may
>> produce a different number of tuples, and rounding can obscure important
>> data.

> I agree strongly with all of this. I believe we should just implement
> what was agreed here:
> https://www.postgresql.org/message-id/21013.1243618236%40sss.pgh.pa.us
> Let's just display 2 fractional digits when nloops>1, else 0, and call it good.

Note that that formulation has nothing especially to do with
parameterized plan nodes.  Any nestloop inner side would end up
getting shown with fractional rowcounts.  Maybe that's fine.

I suggest that when thinking about what to change here,
you start by considering how you'd adjust the docs at
https://www.postgresql.org/docs/devel/using-explain.html
to explain the new behavior.  If you can't explain it
clearly for users, then maybe it's not such a great idea.

            regards, tom lane



Re: explain analyze rows=%.0f

From
Andrei Lepikhov
Date:
On 12/2/2025 03:46, Tom Lane wrote:
> Robert Haas <robertmhaas@gmail.com> writes:
>> On Tue, Feb 11, 2025 at 12:14 PM Andrei Lepikhov <lepihov@gmail.com> wrote:
>>> I support the idea in general, but I believe it should be expanded to
>>> cover all cases of parameterised plan nodes. Each rescan iteration may
>>> produce a different number of tuples, and rounding can obscure important
>>> data.
> 
>> I agree strongly with all of this. I believe we should just implement
>> what was agreed here:
>> https://www.postgresql.org/message-id/21013.1243618236%40sss.pgh.pa.us
>> Let's just display 2 fractional digits when nloops>1, else 0, and call it good.
> 
> Note that that formulation has nothing especially to do with
> parameterized plan nodes.  Any nestloop inner side would end up
> getting shown with fractional rowcounts.  Maybe that's fine.
I partly agree with this approach. Playing around a bit, I couldn't 
invent a case where we have different numbers of tuples without 
parameters. But I can imagine it is possible, or may be possible in the
future. So, it is not necessary to tie fractional output to a parameterised
node.
I'm unsure about the inner subtree of a JOIN - a subplan may refer to the
upper query and process a different number of tuples for every evaluation
without any JOIN operator.
May we agree on a more general formula to print at least two meaningful 
digits if we have a fractional part?

Examples:
- actual rows = 2, nloops = 2 -> rows = 1
- actual rows = 9, nloops = 5 -> rows = 1.8
- actual rows = 101, nloops = 100 -> rows = 1.0
- actual rows = 99, nloops = 1000000 -> rows = 0.000099

It may guarantee that an EXPLAIN exposes most of the data passed through the
node, enough to tie it to the actual timing and tuples at the upper levels of
the query.
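
For illustration only, here is a rough standalone sketch of one reading of that
rule - round the average to two significant digits when it is not an integer,
and avoid scientific notation. print_avg_rows() and the sample values are
assumptions for the sketch, not committed behaviour:

#include <math.h>
#include <stdio.h>

static void
print_avg_rows(double rows)
{
    if (rows <= 0.0 || rows == floor(rows))
        printf("rows=%.0f\n", rows);        /* exact integer average */
    else
    {
        /* enough fractional digits to keep two significant digits */
        int     frac = 1 - (int) floor(log10(rows));

        if (frac < 0)
            frac = 0;
        printf("rows=%.*f\n", frac, rows);
    }
}

int
main(void)
{
    print_avg_rows(2.0 / 2);            /* rows=1 */
    print_avg_rows(9.0 / 5);            /* rows=1.8 */
    print_avg_rows(101.0 / 100);        /* rows=1.0 */
    print_avg_rows(99.0 / 1000000);     /* rows=0.000099 */
    return 0;
}

The four calls in main() reproduce the examples above.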

> 
> I suggest that when thinking about what to change here,
> you start by considering how you'd adjust the docs at
> https://www.postgresql.org/docs/devel/using-explain.html
> to explain the new behavior.  If you can't explain it
> clearly for users, then maybe it's not such a great idea.
Agree

-- 
regards, Andrei Lepikhov



Re: explain analyze rows=%.0f

From
Ilia Evdokimov
Date:
On 12.02.2025 11:54, Andrei Lepikhov wrote:
> May we agree on a more general formula to print at least two 
> meaningful digits if we have a fractional part?
>
> Examples:
> - actual rows = 2, nloops = 2 -> rows = 1
> - actual rows = 9, nloops = 5 -> rows = 1.8
> - actual rows = 101, nloops = 100 -> rows = 1.0
> - actual rows = 99, nloops = 1000000 -> rows = 0.000099
>
> It may guarantee that an EXPLAIN exposes most of the data passed the 
> node, enough to tangle it with actual timing and tuples at the upper 
> levels of the query.

I think the idea of keeping two significant digits after the decimal 
point is quite reasonable. The thing is, rows=0.000001 or something 
similar can only occur when loops is quite large. If we show the order 
of magnitude in rows, it will be easier for the user to estimate the 
order of total rows. For example, if we see this:

rows=0.000056 loops=4718040

the user can more quickly approximate the order of total rows for analyzing
the upper levels of the query.
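(With those numbers, for instance, the total works out to roughly
0.000056 * 4718040, i.e. about 264 rows across all loops.)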

However, keep in mind that I am against using the E notation, as many 
users have mentioned that they are not mathematicians and are not 
familiar with the concept of "E".


>
>>
>> I suggest that when thinking about what to change here,
>> you start by considering how you'd adjust the docs at
>> https://www.postgresql.org/docs/devel/using-explain.html
>> to explain the new behavior.  If you can't explain it
>> clearly for users, then maybe it's not such a great idea.
> Agree
>
So do I. Firstly, I'll think how to explain it.

--
Best regards,
Ilia Evdokimov,
Tantor Labs LLC.




Re: explain analyze rows=%.0f

From
Robert Haas
Date:
On Tue, Feb 11, 2025 at 2:18 PM Ilia Evdokimov
<ilya.evdokimov@tantorlabs.com> wrote:
> With such example, it's hard to disagree with it. This would really add
> valuable information. Taking all opinions into account, I have updated
> the patch v8. I have also included a check for the case where there are
> only zeros after the decimal point. We do not want to clutter the rows
> with unnecessary zeros.

I disagree. We don't do this for any other fractional value we print
in any other part of the system, and I do not think this case should
be some kind of weird exception.

--
Robert Haas
EDB: http://www.enterprisedb.com



Re: explain analyze rows=%.0f

From
Robert Haas
Date:
On Wed, Feb 12, 2025 at 5:10 AM Ilia Evdokimov
<ilya.evdokimov@tantorlabs.com> wrote:
> I think the idea of keeping two significant digits after the decimal
> point is quite reasonable. The thing is, rows=0.000001 or something
> similar can only occur when loops is quite large. If we show the order
> of magnitude in rows, it will be easier for the user to estimate the
> order of total rows. For example, if we see this:
>
> rows=0.000056 loops=4718040
>
> the user can quickler approximate the order of total rows for analyzing
> the upper levels of the query.

I agree that showing 2 digits after the decimal point in all cases is
not ideal, but I suggest that we take a practical approach. Figuring
out dynamically what number of decimal digits to display in each case
sounds complicated and we may spend a bunch of time arguing about the
details of that and get nothing committed. If we just show 2 digits
after the decimal point, it will not be perfect, but it will be 10^2
times better than what we have now.

If I'm honest, what I actually think we should do is stop dividing
values by nloops before printing them out. Every time I'm looking at a
quantity that has been divided by nloops, the very first thing I do is
try to figure out what the original value was. The whole reason I want
to display at least a couple of decimal digits here is so that I can
do that more accurately, but of course the dream would be not having
to reverse engineer it like that at all. However, I expect fierce
opposition to that idea, and no matter how misguided I may think that
opposition might be, a patch in the tree is worth two in the
CommitFest.

--
Robert Haas
EDB: http://www.enterprisedb.com



Re: explain analyze rows=%.0f

From
Tom Lane
Date:
Robert Haas <robertmhaas@gmail.com> writes:
> I agree that showing 2 digits after the decimal point in all cases is
> not ideal, but I suggest that we take a practical approach. Figuring
> out dynamically what number of decimal digits to display in each case
> sounds complicated and we may spend a bunch of time arguing about the
> details of that and get nothing committed. If we just show 2 digits
> after the decimal point, it will not be perfect, but it will be 10^2
> times better than what we have now.

I was idly speculating yesterday about letting the Ryu code print
the division result, so that we get a variable number of digits.
Realistically, that'd probably result in many cases in more digits
than anybody wants, so it's not a serious proposal.  I'm cool with
the fixed-two-digits approach to start with.

            regards, tom lane



Re: explain analyze rows=%.0f

From
Andrei Lepikhov
Date:
On 13/2/2025 01:40, Tom Lane wrote:
> I was idly speculating yesterday about letting the Ryu code print
> the division result, so that we get a variable number of digits.
> Realistically, that'd probably result in many cases in more digits
> than anybody wants, so it's not a serious proposal.  I'm cool with
> the fixed-two-digits approach to start with.
Okay, since no one else voted for the meaningful-numbers approach, I 
would say that fixed size is better than nothing. It may cover some of 
my practical cases, but unfortunately, not the most problematic ones.

-- 
regards, Andrei Lepikhov



Re: explain analyze rows=%.0f

From
Robert Haas
Date:
On Wed, Feb 12, 2025 at 2:55 PM Andrei Lepikhov <lepihov@gmail.com> wrote:
> On 13/2/2025 01:40, Tom Lane wrote:
> > I was idly speculating yesterday about letting the Ryu code print
> > the division result, so that we get a variable number of digits.
> > Realistically, that'd probably result in many cases in more digits
> > than anybody wants, so it's not a serious proposal.  I'm cool with
> > the fixed-two-digits approach to start with.
> Okay, since no one else voted for the meaningful-numbers approach, I
> would say that fixed size is better than nothing. It may cover some of
> my practical cases, but unfortunately, not the most problematic ones.

I don't love it either, but I do think it is significantly better than nothing.

--
Robert Haas
EDB: http://www.enterprisedb.com



Re: explain analyze rows=%.0f

From
Robert Haas
Date:
On Thu, Feb 13, 2025 at 4:05 AM Ilia Evdokimov
<ilya.evdokimov@tantorlabs.com> wrote:
> 1. Documentation (v9-0001-Clarify-display-of-rows-as-decimal-fractions-DOC.patch)
>
> One thing that bothers me is that the documentation explains how to compute
> total time, but it does not clarify how to compute total rows. Maybe this was
> obvious to others before, but now that we are displaying rows as a fraction,
> we should explicitly document how to interpret it alongside total time.
>
> I believe it would be helpful to show a non-integer rows value in an example
> query. However, to achieve this, we need the index scan results to vary across
> iterations. One way to accomplish this is by using the condition
> t1.unique2 > t2.unique2. Additionally, I suggest making loops a round number
> (e.g., 100) for better readability, which can be achieved using
> t1.thousand < 10. The final query would look like this:
>
> EXPLAIN ANALYZE SELECT *
> FROM tenk1 t1, tenk2 t2
> WHERE t1.thousand < 10 AND t1.unique2 > t2.unique2;
>
> I believe this is an ideal example for the documentation because it not only
> demonstrates fractional rows, but also keeps the execution plan nearly
> unchanged. While the estimated and actual average row counts become slightly
> rounded, I don't see another way to ensure different results for each index scan.

Anyone else have an opinion on this? I see Ilia's point, but a
non-equality join is probably an atypical case.

> 2. Code and tests (v9-0002-Clarify-display-of-rows-as-decimal-fractions.patch)
>
> I left the code and tests unchanged since we agreed on a fixed format of two decimal places.

This still has the HAS_DECIMAL() thing to which I objected.

--
Robert Haas
EDB: http://www.enterprisedb.com



Re: explain analyze rows=%.0f

From
Andrei Lepikhov
Date:
On 13/2/2025 21:42, Robert Haas wrote:
> On Thu, Feb 13, 2025 at 4:05 AM Ilia Evdokimov
> <ilya.evdokimov@tantorlabs.com> wrote:
>> 1. Documentation (v9-0001-Clarify-display-of-rows-as-decimal-fractions-DOC.patch)
>>
>> One thing that bothers me is that the documentation explains how to compute
>> total time, but it does not clarify how to compute total rows. Maybe this was
>> obvious to others before, but now that we are displaying rows as a fraction,
>> we should explicitly document how to interpret it alongside total time.
>>
>> I believe it would be helpful to show a non-integer rows value in an example
>> query. However, to achieve this, we need the index scan results to vary
>> across iterations. One way to accomplish this is by using the condition
>> t1.unique2 > t2.unique2. Additionally, I suggest making loops a round number
>> (e.g., 100) for better readability, which can be achieved using
>> t1.thousand < 10. The final query would look like this:
>>
>> EXPLAIN ANALYZE SELECT *
>> FROM tenk1 t1, tenk2 t2
>> WHERE t1.thousand < 10 AND t1.unique2 > t2.unique2;
>>
>> I believe this is an ideal example for the documentation because it not only
>> demonstrates fractional rows, but also keeps the execution plan nearly
>> unchanged. While the estimated and actual average row counts become slightly
>> rounded, I don't see another way to ensure different results for each index
>> scan.
> 
> Anyone else have an opinion on this? I see Ilia's point, but a
> non-equality join is probably an atypical case.
For me, it is ok: it demonstrates the feature and may legally happen. 
But this patch should be the second one, shouldn't it?
> 
>> 2. Code and tests (v9-0002-Clarify-display-of-rows-as-decimal-fractions.patch)
>>
>> I left the code and tests unchanged since we agreed on a fixed format of two decimal places.
> 
> This still has the HAS_DECIMAL() thing to which I objected.
I don't understand why using math.h and the floor() routine is 
necessary. I personally prefer x%y!=0 expression.

-- 
regards, Andrei Lepikhov



Re: explain analyze rows=%.0f

From
Robert Haas
Date:
On Mon, Feb 17, 2025 at 3:08 AM Ilia Evdokimov
<ilya.evdokimov@tantorlabs.com> wrote:
> You're right—floor shouldn't be used since it behaves differently across
> platforms, as Tom also pointed out earlier. I like the idea of using %,
> but since the compiler doesn't allow this operation with double, we need
> to cast it to int64. I checked how nloops and ntuples are modified -
> they are only incremented by 1. So that the casting might be safe. If I
> missed anything or if there's a reason why this casting wouldn't be
> safe, please let me know.

What I've been advocating for is just:

if (nloops > 1)

Instead of:

if (nloops > 1 && rows_is_fractional)

I don't think it's really safe to just cast a double back to int64. In
practice, the number of tuples should never be large enough to
overflow int64, but if it did, this result would be nonsense. Also, if
the double ever lost precision, the result would be nonsense. If we
want to have an exact count of tuples, we ought to change ntuples and
ntuples2 to be uint64. But I don't think we should do that in this
patch, because that adds a whole bunch of new problems to worry about
and might cause us to get nothing committed. Instead, I think we
should just always show two decimal digits if there's more than one
loop.

That's simpler than what the patch currently does and avoids this
problem. Perhaps it's objectionable for some other reason, but if so,
can somebody please spell out what that reason is so we can talk about
it?
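
As a standalone illustration of the precision point (the numbers below are made
up for the sketch, not taken from any report): once a tuple count no longer fits
exactly in a double, casting it back and testing divisibility tells you nothing,
while the plain nloops > 1 rule never has to ask.

#include <stdio.h>

int
main(void)
{
    double  ntuples = 9007199254740993.0;   /* 2^53 + 1: already rounded to 2^53 as a double */
    double  nloops = 2;

    /* the cast-based divisibility test is fooled by the lost precision */
    printf("%lld divisible by 2? %s\n",
           (long long) ntuples,
           ((long long) ntuples % 2 == 0) ? "yes" : "no");

    /* the simpler rule: two decimals whenever the node looped */
    if (nloops > 1)
        printf("rows=%.2f\n", ntuples / nloops);
    else
        printf("rows=%.0f\n", ntuples / nloops);
    return 0;
}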

--
Robert Haas
EDB: http://www.enterprisedb.com



Re: explain analyze rows=%.0f

From
Andrei Lepikhov
Date:
On 17/2/2025 15:19, Robert Haas wrote:
> On Mon, Feb 17, 2025 at 3:08 AM Ilia Evdokimov
> if (nloops > 1)
> 
> Instead of:
> 
> if (nloops > 1 && rows_is_fractonal)
> 
> I don't think it's really safe to just cast a double back to int64. In
> practice, the number of tuples should never be large enough to
> overflow int64, but if it did, this result would be nonsense. Also, if
> the double ever lost precision, the result would be nonsense. If we
> want to have an exact count of tuples, we ought to change ntuples and
> ntuples2 to be uint64. But I don't think we should do that in this
> patch, because that adds a whole bunch of new problems to worry about
> and might cause us to get nothing committed. Instead, I think we
> should just always show two decimal digits if there's more than one
> loop.
> 
> That's simpler than what the patch currently does and avoids this
> problem. Perhaps it's objectionable for some other reason, but if so,
> can somebody please spell out what that reason is so we can talk about
> it?
I can understand two decimal places. You might be concerned about
potential issues with tools that parse PostgreSQL EXPLAIN output.
However, I believe it would be beneficial to display fractional parts
only when iterations yield different numbers of tuples. Given that I
often work with enormous explains, I think this approach would enhance
the readability and comprehension of the output. Frequently, I may see
only part of the EXPLAIN on the screen. A floating-point row number
format may immediately give an idea about parameterisation (or another
reason for the subtree's variability) and help trace it down to the source.

-- 
regards, Andrei Lepikhov



Re: explain analyze rows=%.0f

From
Ilia Evdokimov
Date:
On 18.02.2025 23:55, Andrei Lepikhov wrote:
> On 17/2/2025 15:19, Robert Haas wrote:
>> On Mon, Feb 17, 2025 at 3:08 AM Ilia Evdokimov
>> if (nloops > 1)
>>
>> Instead of:
>>
>> if (nloops > 1 && rows_is_fractonal)
>>
>> I don't think it's really safe to just cast a double back to int64. In
>> practice, the number of tuples should never be large enough to
>> overflow int64, but if it did, this result would be nonsense. Also, if
>> the double ever lost precision, the result would be nonsense. If we
>> want to have an exact count of tuples, we ought to change ntuples and
>> ntuples2 to be uint64. But I don't think we should do that in this
>> patch, because that adds a whole bunch of new problems to worry about
>> and might cause us to get nothing committed. Instead, I think we
>> should just always show two decimal digits if there's more than one
>> loop.
>>
>> That's simpler than what the patch currently does and avoids this
>> problem. Perhaps it's objectionable for some other reason, but if so,
>> can somebody please spell out what that reason is so we can talk about
>> it?
> I can understand two decimal places. You might be concerned about 
> potential issues with some codes that parse PostgreSQL explains.
> However, I believe it would be beneficial to display fractional parts 
> only when iterations yield different numbers of tuples. Given that I 
> often work with enormous explains, I think this approach would enhance 
> the readability and comprehension of the output. Frequently, I may see 
> only part of the EXPLAIN on the screen. A floating-point row number 
> format may immediately give an idea about parameterisation (or another 
> reason for the subtree's variability) and trace it down to the source.
>

The idea of indicating to the user that different iterations produced 
varying numbers of rows is quite reasonable. Most likely, this would 
require adding a new boolean field to the Instrumentation structure, 
which would track this information by comparing the rows value from the 
current and previous iterations.

However, there is a major issue: this case would be quite difficult to 
document clearly. Even with an example and explanatory text, users may 
still be confused about why rows=100 means the same number of rows on 
all iterations, while rows=100.00 indicates variation. Even if we 
describe this in the documentation, a user seeing rows=100.00 will most 
likely assume it represents an average of 100 rows per iteration and may 
still not realize that the actual number of rows varied.

If we want to convey this information more clearly, we should consider a 
more explicit approach. For example, instead of using a fractional 
value, we could display the minimum and maximum row counts observed 
during execution (e.g., rows=10..20, formatting details could be
discussed). However, in my opinion, this discussion is beyond the scope 
of this thread.

Any thoughts?

--
Best regards,
Ilia Evdokimov,
Tantor Labs LLC.




Re: explain analyze rows=%.0f

From
Andrei Lepikhov
Date:
On 19/2/2025 15:01, Ilia Evdokimov wrote:
> If we want to convey this information more clearly, we should consider a 
> more explicit approach. For example, instead of using a fractional 
> value, we could display the minimum and maximum row counts observed 
> during execution (e.g.,rows=10..20, formatting details could be 
> discussed). However, in my opinion, this discussion is beyond the scope 
> of this thread.
I think this approach is redundant. Usually, the problem is an unknown
number of tuples: having millions of loops, we produce a small number of
tuples, but each tuple costs a lot because of a subplan or something like
that. By printing fractional digits (of course, we need to see significant
ones, but we already agreed on a couple of digits for now), we may resolve
this corner-case problem fully - at least all examples in my experience
would be resolved, and I have even maintained a patch for years to manage
this.
Moreover, it is quite simple to implement. I just can't see why we wouldn't
do it right now.

-- 
regards, Andrei Lepikhov



Re: explain analyze rows=%.0f

From
Robert Haas
Date:
On Wed, Feb 19, 2025 at 9:01 AM Ilia Evdokimov
<ilya.evdokimov@tantorlabs.com> wrote:
> The idea of indicating to the user that different iterations produced
> varying numbers of rows is quite reasonable. Most likely, this would
> require adding a new boolean field to the Instrumentation structure,
> which would track this information by comparing the rows value from the
> current and previous iterations.

Yep.

> However, there is a major issue: this case would be quite difficult to
> document clearly. Even with an example and explanatory text, users may
> still be confused about why rows=100 means the same number of rows on
> all iterations, while rows=100.00 indicates variation. Even if we
> describe this in the documentation, a user seeing rows=100.00 will most
> likely assume it represents an average of 100 rows per iteration and may
> still not realize that the actual number of rows varied.

I imagine this is a surmountable problem.

> If we want to convey this information more clearly, we should consider a
> more explicit approach. For example, instead of using a fractional
> value, we could display the minimum and maximum row counts observed
> during execution (e.g.,rows=10..20, formatting details could be
> discussed). However, in my opinion, this discussion is beyond the scope
> of this thread.

I quite agree. I think we've spent too much time on this already; this
idea was first proposed in 2009, and we just haven't gotten around to
doing anything about it. Redesigning the feature a few more times
(with an expanded scope) isn't going to make that better.

So, I've committed v11-0001. I'm not altogether convinced that
v11-0002 is necessary -- no portion of the documentation that it
modifies is falsified by the committed patch. Maybe we can just call
this one done for now and move on?

--
Robert Haas
EDB: http://www.enterprisedb.com



Re: explain analyze rows=%.0f

From
Robert Haas
Date:
On Fri, Feb 21, 2025 at 4:20 PM Robert Haas <robertmhaas@gmail.com> wrote:
> So, I've committed v11-0001. I'm not altogether convinced that
> v11-0002 is necessary -- no portion of the documentation that it
> modifies is falsified by the committed patch. Maybe we can just call
> this one done for now and move on?

Well, not quite yet, at least. There are two buildfarm failures since
I committed this. The first is here:

https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=conchuela&dt=2025-02-21%2021%3A22%3A30

diff -U3 /home/pgbf/buildroot/HEAD/pgsql.build/src/test/regress/expected/explain.out
/home/pgbf/buildroot/HEAD/pgsql.build/src/test/regress/results/explain.out
--- /home/pgbf/buildroot/HEAD/pgsql.build/src/test/regress/expected/explain.out
2025-02-21 22:20:39.229225000 +0100
+++ /home/pgbf/buildroot/HEAD/pgsql.build/src/test/regress/results/explain.out
2025-02-21 22:22:38.193054000 +0100
@@ -528,7 +528,7 @@
                              "Plan Rows": 0,                +
                              "Plan Width": 0,               +
                              "Total Cost": 0.0,             +
-                             "Actual Rows": 0.0,            +
+                             "Actual Rows": 0,              +
                              "Actual Loops": 0,             +
                              "Startup Cost": 0.0,           +
                              "Async Capable": false,        +
@@ -575,7 +575,7 @@
                      "Plan Rows": 0,                        +
                      "Plan Width": 0,                       +
                      "Total Cost": 0.0,                     +
-                     "Actual Rows": 0.0,                    +
+                     "Actual Rows": 0,                      +
                      "Actual Loops": 0,                     +
                      "Startup Cost": 0.0,                   +
                      "Async Capable": false,                +

I imagine what happened here is that the parallel workers didn't
start, and so nloops was 1 instead of >1, and so we got an integer
output instead of a fractional one. It looks like there's some kind of
JSON-based EXPLAIN filtering happening here, so maybe we should just
also be filtering out the actual rows column? But I fear this will
turn out to be a problem for other parallel-query using tests that
were modified by this commit: it can always happen that the workers
start slowly enough that the leader finishes the whole query before
the worker starts up. Maybe making the number of decimal digits
contingent on nloops wasn't such a hot idea - I think it means that
every parallel query has a small but non-zero probability of producing
different EXPLAIN output in the probably-rare case where the worker
startup is slow.

The other failure is here:

https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=loach&dt=2025-02-21%2021%3A25%3A19

This failure seems not to have uploaded any log artifacts, so I'm not
sure how to tell what caused this.

--
Robert Haas
EDB: http://www.enterprisedb.com



Re: explain analyze rows=%.0f

From
Tom Lane
Date:
Robert Haas <robertmhaas@gmail.com> writes:
> Well, not quite yet, at least. There are two buildfarm failures since
> I committed this. The first is here:

A very significant fraction of the buildfarm is now pink.
If you don't have a fix pretty nearly ready, please revert.

> I imagine what happened here is that the parallel workers didn't
> start, and so nloops was 1 instead of >1, and so we got an integer
> output instead of a fractional one.

Sounds plausible.  I note that this run:

https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skimmer&dt=2025-02-21%2023%3A00%3A10

got through a couple of iterations of the core tests before failing,
showing that it's not 100% reproducible even on the machines where
it's happening.

            regards, tom lane



a very significant fraction of the buildfarm is now pink

From
Robert Haas
Date:
On Fri, Feb 21, 2025 at 7:04 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
> A very significant fraction of the buildfarm is now pink.
> If you don't have a fix pretty nearly ready, please revert.

When we're going to do a release, you want no commits for at least 24
hours before the release so that we can make sure the buildfarm is
clean. But when I commit something and the buildfarm fails, you want
it reverted within a handful of hours before I can even be sure of
having seen all the failures the commit caused, let alone had time to
think about what the best fix might be. It doesn't make sense to me
that we need 24 hours to be sure that the buildfarm is passing, but in
3 hours I'm supposed to see all the failures -- including the ones
that only happen on buildfarm animals that run once a day, I guess? --
and analyze them -- and decide on a fix -- naturally without any sort
of discussion because there's no time for that -- and code the fix --
and push it. I have a really hard time seeing how that's a reasonable
expectation.

I understand that the buildfarm can't be red all the time or nobody
can distinguish the problems they caused from preexisting ones. At the
same time, it seems completely unreasonable for us to say that, on the
one hand, the buildfarm has to be green at absolutely all times, and
on the other hand, buildfarm owners are not required to provide any
sort of resources to test things before they are committed. IMHO, one
of those policies absolutely has to change. The current situation is
way too stressful for committers and it's burning people out and
making them unwilling to commit things -- or if they do commit things,
then they end up insta-reverting them, committing them again later,
maybe insta-reverting them a second time because they didn't actually
find all the problems the first time, and then maybe even round three,
four, or five. The commit log ends up with a bunch of garbage from the
repeated commits and reverts, and if it goes on long enough,
eventually somebody shows up to say "wow, this patch seems to be in
terrible shape, maybe it shouldn't ever be committed again" right when
the committer's stress level is already going through the ceiling. And
sometimes that is justified, but sometimes it isn't.

In the case of my commit today, the failures are the result of a
2-line regression diff with no functional impact that neither CI nor
any of the 11 reviewers noticed. That just shouldn't be the sort of
thing that results in somebody having to work evenings and weekends.
Perhaps if it DIDN'T result in a committer having to work evenings and
weekends, it wouldn't have taken 16 years for us to do something about
that problem.

--
Robert Haas
EDB: http://www.enterprisedb.com



Re: explain analyze rows=%.0f

From
Robert Haas
Date:
On Fri, Feb 21, 2025 at 7:04 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Sounds plausible.  I note that this run:
>
> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skimmer&dt=2025-02-21%2023%3A00%3A10
>
> got through a couple of iterations of the core tests before failing,
> showing that it's not 100% reproducible even on the machines where
> it's happening.

Yeah, I pushed an attempt at a band-aid fix. Hopefully that will make
it clear whether there are other problems and how bad they are. It
looks like the XversionUpgrade tests are not this commit's fault.

--
Robert Haas
EDB: http://www.enterprisedb.com



Re: explain analyze rows=%.0f

From
Tom Lane
Date:
Robert Haas <robertmhaas@gmail.com> writes:
> Yeah, I pushed an attempt at a band-aid fix. Hopefully that will make
> it clear whether there are other problems and how bad they are. It
> looks like the XversionUpgrade tests are not this commit's fault.

They are not, they're Jeff's problem.

I apologize if I ruffled your feathers.  But the buildfarm is a shared
resource and it doesn't do to leave it significantly broken over a
weekend, which I feared you might be about to do.  Committing stuff
at 4PM on a Friday is perhaps not the best plan if your intention
is to not work weekends.

On a positive note, at least some of the formerly-pink members
seem to be happy now.  So as you hoped, we can get more info about
whether there are less-frequent problems.

As for a permanent fix: maybe, instead of printing fractions when
actual nloops > 1, we should print fractions if *potential*
nloops > 1?  That is, anytime we're on the inside of a nestloop,
under a Gather, etc.  Seems like this'd not be too hard to track
while recursively descending the PlanState tree.

            regards, tom lane



Re: a very significant fraction of the buildfarm is now pink

From
Andres Freund
Date:
Hi,

On 2025-02-21 20:09:24 -0500, Robert Haas wrote:
> In the case of my commit today, the failures are the result of a
> 2-line regression diff with no functional impact that neither CI nor
> any of the 11 reviewers noticed. That just shouldn't be the sort of
> thing that results in somebody having to work evenings and weekends.
> Perhaps if it DIDN'T result in a committer having to work evenings and
> weekends, it wouldn't have taken 16 years for us to do something about
> that problem.

I think you're right that it really depends on what the breakage is.

If just about all animals die early in the test, it's more important that the
problem gets fixed sooner or the faulty commit(s) reverted. For one it prevents
detecting other problems, as later, independent failures are hidden. For
another it quite likely will affect other developers "at home", too.

I'd also say that breaking CI and BF is probably something I'd consider more
urgent, as that could indicate the commit was just generally less well tested.

If, as the case here, a few animals fail with a minor regression.diffs output,
one that doesn't indicate any major issue but just some corner case change in
output, it's ok to leave it for the next day or so (*).

Greetings,

Andres Freund


(*) With some adjustments based on circumstances. If the breakage is over a
weekend, it's perhaps a bit less urgent. However, if it's the Friday before a
feature freeze, it's obviously a different story.



Re: explain analyze rows=%.0f

From
Andres Freund
Date:
Hi,

On 2025-02-21 21:11:07 -0500, Tom Lane wrote:
> Robert Haas <robertmhaas@gmail.com> writes:
> > Yeah, I pushed an attempt at a band-aid fix. Hopefully that will make
> > it clear whether there are other problems and how bad they are. It
> > looks like the XversionUpgrade tests are not this commit's fault.
> 
> They are not, they're Jeff's problem.
> 
> I apologize if I ruffled your feathers.  But the buildfarm is a shared
> resource and it doesn't do to leave it significantly broken over a
> weekend, which I feared you might be about to do.  Committing stuff
> at 4PM on a Friday is perhaps not the best plan if your intention
> is to not work weekends.

IDK, there were about 8 animals showing rather minor and easy-to-identify
regression.diffs changes, with 2-3 times as many animals being fine.  Of course it's
not desirable to have to dig through such failures when checking whether
another patch caused problems, but it's also not the end of the world if
things stay that way for a bit, particularly on the weekend.

Greetings,

Andres Freund



Re: explain analyze rows=%.0f

From
Robert Haas
Date:
On Fri, Feb 21, 2025 at 9:11 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
> I apologize if I ruffled your feathers.  But the buildfarm is a shared
> resource and it doesn't do to leave it significantly broken over a
> weekend, which I feared you might be about to do.  Committing stuff
> at 4PM on a Friday is perhaps not the best plan if your intention
> is to not work weekends.

I really, really think we need some policy changes. Several committers
have told me that they literally won't commit anything on Friday for
fear of this exact kind of problem, but that means we lose a lot of
hours every week that people could potentially be dealing with patches
that have been waiting for an exceedingly long time for someone to
commit them. I feel like this is entirely a self-inflicted injury. We
could decide that CI is primary and the buildfarm is secondary, so
that as long as CI is green, you have a week to deal with anything
else that breaks. We could decide, as I said on the new thread I
started, to insist on having a vehicle for pre-commit buildfarm
testing. We could decide to just chill out over a couple of lines of
regression test diffs over a weekend where most people aren't going to
be working anyway. We could *at the very least* decide that you get 24
hours before you have to revert, perhaps with an exception if we're in
the last 14 days before feature freeze. There are probably other
things we could do. Yelling at people every time a patch turns the
buildfarm red for a few hours seems worse to me than MANY of the other
options.

Really, I feel like the way the buildfarm works is a relic of the
past. We're living in an era where cfbot is testing hundreds of
branches on a daily basis, all of which are completely untrusted code
downloaded from the Internet, but our buildfarm can't manage more than
about six and can't provide any testing resources even to committers.
In 2025, that's really far from the state of the art -- and CI seems
to be progressing a great deal faster than the BF, so the gap between
them figures to only get wider.

> On a positive note, at least some of the formerly-pink members
> seem to be happy now.  So as you hoped, we can get more info about
> whether there are less-frequent problems.

Cool.

> As for a permanent fix: maybe, instead of printing fractions when
> actual nloops > 1, we should print fractions if *potential*
> nloops > 1?  That is, anytime we're on the inside of a nestloop,
> under a Gather, etc.  Seems like this'd not be too hard to track
> while recursively descending the PlanState tree.

I guess we could do that, but how about just always displaying two
decimal places? I feel like you're really worried that people will be
confused if the row count is not an integer, but I'm not sure anyone
else has echoed that concern, and I am doubtful that it's really a
problem. Either people don't understand that the row count is divided
through by nloops, in which case the decimal places might actually
give them a clue that they've misunderstood, or they do understand
that, in which case the decimal places are not confusing. I feel like
we've gone to a lot of trouble to come up with complex solutions --
what was actually committed was one of the simpler things proposed --
but I am still not convinced that the simplest solution isn't for the
best.
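
For illustration only, here is a standalone sketch of the printf
behaviour in question (not the explain.c code itself), using a node
that returned 500 rows over 999 executions:

    #include <stdio.h>

    int main(void)
    {
        double ntuples = 500.0;   /* total rows produced across all executions */
        double nloops  = 999.0;   /* number of times the node was executed */
        double rows = ntuples / nloops;   /* per-loop average, about 0.5005 */

        printf("rows=%.0f\n", rows);   /* old format: prints "rows=1" */
        printf("rows=%.2f\n", rows);   /* two decimal places: prints "rows=0.50" */
        return 0;
    }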

--
Robert Haas
EDB: http://www.enterprisedb.com



Re: a very significant fraction of the buildfarm is now pink

From
Robert Haas
Date:
On Fri, Feb 21, 2025 at 9:14 PM Andres Freund <andres@anarazel.de> wrote:
> I'd also say that breaking CI and BF is probably something I'd consider more
> urgent, as that could indicate the commit was just generally less well tested.

This is possible, but I think we oversell it by quite a lot. It's
quite possible for someone to put hundreds of hours into a patch and
forget to check CI before pushing; and it's also quite possible for
someone to do a terrible job vetting a patch where CI is clean. When
somebody is already stressed out about their patch breaking the
buildfarm, the very last thing they need is somebody who hasn't read
the patch or understood what the problems are to show up and say "hey,
maybe this patch should be reverted for all time and never considered
again ever!". It's just making a very stressful situation more
stressful. And it's cheap. If somebody shows up and says "hey, this
was improvidently committed for the following six design-level
reasons," that is abundantly fair and deserves major respect. Such
reviews take real time, thought, and work. Idly speculating that
someone's failure to check CI is a sign that they've also done
everything else wrong takes almost no work at all. I hate that we do
that to people. As much as I hate it when it happens to me, I think I
hate it even more when it happens to other people. It's a terrible way
to treat people who have poured their heart and soul into becoming
committers and who really care about the project, at least until we
beat the caring out of them.

But the real point of my previous email is that I just do not think
it's reasonable to expect people to fix complex programming problems
within hours. As much as I can be grumpy about CI, anything that goes
wrong with CI should in theory be something you can avoid ever having
to deal with on a Friday night no matter when you choose to commit,
because you can test things in advance. I know the BF has more
configurations than CI, but instead of having LESS capability to test
things in advance, it has NONE. Twenty years ago, post-commit testing
was probably the best you could hope for, but today it isn't.

--
Robert Haas
EDB: http://www.enterprisedb.com



Re: a very significant fraction of the buildfarm is now pink

From
Jelte Fennema-Nio
Date:
This was very painful to read. +1 on making changes: both a culture
change to treat minor failures over the weekend with less urgency, and
probably also a tooling change to make this less of a problem in
general.

On Sat, 22 Feb 2025 at 04:38, Robert Haas <robertmhaas@gmail.com> wrote:
>
> On Fri, Feb 21, 2025 at 9:14 PM Andres Freund <andres@anarazel.de> wrote:
> > I'd also say that breaking CI and BF is probably something I'd consider more
> > urgent, as that could indicate the commit was just generally less well tested.
>
> This is possible, but I think we oversell it by quite a lot. It's
> quite possible for someone to put hundreds of hours into a patch and
> forget to check CI before pushing;
...snip...
> Twenty years ago, post-commit testing
> was probably the best you could hope for, but today it isn't.

100% agreed. As someone who also works on other open source projects,
this reads as incredibly foreign.
happen in most projects I've worked on: GitHub will simply not allow
you to push to master/main if CI fails (at least not without an
explicit override). Postgres is the only project I work on where all
these things only run post-push to master.

To be clear, I'm not saying that we should be using GitHub for this,
although I personally think that would be a good idea given our CI is
triggered by pushes to GitHub. But at the very least it seems we need
a process change to make it less likely that people accidentally push
broken changes to master. For example, we create a staging repo (which
could be on GitHub) and each committer has their own branch (or branch
prefix). When a committer pushes there, CI is run, and probably also
some significant part of the buildfarm (maybe a committer could even
opt in to running the full buildfarm). If those pass, the branch is
automatically rebased on top of master and pushed to the production
repo's master branch.



Re: explain analyze rows=%.0f

From
Ilia Evdokimov
Date:


On 22.02.2025 00:20, Robert Haas wrote:
So, I've committed v11-0001. I'm not altogether convinced that
v11-0002 is necessary -- no portion of the documentation that it
modifies is falsified by the committed patch. Maybe we can just call
this one done for now and move on?


Not quite. Even if no one objects to leaving the documentation text as it is, there is one nuance that needs to be fixed: example [0] contains a node with loops=10, so it should be updated to reflect the current output. I've attached a patch for that.

Attachment

Re: a very significant fraction of the buildfarm is now pink

From
Ilia Evdokimov
Date:
Hi,

Regarding the patch push, I am not a committer, but perhaps my
perspective is of interest. When I noticed late on Friday evening that
the tests had failed, I was quite anxious, thinking I would need to fix
it right away. However, Robert committed the fix before I could.

In general, when someone commits in any project, I believe the scale of
the commit should be taken into account. In the case of Postgres, if a
commit touches critical areas like the planner, WAL, or some API, or
involves large code changes, it's important to be prepared to fix any
issues that arise. However, when the changes are more minor, such as
documentation, renaming something, or small refactors, committing late
on a Friday may be less of a concern.

In other words, the larger the change or the more sensitive the area of
the code, the more prepared one should be for potential issues. Where
exactly to draw that boundary in Postgres, I am not sure; I am confident
you have a much better sense and more experience of how to handle it.

--
Best regards,
Ilia Evdokimov,
Tantor Labs LLC.




Re: explain analyze rows=%.0f

From
Robert Haas
Date:
On Fri, Feb 21, 2025 at 10:00 PM Robert Haas <robertmhaas@gmail.com> wrote:
> I guess we could do that, but how about just always displaying two
> decimal places? I feel like you're really worried that people will be
> confused if the row count is not an integer, but I'm not sure anyone
> else has echoed that concern, and I am doubtful that it's really a
> problem. Either people don't understand that the row count is divided
> through by nloops, in which case the decimal places might actually
> give them a clue that they've misunderstood, or they do understand
> that, in which case the decimal places are not confusing. I feel like
> we've gone to a lot of trouble to come up with complex solutions --
> what was actually committed was one of the simpler things proposed --
> but I am still not convinced that the simplest solution isn't for the
> best.

So here's a patch for that. Thoughts?

--
Robert Haas
EDB: http://www.enterprisedb.com

Attachment

Re: explain analyze rows=%.0f

From
Ilia Evdokimov
Date:
On 24.02.2025 19:20, Robert Haas wrote:
> On Fri, Feb 21, 2025 at 10:00 PM Robert Haas <robertmhaas@gmail.com> wrote:
>> I guess we could do that, but how about just always displaying two
>> decimal places? I feel like you're really worried that people will be
>> confused if the row count is not an integer, but I'm not sure anyone
>> else has echoed that concern, and I am doubtful that it's really a
>> problem. Either people don't understand that the row count is divided
>> through by nloops, in which case the decimal places might actually
>> give them a clue that they've misunderstood, or they do understand
>> that, in which case the decimal places are not confusing. I feel like
>> we've gone to a lot of trouble to come up with complex solutions --
>> what was actually committed was one of the simpler things proposed --
>> but I am still not convinced that the simplest solution isn't for the
>> best.
> So here's a patch for that. Thoughts?
>

If no one is concerned about rows being a non-integer, then I support
this; to me it is the average row count always being an integer that
seems strange. If we go with this approach, we should also update all
the examples in the documentation accordingly. I've attached a patch
with the documentation changes.

--
Best regards,
Ilia Evdokimov,
Tantor Labs LLC.

Attachment

Re: explain analyze rows=%.0f

From
Robert Haas
Date:
On Mon, Feb 24, 2025 at 12:12 PM Ilia Evdokimov
<ilya.evdokimov@tantorlabs.com> wrote:
> If no one is concerned about rows being a non-integer, then I support
> this, as it's quite strange for the average rows to be an integer only
> for me. If we go with this approach, we should also update all examples
> in the documentation accordingly. I attached patch with changes in
> documentation.

Thanks, that's very helpful. If we go forward with this, I'll commit
your patch and mine together.

--
Robert Haas
EDB: http://www.enterprisedb.com



Re: a very significant fraction of the buildfarm is now pink

From
James Hunter
Date:
On Fri, Feb 21, 2025 at 5:09 PM Robert Haas <robertmhaas@gmail.com> wrote:
>
> On Fri, Feb 21, 2025 at 7:04 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
> > A very significant fraction of the buildfarm is now pink.
> > If you don't have a fix pretty nearly ready, please revert.
>
> When we're going to do a release, you want no commits for at least 24
> hours before the release so that we can make sure the buildfarm is
> clean. But when I commit something and the buildfarm fails, you want
> it reverted within a handful of hours before I can even be sure of
> having seen all the failures the commit caused, let alone had time to
> think about what the best fix might be. It doesn't make sense to me
> that we need 24 hours to be sure that the buildfarm is passing, but in
> 3 hours I'm supposed to see all the failures -- including the ones
> that only happen on buildfarm animals that run once a day, I guess? --
> and analyze them -- and decide on a fix -- naturally without any sort
> of discussion because there's no time for that -- and code the fix --
> and push it. I have a really hard time seeing how that's a reasonable
> expectation.
>
> I understand that the buildfarm can't be red all the time or nobody
> can distinguish the problems they caused from preexisting ones.

I think you're right, and the solution has to be something like:
1. If it's > 24 hours until a release, then test failures on the
buildfarm must be tolerated.
2. Buildfarm needs to track failures at a fine enough granularity that
I can tell whether my change caused the failure.

I've worked at a few database companies -- some had workable test
frameworks, others didn't. The biggest mistake some companies made was
to collapse an entire test suite into a single, binary result: "ALL
PASSED" vs. "AT LEAST ONE FAILED." (It's fine to collapse the test
suite *before you merge your commit,* because then you are testing
*only* your commit. After you merge it, however, CI / buildfarm ends
up testing *multiple* new commits at the same time, so the tests need
to report at finer granularity.)

If we can't do CI / buildfarm at the granularity of a single commit
(and, at some point, no organization can -- the test universe becomes
too large!), then we have to design for the possibility that one or
more of multiple commits will cause a test failure, while the
remaining commits are innocent. And so the test framework has to
identify the (likely) culprit for a particular new test failure, and
then give the author *enough time* to fix that failure. And while the
author is fixing his failure, other authors need to be able to merge
their commits, etc.

(Note that actual *compilation* failures are more severe than test
failures, because if PostgreSQL can't compile, then no one can get any
work done if they rebase to TIP. But I think you are discussing tests,
not compilation.)

One company I worked for tracked test failures by individual unit
test. So, "parallel" schedule would have 220+ tests, each tracking
SUCCESS, [EXISTING] FAILURE, and NEW FAILURE. On every NEW FAILURE,
the system notified the unit test owner (this is feasible inside a
corporation, but perhaps not for an open source community) and
concurrently started running the equivalent of "git bisect" to
identify the offending commit. Once it identified (what it thought
was) the offending commit, it notified that committer as well.

> In the case of my commit today, the failures are the result of a
> 2-line regression diff with no functional impact that neither CI nor
> any of the 11 reviewers noticed.

This is exactly the sort of case where tracking NEW FAILURE at the
unit test level is useful. So you made a change that caused a
regression diff, and the new output is better than the old (or at
least neutral). Flag this individual unit test, send it to Robert, and
development continues.

Given enough unit tests, along with reasonable pre-commit CI testing,
the probability of two commits causing regressions / diffs in the same
unit test, over a 1-week period (let's say), is very small.

So: +1 to your proposal.

James



Re: explain analyze rows=%.0f

From
Robert Haas
Date:
On Mon, Feb 24, 2025 at 2:16 PM Robert Haas <robertmhaas@gmail.com> wrote:
> On Mon, Feb 24, 2025 at 12:12 PM Ilia Evdokimov
> <ilya.evdokimov@tantorlabs.com> wrote:
> > If no one is concerned about rows being a non-integer, then I support
> > this, as it's quite strange for the average rows to be an integer only
> > for me. If we go with this approach, we should also update all examples
> > in the documentation accordingly. I attached patch with changes in
> > documentation.
>
> Thanks, that's very helpful. If we go forward with this, I'll commit
> your patch and mine together.

Since Tom doesn't seem to want to further object to trying it this
way, I've gone ahead and done this for now. Granted, it's not
altogether clear from the buildfarm that we would have ever gotten any
more failures after 44cbba9a7f51a3888d5087fc94b23614ba2b81f2, but it
seems to me that there's no way to rule out the possibility, and
low-frequency buildfarm failures that might only occur once a year are
in some ways more annoying than high-frequency ones, especially to
people who put a lot of energy into tracking down those failures. It
just seems unprincipled to me to leave things in a state where we know
that such failures are possible, and wonky to have things in a state
where whether parallel query gets used or not could make the
difference between getting a report of X rows and getting a report of
X.00 rows. If a user notices that difference in their own environment
-- regression tests aside -- they're not likely to guess what has
happened.

Of course, if this commit draws objections or runs into problems with
the buildfarm or with users, we might have to reconsider. I'm not dead
set on having it this way rather than any other. But I think it's more
principled than making 0-vs-2 digits predicated on a random event; and
it's a lot simpler than any of the other options that have been
proposed.

--
Robert Haas
EDB: http://www.enterprisedb.com



Re: explain analyze rows=%.0f

From
Ilia Evdokimov
Date:
On 27.02.2025 19:51, Robert Haas wrote:
> On Mon, Feb 24, 2025 at 2:16 PM Robert Haas <robertmhaas@gmail.com> wrote:
>> On Mon, Feb 24, 2025 at 12:12 PM Ilia Evdokimov
>> <ilya.evdokimov@tantorlabs.com> wrote:
>>> If no one is concerned about rows being a non-integer, then I support
>>> this, as it's quite strange for the average rows to be an integer only
>>> for me. If we go with this approach, we should also update all examples
>>> in the documentation accordingly. I attached patch with changes in
>>> documentation.
>> Thanks, that's very helpful. If we go forward with this, I'll commit
>> your patch and mine together.
> Since Tom doesn't seem to want to further object to trying it this
> way, I've gone ahead and done this for now. Granted, it's not
> altogether clear from the buildfarm that we would have ever gotten any
> more failures after 44cbba9a7f51a3888d5087fc94b23614ba2b81f2, but it
> seems to me that there's no way to rule out the possibility, and
> low-frequency buildfarm failures that might only occur once a year are
> in some ways more annoying than high-frequency ones, especially to
> people who put a lot of energy into tracking down those failures. It
> just seems unprincipled to me to leave things in a state where we know
> that such failures are possible, and wonky to have things in a state
> where whether parallel query gets used or not could make the
> difference between getting a report of X rows and getting a report of
> X.00 rows. If a user notices that difference in their own environment
> -- regression tests aside -- they're not likely to guess what has
> happened.
>
> Of course, if this commit draws objections or runs into problems with
> the buildfarm or with users, we might have to reconsider. I'm not dead
> set on having it this way rather than any other. But I think it's more
> principled than making 0-vs-2 digits predicated on a random event; and
> it's a lot simpler than any of the other options that have been
> proposed.
>

Then I am moving the commitfest entry [0] to the Committed status.

[0]: https://commitfest.postgresql.org/patch/5501/

--
Best regards,
Ilia Evdokimov,
Tantor Labs LLC.




Re: explain analyze rows=%.0f

From
Alena Rybakina
Date:
Hi! I got a query plan with a strange number of rows. Could you please 
help me understand it?

create temp table ta (id int primary key, val int);
create temp table tb (id int primary key, aval int);
create temp table tc (id int primary key, aid int);
insert into ta select id, id from generate_series(1,1000) as id;
insert into tb select id, id from generate_series(500,1000) as id;
insert into tc select id, id from generate_series(400,1000) as id;

EXPLAIN (ANALYZE, COSTS OFF, SUMMARY OFF, TIMING OFF, BUFFERS OFF)
  SELECT 1
    FROM ta ta1
   WHERE EXISTS (SELECT 1
                   FROM tb
                   JOIN tc
                     ON tb.id = ta1.id and
                        ta1.id < 1000
                     where exists (select 1 from ta ta2 where ta2.id = ta1.id));

                                       QUERY PLAN
--------------------------------------------------------------------------------------
  Seq Scan on ta ta1 (actual rows=500.00 loops=1)
    Filter: EXISTS(SubPlan 2)
    Rows Removed by Filter: 500
    SubPlan 2
      ->  Result (actual rows=0.50 loops=1000)
            One-Time Filter: ((ta1.id < 1000) AND (InitPlan 1).col1)
            InitPlan 1
              ->  Index Only Scan using ta_pkey on ta ta2 (actual rows=1.00 loops=999)
                    Index Cond: (id = ta1.id)
                    Heap Fetches: 999
            ->  Nested Loop (actual rows=0.50 loops=999)
                  ->  Seq Scan on tb (actual rows=0.50 loops=999)
                        Filter: (id = ta1.id)
                        Rows Removed by Filter: 375
                  ->  Seq Scan on tc (actual rows=1.00 loops=500)
(15 rows)

To be honest, I can't understand why the number of rows is 0.50 here.

-- 
Regards,
Alena Rybakina
Postgres Professional




Re: explain analyze rows=%.0f

From
Matthias van de Meent
Date:
On Thu, 6 Mar 2025 at 14:18, Alena Rybakina <a.rybakina@postgrespro.ru> wrote:
>
> Hi! I got a query plan with a strange number of rows. Could you please
> help me understand it?
>
> To be honest I can't understand why 0.50 number of rows here?

Because the scan matched only about 0.50 rows per loop on average
(500 rows over 999 iterations, 500/999 ~= 0.50) for these plan nodes:

>             ->  Nested Loop (actual rows=0.50 loops=999)
>                   ->  Seq Scan on tb (actual rows=0.50 loops=999)

And for this, it was 500 rows total in 1000 iterations, which also
rounds to 0.50:

>     SubPlan 2
>       ->  Result (actual rows=0.50 loops=1000)
>             One-Time Filter: ((ta1.id < 1000) AND (InitPlan 1).col1)

As of ddb17e38 (and its follow-up 95dbd827), we display fractional
rows-per-loop, with 2 digits of precision, rather than a rounded
integer. This allows a user to distinguish plan nodes with 0.49
rows/loop and 0.01 rows/loop, and that can help inform the user about
how to further optimize their usage of indexes and other optimization
paths.
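
To make that last point concrete, here is a standalone printf
illustration (not the EXPLAIN code itself, just the formatting
behaviour) of how integer rounding collapses those two cases together:

    #include <stdio.h>

    int main(void)
    {
        double almost_half = 490.0 / 1000.0;   /* 0.49 rows per loop */
        double nearly_zero =  10.0 / 1000.0;   /* 0.01 rows per loop */

        /* Integer rounding makes the two nodes indistinguishable: */
        printf("rows=%.0f  rows=%.0f\n", almost_half, nearly_zero);  /* rows=0  rows=0 */

        /* Two decimal places keep the difference visible: */
        printf("rows=%.2f  rows=%.2f\n", almost_half, nearly_zero);  /* rows=0.49  rows=0.01 */
        return 0;
    }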

Kind regards,

Matthias van de Meent
Neon (https://neon.tech)



Re: explain analyze rows=%.0f

From
Robert Haas
Date:
On Thu, Mar 6, 2025 at 8:30 AM Matthias van de Meent
<boekewurm+postgres@gmail.com> wrote:
> On Thu, 6 Mar 2025 at 14:18, Alena Rybakina <a.rybakina@postgrespro.ru> wrote:
> >
> > Hi! I got a query plan with a strange number of rows. Could you please
> > help me understand it?
> >
> > To be honest I can't understand why 0.50 number of rows here?
>
> Because the scan matched only about 0.50 rows per loop on average
> (500 rows over 999 iterations, 500/999 ~= 0.50) for these plan nodes:

This is a good and correct explanation, but I'm VERY curious to hear
more from Alena. Like, Tom expressed the concern before we did this
that the fractional digits would confuse people, and the fact that
someone who is a regular participant on this mailing list was one of
the people confused gives credence to that concern. But I want to know
what exactly Alena found (or finds) confusing here. The Nested Loop
executes 999 times, so perhaps Alena thought that 0.50 was the TOTAL
number of rows across ALL of those executions rather than the AVERAGE
number of rows per execution? Because then 0.50 would indeed be a very
surprising result. Or maybe she just didn't realize that part of the
plan executed 999 times? Or something else?

Alena, if you're willing, please elaborate on what you think is confusing here!

--
Robert Haas
EDB: http://www.enterprisedb.com



Re: explain analyze rows=%.0f

From
Alena Rybakina
Date:
On 06.03.2025 17:13, Robert Haas wrote:
> On Thu, Mar 6, 2025 at 8:30 AM Matthias van de Meent
> <boekewurm+postgres@gmail.com> wrote:
>> On Thu, 6 Mar 2025 at 14:18, Alena Rybakina <a.rybakina@postgrespro.ru> wrote:
>>> Hi! I got a query plan with a strange number of rows. Could you please
>>> help me understand it?
>>>
>>> To be honest I can't understand why 0.50 number of rows here?
>> Because the scan matched only about 0.50 rows per loop on average
>> (500 rows over 999 iterations, 500/999 ~= 0.50) for these plan nodes:
> This is a good and correct explanation, but I'm VERY curious to hear
> more from Alena. Like, Tom expressed the concern before we did this
> that the fractional digits would confuse people, and the fact that
> someone who is a regular participant on this mailing list was one of
> the people confused gives credence to that concern. But I want to know
> what exactly Alena found (or finds) confusing here. The Nested Loop
> executes 999 times, so perhaps Alena thought that 0.50 was the TOTAL
> number of rows across ALL of those executions rather than the AVERAGE
> number of rows per execution? Because then 0.50 would indeed be a very
> surprising result. Or maybe she just didn't realize that part of the
> plan executed 999 times? Or something else?
>
> Alena, if you're willing, please elaborate on what you think is confusing here!

To be honest, I initially took it as the total number of tuples and
couldn't work out how to interpret the result - 0 tuples or 1 tuple in
the end. Perhaps it wasn't quite right to read it that way, but
Matthias's explanation helped me understand why such a strange result
was produced, even though it's unusual to see.

-- 
Regards,
Alena Rybakina
Postgres Professional




Re: explain analyze rows=%.0f

From
Alena Rybakina
Date:
On 06.03.2025 16:30, Matthias van de Meent wrote:
> On Thu, 6 Mar 2025 at 14:18, Alena Rybakina <a.rybakina@postgrespro.ru> wrote:
>> Hi! I got a query plan with a strange number of rows. Could you please
>> help me understand it?
>>
>> To be honest I can't understand why 0.50 number of rows here?
> Because the scan matched only about 0.50 rows per loop on average
> (500 rows over 999 iterations, 500/999 ~= 0.50) for these plan nodes:
>
>>              ->  Nested Loop (actual rows=0.50 loops=999)
>>                    ->  Seq Scan on tb (actual rows=0.50 loops=999)
> And for this, it was 500 rows total in 1000 iterations, which also
> rounds to 0.50:
>
>>      SubPlan 2
>>        ->  Result (actual rows=0.50 loops=1000)
>>              One-Time Filter: ((ta1.id < 1000) AND (InitPlan 1).col1)
> As of ddb17e38 (and its follow-up 95dbd827), we display fractional
> rows-per-loop, with 2 digits of precision, rather than a rounded
> integer. This allows a user to distinguish plan nodes with 0.49
> rows/loop and 0.01 rows/loop, and that can help inform the user about
> how to further optimize their usage of indexes and other optimization
> paths.
>
Thanks for the explanation. Now I understand.

-- 
Regards,
Alena Rybakina
Postgres Professional




Re: explain analyze rows=%.0f

From
Robert Haas
Date:
On Thu, Mar 6, 2025 at 4:18 PM Alena Rybakina <a.rybakina@postgrespro.ru> wrote:
> To be honest, I initially took it as the total number of tuples and
> couldn't figure out for myself how to interpret the result - 0 tuples or
> 1 tuple in the end. Maybe it wasn't quite correct to perceive it that
> way, but Matthias's explanation helped me figure out the reason why such
> a strange result was obtained, although it's not usual to see it.

Yeah, I thought the same back when I first started using PostgreSQL
and had to learn that it wasn't the case. I still think that's what we
*should* be displaying -- I think dividing by nloops obscures the data
users actually want. But it would admittedly be a pretty big
compatibility break at this point, and at least if we display a couple
of decimal places it's more possible to approximate the undivided
value.
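
To make that concrete with the numbers from Alena's plan (toy
arithmetic only, not anything EXPLAIN itself computes):

    #include <stdio.h>

    int main(void)
    {
        double nloops = 999.0;

        /* With two decimal places, rows=0.50 lets us recover roughly the total: */
        printf("approx total = %.1f\n", 0.50 * nloops);   /* 499.5, close to the true 500 */

        /* With the old integer rounding, rows=1 suggests a very different total: */
        printf("approx total = %.1f\n", 1.0 * nloops);    /* 999.0, nearly double the truth */
        return 0;
    }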

--
Robert Haas
EDB: http://www.enterprisedb.com



Re: explain analyze rows=%.0f

From
Ilia Evdokimov
Date:
Hi,

In the stats_ext regression test, there is a function 
check_estimated_rows that returns actual rows as an integer. Currently, 
none of the test cases produce a non-zero fractional part in actual rows.

The question is: should this function be modified to return a fractional 
number instead of an integer?

Personally, I don’t see much value in doing so, because the purpose of 
the test is to compare the estimated row count with the actual number of 
rows. It is also unlikely that there will be test cases where loops > 1, 
and the presence of a fractional part does not change the essence of the 
test.

What do you think?

--
Best regards,
Ilia Evdokimov,
Tantor Labs LLC.




Re: explain analyze rows=%.0f

From
Robert Haas
Date:
On Tue, Mar 11, 2025 at 5:58 AM Ilia Evdokimov
<ilya.evdokimov@tantorlabs.com> wrote:
> In the stats_ext regression test, there is a function
> check_estimated_rows that returns actual rows as an integer. Currently,
> none of the test cases produce a non-zero fractional part in actual rows.
>
> The question is: should this function be modified to return a fractional
> number instead of an integer?
>
> Personally, I don’t see much value in doing so, because the purpose of
> the test is to compare the estimated row count with the actual number of
> rows. It is also unlikely that there will be test cases where loops > 1,
> and the presence of a fractional part does not change the essence of the
> test.
>
> What do you think?

I suppose the way this is currently coded, it will have the effect of
extracting the integer part of the row estimate. That doesn't really
seem like a bad idea to me, so I'm not sure it's worth trying to
adjust anything here.

One thing that I just noticed, though, is that we added two decimal
places to the actual row count, but not the estimated row count. So
now we get stuff like this:

         ->  Seq Scan on pgbench_branches b  (cost=0.00..1.10 rows=10 width=364) (actual time=0.007..0.009 rows=10.00 loops=1)

But why isn't it just as valuable to have two decimal places for the
estimate? I theorize that the cases that are really a problem here are
those where the row count estimate is between 0 and 1 per row, and
rounding to an integer loses all precision.

--
Robert Haas
EDB: http://www.enterprisedb.com



Re: explain analyze rows=%.0f

From
Tom Lane
Date:
Robert Haas <robertmhaas@gmail.com> writes:
> But why isn't it just as valuable to have two decimal places for the
> estimate? I theorize that the cases that are really a problem here are
> those where the row count estimate is between 0 and 1 per row, and
> rounding to an integer loses all precision.

Currently, the planner rounds *all* rowcount estimates to integers
(cf. clamp_row_est()).  Maybe it'd be appropriate to rethink that,
but it's not just a matter of changing EXPLAIN's print format.
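
For readers who haven't looked at it, clamp_row_est() in
src/backend/optimizer/path/costsize.c does roughly the following; this
is a paraphrased sketch from memory, not the verbatim source:

    #include <math.h>

    #define MAXIMUM_ROWCOUNT 1e100   /* sanity cap, value as I recall it; see costsize.c */

    static double
    clamp_row_est(double nrows)
    {
        if (isnan(nrows) || nrows > MAXIMUM_ROWCOUNT)
            nrows = MAXIMUM_ROWCOUNT;     /* guard against NaN / absurd values */
        else if (nrows <= 1.0)
            nrows = 1.0;                  /* never estimate fewer than one row */
        else
            nrows = rint(nrows);          /* round to the nearest integer */

        return nrows;
    }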

            regards, tom lane



Re: explain analyze rows=%.0f

From
Robert Haas
Date:
On Mon, Mar 31, 2025 at 1:49 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Robert Haas <robertmhaas@gmail.com> writes:
> > But why isn't it just as valuable to have two decimal places for the
> > estimate? I theorize that the cases that are really a problem here are
> > those where the row count estimate is between 0 and 1 per row, and
> > rounding to an integer loses all precision.
>
> Currently, the planner rounds *all* rowcount estimates to integers
> (cf. clamp_row_est()).  Maybe it'd be appropriate to rethink that,
> but it's not just a matter of changing EXPLAIN's print format.

Oh, right. I've never really understood why we round off to integers,
but the fact that we don't allow row counts < 1 feels like something
pretty important. My intuition is that it probably helps a lot more
than it hurts, too.

--
Robert Haas
EDB: http://www.enterprisedb.com



Re: explain analyze rows=%.0f

From
Ilia Evdokimov
Date:


On 31.03.2025 22:09, Robert Haas wrote:
Oh, right. I've never really understood why we round off to integers,
but the fact that we don't allow row counts < 1 feels like something
pretty important. My intuition is that it probably helps a lot more
than it hurts, too.


We definitely shouldn’t remove the row counts < 1 check, since there are many places in the planner where we divide by rows; this mechanism was added specifically to prevent division by zero. Also, allowing row estimates below 1 can sometimes make the planner overly optimistic, leading it to prefer cheaper-looking plans that may not perform well in practice, for example choosing a Nested Loop instead of a more appropriate Hash Join.

Allowing fractional rows > 1 might help improve planner accuracy in some cases, but this needs further study to fully understand the impact.
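
As a toy illustration (made-up numbers, not the planner's actual cost
model) of how a sub-1 estimate can ripple upward:

    #include <stdio.h>

    int main(void)
    {
        double outer_rows = 100000.0;

        /* If the inner side of a join were estimated at 0.001 rows per outer
         * row, the join's output estimate would be tiny and everything above
         * it would look correspondingly cheap: */
        printf("unclamped output estimate: %.0f rows\n", outer_rows * 0.001);   /* 100 */

        /* Clamped to 1 row per probe, the output estimate is much larger,
         * which tends to keep the planner from betting on the optimistic case: */
        printf("clamped output estimate:   %.0f rows\n", outer_rows * 1.0);     /* 100000 */
        return 0;
    }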

--
Best regards,
Ilia Evdokimov,
Tantor Labs LLC.

Re: explain analyze rows=%.0f

From
Andrei Lepikhov
Date:
On 3/31/25 19:35, Robert Haas wrote:
> But why isn't it just as valuable to have two decimal places for the
> estimate? I theorize that the cases that are really a problem here are
> those where the row count estimate is between 0 and 1 per row, and
> rounding to an integer loses all precision.
The issues I've seen were about row counts near zero - we lose any
sense of what work the executor has done in a node; loops=10000000
doesn't tell us anything helpful in that case.
Maybe in the next version we could replace the two-fractional-digits
rule with two significant digits. At least that would remove the
boring xxx.00 numbers from EXPLAIN.
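
If "two significant digits" means something like printf's %g
conversion (just my assumption; this is an illustration, not a patch),
the difference looks like this:

    #include <stdio.h>

    int main(void)
    {
        double tiny = 43.0 / 10000000.0;   /* about 0.0000043 rows per loop */
        double big  = 123456.0;

        /* Two fixed decimal places flatten the small value to 0.00: */
        printf("rows=%.2f  rows=%.2f\n", tiny, big);   /* rows=0.00  rows=123456.00 */

        /* Two significant digits keep it visible, at the cost of exponents: */
        printf("rows=%.2g  rows=%.2g\n", tiny, big);   /* rows=4.3e-06  rows=1.2e+05 */
        return 0;
    }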

-- 
regards, Andrei Lepikhov