Re: Showing primitive index scan count in EXPLAIN ANALYZE (for skip scan and SAOP scans) - Mailing list pgsql-hackers

From Alena Rybakina
Subject Re: Showing primitive index scan count in EXPLAIN ANALYZE (for skip scan and SAOP scans)
Date
Msg-id 9a721d24-91fc-4736-9e65-04705ba9fc84@postgrespro.ru
In response to Re: Showing primitive index scan count in EXPLAIN ANALYZE (for skip scan and SAOP scans)  (Peter Geoghegan <pg@bowt.ie>)
Responses Re: Showing primitive index scan count in EXPLAIN ANALYZE (for skip scan and SAOP scans)
List pgsql-hackers
Sorry it took me so long to answer; I had some minor health complications.

On 12.11.2024 23:00, Peter Geoghegan wrote:
On Sun, Nov 10, 2024 at 2:00 PM Alena Rybakina
<a.rybakina@postgrespro.ru> wrote:
Or maybe I was affected by fatigue, but I don't understand this point, to be honest. I see from the documentation and your first email that it shows how many times in total the tuple search is performed during execution of the index scan. Is that not right?
Well, nodes that appear on the inner side of a nested loop join (and
in a few other contexts) generally have their row counts (and a few
other things) divided by the total number of executions. The idea is
that we're showing the average across all executions of the node -- if
the user wants the true absolute number, they're expected to multiply
nrows by nloops themselves. This is slightly controversial behavior,
but it is long established (weirdly, we never divide by nloops for
"Buffers").

I understand what you mean, and I have run into this situation before, when I saw far more actual rows reported than there could possibly be, and it was caused by the number of tuples scanned per loop. [0]

[0] https://www.postgresql.org/message-id/flat/9f4a159b-f527-465f-b82e-38b4b7df812f@postgrespro.ru
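To make the averaging concrete, here is a minimal hypothetical sketch (the table names and the plan numbers are invented, and the planner may of course pick a different plan):

    -- Hypothetical setup: t2 has an index on its join column.
    CREATE TABLE t1 (id int);
    CREATE TABLE t2 (id int PRIMARY KEY);
    INSERT INTO t1 SELECT g FROM generate_series(1, 1000) g;
    INSERT INTO t2 SELECT g FROM generate_series(1, 1000) g;
    ANALYZE t1, t2;

    EXPLAIN (ANALYZE)
    SELECT * FROM t1 JOIN t2 USING (id);

    -- A nested-loop plan might report something like (numbers invented):
    --   Nested Loop (actual rows=1000 loops=1)
    --     -> Seq Scan on t1 (actual rows=1000 loops=1)
    --     -> Index Only Scan using t2_pkey on t2 (actual rows=1 loops=1000)
    -- The inner index scan shows rows=1 because the count is averaged over
    -- loops; the true total is rows * loops = 1 * 1000 = 1000.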

Initial versions of my patch didn't do this. The latest version does
divide like this, though. In general it isn't all that likely that an
inner index scan would have more than a single primitive index scan,
in any case, so which particular behavior I use here (divide vs don't
divide) is not something that I feel strongly about.

I think we should divide them, because dividing the total buffer usage by the number of loops gives the user the average buffer consumption per loop. That gives a clearer picture of the resource intensity per basic unit of work.
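As a sketch of what I mean (the plan fragment and numbers below are invented), dividing the Buffers counter the same way would read as an average per loop:

    -- Hypothetical inner index scan from EXPLAIN (ANALYZE, BUFFERS):
    --   -> Index Only Scan using t2_pkey on t2 (actual rows=1 loops=1000)
    --        Buffers: shared hit=3000
    -- Today "Buffers" is a raw total across all executions of the node.
    -- If it were divided by loops, like the row count is, it would read as
    -- 3000 / 1000 = 3 buffers per loop, i.e. the average cost of a single
    -- execution of the node.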

-- 
Regards,
Alena Rybakina
Postgres Professional
