From 26cd0203819f8ed03a63fa998000f62af80c70db Mon Sep 17 00:00:00 2001
From: Maciek Sakrejda
Date: Fri, 26 Sep 2025 17:17:41 -0700
Subject: [PATCH v3] Improve EXPLAIN docs

- clarify fractional rows in loops
- add some basic info on parallel execution

---
 doc/src/sgml/perform.sgml | 64 ++++++++++++++++++++++++++++++++++++---
 1 file changed, 60 insertions(+), 4 deletions(-)

diff --git a/doc/src/sgml/perform.sgml b/doc/src/sgml/perform.sgml
index 5f6f1db0467..f0747059f50 100644
--- a/doc/src/sgml/perform.sgml
+++ b/doc/src/sgml/perform.sgml
@@ -756,13 +756,69 @@ WHERE t1.unique1 < 10 AND t1.unique2 = t2.unique2;
 loops value reports the total number of executions of the node, and the
 actual time and rows values shown are averages per-execution. This is
 done to make the numbers
-comparable with the way that the cost estimates are shown. Multiply by
-the loops value to get the total time actually spent in
-the node. In the above example, we spent a total of 0.030 milliseconds
-executing the index scans on tenk2.
+comparable with the way that the cost estimates are shown. Multiply by
+the loops value to get the total time actually spent in the node and the
+total number of rows processed by the node across all executions. In the
+above example, we spent a total of 0.030 milliseconds executing the
+index scans on tenk2, and they handled a total of 10 rows.
+
+Parallel query execution can also cause nodes to be executed more than
+once; this, too, is reflected in the loops value.
+We can change some planner settings to have the planner choose a
+parallel plan for the above query:
+
+SET min_parallel_table_scan_size = 0;
+SET parallel_tuple_cost = 0;
+SET parallel_setup_cost = 0;
+
+EXPLAIN ANALYZE SELECT *
+FROM tenk1 t1, tenk2 t2
+WHERE t1.unique1 < 10 AND t1.unique2 = t2.unique2;
+                                                              QUERY PLAN
+--------------------------------------------------------------------&zwsp;--------------------------------------------------------------------&zwsp;---
+ Gather  (cost=4.65..70.96 rows=10 width=488) (actual time=1.161..11.655 rows=10.00 loops=1)
+   Workers Planned: 2
+   Workers Launched: 2
+   Buffers: shared hit=78 read=6
+   ->  Nested Loop  (cost=4.65..70.96 rows=4 width=488) (actual time=0.247..0.317 rows=3.33 loops=3)
+         Buffers: shared hit=78 read=6
+         ->  Parallel Bitmap Heap Scan on tenk1 t1  (cost=4.36..39.31 rows=4 width=244) (actual time=0.228..0.249 rows=3.33 loops=3)
+               Recheck Cond: (unique1 < 10)
+               Heap Blocks: exact=10
+               Buffers: shared hit=54
+               ->  Bitmap Index Scan on tenk1_unique1  (cost=0.00..4.36 rows=10 width=0) (actual time=0.438..0.439 rows=10.00 loops=1)
+                     Index Cond: (unique1 < 10)
+                     Index Searches: 1
+                     Buffers: shared hit=2
+         ->  Index Scan using tenk2_unique2 on tenk2 t2  (cost=0.29..7.90 rows=1 width=244) (actual time=0.016..0.017 rows=1.00 loops=10)
+               Index Cond: (unique2 = t1.unique2)
+               Index Searches: 10
+               Buffers: shared hit=24 read=6
+ Planning:
+   Buffers: shared hit=327 read=3
+ Planning Time: 4.781 ms
+ Execution Time: 11.858 ms
+(22 rows)
+
+The parallel bitmap heap scan was executed three separate times, in
+parallel: once in the leader (since parallel_leader_participation is on
+by default) and once in each of the two launched workers. As with
+repeated executions in a sequential plan, the rows and actual time
+values shown are per-worker averages. Multiply by the loops value to get
+the total number of rows processed by the node across all workers.
+The total time spent in all workers can be calculated in the same way,
+but note that this time is spent concurrently, so it is not equivalent
+to the elapsed wall-clock time spent in the node.
+
 In some cases EXPLAIN ANALYZE shows additional execution
 statistics beyond the plan node execution times and row counts. For
 example, Sort and Hash nodes provide extra information:
-- 
2.43.0
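As an aside, not part of the patch itself: the averaging rule the patch documents (per-loop averages, so totals are recovered as average times loops) can be checked with a few lines of arithmetic. This is a minimal illustration using the numbers from the parallel plan above; the helper name total_rows is purely hypothetical.

```python
def total_rows(avg_rows_per_loop: float, loops: int) -> float:
    """Total rows processed by a node across all executions,
    recovered from the per-loop average that EXPLAIN ANALYZE reports."""
    return avg_rows_per_loop * loops

# Parallel Bitmap Heap Scan: rows=3.33 (per-worker average), loops=3
# (leader plus two workers)
print(round(total_rows(3.33, 3)))   # prints 10

# Inner Index Scan on tenk2: rows=1.00 per execution, loops=10
print(round(total_rows(1.00, 10)))  # prints 10
```

The same multiplication applies to the actual time figures, with the caveat noted in the patch: for parallel nodes the summed time is spent concurrently, so it is not comparable to wall-clock time.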