75.1. How It Works #

If a replan trigger fires during query execution, replanning of the query starts. The partially executed query is interrupted, and an attempt is made to collect information to assist further replanning. If no useful information is found, the query is executed from the beginning, but with replan triggers deactivated. Information on the query statement and node cardinalities is saved. Node information is saved for:

  • Non-parameterized nodes

  • Parameterized nodes whose actual cardinality exceeds the planner's estimate

Replanning continues while the trigger condition is met, until either the configured maximum number of replanning attempts is exceeded or no new information can be collected. The maximum number of replanning attempts is defined by the replan_max_attempts configuration parameter.
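
For instance, the attempt limit can be adjusted per session. A minimal sketch (the value shown is illustrative, not a recommended default):

    SET replan_max_attempts = 2;  -- stop after two replanning attempts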

If a replan trigger is activated, a query that meets the trigger condition is executed in an implicitly created subtransaction. If replanning occurs, this makes it possible to clean up after the previous execution and release system resources, such as memory or disk space.

75.1.1. Replan Triggers #

The following triggers can interrupt a query and launch replanning:

  • Query execution time: the trigger fires when the query runs longer than the specified time. The time is defined by the replan_query_execution_time configuration parameter.

  • Number of processed node tuples: the trigger fires when this number exceeds the planner's estimate multiplied by the value of the replan_overrun_limit configuration parameter.
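
Both triggers can be tuned per session. A minimal sketch using the parameters named above (the numeric values are illustrative only):

    -- fire the time trigger when the query runs longer than this limit
    SET replan_query_execution_time = 10;

    -- fire the tuple-count trigger when a node processes more tuples
    -- than the planner's estimate multiplied by this factor
    SET replan_overrun_limit = 100;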

75.1.2. Viewing Replanning Details #

If replan_show_signature is on, information related to replanning is included in the EXPLAIN ANALYZE output. The SUMMARY section additionally includes the following characteristics:

  • Replan Active — whether a replan trigger was active during query execution.

  • Table Entries — number of tables in the query, including subqueries.

  • Controlled Statements — number of statements that were replanned during query execution.

  • Replanning Attempts — total number of query reruns (replanning attempts).

  • Total Execution Time — total query execution time, including the time of all reruns (replanning attempts).

For each plan node, the following characteristics are included:

  • NodeSign — a 64-bit signature (hash) that uniquely identifies the node within the query plan. Not all nodes are signed: only those where errors in the optimizer's estimation model can lead to choosing a suboptimal query plan.

  • Cardinality — actual cardinality of the plan node observed in previous executions of the query. Only available with the VERBOSE option.

  • Groups Number — number of groups computed for this plan node in previous executions of the query. Only available with the VERBOSE option.

75.1.3. Example #

Example 75.1. Using Replanning

The following example illustrates replanning:

    SET replan_show_signature = 'on';
    SET replan_query_execution_time = 10;
    SET replan_enable = 'on';

    DROP TABLE IF EXISTS replan_test CASCADE;
    CREATE TABLE replan_test WITH (autovacuum_enabled = off) AS
    SELECT 1 AS x, 'abc' AS payload FROM generate_series(1,1E4);
    ANALYZE replan_test;

    INSERT INTO replan_test (x, payload)
    SELECT gs, repeat('a', 256)
    FROM generate_series(1,100) AS gs;


    EXPLAIN ANALYZE VERBOSE
    SELECT count(*)
    FROM (
        SELECT q2.x, q2.payload, max(q2.x)
        FROM (
            SELECT rt1.x, rt1.payload
            FROM replan_test rt1
        ) AS q2
        GROUP BY (q2.x, q2.payload)
        HAVING avg(q2.x) > 30
    ) AS q1, replan_test rt2
    WHERE q1.x = rt2.x;

Output:

  Aggregate  (cost=560.06..560.07 rows=1 width=8) (actual time=4.705..4.707 rows=1 loops=1)
  NodeSign: 4127104911444856927
  Cardinality: -1
  Groups Number: -1
  Output: count(*)
  ->  Hash Join  (cost=232.05..533.39 rows=10667 width=0) (actual time=4.681..4.701 rows=70 loops=1)
      NodeSign: 15478158356720060206
      Cardinality: -1
      Groups Number: -1
      Hash Cond: (rt2.x = q1.x)
      ->  Seq Scan on public.replan_test rt2  (cost=0.00..154.67 rows=10667 width=4) (actual time=0.007..0.785 rows=10100 loops=1)
            NodeSign: 17491169463296369090
            Cardinality: -1
            Groups Number: -1
            Output: rt2.x, rt2.payload
      ->  Hash  (cost=231.99..231.99 rows=5 width=4) (actual time=3.083..3.084 rows=70 loops=1)
            NodeSign: 17790666881744499777
            Output: q1.x
            Buckets: 1024  Batches: 1  Memory Usage: 11kB
            ->  Subquery Scan on q1  (cost=230.42..231.99 rows=5 width=4) (actual time=3.026..3.073 rows=70 loops=1)
                  NodeSign: 17790666881744499777
                  Output: q1.x
                  ->  HashAggregate  (cost=230.42..231.94 rows=5 width=12) (actual time=3.025..3.065 rows=70 loops=1)
                        NodeSign: 13744336777062894583
                        Cardinality: 5
                        Groups Number: 101
                        Output: rt1.x, rt1.payload, NULL::integer
                        Group Key: rt1.x, rt1.payload
                        Filter: (avg(rt1.x) > '30'::numeric)
                        Batches: 1  Memory Usage: 80kB
                        Rows Removed by Filter: 31
                        ->  Seq Scan on public.replan_test rt1  (cost=0.00..154.67 rows=10100 width=8) (actual time=0.004..0.865 rows=10100 loops=1)
                              NodeSign: 14925644762129093451
                              Cardinality: 10100
                              Groups Number: -1
                              Output: rt1.x, rt1.payload
  Planning Time: 0.135 ms
  Execution Time: 4.746 ms
  Replan Active: true
  Table Entries: 2
  Controlled Statements: 1
  Replanning Attempts: 1
  Total Execution Time: 15.530 ms