Re: Documenting when to retry on serialization failure - Mailing list pgsql-hackers

From Tom Lane
Subject Re: Documenting when to retry on serialization failure
Date
Msg-id 2783115.1648065036@sss.pgh.pa.us
Whole thread Raw
In response to Re: Documenting when to retry on serialization failure  (Simon Riggs <simon.riggs@enterprisedb.com>)
Responses Re: Documenting when to retry on serialization failure  (Simon Riggs <simon.riggs@enterprisedb.com>)
List pgsql-hackers
Simon Riggs <simon.riggs@enterprisedb.com> writes:
> I've tried to sum up the various points from everybody into this doc
> patch. Thanks all for replies.

This seemed rather badly in need of copy-editing.  How do you
like the attached text?

            regards, tom lane

diff --git a/doc/src/sgml/mvcc.sgml b/doc/src/sgml/mvcc.sgml
index da07f3f6c6..cd659dd994 100644
--- a/doc/src/sgml/mvcc.sgml
+++ b/doc/src/sgml/mvcc.sgml
@@ -588,7 +588,7 @@ ERROR:  could not serialize access due to concurrent update
     applications using this level must
     be prepared to retry transactions due to serialization failures.
     In fact, this isolation level works exactly the same as Repeatable
-    Read except that it monitors for conditions which could make
+    Read except that it also monitors for conditions which could make
     execution of a concurrent set of serializable transactions behave
     in a manner inconsistent with all possible serial (one at a time)
     executions of those transactions.  This monitoring does not
@@ -1720,6 +1720,60 @@ SELECT pg_advisory_lock(q.id) FROM
    </sect2>
   </sect1>

+  <sect1 id="mvcc-serialization-failure-handling">
+   <title>Serialization Failure Handling</title>
+
+   <indexterm>
+    <primary>serialization failure</primary>
+   </indexterm>
+   <indexterm>
+    <primary>retryable error</primary>
+   </indexterm>
+
+   <para>
+    Both Repeatable Read and Serializable isolation levels can produce
+    errors that are designed to prevent serialization anomalies.  As
+    previously stated, applications using these levels must be prepared to
+    retry transactions that fail due to serialization errors.  Such an
+    error's message text will vary according to the precise circumstances,
+    but it will always have the SQLSTATE code <literal>40001</literal>
+    (<literal>serialization_failure</literal>).
+   </para>
+
+   <para>
+    It may also be advisable to retry deadlock failures.
+    These have the SQLSTATE code <literal>40P01</literal>
+    (<literal>deadlock_detected</literal>).
+   </para>
+
+   <para>
+    In some circumstances, a failure that is arguably a serialization
+    problem may manifest as a unique-key failure, with SQLSTATE
+    code <literal>23505</literal> (<literal>unique_violation</literal>),
+    or as an exclusion constraint failure, with SQLSTATE
+    code <literal>23P01</literal> (<literal>exclusion_violation</literal>).
+    Hence, it may be advisable to retry these cases as well, bearing in
+    mind that such an error might represent a persistent condition.
+   </para>
+
+   <para>
+    It is important to retry the complete transaction, including all logic
+    that decides which SQL to issue and/or which values to use.
+    Therefore, <productname>PostgreSQL</productname> does not offer an
+    automatic retry facility, since it cannot do so with any guarantee of
+    correctness.
+   </para>
+
+   <para>
+    Transaction retry does not guarantee that the retried transaction will
+    complete; multiple retries may be needed.  In cases with very high
+    contention, it is possible that completion of a transaction may take
+    many attempts.  In cases involving a conflicting prepared transaction,
+    it may not be possible to make progress until the prepared transaction
+    commits or rolls back.
+   </para>
+  </sect1>
+
   <sect1 id="mvcc-caveats">
    <title>Caveats</title>


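For anyone wiring the above guidance into application code, the retry loop the new section describes might look roughly like this. This is an illustrative sketch, not part of the patch: the `DatabaseError` class and its `sqlstate` attribute are stand-ins for whatever the driver actually raises (for example, psycopg exposes the SQLSTATE on its exception objects), and the attempt cap is an arbitrary choice.

```python
# SQLSTATE codes the doc text identifies as retryable.  23505
# (unique_violation) and 23P01 (exclusion_violation) could be added,
# with the caveat that those errors might represent a persistent
# condition, which is one reason to cap the number of attempts.
RETRYABLE_SQLSTATES = {
    "40001",  # serialization_failure
    "40P01",  # deadlock_detected
}

class DatabaseError(Exception):
    """Stand-in for a driver exception carrying an SQLSTATE code."""
    def __init__(self, sqlstate):
        super().__init__(sqlstate)
        self.sqlstate = sqlstate

def run_with_retry(txn_fn, max_attempts=5):
    """Re-run the complete transaction on a retryable failure.

    txn_fn must contain the whole transaction, including all logic that
    decides which SQL to issue and which values to use -- this is why
    the server cannot retry automatically on the application's behalf.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return txn_fn()  # BEGIN ... COMMIT happens inside txn_fn
        except DatabaseError as e:
            if e.sqlstate not in RETRYABLE_SQLSTATES:
                raise  # not a serialization-type failure; don't retry
            if attempt == max_attempts:
                raise  # contention too high, or a conflicting prepared
                       # transaction is blocking progress; give up
```

Under very high contention, as the text notes, even several retries may not be enough, so the caller should still be prepared for the final exception to propagate.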