Thread: Fixing or diagnosing Canceled on identification as a pivot, during write


From: "Andrus"
After switching to the PostgreSQL 9.1 serializable transaction isolation level for all transactions, while posting a single document the error
 
40001:ERROR: could not serialize access due to read/write dependencies among transactions
Reason code: Canceled on identification as a pivot, during write.;
 
started to appear in the log file.
 
The code which causes them is below.
 
The code involves only a single document (in this example, id 95162). This document is probably not accessed by others.
How can this error be fixed or diagnosed?
 
tasutud1 is a temporary table created earlier in the transaction:
 
CREATE TEMP TABLE tasutud1 (dokumnr INTEGER, tasutud NUMERIC(1)) ON COMMIT DROP
 
The other tables are permanent tables updated and accessed concurrently by 7 users.
 
 
The code where the exception occurs is:
 
CREATE TEMP TABLE ids(dokumnr INT) ON COMMIT DROP;
INSERT INTO ids VALUES(95162);
analyze ids;
UPDATE DOK set
      kinnitatud = TRUE,
      doksumma=CASE WHEN TRUE THEN COALESCE(doksumma.doksumma,0) ELSE 0 END,
      tasumata =CASE WHEN TRUE AND dok.krdokumnr IS NULL and
      dok.doktyyp IN ('G','O') THEN
           doksumma.doksumma-COALESCE(doksumma.tasutud,0) ELSE 0 END
     FROM
(SELECT
    ids.dokumnr,
    SUM( CASE WHEN rid.toode is NULL OR LENGTH(RTRIM(rid.toode))>2 OR toode.grupp<>'S' or
               (STRPOS(toode.klass,'T')!=0 AND STRPOS(toode.klass,'E')=0)
      THEN
        ROUND(COALESCE(rid.hind,0)*CASE WHEN COALESCE(rid.kogus,0)=0 THEN 1 ELSE rid.kogus END*CASE WHEN COALESCE(rid.kogpak,0)=0 THEN 1 ELSE rid.kogpak END,2) ELSE 0 END ) AS doksumma,
    max(tasutud1.tasutud) as tasutud
  FROM ids
  JOIN dok USING(dokumnr)
  JOIN rid USING(dokumnr)
  LEFT JOIN toode USING(toode)
  LEFT JOIN tasutud1 ON tasutud1.dokumnr=ids.dokumnr
WHERE not rid.fifoexpens and not rid.calculrow
  and (not dok.inventuur or rid.kogus<>0 )
GROUP BY 1
) doksumma
left join bilkaib on bilkaib.dokumnr=doksumma.dokumnr and bilkaib.alusdok='LO'
WHERE dok.dokumnr=doksumma.dokumnr
 
Should this code be split into multiple commands to find which part causes the exception, or is there another idea?
 
Andrus.

Re: Fixing or diagnosing Canceled on identification as a pivot, during write

From: "Kevin Grittner"
"Andrus" <kobruleht2@hot.ee> wrote:
> After switching to the PostgreSQL 9.1 serializable transaction
> isolation level for all transactions, while posting a single
> document the error
>
> 40001:ERROR: could not serialize access due to read/write
> dependencies among transactions
> Reason code: Canceled on identification as a pivot, during write.;
>
> started to appear in the log file.

This means that the database transaction was canceled to prevent
data anomalies from a race condition between different database
transactions.  The documentation says "applications using this level
must be prepared to retry transactions due to serialization
failures."  Are these transactions succeeding when they are retried?
What percentage of statements are requiring a retry?  If the
percentage is high enough to cause concern, have you tried the
things recommended in the "For optimal performance" section of the
Serializable docs?

http://www.postgresql.org/docs/9.1/interactive/transaction-iso.html#XACT-SERIALIZABLE

> Code which causes them is below.
>
> The code involves only a single document (in this example, id
> 95162). This document is probably not accessed by others.

It is not necessarily this statement which is causing the conflict;
it might be getting canceled because it happens to be the next
statement run in the transaction after the race condition is
detected based on other statements.

> How can this error be fixed or diagnosed?

These are not something that you necessarily need to "fix" -- they
are necessary if you want to use serializable transactions to
protect your data integrity rather than taking out locks which
cause blocking.

> tasutud1 is temporary table created in transaction earlier:
>
> CREATE TEMP TABLE tasutud1 (dokumnr INTEGER, tasutud NUMERIC(1))
> ON COMMIT DROP

Temporary tables do not contribute to this sort of event.

> Other tables are permanent tables updated and accessed by 7 users
> concurrently.
>
>
> UPDATE DOK set
>       kinnitatud = TRUE,
>       doksumma=CASE WHEN TRUE THEN COALESCE(doksumma.doksumma,0)
>         ELSE 0 END,
>       tasumata =CASE WHEN TRUE AND dok.krdokumnr IS NULL and
>       dok.doktyyp IN ('G','O') THEN
>            doksumma.doksumma-COALESCE(doksumma.tasutud,0) ELSE 0
>         END
>      FROM
> (SELECT
>     ids.dokumnr,
>     SUM( CASE WHEN rid.toode is NULL OR LENGTH(RTRIM(rid.toode))>2
>   OR toode.grupp<>'S' or
>                (STRPOS(toode.klass,'T')!=0 AND
>                 STRPOS(toode.klass,'E')=0)
>       THEN
>         ROUND(COALESCE(rid.hind,0)*CASE WHEN
>           COALESCE(rid.kogus,0)=0 THEN 1
>       ELSE rid.kogus END*CASE WHEN COALESCE(rid.kogpak,0)=0 THEN 1
>          ELSE rid.kogpak
> END,2) ELSE 0 END ) AS doksumma,
>     max(tasutud1.tasutud) as tasutud
>   FROM ids
>   JOIN dok USING(dokumnr)
>   JOIN rid USING(dokumnr)
>   LEFT JOIN toode USING(toode)
>   LEFT JOIN tasutud1 ON tasutud1.dokumnr=ids.dokumnr
> WHERE not rid.fifoexpens and not rid.calculrow
>   and (not dok.inventuur or rid.kogus<>0 )
> GROUP BY 1
> ) doksumma
> left join bilkaib on bilkaib.dokumnr=doksumma.dokumnr and
> bilkaib.alusdok='LO'
> WHERE dok.dokumnr=doksumma.dokumnr
>
> Should this code be split into multiple commands to find which
> part causes the exception, or is there another idea?

Splitting up this statement is probably not a good idea.  You might
want to look at the overall transaction and make sure that it is a
sensible grouping of work -- in particular that for correctness
everything written by it should be persisted and become visible
*atomically*.  If that is *not* required for correctness, then
splitting the transaction into smaller units of work might be a good
idea.

The first thing to do is to ensure that you are running through
some sort of framework which will automatically retry any
transaction which fails with SQLSTATE 40001.  You should expect some
exceptions like this, although in most workloads it is a fraction of
a percent.  They can happen on any statement in the transaction,
including COMMIT.
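Such a retry framework can be a thin wrapper around the code that runs one transaction. Below is a minimal, hypothetical sketch in Python, assuming a psycopg2-style driver that exposes the SQLSTATE through a `pgcode` attribute on its exceptions; the names `run_with_retry` and `txn_fn` are illustrative, not from any library.

```python
import time

SERIALIZATION_FAILURE = "40001"  # SQLSTATE for serialization failures

def run_with_retry(txn_fn, max_attempts=5, backoff=0.1):
    """Run txn_fn() and return its result; if it raises an error
    whose SQLSTATE is 40001, retry it up to max_attempts times."""
    for attempt in range(1, max_attempts + 1):
        try:
            return txn_fn()
        except Exception as exc:
            sqlstate = getattr(exc, "pgcode", None)  # psycopg2-style attribute
            if sqlstate != SERIALIZATION_FAILURE or attempt == max_attempts:
                raise  # not a serialization failure, or attempts exhausted
            time.sleep(backoff * attempt)  # simple linear backoff before retrying
```

Each call to `txn_fn` should begin a fresh transaction and roll back on failure; the wrapper only decides whether to run it again.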

If it is happening enough to cause a significant performance hit,
please review the documentation cited above and take the suggested
steps.  (Declare transactions READ ONLY where possible, don't leave
connections idle in transaction for extended periods, adjust
configuration settings, etc.)

If you're still having problems at that point, it would be helpful
to know more about your overall mix of transactions.

-Kevin