Re: backup manifests and contemporaneous buildfarm failures - Mailing list pgsql-hackers

From Petr Jelinek
Subject Re: backup manifests and contemporaneous buildfarm failures
Date
Msg-id d80c1566-7c1c-b2df-d4f6-509535ac52ef@2ndquadrant.com
In response to Re: backup manifests and contemporaneous buildfarm failures  (Andres Freund <andres@anarazel.de>)
List pgsql-hackers
On 04/04/2020 05:06, Andres Freund wrote:
> Hi,
> 
> Peter, Petr, CCed you because it's probably a bug somewhere around the
> initial copy code for logical replication.
> 
> 
> On 2020-04-03 20:48:09 -0400, Robert Haas wrote:
>> 'serinus' is also failing. This is less obviously related:
> 
> Hm. Tests passed once since then.
> 
> 
>> 2020-04-04 02:08:57.299 CEST [5e87d019.506c1:4] LOG:  received
>> replication command: CREATE_REPLICATION_SLOT
>> "tap_sub_16390_sync_16384" TEMPORARY LOGICAL pgoutput USE_SNAPSHOT
>> 2020-04-04 02:08:57.299 CEST [5e87d019.506c1:5] ERROR:  replication
>> slot "tap_sub_16390_sync_16384" already exists
> 
> That already seems suspicious. I checked the following (successful) run
> and I did not see that in the stage's logs.
> 
> Looking at the failing log, it fails because for some reason there are
> multiple rounds (one due to a refresh, one due to an intentional replication
> failure) of copying the relation. Each creates its own temporary slot.
> 
> first time:
> 2020-04-04 02:08:57.276 CEST [5e87d019.506bd:1] LOG:  connection received: host=[local]
> 2020-04-04 02:08:57.278 CEST [5e87d019.506bd:4] LOG:  received replication command: CREATE_REPLICATION_SLOT "tap_sub_16390_sync_16384" TEMPORARY LOGICAL pgoutput USE_SNAPSHOT
> 2020-04-04 02:08:57.282 CEST [5e87d019.506bd:9] LOG:  statement: COPY public.tab_rep TO STDOUT
> 2020-04-04 02:08:57.284 CEST [5e87d019.506bd:10] LOG:  disconnection: session time: 0:00:00.007 user=bf database=postgres host=[local]
> 
> second time:
> 2020-04-04 02:08:57.288 CEST [5e87d019.506bf:1] LOG:  connection received: host=[local]
> 2020-04-04 02:08:57.289 CEST [5e87d019.506bf:4] LOG:  received replication command: CREATE_REPLICATION_SLOT "tap_sub_16390_sync_16384" TEMPORARY LOGICAL pgoutput USE_SNAPSHOT
> 2020-04-04 02:08:57.293 CEST [5e87d019.506bf:9] LOG:  statement: COPY public.tab_rep TO STDOUT
> 
> third time:
> 2020-04-04 02:08:57.297 CEST [5e87d019.506c1:1] LOG:  connection received: host=[local]
> 2020-04-04 02:08:57.299 CEST [5e87d019.506c1:4] LOG:  received replication command: CREATE_REPLICATION_SLOT "tap_sub_16390_sync_16384" TEMPORARY LOGICAL pgoutput USE_SNAPSHOT
> 2020-04-04 02:08:57.299 CEST [5e87d019.506c1:5] ERROR:  replication slot "tap_sub_16390_sync_16384" already exists
> 
> Note that the connection from the second attempt has not yet
> disconnected. Hence the error about the replication slot already
> existing - it's a temporary replication slot that'd otherwise already
> have been dropped.
> 
> 
> Seems the logical rep code needs to do something about this race?
> 
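The failure mode Andres describes above can be illustrated with a minimal, hypothetical sketch (plain Python, not PostgreSQL code; `SlotManager` and the session ids are invented for the illustration): a TEMPORARY replication slot is dropped only when its creating session disconnects, so a retried sync worker that reuses the same slot name before the previous walsender session is gone hits "already exists".

```python
# Hypothetical model of temporary replication slot lifetime:
# the slot exists until the owning session disconnects.

class SlotManager:
    def __init__(self):
        self.slots = {}  # slot name -> owning session id

    def create_temporary_slot(self, name, session):
        if name in self.slots:
            raise RuntimeError(f'replication slot "{name}" already exists')
        self.slots[name] = session

    def session_disconnect(self, session):
        # Temporary slots vanish with their session.
        self.slots = {n: s for n, s in self.slots.items() if s != session}

mgr = SlotManager()
# Second sync attempt creates the slot, then fails on a duplicate key
# and exits -- but its session has not yet disconnected...
mgr.create_temporary_slot("tap_sub_16390_sync_16384", session=2)
# ...when the third sync worker already tries the same slot name:
try:
    mgr.create_temporary_slot("tap_sub_16390_sync_16384", session=3)
except RuntimeError as e:
    print(e)  # replication slot "tap_sub_16390_sync_16384" already exists
# Once session 2 is gone, the same CREATE succeeds.
mgr.session_disconnect(2)
mgr.create_temporary_slot("tap_sub_16390_sync_16384", session=3)
```

This matches the timestamps in the logs: the third CREATE_REPLICATION_SLOT arrives at 02:08:57.299, while the second session has not logged a disconnection yet.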

The downstream:

> 2020-04-04 02:08:57.275 CEST [5e87d019.506bc:1] LOG:  logical replication table synchronization worker for subscription "tap_sub", table "tab_rep" has started
> 2020-04-04 02:08:57.282 CEST [5e87d019.506bc:2] ERROR:  duplicate key value violates unique constraint "tab_rep_pkey"
> 2020-04-04 02:08:57.282 CEST [5e87d019.506bc:3] DETAIL:  Key (a)=(1) already exists.
> 2020-04-04 02:08:57.282 CEST [5e87d019.506bc:4] CONTEXT:  COPY tab_rep, line 1
> 2020-04-04 02:08:57.283 CEST [5e87d018.50689:5] LOG:  background worker "logical replication worker" (PID 329404) exited with exit code 1
> 2020-04-04 02:08:57.287 CEST [5e87d019.506be:1] LOG:  logical replication table synchronization worker for subscription "tap_sub", table "tab_rep" has started
> 2020-04-04 02:08:57.293 CEST [5e87d019.506be:2] ERROR:  duplicate key value violates unique constraint "tab_rep_pkey"
> 2020-04-04 02:08:57.293 CEST [5e87d019.506be:3] DETAIL:  Key (a)=(1) already exists.
> 2020-04-04 02:08:57.293 CEST [5e87d019.506be:4] CONTEXT:  COPY tab_rep, line 1
> 2020-04-04 02:08:57.295 CEST [5e87d018.50689:6] LOG:  background worker "logical replication worker" (PID 329406) exited with exit code 1
> 2020-04-04 02:08:57.297 CEST [5e87d019.506c0:1] LOG:  logical replication table synchronization worker for subscription "tap_sub", table "tab_rep" has started
> 2020-04-04 02:08:57.299 CEST [5e87d019.506c0:2] ERROR:  could not create replication slot "tap_sub_16390_sync_16384": ERROR:  replication slot "tap_sub_16390_sync_16384" already exists
> 2020-04-04 02:08:57.300 CEST [5e87d018.50689:7] LOG:  background worker "logical replication worker" (PID 329408) exited with exit code 

Looks like we are simply retrying so fast that the upstream will not have 
finished cleanup after the second try by the time we already run the third one.

The last_start_times tracking is supposed to protect against that, so I 
guess there is some issue with how that works.
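For illustration, here is a rough sketch of how that kind of last-start-time throttle is meant to behave (plain Python, not the actual PostgreSQL implementation; the function name, the relid key, and the 5-second interval are all assumptions for the sketch): a sync worker for a table should not be relaunched until a retry interval has elapsed since its previous start, which would rule out the three starts within ~25 ms seen in the log above.

```python
# Hypothetical throttle: remember when a sync worker for each table
# was last launched, and refuse to relaunch it too soon.

RETRY_INTERVAL = 5.0    # seconds; assumed value for the sketch
last_start_times = {}   # table oid -> timestamp of last launch

def maybe_launch_sync_worker(relid, now):
    last = last_start_times.get(relid)
    if last is not None and now - last < RETRY_INTERVAL:
        return False    # too soon; skip this launch cycle
    last_start_times[relid] = now
    return True

assert maybe_launch_sync_worker(16384, now=0.000) is True
assert maybe_launch_sync_worker(16384, now=0.012) is False  # throttled
assert maybe_launch_sync_worker(16384, now=0.024) is False  # still throttled
assert maybe_launch_sync_worker(16384, now=6.000) is True
```

If the throttle worked as sketched, the second and third launches at 02:08:57.287 and 02:08:57.297 would have been skipped; that they happened anyway is why something about the bookkeeping looks broken.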

-- 
Petr Jelinek
2ndQuadrant - PostgreSQL Solutions for the Enterprise
https://www.2ndQuadrant.com/


