Re: Failed to delete old ReorderBuffer spilled files - Mailing list pgsql-hackers

From Craig Ringer
Subject Re: Failed to delete old ReorderBuffer spilled files
Date
Msg-id CAMsr+YGRksx2ZzyD4zxMnAUitvJxbo1h=g+RZ97=OQBAqLXKmw@mail.gmail.com
In response to Re: Failed to delete old ReorderBuffer spilled files  (Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp>)
Responses Re: Failed to delete old ReorderBuffer spilled files  (Masahiko Sawada <sawada.mshk@gmail.com>)
List pgsql-hackers
On 22 November 2017 at 12:15, Kyotaro HORIGUCHI <horiguchi.kyotaro@lab.ntt.co.jp> wrote:
At Wed, 22 Nov 2017 12:57:34 +0900, Michael Paquier <michael.paquier@gmail.com> wrote in <CAB7nPqQP52cLEUZJv-1MoCiejNYQ4CGs=tzwhP2oEErvv7R3Bg@mail.gmail.com>
> On Wed, Nov 22, 2017 at 11:49 AM, Craig Ringer <craig@2ndquadrant.com> wrote:
> > On 20 November 2017 at 18:35, atorikoshi
> > <torikoshi_atsushi_z2@lab.ntt.co.jp> wrote:
> >> I put many queries into one transaction and made ReorderBuffer spill
> >> data to disk, and sent SIGKILL to postgres before the end of the
> >> transaction.
> >>
> >> After starting up postgres again, I observed the files spilled to
> >> disk weren't deleted.
> >
> > Since this can only happen on crash exits, and the reorderbuffer data is
> > useless after a decoding backend exits, why don't we just recursively delete
> > the tree contents on Pg startup?
>
> +1. You would just need an extra step after say
> DeleteAllExportedSnapshotFiles() in startup.c. Looks saner to me to do
> so as well.

> The old files are being removed at startup by
> StartupReorderBuffer.

That seems to contradict the statement above, that "after starting up postgres again, I observed the files spilled to disk weren't deleted".
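For anyone following along, the startup cleanup Kyotaro refers to is StartupReorderBuffer(), which from memory does roughly the following (a simplified sketch, not the exact source): walk pg_replslot, and for each slot directory unlink anything that looks like spilled reorderbuffer data.

/* Simplified sketch of StartupReorderBuffer(), from memory -- not exact source */
void
StartupReorderBuffer(void)
{
    DIR        *logical_dir;
    struct dirent *logical_de;

    logical_dir = AllocateDir("pg_replslot");
    while ((logical_de = ReadDir(logical_dir, "pg_replslot")) != NULL)
    {
        DIR        *spill_dir;
        struct dirent *spill_de;
        char        path[MAXPGPATH * 2 + 12];

        /* skip anything that cannot be a replication slot directory */
        if (strcmp(logical_de->d_name, ".") == 0 ||
            strcmp(logical_de->d_name, "..") == 0 ||
            !ReplicationSlotValidateName(logical_de->d_name, DEBUG2))
            continue;

        snprintf(path, sizeof(path), "pg_replslot/%s", logical_de->d_name);

        /* unlink everything in the slot dir that looks like spilled data */
        spill_dir = AllocateDir(path);
        while ((spill_de = ReadDir(spill_dir, path)) != NULL)
        {
            if (strncmp(spill_de->d_name, "xid", 3) == 0)
            {
                char        spill_path[MAXPGPATH * 2 + 12];

                snprintf(spill_path, sizeof(spill_path), "%s/%s",
                         path, spill_de->d_name);
                if (unlink(spill_path) != 0)
                    ereport(PANIC,
                            (errcode_for_file_access(),
                             errmsg("could not remove file \"%s\": %m",
                                    spill_path)));
            }
        }
        FreeDir(spill_dir);
    }
    FreeDir(logical_dir);
}

So files present at startup should indeed be unlinked; it obviously can't do anything about files that only get created after decoding resumes.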
 
> At the time of assertion failure, the files are not of the
> previous run, but they are created after reconnection from the
> subscriber.

Are you saying the problem files do not exist when we start up, but are created and then leaked after logical decoding resumes after an unclean startup?

... Yes, that does appear to be the case, per the original report:

"7. After a while, we can see the files remaining.
  (Immediately after starting publisher, we can not see these files.)"

I was confused by "remaining". They're not remaining, they've been re-created.

But if they're re-created, why are they not recreated correctly after an unclean shutdown? What persistent state is causing that? We should be clobbering saved reorder buffer temp files, snapshots, etc. at startup. The slot state is pretty simple; it'd just be a bit behind.

The key difference seems to be that we hard-kill the server so it can't write anything to clog. The xact is implicitly aborted; we never wrote any xlog record for a commit or abort. The problem is presumably with decoding xacts that were implicitly aborted by the server crash, after we restart the server and resume decoding.

The assertion failure reported is in ReorderBufferRestoreCleanup, which makes sense.

Because the crash means no commit or abort record was ever written to xlog, we never set the buffer's final_lsn in ReorderBufferCommit or ReorderBufferAbort.
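Normally that happens when we decode the commit or abort record; paraphrasing the explicit-abort path from memory (not the exact source):

/* Paraphrased from memory: the explicit-abort path in reorderbuffer.c */
void
ReorderBufferAbort(ReorderBuffer *rb, TransactionId xid, XLogRecPtr lsn)
{
    ReorderBufferTXN *txn;

    txn = ReorderBufferTXNByXid(rb, xid, false, NULL, InvalidXLogRecPtr,
                                false);

    /* unknown transaction, nothing to remove */
    if (txn == NULL)
        return;

    /* this is the assignment we never reach for a crashed transaction */
    txn->final_lsn = lsn;

    /* remove potential on-disk data, and deallocate */
    ReorderBufferCleanupTXN(rb, txn);
}

ReorderBufferCommit does the equivalent with the commit record's LSN.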

Note the comment there:

 * NB: Transactions handled here have to have actively aborted (i.e. have
 * produced an abort record). Implicitly aborted transactions are handled via
 * ReorderBufferAbortOld(); transactions we're just not interested in, but
 * which have committed are handled in ReorderBufferForget().

ReorderBufferAbortOld() is only called from DecodeStandbyOp, in response to an xl_running_xacts record.
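From memory, the relevant bit of decode.c looks roughly like this (wrapped in a standalone function purely for illustration; the wrapper name is made up, the calls inside it are the real ones):

/*
 * Sketch, from memory, of what DecodeStandbyOp() does for
 * XLOG_RUNNING_XACTS.
 */
static void
DecodeRunningXactsSketch(LogicalDecodingContext *ctx, XLogRecordBuffer *buf)
{
    xl_running_xacts *running =
        (xl_running_xacts *) XLogRecGetData(buf->record);

    SnapBuildProcessRunningXacts(ctx->snapshot_builder, buf->origptr, running);

    /*
     * Abort transactions that are still tracked but older than the record's
     * oldestRunningXid: they can only have been implicitly aborted (e.g. by
     * a crash), so no commit/abort record -- and hence no final_lsn -- will
     * ever arrive for them.
     */
    ReorderBufferAbortOld(ctx->reorder, running->oldestRunningXid);
}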

Here's the backtrace.

Core was generated by `postgres: wal sender process postgres [local'.
Program terminated with signal SIGABRT, Aborted.
...
#2  0x00000000008537b7 in ExceptionalCondition (conditionName=conditionName@entry=0x9fcdf6 "!(txn->final_lsn != 0)", errorType=errorType@entry=0x89bcb4 "FailedAssertion", 
    fileName=fileName@entry=0x9fcd04 "reorderbuffer.c", lineNumber=lineNumber@entry=2576) at assert.c:54
#3  0x00000000006fec02 in ReorderBufferRestoreCleanup (rb=rb@entry=0x1b4c370, txn=txn@entry=0x1b5c3b8) at reorderbuffer.c:2576
#4  0x0000000000700693 in ReorderBufferCleanupTXN (rb=rb@entry=0x1b4c370, txn=txn@entry=0x1b5c3b8) at reorderbuffer.c:1145
#5  0x0000000000701516 in ReorderBufferAbortOld (rb=0x1b4c370, oldestRunningXid=558) at reorderbuffer.c:1730
#6  0x00000000006f5a47 in DecodeStandbyOp (ctx=0x1af9ce0, buf=buf@entry=0x7ffd11761200) at decode.c:325
#7  0x00000000006f65bf in LogicalDecodingProcessRecord (ctx=<optimized out>, record=<optimized out>) at decode.c:117
#8  0x00000000007098ab in XLogSendLogical () at walsender.c:2766
#9  0x000000000070a875 in WalSndLoop (send_data=send_data@entry=0x709857 <XLogSendLogical>) at walsender.c:2134
#10 0x000000000070b011 in StartLogicalReplication (cmd=cmd@entry=0x1a9cd68) at walsender.c:1101
#11 0x000000000070b46f in exec_replication_command (cmd_string=cmd_string@entry=0x1afec30 "START_REPLICATION SLOT \"sub\" LOGICAL 0/0 (proto_version '1', publication_names '\"pub\"')") at walsender.c:1527
#12 0x0000000000758809 in PostgresMain (argc=<optimized out>, argv=argv@entry=0x1aab870, dbname=<optimized out>, username=<optimized out>) at postgres.c:4086
#13 0x00000000006e178d in BackendRun (port=port@entry=0x1aa3430) at postmaster.c:4357
#14 0x00000000006e35e9 in BackendStartup (port=port@entry=0x1aa3430) at postmaster.c:4029
#15 0x00000000006e39e3 in ServerLoop () at postmaster.c:1753
#16 0x00000000006e4b36 in PostmasterMain (argc=argc@entry=3, argv=argv@entry=0x1a7c5d0) at postmaster.c:1361
#17 0x000000000065d093 in main (argc=3, argv=0x1a7c5d0) at main.c:228

So it's clear why we can reach ReorderBufferRestoreCleanup, via ReorderBufferAbortOld, with no final_lsn set.

I think your approach of setting final_lsn to the last buffered action is fine.

I wonder if we should be setting it in ReorderBufferAbortOld though, when we know for sure that we're seeing final_lsn==0 in response to this specific situation. That way we don't mask other potential bugs the assertions are there to catch.
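To make the idea concrete, something like this, perhaps (untested sketch written from memory of the data structures, so treat the details as illustrative only):

/* Untested sketch: set final_lsn in ReorderBufferAbortOld() itself */
void
ReorderBufferAbortOld(ReorderBuffer *rb, TransactionId oldestRunningXid)
{
    dlist_mutable_iter it;

    dlist_foreach_modify(it, &rb->toplevel_by_lsn)
    {
        ReorderBufferTXN *txn =
            dlist_container(ReorderBufferTXN, node, it.cur);

        if (!TransactionIdPrecedes(txn->xid, oldestRunningXid))
            return;             /* the rest are still running */

        /*
         * The transaction was implicitly aborted by a crash, so we never saw
         * a commit or abort record and final_lsn was never set.  Fall back
         * to the LSN of the last change we still have buffered, so that
         * ReorderBufferRestoreCleanup() knows how far the on-disk spill
         * files extend.  (If the in-memory change list can be empty for a
         * fully spilled transaction, we'd need to remember the last LSN at
         * serialization time instead.)
         */
        if (txn->serialized &&
            txn->final_lsn == InvalidXLogRecPtr &&
            !dlist_is_empty(&txn->changes))
        {
            ReorderBufferChange *last =
                dlist_tail_element(ReorderBufferChange, node, &txn->changes);

            txn->final_lsn = last->lsn;
        }

        /* remove potential on-disk data, and deallocate this tx */
        ReorderBufferCleanupTXN(rb, txn);
    }
}

That keeps the assertion intact for the paths where final_lsn really should already have been set.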

--
 Craig Ringer                   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services
