Thread: Hot standby v5 patch assertion failure

Hot standby v5 patch assertion failure

From: Mark Kirkwood
Date: Mon, 2008-11-03 12:16 +1300

Trying out a few different scenarios I ran across this:

1/ Set up master and replica with replica using pg_standby
2/ Create a new database ("bench" in my case)
3/ Initialize pgbench schema size 100
4/ Run with 2 clients and 10000 transactions
5/ Replica gets assertion failure
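
In pgbench terms, steps 2-4 were roughly (scale factor 100, 2 clients,
10000 transactions per client):

    createdb bench
    pgbench -i -s 100 bench
    pgbench -c 2 -t 10000 bench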

This is Postgres HEAD from 2nd Nov (NZST) with the v5 patch applied, on
FreeBSD 7.1-PRERELEASE.
Here are the last few log entries:

DEBUG:  executing restore command "pg_standby -l -d -s 2 -t 
/tmp/pgsql.trigger.5439 /data0/pgarchive/8.4 00000001000000000000006F 
pg_xlog/RECOVERYXLOG 000000010000000000000069 2>>standby.log"
LOG:  restored log file "00000001000000000000006F" from archive
DEBUG:  RecordKnown xid 6999 parent 0 slot 99 latestObsvXid 6998 
firstXid t firstSubXid f markSubtrans f
CONTEXT:  xlog redo update: rel 1663/16384/16397; tid 141396/12; new 
158829/8
DEBUG:  start recovery xid = 6999 lsn = 0/6F001708
CONTEXT:  xlog redo update: rel 1663/16384/16397; tid 141396/12; new 
158829/8
DEBUG:  removing recovery locks: slot 99
CONTEXT:  xlog redo commit: 2008-11-03 11:57:18.958241+13
DEBUG:  RecordKnown xid 7000 parent 0 slot 98 latestObsvXid 6999 
firstXid t firstSubXid f markSubtrans f
CONTEXT:  xlog redo update: rel 1663/16384/16397; tid 32133/24; new 
158828/58
DEBUG:  start recovery xid = 7000 lsn = 0/6F0055C8
CONTEXT:  xlog redo update: rel 1663/16384/16397; tid 32133/24; new 
158828/58
DEBUG:  removing recovery locks: slot 98
CONTEXT:  xlog redo commit: 2008-11-03 11:57:18.963507+13
DEBUG:  removing recovery locks: slot 99
CONTEXT:  xlog redo commit: 2008-11-03 11:57:18.967145+13
DEBUG:  RecordKnown xid 7002 parent 0 slot 98 latestObsvXid 7000 
firstXid t firstSubXid f markSubtrans f
CONTEXT:  xlog redo update: rel 1663/16384/16397; tid 9614/62; new 158828/59
DEBUG:  start recovery xid = 7002 lsn = 0/6F012EE4
CONTEXT:  xlog redo update: rel 1663/16384/16397; tid 9614/62; new 158828/59
TRAP: FailedAssertion("!(!((UnobservedXids[index]) != ((TransactionId) 
0)))", File: "procarray.c", Line: 2037)
DEBUG:  reaping dead processes
LOG:  startup process (PID 12600) was terminated by signal 6: Abort trap
LOG:  aborting startup due to startup process failure
DEBUG:  proc_exit(1)
DEBUG:  shmem_exit(1)
DEBUG:  exit(1)


regards

Mark


Re: Hot standby v5 patch assertion failure

From: Simon Riggs
Date: Mon, 2008-11-03 06:41 +0000

On Mon, 2008-11-03 at 12:16 +1300, Mark Kirkwood wrote:

> Trying out a few different scenarios I ran across this:
> 
> CONTEXT:  xlog redo update: rel 1663/16384/16397; tid 9614/62; new 158828/59
> DEBUG:  start recovery xid = 7002 lsn = 0/6F012EE4
> CONTEXT:  xlog redo update: rel 1663/16384/16397; tid 9614/62; new 158828/59
> TRAP: FailedAssertion("!(!((UnobservedXids[index]) != ((TransactionId) 
> 0)))", File: "procarray.c", Line: 2037)

OK, thanks Mark. I'll start looking at it now.

--
 Simon Riggs           www.2ndQuadrant.com
 PostgreSQL Training, Services and Support



Re: Hot standby v5 patch assertion failure

From: Simon Riggs

On Mon, 2008-11-03 at 06:41 +0000, Simon Riggs wrote:
> On Mon, 2008-11-03 at 12:16 +1300, Mark Kirkwood wrote:
> 
> > Trying out a few different scenarios I ran across this:
> > 
> > CONTEXT:  xlog redo update: rel 1663/16384/16397; tid 9614/62; new 158828/59
> > DEBUG:  start recovery xid = 7002 lsn = 0/6F012EE4
> > CONTEXT:  xlog redo update: rel 1663/16384/16397; tid 9614/62; new 158828/59
> > TRAP: FailedAssertion("!(!((UnobservedXids[index]) != ((TransactionId) 
> > 0)))", File: "procarray.c", Line: 2037)
> 
> OK, thanks Mark. I'll start looking at it now.

It's nice to know the exact line something fails on. I'd instrumented
the whole of the UnobservedXids code to trap failures, and I've already
caught a couple of errors that way during development. But what to do
about it?
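
For reference, the trapped condition is just what Assert() prints for a
check that an UnobservedXids slot about to be written is still empty,
i.e. roughly:

    /* the slot must still hold InvalidTransactionId (xid 0) */
    Assert(!TransactionIdIsValid(UnobservedXids[index]));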

I'm thinking the best way to handle this is to simplify this part of
the code somewhat, rather than continue tweaking it. The code attempts
to optimise the removal of UnobservedXids, but that now feels like a
premature optimisation. So I can probably drop ~100 lines of code.

I'm now adding the btree logic as well, and updating the patch to
current HEAD. I'll return with an updated patch as soon as all that's
done and I've run a complete re-test.

--
 Simon Riggs           www.2ndQuadrant.com
 PostgreSQL Training, Services and Support



Re: Hot standby v5 patch assertion failure

From: Simon Riggs

On Mon, 2008-11-03 at 12:16 +1300, Mark Kirkwood wrote:
> Trying out a few different scenarios I ran across this:
>
> 1/ Set up master and replica with replica using pg_standby
> 2/ Create a new database ("bench" in my case)
> 3/ Initialize pgbench schema size 100
> 4/ Run with 2 clients and 10000 transactions
> 5/ Replica gets assertion failure

I've been unable to reproduce this error in more than 2 days of bashing.
The bash test I use is a pgbench variant designed to cause write
contention, while at the same time running reads against those same
blocks on standby, plus running parallel installcheck.

I now suspect there was a problem in ProcArrayClearUnobservedXids(), so
I clear the array each time, whether or not we are in assert mode,
i.e. better hygiene around reused data structures. So I *haven't*
reworked my earlier code, just checked it all again.
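
In outline, the clear is now unconditional. A sketch of the shape of it
(the array bound name below is illustrative, not the patch's actual
identifier):

    void
    ProcArrayClearUnobservedXids(void)
    {
        int     index;

        /* reset every slot, not just the ones we think are in use */
        for (index = 0; index < MaxUnobservedXids; index++)
            UnobservedXids[index] = InvalidTransactionId;
    }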

So, new patch enclosed. This fixes everything reported so far, plus
another 2 bugs I found and fixed during re-test.

Outstanding items currently:
* btree VACUUM code - only partially coded so far
* complete prepared xacts support
* README file update with tech overview of the patch
* hash index handling (or any index type that doesn't write WAL)

I expect to roll out v6 with the first three of these items next week.

So this patch has 100% of what is intended for "infrastructure changes
for recovery".

I've also profiled the startup process. Those tests show that the
overhead of the Hot Standby code over normal archive recovery is around
a 4% CPU increase on a pgbench -c 4 test. None of the functions added
for Hot Standby appears in the top 10 profiled. Given that recovery is
I/O bound, this probably means no decrease in performance in practice,
though other results are welcome. [Profile results at bottom]
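
For anyone wanting to repeat the measurement: this is an ordinary gprof
profile. Something like the following should work (paths and flags
illustrative; a -pg build writes gmon.out into each process's working
directory on exit):

    ./configure CFLAGS="-O2 -pg"
    make && make install
    # set up the standby, run recovery to completion, then:
    gprof /usr/local/pgsql/bin/postgres gmon.out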

The startup process no longer performs restartpoints, so overall this
will be faster than 8.3, even before we consider Koichi's tuning patch.


Each sample counts as 0.01 seconds.
  %   cumulative   self              self     total
time   seconds   seconds    calls  ms/call  ms/call  name
10.34      0.03     0.03   138436     0.00     0.00  hash_search_with_hash_value
10.34      0.06     0.03    31559     0.00     0.00  PageAddItem
10.34      0.09     0.03        1    30.00   229.58  StartupXLOG
  6.90      0.11     0.02    19466     0.00     0.01  heap_xlog_update
  6.90      0.13     0.02                             smgrwrite
  3.45      0.14     0.01   272400     0.00     0.00  transtime
  3.45      0.15     0.01   126284     0.00     0.00  hash_any
  3.45      0.16     0.01    84249     0.00     0.00  hash_search
  3.45      0.17     0.01    84039     0.00     0.00  smgropen
  3.45      0.18     0.01    42180     0.00     0.00  smgrnblocks
  3.45      0.19     0.01    42019     0.00     0.00  ReadBuffer_common
  3.45      0.20     0.01    42015     0.00     0.00  XLogReadBufferExtended
  3.45      0.21     0.01    37772     0.00     0.00  timesub
  3.45      0.22     0.01    11972     0.00     0.00  PageHeaderIsValid
  3.45      0.23     0.01    11972     0.00     0.00  mdread
  3.45      0.24     0.01    11626     0.00     0.00  Insert
  3.45      0.25     0.01     6480     0.00     0.00  RecordKnownAssignedTransactionIds (*)
  3.45      0.26     0.01     1121     0.01     0.01  tzload
  3.45      0.27     0.01      812     0.01     0.01  heap_page_prune_execute
  3.45      0.28     0.01        8     1.25     1.25  element_alloc
  1.72      0.29     0.01    46010     0.00     0.00  TransactionIdFollowsOrEquals
  1.72      0.29     0.01                             TransactionIdFollows
  0.00      0.29     0.00   410293     0.00     0.00  itemoffcompare
  0.00      0.29     0.00   200330     0.00     0.00  leaps_thru_end_of
  0.00      0.29     0.00   197907     0.00     0.00  detzcode
  0.00      0.29     0.00   177950     0.00     0.00  detzcode64
  0.00      0.29     0.00   140873     0.00     0.00  LWLockAcquire
  0.00      0.29     0.00   140873     0.00     0.00  LWLockRelease
  0.00      0.29     0.00   126255     0.00     0.00  tag_hash
  0.00      0.29     0.00   114891     0.00     0.00  swapfunc
  0.00      0.29     0.00   100394     0.00     0.00  increment_overflow

(*) newly added by this patch

--
 Simon Riggs           www.2ndQuadrant.com
 PostgreSQL Training, Services and Support


Re: Hot standby v5 patch assertion failure

From: Mark Kirkwood

Simon Riggs wrote:
> On Mon, 2008-11-03 at 12:16 +1300, Mark Kirkwood wrote:
>   
>> Trying out a few different scenarios I ran across this:
>>
>> 1/ Set up master and replica with replica using pg_standby
>> 2/ Create a new database ("bench" in my case)
>> 3/ Initialize pgbench schema size 100
>> 4/ Run with 2 clients and 10000 transactions
>> 5/ Replica gets assertion failure
>>     
>
> I've been unable to reproduce this error in more than 2 days of bashing.
> The bash test I use is a pgbench variant designed to cause write
> contention, while at the same time running reads against those same
> blocks on standby, plus running parallel installcheck.
>
> I now suspect there was a problem in ProcArrayClearUnobservedXids(), so
> I clear the array each time, whether or not we are in assert mode,
> i.e. better hygiene around reused data structures. So I *haven't*
> reworked my earlier code, just checked it all again.
>
> So, new patch enclosed. This fixes everything reported so far, plus
> another 2 bugs I found and fixed during re-test.
>
>   
Patching with v5d, I can no longer reproduce this either. Excellent!

Cheers

Mark