Re: 9.4 btree index corruption - Mailing list pgsql-hackers
From: Heikki Linnakangas
Subject: Re: 9.4 btree index corruption
Date:
Msg-id: 53826D13.8060801@vmware.com
In response to: Re: 9.4 btree index corruption (Jeff Janes <jeff.janes@gmail.com>)
List: pgsql-hackers
On 05/25/2014 05:45 PM, Jeff Janes wrote:
> On Sun, May 25, 2014 at 7:16 AM, Heikki Linnakangas <hlinnakangas@vmware.com> wrote:
>
>> On 05/21/2014 10:22 PM, Jeff Janes wrote:
>>
>>> Testing partial-write crash-recovery in 9.4 (e12d7320ca494fd05134847e30)
>>> with foreign keys, I found some btree index corruption.
>>>
>>> ...
>>>
>>> https://drive.google.com/folderview?id=0Bzqrh1SO9FcENWd6ZXlwVWpxU0E&usp=sharing
>>
>> I downloaded the data directory and investigated. I got this message when
>> I started it up:
>>
>> 20392 2014-05-25 05:51:37.835 PDT:ERROR:  right sibling 4044 of block 460
>> is not next child 23513 of block 86458 in index "foo_p_id_idx"
>> 20392 2014-05-25 05:51:37.835 PDT:CONTEXT:  automatic vacuum of table
>> "jjanes.public.foo"
>>
>> Interestingly, it's complaining about parent page 86458, while yours
>> claimed it was 1264. I don't know why; perhaps a huge number of insertions
>> happened after that error, causing the parent-level pages to be split and
>> moving the downlinks it complains about to the right. Did you continue
>> running the test after that error occurred?
>
> Yes, I didn't set it up to abort upon vacuum errors, so it continued to run
> until it reached the 1,000,000 wraparound limit.
>
> Between when the vacuums started failing and when it reached the wraparound
> limit, it seemed to run normally (other than the increasing bloat and
> non-advancement of frozenxid).
>
> I only noticed the ERROR when I was looking through the log file in
> postmortem. Looking back, I see 10 errors with 1264, then it switched to
> 86458. I didn't notice the change and reported only the first ERROR
> message.

Ok, that explains the difference in the error message.

> I'll apply your patch and see what happens, but 90 further hours of testing
> gave no more occurrences of the error, so I'm afraid that not seeing errors
> is not much evidence that the error was fixed. So I'll have to rely on
> your knowledge of the code.

I've pushed the fix.
>> This is what the tree looks like around those pages:
>>
>> Level 1:
>> +-------------+     +-------------+     +-------------+
>> | Blk 1264    |     | Blk 160180  |     | Blk 86458   |
>> |             |     |             |     |             |
>> | Downlinks:  | <-> | Downlinks:  | <-> | Downlinks:  |
>> | ...         |     | ...         |     | 1269        |
>> |             |     |             |     | 460         |
>> |             |     |             |     | 23513       |
>> +-------------+     +-------------+     +-------------+
>
> Is this done with the pageinspect extension?

Yeah, with some more tooling on top of it. The diagram itself I drew by hand.

Thanks again! BTW, it would be useful to keep the WAL logs for as long as you
have disk space. I'm not sure whether that would have helped in this case, but
it can be really useful to see all the actions that were done to a page before
something goes wrong.

- Heikki
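[Editor's note: the invariant behind the ERROR quoted above — that each downlink in a parent page must be followed by the right sibling of the page it points to — can be sketched in a few lines. This is a toy illustration in Python, not the actual nbtree C code from src/backend/access/nbtree; the block numbers come from the log above, and the right-sibling map is invented for illustration.]

```python
def check_parent(parent_downlinks, right_sibling):
    """Check the level-1 consistency rule that vacuum enforces:
    for consecutive downlinks (child, next_child) in a parent page,
    the right sibling of `child` must be `next_child`.

    parent_downlinks: child block numbers in key order.
    right_sibling: maps a child block number to its right-sibling block.
    """
    errors = []
    for i in range(len(parent_downlinks) - 1):
        child, next_child = parent_downlinks[i], parent_downlinks[i + 1]
        sib = right_sibling[child]
        if sib != next_child:
            errors.append(
                f"right sibling {sib} of block {child} is not "
                f"next child {next_child}")
    return errors

# The corrupt state from the log: parent 86458 holds downlinks
# 1269, 460, 23513, but block 460's sibling pointer says 4044
# (the sibling of 23513 is a made-up placeholder).
siblings = {1269: 460, 460: 4044, 23513: 99999}
print(check_parent([1269, 460, 23513], siblings))
# -> ['right sibling 4044 of block 460 is not next child 23513']
```

In the real code this check runs while vacuum walks the tree; a mismatch means a page split or deletion left the parent's downlinks out of sync with the sibling chain, which is exactly the symptom reported here.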