Re: data file corruption - Mailing list pgsql-general

From PG User
Subject Re: data file corruption
Date
Msg-id CAB7tN8bxmyYLN16F_jWCu+ayBpCcpn=vyNby4iCz8Yus6-oSGg@mail.gmail.com
In response to Re: data file corruption  (Tom Lane <tgl@sss.pgh.pa.us>)
List pgsql-general
Hi Tom,
Thank you for your reply.

I checked and found some weird behavior. We have a 32K block size.
The incorrect blocks start at block 41 and span 115 blocks (41 through 155), beginning exactly on a 32K block boundary. In the last of these blocks (block 155), we have both tuples with the correct number of attributes and tuples with an incorrect number of attributes:

  Block Id: 0  linp Index: 1   Attributes: 10   Size: 24
  Block Id: 0  linp Index: 2   Attributes: 10   Size: 24
  Block Id: 0  linp Index: 3   Attributes: 10   Size: 24
  Block Id: 0  linp Index: 4   Attributes: 10   Size: 24
  Block Id: 0  linp Index: 5   Attributes: 10   Size: 24
  Block Id: 0  linp Index: 6   Attributes: 10   Size: 24
  Block Id: 0  linp Index: 7   Attributes: 10   Size: 24
  Block Id: 0  linp Index: 8   Attributes: 10   Size: 24
  Block Id: 0  linp Index: 9   Attributes: 10   Size: 24
...
...
  Block Id: 40  linp Index: 161   Attributes: 10   Size: 24
  Block Id: 40  linp Index: 162   Attributes: 10   Size: 24
  Block Id: 40  linp Index: 163   Attributes: 10   Size: 24
  Block Id: 41  linp Index: 1   Attributes: 92   Size: 40
  Block Id: 41  linp Index: 2   Attributes: 92   Size: 40
  Block Id: 41  linp Index: 3   Attributes: 92   Size: 40
  Block Id: 41  linp Index: 4   Attributes: 92   Size: 40
  Block Id: 41  linp Index: 5   Attributes: 92   Size: 40
  Block Id: 41  linp Index: 6   Attributes: 92   Size: 40
...
...
  Block Id: 155  linp Index: 22   Attributes: 92   Size: 40
  Block Id: 155  linp Index: 23   Attributes: 92   Size: 40
  Block Id: 155  linp Index: 24   Attributes: 92   Size: 40
  Block Id: 155  linp Index: 25   Attributes: 92   Size: 40
  Block Id: 155  linp Index: 26   Attributes: 92   Size: 40
  Block Id: 155  linp Index: 27   Attributes: 92   Size: 40
  Block Id: 155  linp Index: 28   Attributes: 10   Size: 24
  Block Id: 155  linp Index: 29   Attributes: 10   Size: 24
  Block Id: 155  linp Index: 30   Attributes: 10   Size: 24
  Block Id: 155  linp Index: 31   Attributes: 10   Size: 24
  Block Id: 155  linp Index: 32   Attributes: 10   Size: 24
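
As a sanity check on the alignment, here is a rough offset calculation (only a sketch, based on our 32K block size and the Block Ids from the pg_filedump output above):

    BLOCK_SIZE = 32 * 1024            # our 32K PostgreSQL block size

    first_bad = 41                    # first block whose tuples show 92 attributes
    last_bad = 155                    # last block containing 92-attribute tuples

    start_off = first_bad * BLOCK_SIZE      # byte offset where the wrong data begins
    end_off = (last_bad + 1) * BLOCK_SIZE   # upper bound (block 155 is only partly wrong)
    span = end_off - start_off

    print("starts at byte", start_off, "=", start_off // 1024, "KiB into the file")
    print("covers at most", span, "bytes =", span // BLOCK_SIZE, "x 32K blocks")
    print("=", span // 8192, "x 8K pages,", span // 4096, "x 4K FS blocks,",
          span // 512, "x 512B sectors")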

One more important thing:
All correct blocks (blocks where every item has 10 attributes) contain 264 items.
But block 40, although every item in it has 10 attributes, contains only 163 items, and the rest of the block is free space.
Block 40 is the last block before the blocks with 92 attributes start.

Questions:
So I think the misplaced write also did something to the previous block (block 40) to create the block boundary, leaving that free space?

From this output, it looks like this is happening at the filesystem level or the hardware level. Am I correct?
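
Following your suggestion about identifying the filesystem block size, I am checking it roughly like this (just a sketch; /data/pgdata below is a placeholder for the actual mount point of our data directory):

    import os

    # Preferred I/O block size and fragment size reported by the filesystem.
    # /data/pgdata is a placeholder for the real mount point.
    st = os.statvfs("/data/pgdata")
    print("f_bsize (preferred block size):", st.f_bsize)
    print("f_frsize (fragment size):", st.f_frsize)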

Thank you again.

- Nachiket


On Thu, May 23, 2013 at 6:22 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
PG User <pguser1982@gmail.com> writes:
> ... Later we found that invalid header is actually valid header for other
> table's data file. So somehow data of one table got inserted into another
> and both tables have different # of attributes. Can this be possible? Any
> hardware issue can cause this?

There are (at least) 3 possibilities:

1. Postgres got confused and wrote a block to the wrong place.

2. The kernel (filesystem) got confused and wrote a block to the wrong
place.

3. The disk hardware got confused and wrote a block to the wrong place.

The easiest way to narrow it down would be to try to identify the size
of the misplaced write.  If postgres messed up, it'd certainly have
written a whole 8K page to the wrong place.  A filesystem bug would more
likely have misplaced a single filesystem block, which might be 8K but
I think it's more usually 1K or 4K on modern machines (a little research
about your OS should tell you what blocksize is being used on this
filesystem).  Or if the disk screwed up, it'd most likely have misplaced
a single 512-byte sector.  So take a closer look at the pg_filedump
results for the questionable page, and see if you can determine how much
of the page looks to have been interpolated from someplace else.

FWIW, if I had to bet with no further data, I'd bet on door #2.
Bugs of this sort have been found in Postgres, but not in a long time.
Is your kernel up-to-date?

                        regards, tom lane
