Re: [GENERAL] Undetected corruption of table files

From
"Albe Laurenz"
Date:
Tom Lane wrote:
>> Would it be an option to have a checksum somewhere in each
>> data block that is verified upon read?
>
> That's been proposed before and rejected before.  See the archives ...

I searched for "checksum" and couldn't find it. Could someone
give me a pointer? I'm not talking about WAL files here.

Thanks,
Laurenz Albe

Re: [GENERAL] Undetected corruption of table files

From
Tom Lane
Date:
"Albe Laurenz" <all@adv.magwien.gv.at> writes:
> Tom Lane wrote:
>>> Would it be an option to have a checksum somewhere in each
>>> data block that is verified upon read?

>> That's been proposed before and rejected before.  See the archives ...

> I searched for "checksum" and couldn't find it. Could someone
> give me a pointer? I'm not talking about WAL files here.

"CRC" maybe?  Also, make sure your search goes all the way back; I think
the prior discussions were around the same time WAL was initially put
in, and/or when we dropped the WAL CRC width from 64 to 32 bits.
The very measurable overhead of WAL CRCs is the main thing that's
discouraged us from having page CRCs.  (Well, that and the lack of
evidence that they'd actually gain anything.)

            regards, tom lane

Re: [GENERAL] Undetected corruption of table files

From
"Jonah H. Harris"
Date:
On 8/27/07, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> that and the lack of evidence that they'd actually gain anything

I find it somewhat ironic that PostgreSQL strives to be fairly
non-corruptible, yet has no way to detect a corrupted page.  The only
reason for not having CRCs is that it will slow down performance...
which is exactly the opposite of conventional PostgreSQL wisdom (no
performance trade-off for durability).

--
Jonah H. Harris, Software Architect | phone: 732.331.1324
EnterpriseDB Corporation            | fax: 732.331.1301
33 Wood Ave S, 3rd Floor            | jharris@enterprisedb.com
Iselin, New Jersey 08830            | http://www.enterprisedb.com/

Re: [GENERAL] Undetected corruption of table files

From
"Trevor Talbot"
Date:
On 8/27/07, Jonah H. Harris <jonah.harris@gmail.com> wrote:
> On 8/27/07, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> > that and the lack of evidence that they'd actually gain anything
>
> I find it somewhat ironic that PostgreSQL strives to be fairly
> non-corruptible, yet has no way to detect a corrupted page.  The only
> reason for not having CRCs is that it will slow down performance...
> which is exactly the opposite of conventional PostgreSQL wisdom (no
> performance trade-off for durability).

But how does detecting a corrupted data page gain you any durability?
All it means is that the platform underneath screwed up, and you've
already *lost* durability.  What do you do then?

It seems like the same idea as an application trying to detect RAM errors.

Re: Undetected corruption of table files

From
Gregory Stark
Date:
"Tom Lane" <tgl@sss.pgh.pa.us> writes:

> "Albe Laurenz" <all@adv.magwien.gv.at> writes:
>> Tom Lane wrote:
>>>> Would it be an option to have a checksum somewhere in each
>>>> data block that is verified upon read?
>
>>> That's been proposed before and rejected before.  See the archives ...
>
>> I searched for "checksum" and couldn't find it. Could someone
>> give me a pointer? I'm not talking about WAL files here.
>
> "CRC" maybe?  Also, make sure your search goes all the way back; I think
> the prior discussions were around the same time WAL was initially put
> in, and/or when we dropped the WAL CRC width from 64 to 32 bits.
> The very measurable overhead of WAL CRCs is the main thing that's
> discouraged us from having page CRCs.  (Well, that and the lack of
> evidence that they'd actually gain anything.)

I thought we determined that the reason WAL CRCs are expensive is that we
have to checksum each WAL record individually. I recall that the last time
this came up I ran some microbenchmarks and found that the cost to CRC an
entire 8k block was on the order of tens of microseconds.

The last time it came up was in the context of allowing full_page_writes
to be turned off while still guaranteeing that torn pages would be detected
on recovery and no later. I was a proponent of using writev to embed bytes
in each 512-byte block, and Jonah said it would be no faster than a CRC
(and obviously considerably more complicated). My benchmarks showed that
Jonah was right and the CRC was cheaper than the added cost of using
writev.
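[Editor's note: the microbenchmark described above is easy to reproduce. The sketch below uses Python's zlib.crc32 rather than PostgreSQL's internal CRC code, so the absolute numbers will differ from what the backend would see; it is only meant to show the order of magnitude of checksumming one 8k page.]

```python
import os
import time
import zlib

BLOCK_SIZE = 8192          # PostgreSQL's default page size
ITERATIONS = 100_000

block = os.urandom(BLOCK_SIZE)   # one page worth of pseudo-random data

start = time.perf_counter()
for _ in range(ITERATIONS):
    crc = zlib.crc32(block)
elapsed = time.perf_counter() - start

per_block_us = elapsed / ITERATIONS * 1_000_000
print(f"CRC32 over {BLOCK_SIZE} bytes: {per_block_us:.3f} us per block")
```

On commodity hardware this typically reports low single-digit microseconds per block, which is broadly consistent with the "tens of microseconds" figure on 2007-era CPUs.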

I do agree the benefits of having a CRC are overstated. Most of the time
corruption is caused by bad memory, and a CRC will happily checksum the
corrupted memory just fine. A checksum is no guarantee. But I've also seen
data corruption caused by bad memory in an I/O controller, for example.
There are always going to be cases where it could help.

--
  Gregory Stark
  EnterpriseDB          http://www.enterprisedb.com

Re: [GENERAL] Undetected corruption of table files

From
Alban Hertroys
Date:
Jonah H. Harris wrote:
> On 8/27/07, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> that and the lack of evidence that they'd actually gain anything
>
> I find it somewhat ironic that PostgreSQL strives to be fairly
> non-corruptible, yet has no way to detect a corrupted page.  The only
> reason for not having CRCs is that it will slow down performance...
> which is exactly the opposite of conventional PostgreSQL wisdom (no
> performance trade-off for durability).

Why? I can't say I speak for the developers, but I think the reason is
that data corruption can (with the very rare exception of undetected
programming errors) only be caused by hardware problems.

If you have a "proper" production database server, your memory has error
checking, and your RAID controller has something of the kind as well. If
not, you would probably be running the database on a filesystem that has
reliable integrity verification mechanisms.

In the worst case (all the above mechanisms fail), you have backups.

IMHO the problem is covered quite adequately. The operating system and
the hardware cover for the database, as they should; it's _their_ job.

--
Alban Hertroys
alban@magproductions.nl

magproductions b.v.

T: ++31(0)534346874
F: ++31(0)534346876
M:
I: www.magproductions.nl
A: Postbus 416
   7500 AK Enschede

// Integrate Your World //

Re: [GENERAL] Undetected corruption of table files

From
Tom Lane
Date:
"Trevor Talbot" <quension@gmail.com> writes:
> On 8/27/07, Jonah H. Harris <jonah.harris@gmail.com> wrote:
>> I find it somewhat ironic that PostgreSQL strives to be fairly
>> non-corruptible, yet has no way to detect a corrupted page.

> But how does detecting a corrupted data page gain you any durability?
> All it means is that the platform underneath screwed up, and you've
> already *lost* durability.  What do you do then?

Indeed.  In fact, the most likely implementation of this (refuse to do
anything with a page with a bad CRC) would be a net loss from that
standpoint, because you couldn't get *any* data out of a page, even if
only part of it had been zapped.

            regards, tom lane

Re: [GENERAL] Undetected corruption of table files

From
"Jonah H. Harris"
Date:
On 8/27/07, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Indeed.  In fact, the most likely implementation of this (refuse to do
> anything with a page with a bad CRC) would be a net loss from that
> standpoint, because you couldn't get *any* data out of a page, even if
> only part of it had been zapped.

At least you would know it was corrupted, instead of getting funky
errors and/or crashes.

--
Jonah H. Harris, Software Architect | phone: 732.331.1324
EnterpriseDB Corporation            | fax: 732.331.1301
33 Wood Ave S, 3rd Floor            | jharris@enterprisedb.com
Iselin, New Jersey 08830            | http://www.enterprisedb.com/

Re: [GENERAL] Undetected corruption of table files

From
Decibel!
Date:
On Mon, Aug 27, 2007 at 12:08:17PM -0400, Jonah H. Harris wrote:
> On 8/27/07, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> > Indeed.  In fact, the most likely implementation of this (refuse to do
> > anything with a page with a bad CRC) would be a net loss from that
> > standpoint, because you couldn't get *any* data out of a page, even if
> > only part of it had been zapped.

I think it'd be perfectly reasonable to have a mode where you could
bypass the check so that you could see what was in the corrupted page
(as well as delete everything on the page so that you could "fix" the
corruption). Obviously, this should be restricted to superusers.
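[Editor's note: such a bypass mode could be as simple as a flag on the read path. The sketch below is purely illustrative; the function name, flag name, and page handling are invented, not PostgreSQL code.]

```python
import zlib

def read_page(raw: bytes, stored_crc: int, *, bypass_check: bool = False) -> bytes:
    """Return the page, raising on CRC mismatch unless the check is bypassed."""
    if zlib.crc32(raw) != stored_crc and not bypass_check:
        raise IOError("page checksum mismatch: possible corruption")
    return raw

page = b"some row data".ljust(8192, b"\x00")
crc = zlib.crc32(page)
assert read_page(page, crc) == page           # a clean page reads normally

damaged = b"\xff" + page[1:]
try:
    read_page(damaged, crc)                   # a normal read refuses it
except IOError:
    pass
else:
    raise AssertionError("corruption went undetected")

# A privileged "bypass" read still lets you inspect whatever survives:
assert read_page(damaged, crc, bypass_check=True) == damaged
```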

> At least you would know it was corrupted, instead of getting funky
> errors and/or crashes.

Or worse, getting what appears to be perfectly valid data, but isn't.
--
Decibel!, aka Jim Nasby                        decibel@decibel.org
EnterpriseDB      http://enterprisedb.com      512.569.9461 (cell)

Re: [GENERAL] Undetected corruption of table files

From
"Albe Laurenz"
Date:
Tom Lane wrote:
>>>> Would it be an option to have a checksum somewhere in each
>>>> data block that is verified upon read?
>
>>> That's been proposed before and rejected before.  See the
>>> archives ...
>
> I think
> the prior discussions were around the same time WAL was initially put
> in, and/or when we dropped the WAL CRC width from 64 to 32 bits.
> The very measurable overhead of WAL CRCs is the main thing that's
> discouraged us from having page CRCs.  (Well, that and the lack of
> evidence that they'd actually gain anything.)

Hmmm - correct me if I'm misunderstanding this, but the most
conclusive hit I had was a mail from you:

http://archives.postgresql.org/pgsql-general/2001-10/msg01142.php

which only got affirmative feedback.

Also, there's a TODO entry:

- Add optional CRC checksum to heap and index pages

This seems to me to be exactly what I wish for...

To the best of my knowledge, the most expensive resource in databases
today is disk I/O, because CPU speed has been increasing faster than
disk speed. Although calculating a checksum when writing a block to disk
will certainly incur CPU overhead, what may have seemed too expensive
a couple of years ago could be acceptable today.

I understand the argument that it's the task of the hardware and OS to
ensure that data doesn't get corrupted, but it would improve PostgreSQL's
reliability if it could detect such errors and at least issue a warning.
This wouldn't fix the underlying problem, but it would tell you not to
overwrite last week's backup tape...

Not all databases are on enterprise scale storage systems, and
there's also the small possibility of PostgreSQL bugs that could
be detected that way.

Yours,
Laurenz Albe


Re: [GENERAL] Undetected corruption of table files

From
Lincoln Yeoh
Date:
At 11:48 PM 8/27/2007, Trevor Talbot wrote:
>On 8/27/07, Jonah H. Harris <jonah.harris@gmail.com> wrote:
> > On 8/27/07, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> > > that and the lack of evidence that they'd actually gain anything
> >
> > I find it somewhat ironic that PostgreSQL strives to be fairly
> > non-corruptible, yet has no way to detect a corrupted page.  The only
> > reason for not having CRCs is that it will slow down performance...
> > which is exactly the opposite of conventional PostgreSQL wisdom (no
> > performance trade-off for durability).
>
>But how does detecting a corrupted data page gain you any durability?
>All it means is that the platform underneath screwed up, and you've
>already *lost* durability.  What do you do then?

The benefit I see is that you get to change the platform underneath
sooner rather than later.

Whether that's worth it or not I don't know - real-world stats/info
would be good.

Even my home PATA drives tend to grumble about stuff first before
they fail, so it might not be worthwhile doing the extra work.

Regards,
Link.




Re: [GENERAL] Undetected corruption of table files

From
Florian Weimer
Date:
* Alban Hertroys:

> If you have a "proper" production database server, your memory has
> error checking, and your RAID controller has something of the kind
> as well.

To my knowledge, no readily available controller performs validation
on reads (not even for RAID-1 or RAID-10, where it would be pretty
straightforward).

Something like an Adler32 checksum (not a full CRC) on each page might
be helpful.  However, what I'd really like to see is something that
catches missed writes, but this is very difficult to implement AFAICT.
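[Editor's note: both checksums Florian mentions are available in zlib. A hypothetical per-page check might look like the sketch below; the function names and page layout are illustrative only, not PostgreSQL's.]

```python
import zlib

PAGE_SIZE = 8192

def checksum_page(page: bytes) -> int:
    """Adler-32 over the full page contents; cheaper than a full CRC."""
    return zlib.adler32(page)

def verify_page(page: bytes, stored: int) -> bool:
    """Recompute the checksum and compare against the stored value."""
    return zlib.adler32(page) == stored

page = bytes(PAGE_SIZE)              # an all-zeros page
stored = checksum_page(page)
assert verify_page(page, stored)

corrupted = b"\x01" + page[1:]       # a single corrupted byte
assert not verify_page(corrupted, stored)
```

Note that neither checksum helps with the missed-write case Florian raises: a page that was never written at all still carries a perfectly valid (old) checksum.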

--
Florian Weimer                <fweimer@bfk.de>
BFK edv-consulting GmbH       http://www.bfk.de/
Kriegsstraße 100              tel: +49-721-96201-1
D-76133 Karlsruhe             fax: +49-721-96201-99

Re: [GENERAL] Undetected corruption of table files

From
Jan Wieck
Date:
On 8/28/2007 4:14 AM, Albe Laurenz wrote:
> Not all databases are on enterprise scale storage systems, and
> there's also the small possibility of PostgreSQL bugs that could
> be detected that way.

Computing a checksum just before writing the block will NOT detect any 
faulty memory or Postgres bug that corrupted the block. You will have a 
perfectly fine checksum over the corrupted data.

A checksum only detects corruptions that happen between write and read. 
Most data corruptions that happen during that time however lead to some 
sort of read error reported by the disk.
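[Editor's note: Jan's point is easy to demonstrate directly. In this illustrative sketch (not PostgreSQL code), the buffer is corrupted before the checksum is computed, so read-time verification passes without complaint.]

```python
import zlib

page = bytearray(8192)
page[100:105] = b"hello"

# Corruption happens in memory (bad RAM, a software bug) ...
page[100:105] = b"heXlo"

# ... and only afterwards does the writer compute the checksum:
stored_crc = zlib.crc32(bytes(page))

# On read-back the checksum verifies perfectly -- over corrupt data.
assert zlib.crc32(bytes(page)) == stored_crc

# Only corruption introduced *after* the checksum was taken is caught:
page[100] ^= 0xFF                    # simulate an on-disk bit flip
assert zlib.crc32(bytes(page)) != stored_crc
```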


Jan

-- 
#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.                                  #
#================================================== JanWieck@Yahoo.com #


Re: [GENERAL] Undetected corruption of table files

From
"Albe Laurenz"
Date:
Jan Wieck wrote:
> Computing a checksum just before writing the block will NOT detect any
> faulty memory or Postgres bug that corrupted the block. You will have a
> perfectly fine checksum over the corrupted data.
>
> A checksum only detects corruptions that happen between write and read.
> Most data corruptions that happen during that time however lead to some
> sort of read error reported by the disk.

I have thought some more about it, and tend to agree now:
Checksums will only detect disk failure, and that's only
one of the many integrity problems that can happen.
And one that can be reduced to a reasonable degree with good
storage systems.

So the benefit of checksums is not enough to bother with.

Yours,
Laurenz Albe


Re: [GENERAL] Undetected corruption of table files

From
Decibel!
Date:
On Fri, Aug 31, 2007 at 02:34:09PM +0200, Albe Laurenz wrote:
> I have thought some more about it, and tend to agree now:
> Checksums will only detect disk failure, and that's only
> one of the many integrity problems that can happen.
> And one that can be reduced to a reasonable degree with good
> storage systems.
>
> So the benefit of checksums is not enough to bother.

Uhm... how often do we get people asking about corruption on -admin
alone? 2-3x a month? ISTM it would be very valuable to those folks to
be able to tell them if the corruption occurred between writing a page
out and reading it back in.

Even if we don't care about folks running on suspect hardware, having a
CRC would make it far more reasonable to recommend full_page_writes=off.
I never turn that off, and I recommend to folks that they don't turn it
off either, because there's no way to know whether it will corrupt (or
already has corrupted) data.

BTW, a method that would buy additional protection would be to compute
the CRC for a page every time you modify it in a way that generates
a WAL record, and record that CRC with the WAL record. That would
protect against corruption that happened anytime after the page was
modified, instead of just when smgr went to write it out. How useful
that is I don't know...
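[Editor's note: a toy model of this idea, with invented names and no resemblance to the real smgr/WAL code: each modification logs the page's post-modification CRC alongside its WAL entry, and a verification pass compares current page contents against the most recently logged CRC.]

```python
import zlib

PAGE_SIZE = 8192

pages = {}   # page number -> contents, standing in for the data files
wal = []     # (page number, CRC of page after modification): our "WAL"

def modify_page(page_no: int, data: bytes) -> None:
    """Apply a change and log the resulting page CRC with the WAL record."""
    page = data.ljust(PAGE_SIZE, b"\x00")
    pages[page_no] = page
    wal.append((page_no, zlib.crc32(page)))

def pages_failing_wal_check() -> list:
    """Page numbers whose contents no longer match the last logged CRC."""
    latest = {page_no: crc for page_no, crc in wal}
    return [n for n, crc in latest.items() if zlib.crc32(pages[n]) != crc]

modify_page(0, b"hello")
modify_page(1, b"world")
assert pages_failing_wal_check() == []

pages[1] = b"\xff" + pages[1][1:]    # corruption after the write
assert pages_failing_wal_check() == [1]
```

As Tom points out downthread, hint-bit updates modify pages without generating WAL records, so in real PostgreSQL the last logged CRC would quickly go stale.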
--
Decibel!, aka Jim Nasby                        decibel@decibel.org
EnterpriseDB      http://enterprisedb.com      512.569.9461 (cell)

Re: [GENERAL] Undetected corruption of table files

From
Tom Lane
Date:
Decibel! <decibel@decibel.org> writes:
> Even if we don't care about folks running on suspect hardware, having a
> CRC would make it far more reasonable to recommend full_page_writes=off.

This argument seems ridiculous.  Finding out that you have corrupt data
is no substitute for not having corrupt data.

> BTW, a method that would buy additional protection would be to compute
> the CRC for a page every time you modify it in such a way that generates
> a WAL record, and record that CRC with the WAL record. That would
> protect from corruption that happened anytime after the page was
> modified, instead of just when smgr went to write it out. How useful
> that is I don't know...

Two words: hint bits.
        regards, tom lane


Re: [GENERAL] Undetected corruption of table files

From
Decibel!
Date:
On Fri, Aug 31, 2007 at 03:11:29PM -0400, Tom Lane wrote:
> Decibel! <decibel@decibel.org> writes:
> > Even if we don't care about folks running on suspect hardware, having a
> > CRC would make it far more reasonable to recommend full_page_writes=off.
>
> This argument seems ridiculous.  Finding out that you have corrupt data
> is no substitute for not having corrupt data.

Of course. But how will you discover you have corrupt data if there's no
mechanism to detect it?
--
Decibel!, aka Jim Nasby                        decibel@decibel.org
EnterpriseDB      http://enterprisedb.com      512.569.9461 (cell)