Re: backup manifests - Mailing list pgsql-hackers

From Stephen Frost
Subject Re: backup manifests
Date
Msg-id 20200326210000.GZ13712@tamriel.snowman.net
In response to Re: backup manifests  (Mark Dilger <mark.dilger@enterprisedb.com>)
List pgsql-hackers
Greetings,

* Mark Dilger (mark.dilger@enterprisedb.com) wrote:
> > On Mar 26, 2020, at 12:37 PM, Stephen Frost <sfrost@snowman.net> wrote:
> > * Mark Dilger (mark.dilger@enterprisedb.com) wrote:
> >>> On Mar 26, 2020, at 9:34 AM, Stephen Frost <sfrost@snowman.net> wrote:
> >>> I'm not actually arguing about which hash functions we should support,
> >>> but rather what the default is and if crc32c, specifically, is actually
> >>> a reasonable choice.  Just because it's fast and we already had an
> >>> implementation of it doesn't justify its use as the default.  Given that
> >>> it doesn't actually provide the check that is generally expected of
> >>> CRC checksums (100% detection of single-bit errors) when the file size
> >>> gets over 512MB makes me wonder if we should have it at all, yes, but it
> >>> definitely makes me think it shouldn't be our default.
> >>
> >> I don't understand your focus on the single-bit error issue.
> >
> > Maybe I'm wrong, but my understanding was that detecting single-bit
> > errors was one of the primary design goals of CRC and why people talk
> > about CRCs of certain sizes having 'limits'- that's the size at which
> > single-bit errors will no longer, necessarily, be picked up and
> > therefore that's where the CRC of that size starts falling down on that
> > goal.
>
> I think I agree with all that.  I'm not sure it is relevant.  When
> people use CRCs to detect things *other than* transmission errors, they
> are in some sense using a hammer to drive a screw.  At that point, the
> analysis of how good the hammer is, and how big a nail it can drive, is
> no longer relevant.  The relevant discussion here is how appropriate a
> CRC is for our purpose.  I don't know the answer to that, but it
> doesn't seem the single-bit error analysis is the right analysis.

I disagree that it's not relevant- it's, in fact, the one really clear
thing we can get a pretty straightforward answer on, and that seems
really useful to me.
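For what it's worth, the property in question can be checked exhaustively on a small input.  Here's an illustrative sketch using Python's zlib (which gives plain CRC-32, not the CRC-32C variant under discussion, but it behaves the same for this class of error); it flips every bit of a short buffer one at a time and counts checksum misses:

```python
import zlib

# Exhaustively verify that, for a small input, every single-bit flip
# changes the CRC-32 checksum.
data = bytearray(b"backup manifest example block " * 4)
base = zlib.crc32(data)

misses = 0
for i in range(len(data)):
    for bit in range(8):
        data[i] ^= 1 << bit           # inject a single-bit error
        if zlib.crc32(data) == base:  # would the checksum miss it?
            misses += 1
        data[i] ^= 1 << bit           # restore the original byte
print(misses)  # prints 0: every single-bit error was detected
```

The point of the "limit" is that guarantees of this kind for the error classes CRCs are designed around only hold up to a bounded input length; past that bound, the class of errors caught with certainty shrinks.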

> >> If you are sending your backup across the wire, single bit errors
> >> during transmission should already be detected as part of the
> >> networking protocol.  The real issue has to be detection of the kinds
> >> of errors or modifications that are most likely to happen in
> >> practice.  Which are those?  People manually mucking with the files?
> >> Bugs in backup scripts?  Corruption on the storage device?  Truncated
> >> files?  The more bits in the checksum (assuming a well designed
> >> checksum algorithm), the more likely we are to detect accidental
> >> modification, so it is no surprise if a 64-bit crc does better than a
> >> 32-bit crc.  But that logic can be taken arbitrarily far.  I don't
> >> see the connection between, on the one hand, an analysis of
> >> single-bit error detection against file size, and on the other hand,
> >> the verification of backups.
> >
> > We'd like something that does a good job at detecting any differences
> > between when the file was copied off of the server and when the command
> > is run- potentially weeks or months later.  I would expect most issues
> > to end up being storage-level corruption over time where the backup is
> > stored, which could be single bit flips or whole pages getting zeroed or
> > various other things.  Files changing size probably is one of the less
> > common things, but, sure, that too.
> >
> > That we could take this "arbitrarily far" is actually entirely fine-
> > that's a good reason to have alternatives, which this patch does have,
> > but that doesn't mean we should have a default that's not suitable for
> > the files that we know we're going to be storing.
> >
> > Consider that we could have used a 16-bit CRC instead, but does that
> > actually make sense?  Ok, sure, maybe someone really wants something
> > super fast- but should that be our default?  If not, then what criteria
> > should we use for the default?
>
> I'll answer this below....
>
> >> From a support perspective, I think the much more important issue is
> >> making certain that checksums are turned on.  A one in a billion
> >> chance of missing an error seems pretty acceptable compared to the,
> >> let's say, one in two chance that your customer didn't use checksums.
> >> Why are we even allowing this to be turned off?  Is there a usage
> >> case compelling that option?
> >
> > The argument is that adding checksums takes more time.  I can understand
> > that argument, though I don't really agree with it.  Certainly a few
> > percent really shouldn't be that big of an issue, and in many cases even
> > a sha256 hash isn't going to have that dramatic of an impact on the
> > actual overall time.
>
> I see two dangers here:
>
> (1) The user enables checksums of some type, and due to checksums not
> being perfect, corruption happens but goes undetected, leaving her in a
> bad place.
>
> (2) The user makes no checksum selection at all, gets checksums of the
> *default* type, determines it is too slow for her purposes, and instead
> of adjusting the checksum algorithm to something faster, simply turns
> checksums off; corruption happens and of course is undetected, leaving
> her in a bad place.

Alright, I have tried to avoid referring back to pgbackrest, but I can't
help it here.

We have never, ever, had a user come to us and complain that pgbackrest
is too slow because we're using a SHA hash.  We have also had them by
default since absolutely day number one, and we even removed the option
to disable them in 1.0.  We've never even been asked if we should
implement some other hash or checksum which is faster.

> I think the risk of (2) is far worse, which makes me tend towards a
> default that is fast enough not to encourage anybody to disable
> checksums altogether.  I have no opinion about which algorithm is best
> suited to that purpose, because I haven't benchmarked any.  I'm pretty
> much going off what Robert said, in terms of how big an impact using a
> heavier algorithm would be.  Perhaps you'd like to run benchmarks and
> make a concrete proposal for another algorithm, with numbers showing
> the runtime changes?  You mentioned up-thread that prior timings which
> showed a 40-50% slowdown were not including all the relevant stuff, so
> perhaps you could fix that in your benchmark and let us know what is
> included in the timings?

I don't even know what the 40-50% slowdown numbers included.  Also, the
general expectation in this community is that whoever is pushing a
given patch forward should be providing the benchmarks to justify their
choice.
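The raw-digest side of such a benchmark is simple enough to sketch.  This is an illustrative stand-alone timing loop, not the patch's code: it uses zlib's CRC-32 as a stand-in for CRC-32C, hashes one 64 MiB buffer per algorithm, and deliberately excludes I/O and everything else in the backup pipeline:

```python
import hashlib
import time
import zlib

# Illustrative micro-benchmark: digest cost only, no file I/O.
payload = bytes(64 * 1024 * 1024)  # 64 MiB of zeros as a stand-in file

t0 = time.perf_counter()
zlib.crc32(payload)
crc_secs = time.perf_counter() - t0

t0 = time.perf_counter()
hashlib.sha256(payload).digest()
sha_secs = time.perf_counter() - t0

mib = len(payload) / (1024 * 1024)
print(f"crc32:  {mib / crc_secs:8.0f} MiB/s")
print(f"sha256: {mib / sha_secs:8.0f} MiB/s")
```

A timing that answered the question in this thread would wrap the whole backup run (reads, checksumming, and writes together), since the digest-only gap overstates the end-to-end slowdown.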

> I don't think we should be contemplating for v13 any checksum
> algorithms for the default except the ones already in the options list.
> Doing that just derails the patch.  If you want highwayhash or similar
> to be the default, can't we hold off until v14 and think about changing
> the default?  Maybe I'm missing something, but I don't see any reason
> why it would be hard to change this after the first version has already
> been released.

I'd rather we default to something that we are all confident and happy
with, erring on the side of it being overkill rather than something
that we know isn't really appropriate for the data volume.

Thanks,

Stephen

