Re: backup manifests - Mailing list pgsql-hackers

From: Andres Freund
Subject: Re: backup manifests
Msg-id: 20200327043009.lulu32phijz3e37y@alap3.anarazel.de
In response to: Re: backup manifests (Robert Haas <robertmhaas@gmail.com>)
Responses: Re: backup manifests (Stephen Frost <sfrost@snowman.net>)
List: pgsql-hackers

Hi,

On 2020-03-26 11:37:48 -0400, Robert Haas wrote:
> I mean, you're just repeating the same argument here, and it's just
> not valid. Regardless of the file size, the chances of a false
> checksum match are literally less than one in a billion. There is
> every reason to believe that users will be happy with a low-overhead
> method that has a 99.9999999+% chance of detecting corrupt files. I do
> agree that a 64-bit CRC would probably be not much more expensive and
> improve the probability of detecting errors even further

I *seriously* doubt that 64bit CRCs wouldn't be slower. The only reason
CRC32C is semi-fast is that we accelerate it using hardware instructions
(on x86-64 and ARM at least). Before that it was very regularly the
bottleneck for processing WAL - and it still sometimes is. Most CRCs
aren't actually fast to compute, because they don't lend themselves to
benefiting from ILP or SIMD.  We spent a fair bit of time optimizing our
CRC implementation before the hardware support was widespread.
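
For reference, the hardware-accelerated path amounts to something like
the below. This is a minimal standalone sketch, *not* PostgreSQL's actual
implementation (that lives in src/port/pg_crc32c_sse42.c); the function
name and the chunking are mine:

/*
 * Sketch of hardware-accelerated CRC32C using the SSE4.2 intrinsics on
 * x86-64. Build with: cc -O2 -msse4.2 crc32c_sketch.c
 */
#include <stddef.h>
#include <stdint.h>
#include <string.h>
#include <nmmintrin.h>

uint32_t
crc32c_hw(const unsigned char *data, size_t len)
{
    uint32_t    crc = 0xFFFFFFFF;   /* standard CRC32C pre-inversion */

    /* consume 8 bytes per instruction while we can */
    while (len >= 8)
    {
        uint64_t    chunk;

        memcpy(&chunk, data, 8);    /* avoid unaligned-access UB */
        crc = (uint32_t) _mm_crc32_u64(crc, chunk);
        data += 8;
        len -= 8;
    }

    /* mop up the remainder one byte at a time */
    while (len-- > 0)
        crc = _mm_crc32_u8(crc, *data++);

    return crc ^ 0xFFFFFFFF;        /* standard post-inversion */
}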


> but I wanted to restrict this patch to using infrastructure we already
> have. The choices there are the various SHA functions (so I supported
> those), MD5 (which I deliberately omitted, for reasons I hope you'll
> be the first to agree with), CRC-32C (which is fast), a couple of
> other CRC-32 variants (which I omitted because they seemed redundant
> and one of them only ever existed in PostgreSQL because of a coding
> mistake), and the hacked-up version of FNV that we use for page-level
> checksums (which is only 16 bits and seems to have no advantages for
> this purpose).

FWIW, FNV is only 16bit because we reduce its size to 16 bits. See the
tail of pg_checksum_page.
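
For the curious, a paraphrase of that tail (the real thing is in
src/include/storage/checksum_impl.h; function name and comments here are
mine):

#include <stdint.h>

/*
 * The FNV-derived page checksum is computed as a full uint32, then
 * squeezed into the 16-bit pd_checksum field. The "% 65535 + 1" maps it
 * into 1..65535, so a stored checksum is never zero.
 */
static uint16_t
reduce_page_checksum(uint32_t checksum)
{
    return (uint16_t) ((checksum % 65535) + 1);
}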


I'm not sure the error detection guarantees of various CRC algorithms
are that relevant here, btw. IMO, for something like checksums in a
backup, a single one-bit error isn't as common as larger errors
(e.g. entire blocks being zeroed). And at detecting those, 32bit
checksums aren't that good: a CRC's guaranteed-detection properties only
cover small numbers of flipped bits within a bounded message length, and
for larger-scale corruption any 32bit checksum will let a corrupted file
pass with probability on the order of 2^-32.


> > As for folks who are that close to the edge on their backup timing that
> > they can't have it slow down- chances are pretty darn good that they're
> > not far from ending up needing to find a better solution than
> > pg_basebackup anyway.  Or they don't need to generate a manifest (or, I
> > suppose, they could have one but not have checksums..).
> 
> 40-50% is a lot more than "if you were on the edge."

sha256 does approximately 400MB/s per core on modern Intel CPUs. That's
way below commonly accessible storage / network capabilities (and even
if you're only doing 200MB/s, you're still going to spend roughly half
of the CPU time just doing hashing).  It's unlikely that you're going to
see much of a speedup for sha256 just by upgrading a CPU. While there
are hardware instructions available, they don't result in all that large
improvements. Of course, we could also start using the GPU (err, really
no).
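
If you want to check those numbers on your own hardware, a quick
standalone benchmark along the following lines will do. It uses OpenSSL's
EVP interface, nothing PostgreSQL-specific, and the buffer / total sizes
are arbitrary. Build with: cc -O2 sha256_bench.c -lcrypto

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <openssl/evp.h>

int
main(void)
{
    const size_t chunk = 1024 * 1024;   /* hash 1 MB at a time */
    const size_t total = 1024 * chunk;  /* 1 GB overall */
    unsigned char *buf = malloc(chunk);
    unsigned char digest[EVP_MAX_MD_SIZE];
    unsigned int digest_len;
    EVP_MD_CTX *ctx;
    clock_t     start;
    double      secs;
    size_t      done;

    if (buf == NULL)
        return 1;
    memset(buf, 'x', chunk);

    ctx = EVP_MD_CTX_new();
    EVP_DigestInit_ex(ctx, EVP_sha256(), NULL);

    /* hash 1 GB of data on a single core and time it */
    start = clock();
    for (done = 0; done < total; done += chunk)
        EVP_DigestUpdate(ctx, buf, chunk);
    EVP_DigestFinal_ex(ctx, digest, &digest_len);
    secs = (double) (clock() - start) / CLOCKS_PER_SEC;

    printf("sha256: %.0f MB/s per core\n",
           total / (1024.0 * 1024.0) / secs);

    EVP_MD_CTX_free(ctx);
    free(buf);
    return 0;
}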

Defaulting to that makes very little sense to me. You're not just going
to spend that time while backing up, but also when validating backups
(where network limits suddenly aren't a relevant bottleneck anymore).


> > I fail to see the usefulness of a tool that doesn't actually verify that
> > the backup is able to be restored from.
> >
> > Even pg_basebackup (in both fetch and stream modes...) checks that we at
> > least got all the WAL that's needed for the backup from the server
> > before considering the backup to be valid and telling the user that
> > there was a successful backup.  With what you're proposing here, we
> > could have someone do a pg_basebackup, get back an ERROR saying the
> > backup wasn't valid, and then run pg_validatebackup and be told that the
> > backup is valid.  I don't get how that's sensible.
> 
> I'm sorry that you can't see how that's sensible, but it doesn't mean
> that it isn't sensible. It is totally unrealistic to expect that any
> backup verification tool can verify that you won't get an error when
> trying to use the backup. That would require that the validation tool
> try to do everything that PostgreSQL will try to do
> when the backup is used, including running recovery and updating the
> data files. Anything less than that creates a real possibility that
> the backup will verify good but fail when used. This tool has a much
> narrower purpose, which is to try to verify that we (still) have the
> files the server sent as part of the backup and that, to the best of
> our ability to detect such things, they have not been modified. As you
> know, or should know, the WAL files are not sent as part of the
> backup, and so are not verified. Other things that would also be
> useful to check are also not verified. It would be fantastic to have
> more verification tools in the future, but it is difficult to see why
> anyone would bother trying if an attempt to get the first one
> committed gets blocked because it does not yet do everything. Very few
> patches try to do everything, and those that do usually get blocked
> because, by trying to do too much, they get some of it badly wrong.

It sounds to me like if there are to be manifests for the WAL, they
should be a separate (set of) manifests. Trying to somehow tie together
the manifest for the base backup and the one for the WAL makes little
sense to me. They're commonly not computed in one place, and often not
even stored in the same place. For PITR, the relevant WAL doesn't even
exist yet at the time the manifest is created (and thus obviously cannot
be included in the base backup manifest). And fairly obviously one would
want to be able to verify the correctness of WAL between two base
backups.

I don't see much point in complicating the design to somehow capture WAL
in the manifest, when it's only going to solve a small set of cases.

Seems better to (later?) add support for generating manifests for WAL
files, and then have a tool that can verify all the manifests required
to restore a base backup.
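
Purely as an illustration of the shape such a thing could take (this is
not a format proposal - every field name and value below is made up),
since the backup manifest is JSON, a standalone per-timeline WAL manifest
might look like:

{
  "WAL-Manifest-Version": 1,
  "Timeline": 1,
  "Start-LSN": "0/2000000",
  "End-LSN": "0/3000000",
  "Files": [
    {
      "Path": "000000010000000000000002",
      "Size": 16777216,
      "Checksum-Algorithm": "CRC32C",
      "Checksum": "0xdeadbeef"
    }
  ]
}

A validation tool could then walk the chain of base backup and WAL
manifests covering the restore target, instead of relying on one
monolithic manifest.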

Greetings,

Andres Freund


