
From Robert Haas
Subject Re: backup manifests
Date
Msg-id CA+TgmoYsYu+WkpSw7Lhq-repWKDPaB=A6hGNeq9jiYQaq-GB1A@mail.gmail.com
In response to Re: backup manifests  (David Steele <david@pgmasters.net>)
Responses Re: backup manifests  (David Steele <david@pgmasters.net>)
List pgsql-hackers
On Tue, Nov 19, 2019 at 4:34 PM David Steele <david@pgmasters.net> wrote:
> On 11/19/19 5:00 AM, Rushabh Lathia wrote:
> > My colleague Suraj did testing and noticed a performance impact
> > with the checksums. On further testing, he found that the
> > performance impact comes specifically from SHA.
>
> We have found that SHA1 adds about 3% overhead when the backup is also
> compressed (gzip -6), which is what most people want to do.  This
> percentage goes down even more if the backup is being transferred over a
> network or to an object store such as S3.

I don't really understand why your tests and Suraj's tests are showing
such different results, or how compression plays into it. I tried
running shasum -a$N lineitem-big.csv on my laptop, where that file
contains ~70MB of random-looking data whose source I no longer
remember. Here are the results by algorithm: SHA1, ~25 seconds; SHA224
or SHA256, ~52 seconds; SHA384 and SHA512, ~39 seconds. Aside from the
interesting discovery that the algorithms with more bits actually run
faster on this machine, this seems to show that there's only about a
~2x difference between the SHA1 that you used and the SHA256 that I
(pretty much arbitrarily) used. But Rushabh and Suraj are reporting
43-54% overhead, and even if you divide that by two it's a lot more
than 3%.
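
If anyone wants to repeat this comparison on their own hardware
without going through shasum, a throwaway sketch along these lines
would do. This is plain OpenSSL EVP with the file slurped into
memory, not anything from the patch, and all the names are mine:

/*
 * sha_bench.c - time several SHA variants over one input file.
 * Throwaway benchmark sketch, not PostgreSQL code; assumes OpenSSL
 * 1.1+ and a file small enough to read into memory.
 *
 * Build: cc -O2 sha_bench.c -o sha_bench -lcrypto
 * Run:   ./sha_bench lineitem-big.csv
 */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <openssl/evp.h>

int
main(int argc, char **argv)
{
    const char *algos[] = {"SHA1", "SHA224", "SHA256", "SHA384", "SHA512"};
    FILE       *fp;
    long        len;
    unsigned char *buf;

    if (argc != 2 || (fp = fopen(argv[1], "rb")) == NULL)
    {
        fprintf(stderr, "usage: sha_bench FILE\n");
        return 1;
    }
    fseek(fp, 0, SEEK_END);
    len = ftell(fp);
    rewind(fp);
    buf = malloc(len);
    if (buf == NULL || fread(buf, 1, len, fp) != (size_t) len)
    {
        fprintf(stderr, "could not read %s\n", argv[1]);
        return 1;
    }
    fclose(fp);

    for (int i = 0; i < 5; i++)
    {
        const EVP_MD *md = EVP_get_digestbyname(algos[i]);
        unsigned char out[EVP_MAX_MD_SIZE];
        unsigned int outlen;
        EVP_MD_CTX *ctx;
        clock_t     start;

        if (md == NULL)
        {
            fprintf(stderr, "unknown digest: %s\n", algos[i]);
            continue;
        }
        ctx = EVP_MD_CTX_new();
        start = clock();        /* CPU time, which is what matters here */
        EVP_DigestInit_ex(ctx, md, NULL);
        EVP_DigestUpdate(ctx, buf, len);
        EVP_DigestFinal_ex(ctx, out, &outlen);
        printf("%-7s %.2f s\n", algos[i],
               (double) (clock() - start) / CLOCKS_PER_SEC);
        EVP_MD_CTX_free(ctx);
    }
    free(buf);
    return 0;
}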

One possible explanation is that the compression is really slow, and
so it makes the checksum overhead a smaller percentage of the total.
Like, if you've already slowed down the backup by 8x, then 24%
overhead turns into 3% overhead! But I assume that's not the real
explanation here. Another explanation is that your tests were
I/O-bound rather than CPU-bound, maybe because you tested with a much
larger database or a much smaller amount of I/O bandwidth. If you
have CPU cycles to burn, then neither compression nor checksums will
cost much in terms of overall runtime. But that's a little hard to swallow,
too, because I don't think the testing mentioned above was done using
any sort of exotic test configuration, so why would yours be so
different? Another possibility is that Suraj and Rushabh messed up the
tests, or alternatively that you did. Or, it could be that your
checksum implementation is way faster than the one PG uses, and so the
impact was much less. I don't know, but I'm having a hard time
understanding the divergent results. Any ideas?
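
To spell out the arithmetic behind that first theory, using the
made-up numbers above: if an uncompressed, no-checksum backup takes
time T, the checksum adds 0.24*T, and compression inflates the total
to 8*T, then the checksum's share of the runtime is

    0.24*T / (8*T) = 0.03, i.e. 3%

so a 3% figure is exactly what a 24% checksum cost would look like
hiding behind sufficiently slow compression.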

> We judged that the lower collision rate of SHA1 justified the additional
> expense.
>
> That said, making SHA256 optional seems reasonable.  We decided not to
> make our SHA1 checksums optional to reduce the test matrix and because
> parallelism largely addressed performance concerns.

Just to be clear, I really don't have any objection to using SHA1
instead of SHA256, or anything else for that matter. I picked the one
to use out of a hat for the purpose of having a POC quickly; I didn't
have any intention to insist on that as the final selection. It seems
likely that anything we pick here will eventually be considered
obsolete, so I think we need to allow for configurability, but I don't
have a horse in this race as far as an initial selection goes.

Except - and this gets back to the previous point - I don't want to
slow down backups by 40% by default. I wouldn't mind slowing them down
3% by default, but 40% is too much overhead. I think we've either
got to get the overhead of using SHA way down or not use SHA by
default.
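
Whatever we end up defaulting to, the configurability piece can be
thin. A hypothetical sketch - emphatically not the actual patch's
design, and again leaning on OpenSSL - is just to resolve the
algorithm by name, so that changing the default is a one-line change:

/*
 * Hypothetical sketch only, not the patch's actual design: let the
 * manifest code look its checksum algorithm up by name, so changing
 * the default (or adding new algorithms) touches nothing else.
 */
#include <string.h>
#include <openssl/evp.h>

/* Returns NULL if checksums are disabled or the name is unknown. */
static const EVP_MD *
manifest_checksum_type(const char *name)
{
    if (strcmp(name, "none") == 0)
        return NULL;
    return EVP_get_digestbyname(name); /* "SHA1", "SHA256", ... */
}

Callers would then EVP_DigestInit_ex() with whatever comes back and
never hard-code an algorithm anywhere else.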

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


