Re: pg_verify_checksums failure with hash indexes - Mailing list pgsql-hackers

From Robert Haas
Subject Re: pg_verify_checksums failure with hash indexes
Date
Msg-id CA+TgmoZV2k-XbDppPFoQVGoAjkR5psvDPcG3WZ5JjnKF-dAr=Q@mail.gmail.com
In response to Re: pg_verify_checksums failure with hash indexes  (Amit Kapila <amit.kapila16@gmail.com>)
Responses Re: pg_verify_checksums failure with hash indexes  (Dilip Kumar <dilipbalaut@gmail.com>)
List pgsql-hackers
On Thu, Aug 30, 2018 at 7:27 AM, Amit Kapila <amit.kapila16@gmail.com> wrote:
> We previously changed this define in 620b49a1 with the intent of
> allowing many non-unique values in hash indexes without worrying about
> reaching the limit on the number of overflow pages.  I think it didn't
> occur to us that it won't work for smaller block sizes.  As such, I don't
> see any problem with the suggested fix.  It will keep the same limit on
> the number of overflow pages at the 8K block size and impose a smaller
> limit at smaller block sizes.  I am not sure if we can do any better
> with the current design.  As it will change the metapage, I think we
> need to bump HASH_VERSION.

I wouldn't bother bumping HASH_VERSION.  First, the fix needs to be
back-patched, and you certainly can't back-patch a HASH_VERSION bump.
Second, you should just pick a formula that gives the same answer as
now for the cases where the overrun doesn't occur, and some other
sufficiently small value for the cases where an overrun currently does
occur.  If you do that, you're not changing the behavior in any case
that currently works, so there's really no reason for a version bump.
It just becomes a bug fix at that point.
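For what it's worth, one way to write such a formula (purely a sketch,
and assuming the define changed in 620b49a1 is HASH_MAX_BITMAPS with its
current value of 1024) would be:

    /*
     * Sketch only: cap the number of bitmap pages by the block size so
     * the bitmap-page array still fits on the metapage.  At the default
     * 8K BLCKSZ this evaluates to the current 1024, so behavior there is
     * unchanged; smaller block sizes get a proportionally smaller limit.
     */
    #define HASH_MAX_BITMAPS    Min(BLCKSZ / 8, 1024)

That keeps the answer identical in every case that works today and only
lowers the limit in the configurations that currently overrun the
metapage.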

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

