Re: Cache Hash Index meta page. - Mailing list pgsql-hackers

From Jeff Janes
Subject Re: Cache Hash Index meta page.
Msg-id CAMkU=1xSZhM8fQ7or96giUAq6VU4FpxQWqRxo36s3Kexif83+A@mail.gmail.com
In response to Cache Hash Index meta page.  (Mithun Cy <mithun.cy@enterprisedb.com>)
Responses Re: Cache Hash Index meta page.  (Tom Lane <tgl@sss.pgh.pa.us>)
List pgsql-hackers
On Fri, Jul 22, 2016 at 3:02 AM, Mithun Cy <mithun.cy@enterprisedb.com> wrote:
> I have created a patch to cache the meta page of a hash index in
> backend-private memory. This saves reading the meta page buffer every
> time we want to find the bucket page. In the “_hash_first” call, we
> currently read the meta page buffer twice just to make sure the bucket
> was not split after we found the bucket page. With this patch, the meta
> page buffer read is skipped if the bucket has not been split since the
> meta page was cached.
>
> The idea is to cache the meta page data in rd_amcache and to store the
> maxbucket number in hasho_prevblkno of the bucket's primary page (which
> will always be NULL otherwise, so reusing it here for this cause!!!).

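To make sure I'm reading that correctly, the caching half of the idea is
roughly the following.  This is only a sketch of my understanding, not
code from the patch; the function name is mine, while rd_amcache,
rd_indexcxt, HashMetaPageData, and HashPageGetMeta are the existing
pieces it builds on, and the caller is assumed to already hold a pin and
share lock on the metapage buffer:

#include "postgres.h"

#include "access/hash.h"
#include "storage/bufmgr.h"
#include "utils/memutils.h"
#include "utils/rel.h"

/*
 * Sketch only: keep a backend-private copy of the metapage in the
 * relcache entry, much as btree does with rd_amcache, filling the
 * cache the first time the metapage is read.
 */
static HashMetaPageData *
hash_get_cached_metap_sketch(Relation rel, Buffer metabuf)
{
    if (rel->rd_amcache == NULL)
    {
        HashMetaPageData *metap = HashPageGetMeta(BufferGetPage(metabuf));

        rel->rd_amcache = MemoryContextAlloc(rel->rd_indexcxt,
                                             sizeof(HashMetaPageData));
        memcpy(rel->rd_amcache, metap, sizeof(HashMetaPageData));
    }

    return (HashMetaPageData *) rel->rd_amcache;
}
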
If it is otherwise unused, shouldn't we rename the field to reflect
what it is now used for?

What happens on a system which has gone through pg_upgrade?  Are we
sure that those on-disk representations will always have
InvalidBlockNumber in that field?  If not, then it seems we can't
support pg_upgrade at all.  If so, I don't see a provision for
properly dealing with pages which still have InvalidBlockNumber in
them.  Unless I am missing something, the code below will always think
it found the right bucket in such cases, won't it?

if (opaque->hasho_prevblkno <=  metap->hashm_maxbucket)
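
For what it's worth, if old-format pages really can still carry
InvalidBlockNumber there, the shape of check I would have expected is
something like the sketch below.  This is purely illustrative, not a
claim about what the patch does; hasho_prevblkno, hashm_maxbucket, and
InvalidBlockNumber are the real names, the function is hypothetical:

#include "postgres.h"

#include "access/hash.h"
#include "storage/block.h"

/*
 * Illustrative sketch: does the maxbucket value stamped on the bucket's
 * primary page prove that our cached metapage is still fresh enough?
 * Pages written before the patch would still have InvalidBlockNumber in
 * hasho_prevblkno, which tells us nothing about splits, so treat that
 * case as "re-read the real metapage".
 */
static bool
cached_metap_still_usable_sketch(HashPageOpaque opaque, HashMetaPage metap)
{
    if (opaque->hasho_prevblkno == InvalidBlockNumber)
        return false;           /* pre-patch page: no split information */

    return opaque->hasho_prevblkno <= metap->hashm_maxbucket;
}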

Cheers,

Jeff


