Re: Why hash OIDs? - Mailing list pgsql-hackers

From Tom Lane
Subject Re: Why hash OIDs?
Date
Msg-id 29528.1535500962@sss.pgh.pa.us
In response to Re: Why hash OIDs?  (Thomas Munro <thomas.munro@enterprisedb.com>)
Responses Re: Why hash OIDs?
List pgsql-hackers
Thomas Munro <thomas.munro@enterprisedb.com> writes:
> On Wed, Aug 29, 2018 at 11:05 AM Thomas Munro
> <thomas.munro@enterprisedb.com> wrote:
>> On Wed, Aug 29, 2018 at 2:09 AM Robert Haas <robertmhaas@gmail.com> wrote:
>>> rhaas=# create table a (x serial primary key);
>>> CREATE TABLE
>>> rhaas=# create table b (x serial primary key);
>>> CREATE TABLE
>>> rhaas=# select 'a'::regclass::oid, 'b'::regclass::oid;
>>> oid  |  oid
>>> -------+-------
>>> 16422 | 16430
>>> (1 row)
>>> If you have a lot of tables like that, bad things are going to happen
>>> to your hash table.

>> Right.  I suppose that might happen accidentally when creating a lot
>> of partitions.
>> Advance the OID generator by some prime number after every CREATE TABLE?

> Erm, s/prime/random/.   Or use a different OID generator for each
> catalogue so that attributes etc don't create gaps in pg_class OIDs.

I think this argument is a red herring TBH.  The example Robert shows is
of *zero* interest for dynahash or catcache, unless it's taking only the
low order 3 bits of the OID for the bucket number.  But actually we'll
increase the table size proportionally to the number of entries, so
that you can't have say 1000 table entries without at least 10 bits
being used for the bucket number.  That means that you'd only have
trouble if those 1000 tables all had OIDs exactly 1K (or some multiple
of that) apart.  Such a case sounds quite contrived from here.
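To make the arithmetic concrete, here is a minimal sketch (not PostgreSQL's actual hash code) that models bucketing OIDs by their low-order bits, as in the scenario above. OIDs spaced 8 apart still spread across many buckets once the table has 1024 of them; only a spacing that is an exact multiple of the bucket count degenerates:

```python
# Minimal model, NOT PostgreSQL's dynahash: bucket an OID by its
# low-order bits (oid mod nbuckets) and count distinct buckets hit.

def buckets_used(oids, nbuckets):
    """Return how many distinct buckets the OIDs occupy."""
    return len({oid % nbuckets for oid in oids})

# 1000 tables whose OIDs are 8 apart (each CREATE TABLE having
# consumed 8 OIDs for indexes, sequences, rowtypes, etc.).
spaced_8 = [16384 + 8 * i for i in range(1000)]

# With 1024 buckets (10 bits), 8*i mod 1024 cycles through
# 1024/8 = 128 distinct values, so ~8 entries per bucket: fine.
print(buckets_used(spaced_8, 1024))     # -> 128

# Trouble only if the OIDs are exactly 1024 (or a multiple) apart:
spaced_1024 = [16384 + 1024 * i for i in range(1000)]
print(buckets_used(spaced_1024, 1024))  # -> 1 (everything collides)
```

(The bucket counts, 16384 base OID, and spacing of 8 are illustrative assumptions; real dynahash also hashes the key before taking the low bits, which only makes the distribution better.)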

            regards, tom lane

