Re: OID/XID allocation (was Re: is PG able to handle a >500 GB Database?) - Mailing list pgsql-general

From Bruce Momjian
Subject Re: OID/XID allocation (was Re: is PG able to handle a >500 GB Database?)
Date
Msg-id 200101230002.TAA18170@candle.pha.pa.us
In response to OID/XID allocation (was Re: is PG able to handle a >500 GB Database?)  (Tom Lane <tgl@sss.pgh.pa.us>)
List pgsql-general
> Bruce Momjian <pgman@candle.pha.pa.us> writes:
> > What about pre-fetching of OID's.  Does that still happen for every
> > backend?
>
> Only ones that actually allocate some OIDs, I think.
>
> > What about XID's?
>
> XIDs are wasted on a postmaster restart, but not per-backend, because
> they are cached in shared memory instead of locally.  I've been thinking
> about changing the code so that OIDs are allocated in the same fashion.
> That would mean an extra spinlock grab per OID allocation, but so what?
> We grab several spinlocks per row creation already.  And we could
> increase the number of OIDs allocated per pg_variable file update,
> which would save some time.
>
> Haven't got round to it yet though, and I'm not sure but what Vadim
> might be planning to throw out all that code anyway ...

Added to TODO:

  * Move OID retrieval into shared memory to prevent loss of unused OIDs

Also, currently the OID can _not_ be used to determine the order in which
rows were inserted, because one backend can grab its block of 50 and
another backend can start later and still insert a row first.

If we could change this with little risk, it would be nice to have in
7.1, but I am sure someone will object.  :-)

--
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026
