Thread: heap_create with OID?
I am wondering if anyone can see an immediate problem in creating a function:

    heap_create_oid

which is just like heap_create, but takes an oid arg. This oid (assuming
it's free) would be used to create the object.

This could then be called from a new

    heap_create_with_catalog_oid

which in turn will be called from

    inv_create_oid

to ultimately allow pg_restore to recreate BLOBs, via lo_create_oid.

This plan was developed by simply looking at the LO code in total
isolation, so I am aware it could be totally flawed for the rest of the
DB, but I would appreciate some input...

----------------------------------------------------------------
Philip Warner                    |     __---_____
Albatross Consulting Pty. Ltd.   |----/       -  \
(A.C.N. 008 659 498)             |          /(@)   ______---_
Tel: (+61) 0500 83 82 81         |                 _________  \
Fax: (+61) 0500 83 82 82         |                 ___________ |
Http://www.rhyme.com.au          |                /           \|
                                 |    --________--
PGP key available upon request,  |  /
and from pgp5.ai.mit.edu:11371   |/
Would this allow you to create a tuple with a user-specified oid?

We definitely need this functionality for all tables, not just large
objects.

However you won't be able to discover if the oid is free or not. You
could end up with duplicate oids. Thus anybody who uses the feature
should know what they're doing.
Philip Warner <pjw@rhyme.com.au> writes:
> I am wondering if anyone can see an immediate problem in creating a
> function: heap_create_oid, which is just like heap_create, but takes an
> oid arg. This oid (assuming it's free) would be used to create the
> object.

How exactly do you propose to determine that the OID is free? If the
requested OID is greater than the current OID counter, how will you
prevent a future conflict?

There's no inherent problem in forcing a particular choice of OID;
initdb does it on every run. The trick is to avoid a conflict of
OID assignments.

My own feeling is that the current LO setup is fundamentally flawed by
its reliance on specific OID values, and that the right answer is to
find a way to avoid that. contrib/lo might provide some food for thought
here (although it's clearly not the whole answer either).

			regards, tom lane
At 16:44 4/07/00 +1000, Chris Bitmead wrote:
> Would this allow you to create a tuple with a user-specified oid?

To be honest, I have no idea - like I said, I just looked at the LO code
and heap.c. But I believe it will let me create a lo with the right OID.

> However you won't be able to discover if the oid is free or not. You
> could end up with duplicate oids. Thus anybody who uses the feature
> should know what they're doing.

OK; then it needs to be done by pg_restore ONLY when a full restore is
being done. pg_dump already has a crude way of setting the max OID at
the start of a restore; all I need to do is use that process when
restoring blobs.
At 02:57 4/07/00 -0400, Tom Lane wrote:
> How exactly do you propose to determine that the OID is free?

Based on the comments so far, I don't. pg_dump already allows a
'with-oid' option for table dumping & loading, and the way it gets
around the problem is to save the max oid at dump time, then set the max
oid at restore time.

I am only seeking a way of doing a valid backup/restore which includes
BLOBs; so even if I say to the user that BLOBs can only be restored as
part of a full restore, then I think it's a lot better than the current
system.

> There's no inherent problem in forcing a particular choice of OID;
> initdb does it on every run. The trick is to avoid a conflict of
> OID assignments.

OK. This is good news. The way pg_dump sets the max oid is:

    CREATE TABLE pgdump_oid (dummy int4);
    COPY pgdump_oid WITH OIDS FROM stdin;
    1282249	0
    \.

where 1282249 is the max oid you want to set. It's pretty crude, but I
assume effective.

What would be very nice is if I could 'reserve' the OIDs of the
to-be-created BLOBs without having to create rows/tables etc, and
without affecting the max oid. But that is probably pointless if a
decent BLOB implementation is coming along.

> My own feeling is that the current LO setup is fundamentally flawed

You will get no disagreement from me.

> the right answer is to find a way to avoid that.

I'm reluctant to do too much in this area until the capabilities of
TOAST are sorted out... ISTM that, even if a new BLOB architecture comes
along, some kind of migration utility will be needed.

Does this all sound reasonable?
At 02:57 4/07/00 -0400, Tom Lane wrote:
> There's no inherent problem in forcing a particular choice of OID;
> initdb does it on every run. The trick is to avoid a conflict of
> OID assignments.

I've modified the relevant routines up to and including defining
lo_create_oid, and was wondering what the best way to proceed in testing
the new routine might be... is there a document that describes modifying
the standard set of functions and adding lo_create_oid?
> Would this allow you to create a tuple with a user-specified oid?
>
> We definitely need this functionality for all tables, not just large
> objects.
>
> However you won't be able to discover if the oid is free or not. You
> could end up with duplicate oids. Thus anybody who uses the feature
> should know what they're doing.

Yes, this type of thing is clearly needed. We need to be able to UPDATE
the oid column too.

--
  Bruce Momjian                        |  http://www.op.net/~candle
  pgman@candle.pha.pa.us               |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026
Tom Lane wrote:
> My own feeling is that the current LO setup is fundamentally flawed
> by its reliance on specific OID values, and that the right answer is
> to find a way to avoid that. contrib/lo might provide some food for
> thought here (although it's clearly not the whole answer either).

I have some ideas to replace the entire LO handling by something
completely different. More compatible with the Oracle way of handling
large objects. That is, that on INSERT/UPDATE someone uses a function to
place an LO reference into the tuple. Then having a new interface in
libpq that works like file I/O on the references that appear in a result
set. To open them for writing, the user must have selected them FOR
UPDATE; otherwise he can open them for reading. The file I/O itself can
be based on the fastpath interface.

The LO's follow Oracle's copy semantics, so if someone does INSERT ...
SELECT, the LO gets duplicated. All LO's for one base table can be
stored in one big, segmented external heap (more or less like toast
values). The system would automatically know when objects are obsolete.

An INSERT operation might then return a result set as if someone did a
SELECT FOR UPDATE of exactly what he just inserted. So he immediately
has access to the LO references to place the values.

Don't know yet how to interface it from psql - but let me sleep another
night or so over it.

Jan

--
#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.                                  #
#================================================== JanWieck@Yahoo.com #
At 10:00 4/07/00 -0400, Bruce Momjian wrote:
> Yes, this type of thing is clearly needed. We need to be able to
> UPDATE the oid column too.

Updating OIDs sounds like a *very* scary thing to do. My motivation for
heap_create_oid is very specific - I want to be able to do a valid
database backup/restore, and need to handle the current (bad) BLOB
implementation. I would sincerely hope that (a) heap_create_oid would be
marked in some way as 'internal use only', and (b) go away in 7.2 or 7.3
when TOASTED BLOBs come along...

FWIW, I now have a working lo_create_oid that does what is expected.
When (and if) the new pg_restore is accepted, I'll probably have a look
at implementing BLOB backup, but there might be some very justifiable
resistance to heap_create_oid... so before I go much further I'd
appreciate a barometer reading on the likely acceptability of
heap_create_oid.
At 14:41 4/07/00 +0200, Jan Wieck wrote:
> I have some ideas to replace the entire LO handling by something
> completely different. More compatible with the Oracle way of handling
> large objects. That is, that on INSERT/UPDATE someone uses a function
> to place an LO reference into the tuple. Then having a new interface
> in libpq that works like file I/O on the references that appear in a
> result set. To open them for writing, the user must have selected them
> for update, otherwise he can open them for reading.

It might be worth looking at the SQL standard and its 'locators' for
BLOB data. They sound a lot like this, but may have some requirements
that affect your design.

> The file I/O itself can be based on the fastpath interface.

What's the 'fastpath interface'?

> The LO's follow Oracle's copy semantics, so if someone does
> INSERT ... SELECT, the LO gets duplicated.

Great. The standard seems (by default) to treat them as just another
'value', which this is consistent with. It may be relevant to mention
that the standard also allows substr, ||, etc. to work on BLOBs.

> An INSERT operation might then return a result set as if someone did
> a SELECT FOR UPDATE of exactly what he just inserted. So he
> immediately has access to the LO references to place the values.
>
> Don't know yet how to interface it from psql - but let me sleep
> another night or so over it.

FWIW, Dec/Rdb uses a kind of cursor for the BLOB data. The row insert
must be done in a normal insert cursor, then another cursor is opened to
write the BLOB data. It's very cumbersome. But if you use an insert
cursor for your table rows, then at least you do have a context
available for BLOB insertion.
> > The file I/O itself can be based on the fastpath interface.
>
> What's the 'fastpath interface'?

The interface between the PG backend and libpq that is, for example,
used for the COPY command. (Simplification:) this interface directly
connects two routines between backend and frontend without the standard
query path. Look at src/backend/tcop/fastpath.c

						Karel
> Updating OIDs sounds like a *very* scary thing to do. My motivation
> for heap_create_oid is very specific - I want to be able to do a valid
> database backup/restore, and need to handle the current (bad) BLOB
> implementation. I would sincerely hope that (a) heap_create_oid would
> be marked in some way as 'internal use only', and (b) go away in 7.2
> or 7.3 when TOASTED BLOBs come along...

If you accidentally delete a row with a specific oid, how do you re-add
the row?
Philip Warner <pjw@rhyme.com.au> writes:
> The way pg_dump sets the max oid is:
>
>     CREATE TABLE pgdump_oid (dummy int4);
>     COPY pgdump_oid WITH OIDS FROM stdin;
>     1282249	0
>     \.
>
> where 1282249 is the max oid you want to set.
> It's pretty crude, but I assume effective.

Ah, right. IIRC there is a check in COPY to set the OID counter to
max(current OID counter, user-supplied OID). So you could use this
technique to install all the BLOBs with the right OIDs, as long as you
were careful to load them in OID order: just load another row into the
temp table each time you need to advance the OID counter. Kinda klugy,
but no backend changes needed.

BTW I'd recommend making pgdump_oid a TEMP table, so that it'll go away
by itself.

			regards, tom lane
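[Editor's sketch: the kluge Tom describes can be illustrated by a small
restore-script generator. This is purely hypothetical helper code, not
part of pg_dump; the table name and COPY syntax follow the example
quoted above.]

```python
# Sketch of the technique described above: before each BLOB is
# recreated, advance the backend's OID counter by loading a dummy row
# WITH OIDS. Blobs must be processed in ascending OID order, since
# COPY only moves the counter forward (max of current and supplied).
def oid_advance_script(blob_oids):
    lines = ["CREATE TEMP TABLE pgdump_oid (dummy int4);"]
    for oid in sorted(blob_oids):  # OID order is essential
        lines.append("COPY pgdump_oid WITH OIDS FROM stdin;")
        lines.append("%d\t0" % oid)
        lines.append("\\.")
        # ... the BLOB with this OID would be created right here ...
    return "\n".join(lines)
```

Each dummy-row load nudges the counter to at least the blob's OID, so
the subsequent lo_creat() lands where the dump expects it.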
> Ah, right. IIRC there is a check in COPY to set the OID counter to
> max(current OID counter, user-supplied OID). So you could use this
> technique to install all the BLOBs with the right OIDs, as long as
> you were careful to load them in OID order: just load another row
> into the temp table each time you need to advance the OID counter.
> Kinda klugy, but no backend changes needed.
>
> BTW I'd recommend making pgdump_oid a TEMP table, so that it'll go
> away by itself.

Done. Renamed pgdump_oid to pg_dump_oid too. We didn't have TEMPORARY
tables at the time I did this. The temporary table allows multiple
concurrent loads because there is no table name conflict.
Philip Warner wrote:
> Updating OIDs sounds like a *very* scary thing to do.

It's scary, but useful in some specific situations. Restoring lost data
is an obvious one. There would be applications in writing certain forms
of database replication schemes. There is also a technique in object
databases where the client-side code has to know what the oid for an
object will be before it is actually added to the database.
At 09:42 5/07/00 +1000, Chris Bitmead wrote:
> It's scary, but useful in some specific situations. Restoring lost
> data is an obvious one. There would be applications in writing certain
> forms of database replication schemes. There is also a technique in
> object databases where the client-side code has to know what the oid
> for an object will be before it is actually added to the database.

None of these strictly involve updating OIDs; they seem to be choosing
OIDs at creation time, which is a very different thing.
At 12:48 4/07/00 -0400, Tom Lane wrote:
> Ah, right. IIRC there is a check in COPY to set the OID counter to
> max(current OID counter, user-supplied OID). So you could use this
> technique to install all the BLOBs with the right OIDs, as long as
> you were careful to load them in OID order: just load another row
> into the temp table each time you need to advance the OID counter.
> Kinda klugy, but no backend changes needed.

This might work... the idea would presumably be to load the BLOBs before
the schema, since the OIDs of the BLOBs might be low, and hence a
conflict could occur if many tables were defined after the first blob
was loaded in the original DB. So:

1. createdb
2. load all blobs
3. Do remaining restore steps

The only real problem (and this is in pg_dump as well) is if a table is
dumped with OIDs and the OIDs conflict with automatically generated ones
for the schema. Although this may not be an issue: how unique are OIDs,
anyway? Will corruption of any kind occur in the above case (even with
the old pg_dump)?

I guess I'll throw out lo_create_oid, heap_create_oid & related code. I
guess it taught me a little more about PG internals.
Philip Warner <pjw@rhyme.com.au> writes:
> This might work... the idea would presumably be to load the BLOBs
> before the schema, since the OIDs of the BLOBs might be low, and hence
> conflict could occur if many tables were defined after the first blob
> was loaded in the original DB.
>
> 1. createdb
> 2. load all blobs
> 3. Do remaining restore steps

If you're reloading everything in OID order, then blobs need no special
ordering treatment ;-)

> The only real problem (and this is in pg_dump as well), is if a table
> is dumped with OIDs and the OIDs conflict with automatically generated
> ones for the schema. Although this may not be an issue: how unique are
> OIDs, anyway? Will corruption of any kind occur in the above case
> (even with the old pg_dump)?

In theory OIDs are unique (within one database anyway). There are a
couple of ways that theory can fail:

1. User-supplied OIDs (via COPY WITH OIDS) might conflict.

2. Run the system long enough, and the OID counter will wrap around and
start generating already-used numbers.

The next question is what happens if we do have duplicate OIDs. AFAIK,
if the duplicate OIDs are for different kinds of entities (eg, rows in
different tables) the answer is "no problem". Duplicate OIDs for, say,
two tables would be disastrous --- but the unique index on pg_class.oid
should prevent you from creating duplicates. I *think* that 7.0 has
unique indexes on every system table where it's important to prevent
duplicate OIDs. (If not, it's an easily corrected omission; anyone want
to run through the tables and double-check?)

OIDs aren't magic, they're just another number. The system depends on
the assumption that OIDs are unique keys for certain system tables, and
it enforces (or should enforce) this assumption. Applications might
depend on the assumption that OIDs are unique keys for their own tables;
if so they can and should enforce that assumption with unique indexes.
But there's no hidden mechanism that will make the system go belly-up
just because there are identical OIDs floating around.

			regards, tom lane

PS: I believe the "comment" support will misbehave if there are
duplicate OIDs in different system tables, since it assumes that an OID
is sufficient to identify any database object regardless of type. But
the comment stuff is not exactly mission-critical...
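[Editor's sketch: Tom's point about where duplicates actually hurt can
be shown with a toy checker. The function and names are illustrative
only, not backend code: an OID collision matters only within one table,
i.e. exactly where a unique index on the oid column would reject it.]

```python
# Toy illustration: duplicate OIDs across *different* tables are
# harmless; a duplicate within one table is what a unique index on
# that table's oid column would catch.
def find_conflicts(rows):
    """rows: iterable of (table_name, oid) pairs.
    Returns the set of (table_name, oid) pairs seen more than once."""
    seen, conflicts = set(), set()
    for key in rows:
        if key in seen:
            conflicts.add(key)
        seen.add(key)
    return conflicts
```

So ("t1", 42) alongside ("t2", 42) raises no conflict, while two rows of
("pg_class", 7) would, which is the case the pg_class.oid index guards
against.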
At 02:14 5/07/00 -0400, Tom Lane wrote:
> > 1. createdb
> > 2. load all blobs
> > 3. Do remaining restore steps
>
> If you're reloading everything in OID order, then blobs need no
> special ordering treatment ;-)

It's actually a bit more complex than that, in that for efficiency I put
indexes, triggers, and ACLs at the end, just after data, which once more
introduces the possibility of a conflict. Also, the ordering is fine,
but with data loading (without OID), it still may fail (unless I put the
data after the BLOBs), so a strict OID order won't work without
OID-based data load. And again, that would also need to be done in OID
order, which is a big ask.

Personally, I dislike the concept of being able to set OID on some
things but not on others. I'd prefer a lower-level backup interface to
allow creation of dumped entities with specified OIDs (ie. tables,
indexes etc). That way we get a database that is indistinguishable from
the original. This might wait to see what backup techniques are possible
after WAL & slot reuse.

This is not an argument for heap_create_oid, unless it is extended to
*all* database entities, which some other people might favour. I'd be
interested in other people's thoughts on this, since I am somewhat out
of my depth!
> In theory OIDs are unique (within one database anyway). There are a
> couple of ways that theory can fail:
>
> 1. User-supplied OIDs (via COPY WITH OIDs) might conflict.
>
> 2. Run the system long enough, the OID counter will wrap around and
> start generating already-used numbers.

Seems we should throw an error on rollover, or at least jump from 0 to
17k to skip the system oids. That zero oid is going to cause a problem,
perhaps. Of course, it has never happened, but maybe we should do it for
people's sanity.

> The next question is what happens if we do have duplicate OIDs.
> AFAIK, if the duplicate OIDs are for different kinds of entities (eg,
> rows in different tables) the answer is "no problem". Duplicate OIDs
> for, say, two tables would be disastrous --- but the unique index on
> pg_class.oid should prevent you from creating duplicates. I *think*
> that 7.0 has unique indexes on every system table where it's important
> to prevent duplicate OIDs. (If not, it's an easily corrected omission;
> anyone want to run through the tables and double-check?)

It sure does, or at least ones that are used by the cache.

> PS: I believe the "comment" support will misbehave if there are
> duplicate OIDs in different system tables, since it assumes that an
> OID is sufficient to identify any database object regardless of type.
> But the comment stuff is not exactly mission-critical...

Agreed.
Bruce Momjian <pgman@candle.pha.pa.us> writes:
> Seems we should throw an error on rollover, or at least jump from 0 to
> 17k to skip the system oids. That zero oid is going to cause a
> problem, perhaps.

Throwing an error will definitely not do --- that'd mean your whole
installation comes to a screeching halt as soon as you hit 4G OIDs.
That cure is way worse than the disease...

We do need to avoid generating a zero OID. I vaguely recall having seen
some code somewhere that did that, but it was pretty crude/ugly, and I'm
not even sure it was in the OID generator rather than in one specific
routine that used OIDs :-(. (What I seem to recall having seen was some
code that called newoid() a second time if it got a zero. This is
bletcherous of course.)

As long as we've gotta avoid zero anyway, skipping over the whole
system-reserved range doesn't sound like a bad idea, even though it's
not going to guarantee anything. But it's got to be done at the bottom
level of the counter-incrementing logic, not higher up.

			regards, tom lane
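[Editor's sketch: the bottom-level counter fix Tom describes might look
roughly like this. It is a sketch only: the real counter lives in the
backend's C code, and 16384 is an assumed boundary for the
system-reserved range, not a value from this thread.]

```python
FIRST_USER_OID = 16384   # assumption: everything below this is reserved
OID_MASK = 0xFFFFFFFF    # OIDs are unsigned 32-bit

def next_oid(counter):
    """Advance the OID counter, wrapping at 32 bits but never
    returning zero or a value in the system-reserved range."""
    counter = (counter + 1) & OID_MASK
    if counter < FIRST_USER_OID:  # wrapped past 4G (or started low)
        counter = FIRST_USER_OID
    return counter
```

Wraparound thus lands at the start of the user range rather than at
zero, though as noted above this guarantees nothing about collisions
with already-assigned user OIDs.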
Bruce Momjian wrote:
> Seems we should throw an error on rollover, or at least jump from 0 to
> 17k to skip the system oids. That zero oid is going to cause a
> problem, perhaps. Of course, it has never happened, but maybe we
> should do it for people's sanity.

Skipping the system oids is no guarantee things will keep working on
rollover. In fact I'm not sure it would help at all. The only way would
be to compress the existing oids via a utility or go to 64 bit. Has
anyone actually run out?
Tom Lane wrote:
> Throwing an error will definitely not do --- that'd mean your whole
> installation comes to a screeching halt as soon as you hit 4G OIDs.
> That cure is way worse than the disease...

Is there really an alternative? There is no guarantee things will keep
working after a wraparound. Some applications might, some might not,
depending on how they work. Isn't throwing an error better than
continuing on, possibly doing the wrong thing?
Chris Bitmead <chrisb@nimrod.itg.telstra.com.au> writes:
> Is there really an alternative? There is no guarantee things will keep
> working after a wraparound. Some applications might, some might not,
> depending on how they work. Isn't throwing an error better than
> continuing on, possibly doing the wrong thing?

No. Throwing an error (refusing to wrap the counter around) means that
you will refuse to create any more rows until after the dbadmin dumps,
initdbs, and reloads. (If you think that's bad, consider also that in
its current form pg_dumpall would fail under those conditions, meaning
you couldn't even dump.) Stopping the whole DB dead in the water cannot
be a better answer than *anything*.

The main point of my prior post is that a well-designed DB will have
unique indexes to guarantee uniqueness of anything it needs to assume is
unique ... such as OIDs in particular tables. The presence of those
indexes might mean that individual transactions fail (when they try to
use a duplicate OID value), and that's surely nasty. But it's not as
nasty as forcibly shutting down ALL inserts, which is what throwing an
error implies.

Obviously this isn't a perfect, no-need-for-improvement approach. We
need 64-bit OIDs or the ability not to assign OIDs to tables that don't
need 'em before we can really say "problem solved". But throwing an
error on wraparound is *worse* than the laissez-faire approach we take
now. It's at least possible to design a DB to survive wraparound now; if
we throw an error survival is not possible.

			regards, tom lane