I have a patch in the queue [1] that, among other things, tries to reduce the number of XIDs consumed during pg_upgrade by making pg_restore group its commands into batches of a thousand or so per transaction. This had been passing tests, so I was quite surprised when the cfbot started to show it as falling over. Investigation showed that it is now failing because commit 6185c9737 added these objects to the regression tests and didn't drop them:
CREATE TYPE rainbow AS ENUM
  ('red', 'orange', 'yellow', 'green', 'blue', 'purple');
CREATE DOMAIN rgb AS rainbow
  CHECK (VALUE IN ('red', 'green', 'blue'));
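(For context, the batching in [1] amounts to having pg_restore wrap runs of restore commands in explicit transactions, roughly like this; a sketch only, with made-up commands and an arbitrary batch boundary:

BEGIN;
CREATE TABLE public.t1 (a int);
CREATE TABLE public.t2 (b text);
-- ... up to a thousand or so commands per batch ...
COMMIT;
)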
In binary-upgrade mode, pg_dump dumps the enum type like this:
CREATE TYPE public.rainbow AS ENUM (
);
ALTER TYPE public.rainbow ADD VALUE 'red';
ALTER TYPE public.rainbow ADD VALUE 'orange';
ALTER TYPE public.rainbow ADD VALUE 'yellow';
ALTER TYPE public.rainbow ADD VALUE 'green';
ALTER TYPE public.rainbow ADD VALUE 'blue';
ALTER TYPE public.rainbow ADD VALUE 'purple';
and then, if we're within the same transaction, creation of the domain falls over with:

ERROR:  unsafe use of new value "red" of enum type rainbow
HINT:  New enum values must be committed before they can be used.
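Indeed, it's easy to reproduce by hand by putting the dumped commands and the domain creation into one explicit transaction (on v12 or later; older servers reject ALTER TYPE ... ADD VALUE inside a transaction block entirely):

BEGIN;
CREATE TYPE public.rainbow AS ENUM (
);
ALTER TYPE public.rainbow ADD VALUE 'red';
ALTER TYPE public.rainbow ADD VALUE 'orange';
ALTER TYPE public.rainbow ADD VALUE 'yellow';
ALTER TYPE public.rainbow ADD VALUE 'green';
ALTER TYPE public.rainbow ADD VALUE 'blue';
ALTER TYPE public.rainbow ADD VALUE 'purple';
-- the CREATE DOMAIN fails with the error shown above
CREATE DOMAIN public.rgb AS public.rainbow
  CHECK (VALUE IN ('red', 'green', 'blue'));
ROLLBACK;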
So I'm glad we found that sooner rather than later, but something needs to be done about it if [1] is to get committed. It doesn't seem particularly hard to fix, though: we just have to track the OIDs of enum types created in the current transaction, using largely the same approach as pg_enum.c already uses to track enum value OIDs. enum.sql contains a comment opining that this is too expensive, but I don't see why it is, as long as we don't bother to track commands that are nested within subtransactions. That would be a bit complicated to do correctly, but pg_dump/pg_restore doesn't need it.
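To be concrete about the subtransaction point: if we only record enum types created at the outermost transaction level, a sequence like this (made-up names) would still be rejected under the proposed scheme, which is fine because pg_dump/pg_restore never issues anything of the sort:

BEGIN;
SAVEPOINT sp;
CREATE TYPE public.nested_rainbow AS ENUM ();
ALTER TYPE public.nested_rainbow ADD VALUE 'red';
RELEASE SAVEPOINT sp;
-- still fails: the type was created inside a subtransaction,
-- so it would not be recorded as made in the current transaction
SELECT 'red'::public.nested_rainbow;
ROLLBACK;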