My organization has a number of very large tables (most in the
hundreds of GB, a couple over a TB) that were created many years ago
by a tool that defaulted to integer PKs rather than bigint. Those PKs
are referenced by a number of integer FKs in related tables as well.
We shouldn't have let them get so big, but that's a conversation for
another day.
Some are approaching overflow and we're slowly doing the work to
migrate to bigint, mostly via the well-understood "add a new id_bigint
column, populate on new tuples, backfill the old, switch the PK"
method. The backfill is slow on these large tables, but it works and
there are plenty of blog posts and documentation to follow.
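For concreteness, a rough sketch of the shape on a hypothetical
"orders" table (the trigger or application change that populates
new rows is omitted):

  ALTER TABLE orders ADD COLUMN id_bigint bigint;

  -- backfill in batches to keep transactions short and bloat
  -- manageable; repeat per key range, then do the same for the
  -- integer FK columns in related tables
  UPDATE orders SET id_bigint = id
   WHERE id BETWEEN 1 AND 100000 AND id_bigint IS NULL;

  -- finally, swap the PK over to the new column (the PRIMARY KEY
  -- addition also validates NOT NULL, which scans the table)
  CREATE UNIQUE INDEX CONCURRENTLY orders_id_bigint_key
      ON orders (id_bigint);
  ALTER TABLE orders
    DROP CONSTRAINT orders_pkey,
    ADD CONSTRAINT orders_pkey
      PRIMARY KEY USING INDEX orders_id_bigint_key;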
It did make me curious though: would it be possible for postgres to
support gradual migration from integer to bigint in a more transparent
way, where new and updated tuples are written as bigint, but existing
tuples can be read as integer?
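(For contrast, the direct route today, i.e.

  -- int -> bigint is not binary compatible, so this rewrites every
  -- tuple while holding an ACCESS EXCLUSIVE lock on the table
  ALTER TABLE orders ALTER COLUMN id TYPE bigint;

is a non-starter at these sizes.)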
I assume one complication is that the catalog says the column is
either a 32-bit int or a 64-bit bigint, and making that conditional
per tuple is hard.
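For example, the column's type and width are fixed in pg_attribute
(same hypothetical table):

  SELECT attname, atttypid::regtype, attlen
    FROM pg_attribute
   WHERE attrelid = 'orders'::regclass AND attname = 'id';

   attname | atttypid | attlen
  ---------+----------+--------
   id      | integer  |      4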
There are presumably other considerations I'm unaware of too. My core
question: are there significant technical blockers to supporting this
kind of gradual in-place migration, or has it just not been enough of
a problem to have received attention?
James