Logical replication: Not feasible due to the number of schemas, tables, and overall data volume
I'm not sure why this is not feasible. Can you expand on this?
* For a 15 TB database with roughly 1 day downtime, what would be the most reliable approach to migrate from RHEL 7 → RHEL 9 while avoiding collation/index corruption issues?
pg_dump is the most reliable, and the slowest. Keep in mind that only the actual data needs to move over (not the indexes, which get rebuilt after the data is loaded). You could also mix and match logical replication and pg_dump if you have a few tables that are super large. Whether either approach fits in your 24-hour window is hard to say without running some tests.
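For the bulk of the data, a directory-format dump with parallel jobs is the usual way to speed pg_dump up. A minimal sketch, run from the new server; the host name, database name, path, and job counts below are placeholders to tune for your hardware:

```
# Parallel directory-format dump, pulling from the old RHEL 7 host:
pg_dump -h old-rhel7.example.com -d mydb -Fd -j 8 -f /backup/mydb.dir

# Parallel restore into a pre-created database on the new server;
# indexes are rebuilt during this step:
pg_restore -d mydb -j 8 /backup/mydb.dir
```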
* Would using pg_upgrade (with --check and --clone options) be safe when moving between OS versions with different glibc libraries?
No, you cannot use pg_upgrade for this. It can move your cluster across Postgres major versions, but not across servers or operating systems.
* If we temporarily remain on PostgreSQL 11, is it mandatory to rebuild all indexes after restoring the base backup on RHEL 9 to ensure data consistency? Would running REINDEX DATABASE across all databases be sufficient?
Yes, and yes.
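A sketch of that reindex step. Note that on PostgreSQL 11 REINDEX takes exclusive locks (REINDEX CONCURRENTLY only arrived in v12, and parallel reindexdb --jobs in v13), so run it before letting applications back in:

```
# Rebuild every index in every database on the restored RHEL 9 server,
# echoing the generated REINDEX commands as they run:
reindexdb --all --echo

# Equivalent per-database form from psql:
# REINDEX DATABASE mydb;
```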
* Are there any community-tested procedures or best practices for migrating large (15 TB+) environments between RHEL 7 and RHEL 9 with minimal downtime?
Yes - logical replication is both battle-tested and best practice for such an upgrade. But with such a large downtime window, investigate pg_dump to v18. You can pick one of your largest tables and dump just that table to start getting some measurements, e.g. run this from the new server:
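(A sketch of such a test; the host, database, and table names below are placeholders.)

```
# Time an end-to-end dump and load of one large table, run from the
# new RHEL 9 server:
time pg_dump -h old-rhel7.example.com -U postgres -d mydb \
    --table=public.big_table \
  | psql -d mydb
```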
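For the logical replication route itself, a minimal sketch using native publications and subscriptions (available since PostgreSQL 10; the connection values are placeholders, and the publisher needs wal_level = logical). Keep in mind that logical replication is per-database, so each database needs its own publication/subscription pair:

```
-- On the old RHEL 7 publisher:
CREATE PUBLICATION migration_pub FOR ALL TABLES;

-- On the new RHEL 9 subscriber, after loading the schema
-- (pg_dump --schema-only):
CREATE SUBSCRIPTION migration_sub
    CONNECTION 'host=old-rhel7.example.com dbname=mydb user=replicator'
    PUBLICATION migration_pub;
```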