Determining Indexes to Rebuild on new libc - Mailing list pgsql-admin

From Don Seiler
Subject Determining Indexes to Rebuild on new libc
Date
Msg-id CAHJZqBBWUAzKKkTaOf_QDYtmuMavntg4ba-_ScNJRKkuxtRsyw@mail.gmail.com
Responses Re: Determining Indexes to Rebuild on new libc  (Ron <ronljohnsonjr@gmail.com>)
Re: Determining Indexes to Rebuild on new libc  (Jim Mlodgenski <jimmy76@gmail.com>)
List pgsql-admin
Good morning,

As we're staring down the eventuality of having to migrate to a newer OS (currently on Ubuntu 18.04 LTS), we're preparing for the collation change madness that will ensue. We're looking at logical replication, but there is a lot to unpack there as well, given the number of databases and the massive size of a few of them. I had been inclined to bite the bullet and do logical replication (or dump/restore on the smaller DBs), but the project timeframe is being pushed up, so I'm looking for shortcuts where possible (obviously without risking DB integrity). This would also give me the opportunity for other changes, like enabling data checksums on the new DBs, something I have sorely wanted for years now.

One question that gets asked is whether we could do physical replication, cut over, and then rebuild only the indexes that "need it" in order to minimize the subsequent downtime; i.e., can we determine which indexes will actually have a potential problem? For example, a lot of indexes are on text/varchar fields that hold UUID data and nothing more (basic alphanumeric characters with embedded hyphens). If we can be certain that these fields truly hold only this type of data, could we skip rebuilding them after the cutover to a newer OS (e.g. Ubuntu 22.04 LTS with the newer libc collation)?
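For what it's worth, one way to gain some confidence in the "UUID-only" assumption is to sample the column's values and check them against a strict pattern before deciding to skip a reindex. Below is a minimal Python sketch; the `is_uuid_only` helper and the sample data are hypothetical, not part of any PostgreSQL tooling. Even if every sampled value passes, you would still want to confirm that the old and new libc versions actually sort such strings identically before skipping a rebuild.

```python
import re

# Canonical 8-4-4-4-12 hex UUID layout, hyphens included.
UUID_RE = re.compile(
    r'[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-'
    r'[0-9a-fA-F]{4}-[0-9a-fA-F]{12}'
)

def is_uuid_only(values):
    """Return True if every sampled value is a canonical UUID string.

    A True result only says the data matches the pattern; it does not by
    itself prove the collation orders these strings the same way on both
    libc versions.
    """
    return all(UUID_RE.fullmatch(v) for v in values)

# Hypothetical sample, as if pulled from the indexed column.
sample = ['6f1c1b2a-0d3e-4b5a-9c8d-1e2f3a4b5c6d', 'not-a-uuid']
print(is_uuid_only(sample[:1]))  # True
print(is_uuid_only(sample))      # False
```

In practice you would feed this a `SELECT DISTINCT col FROM ...` sample (or the whole column, if feasible) for each candidate index before putting it on the skip list.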

Thanks,
Don.

--
Don Seiler
www.seiler.us
