Dear Team,

Thank you for your valuable input on the PostgreSQL upgrade.
Given the challenges we encountered with pg_upgrade in a Kubernetes environment, we are considering a more traditional approach: take a full backup with pg_dumpall and then restore the data with the psql utility.
Could you please advise whether this approach is acceptable, or whether you see any issues with it?
A high-level overview of the steps:
1. Backup:
* Connect to the existing PostgreSQL 12 pod.
* Execute pg_dumpall to create a complete dump of all databases and roles (see the example commands after the list below).
2. New Deployment:
* Create a new PostgreSQL 16 pod.
* I don't think initdb is needed here, since the new pod's data directory will be initialized automatically on first startup.
3. Restore:
* Transfer the backup file to the new pod.
* Use the psql utility to restore the databases from the dump, as shown below.
4. Verification:
* Thoroughly test the restored database to ensure data integrity and functionality.
5. Cutover:
* Once verification is complete, switch over traffic to the new PostgreSQL 16 pod.
* Delete the old PostgreSQL 12 pod.
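For illustration, a rough sketch of the backup and restore commands. The pod names (postgres12-0, postgres16-0), the postgres superuser, and the /tmp staging path are placeholders and would need to be adjusted to our actual deployment:

    # Step 1: dump all roles, tablespaces, and databases from the old PostgreSQL 12 pod
    kubectl exec postgres12-0 -- pg_dumpall -U postgres > full_dump.sql

    # Step 3: copy the dump into the new PostgreSQL 16 pod and restore it with psql
    kubectl cp full_dump.sql postgres16-0:/tmp/full_dump.sql
    kubectl exec postgres16-0 -- psql -U postgres -d postgres -f /tmp/full_dump.sql

Alternatively, the dump could be streamed directly between the pods (kubectl exec old -- pg_dumpall ... | kubectl exec -i new -- psql ...) to avoid staging a file, at the cost of not keeping a copy of the dump on hand.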
Best Regards,
Ramzy
> On Nov 19, 2024, at 1:40 PM, Kris Deugau <kdeugau@vianet.ca> wrote:
>
> I stand corrected. I hadn't read the docs on pg_upgrade for quite a while, but after reading the last section in https://www.postgresql.org/docs/current/pgupgrade.html:
>
> "If you did not start the new cluster, the old cluster was unmodified except that, when linking started, a .old suffix was appended to $PGDATA/global/pg_control. To reuse the old cluster, remove the .old suffix from $PGDATA/global/pg_control; you can then restart the old cluster."
>
> I see what you mean.
>
> There's nothing wrong per se about taking the snapshot before, I was just saving the potential time of re-running pg_upgrade. Heck, take a snapshot before *and* after ;-)