On Wed, Jul 17, 2024 at 9:26 AM Thomas Simpson <ts@talentstack.to> wrote:
[snip]
Huge time; I know the hardware is capable of multi-GB/s throughput, but the reload is taking a long time - projected to be about 10 days at the current rate (about 30Mb/sec). The old server and new server have a 10G link between them and storage is SSD-backed, so the hardware is capable of much, much more than it is doing now.
Is there a way to improve the reload performance? Tuning of any type - even if I need to undo it later once the reload is done.
That would, of course, depend on what you're currently doing. pg_dumpall of a big database is certainly suboptimal compared to "pg_dump -Fd --jobs=24".
This is what I run (which I got mostly from a databasesoup.com blog post) on the target instance before doing "pg_restore -Fd --jobs=24":
Of course, these parameter values were for my hardware.
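[The exact list was snipped above; for anyone following along, this is a sketch of the kind of bulk-load settings that blog post recommends. The values are illustrative assumptions for a large-RAM box, not a recipe - and the durability-related ones (fsync, full_page_writes) are only safe while you can simply re-run the restore from scratch if the server crashes.]

```
# postgresql.conf - illustrative restore-time settings (values are assumptions)
maintenance_work_mem = 2GB     # speeds up index builds during the restore
max_wal_size = 16GB            # fewer checkpoints while loading
checkpoint_timeout = 30min
wal_level = minimal            # minimal WAL; no replication during the load
max_wal_senders = 0            # required when wal_level = minimal
archive_mode = off
autovacuum = off               # don't vacuum mid-restore; run VACUUM ANALYZE after
full_page_writes = off         # unsafe normally; acceptable if a crash just means re-restoring
fsync = off                    # likewise: only while the restore is re-runnable
synchronous_commit = off
```

Remember to revert all of these (and restart) once the reload finishes.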
My backups were in progress when all the issues happened, so they're not a good starting point, and I'd actually prefer the clean reload anyway: this DB has been through multiple upgrades (without reloads) until now, so I know it's not especially clean. The size has always ruled out a full reload before, but the database is relatively low-traffic now, so I can afford some time to reload - ideally not 10 days, though.