Re: speed up pg_upgrade with large number of tables - Mailing list pgsql-hackers

From Nathan Bossart
Subject Re: speed up pg_upgrade with large number of tables
Date
Msg-id ZogNYtkXvYVAMGAf@nathan
In response to Re: speed up pg_upgrade with large number of tables  ("杨伯宇(长堂)" <yangboyu.yby@alibaba-inc.com>)
Responses Re: speed up pg_upgrade with large number of tables
List pgsql-hackers
On Fri, Jul 05, 2024 at 05:24:42PM +0800, 杨伯宇(长堂) wrote:
>> > So, I'm thinking, why not add a "--skip-check" option in pg_upgrade to skip it?
>> > See "1-Skip_Compatibility_Check_v1.patch".
>> 
>> How would a user know that nothing has changed in the cluster between running
>> the check and running the upgrade with a skipped check? Considering how
>> complicated it is to understand exactly what pg_upgrade does it seems like
>> quite a large caliber footgun.

I am also -1 on this one for the same reasons as Daniel.
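
For anyone following along, the flow that a --skip-check option would
short-circuit looks roughly like this (the bindir/datadir paths below are
only illustrative):

    # run just the compatibility checks against the old cluster
    pg_upgrade --check \
        --old-bindir=/usr/lib/postgresql/16/bin \
        --new-bindir=/usr/lib/postgresql/17/bin \
        --old-datadir=/var/lib/postgresql/16/data \
        --new-datadir=/var/lib/postgresql/17/data

    # ...anything could change in the old cluster in the meantime...

    # the actual upgrade repeats those same checks before touching any
    # data, which is exactly the safety net a skip option would remove
    pg_upgrade \
        --old-bindir=/usr/lib/postgresql/16/bin \
        --new-bindir=/usr/lib/postgresql/17/bin \
        --old-datadir=/var/lib/postgresql/16/data \
        --new-datadir=/var/lib/postgresql/17/data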

>> I would be much more interested in making the check phase go faster, and indeed
>> there is ongoing work in this area. Since it sounds like you have a dev and
>> test environment with a big workload, testing those patches would be helpful.
>> https://commitfest.postgresql.org/48/4995/ is one that comes to mind.
> Very meaningful work! I will try it.

Thanks!  Since you mentioned that you have multiple databases with 1M+
tables, you might also be interested in commit 2329cad.  That should
speed up the pg_dump step quite a bit.

-- 
nathan


