Re: Re: Re: Re: speed up pg_upgrade with large number of tables - Mailing list pgsql-hackers

From 杨伯宇(长堂)
Subject Re: Re: Re: Re: speed up pg_upgrade with large number of tables
Date
Msg-id c00591ff-0203-479c-8547-b734f6ce3b29.yangboyu.yby@alibaba-inc.com
In response to Re: Re: Re: speed up pg_upgrade with large number of tables  (Nathan Bossart <nathandbossart@gmail.com>)
Responses Re: Re: Re: Re: Re: speed up pg_upgrade with large number of tables
List pgsql-hackers
> Thanks! Since you mentioned that you have multiple databases with 1M+
> tables, you might also be interested in commit 2329cad. That should
> speed up the pg_dump step quite a bit.
Wow, I noticed this commit (2329cad) when it appeared in the commitfest. It has
doubled the speed of pg_dump in this scenario. Thank you for your effort!

Besides, https://commitfest.postgresql.org/48/4995/ seems insufficient for
this situation. Some time-consuming functions, like check_for_data_types_usage,
are not yet able to run in parallel. But these patches could be a good
starting point for a more efficient parallel implementation. Maybe we can
do it later.
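To make the idea concrete, a minimal sketch of the kind of parallelism meant here: fanning a per-database check out across a pool of workers, the way pg_upgrade's --jobs option already parallelizes some steps. This is an illustrative Python sketch, not PostgreSQL code; the names run_check, check_all, and DATABASES are hypothetical.

```python
# Hypothetical sketch: running a per-database check (in the spirit of
# check_for_data_types_usage) across databases in parallel instead of
# serially. None of these names come from the PostgreSQL source.
from concurrent.futures import ThreadPoolExecutor


DATABASES = ["postgres", "db1", "db2", "db3"]  # databases in the old cluster


def run_check(dbname):
    # In the real check this would connect to `dbname` and query the
    # catalogs for problematic data types; here we simulate a pass.
    return (dbname, True)


def check_all(databases, jobs=4):
    # Dispatch one check per database, at most `jobs` at a time,
    # and collect the results keyed by database name.
    with ThreadPoolExecutor(max_workers=jobs) as pool:
        return dict(pool.map(run_check, databases))


print(check_all(DATABASES))  # every database passes in this simulation
```

With a check that is dominated by per-database catalog queries, wall-clock time for this step would scale roughly with the slowest database rather than the sum, which is what makes the parallel rework attractive for clusters with many databases.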
