> Testing pg_restore with different --jobs= values will be easier. pg_dump is what's going to be reading from a constantly varying system.
Hello,
Each time I have done a replatforming of this kind, with databases up to 2 TB, I created the target cluster, any users that were needed, then the appropriate databases, and finally ran a simple script to pipe pg_dump into psql, one database at a time.
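For reference, a minimal sketch of such a script (host names and the database list are placeholders, not from my actual setup, and this assumes the users and empty databases already exist on the target):

    #!/bin/sh
    # Sketch only: adjust hosts, ports, and database names to taste.
    SRC=source-host
    DST=target-host
    for db in sales inventory hr; do
        echo "Transferring $db ..."
        # Plain-format dump streamed straight into the target database,
        # so nothing is written to disk in between.
        pg_dump -h "$SRC" "$db" | psql -h "$DST" -d "$db"
    done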
So: one thread. Each time, the transfer was limited by network bandwidth. My last replatforming, over a 10 Gb/s network with a 1.5 TB database, sustained about 500 MB per second (~4 Gb/s), so the whole thing took less than an hour.
Which is just fine: launch it, have lunch and a coffee, and... done, for a test run. For production, I usually do it on the quietest night of the weekend, and have a nap (a short one!)... :-)