Tom Lane <tgl@sss.pgh.pa.us> writes:
> Also, it's far from obvious to me that "largest first" is the best rule
> anyhow; it's likely to be more complicated than that.
>
> But anyway, the right place to add this sort of consideration is in
> pg_restore --parallel, not pg_dump. I don't know how hard it would be
> for the scheduler algorithm in there to take table size into account,
> but at least in principle it should be possible to find out the size of
> the (compressed) table data from examination of the archive file.
From my experience with pgloader and loading data in migration
processes, the biggest gains usually come from loading the single
biggest table in parallel with all the little ones. The big table's
loading time is typically unaffected, and by the time it's done the
rest of the database is done too.
Loading several big tables in parallel has not shown benefits in the
tests I've done so far, but that might be an artefact of Python
multi-threading; I will do some testing with proper tooling later.
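
For illustration only, here's a rough sketch of that "one big table
plus all the small ones" heuristic — this is not pgloader or pg_restore
code, and load_table() and the (name, size) list are made-up stand-ins:

    # Sketch: dedicate one worker to the biggest table, let the other
    # workers drain the queue of smaller tables in the meantime.
    from concurrent.futures import ThreadPoolExecutor, wait

    def load_table(name):
        # placeholder for the actual COPY / restore of one table
        print("loading", name)

    def parallel_load(tables, workers=4):
        # tables: list of (name, size_in_bytes)
        ordered = sorted(tables, key=lambda t: t[1], reverse=True)
        biggest, rest = ordered[0], ordered[1:]
        with ThreadPoolExecutor(max_workers=workers) as pool:
            futures = [pool.submit(load_table, biggest[0])]
            futures += [pool.submit(load_table, name) for name, _ in rest]
            wait(futures)

    if __name__ == "__main__":
        parallel_load([("big_facts", 50_000_000_000),
                       ("users",       200_000_000),
                       ("lookup_a",      1_000_000),
                       ("lookup_b",        500_000)])

The point is only the scheduling: the biggest table starts first and
keeps one worker busy for the whole run, while the remaining workers
chew through everything else.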
Regards,
--
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Training and Support