"David Rowley" <dgrowleyml@gmail.com> writes:
> If pg_dump were still to follow the dependencies of objects, would
> there be any reason why it shouldn't back up larger tables first?

Pretty much every single discussion/complaint about pg_dump's ordering
choices has been about making its behavior more deterministic, not less
so, and I can't imagine such a change would go over well with most
folks.  Also, it's far from obvious to me that "largest first" is the
best rule anyhow; the optimal rule is likely to be more complicated
than that.

But anyway, the right place to add this sort of consideration is in
pg_restore --parallel, not pg_dump. I don't know how hard it would be
for the scheduler algorithm in there to take table size into account,
but at least in principle it should be possible to find out the size of
the (compressed) table data from examination of the archive file.
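
For illustration, here's a minimal sketch of what largest-first
selection might look like in the parallel scheduler.  This isn't the
actual pg_restore code; the TocEntry layout and dataLength field below
are assumed stand-ins for whatever per-table size information the
archive records:

#include <stddef.h>

/*
 * Hypothetical sketch, not actual pg_restore code: among the TOC
 * entries whose dependencies are already satisfied, pick the one with
 * the largest recorded (compressed) data length, so the biggest tables
 * start restoring first and don't straggle at the end of the run.
 */
typedef struct TocEntry
{
    const char *tag;        /* object name */
    long        dataLength; /* assumed: compressed data size from the archive */
} TocEntry;

static TocEntry *
pick_largest_ready(TocEntry **ready, int nready)
{
    TocEntry   *best = NULL;

    for (int i = 0; i < nready; i++)
    {
        if (best == NULL || ready[i]->dataLength > best->dataLength)
            best = ready[i];
    }
    return best;
}

Note that the dependency ordering would still decide which entries are
"ready"; size only chooses among those, so the restore stays correct
while the parallel workers pack better.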

			regards, tom lane