Re: Should pg_dump dump larger tables first? - Mailing list pgsql-hackers

From: Dimitri Fontaine
Subject: Re: Should pg_dump dump larger tables first?
Date:
Msg-id: m238xhd5un.fsf@2ndQuadrant.fr
In response to: Re: Should pg_dump dump larger tables first? (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-hackers
Tom Lane <tgl@sss.pgh.pa.us> writes:
> Also, it's far from obvious to me that "largest first" is the best rule
> anyhow; it's likely to be more complicated than that.
>
> But anyway, the right place to add this sort of consideration is in
> pg_restore --parallel, not pg_dump.  I don't know how hard it would be
> for the scheduler algorithm in there to take table size into account,
> but at least in principle it should be possible to find out the size of
> the (compressed) table data from examination of the archive file.

From some experience with pgloader and with loading data in migration
processes, the biggest gains are often had by loading the largest table
in parallel with all the small ones. The big table's load time is
usually unaffected, and by the time it finishes, the rest of the
database is done too.
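
For illustration, here is a minimal sketch of that heuristic in
Python. The load_table() routine and the (name, size) pairs are
hypothetical placeholders, not pgloader's actual scheduler:

    from concurrent.futures import ThreadPoolExecutor

    def load_table(name):
        # placeholder for the real per-table COPY/load routine
        print("loading", name)

    def load_all(tables, workers=4):
        # Sort descending by size: the biggest table grabs a worker
        # immediately, and the small ones drain on the remaining workers.
        ordered = sorted(tables, key=lambda t: t[1], reverse=True)
        with ThreadPoolExecutor(max_workers=workers) as pool:
            for name, _size in ordered:
                pool.submit(load_table, name)

    load_all([("events", 80000000), ("users", 500000), ("tags", 10000)])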

Loading several big tables in parallel has tended not to give any
benefit in the tests I've done so far, but that might be an artefact of
Python multithreading; I will do some testing with proper tooling later.
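
One cheap way to check whether the GIL is the limiting factor would be
to rerun the same workload with processes instead of threads and
compare timings. A sketch under that assumption, with the same
hypothetical load_table() as above:

    from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor

    def load_table(name):
        # same hypothetical per-table load routine as above
        print("loading", name)

    def run(pool_cls, tables, workers=4):
        # identical workload; only the pool implementation changes
        with pool_cls(max_workers=workers) as pool:
            list(pool.map(load_table, tables))

    # run(ThreadPoolExecutor, [...])   # GIL-bound if the work is CPU-heavy
    # run(ProcessPoolExecutor, [...])  # sidesteps the GIL entirely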

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr     PostgreSQL : Expertise, Formation et Support


