Hi,
we have a rather uncommon case - a DB with ~50GB of data, but spread
across ~80,000 tables.
Running pg_dump -Fd -jxx dumps in parallel, but only the data; MOST of
the time is spent on queries that run sequentially and that, as far as
I can tell, fetch the schema of each table and the sequence values.
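For reference, the invocation looks roughly like this (database name, job count, and output directory are placeholders, not from the original report):

```shell
# Directory-format dump with parallel workers; -j controls
# how many table-data jobs run concurrently.
pg_dump -Fd -j 8 -f /path/to/dumpdir mydb

# The COPY of table data is spread across the 8 workers, but the
# per-table catalog/schema queries are issued one at a time by the
# single pg_dump control process, so with ~80,000 tables that
# serial phase dominates. Turning on statement logging, e.g.
#   ALTER SYSTEM SET log_min_duration_statement = 0;
# makes the long stream of per-table queries visible in the server
# log before any COPY starts.
```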
This happens on Pg 9.5. Are there any plans to make schema retrieval
faster for such cases? Either by parallelizing it, or at least by
fetching the schema for all tables "at once" and having pg_dump "sort
it out", instead of querying the schema for each table separately?
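To illustrate the "all at once" idea, a single batched catalog query could return column information for every table in one round trip; this is only a sketch of the concept - pg_dump's real catalog queries are considerably more involved:

```shell
# Illustrative only: one query over pg_class/pg_attribute fetching
# column metadata for all ordinary tables at once, instead of one
# query per table. "mydb" is a placeholder database name.
psql -d mydb -c "
SELECT c.oid, c.relname, a.attname, a.atttypid
FROM pg_class c
JOIN pg_attribute a ON a.attrelid = c.oid
WHERE c.relkind = 'r'
  AND a.attnum > 0
  AND NOT a.attisdropped
ORDER BY c.oid, a.attnum;"
```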
Best regards,
depesz
--
The best thing about modern society is how easy it is to avoid contact with it.
http://depesz.com/