I'm testing out various pg_dump scenarios using the -n switch and I have
a few questions:
- When using the -n switch, is the whole schema locked from all non-read
DML/DDL operations?
- If the whole schema is locked, once each table is dumped, is it then
released for non-read DML/DDL operations?
- Once the final table is dumped
(i.e. pg_dump: dumping contents of table zip_data),
are there additional background processes that are still performing
maintenance tasks? Or is the entire process complete and all objects
released at that point?
I'm asking because I have a schema with one large, heavily indexed table
that accounts for the majority of the dump time. A dump of the full
schema takes about 4 hours.
As a workaround, we run 2 separate dumps in parallel: one of the schema
excluding the large table, and one of only the large table. The dump of
just the large table takes 2.5 hours; however, the dump of the schema
excluding the large table still takes 4 hours.
If pg_dump locks each table individually and releases the lock as soon
as that table is dumped, then I must be encountering lock contention.
Also, I use the -v switch, but it doesn't tell me how long the dump of
each object takes. Is there an option that would let me collect that
timing information in the log file?
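In case it helps anyone: the workaround I'm considering (an assumption
on my part, not a built-in pg_dump option) is that pg_dump -v writes its
progress messages to stderr, so prepending a timestamp to each line
gives rough per-object timings:

```shell
# Prepend a wall-clock timestamp to every input line.
stamp() {
  while IFS= read -r line; do
    printf '%s %s\n' "$(date +%H:%M:%S)" "$line"
  done
}

# Real usage would be something like (hypothetical database name "mydb"):
#   pg_dump -v mydb 2>&1 | stamp > dump.log
# Demo on a sample verbose line:
echo "pg_dump: dumping contents of table zip_data" | stamp
```

The gap between consecutive timestamps is then the time spent on each
object.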
Thanks.
Tony