o.blomqvist@secomintl.com (Otto Blomqvist) writes:
> I have a small database (10MB gzipped dump). When I do a pg_dump -l -s
> (to list the schema) of the original database, it takes under a second.
> But when I do the same against a copy of the database (created by a full
> restore into a new DB), listing the schema (pg_dump -l -s) takes 10-15
> seconds. I need to compare the schemas of about 20 tables and this
> takes a while... Anyone have any ideas? I can't figure out why the
> newly created copy would be so much slower.
The first thought that comes to mind is that the new database needs to
be VACUUM ANALYZEd. pg_dump does some fairly complicated queries
against the system catalogs, and it's not surprising that you might see
bad plans for those queries if the statistics aren't up-to-date.
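As a minimal sketch, connected to the newly restored database with psql:

```sql
-- Reclaim dead space and refresh planner statistics for all tables,
-- including the system catalogs that pg_dump's queries run against:
VACUUM ANALYZE;
```

The same can be done from the shell with vacuumdb -z <dbname> (the -z
flag adds the ANALYZE step).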
If VACUUM ANALYZE doesn't help, I'd be interested to look more closely.
regards, tom lane