pg_dump issues - Mailing list pgsql-hackers

From Andrew Dunstan
Subject pg_dump issues
Msg-id 4E878132.4080903@dunslane.net
List pgsql-hackers
While investigating a client problem I just observed that pg_dump takes 
a surprisingly large amount of time to dump a schema with a large number 
of views. The client's hardware is quite spiffy, and yet pg_dump is 
taking many minutes to dump a schema with some 35,000 views. Here's a 
simple test case:
   create schema views;

   do 'begin
     for i in 1 .. 10000 loop
       execute $$create view views.v_$$ || i || $$ as select
         current_date as d, current_timestamp as ts,
         $_$a$_$::text || n as t, n
         from generate_series(1,5) as n$$;
     end loop;
   end;';
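
(If you replay this, a trivial query against the standard pg_views system
view confirms the setup before timing anything:)

   select count(*) from pg_catalog.pg_views where schemaname = 'views';
   -- should return 10000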


On my modest hardware, pg_dump took 4m18.864s to dump this database.
Should we be looking at replacing the retail operations (one query per
object) that consume most of this time with something that runs faster?

There is also this gem of behaviour, which is where I started:
   p1                        p2
   begin;
   drop view foo;
                             pg_dump
   commit;
                             boom.

with this error:
   2011-10-01 16:38:20 EDT [27084] 30063 ERROR:  could not open relation with OID 133640
   2011-10-01 16:38:20 EDT [27084] 30064 STATEMENT:  SELECT pg_catalog.pg_get_viewdef('133640'::pg_catalog.oid) AS viewdef
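
What seems to be going on (my reading, not verified against the code) is
that pg_dump's initial catalog scans still see the view under the
transaction snapshot it takes at startup, while pg_get_viewdef() does its
own lookups against the latest committed catalog state. Something along
these lines reproduces it by hand (the view name and OID are placeholders):

   -- session 1: stand in for pg_dump's snapshot
   begin transaction isolation level repeatable read;
   select oid from pg_catalog.pg_class where relname = 'foo';
   -- view is still visible here

   -- session 2: concurrently drop the view and commit
   drop view foo;

   -- session 1: the definition lookup goes around our snapshot
   select pg_catalog.pg_get_viewdef(<oid from above>);
   -- ERROR:  could not open relation with OID ...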

Of course, this isn't caused by having a large catalog, but it's 
terrible nevertheless. I'm not sure what to do about it.

cheers

andrew


