On 10/01/2011 05:48 PM, Joe Abbate wrote:
> On 10/01/2011 05:08 PM, Andrew Dunstan wrote:
>> While investigating a client problem I just observed that pg_dump takes
>> a surprisingly large amount of time to dump a schema with a large number
>> of views. The client's hardware is quite spiffy, and yet pg_dump is
>> taking many minutes to dump a schema with some 35,000 views. Here's a
>> simple test case:
>>
>> create schema views;
>> do 'begin
>>   for i in 1 .. 10000 loop
>>     execute $$create view views.v_$$ || i || $$ as
>>       select current_date as d, current_timestamp as ts,
>>              $_$a$_$::text || n as t, n
>>       from generate_series(1,5) as n$$;
>>   end loop;
>> end;';
>>
>>
>> On my modest hardware this database took 4m18.864s for pg_dump to run.
>> Should we be looking at replacing the retail operations which consume
>> most of this time with something that runs faster?
> How modest? Was there anything else in the database? I tried with 9000
> views (because I didn't want to bother increasing
> max_locks_per_transaction), and pg_dump ran in less than 10 seconds
> (8.991s), redirecting the plain-text output to a file (this is on a
> Core i5).
Yeah, it must be pretty modest :-) On more powerful h/w I get the same
result you do. I need to dig further into why it's taking so long to
dump my client's schema on server-class hardware.
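
For what it's worth, the retail operation in question is a separate
pg_get_viewdef() query per view (you can see one in the error below).
The obvious first experiment would be to fetch all the definitions in a
single round trip, something like this (a sketch of the idea, not a
query pg_dump actually issues):

    -- one query for every view definition, instead of one per view
    SELECT c.oid, n.nspname, c.relname,
           pg_catalog.pg_get_viewdef(c.oid) AS viewdef
    FROM pg_catalog.pg_class c
         JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
    WHERE c.relkind = 'v';

That would at least tell us whether it's the per-view round trips or
the ruleutils work itself that's eating the time.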
>> There is also this gem of behaviour, which is where I started:
>>
>>     p1                      p2
>>     begin;
>>     drop view foo;
>>                             pg_dump
>>     commit;
>>                             boom.
>>
>> with this error:
>>
>> 2011-10-01 16:38:20 EDT [27084] 30063 ERROR: could not open relation with OID 133640
>> 2011-10-01 16:38:20 EDT [27084] 30064 STATEMENT: SELECT pg_catalog.pg_get_viewdef('133640'::pg_catalog.oid) AS viewdef
>>
>> Of course, this isn't caused by having a large catalog, but it's
>> terrible nevertheless. I'm not sure what to do about it.
> Couldn't you run pg_dump with --lock-wait-timeout?
>
How would that help? This isn't a lock failure: pg_dump only takes locks
on tables, and --lock-wait-timeout merely bounds how long it waits for
those. What's happening here is that the dump's snapshot still lists the
dropped view, but pg_get_viewdef() does its catalog lookup afresh, and
by then the relation is gone.
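
You can reproduce it by hand like this (a sketch; the OID is just
whatever pg_class reports for the view):

    -- session 2: take a snapshot the way pg_dump does, and note the OID
    begin isolation level repeatable read;
    select oid from pg_class where relname = 'foo';  -- say it returns 133640

    -- session 1: drop the view and commit
    drop view foo;

    -- session 2: the snapshot still lists the view, but pg_get_viewdef()
    -- looks the relation up afresh and it's gone:
    select pg_catalog.pg_get_viewdef(133640::pg_catalog.oid);
    -- ERROR:  could not open relation with OID 133640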
cheers
andrew