Re: [HACKERS] pg_dump and thousands of schemas - Mailing list pgsql-performance
From | Denis
---|---
Subject | Re: [HACKERS] pg_dump and thousands of schemas
Date |
Msg-id | 1352216421913-5730877.post@n5.nabble.com
In response to | Re: [HACKERS] pg_dump and thousands of schemas (Tom Lane <tgl@sss.pgh.pa.us>)
Responses | Re: [HACKERS] pg_dump and thousands of schemas
List | pgsql-performance
Tom Lane-2 wrote
> Denis <socsam@> writes:
>> I've read all the posts in this thread, and as I understood it, in version
>> 9.2 some patches were applied to improve pg_dump speed. I've just
>> installed PostgreSQL 9.2.1 and I still have the same problem. I have a
>> database with 2600 schemas in it. I try to dump each schema individually,
>> but it takes too much time for every schema (about 30-40 seconds per
>> schema, no matter what the data size is).
>
> Could you provide a test case for that? Maybe the output of pg_dump -s,
> anonymized as you see fit?
>
>> Also for each schema dump I have a slow query log entry,
>
> Could you provide EXPLAIN ANALYZE output for that query?
>
> regards, tom lane

Here is the output of EXPLAIN ANALYZE. It took 5 seconds this time, but it usually takes from 10 to 15 seconds when I am doing a backup.
Sort  (cost=853562.04..854020.73 rows=183478 width=219) (actual time=5340.477..5405.604 rows=183924 loops=1)
  Sort Key: c.oid
  Sort Method: external merge  Disk: 33048kB
  ->  Hash Left Join  (cost=59259.80..798636.25 rows=183478 width=219) (actual time=839.297..4971.299 rows=183924 loops=1)
        Hash Cond: (c.reltoastrelid = tc.oid)
        ->  Hash Right Join  (cost=29530.77..146976.65 rows=183478 width=183) (actual time=404.959..3261.462 rows=183924 loops=1)
              Hash Cond: ((d.classid = c.tableoid) AND (d.objid = c.oid) AND (d.refclassid = c.tableoid))
              Join Filter: (c.relkind = 'S'::"char")
              ->  Seq Scan on pg_depend d  (cost=0.00..71403.54 rows=995806 width=20) (actual time=1.137..878.571 rows=998642 loops=1)
                    Filter: ((objsubid = 0) AND (deptype = 'a'::"char"))
                    Rows Removed by Filter: 2196665
              ->  Hash  (cost=21839.91..21839.91 rows=183478 width=175) (actual time=402.947..402.947 rows=183924 loops=1)
                    Buckets: 1024  Batches: 32  Memory Usage: 876kB
                    ->  Seq Scan on pg_class c  (cost=0.00..21839.91 rows=183478 width=175) (actual time=0.017..267.614 rows=183924 loops=1)
                          Filter: (relkind = ANY ('{r,S,v,c,f}'::"char"[]))
                          Rows Removed by Filter: 383565
        ->  Hash  (cost=18333.79..18333.79 rows=560979 width=40) (actual time=434.258..434.258 rows=567489 loops=1)
              Buckets: 4096  Batches: 32  Memory Usage: 703kB
              ->  Seq Scan on pg_class tc  (cost=0.00..18333.79 rows=560979 width=40) (actual time=0.003..273.418 rows=567489 loops=1)
  SubPlan 1
    ->  Seq Scan on pg_authid  (cost=0.00..1.01 rows=1 width=68) (actual time=0.001..0.001 rows=1 loops=183924)
          Filter: (oid = c.relowner)
          Rows Removed by Filter: 2
  SubPlan 2
    ->  Seq Scan on pg_tablespace t  (cost=0.00..1.02 rows=1 width=64) (actual time=0.001..0.001 rows=0 loops=183924)
          Filter: (oid = c.reltablespace)
          Rows Removed by Filter: 2
  SubPlan 3
    ->  Function Scan on unnest x  (cost=0.00..1.25 rows=100 width=32) (actual time=0.001..0.001 rows=0 loops=183924)
Total runtime: 5428.498 ms

Here is the output of "pg_dump -s": test.dump
<http://postgresql.1045698.n5.nabble.com/file/n5730877/test.dump>
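As an aside, the "Sort Method: external merge  Disk: 33048kB" line and the 32-batch hashes in the plan above are the usual symptoms of a small work_mem: the sort spills 33MB to disk. A hedged sketch of retrying with a larger setting via PGOPTIONS, which libpq clients such as pg_dump honor; the 64MB figure is an assumption sized to hold the on-disk sort, not a value from this thread:

```shell
# PGOPTIONS is passed to every server backend that libpq clients
# (pg_dump, psql) open; -c work_mem=64MB raises the per-sort memory.
# 64MB is an assumed value chosen to fit the 33MB external sort.
PGOPTIONS='-c work_mem=64MB'
export PGOPTIONS
# echo keeps this a dry run; remove it to actually perform the dump.
echo pg_dump -n schema_0001 -f schema_0001.dump mydb
```

Whether this removes the per-schema fixed cost is a separate question, but it should at least keep the catalog sort in memory.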
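For context, the per-schema approach described in the quoted message ("I try to dump each schema individually") amounts to a loop like the sketch below. The database name mydb and the schema names are placeholder assumptions, and the echo keeps it a dry run:

```shell
# Hypothetical per-schema backup loop; "mydb" and the schema list are
# placeholders. A real run would enumerate schemas with something like:
#   psql -Atc "SELECT nspname FROM pg_namespace WHERE nspname NOT LIKE 'pg_%'" mydb
dump_schema() {
    # pg_dump -n restricts the dump to one schema; -f names the output file.
    # echo makes this a dry run -- drop it to actually dump.
    echo pg_dump -n "$1" -f "$1.dump" mydb
}
for s in schema_0001 schema_0002 schema_0003; do
    dump_schema "$s"
done
```

With 2600 schemas this invokes pg_dump 2600 times, and each invocation repeats the same catalog queries, which is why a fixed 30-40 second cost per schema dominates regardless of data size.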