Re: performance database for backup/restore - Mailing list pgsql-performance

From Jeff Janes
Subject Re: performance database for backup/restore
Date
Msg-id CAMkU=1zo9vtmm6X5XQxwDXPppzZnfxrc-eW3FHxS0WpcUzPX3g@mail.gmail.com
In response to performance database for backup/restore  (Jeison Bedoya <jeisonb@audifarma.com.co>)
Responses Re: performance database for backup/restore  (Jeison Bedoya <jeisonb@audifarma.com.co>)
List pgsql-performance
2013/5/21 Jeison Bedoya <jeisonb@audifarma.com.co>
Hi, I have a 400 GB database running on a server with 128 GB of RAM and 32 cores, with storage on a fibre-channel SAN. The problem is that a backup with pg_dumpall takes about 5 hours, and the subsequent restore takes about 17 hours. Is that a normal time for this process on that machine, or can I do something to optimize the backup/restore?
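For context, the invocation being described is presumably something like the following (a sketch; the output file name and the restore command are assumptions, since the message does not give them):

    pg_dumpall > all.sql                 # the ~5 hour dump
    time psql -d postgres -f all.sql     # the ~17 hour restore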

How many database objects do you have?  A few large objects will dump and restore faster than a huge number of smallish objects.
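One rough way to get that count, with yourdb as a placeholder database name, is to look at pg_class, which lists tables, indexes, sequences, and similar relations in a single database:

    psql -d yourdb -tAc "SELECT relkind, count(*) FROM pg_class GROUP BY relkind;"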

Where is your bottleneck?  "top" should show you whether it is CPU or IO.
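A minimal way to watch that while the dump or restore is running (iostat comes from the sysstat package and may need to be installed):

    top            # a single pg_dump/postgres process near 100% CPU suggests a CPU bottleneck
    iostat -x 5    # sustained high %util on the SAN device suggests an IO bottleneck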

I can pg_dump about 6GB/minute to /dev/null using all defaults with a small number of large objects.
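A sketch of that kind of baseline, with yourdb as a placeholder database name; writing to /dev/null measures dump speed without the cost of writing the output file:

    time pg_dump yourdb > /dev/null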

Cheers,

Jeff
