Re: Updating large postgresql database with blobs - Mailing list pgsql-hackers

From CAJ CAJ
Subject Re: Updating large postgresql database with blobs
Date
Msg-id 467669b30703121029s4f2c2820t404befb25168aa04@mail.gmail.com
In response to Re: Updating large postgresql database with blobs  (Andrew Dunstan <andrew@dunslane.net>)
Responses Re: Updating large postgresql database with blobs
List pgsql-hackers
<snip> 

>> What is the fastest way to upgrade postgres for large databases that
>> have binary objects?

> Your procedure dumps and restores the databases twice. This seems less
> than sound. My prediction is that you could get a 50% speed improvement
> by fixing that ...

Thanks for the response. This'd be wonderful if I can get my process right. My assumption (probably incorrect) is that pg_dump has to be executed twice on a database with blobs: once to get the data and once to get the blobs (using the -b flag).
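In other words, the per-database step I had in mind was roughly this (database and file names here are just placeholders):

    pg_dump olddb > olddb.sql           # schema and table data, no large objects
    pg_dump -Fc -b olddb > olddb.blobs  # second pass just to pick up the blobs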


> The only thing you really need pg_dumpall for is the global tables. I
> would just use pg_dumpall -g to get those, and then use pg_dump -F c +
> pg_restore for each actual database.

This makes sense :) I assume that running pg_dump with -b will get all of the data including the blobs?
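If so, I'm picturing something like this per database (names and paths are placeholders, and I still need to check the exact flags against our version):

    pg_dumpall -g > globals.sql         # roles/users and other global objects only
    pg_dump -Fc -b olddb > olddb.dump   # custom-format dump, large objects included

    # then on the new cluster:
    psql -f globals.sql postgres
    createdb olddb
    pg_restore -d olddb olddb.dump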

> Another thing is to make sure that pg_dump/pg_restore are not competing
> with postgres for access to the same disk(s). One way to do that is to
> run them from a different machine - they don't have to be run on the
> server machine - of course then the network can become a bottleneck, so
> YMMV.

We are using separate servers for dump and restore.
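So the idea would be something along these lines, run from that separate box (hostnames are placeholders):

    pg_dump -h oldhost -Fc -b olddb > olddb.dump
    pg_restore -h newhost -d olddb olddb.dump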

Thanks again for your suggestions. This helps immensely.
 
