Thread: Restore performance?
Hi

I'm currently upgrading a PostgreSQL 7.3.2 database to an
8.1.<something-good>

I'd run pg_dump | gzip > sqldump.gz on the old system. That took about
30 hours and gave me a 90GB zipped file. Running
cat sqldump.gz | gunzip | psql
into the 8.1 database seems to take about the same time. Are there any
tricks I can use to speed this dump+restore process up?

The database contains quite a lot of BLOBs, thus the size.

Jesper
--
./Jesper Krogh, jesper@krogh.cc, Jabber ID: jesper@jabbernet.dk
Jesper,

If they both took the same amount of time, then you are almost certainly
bottlenecked on gzip.

Try a faster CPU or use "gzip --fast".

- Luke
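For reference, the dump pipeline with the lightest compression level would
look something like this (a sketch only; "olddb" is a placeholder name):

  # --fast is the same as -1: fastest, least CPU, somewhat larger file
  pg_dump olddb | gzip --fast > sqldump.gz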
Jesper Krogh wrote:
> I'm currently upgrading a PostgreSQL 7.3.2 database to an
> 8.1.<something-good>
>
> I'd run pg_dump | gzip > sqldump.gz on the old system. That took about
> 30 hours and gave me a 90GB zipped file. Running
> cat sqldump.gz | gunzip | psql
> into the 8.1 database seems to take about the same time. Are there any
> tricks I can use to speed this dump+restore process up?

If you can have both database systems up at the same time, you could
pg_dump | psql.

Regards,
Andreas
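Concretely, if both clusters are reachable from one machine, the direct
pipe could be written like this (a sketch; host and database names are
placeholders):

  # no intermediate file, no compression CPU cost
  pg_dump -h oldhost olddb | psql -h newhost newdb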
> If they both took the same amount of time, then you are almost certainly
> bottlenecked on gzip.
>
> Try a faster CPU or use "gzip --fast".

gzip does not seem to be the bottleneck; on restore, psql is the no. 1
consumer of CPU time.

Sorry for the double post.

Jesper
--
Jesper Krogh
> I'd run pg_dump | gzip > sqldump.gz on the old system. That took about
> 30 hours and gave me a 90GB zipped file. Running
> cat sqldump.gz | gunzip | psql
> into the 8.1 database seems to take about the same time. Are there any
> tricks I can use to speed this dump+restore process up?
>
> The database contains quite a lot of BLOBs, thus the size.

You could try Slony - it can do almost-zero-downtime upgrades.

Greetings
Marcin Mank
"Jesper Krogh" <jesper@krogh.cc> writes: > gzip does not seem to be the bottleneck, on restore is psql the nr. 1 > consumer on cpu-time. Hm. We've seen some situations where readline mistakenly decides that the input is interactive and wastes lots of cycles doing useless processing (like keeping history). Try "psql -n" and see if that helps. regards, tom lane
On 4/10/06, Jesper Krogh <jesper@krogh.cc> wrote:
> I'm currently upgrading a PostgreSQL 7.3.2 database to an
> 8.1.<something-good>
>
> I'd run pg_dump | gzip > sqldump.gz on the old system. That took about
> 30 hours and gave me a 90GB zipped file. Running
> cat sqldump.gz | gunzip | psql
> into the 8.1 database seems to take about the same time. Are there any
> tricks I can use to speed this dump+restore process up?
>
> The database contains quite a lot of BLOBs, thus the size.

was the last restore successful?
if so, why do you want to repeat it?

some tips

1. run the new version of postgres on a different port and pipe pg_dump
to psql; this saves the CPU time of compression, and there is no need
for a temporary dump file (see the sketch below).

pg_dump | /path/to/psql813 -p 54XX newdb

2. use the new version of pg_dump to dump the old database, as the new
version is supposed to be wiser.

3. make sure you are trapping the restore errors properly;
psql newdb 2>&1 | tee err works for me.
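A minimal sketch of tip 1, assuming the 8.1 binaries live under a
separate prefix (paths, ports, and database names here are examples):

  # start the 8.1 server on a secondary port (data dir already initdb'ed)
  /usr/local/pgsql-8.1/bin/pg_ctl -D /data/pg81 -o "-p 5433" start

  # dump with the NEW pg_dump (tip 2) straight into the new server,
  # capturing errors as in tip 3
  /usr/local/pgsql-8.1/bin/pg_dump -p 5432 olddb \
    | /usr/local/pgsql-8.1/bin/psql -p 5433 newdb 2>&1 | tee err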
sorry for the post, I didn't see the other replies until after posting.
Rajesh Kumar Mallah wrote:
>> I'd run pg_dump | gzip > sqldump.gz on the old system. That took about
>> 30 hours and gave me a 90GB zipped file. Running
>> cat sqldump.gz | gunzip | psql
>> into the 8.1 database seems to take about the same time. Are there any
>> tricks I can use to speed this dump+restore process up?
>
> was the last restore successful?
> if so, why do you want to repeat it?

"about the same time" == an estimated guess from restoring a few tables.

I was running a test run without disabling updates to the production
database; the real run is scheduled for Easter, when there hopefully are
no users on the system. So I need to repeat it - I'm just trying to get a
feeling for how much time I need to allocate for the operation.

> 1. run the new version of postgres on a different port and pipe pg_dump
> to psql; this saves the CPU time of compression, and there is no need
> for a temporary dump file.
>
> pg_dump | /path/to/psql813 -p 54XX newdb

I'll do that. It is a completely different machine anyway.

> 2. use the new version of pg_dump to dump the old database, as the new
> version is supposed to be wiser.

Check.

> 3. make sure you are trapping the restore errors properly;
> psql newdb 2>&1 | tee err works for me.

That's noted.

--
Jesper Krogh, jesper@krogh.cc
4. fsync can also be turned off while loading a huge dataset,
but seek others' comments too (and study the docs), as I am not sure
about the reliability. I think it can make a lot of difference.
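If you try this, the switch lives in postgresql.conf on the restoring
server (a sketch; turn it back on before going into production - a crash
while fsync is off can corrupt the cluster, so be prepared to re-initdb
and reload the dump if that happens):

  # postgresql.conf, for the duration of the bulk load only
  fsync = off

  # afterwards: set fsync = on, then have the server re-read the config
  pg_ctl -D /data/pg81 reload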
Rajesh Kumar Mallah wrote:
> 4. fsync can also be turned off while loading a huge dataset,
> but seek others' comments too (and study the docs), as I am not sure
> about the reliability. I think it can make a lot of difference.

Also be sure to increase maintenance_work_mem so that index creation
goes faster.

--
Alvaro Herrera                         http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
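One way to apply that to the whole restore session without editing the
config file (a sketch; the value is an example and is given in kB, since
8.1 predates unit suffixes, so 1048576 is roughly 1 GB):

  # PGOPTIONS passes server options for just this connection
  gunzip -c sqldump.gz \
    | PGOPTIONS="-c maintenance_work_mem=1048576" psql -n newdb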
> I'd run pg_dump | gzip > sqldump.gz on the old system.

If the source and destination databases are on different machines, you
can pipe pg_dump on the source machine to pg_restore on the destination
machine by using netcat.

If you only have 100 Mbps ethernet, compressing the data will be faster.
If you have Gb ethernet, maybe you don't need to compress, but it doesn't
hurt to test. Use pg_restore instead of psql, and use a recent version of
pg_dump which can generate dumps in the latest format. If you need fast
compression, use gzip -1 or even lzop, which is incredibly fast.

Turn off fsync during the restore, and set maintenance_work_mem to use
most of your available RAM for index creation. I think that creating
foreign key constraints uses large joins; it might be good to up
work_mem also.

Check the speed of your disks with dd beforehand. You might get a
surprise. Maybe you can also play with the bgwriter and checkpoint
parameters.
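A sketch of the netcat transfer (host, port, and database names are
placeholders, and note that option syntax differs between netcat
implementations):

  # on the destination machine: listen and feed the stream to pg_restore
  nc -l -p 5555 | pg_restore -d newdb

  # on the source machine: stream a custom-format dump over the wire
  pg_dump -Fc olddb | nc desthost 5555

  # and a crude dd write test for the destination disks (~8 GB of zeros;
  # use a file larger than RAM so the cache doesn't flatter the numbers)
  dd if=/dev/zero of=/data/ddtest bs=8k count=1000000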
On Apr 10, 2006, at 3:55 AM, Jesper Krogh wrote:
> I'd run pg_dump | gzip > sqldump.gz on the old system. That took about
> 30 hours and gave me a 90GB zipped file. Running
> cat sqldump.gz | gunzip | psql
> into the 8.1 database seems to take about the same time. Are there any
> tricks I can use to speed this dump+restore process up?
>
> The database contains quite a lot of BLOBs, thus the size.

Well, your pg_dump command lost your BLOBs, since the plain text format
doesn't support them. But once you use the -Fc format on your dump and
enable blob backups, you can speed up reloads by increasing your
checkpoint segments to a big number like 256 and the checkpoint timeout
to something like 10 minutes. All other normal tuning parameters should
be what you plan to use for your normal operations, too.
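Put together, that suggestion might look like this (a sketch; -b asks
pg_dump to include large objects, and the values are the ones suggested
above):

  # custom-format dump with large objects included
  pg_dump -Fc -b olddb > olddb.dump
  pg_restore -d newdb olddb.dump

  # postgresql.conf on the restoring server
  checkpoint_segments = 256   # far fewer checkpoints during the load
  checkpoint_timeout = 600    # seconds, i.e. 10 minutes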
> Well, your pg_dump command lost your BLOBs, since the plain text
> format doesn't support them.

Well, no. They are stored as BYTEA, not Large Objects, so they are
encoded as ASCII in the pg_dump output.

> But once you use the -Fc format on your dump and enable blob backups,
> you can speed up reloads by increasing your checkpoint segments to a
> big number like 256 and the checkpoint timeout to something like 10
> minutes. All other normal tuning parameters should be what you plan
> to use for your normal operations, too.

Thanks.

Jesper
--
Jesper Krogh
>> Well, your pg_dump command lost your BLOBs, since the plain text
>> format doesn't support them.
>
> Well, no. They are stored as BYTEA, not Large Objects, so they are
> encoded as ASCII in the pg_dump output.

As a side note: the plain text dump format in 8.1 supports LOBs.