Thread: Fw: postgres DB duplication
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
My Website : http://www.angelfire.com/ia/japan/
BoomBox : http://www.b00mb0x.org
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If the test and production machines are the same type (i.e. Intel) and the PostgreSQL version is the same, then I would try to do a filesystem copy. Just make sure both machines have the postmaster shut down, and then copy over /var/lib/pgsql (or wherever your system puts the data). Remember: same PostgreSQL version, same architecture.

On Tuesday 27 January 2004 04:36 pm, Colm De Barra wrote:
> [quoted text trimmed]

--
UC

Open Source Solutions 4U, LLC
2570 Fleetwood Drive          Phone: +1 650 872 2425
San Bruno, CA 94066           Cell:  +1 650 302 2405
United States                 Fax:   +1 650 872 2417
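The filesystem copy described above boils down to a few commands. A rough sketch, assuming a System V-style init script, the default /var/lib/pgsql location, and a production host named "prod" (hostname and paths are placeholders, adjust for your installation):

```shell
# Hostname and paths are placeholders -- adjust for your installation.
/etc/init.d/postgresql stop                    # stop the test-machine postmaster
scp -rp /var/lib/pgsql root@prod:/var/lib/     # copy the whole data tree, preserving modes
ssh root@prod /etc/init.d/postgresql start     # start the production postmaster
```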
A filesystem mirror (provided it's the same type of system) will be much easier. Just shut down PostgreSQL on the test machine and rsync away. A dump and restore on something that size will take a looooooong time.

On Jan 27, 2004, at 7:36 PM, Colm De Barra wrote:
> [quoted text trimmed]

--------------------
Andrew Rawnsley
President
The Ravensfield Digital Resource Group, Ltd.
(740) 587-0114
www.ravensfield.com
If you want to make a cold backup, be careful if you have distributed your database across different disks with symbolic links or with the initlocation command: tar or cpio will archive the symbolic links themselves, not the trees they point to, unless you tell them to dereference links. Otherwise you can simply save your whole $PGDATA directory and copy it onto your new machine.
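The symlink caveat is easy to demonstrate: by default tar stores the link itself, while the -h (--dereference) flag makes it store the file the link points to. A small throwaway example (no PostgreSQL involved, all names are made up):

```shell
# Throwaway demo files.
mkdir -p demo/real && echo data > demo/real/file
ln -s real/file demo/link

tar -cf plain.tar -C demo link    # archives the symlink only
tar -chf deref.tar -C demo link   # -h: archives the file behind the link
```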
One more thing. If the PostgreSQL version is not the same on your development machine and your production machine (for example 7.3.3 and 7.4.1), you have to use pg_dump / pg_restore, as the on-disk data format changed between the two releases.
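For the dump/restore route, a minimal sketch (database name "mydb" and host "testhost" are placeholders; when the versions differ, run the newer machine's pg_dump against the old server):

```shell
# Placeholders throughout: mydb, testhost.
# -Fc writes pg_dump's compressed custom format, which pg_restore reads.
pg_dump -h testhost -Fc mydb > mydb.dump

# On the new machine:
createdb mydb
pg_restore -d mydb mydb.dump
```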
Regarding timing: for a 17 GB database on a four-disk RAID0 LUN with PostgreSQL 7.4, pg_dump takes about 17 minutes and pg_restore about 67 minutes.
I hope this information helps.
Thierry Missimilly
Colm De Barra wrote:
Hi,
I've built up a 30 gig database on a performance test machine which now needs to be transferred over onto the production hardware which will be going live. The documentation suggests using pg_dump or a file system level backup - does anyone have any experience of using these on large DBs? Any tips or advice before I take the plunge??

Cheers,
Colm