Thread: Moving a live production database to a different server and PostgreSQL release
Hi,

I’m in the process of moving our production database to a different physical server, running a different OS and a newer release of PostgreSQL, and I’m not really sure how to go about it.

My initial idea was to use WAL archiving to reproduce the database on the new server and then bring it up to date with the logs written between the base backup and the moment the new server goes live. That was until I found out WAL archiving doesn’t work between major PostgreSQL releases.

I can’t do a simple pg_dump/pg_restore and then redirect traffic when the new server is up either, because new data will have been inserted into the original database in the meantime.

My best idea so far is to do a pg_dump and somehow archive all the DML run against the original database from that point in time, for later replay on the new database, but I don’t know how that would be done in practice. I don’t even know if that’s the best way to go; as I said, it’s only an idea.

If anyone can give me some ideas on this, I’d be much obliged.

Best Regards
/Ulas
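For reference, the baseline dump-and-restore move looks roughly like the sketch below. The hostnames and database name are placeholders, and the comments note exactly where the consistency gap described above appears:

```shell
# Sketch only: "oldhost", "newhost" and "app" are hypothetical names.
# Run the NEW server's pg_dump against the OLD server, so the dump is
# made with the newer tools (the usual recommendation for upgrades).
pg_dump -h oldhost -p 5432 -U postgres -Fc -f app.dump app

# Restore the custom-format dump into the new cluster; --clean drops
# any objects left over from a previous trial run.
pg_restore -h newhost -p 5432 -U postgres --clean -d app app.dump

# Any rows written to oldhost after pg_dump took its snapshot are NOT
# in the dump -- that window is the gap a replication system closes.
```

This is a runbook fragment, not a tested script; it needs live servers to run against.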
Re: Moving a live production database to different server and postgres release
From: Raymond O'Donnell
On 11/06/2010 11:24, Ulas Albayrak wrote:

> My initial idea was to use WAL archiving to reproduce the db on the
> new server and then get it up to date with the logs from the time of
> base backup creation to the time the new server can get up. That was
> until I found out WAL archiving doesn’t work between major postgreSQL
> releases.

Slony-I (http://www.slony.info) can handle replication between different major releases, allowing you to switch from one server to the other in seconds.

Ray.

--
Raymond O'Donnell :: Galway :: Ireland
rod@iol.ie
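A two-node Slony-I setup of the kind Ray suggests is driven by slonik scripts. The sketch below is untested and everything in it — the cluster name, conninfo strings, and the single example table — is made up for illustration:

```shell
# Hypothetical setup: node 1 = old server (origin), node 2 = new server.
slonik <<'EOF'
cluster name = migration;
node 1 admin conninfo = 'dbname=app host=oldhost user=slony';
node 2 admin conninfo = 'dbname=app host=newhost user=slony';

init cluster (id = 1, comment = 'old server (origin)');
store node (id = 2, comment = 'new server', event node = 1);
store path (server = 1, client = 2,
            conninfo = 'dbname=app host=oldhost user=slony');
store path (server = 2, client = 1,
            conninfo = 'dbname=app host=newhost user=slony');

create set (id = 1, origin = 1, comment = 'all replicated tables');
set add table (set id = 1, origin = 1, id = 1,
               fully qualified name = 'public.orders');

subscribe set (id = 1, provider = 1, receiver = 2, forward = no);
EOF
# Once the subscriber has caught up, a MOVE SET switchover makes node 2
# the origin, and the applications can be repointed to the new server.
```

Consult the Slony-I documentation for the exact slonik syntax of your version; this only shows the general shape of the workflow.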
Re: Moving a live production database to different server and postgres release
From: Greg Sabino Mullane
> My best idea so far is to do a pg_dump and somehow archive all the DML
> in the original db from that point in time for later insertion in the
> new db, but I don’t know how that would be done practically. And I
> don’t even know if that’s the best way to go, as I said, it’s only an
> idea.

What you need is a replication system; take a look at Slony or Bucardo. Basically, you copy everything except the data to the new database, switch the replication system on, let it catch up, stop your apps from hitting the old server, wait for the new one to catch up, and then point your apps at the new one.

Important factors you left out are how big your database is, which version you are on, which version you are moving to, and how busy your system is.

Also keep in mind that both Bucardo and Slony are trigger-based and rely on primary keys or unique indexes, so tables without such constraints cannot be replicated: you'll need to either add a unique constraint to those tables, or copy them separately (e.g. pg_dump -t tablename, or Bucardo's fullcopy mode).

If you weren't also moving to a new OS and server, pg_migrator (aka pg_upgrade) might work for you as well: it does an in-place, one-time upgrade, but only supports a limited number of versions at the moment.

--
Greg Sabino Mullane greg@turnstep.com
End Point Corporation http://www.endpoint.com/
PGP Key: 0x14964AC8 201006110927
http://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8
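Greg's caveat about trigger-based replication can be checked up front. A catalog query along these lines (run here via psql; the database name "app" is a placeholder) lists the ordinary tables that have no unique or primary-key index and would therefore need a constraint added or a separate copy:

```shell
# "app" is a hypothetical database name; run against the OLD server.
psql -d app <<'SQL'
-- Ordinary tables (relkind 'r') outside the system schemas that lack
-- any unique index, including primary keys (indisunique covers both).
SELECT n.nspname, c.relname
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'r'
  AND n.nspname NOT IN ('pg_catalog', 'information_schema')
  AND NOT EXISTS (SELECT 1
                  FROM pg_index i
                  WHERE i.indrelid = c.oid
                    AND i.indisunique);
SQL
```

Any table this prints is a candidate for `ALTER TABLE ... ADD PRIMARY KEY` before replication, or for the separate-copy route Greg describes.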