Thread: Strategy for moving a large DB to another machine with least possible down-time
From: Andreas Joseph Krogh
Date: Sun, 21 Sep 2014 13:36:18 +0200 (CEST)
Hi all.
PG-version: 9.3.5
I have a DB large enough for it to be impractical to pg_dump/restore it (it would require too much down-time for the customer). Note that I'm not able to move the whole cluster, only *one* DB in that cluster.
What is the best way to perform such a move? Can I use PITR, rsync + WAL-replay magic, or something else?
Can Barman help with this, maybe?
Thanks.
--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
Re: Strategy for moving a large DB to another machine with least possible down-time
From: Bill Moran
Date: Sun, 21 Sep 2014 13:51:00 +0200 (CEST)
On Sun, 21 Sep 2014 13:36:18 +0200 (CEST)
Andreas Joseph Krogh <andreas@visena.com> wrote:
> Hi all. PG-version: 9.3.5. I have a DB large enough for it to be
> impractical to pg_dump/restore it (it would require too much down-time
> for the customer). Note that I'm not able to move the whole cluster,
> only *one* DB in that cluster. What is the best way to perform such a
> move? Can I use PITR, rsync + WAL-replay magic, or something else? Can
> Barman help with this, maybe? Thanks.

I've used Slony to do this kind of thing with great success in the past. The biggest advantage of Slony is that you can install it without stopping the DB server, wait patiently while it takes however long is needed to sync up the two servers without having much impact (if any) on operations, then switch over when you're ready. The disadvantage to Slony is that the setup/config is a bit involved.

--
Bill Moran
I need your help to succeed: http://gamesbybill.com
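[To give a feel for what "a bit involved" means, here is a minimal, untested slonik sketch in the style of the Slony-I documentation's examples. The hosts oldhost/newhost, database mydb, user slony, and the single table public.accounts are all hypothetical; a real setup needs one "set add table" per table plus "set add sequence" per sequence, with slon daemons running against both nodes.]

#!/bin/sh
# Hypothetical minimal Slony-I setup: replicate one table from
# node 1 (the old server) to node 2 (the new server).
slonik <<_EOF_
cluster name = migration;
node 1 admin conninfo = 'dbname=mydb host=oldhost user=slony';
node 2 admin conninfo = 'dbname=mydb host=newhost user=slony';
init cluster (id = 1, comment = 'origin');
create set (id = 1, origin = 1, comment = 'all tables');
set add table (set id = 1, origin = 1, id = 1,
               fully qualified name = 'public.accounts');
store node (id = 2, comment = 'target', event node = 1);
store path (server = 1, client = 2,
            conninfo = 'dbname=mydb host=oldhost user=slony');
store path (server = 2, client = 1,
            conninfo = 'dbname=mydb host=newhost user=slony');
subscribe set (id = 1, provider = 1, receiver = 2, forward = no);
_EOF_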
Re: Strategy for moving a large DB to another machine with least possible down-time
From: Andreas Joseph Krogh
Date: Sun, 21 Sep 2014
On Sunday 21 September 2014 at 13:51:00, Bill Moran <wmoran@potentialtech.com> wrote:
> On Sun, 21 Sep 2014 13:36:18 +0200 (CEST)
> Andreas Joseph Krogh <andreas@visena.com> wrote:
> > Hi all. PG-version: 9.3.5. I have a DB large enough for it to be
> > impractical to pg_dump/restore it (it would require too much down-time
> > for the customer). Note that I'm not able to move the whole cluster,
> > only *one* DB in that cluster. What is the best way to perform such a
> > move? Can I use PITR, rsync + WAL-replay magic, or something else? Can
> > Barman help with this, maybe? Thanks.
>
> I've used Slony to do this kind of thing with great success in the past.
> The biggest advantage of Slony is that you can install it without stopping
> the DB server, wait patiently while it takes however long is needed to
> sync up the two servers without having much impact (if any) on operations,
> then switch over when you're ready. The disadvantage to Slony is that the
> setup/config is a bit involved.
I see these limitations in Slony:
http://slony.info/documentation/2.2/limitations.html

Slony-I does not automatically replicate:
- Changes to large objects (BLOBs)
- Changes made by DDL commands
- Changes to users and roles

Not being able to replicate BLOBs is a show-stopper for me, as we have lots of them.
Seems PITR is my only option?
--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
Re: Strategy for moving a large DB to another machine with least possible down-time
From: Adrian Klaver
Date: Sun, 21 Sep 2014 15:48:00 +0200 (CEST)
On 09/21/2014 05:44 AM, Andreas Joseph Krogh wrote:
> I see these limitations in Slony:
> http://slony.info/documentation/2.2/limitations.html
>
> Slony-I does not automatically replicate:
> - Changes to large objects (BLOBs)
> - Changes made by DDL commands
> - Changes to users and roles
>
> Not being able to replicate BLOBs is a show-stopper for me, as we have
> lots of them.

Well, I would say it depends on where you are storing the binary data: in large objects or in a bytea column? If you are using bytea columns then you would be okay. If it is large objects then you have a problem.

> Seems PITR is my only option?

--
Adrian Klaver
adrian.klaver@aklaver.com
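[An aside for anyone unsure which case applies to them: a quick way to check whether a database actually contains large objects is to count them in the pg_largeobject_metadata catalog, present since PostgreSQL 9.0. A minimal check, assuming a hypothetical database name mydb:]

# One row per large object in the database; 0 means bytea-only storage.
psql -d mydb -c "SELECT count(*) FROM pg_largeobject_metadata;"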
Re: Strategy for moving a large DB to another machine with least possible down-time
From: Andreas Joseph Krogh
Date: Sun, 21 Sep 2014
On Sunday 21 September 2014 at 15:48:00, Adrian Klaver <adrian.klaver@aklaver.com> wrote:
> On 09/21/2014 05:44 AM, Andreas Joseph Krogh wrote:
> > I see these limitations in Slony:
> > http://slony.info/documentation/2.2/limitations.html
> >
> > Slony-I does not automatically replicate:
> > - Changes to large objects (BLOBs)
> > - Changes made by DDL commands
> > - Changes to users and roles
> >
> > Not being able to replicate BLOBs is a show-stopper for me, as we have
> > lots of them.
>
> Well, I would say it depends on where you are storing the binary data:
> in large objects or in a bytea column? If you are using bytea columns
> then you would be okay. If it is large objects then you have a problem.

Large objects, not BYTEA, as they allow for much more efficient streaming (they require less memory).
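[To illustrate the streaming point, a minimal sketch of the server-side large-object API, which reads data in chunks instead of materializing the whole value the way a bytea SELECT does; the OID 16385 is hypothetical:]

psql -d mydb <<'SQL'
BEGIN;
-- Open the object read-only (x'40000' = INV_READ); the first
-- lo_open in a transaction returns descriptor 0.
SELECT lo_open(16385, x'40000'::int);
-- Read the first 8 kB chunk via the descriptor.
SELECT loread(0, 8192);
SELECT lo_close(0);
COMMIT;
SQL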
--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
Re: Strategy for moving a large DB to another machine with least possible down-time
From: Adrian Klaver
Date: Sun, 21 Sep 2014 16:10:54 +0200 (CEST)
On 09/21/2014 06:50 AM, Andreas Joseph Krogh wrote:
> > Well, I would say it depends on where you are storing the binary data:
> > in large objects or in a bytea column? If you are using bytea columns
> > then you would be okay. If it is large objects then you have a problem.
>
> Large objects, not BYTEA, as they allow for much more efficient
> streaming (they require less memory).

Here are some other suggestions from the project, though they are cluster-wide (including PITR):
http://www.postgresql.org/docs/9.3/static/backup-file.html

--
Adrian Klaver
adrian.klaver@aklaver.com
Re: Strategy for moving a large DB to another machine with least possible down-time
From: Andreas Joseph Krogh
Date: Sun, 21 Sep 2014
On Sunday 21 September 2014 at 16:10:54, Adrian Klaver <adrian.klaver@aklaver.com> wrote:
> Here are some other suggestions from the project, though they are
> cluster-wide (including PITR):
> http://www.postgresql.org/docs/9.3/static/backup-file.html
I think the rsync approach is the most attractive option.
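[For the record, the file-level rsync approach from that page usually means two passes over the data directory. A minimal sketch, assuming a hypothetical stock 9.3 layout and target host newhost; note that this moves the *whole cluster's* data directory, not a single DB:]

# First pass while the old server is still running: the copy is not
# consistent, but it moves the bulk of the data with no down-time.
rsync -a /var/lib/postgresql/9.3/main/ newhost:/var/lib/postgresql/9.3/main/

# Down-time starts here: stop the old server cleanly ...
pg_ctl -D /var/lib/postgresql/9.3/main stop -m fast

# ... then a second pass, which only transfers what changed since the first.
rsync -a --delete /var/lib/postgresql/9.3/main/ newhost:/var/lib/postgresql/9.3/main/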
--
Andreas Joseph Krogh
CTO / Partner - Visena AS
Mobile: +47 909 56 963
Re: Strategy for moving a large DB to another machine with least possible down-time
From: Andy Colson
Date:
On 09/21/2014 06:36 AM, Andreas Joseph Krogh wrote:
> Hi all.
> PG-version: 9.3.5
> I have a DB large enough for it to be impractical to pg_dump/restore it
> (it would require too much down-time for the customer). Note that I'm
> not able to move the whole cluster, only *one* DB in that cluster.
> What is the best way to perform such a move? Can I use PITR, rsync +
> WAL-replay magic, or something else?
> Can Barman help with this, maybe?
> Thanks.

I had a less big'ish table I wanted to move, but not everything else. I had a timestamp on the table I could use for "close enough to unique". I wrote a Perl script that would dump 100K records at a time (ordered by the timestamp). It would dump records and then disconnect and sleep for 30 seconds'ish, which kept usage low.

It took a while, but once it caught up, I changed the script to get the max(timestamp) from olddb and newdb and only copy the missing ones. I could keep them in sync this way until I was ready to switch over.

-Andy
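[A rough shell rendition of that idea; everything here is hypothetical except the 100K batch size and the 30-second sleep Andy describes: table big_table, timestamp column ts, hosts oldhost/newhost, database mydb.]

#!/bin/bash
# Copy rows in timestamp-ordered batches, sleeping between batches
# to keep the load on the old server low.
LAST='-infinity'
while :; do
  # Upper bound of the next batch: the highest of the next 100K timestamps.
  NEXT=$(psql -At -h oldhost -d mydb -c "
    SELECT max(ts) FROM (
      SELECT ts FROM big_table
      WHERE ts > '$LAST' ORDER BY ts LIMIT 100000) s;")
  [ -z "$NEXT" ] && break   # nothing left: we have caught up
  # Stream the batch from the old DB straight into the new one.
  psql -h oldhost -d mydb -c "\copy (SELECT * FROM big_table WHERE ts > '$LAST' AND ts <= '$NEXT') TO STDOUT" \
    | psql -h newhost -d mydb -c "\copy big_table FROM STDIN"
  LAST="$NEXT"
  sleep 30
done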