Thread: Migrate 2 DB's - v8.3

Migrate 2 DB's - v8.3

From
Jeff Baldwin
Date:
Hello,

I am working to migrate 2 DB's (not the entire Postgres instance) from one host to another, and I need some guidance on the best approach/practice.

I have migrated ~25 other DB's in this environment, and I was able to use pg_dump/pg_restore for those, and it worked fine.  These final 2 are live DB's, and I need to move them with minimal downtime (1-2 hrs is acceptable).

The DB's are blob DB's, 45 and 90 GB in size, in the same data center with a 1G connection in between.

I am running Postgres 8.3 (I know :) ), so there may be some limitations there as well.

Any help/guidance on the best way to approach this is greatly appreciated.

Kind Regards,
Jeff

Re: Migrate 2 DB's - v8.3

From
Melvin Davidson
Date:


On Fri, May 27, 2016 at 4:56 PM, Jeff Baldwin <tarheeljeff@gmail.com> wrote:

Well, generally speaking, since you are migrating from 8.3, you are limited to pg_dump in plain format.
It would be nice (important, actually) to know the PostgreSQL version you are migrating to, as well as what OS you are working with.

--
Melvin Davidson
I reserve the right to fantasize.  Whether or not you
wish to share my fantasy is entirely up to you.

Re: Migrate 2 DB's - v8.3

From
Jeff Baldwin
Date:
Melvin,

Thank you for taking the time to reply to my question.

Below are the details you have requested:

SOURCE:
CentOS release 4.6 
Postgres 8.3

TARGET:
CentOS release 6.2
Postgres 8.3

Kind Regards,
Jeff


Re: Migrate 2 DB's - v8.3

From
Melvin Davidson
Date:


On Fri, May 27, 2016 at 5:09 PM, Jeff Baldwin <tarheeljeff@gmail.com> wrote:


OK, well since both PostgreSQL versions are the same, you can use custom format.

I would start by creating a testdb on the target server. Then export one small table in custom format and verify that you can use pg_restore to load it into the testdb. If that works, time how long a full dump takes on the old server as a starting point, then time how long a full load into testdb takes. You will then know how big a window you need for the migration.
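Spelled out as commands, that plan might look like the sketch below. The host and database names are placeholders (not from the thread), so the script only assembles the command strings for review; run them by hand once the names match your environment.

```shell
# Placeholder names -- adjust for your environment.
OLD=old-host   # source server (8.3)
NEW=new-host   # target server
DB=mydb        # one of the databases to migrate

# 1) Round-trip one small table in custom format (-Fc) to prove that
#    pg_dump/pg_restore work between the two hosts:
TRIAL_DUMP="pg_dump -h $OLD -U postgres -Fc -t some_small_table -f small.dump $DB"
TRIAL_LOAD="pg_restore -h $NEW -U postgres -d testdb small.dump"

# 2) Baseline timings for the full database; dump time plus load time
#    approximates the migration window:
FULL_DUMP="time pg_dump -h $OLD -U postgres -Fc -f $DB.dump $DB"
FULL_LOAD="time pg_restore -h $NEW -U postgres -d testdb $DB.dump"

printf '%s\n' "$TRIAL_DUMP" "$TRIAL_LOAD" "$FULL_DUMP" "$FULL_LOAD"
```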


Re: Migrate 2 DB's - v8.3

From
Jeff Baldwin
Date:
Thanks Melvin.

I have done just this, and the time required to dump/restore in this manner far exceeds the outage window we can afford (max of 2 hrs).   I am looking for alternatives to the standard dump/restore that might help me save time.

For instance, if I could do a continuous rsync of only the 2 DB's in question, then stop the source DB and sync only the delta to the target, or something along those lines.    I've also been looking at Barman and Slony to see if they might fit the bill as well.

Thanks again for the replies.

Jeff


Re: Migrate 2 DB's - v8.3

From
Melvin Davidson
Date:


On Fri, May 27, 2016 at 5:23 PM, Jeff Baldwin <tarheeljeff@gmail.com> wrote:


Well, Slony certainly will do the trick.
Keep in mind you will need to copy the schema only to the slave first.
You set up replication from the old server with the db on the new server as the slave, then initiate replication. It will probably take a long time to replicate, but then you have the option to promote the slave at your time preference (i.e. your 2 hr window). It should only take a few minutes for Slony to do the switchover, but the best thing to do is a dry run first. In other words, you'll have to do the whole thing twice to get an accurate switch time, but you won't need to change your network until you are ready to go live.



Re: Migrate 2 DB's - v8.3

From
Alan Hodgson
Date:
On Friday, May 27, 2016 05:32:08 PM Melvin Davidson wrote:

Slony doesn't do BLOBs, AFAIK, unless he's using bytea fields.

Otherwise I believe dump/reload is the OP's only choice. He should be able to do 90 GB in 2 hours on fast enough hardware; just pipe the dump over the network so the restore runs simultaneously with the dump.

Also, remove as many indexes as possible beforehand and use CREATE INDEX CONCURRENTLY manually afterwards to add them back in.
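As a concrete sketch of that piped transfer (all names hypothetical; `pg_indexes` is the standard catalog view holding index DDL), the commands are assembled as strings here so they can be inspected before running:

```shell
# Placeholder names -- adjust before use.
OLD=old-host; NEW=new-host; DB=mydb

# Save the index definitions so they can be dropped before the load and
# recreated afterwards:
SAVE_IDX="psql -h $OLD -U postgres -d $DB -At -c \"SELECT indexdef || ';' FROM pg_indexes WHERE schemaname = 'public'\" > indexes.sql"

# Stream the dump straight into psql on the target -- no intermediate
# file, so the restore runs concurrently with the dump:
PIPE="pg_dump -h $OLD -U postgres -C $DB | psql -h $NEW -U postgres -d postgres"

printf '%s\n' "$SAVE_IDX" "$PIPE"
```

To rebuild the indexes without blocking afterwards, edit the saved file so each statement reads CREATE INDEX CONCURRENTLY (e.g. `sed 's/^CREATE INDEX/CREATE INDEX CONCURRENTLY/' indexes.sql`) and feed it back through psql on the target.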


Re: Migrate 2 DB's - v8.3

From
Jeff Baldwin
Date:
Thank you for your time Alan.

I'd like to confirm my understanding of your statement, and ask a question.

To move the DB,  you are suggesting something like this:
pg_dump -h dbms11 -U postgres -C mls11 | psql -h localhost -d mls11 -U postgres

I'm not familiar with removing/adding indexes (I'm not a DBA, just trying to pretend to be one for this project).   Can you elaborate on what I might need to do there?

Kind Regards,
Jeff



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: Migrate 2 DB's - v8.3

From
Hannes Erven
Date:
Jeff,


is (temporarily) migrating the whole cluster an option? What I have in mind is roughly this:
- rsync/copy the complete data directory to the target (with the source still in production); throttle/repeat as necessary
- stop the source db
- rsync again
- start the source and target dbs
- drop the moved databases on the source
- drop the unwanted databases on the target

That way you could have minimal downtime (seconds to minutes) at the expense of temporary disk usage on the target host.
Additional bonus: it's all standard Postgres tools (in contrast to, e.g., a fancy trigger-based replication) and it will also keep any statistics and analyze results.
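Those steps, sketched as commands (the data-directory path, host name, and bandwidth cap are made-up examples; the real PGDATA location depends on the install). The script only assembles the command strings for review:

```shell
# Hypothetical path/host -- adjust for your layout.
PGDATA=/var/lib/pgsql/data
NEW=new-host

# 1) First pass while the source is still live. The copy will be
#    inconsistent, which is fine: it only pre-seeds the target.
#    --bwlimit (KB/s) is the throttle mentioned above.
PASS1="rsync -a --delete --bwlimit=50000 $PGDATA/ $NEW:$PGDATA/"

# 2) Stop the source server, then a final pass copies only the delta --
#    this short second pass is the actual downtime:
PASS2="rsync -a --delete $PGDATA/ $NEW:$PGDATA/"

printf '%s\n' "$PASS1" "$PASS2"
```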



Best regards,

-hannes





Re: Migrate 2 DB's - v8.3

From
Jeff Baldwin
Date:
Hannes,

Thank you for the message. I like your idea, but one thing I forgot to mention is that my target Postgres cluster already has production DB's running on it.   I think your solution would overwrite those?   Or cause other issues on the target side?

Perhaps I could stand up a 2nd Postgres instance on the target server, and move the data there first?  Then it would at least be on the same box/storage.   Then I could pg_dump/pg_restore the 2 DB's I need into the production cluster, and shut down the 2nd instance on the target server.  Or is that not necessary?

Just some thoughts...
 
Jeff




Re: Migrate 2 DB's - v8.3

From
"Greg Sabino Mullane"
Date:


> To move the DB,  you are suggesting something like this:
> pg_dump -h dbms11 -U postgres -C mls11 | psql -h localhost -d mls11

Basically yes.

> I'm not familiar with removing/adding indexes (I'm not a DBA, just trying
> to pretend to be one for this project).   Can you elaborate on what I might
> need to do there?

It basically means doing a DROP INDEX foobar; for each index on the new
database, copying the data over, and then doing CREATE INDEX CONCURRENTLY foobar ...

You mentioned that a pg_dump and psql restore takes longer than your 2 hour
window, but a lot of that time may simply be the index creations. You should
test out how long your biggest table takes by doing this:

* Copy the schema only to the new server:

pg_dump mls11 -h dbms11 --schema-only -C | psql

* Pick your largest table on the new server, and drop all indexes,
triggers, and constraints on it. Then time copying the data:

time pg_dump mls11 -h dbms11 --data-only -t foobar | psql mls11 -h newhost

This should give you a better indication of the bare minimum time needed
for that table. If you can find a newer version of pg_dump, you can do
all of the above a lot easier like so:

pg_dump mls11 -h dbms11 --section=pre-data -C | psql
time pg_dump mls11 -h dbms11 --section=data | psql mls11

This copies all the tables, and prevents the indexes and foreign keys from
being created. If that comes in under your 2 hour window, you can at least have
a usable production database, and then start adding the indexes and foreign keys
back in. There are some further tricks one can do to speed up the transfer time,
but this will get you in the basic ballpark.

(It should be noted that Postgres 8.3 is extremely old and completely
unsupported. The inability to easily migrate to a new server is unlikely
to be your last problem because of this. You may even want to push for
a migration to 9.5 if you can, as that will also incur the same migration
timings as moving to a new 8.3 server, but at the end of the day you will
have a shiny 9.5 database.)

If that transfer is still over the 2 hour window, you will have to look into
a trigger based solution that can handle such an old version (which basically
means Slony or Bucardo). Even if it cannot copy all of the tables, it may be
able to do some of them, and then you can use pg_dump | psql for the rest.


--
Greg Sabino Mullane greg@turnstep.com
End Point Corporation http://www.endpoint.com/
PGP Key: 0x14964AC8 201605272040
http://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8




Re: Migrate 2 DB's - v8.3

From
Jeff Baldwin
Date:
Thanks Greg,

Sounds like I've unknowingly stumbled onto a good path, the one you suggested.

I actually installed v9.5 on the target server.   I have it running on a different port (5444) and using a different data directory than the v8.3 install.   

I'm doing the dump, and forwarding it to the remote.   It's been running for a while... but I'm actually not seeing anything show up on the target side.   Does it dump locally first and then pipe over?  Here are the details of what's happening:  http://pastebin.com/fEm3uJqy

pg_dump -v -C mls | psql -h db-blob04 -d mls -p 5444 -U postgres

Perhaps I will kill this eventually and try the timings you suggest with just the data.

Thoughts/comments are always welcome....





Re: Migrate 2 DB's - v8.3

From
Jeff Baldwin
Date:
I got that figured out, and the data is now going into my v9.5 cluster (shiny and new!).

I happened to hit 'Enter' on my terminal window after it was stagnant for ~1 hr, and it gave me this error:

psql: fe_sendauth: no password supplied

I corrected that with pgpass and things are looking good.
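For anyone who hits the same fe_sendauth error: psql and pg_dump read passwords non-interactively from ~/.pgpass, one colon-separated line per server. The password below is a placeholder, and the file must be mode 0600 or libpq ignores it:

```
# hostname:port:database:username:password
db-blob04:5444:mls:postgres:yourpasswordhere
```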

Thanks.
jeff

On Fri, May 27, 2016 at 9:17 PM Jeff Baldwin <tarheeljeff@gmail.com> wrote:
Thanks Greg,

Sounds like I've unknowingly stumbled onto a good path, the one you suggested.

I actually installed v9.5 on the target server.   I have it running on a different port (5444) and using a different data directory than the v8.3 install.   

I'm doing the dump, and forwarding it to the remote.   It's been running for a while... but I'm actually not seeing anything show up on the target side.   Does it dump locally first and then pipe over?  Here is details of the happenings:  http://pastebin.com/fEm3uJqy 

pg_dump -v -C mls | psql -h db-blob04 -d mls -p 5444 -U postgres

Perhaps I will kill this eventually and try the timings you suggest with just the data.

Thoughts/comments are always welcome....




On Fri, May 27, 2016 at 8:43 PM Greg Sabino Mullane <greg@turnstep.com> wrote:

-----BEGIN PGP SIGNED MESSAGE-----
Hash: RIPEMD160


> To move the DB,  you are suggesting something like this:
> pg_dump -h dbms11 -U postgres -C mls11 | psql -h localhost -d mls11

Basically yes.

> I'm not familiar with removing/adding indexes (I'm not a DBA, just trying
> to pretend to be one for this project).   Can you elaborate on what might I
> need to do there?

It basically means doing a DROP INDEX foobar; for each index on the new
database, copying the data over, and then doing CREATE INDEX CONCURRENTLY foobar ...

You mentioned that a pg_dump and psql restore takes longer than your 2 hour
window, but a lot of that time may simply be the index creations. You should
test out how long your biggest table takes by doing this:

* Copy the schema only to the new server:

pg_dump mls11 -h dbms11 --schema-only -C | psql

* Pick your largest table on the new server, and drop all indexes,
triggers, and constraints on it. Then time copying the data:

time pg_dump mls11 -h dbms11 --data-only -t foobar | psql mls11 -h newhost

This should give you a better indication of the bare minimum time needed
for that table. If you can find a newer version of pg_dump, you can do
all of the above a lot easier like so:

pg_dump mls11 -h dbms11 --section=pre-data -C | psql
time pg_dump mls11 -h dbms11 --section=data | psql mls11

This copies all the tables, and prevents the indexes and foreign keys from
being created. If that comes under your 2 hour window, you can at least have
a usable production database, and then start adding the indexes and foreign keys
back in. There are some further tricks one can do to speed up the transfer time,
but this will get you in the basic ballpark.
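Bundled as a dry-run script, using the host and database names from earlier in this thread (commands are only printed, so they can be reviewed before anything touches the servers; note that --section requires pg_dump 9.2 or newer, per the caveat above):

```shell
# Dry-run: print the two-phase copy commands instead of executing them.
# Phase 1 creates the database and tables without indexes/FKs;
# phase 2 streams the data and is the part worth timing.
SRC_HOST=dbms11
NEW_HOST=db-blob04
DB=mls11

pre_data="pg_dump $DB -h $SRC_HOST --section=pre-data -C | psql -h $NEW_HOST"
data="time pg_dump $DB -h $SRC_HOST --section=data | psql $DB -h $NEW_HOST"

echo "$pre_data"
echo "$data"
```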

(It should be noted that Postgres 8.3 is extremely old and completely
unsupported. The inability to easily migrate to a new server is unlikely
to be your last problem because of this. You may even want to push for
a migration to 9.5 if you can, as that will also incur the same migration
timings as moving to a new 8.3 server, but at the end of the day you will
have a shiny 9.5 database.)

If that transfer is still over the 2 hour window, you will have to look into
a trigger based solution that can handle such an old version (which basically
means Slony or Bucardo). Even if it cannot copy all of the tables, it may be
able to do some of them, and then you can use pg_dump | psql for the rest.


--
Greg Sabino Mullane greg@turnstep.com
End Point Corporation http://www.endpoint.com/
PGP Key: 0x14964AC8 201605272040
http://biglumber.com/x/web?pk=2529DF6AB8F79407E94445B4BC9B906714964AC8




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

Re: Migrate 2 DB's - v8.3

From
Hannes Erven
Date:
Hi Jeff,


 > Thank you for the message. ---  I like your idea, but one thing I
 > forgot to mention is that my target postgres cluster has production
 > DB's running on it already.

Oh, that's important information.

The only way to "merge" additional databases into an existing cluster
would be dumping and importing (in either direction -- either merge
existing target dbs into the rsync'ed cluster or vice versa).


My original suggestion would transfer the original cluster as-is to the
target machine into an empty directory, and create a new (possibly
additional) postgresql instance on the target.

In Ubuntu (and I guess in CentOS as well) there is built-in support for
multiple clusters (instances) on a single host (see pg_lsclusters or
pg_ctlcluster). Every cluster will listen on a different port.

This enables (but also forces) per-cluster management of replication,
backup and upgrades. However, that approach might not scale well and
tuning memory/io settings might be more difficult than with a single
cluster on the DB host.


 > Perhaps I could stand up a 2nd postgres instance on the target
 > server, and move the data there first?  Then it would at least be on
 > the same box/storage.   Then I could pg_dump/pgrestore the 2 DB's I
 > need into the production cluster, and shutdown the 2nd instance on
 > the target server.

That would be an option, but consider that in this case the IO subsystem
of the target server would have to read and write your data at the same
time.
Unless you have source and target clusters on independent spindles or a
really slow source server, this will possibly take even longer than
piping the data over your Gbit network connection.


Best regards,

    -hannes


Re: Migrate 2 DB's - v8.3

From
Francisco Olarte
Date:
Jeff:

On Sat, May 28, 2016 at 12:38 AM, Jeff Baldwin <tarheeljeff@gmail.com> wrote:
> Thank you for your time Alan.
..
> To move the DB,  you are suggesting something like this:
> pg_dump -h dbms11 -U postgres -C mls11 | psql -h localhost -d mls11 -U
> postgres

I'd like to point out one thing: you MAY get a little more speed if you
run pg_dump AND psql each on the same host as the DB it's operating on,
to minimize latency ( and I would time unix socket vs network first in
case it differs ). ( To do that I would try something like 'ssh dbms11
"pg_dump mls11" | psql -d mls11' with all the needed doodahs, and
maybe use something like netcat or socat instead of ssh. ) The
rationale being that the intermediate dump is just a data stream and not
latency sensitive ( except for the window*latency problem, but you are
not going to hit that on a LAN ), while the dump/restore does DB work,
which is more latency sensitive ( I do not know how many RTTs it would
need, especially with blobs, but you can try it ).
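A dry-run sketch of that shape, with the host/db names used in this thread (the echoed command assumes ssh access to dbms11 and would still need the "doodahs": users, ports, passwords):

```shell
# Dry-run: run pg_dump on the source host itself, stream the plain dump
# over ssh, and feed a psql running locally next to the target cluster,
# so each end talks to its database with minimal latency.
SRC=dbms11
DB=mls11
copy="ssh $SRC \"pg_dump $DB\" | psql -d $DB"
echo "$copy"
```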

How many hours does it take in your tests? Because if you have a 1-2
hour window and the dump | psql pipe trick, which is quite robust, takes
3-4, you may still push for it ( arguing it's a simpler and more
testable process ).

Francisco Olarte.


Re: Migrate 2 DB's - v8.3

From
Martín Marqués
Date:
I still don't understand why the OP is going to so much trouble instead
of upgrading to a newer version like 9.3 or 9.4 (or even 9.5).

All this hassle to stay on an unsupported postgres is just useless, IMNSHO.

Regards,

On 28/05/16 at 12:26, Francisco Olarte wrote:
> [...]


--
Martín Marqués                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


Re: Migrate 2 DB's - v8.3

From
Francisco Olarte
Date:
Martin:

Could you please avoid unedited top posts?

On Sat, May 28, 2016 at 7:53 PM, Martín Marqués <martin@2ndquadrant.com> wrote:
> I still don't understand why the OP is getting into so much trouble and
> doesn't upgrade to a newer version like 9.3 or 9.4 (or even 9.5).

Neither do I, but the thing is, trying to find the fastest way to move
a single database between two clusters on different machines, or trying
to find a way to do it without downtime, is useful on its own, whichever
the versions are.

> All this hassle to stay on an unsupported postgres is just useless, IMNSHO.

You can ask the OP for the reason to stay on 8.3 directly. Maybe it's
something as simple as "I'm the one who pays, you do what I pay you
for." I've had several of these.


Francisco Olarte.