Thread: How to upgrade from 9.1 to 9.2 with replication?
I have replication set up on servers with 9.1 and want to upgrade to 9.2. I was hoping I could just bring them both down, upgrade them both, and bring them both up and continue replication, but that doesn't seem to work; the replication server won't come up. Is there any way to do this upgrade without taking a new base backup and rebuilding the replication drive?
On 10/18/2012 5:21 PM, delongboy wrote:
> Is there any way to do this upgrade without taking a new base backup and rebuilding the replication drive?

Not that I know of.
I tried this as well when the development branches were out in a "sandbox" and it failed as it did for you.
For 9.1 -> 9.2 what I did was bring down the cluster, upgrade the master, then initdb the slave and run the script that brings over a new basebackup with the WAL archives ("-x" switch), and when complete just started the slave back up in slave mode.
This unfortunately does require a new data copy to be pulled across to the slave. For the local copies this isn't so bad as wire speed is fast enough to make it reasonable; for the actual backup units at a remove it takes a while as the copy has to go across a WAN link. I cheat on that by using a SSH tunnel with compression turned on (which, incidentally, it would be really nice if Postgres supported internally, and it could quite easily -- I've considered working up a patch set for this and submitting it.)
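The tunnel itself is nothing exotic -- roughly the following, where the host name and forwarded ports are placeholders for whatever your environment uses:

    # compressed SSH tunnel: forward a local port to the remote server's Postgres port
    ssh -C -N -L 5433:localhost:5432 remote-backup-host &
    # then point the replication / base backup connection at localhost:5433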
For really BIG databases (as opposed to moderately-big) this could be a much-more material problem than it is for me.
On 10/19/2012 09:44 AM, Karl Denninger wrote: > For really BIG databases (as opposed to moderately-big) this could be a > much-more material problem than it is for me. Which reminds me. I really wish pg_basebackup let you specify an alternative compression handler. We've been using pigz on our systems because our database is so large. It cuts backup time drastically, from about 2.5 hours to 28 minutes. Until a CPU can compress at the same speed it can read data from disk devices, that's going to continue to be a problem. Parallel compression is great. So even after our recent upgrade, we've kept using our home-grown backup system. :( -- Shaun Thomas OptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604 312-444-8534 sthomas@optionshouse.com
On Fri, Oct 19, 2012 at 11:44 AM, Karl Denninger <karl@denninger.net> wrote:
> On 10/18/2012 5:21 PM, delongboy wrote:
>> I have replication set up on servers with 9.1 and want to upgrade to 9.2. I was hoping I could just bring them both down, upgrade them both, and bring them both up and continue replication, but that doesn't seem to work; the replication server won't come up. Is there any way to do this upgrade without taking a new base backup and rebuilding the replication drive?
>
> Not that I know of.
>
> I tried this as well when the development branches were out in a "sandbox" and it failed as it did for you.
>
> For 9.1 -> 9.2 what I did was bring down the cluster, upgrade the master, then initdb the slave and run the script that brings over a new basebackup with the WAL archives ("-x" switch), and when complete just started the slave back up in slave mode.
>
> This unfortunately does require a new data copy to be pulled across to the slave. For the local copies this isn't so bad as wire speed is fast enough to make it reasonable; for the actual backup units at a remove it takes a while as the copy has to go across a WAN link. I cheat on that by using a SSH tunnel with compression turned on (which, incidentally, it would be really nice if Postgres supported internally, and it could quite easily -- I've considered working up a patch set for this and submitting it.)
>
> For really BIG databases (as opposed to moderately-big) this could be a much-more material problem than it is for me.

Did you try?

Bring both down.
pg_upgrade master
Bring master up
pg_upgrade slave
rsync master->slave (differential update, much faster than basebackup)
Bring slave up
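In command form that sequence would look roughly like this; the binary and data directory locations below are only examples and depend entirely on how your packages lay things out:

    # with both clusters stopped, on the master
    # (the new 9.2 cluster is assumed to already be initdb'd):
    pg_upgrade -b /usr/pgsql-9.1/bin -B /usr/pgsql-9.2/bin \
               -d /var/lib/pgsql/9.1/data -D /var/lib/pgsql/9.2/data
    pg_ctl -D /var/lib/pgsql/9.2/data start

    # on the slave: pg_upgrade its old data directory the same way, then
    rsync -az --delete master:/var/lib/pgsql/9.2/data/ /var/lib/pgsql/9.2/data/
    # restore the slave's recovery.conf and other slave-side config, then
    pg_ctl -D /var/lib/pgsql/9.2/data start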
On 10/19/2012 10:02 AM, Claudio Freire wrote:
> Did you try?
>
> Bring both down.
> pg_upgrade master
> Bring master up
> pg_upgrade slave
> rsync master->slave (differential update, much faster than basebackup)
> Bring slave up

That's an interesting idea that might work; are replicated servers in a consistent state guaranteed to have byte-identical filespaces? (other than the config file(s), of course) I have not checked that assumption.

Surprises in that regard could manifest in very unfortunate results that only become apparent a significant distance down the road.
On 10/19/2012 10:49 AM, Karl Denninger wrote: > That's an interesting idea that might work; are replicated servers in a > consistent state guaranteed to have byte-identical filespaces? (other > than the config file(s), of course) I have not checked that assumption. Well, if they didn't before, they will after the rsync is finished. Update the config and start as a slave, and it's the same as a basebackup. -- Shaun Thomas OptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604 312-444-8534 sthomas@optionshouse.com
Shaun Thomas wrote: > Update the config and start as a slave, and it's the same as a > basebackup. ... as long as the rsync was bracketed by calls to pg_start_backup() and pg_stop_backup(). -Kevin
On 10/25/2012 07:10 AM, Kevin Grittner wrote: > ... as long as the rsync was bracketed by calls to pg_start_backup() > and pg_stop_backup(). Or they took it during a filesystem snapshot, or shut the database down. I thought that the only thing start/stop backup did was mark the beginning and end transaction logs for the duration of the backup so they could be backed up separately for a minimal replay. An rsync doesn't need that, because it's binary compatible. You get two exact copies of the database, provided data wasn't changing. That's easy enough to accomplish, really. Or is there some embedded magic in streaming replication that requires start/stop backup? I've never had problems starting slaves built from an rsync before. -- Shaun Thomas OptionsHouse | 141 W. Jackson Blvd. | Suite 500 | Chicago IL, 60604 312-444-8534 sthomas@optionshouse.com
On Thu, Oct 25, 2012 at 9:47 AM, Shaun Thomas <sthomas@optionshouse.com> wrote:
>> ... as long as the rsync was bracketed by calls to pg_start_backup()
>> and pg_stop_backup().
>
> Or they took it during a filesystem snapshot, or shut the database down.
>
> I thought that the only thing start/stop backup did was mark the beginning
> and end transaction logs for the duration of the backup so they could be
> backed up separately for a minimal replay.
>
> An rsync doesn't need that, because it's binary compatible. You get two
> exact copies of the database, provided data wasn't changing. That's easy
> enough to accomplish, really.

Well, that's the thing. Without pg_start_backup, the database is changing and rsync will not make a perfect copy. With pg_start_backup, the replica will replay the WAL from the start_backup point, and any difference rsync left will be ironed out. That's why I say:

rsync - the first one takes a long time
start backup
rsync - this one will take a lot less
stop backup
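Spelled out as commands, that recipe is roughly the following; the data directory path, the slave hostname, and the backup label are placeholders, and in practice you would also exclude files like postmaster.pid:

    # first pass: long, no consistency guarantees, just warms up rsync
    rsync -az --delete /var/lib/pgsql/9.2/data/ slave:/var/lib/pgsql/9.2/data/
    psql -c "SELECT pg_start_backup('resync', true);"
    # second pass: much shorter; WAL replay on the slave irons out the differences
    rsync -az --delete /var/lib/pgsql/9.2/data/ slave:/var/lib/pgsql/9.2/data/
    psql -c "SELECT pg_stop_backup();"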
I brought down the master then the slave and upgraded both. Then I did the rsync and brought both up. This worked. However, with the database being very large it took quite a while. It seemed rsync had to make a lot of changes, which surprised me; I thought they would be almost identical. But in the end it did work, it just took longer than I had hoped. We will soon be tripling the size of our database as we move Oracle data in, so this process may not be so feasible next time.
On 10/25/2012 9:12 AM, delongboy wrote:
What I have done successfully is this:

1. Set up a SECOND instance of the slave with the NEW software version, but do not populate it.
2. Turn off the original slave.
3. Upgrade the master. This is your "hard" downtime you cannot avoid. Restart the master on the new version and resume operations. At this point the slave cannot connect, as it has a version mismatch, so do NOT restart it.
4. pg_start_backup('Upgrading') and rsync the master to the NEW slave directory, excluding the config files (postgresql.conf, recovery.conf and pg_hba.conf, plus the SSL keys if you're using them). Do NOT rsync pg_xlog's contents or the WAL archive logs from the master. Then pg_stop_backup(). Copy in the config files from your slave repository (very important, as you must NOT start the slave server without the correct slave config or it will immediately destroy the context that allows it to come up as a slave, and you get to start over with #4.)
5. Bring up the NEW slave instance. It will immediately connect back to the new master and catch up. This will not take very long as the only data it needs to fetch is that which changed during #4 above.
If you have multiple slaves you can do multiple rsync's (in parallel if you wish) to them between the pg_start_backup and pg_stop_backup calls. The only "gotcha" doing it this way is that you must be keeping enough WAL records on the master to cover the time between the pg_start_backup call and when you bring the slaves back up in replication mode so they're able to retrieve the WAL data and come back into sync. If you come up short the restart will fail.
When the slaves restart they will come into consistency almost immediately but will be materially behind until the replication protocol catches up.
BTW this is much faster than using pg_basebackup (by a factor of four or more at my installation!) -- it appears that the latter does not effectively use compression of the data stream even if your SSL config is in use and would normally use it; rsync used with the "z" option does use it and very effectively so.
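Concretely, the transfer step looks something like this; the data directory path, the slave host, and the exact exclude list are illustrative and need to match your own layout:

    psql -c "SELECT pg_start_backup('Upgrading');"
    rsync -avz \
          --exclude=postgresql.conf --exclude=pg_hba.conf \
          --exclude=recovery.conf --exclude='pg_xlog/*' \
          /var/lib/pgsql/9.2/data/ slave:/var/lib/pgsql/9.2/data/
    psql -c "SELECT pg_stop_backup();"
    # then copy the slave's own config files back into place and start it in slave mode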
On Sun, Oct 28, 2012 at 12:15 PM, Karl Denninger <karl@denninger.net> wrote: > 4. pg_start_backup('Upgrading') and rsync the master to the NEW slave > directory ex config files (postgresql.conf, recovery.conf and pg_hba.conf, > plus the SSL keys if you're using it). Do NOT rsync pg_xlog's contents or > the WAL archive logs from the master. Then pg_stop_backup(). Copy in the > config files from your slave repository (very important as you must NOT > start the slave server without the correct slave config or it will > immediately destroy the context that allows it come up as a slave and you > get to start over with #4.) > > 5. Bring up the NEW slave instance. It will immediately connect back to the > new master and catch up. This will not take very long as the only data it > needs to fetch is that which changed during #4 above. > > If you have multiple slaves you can do multiple rsync's (in parallel if you > wish) to them between the pg_start_backup and pg_stop_backup calls. The > only "gotcha" doing it this way is that you must be keeping enough WAL > records on the master to cover the time between the pg_start_backup call and > when you bring the slaves back up in replication mode so they're able to > retrieve the WAL data and come back into sync. If you come up short the > restart will fail. > > When the slaves restart they will come into consistency almost immediately > but will be materially behind until the replication protocol catches up. That's why I perform two rsyncs, one without pg_start_backup, and one with. Without, you get no guarantees, but it helps rsync be faster next time. So you cut down on the amount of changes that second rsync will have to transfer, you may even skip whole segments, if your update patterns aren't too random. I still have a considerable amount of time between the start_backup and end_backup, but I have minimal downtimes and it never failed. Just for the record, we do this quite frequently in our pre-production servers, since the network there is a lot slower and replication falls irreparably out of sync quite often. And nobody notices when we re-sync the slave. (ie: downtime at the master is nonexistent).
On Sun, Oct 28, 2012 at 9:40 PM, Claudio Freire <klaussfreire@gmail.com> wrote:
I also think that's a good option for most cases, but not because it is faster; in fact, if you count the whole process, it is slower. But the master will be in backup state (between pg_start_backup and pg_stop_backup) for only a small period of time, which makes things go faster on the master (nothing different on the slave, though).
If you have incremental backup, a restore_command on recovery.conf seems better than running rsync again when the slave gets out of sync. Doesn't it?
Regards,
--
Matheus de Oliveira
Analista de Banco de Dados PostgreSQL
Dextra Sistemas - MPS.Br nível F!
www.dextra.com.br/postgres
On Mon, Oct 29, 2012 at 7:41 AM, Matheus de Oliveira <matioli.matheus@gmail.com> wrote: > I also think that's a good option for most case, but not because it is > faster, in fact if you count the whole process, it is slower. But the master > will be on backup state (between pg_start_backup and pg_stop_backup) for a > small period of time which make things go faster on the master (nothing > different on slave though). Exactly the point. >> >> Just for the record, we do this quite frequently in our pre-production >> servers, since the network there is a lot slower and replication falls >> irreparably out of sync quite often. And nobody notices when we >> re-sync the slave. (ie: downtime at the master is nonexistent). >> > > If you have incremental backup, a restore_command on recovery.conf seems > better than running rsync again when the slave get out of sync. Doesn't it? What do you mean? Usually, when it falls out of sync like that, it's because the database is undergoing structural changes, and the link between master and slave (both streaming and WAL shipping) isn't strong enough to handle the massive rewrites. A backup is of no use there either. We could make the rsync part of a recovery command, but we don't want to be left out of the loop so we prefer to do it manually. As noted, it always happens when someone's doing structural changes so it's not entirely unexpected. Or am I missing some point?
On Mon, Oct 29, 2012 at 9:53 AM, Claudio Freire <klaussfreire@gmail.com> wrote:
>>
>> Just for the record, we do this quite frequently in our pre-production
>> servers, since the network there is a lot slower and replication falls
>> irreparably out of sync quite often. And nobody notices when we
>> re-sync the slave. (ie: downtime at the master is nonexistent).
>>
>
> If you have incremental backup, a restore_command on recovery.conf seems
> better than running rsync again when the slave get out of sync. Doesn't it?

What do you mean?

Usually, when it falls out of sync like that, it's because the
database is undergoing structural changes, and the link between master
and slave (both streaming and WAL shipping) isn't strong enough to
handle the massive rewrites. A backup is of no use there either. We
could make the rsync part of a recovery command, but we don't want to
be left out of the loop so we prefer to do it manually. As noted, it
always happens when someone's doing structural changes so it's not
entirely unexpected.
Or am I missing some point?
What I meant is that *if* you save your log segments somewhere (with archive_command), you can always use the restore_command on the slave side to catch up with the master, even if streaming replication failed and you got out of sync. Of course, if your structural changes are *really big*, perhaps recovering from WAL archives could even be slower than rsync (though I really think that's hard to happen).
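For reference, the slave side of that setup is just a recovery.conf along these lines; the connection string and the archive path are made up for the example:

    standby_mode = 'on'
    primary_conninfo = 'host=master port=5432 user=replicator'
    # fall back to the WAL archive whenever streaming falls behind
    restore_command = 'cp /mnt/wal_archive/%f "%p"'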
Regards,
--
Matheus de Oliveira
Analista de Banco de Dados PostgreSQL
Dextra Sistemas - MPS.Br nível F!
www.dextra.com.br/postgres
On Mon, Oct 29, 2012 at 9:09 AM, Matheus de Oliveira <matioli.matheus@gmail.com> wrote: >> > If you have incremental backup, a restore_command on recovery.conf seems >> > better than running rsync again when the slave get out of sync. Doesn't >> > it? >> >> What do you mean? >> >> Usually, when it falls out of sync like that, it's because the >> database is undergoing structural changes, and the link between master >> and slave (both streaming and WAL shipping) isn't strong enough to >> handle the massive rewrites. A backup is of no use there either. We >> could make the rsync part of a recovery command, but we don't want to >> be left out of the loop so we prefer to do it manually. As noted, it >> always happens when someone's doing structural changes so it's not >> entirely unexpected. >> >> Or am I missing some point? > > > What I meant is that *if* you save you log segments somewhere (with > archive_command), you can always use the restore_command on the slave side > to catch-up with the master, even if streaming replication failed and you > got out of sync. Of course if you structural changes is *really big*, > perhaps recovering from WAL archives could even be slower than rsync (I > really think it's hard to happen though). I imagine it's automatic. We have WAL shipping in place, but even that gets out of sync (more segments generated than our quota on the archive allows - we can't really keep more since we lack the space on the server we put them).
On Mon, Oct 29, 2012 at 10:23 AM, Claudio Freire <klaussfreire@gmail.com> wrote:
On Mon, Oct 29, 2012 at 9:09 AM, Matheus de Oliveira
<matioli.matheus@gmail.com> wrote:
>> > If you have incremental backup, a restore_command on recovery.conf seems
>> > better than running rsync again when the slave get out of sync. Doesn't
>> > it?
>>
>> What do you mean?
>>
>> Usually, when it falls out of sync like that, it's because the
>> database is undergoing structural changes, and the link between master
>> and slave (both streaming and WAL shipping) isn't strong enough to
>> handle the massive rewrites. A backup is of no use there either. We
>> could make the rsync part of a recovery command, but we don't want to
>> be left out of the loop so we prefer to do it manually. As noted, it
>> always happens when someone's doing structural changes so it's not
>> entirely unexpected.
>>
>> Or am I missing some point?
>
>
> What I meant is that *if* you save you log segments somewhere (with
> archive_command), you can always use the restore_command on the slave side
> to catch-up with the master, even if streaming replication failed and you
> got out of sync. Of course if you structural changes is *really big*,
> perhaps recovering from WAL archives could even be slower than rsync (I
> really think it's hard to happen though).
If you don't set restore_command *and* you get more segments than wal_keep_segments, PostgreSQL will not read the archived segments (it does not even know where they are, actually).
We have WAL shipping in place, but even that
gets out of sync (more segments generated than our quota on the
archive allows - we can't really keep more since we lack the space on
the server we put them).
Yeah, in that case there is no way. If you cannot keep *all* the segments during your "structural changes", you will have to go with an rsync (or something similar).

But that's an option for you to know: *if* you have enough segments, then it is possible to restore from them. For some customers of mine (with little disk space) I don't even set wal_keep_segments too high, and prefer to "keep" the segments with archive_command, but that's not the best scenario.
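As an illustration, a master set up that way has something like the following in postgresql.conf; the values and the archive path are only examples:

    wal_level = hot_standby
    max_wal_senders = 3
    wal_keep_segments = 64      # kept modest; the archive covers anything older
    archive_mode = on
    archive_command = 'test ! -f /mnt/wal_archive/%f && cp %p /mnt/wal_archive/%f'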
Regards,
--
Matheus de Oliveira
Analista de Banco de Dados PostgreSQL
Dextra Sistemas - MPS.Br nível F!
www.dextra.com.br/postgres
On Fri, Oct 19, 2012 at 12:02:49PM -0300, Claudio Freire wrote: > > This unfortunately does require a new data copy to be pulled across to the > > slave. For the local copies this isn't so bad as wire speed is fast enough > > to make it reasonable; for the actual backup units at a remove it takes a > > while as the copy has to go across a WAN link. I cheat on that by using a > > SSH tunnel with compression turned on (which, incidentally, it would be > > really nice if Postgres supported internally, and it could quite easily -- > > I've considered working up a patch set for this and submitting it.) > > > > For really BIG databases (as opposed to moderately-big) this could be a > > much-more material problem than it is for me. > > Did you try? > > Bring both down. > pg_upgrade master > Bring master up > pg_upgrade slave Is there any reason to upgrade the slave when you are going to do rsync anyway? Of course you need to install the new binaries and libs, but it seems running pg_upgrade on the standby is unnecessary. > rsync master->slave (differential update, much faster than basebackup) > Bring slave up Good ideas. I have applied the attached doc patch to pg_upgrade head and 9.2 docs to suggest using rsync as part of base backup. -- Bruce Momjian <bruce@momjian.us> http://momjian.us EnterpriseDB http://enterprisedb.com + It's impossible for everything to be true. +
On Wed, Nov 7, 2012 at 3:36 PM, Bruce Momjian <bruce@momjian.us> wrote: >> Bring both down. >> pg_upgrade master >> Bring master up >> pg_upgrade slave > > Is there any reason to upgrade the slave when you are going to do rsync > anyway? Of course you need to install the new binaries and libs, but it > seems running pg_upgrade on the standby is unnecessary. Just to speed up the rsync
On Wed, Nov 7, 2012 at 03:44:13PM -0300, Claudio Freire wrote: > On Wed, Nov 7, 2012 at 3:36 PM, Bruce Momjian <bruce@momjian.us> wrote: > >> Bring both down. > >> pg_upgrade master > >> Bring master up > >> pg_upgrade slave > > > > Is there any reason to upgrade the slave when you are going to do rsync > > anyway? Of course you need to install the new binaries and libs, but it > > seems running pg_upgrade on the standby is unnecessary. > > Just to speed up the rsync pg_upgrade is mostly modifying the system tables --- not sure if that is faster than just having rsync copy those. The file modification times would be different after pg_upgrade, so rsync might copy the file anyway when you run pg_upgrade. It would be good for you to test if it really is a win --- I would be surprised if pg_upgrade was a win in this case on the standby. -- Bruce Momjian <bruce@momjian.us> http://momjian.us EnterpriseDB http://enterprisedb.com + It's impossible for everything to be true. +
On Wed, Nov 7, 2012 at 5:59 PM, Bruce Momjian <bruce@momjian.us> wrote: >> > Is there any reason to upgrade the slave when you are going to do rsync >> > anyway? Of course you need to install the new binaries and libs, but it >> > seems running pg_upgrade on the standby is unnecessary. >> >> Just to speed up the rsync > > pg_upgrade is mostly modifying the system tables --- not sure if that is > faster than just having rsync copy those. The file modification times > would be different after pg_upgrade, so rsync might copy the file anyway > when you run pg_upgrade. It would be good for you to test if it really > is a win --- I would be surprised if pg_upgrade was in this case on the > standby. I guess it depends on the release (ie: whether a table rewrite is necessary). I'll check next time I upgrade a database, but I don't expect it to be anytime soon.
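A dry run against the standby's data directory would be an easy way to see the difference; the host and paths here are placeholders:

    # count how much rsync would transfer against the un-upgraded standby data dir
    rsync -an --itemize-changes master:/var/lib/pgsql/9.2/data/ /var/lib/pgsql/9.2/data/ | wc -l
    # repeat after running pg_upgrade on the standby copy and compare the counts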