Thread: minimizing downtime when upgrading

minimizing downtime when upgrading

From
snacktime
Date:
Anyone have any tips for minimizing downtime when upgrading?  So far
we have done upgrades during scheduled downtimes.  Now we are getting
to the point where the time required for a standard dump/restore is
just too long.  What have others done when downtime is critical?  The
only solution we have been able to come up with is to migrate the data
on a per user basis to a new database server.  Each user is a
merchant, and the data in the database is order data.  Migrating one
merchant at a time will keep the downtime per merchant limited to just
the time it takes to migrate the data for that merchant, which is
acceptable.
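
For concreteness, moving a single merchant might look roughly like the
sketch below, assuming that merchant's data can be isolated (say, in its
own schema); every name in it is a placeholder.

# Sketch only: migrate one merchant to an already-running new server.
# "gateway", "old-db", "new-db" and the schema name are placeholders.
MERCHANT=merchant_0001

# 1. Pause traffic for this merchant at the application level.
# 2. Copy just that merchant's schema; this pipeline is the downtime
#    (-n selects a single schema with a reasonably recent pg_dump).
pg_dump -h old-db -n "$MERCHANT" gateway | psql -h new-db gateway

# 3. Repoint the application for this merchant at new-db and resume.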

Any other ideas?

Re: minimizing downtime when upgrading

From
Richard Huxton
Date:
snacktime wrote:
> Anyone have any tips for minimizing downtime when upgrading?  So far
> we have done upgrades during scheduled downtimes.  Now we are getting
> to the point where the time required for a standard dump/restore is
> just too long.  What have others done when downtime is critical?  The
> only solution we have been able to come up with is to migrate the data
> on a per user basis to a new database server.  Each user is a
> merchant, and the data in the database is order data.  Migrating one
> merchant at a time will keep the downtime per merchant limited to just
> the time it takes to migrate the data for that merchant, which is
> acceptable.

The other option would be to run replication, e.g. slony to migrate from
one version to another. I've done it and it works fine, but it will mean
slony adding its own tables to each database. I'd still do it one
merchant at a time, but that should reduce your downtime to seconds.
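
For reference, that flow boils down to something like the slonik sketch
below, done per merchant; the cluster name, hosts, database and table
names are all placeholders, and the exact syntax varies a little between
Slony versions.

#!/bin/sh
# Sketch: replicate one merchant's tables from the old server to the new
# one with Slony-I, then switch over. Everything named here is a placeholder.
slonik <<'EOF'
cluster name = upgrade;
node 1 admin conninfo = 'dbname=gateway host=old-db';
node 2 admin conninfo = 'dbname=gateway host=new-db';

init cluster (id = 1, comment = 'old version, current origin');
store node (id = 2, comment = 'new version', event node = 1);
store path (server = 1, client = 2, conninfo = 'dbname=gateway host=old-db');
store path (server = 2, client = 1, conninfo = 'dbname=gateway host=new-db');

create set (id = 1, origin = 1, comment = 'merchant_0001');
set add table (set id = 1, origin = 1, id = 1,
               fully qualified name = 'merchant_0001.orders');
# ... one "set add table" per table in the merchant's schema ...

subscribe set (id = 1, provider = 1, receiver = 2, forward = no);
EOF
# slon daemons must be running for both nodes before the data is copied.
# Once node 2 has caught up, a "lock set"/"move set" pair makes the new
# server the origin and the application can be repointed, so the actual
# interruption is only a few seconds.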

--
   Richard Huxton
   Archonet Ltd

Re: minimizing downtime when upgrading

From
Kenneth Downs
Date:
snacktime wrote:

> Anyone have any tips for minimizing downtime when upgrading?  So far
> we have done upgrades during scheduled downtimes.  Now we are getting
> to the point where the time required for a standard dump/restore is
> just too long.  What have others done when downtime is critical?  The
> only solution we have been able to come up with is to migrate the data
> on a per user basis to a new database server.  Each user is a
> merchant, and the data in the database is order data.  Migrating one
> merchant at a time will keep the downtime per merchant limited to just
> the time it takes to migrate the data for that merchant, which is
> acceptable.


AFAIK it has always been the case that you should expect to have to dump
out your databases and reload them for version upgrades.

Is anybody over at the dev team considering what an onerous burden this
is?  Is anyone considering doing away with it?




Re: minimizing downtime when upgrading

From
Oleg Bartunov
Date:
On Thu, 15 Jun 2006, snacktime wrote:

> Anyone have any tips for minimizing downtime when upgrading?  So far
> we have done upgrades during scheduled downtimes.  Now we are getting
> to the point where the time required for a standard dump/restore is
> just too long.  What have others done when downtime is critical?  The
> only solution we have been able to come up with is to migrate the data
> on a per user basis to a new database server.  Each user is a
> merchant, and the data in the database is order data.  Migrating one
> merchant at a time will keep the downtime per merchant limited to just
> the time it takes to migrate the data for that merchant, which is
> acceptable.
>
> Any other ideas?

We use the Slony replication package for upgrading.


     Regards,
         Oleg
_____________________________________________________________
Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru),
Sternberg Astronomical Institute, Moscow University, Russia
Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/
phone: +007(495)939-16-83, +007(495)939-23-83

Re: minimizing downtime when upgrading

From
Richard Huxton
Date:
Kenneth Downs wrote:
> AFAIK it has always been the case that you should expect to have to dump
> out your databases and reload them for version upgrades.
>
> Is anybody over at the dev team considering what an onerous burden this
> is?  Is anyone considering doing away with it?

Far from trivial. You have changes in on-disk formats and actual
functionality between major version numbers. For instance - what would
you do to deal with the recent changes in unicode validation?

--
   Richard Huxton
   Archonet Ltd

Re: minimizing downtime when upgrading

From
snacktime
Date:
On 6/16/06, Richard Huxton <dev@archonet.com> wrote:

> The other option would be to run replication, e.g. slony to migrate from
> one version to another. I've done it and it works fine, but it will mean
> slony adding its own tables to each database. I'd still do it one
> merchant at a time, but that should reduce your downtime to seconds.
>

I'll have to take another look at Slony; it's been a while.  Our
database structure is a bit non-standard.  Being a payment gateway, we
are required to keep each merchant's data separate, which means not
mixing data from different merchants in the same table.  So what we do
is give every user their own schema, with their own set of tables.
Yes, I know that's not considered best practice design-wise, but
separate databases would have caused even more issues, and as it turns
out there are some advantages to the separate-schema approach that we
never thought of.  Last time I looked at Slony, you had to configure it
for each individual table you want replicated.  We have around 50,000
tables, and more are added on a daily basis.

Re: minimizing downtime when upgrading

From
Bill Moran
Date:
In response to snacktime <snacktime@gmail.com>:

> On 6/16/06, Richard Huxton <dev@archonet.com> wrote:
>
> > The other option would be to run replication, e.g. slony to migrate from
> > one version to another. I've done it and it works fine, but it will mean
> > slony adding its own tables to each database. I'd still do it one
> > merchant at a time, but that should reduce your downtime to seconds.
>
> I'll have to take another look at Slony; it's been a while.  Our
> database structure is a bit non-standard.  Being a payment gateway, we
> are required to keep each merchant's data separate, which means not
> mixing data from different merchants in the same table.  So what we do
> is give every user their own schema, with their own set of tables.
> Yes, I know that's not considered best practice design-wise, but
> separate databases would have caused even more issues, and as it turns
> out there are some advantages to the separate-schema approach that we
> never thought of.  Last time I looked at Slony, you had to configure it
> for each individual table you want replicated.  We have around 50,000
> tables, and more are added on a daily basis.

We've got a script here that takes a pg_dump and automatically generates
a Slony config that adds all tables and sequences.  I've got to check with
the Powers That Be, but I suspect we'll be opening up the code.
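
In outline, a generator like that can be quite small. The sketch below
reads the system catalogs rather than a pg_dump, and the host and
database names in it are placeholders:

#!/bin/sh
# Sketch: print a slonik "set add table" / "set add sequence" line for
# every user table and sequence, numbering them as it goes. Note that
# Slony still needs a primary (or candidate) key on every table.
id=0
psql -At -h old-db -c "
    SELECT n.nspname || '.' || c.relname, c.relkind
    FROM pg_class c
    JOIN pg_namespace n ON n.oid = c.relnamespace
    WHERE c.relkind IN ('r', 'S')
      AND n.nspname NOT IN ('pg_catalog', 'information_schema')
    ORDER BY 1;" gateway |
while IFS='|' read name kind; do
    id=$((id + 1))
    if [ "$kind" = "r" ]; then
        echo "set add table (set id = 1, origin = 1, id = $id, fully qualified name = '$name');"
    else
        echo "set add sequence (set id = 1, origin = 1, id = $id, fully qualified name = '$name');"
    fi
done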

Does this duplicate any work that anyone else is doing?

--
Bill Moran
Collaborative Fusion Inc.


Re: minimizing downtime when upgrading

From
Kenneth Downs
Date:
Richard Huxton wrote:

> Kenneth Downs wrote:
>
>> AFAIK it has always been the case that you should expect to have to
>> dump out your databases and reload them for version upgrades.
>>
>> Is anybody over at the dev team considering what an onerous burden
>> this is?  Is anyone considering doing away with it?
>
>
> Far from trivial.

Kind of gets to the heart of things, though, doesn't it?

It's the non-trivial stuff where we look to the machine to help us out.

As a user of PostgreSQL, I benefit from a lot of things.  I gain a total
advantage of "X" units of time/money.  Then its time to upgrade and I
have to give a lot of it back.  The more I use the package, the more
non-trivial is my upgrade, and the more I give back.

Regardless of whether a package is commercial or free, it strikes me as
counter to the very soul of programming to build in a burden that
increases with the user's use of the program, threatening even to tip
the balance altogether away from its use.  This seems to be the very
kind of feature that you want to programmatically control precisely
because it is non-trivial.


> You have changes in on-disk formats and actual functionality between
> major version numbers. For instance - what would you do to deal with
> the recent changes in unicode validation?
>



Re: minimizing downtime when upgrading

From
Martijn van Oosterhout
Date:
On Wed, Jun 21, 2006 at 08:10:20AM -0400, Kenneth Downs wrote:
> Regardless of whether a package is commercial or free, it strikes me as
> counter to the very soul of programming to build in a burden that
> increases with the user's use of the program, threatening even to tip
> the balance altogether away from its use.  This seems to be the very
> kind of feature that you want to programmatically control precisely
> because it is non-trivial.

That doesn't change the fact that it's a really hard problem. In-place
upgrades would require lots of safety checks because otherwise you
might end up with a cluster that's not readable by any version.

OTOH, you have something like Slony, which you can use to upgrade to
newer versions without any downtime at all. With a solution like that
working right now, why would people spend effort on making in-place
upgrades work?

Have a nice day,
--
Martijn van Oosterhout   <kleptog@svana.org>   http://svana.org/kleptog/
> From each according to his ability. To each according to his ability to litigate.


Re: minimizing downtime when upgrading

From
Alban Hertroys
Date:
Kenneth Downs wrote:
> Richard Huxton wrote:
>
>> Kenneth Downs wrote:
>>
>>> AFAIK it has always been the case that you should expect to have to
>>> dump out your databases and reload them for version upgrades.
>>>
>>> Is anybody over at the dev team considering what an onerous burden
>>> this is?  Is anyone considering doing away with it?

Is there any good reason not to invest in having a second database
server? That way you could upgrade the slave server, switch that to be
the master server and replicate the data (using Slony-I, for example) to
the slave (formerly master) server.

It provides other benefits as well, like the ability to stay up during
system maintenance, load balancing, etc.

> Kind of gets to the heart of things, though, doesn't it.
>
> It's the non-trivial stuff where we look to the machine to help us out.
> As a user of PostgreSQL, I benefit from a lot of things.  I gain a total
> advantage of "X" units of time/money.  Then its time to upgrade and I
> have to give a lot of it back.  The more I use the package, the more
> non-trivial is my upgrade, and the more I give back.
> Regardless of whether a package is commercial or free, it strikes me as
> counter to the very soul of programming to build in a burden that
> increases with the user's use of the program, threatening even to tip
> the balance altogether away from its use.  This seems to be the very
> kind of feature that you want to programmatically control precisely
> because it is non-trivial.

Which is why you have to use the pg_dump from the new version to dump
your data, so it will be compatible with the new database on restore.
That's a good example of this already being the case.
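
In practice that just means pointing the newer release's binaries at the
old cluster, along these lines (paths, ports and the database name are
placeholders):

# Sketch: dump the old cluster with the NEW version's pg_dump and load
# the result straight into the new cluster.
/usr/local/pgsql-new/bin/pg_dump -h localhost -p 5432 gateway \
    | /usr/local/pgsql-new/bin/psql -h localhost -p 5433 gateway
# Cluster-wide objects (roles and so on) need a pg_dumpall pass as well.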

Your real burden isn't the (possible!) data incompatibility between
major versions, but the fact that your data grows. The more data you
have, the more time a dump/restore will take.

You could attempt to just upgrade and hope your data can be interpreted
by a newer major version (you should dump first, of course). You'll want
to have some kind of checksums over your data to check if everything
went well.
This method can't be expected to always work; that would be near impossible
to guarantee. There'll be changes to data structures (for the better),
for example. I suppose the developers could give some estimate about
your chances...
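
A very crude version of such a check is to compare per-table row counts
on the two servers; a real check would also compare sums or hashes of
the important columns. The hosts and the database name below are
placeholders.

#!/bin/sh
# Sketch: write "schema.table rowcount" for every user table on each
# server, then diff the two result files.
for host in old-db new-db; do
    psql -At -h "$host" -c "
        SELECT n.nspname || '.' || c.relname
        FROM pg_class c
        JOIN pg_namespace n ON n.oid = c.relnamespace
        WHERE c.relkind = 'r'
          AND n.nspname NOT IN ('pg_catalog', 'information_schema')
        ORDER BY 1;" gateway |
    while read t; do
        printf '%s ' "$t"
        psql -At -h "$host" -c "SELECT count(*) FROM $t;" gateway
    done > "counts.$host"
done
diff counts.old-db counts.new-db    # no output means the counts match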

As mentioned, with a replicated setup your trouble should be minimal.

P.S. We don't use replication as of yet, but we probably will soon.
--
Alban Hertroys
alban@magproductions.nl

magproductions b.v.

T: ++31(0)534346874
F: ++31(0)534346876
M:
I: www.magproductions.nl
A: Postbus 416
    7500 AK Enschede

// Integrate Your World //

Re: minimizing downtime when upgrading

From
"H.J. Sanders"
Date:

>>> Is anybody over at the dev team considering what an onerous burden
>>> this is?  Is anyone considering doing away with it?

Just my 2 cents:

More and more databases have to run 24x7, so something has to be done.

For the last 15 years we have also used Informix, and we never, never had
to unload/load the database because of an upgrade.

Perhaps somebody knows how they do the trick?

Regards

Henk Sanders


Re: minimizing downtime when upgrading

From
Jim Nasby
Date:
On Jun 21, 2006, at 7:42 AM, H.J. Sanders wrote:
> For the last 15 years we have also used Informix, and we never, never
> had to unload/load the database because of an upgrade.
>
> Perhaps somebody knows how they do the trick?

Do they provide a migration/upgrade utility?
--
Jim C. Nasby, Sr. Engineering Consultant      jnasby@pervasive.com
Pervasive Software      http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf       cell: 512-569-9461



Re: minimizing downtime when upgrading

From
Kenneth Downs
Date:
Jim Nasby wrote:

> On Jun 21, 2006, at 7:42 AM, H.J. Sanders wrote:
>
>> For the last 15 years we have also used Informix, and we never, never
>> had to unload/load the database because of an upgrade.
>>
>> Perhaps somebody knows how they do the trick?
>
>
> Do they provide a migration/upgrade utility?

In the case of MS SQL Server, the issue doesn't come up.  Here are some
user experiences:

1) Create a database in MS SS 6
2) Upgrade software to MS SS 7
3) Use software


1) Create a database in MS SS 6
2) Backup database (closest analog is -Fc option of pg_dump)
3) Move backup to machine running MS SS 7
4) Attempt to restore.  The program warns that it will upgrade the
database and that you can't go back; proceed?  You say yes.

