Re: minimizing downtime when upgrading - Mailing list pgsql-general

From: snacktime
Subject: Re: minimizing downtime when upgrading
Date:
Msg-id: 1f060c4c0606161016m6e3a11fct1b8b79c1e96808d3@mail.gmail.com
In response to: Re: minimizing downtime when upgrading  (Richard Huxton <dev@archonet.com>)
Responses: Re: minimizing downtime when upgrading  (Bill Moran <wmoran@collaborativefusion.com>)
List: pgsql-general
On 6/16/06, Richard Huxton <dev@archonet.com> wrote:

> The other option would be to run replication, e.g. Slony, to migrate from
> one version to another. I've done it and it works fine, but it will mean
> Slony adding its own tables to each database. I'd still do it one
> merchant at a time, but that should reduce your downtime to seconds.
>

I'll have to take another look at Slony; it's been a while.  Our
database structure is a bit non-standard.  Being a payment gateway, we
are required to keep data separated between merchants, which means not
mixing data from different merchants in the same table.  So what we do
is give every user their own schema, with their own set of tables.
Yes, I know that's not considered best practice design-wise, but
separate databases would have caused even more issues, and as it turns
out there are some advantages to the separate-schema approach that we
never anticipated.  Last time I looked at Slony, you had to configure
it for each individual table you want replicated.  We have around
50,000 tables, and more are added daily.
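
For what it's worth, that per-table configuration can at least be
scripted rather than typed by hand.  A rough sketch of the idea (not
something I've actually run against our schemas): generate the slonik
"set add table" lines from the system catalogs.  The set id, origin,
and schema filter below are placeholders, and Slony will still want a
primary or unique key on every table it replicates.

  -- Emit one slonik "set add table" line per user table, using a
  -- throwaway sequence to hand out unique table ids.
  CREATE TEMP SEQUENCE slony_tab_id;

  SELECT 'set add table (set id = 1, origin = 1, id = '
         || nextval('slony_tab_id')
         || ', fully qualified name = '''
         || schemaname || '.' || tablename || ''');'
  FROM pg_tables
  WHERE schemaname NOT IN ('pg_catalog', 'information_schema')
  ORDER BY schemaname, tablename;

As far as I understand, newly created tables don't get picked up
automatically either; they'd have to go into a fresh set that then
gets merged in, which is the part that worries me with tables being
added daily.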
