Re: Schema version control - Mailing list pgsql-general

From ChronicDB Community Team
Subject Re: Schema version control
Msg-id 1298333496.30814.128.camel@localhost
In response to Re: Schema version control  (Glenn Maynard <glenn@zewt.org>)
List pgsql-general
> What about upgrades that can't be derived directly from an inspection
> of the schema?  Some examples:
>
> - Adding a NOT NULL constraint (without adding a DEFAULT).  You often
> want to precede this with filling in any existing NULL values, so the
> new constraint doesn't fail.

This is an important point. The upgrade to the new schema should be
defined with a matching transformer that will initialize such a value.
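A minimal sketch of that two-step pattern (backfill the NULLs, then add the constraint), using Python's built-in sqlite3 so it runs anywhere; the table and column names are invented, and in PostgreSQL the second step would simply be `ALTER TABLE ... ALTER COLUMN ... SET NOT NULL` rather than the table rebuild SQLite requires:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.executemany("INSERT INTO users (email) VALUES (?)",
                 [("a@example.com",), (None,), ("c@example.com",)])

# Step 1: backfill existing NULLs so the new constraint cannot fail.
conn.execute("UPDATE users SET email = '' WHERE email IS NULL")

# Step 2: enforce NOT NULL. PostgreSQL would use:
#   ALTER TABLE users ALTER COLUMN email SET NOT NULL;
# SQLite lacks that form, so rebuild the table instead.
conn.executescript("""
    CREATE TABLE users_new (id INTEGER PRIMARY KEY, email TEXT NOT NULL);
    INSERT INTO users_new SELECT id, email FROM users;
    DROP TABLE users;
    ALTER TABLE users_new RENAME TO users;
""")

nulls = conn.execute("SELECT COUNT(*) FROM users WHERE email IS NULL").fetchone()[0]
print(nulls)  # 0 -- no NULLs remain, and new NULLs are rejected
```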

> - Updating triggers, functions and their effects.  For example, when I
> have an FTS index with a trigger to update an index column, and I
> change the underlying trigger, I often do something like "UPDATE table
> SET column = column", to cause all of the update triggers to fire and
> recalculate the index columns.

This is the kind of work one should not need to write for every upgrade.
The upgrade should be specified as succinctly as possible, and
everything else around it should be automatic.
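For illustration, the quoted "UPDATE table SET column = column" trick can be sketched with sqlite3, using a hypothetical derived column as a stand-in for the FTS index column:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE docs (id INTEGER PRIMARY KEY, body TEXT, body_len INTEGER);
    -- v1 trigger: keeps a derived column in sync (stand-in for an FTS column).
    CREATE TRIGGER docs_len AFTER UPDATE OF body ON docs BEGIN
        UPDATE docs SET body_len = length(NEW.body) WHERE id = NEW.id;
    END;
""")
conn.execute("INSERT INTO docs (body, body_len) VALUES ('hello', 5)")

# Migration: replace the trigger with changed logic
# (here just length + 1, standing in for a new computation).
conn.executescript("""
    DROP TRIGGER docs_len;
    CREATE TRIGGER docs_len AFTER UPDATE OF body ON docs BEGIN
        UPDATE docs SET body_len = length(NEW.body) + 1 WHERE id = NEW.id;
    END;
""")

# The no-op UPDATE refires the trigger and recomputes every derived value.
conn.execute("UPDATE docs SET body = body")

new_len = conn.execute("SELECT body_len FROM docs").fetchone()[0]
print(new_len)  # 6
```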

> - Creating a new column based on an old one, and removing the old one;
> eg. add a column "n", run "UPDATE ... SET n = i*j * 2", and then drop
> the old columns "i" and "j".

Again, a transformer for computing such new columns is clearly needed.
With ChronicDB, not only can such a transformer be provided, but the
old version of the application client (such as a web app) can also
continue to use the old schema while maintaining backwards
compatibility: its query requests are transformed as well.
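Such a transformer boils down to the quoted add/compute/drop sequence; a sketch with sqlite3 and invented table names (PostgreSQL could drop the old columns directly with `ALTER TABLE ... DROP COLUMN`, where SQLite portably needs a rebuild):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE m (id INTEGER PRIMARY KEY, i INTEGER, j INTEGER)")
conn.executemany("INSERT INTO m (i, j) VALUES (?, ?)", [(2, 3), (4, 5)])

# Transformer: compute the new column from the old ones, then drop them.
conn.executescript("""
    ALTER TABLE m ADD COLUMN n INTEGER;
    UPDATE m SET n = i * j * 2;
    -- PostgreSQL: ALTER TABLE m DROP COLUMN i, DROP COLUMN j;
    -- Portable SQLite form: rebuild without the old columns.
    CREATE TABLE m_new (id INTEGER PRIMARY KEY, n INTEGER);
    INSERT INTO m_new SELECT id, n FROM m;
    DROP TABLE m;
    ALTER TABLE m_new RENAME TO m;
""")

rows = conn.execute("SELECT n FROM m ORDER BY id").fetchall()
print(rows)  # [(12,), (40,)]
```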

> - For efficiency, dropping a specific index while making a large
> update, and then recreating the index.
>
> In my experience, while generating schema updates automatically is
> handy, it tends to make nontrivial database updates more complicated.
> These sorts of things happen often and are an integral part of a
> database update, so I'm just curious how/if you deal with them.

Glenn, I think you have hit the nail on the head here. A more systematic
model for dealing with nontrivial database updates is needed. In the
case of a large update that recreates an index, one approach might be
to instantiate a new schema that carries the new index, and then switch
service over to it automatically while preserving data consistency.
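The drop-index/bulk-update/recreate sequence itself is short; a sketch with sqlite3 and hypothetical names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, v INTEGER)")
conn.execute("CREATE INDEX t_v ON t (v)")
conn.executemany("INSERT INTO t (v) VALUES (?)", [(k,) for k in range(1000)])

# Drop the index so the bulk update avoids per-row index maintenance...
conn.execute("DROP INDEX t_v")
conn.execute("UPDATE t SET v = v * 2")
# ...then rebuild it once, in bulk, afterwards.
conn.execute("CREATE INDEX t_v ON t (v)")

max_v = conn.execute("SELECT MAX(v) FROM t").fetchone()[0]
print(max_v)  # 1998
```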

> I've used Ruby's migrations, and for my Django databases I use my own
> migration system which is based in principle off of it: create scripts
> to migrate the database from version X to X+1 and X-1, and upgrade or
> downgrade by running the appropriate scripts in sequence.
>
> It's not ideal, since it can't generate a database at a specific
> version directly; it always has to run through the entire sequence of
> migrations to the version you want, and the migrations accumulate.
> However, it can handle whatever arbitrary steps are needed to update a
> database, and I don't need to test updates from every version to every
> other version.

This is something we are planning to add to ChronicDB. Given a list of
database upgrade definitions, the combined "end result" should be
computed automatically, so that one could upgrade directly from any
version to any other.
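A toy version of such a sequential runner (Ruby-migrations style, upgrade direction only, with all table and column names hypothetical) might look like:

```python
import sqlite3

# Hypothetical migration list: entry k upgrades schema version k -> k+1.
MIGRATIONS = [
    "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)",
    "ALTER TABLE users ADD COLUMN email TEXT",
]

def schema_version(conn):
    """Read the current schema version, treating a missing record as 0."""
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (v INTEGER)")
    row = conn.execute("SELECT v FROM schema_version").fetchone()
    return row[0] if row else 0

def migrate(conn, target):
    """Run the migrations from the current version up to `target`, in order."""
    current = schema_version(conn)
    for step in MIGRATIONS[current:target]:
        conn.execute(step)
    conn.execute("DELETE FROM schema_version")
    conn.execute("INSERT INTO schema_version (v) VALUES (?)", (target,))

conn = sqlite3.connect(":memory:")
migrate(conn, len(MIGRATIONS))
final = schema_version(conn)
print(final)  # 2
```

The drawback Glenn describes is visible here: `migrate` always replays the sequence from the current version; there is no way to jump straight to an arbitrary version without composing the intermediate steps.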

