On Sat, Jun 2, 2012 at 1:28 AM, Bryan Montgomery <monty@english.net> wrote:
> Hello,
> So we've been reviewing our processes and working on improving them. One
> area we've been lacking is a procedure to version control our database
> functions, table changes, static data etc.
>
> I'm curious how others do it. Ideally, we want it to be part of our release
> / build process so that we deploy functions.
>
> However, how do you handle dependencies?
>
> We've also used patch files if we need to modify tables - these are
> typically written to only run once, i.e. alter table add column x int.
> Any thoughts on putting this into a process so that it can be run
> multiple times without any issue?
At work, we have perfectly linear database versioning, so I
implemented a simple patchlevel system. The database has a singleton
config table with (among other things) PatchLevel INTEGER NOT NULL,
and every change to the database increments it. We have a single
autopatch script whose logic looks something like this:
level = query("select patchlevel from config");
query("begin");
switch (level)
{
    default: explode("Incompatible source code and database, terminating");
    case 1:
        say("AutoPatch: Adding FooBar column to Quuxification table");
        query("alter table quux add FooBar integer not null default 0");
        /* fall through */
    case 2:
        say("AutoPatch: Creating accountancy tables");
        query("create table ........");
        query("create table ....");
        //Add new patch levels here
        query("update config set patchlevel=3");
    case 3:
        break;
}
// ... other code here, which will notice if the transaction's broken
query("commit");
This script is safe to run multiple times, and is in fact quite
efficient in that case (it does just one query and then terminates).
The transaction also protects against code bugs and other failures:
it will never leave a system half-patched.
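For illustration, here's a minimal runnable sketch of that patchlevel idea in Python, using sqlite3 as a stand-in database. The table and column names (config, quux, accounts) follow the pseudocode above or are my own placeholders, not anything from a real schema:

```python
# Sketch of a linear patchlevel system: a singleton config table holds the
# current patchlevel, and all pending patches run in one transaction.
import sqlite3

# Each entry: (level the database is at before this patch, description, DDL).
# New patches get appended here, mirroring "Add new patch levels here" above.
PATCHES = [
    (1, "Adding FooBar column to Quuxification table",
     ["ALTER TABLE quux ADD COLUMN FooBar INTEGER NOT NULL DEFAULT 0"]),
    (2, "Creating accountancy tables",
     ["CREATE TABLE accounts (id INTEGER PRIMARY KEY, name TEXT)"]),
]
TARGET = len(PATCHES) + 1  # patchlevel once every patch has been applied


def autopatch(conn):
    level = conn.execute("SELECT patchlevel FROM config").fetchone()[0]
    if level > TARGET:
        # Database is newer than this code knows about: refuse to touch it.
        raise RuntimeError("Incompatible source code and database, terminating")
    # One transaction for all pending patches, so a failure part-way through
    # rolls everything back and never leaves the database half-patched.
    with conn:
        for patch_level, desc, statements in PATCHES:
            if level <= patch_level:
                print("AutoPatch:", desc)
                for stmt in statements:
                    conn.execute(stmt)
        if level < TARGET:
            conn.execute("UPDATE config SET patchlevel=?", (TARGET,))
    return TARGET
```

Running it against an up-to-date database does one SELECT and exits, so it's cheap enough to call on every application start.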
ChrisA