I'm in a similar position, cloning a database multiple times to provide
development and QA databases for several groups. The feature Clark
describes would greatly help many people with their development and QA
databases.
On the other hand, I imagine a feature like this would not be easy to
develop, and while it would be useful to many users, it would not
benefit all of them. So I doubt we can have it any time soon.
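For reference, the template-based cloning Clark mentions below can be sketched like this (the database and role names here are made up, and the template database must have no active sessions while the clone is created):

```sql
-- Clone an existing database by reusing it as a template.
-- The source database must be idle (no connected sessions),
-- or CREATE DATABASE will fail.
CREATE DATABASE qa_stage
    WITH TEMPLATE = production_copy
         OWNER    = qa_group;

-- ... run tests against qa_stage ...

-- Drop the stage once testing is finished.
DROP DATABASE qa_stage;
```

This copies the template's files wholesale, which is exactly why it still costs disk space and time proportional to the size of the source database.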
On Sat, 2011-11-12 at 16:40 -0500, Clark C. Evans wrote:
> Hello all!
>
> Our company has some headaches in our application development
> and deployment process. The chief problem is "creating stages",
> which to this audience means: cloning a database efficiently,
> making and testing a few changes, perhaps recording the
> differences between databases, and then dropping the database.
>
> I'm eternally grateful to someone who pointed out that we
> should be using "CREATE DATABASE ... WITH TEMPLATE". However,
> this has two big disadvantages. First, it only works if you
> can kick the users off the clone. Secondly, it still takes
> time, uses disk space, etc. We have some big databases.
>
> I was also thinking about using ZFS with PostgreSQL to do
> some sort of copy-on-write. However, this would require me
> to spawn a whole *new* PostgreSQL instance. In both of these
> cases, you lose your cache...
>
> So, I was wondering... could PostgreSQL grow the ability to
> "CLONE" a database by re-using existing file system blocks,
> sharing them across databases? This would perhaps be fast,
> keep the shared memory cache relevant for both the old copy
> and the clone, and remove WAL overhead. Then, if the block
> has to be edited, it'd be cloned in memory, and the clone
> would be flushed.
>
> I'm just imagining a world where "deploying" a new version
> of our software that makes very small catalog changes and
> tweaks a few rows would be... quick. Quick to try, check,
> test, and even deploy on live servers.
>
> Best,
>
> Clark
>