Re: Need some help creating a database sandbox... - Mailing list pgsql-general

From Magnus Hagander
Subject Re: Need some help creating a database sandbox...
Date
Msg-id 6BCB9D8A16AC4241919521715F4D8BCE6C7805@algol.sollentuna.se
In response to Need some help creating a database sandbox...  ("Eric D. Nielsen" <nielsene@MIT.EDU>)
List pgsql-general
> I'm trying to set up a "safe" testing database environment for
> some unit testing of a web application.  I would like to have
> the unit tests restore the database to a known state before
> every test.  The simplest way I thought would be to have the
> testing tools drop/create the testing database on every test
> case, and then populate the database from a specified file.
> However I don't want to give the test user superuser
> privileges.  Thus I don't think I can restrict it to only
> drop/create a single named DB.

No, AFAIK there is no way to do that.


> My next thought was to code up a "DELETE ALL" script that
> would delete all entities in the database.  However it seems
> easy to miss something, and it's not robust against schema
> changes, even though it can be locked down to the test_db_owner.

If you're going to drop *everything* in the db, you can drive something
off the system tables or the information schema. Like:

SELECT 'DROP TABLE ' || table_schema || '.' || table_name || ';'
FROM information_schema.tables
WHERE table_type = 'BASE TABLE'
  AND table_schema NOT IN ('pg_catalog', 'information_schema');

And then feed the generated script back through a different psql prompt.
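Something like this should glue the two steps together (just a sketch, not
tested; "testdb" and the connection options are placeholders for your own
setup):

  psql -AtX -d testdb -c "SELECT 'DROP TABLE ' || table_schema || '.' || table_name || ';'
    FROM information_schema.tables
    WHERE table_type = 'BASE TABLE'
      AND table_schema NOT IN ('pg_catalog', 'information_schema')" | psql -d testdb

The -A and -t switches turn off alignment and headers, so the first psql
emits nothing but the generated SQL.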

Similar scripts for other object types, of course (views, functions, etc).
It might be easier to drive it off the system catalogs directly instead of
the information schema, if you can live with possible backend version
dependencies.
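For tables, that would look something like this (again only a sketch; which
relkind values you need to include depends on the server version):

  SELECT 'DROP TABLE ' || n.nspname || '.' || c.relname || ';'
  FROM pg_class c
    JOIN pg_namespace n ON n.oid = c.relnamespace
  WHERE c.relkind = 'r'
    AND n.nspname NOT IN ('pg_catalog', 'information_schema');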


> A third thought would be to run a second cluster containing
> only the test database(s).  Then the users wouldn't be
> shared, so even if someone connected to the wrong DB it
> would lack any permissions.  I don't have much experience
> running multiple clusters, however.  So I don't know if that's
> opening up another whole can of worms.

Just make them run in completely different directories, and use
different accounts to start each cluster (each only having permissions on
its own data directory, of course). It's going to mean two sets of
shared buffer caches etc., so you may need to trim the memory values in
your postgresql.conf, and of course run them on different ports, but it
should work just fine.
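Roughly like this (the paths, port and account name are just examples):

  # as a separate OS account (say "pgtest") that owns the data directory:
  initdb -D /home/pgtest/data

  # in /home/pgtest/data/postgresql.conf, pick a free port and trim the
  # memory settings (shared_buffers etc.), for example:
  #   port = 5433

  pg_ctl -D /home/pgtest/data -l /home/pgtest/pg.log start

  # the tests then connect to that port:
  psql -p 5433 -U test_db_owner testdb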

//Magnus
