Thread: Add more regression tests for dbcommands
Hi,
Please find attached a patch to take code-coverage of DBCommands (CREATE DATABASE / ALTER DATABASE / DROP DATABASE) from 36% to 71%.
Any and all feedback is obviously welcome.
--
Robins Tharakan
Attachment
Robins Tharakan <tharakan@gmail.com> writes:
> Please find attached a patch to take code-coverage of DBCommands (CREATE
> DATABASE / ALTER DATABASE / DROP DATABASE) from 36% to 71%.

I wish to object strenuously to adding any more CREATE/DROP DATABASE
commands to the core regression tests.  Those are at least one order of
magnitude more expensive than any other commands we test, and the added
code coverage is precisely zero because creation and dropping of a
database is already done repeatedly in the contrib regression tests.

regards, tom lane
>> Please find attached a patch to take code-coverage of DBCommands (CREATE
>> DATABASE / ALTER DATABASE / DROP DATABASE) from 36% to 71%.
>
> I wish to object strenuously to adding any more CREATE/DROP DATABASE
> commands to the core regression tests.  Those are at least one order of
> magnitude more expensive than any other commands we test, and the added
> code coverage is precisely zero because creation and dropping of a
> database is already done repeatedly in the contrib regression tests.

Would you be okay if there is one/a few effective create/drop (some tests
check that the create or drop fails e.g. depending on permissions, which
ISTM is not tested anywhere else), so that tests for various ALTER
DATABASE commands are combined together onto these databases?

--
Fabien.
Fabien COELHO <coelho@cri.ensmp.fr> writes:
> Would you be okay if there is one/a few effective create/drop (some tests
> check that the create or drop fails e.g. depending on permissions, which
> ISTM is not tested anywhere else), so that tests for various ALTER
> DATABASE commands are combined together onto these databases?

TBH, I do not see that such tests are worth adding, if they are going to
significantly slow down the core regression tests.  Those tests are run
probably hundreds of times a day, in aggregate across all Postgres
developers.  Adding ten seconds or whatever this would add is a major
cost, while the benefit appears trivial.

We could consider adding expensive low-value tests like these to some
alternate regression target that's only exercised by buildfarm members,
perhaps.  But I think there's probably a point of diminishing returns
even in that context.

regards, tom lane
Hello,

>> Would you be okay if there is one/a few effective create/drop (some tests
>> check that the create or drop fails e.g. depending on permissions, which
>> ISTM is not tested anywhere else), so that tests for various ALTER
>> DATABASE commands are combined together onto these databases?
>
> TBH, I do not see that such tests are worth adding, if they are going to
> significantly slow down the core regression tests.  Those tests are run
> probably hundreds of times a day, in aggregate across all Postgres
> developers.  Adding ten seconds or whatever this would add is a major
> cost, while the benefit appears trivial.
>
> We could consider adding expensive low-value tests like these to some
> alternate regression target that's only exercised by buildfarm members,
> perhaps.  But I think there's probably a point of diminishing returns
> even in that context.

I'm not sure that the tests are "low value", because a commit that would
generate a failure on a permission check test would be a potential
security issue for Pg.  My personal experience in other contexts is that
small sanity checks help avoid blunders at a small cost.  It is also
worthwhile to have significant functional tests, such as replication and
so on...

As for the cost, if the proposed tests are indeed too costly, which is not
necessarily the case from what I have seen, I do not think that it would
be a great problem to have two sets of tests, one a superset of the other,
with some convention.  It is enough that these tests are run once in a
while to raise an alert if need be, especially before a release, and not
necessarily on every "make check" of every developer in the world, so that
we get some value at very low cost.

So, as you suggest implicitly, maybe "make check" and "make longcheck" or
"make shortcheck" in the test infrastructure would do the trick?

--
Fabien.
On 2013-05-13 16:52:08 +0200, Fabien COELHO wrote:
> I'm not sure that the tests are "low value", because a commit that would
> generate a failure on a permission check test would be a potential
> security issue for Pg.

> As for the cost, if the proposed tests are indeed too costly, which is not
> necessarily the case from what I have seen, I do not think that it would
> be a great problem to have two sets of tests, one a superset of the
> other, with some convention.

Well, tests like permission tests aren't the expensive part.  The actual
CREATE/DROP DATABASE you do is.  The latter essentially is already tested
by the buildfarm.

So, trimming the patch to do only the fast stuff should be less
controversial?

Greetings,

Andres Freund

--
Andres Freund                     http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
I believe Tom / Andres and Fabien all have valid points.
Net-net, I believe the tests themselves are non-negotiable, but they are not required to be in make check. For now, it's the slow tests that are the pain point here, and I would soon try to prune them and submit once again.
Whether it goes into make check or not is obviously not my call, but to me their importance is undoubted, since I am sure they increase code coverage. Actually, that is 'how' I create those tests: I look at untouched code and create new tests that exercise it.
Would try to revert with a faster script (preferably with minimal CREATE/DROP).
--
Robins Tharakan
On 13 May 2013 20:30, Andres Freund <andres@2ndquadrant.com> wrote:
On 2013-05-13 16:52:08 +0200, Fabien COELHO wrote:
>
> Hello,
>
> >>Would you be okay if there is one/a few effective create/drop (some tests
> >>check that the create or drop fails e.g. depending on permissions, which
> >>ISTM is not tested anywhere else), so that tests for various ALTER
> >>DATABASE commands are combined together onto these databases?
> >
> >TBH, I do not see that such tests are worth adding, if they are going to
> >significantly slow down the core regression tests. Those tests are run
> >probably hundreds of times a day, in aggregate across all Postgres
> >developers. Adding ten seconds or whatever this would add is a major
> >cost, while the benefit appears trivial.
>
> >We could consider adding expensive low-value tests like these to some
> >alternate regression target that's only exercised by buildfarm members,
> >perhaps. But I think there's probably a point of diminishing returns
> >even in that context.
>
> I'm not sure that the tests are "low value", because a commit that would
> generate a failure on a permission check test would be a potential security
> issue for Pg.

> As for the cost, if the proposed tests are indeed too costly, which is not
> necessarily the case from what I have seen, I do not think that it would
> be a great problem to have two sets of tests, one a superset of the
> other, with some convention.

Well, tests like permission tests aren't the expensive part.  The actual
CREATE/DROP DATABASE you do is.  The latter essentially is already tested
by the buildfarm.

So, trimming the patch to do only the fast stuff should be less
controversial?
Greetings,
Andres Freund
--
Andres Freund http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
>> As for the cost, if the proposed tests are indeed too costly, which is not
>> necessarily the case from what I have seen, I do not think that it would
>> be a great problem to have two sets of tests, one a superset of the
>> other, with some convention.
>
> Well, tests like permission tests aren't the expensive part. The actual
> CREATE/DROP DATABASE you

Not me, but "Robins Tharakan".

> do is. The latter essentially is already tested by the buildfarm.

Yep, that is what Tom was arguing.

> So, trimming the patch to do only the fast stuff should be less
> controversial?

Yes, I think so.

I think that allowing "expensive" tests such as a full archive or
replication setup would be nice as well, so maybe having several levels
of tests would be a good thing anyway.

--
Fabien.
> Would try to revert with a faster script (preferably with minimal
> CREATE/DROP).

Yes.

I just checked with a few create database/drop database. One cycle takes
about 1 full second on my laptop, so I must agree that it is slow.

--
Fabien.
Hi,
Attached is an updated patch that does only 1 CREATE DATABASE and reuses that for all other tests.
The code-coverage with this patch goes up from 36% to 70%.
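The reworked structure described above might look roughly like this (a sketch only; the object names are illustrative, not taken from the actual patch):

```sql
CREATE DATABASE regress_db1;                  -- the single expensive step
ALTER DATABASE regress_db1 CONNECTION LIMIT 10;
ALTER DATABASE regress_db1 SET search_path = public;
ALTER DATABASE regress_db1 RESET search_path;
ALTER DATABASE regress_db1 RENAME TO regress_db2;
DROP DATABASE regress_db2;                    -- the single expensive teardown
```

All of the ALTER DATABASE variants reuse the one created database, so only one create/drop cycle is paid for.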
--
Robins Tharakan
On 13 May 2013 21:04, Robins Tharakan <tharakan@gmail.com> wrote:
> I believe Tom / Andres and Fabien all have valid points.
> [...]
> Would try to revert with a faster script (preferably with minimal
> CREATE/DROP).
Attachment
On 2013-05-21 02:58:25 +0530, Robins Tharakan wrote:
> Attached is an updated patch that does only 1 CREATE DATABASE and reuses
> that for all other tests.
> The code-coverage with this patch goes up from 36% to 70%.

Even creating one database seems superfluous. The plain CREATE DATABASE
has been tested due to the creation of the regression database itself
anyway? All the other tests should be doable using the normal regression
db?

Greetings,

Andres Freund

--
Andres Freund                     http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
Hi Andres,
Attached is a patch that does no CREATE DATABASE at all, but as a result the regression tests no longer cover the following:
- ALTER DATABASE RENAME TO is not allowed on a database in use. Had to remove two tests that were using this.
- ALTER DATABASE SET TABLESPACE is also not allowed on a database in use, so removed it. (The existing test was supposed to fail too, but at a later stage in the function, when checking whether the tablespace exists; with the 'regression' database it doesn't even reach that part.)
- The CREATE DATABASE test itself was checking whether 'CONNECTION LIMIT' was working. Removed that as well.
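For reference, the two in-use restrictions mentioned above look roughly like this from psql (the error texts are approximations from memory, not verified output):

```sql
-- Assuming we are connected to the 'regression' database:
ALTER DATABASE regression RENAME TO regression2;
-- ERROR:  current database cannot be renamed
ALTER DATABASE regression SET TABLESPACE pg_default;
-- ERROR:  cannot change the tablespace of the currently open database
```

Both commands fail immediately because the target database is the current one, which is why neither path can be exercised without a second, disposable database.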
Code coverage improved from 36% to 68%.
--
Robins Tharakan
On 24 June 2013 07:47, Andres Freund <andres@2ndquadrant.com> wrote:
On 2013-05-21 02:58:25 +0530, Robins Tharakan wrote:
> Attached is an updated patch that does only 1 CREATE DATABASE and reuses
> that for all other tests.
> The code-coverage with this patch goes up from 36% to 70%.

Even creating one database seems superfluous. The plain CREATE DATABASE
has been tested due to the creation of the regression database itself
anyway?
All the other tests should be doable using the normal regression db?
Greetings,
Andres Freund
--
Andres Freund http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
Attachment
Hi,

I am generally a bit unsure whether the regression tests you propose
aren't a bit too verbose. Does any of the committers have an opinion
about this? My feeling is that they are ok if they aren't slowing down
things much.

On 2013-06-26 01:55:53 -0500, Robins Tharakan wrote:
> The CREATE DATABASE test itself was checking whether the 'CONNECTION LIMIT'
> was working. Removed that as well.

You should be able to test that by setting the connection limit to 1 and
then trying to connect using \c. The old connection is only dropped after
the new one has been successfully established.

Greetings,

Andres Freund

--
Andres Freund                     http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
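The \c trick suggested here could be sketched in a pg_regress SQL file along these lines (the error text is approximate, and I have not verified the exact wording):

```sql
ALTER DATABASE regression CONNECTION LIMIT 1;
-- psql's \c establishes the new connection before releasing the old one,
-- so with a limit of 1 the reconnect should be refused:
\c regression
-- expected: FATAL:  too many connections for database "regression"
ALTER DATABASE regression CONNECTION LIMIT -1;  -- restore the default
```

One caveat: superusers are exempt from per-database connection limits, so for the failure to actually occur the reconnect would have to be made as an unprivileged role rather than as the superuser the regression suite normally runs as.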
Andres Freund <andres@2ndquadrant.com> writes:
> I am generally a bit unsure whether the regression tests you propose
> aren't a bit too verbose. Does any of the committers have an opinion
> about this?
> My feeling is that they are ok if they aren't slowing down things much.

Yeah, I'm concerned about speed too.  If the regression tests take a long
time it makes it harder for developers to run them.  (I like to point at
mysql's regression tests, which take well over an hour even on fast
machines.  If lots of tests are so helpful, why is their bug rate no
better than ours?)

I was intending to suggest that much of what Robins has submitted doesn't
belong in the core regression tests, but could usefully be put into an
optional set of "big" regression tests.  We already have a "numeric_big"
test in that spirit.  What seems to be lacking is an organizational
principle for this (should the "big" tests live with the regular ones, or
in a separate directory?) and a convenient "make" target for running the
big ones.  Once we had a setup like that, we could get some or all of the
buildfarm machines to run the "big" tests, but we wouldn't be penalizing
development work.

regards, tom lane
> I was intending to suggest that much of what Robins has submitted
> doesn't belong in the core regression tests, but could usefully be put
> into an optional set of "big" regression tests.  We already have a
> "numeric_big" test in that spirit.  What seems to be lacking is an
> organizational principle for this (should the "big" tests live with the
> regular ones, or in a separate directory?) and a convenient "make"
> target for running the big ones.  Once we had a setup like that, we
> could get some or all of the buildfarm machines to run the "big" tests,
> but we wouldn't be penalizing development work.

I have been suggesting something along that line in some of the reviews
I've posted about Robins' regression tests, if they were to be rejected
on the basis that they add a few seconds to checks. They are well made to
test corner cases quite systematically, and I feel that it would be sad
if they were lost.

Looking at the GNUmakefile, ISTM that there could be a
{parallel,serial}_big_schedule derived from {parallel,serial}_schedule by
appending a few things in a separate file:

    parallel_big_schedule: parallel_schedule parallel_big
            cat $^ > $@

    bigcheck: ... parallel_big_schedule
            ... --schedule=$(srcdir)/parallel_big_schedule ...

and likewise for serial & bigtest.

Also, the serial_schedule should be automatically derived from the
parallel one, which does not seem to be the case currently. ISTM that
Robins' patches only update the parallel version.

--
Fabien.
On 06/26/2013 12:08 PM, Fabien COELHO wrote:
> I have been suggesting something along that line in some of the reviews
> I've posted about Robins' regression tests, if they were to be rejected
> on the basis that they add a few seconds for checks. They are well made
> to test corner cases quite systematically, and I feel that it would be
> sad if they were lost.

My thinking was that someone should add all of his new tests at once,
and then see how much of a time difference they make.  If it's 7 seconds,
who cares?

--
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com
Josh Berkus <josh@agliodbs.com> writes:
> My thinking was that someone should add all of his new tests at once,
> and then see how much of a time difference they make.  If it's 7
> seconds, who cares?

Making that measurement on the current set of tests doesn't seem to me
to prove much.  I assume Robins' eventual goal is to make a significant
improvement in the tests' code coverage across the entire backend, and
what we see submitted now is just as much as he's been able to do yet
in that line.  So even if the current cost is negligible, I don't think
it'll stay that way.

regards, tom lane
On Wed, Jun 26, 2013 at 11:33 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
> Making that measurement on the current set of tests doesn't seem to me
> to prove much.  I assume Robins' eventual goal is to make a significant
> improvement in the tests' code coverage across the entire backend, and
> what we see submitted now is just as much as he's been able to do yet
> in that line.  So even if the current cost is negligible, I don't think
> it'll stay that way.

Well, this is a discussion we've had before.  I'm certainly in favor of
having a bigger set of regression tests, so that people can just run the
small suite if they want to.  But I'm not keen on pushing tests into that
extended suite when they run in fractions of a second.  That's likely to
create more hassle than it solves, since people will forget to update the
expected outputs if they don't run those tests, and if they do run those
tests, then having them in a separate suite isn't saving anything.  The
place for a separate test suite, IMHO, is when you have tests that run
for a long time individually.

FWIW, internally to EDB, we have it broken down just about exactly that
way.  Our core set of regression tests for PPAS takes about 5 minutes to
run on my laptop, and unfortunately quite a bit longer on some older
laptops.  It's a nuisance, but it's a manageable nuisance, and it finds a
lot of coding errors.  Then we have separate schedules for test suites
that contain long-running tests or have dependencies on which compiler
flags you use; most developers don't run that suite regularly, but it
gets run every night.  That keeps those annoyances out of the foreground
path of developers.

I really don't see why we shouldn't take the same approach here.  I don't
think we want as large a regression test suite for PostgreSQL as we have
for PPAS, because among other things, PostgreSQL doesn't have a paid
staff of people who can be roped into helping manage it when needed.  But
the management effort for the kind of expansion that Robins Tharakan is
proposing here is not going to be all that large, at least if my EDB
experience is any indication.  And I think it will help find bugs.

I also think (and this is also something we've seen internally) that part
of the value of a regression suite is that it exposes when you've changed
the behavior.  The regression tests fail and you have to change the
expected outputs; fine.  But now it's much less likely that you're going
to make those kinds of changes without *realizing* that you've changed
the behavior.  I don't have a specific example top of mind right now, but
I am pretty sure there have been at least a few cases where PostgreSQL
changes have had knock-on consequences that weren't obvious at the time
they were made.  The kind of work that Robins is doing here is not going
to fix that problem entirely, but I think it will help, and I think that
has a lot of value.

So I'd like to endorse Josh's idea: subject to appropriate review, let's
add these test cases.  Then, if it really turns out to be too burdensome,
we can take them out, or figure out a sensible way to split the suite.
Pushing all of Robins' work into a secondary suite, or throwing it out
altogether, both seem to me to be things which will not be to the
long-term benefit of the project.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
On 6/27/13 10:20 AM, Robert Haas wrote:
> So I'd like to endorse Josh's idea: subject to appropriate review,
> let's add these test cases.  Then, if it really turns out to be too
> burdensome, we can take them out, or figure out a sensible way to
> split the suite.  Pushing all of Robins' work into a secondary suite,
> or throwing it out altogether, both seem to me to be things which will
> not be to the long-term benefit of the project.

I agree.  If it gets too big, let's deal with it then.  But let's not
make things more complicated because they *might* get too big later.
On 6/26/13 12:17 PM, Tom Lane wrote:
> (I like to point at mysql's regression tests, which take well over an
> hour even on fast machines.  If lots of tests are so helpful, why is
> their bug rate no better than ours?)

Tests are not (primarily) there to prevent bugs.
Peter Eisentraut <peter_e@gmx.net> writes:
> Tests are not (primarily) there to prevent bugs.

Really?  Pray tell, what do you think they're for?  Particularly code
coverage tests, which is what these are?

regards, tom lane
On 6/27/13 10:57 AM, Tom Lane wrote:
>> Tests are not (primarily) there to prevent bugs.
>
> Really?  Pray tell, what do you think they're for?  Particularly code
> coverage tests, which is what these are?

Tests document the existing functionality of the code, to facilitate
refactoring.  (paraphrased, Uncle Bob)

Case in point, the existing ALTER DDL code could arguably use even more
refactoring.  But no one wants to do it because it's unclear what will
break.  With the proposed set of tests (which I have not read to
completion), this could become quite a bit easier, both for the coder and
the reviewer.

But these tests probably won't detect, say, locking bugs in such new
code.  That can only be prevented by careful code review and a clean
architecture.  Perhaps MySQL has neither. ;-)

Code coverage is not an end in itself, it's a tool.  In this sense, tests
prevent existing functionality being broken, which might be classified as
a bug.  But it's wrong to infer that because system X has a lot of bugs
and a lot of tests, having a lot of tests must be useless.
Peter Eisentraut <peter_e@gmx.net> wrote:
> I agree.  If it gets too big, let's deal with it then.  But let's not
> make things more complicated because they *might* get too big later.

+1

--
Kevin Grittner
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
Hi Andres,
As an aside, your point about CONNECTION LIMIT was something that hadn't come to my mind, and it is probably a good way to test ALTER DATABASE with CONNECTION LIMIT.
It's just that that wasn't actually what I was testing there. The 'CONNECTION LIMIT' test was coupled with CREATE DATABASE because I wanted to test that 'branch' in the CREATE DATABASE function, just to ensure that there was a regression test that exercises CONNECTION LIMIT specifically with CREATE DATABASE. That's all. A check to confirm whether connection-limit restrictions actually got enforced was something I missed, but, well, it's out of the window for now anyway.
--
Robins Tharakan
On 26 June 2013 06:34, Andres Freund <andres@2ndquadrant.com> wrote:
Hi,

I am generally a bit unsure whether the regression tests you propose
aren't a bit too verbose. Does any of the committers have an opinion
about this?
My feeling is that they are ok if they aren't slowing down things much.

On 2013-06-26 01:55:53 -0500, Robins Tharakan wrote:
> The CREATE DATABASE test itself was checking whether the 'CONNECTION LIMIT'
> was working. Removed that as well.

You should be able to test that by setting the connection limit to 1 and
then trying to connect using \c. The old connection is only dropped after
the new one has been successfully established.
Greetings,
Andres Freund
--
Andres Freund http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
On 26 June 2013 01:55, Robins Tharakan <tharakan@gmail.com> wrote:
Code coverage improved from 36% to 68%.
Updated patch:
- Renamed ROLEs as per Robert's feedback (prepend regress_xxx)
- Added test to serial_schedule (missed out earlier).
--
Robins Tharakan
Attachment
On Sun, Jul 7, 2013 at 10:35 AM, Robins Tharakan <tharakan@gmail.com> wrote:
> On 26 June 2013 01:55, Robins Tharakan <tharakan@gmail.com> wrote:
>> Code coverage improved from 36% to 68%.
>
> Updated patch:
> - Renamed ROLEs as per Robert's feedback (prepend regress_xxx)
> - Added test to serial_schedule (missed out earlier).

Databases - like roles - are global objects, so those should be similarly
prefixed with "regress".

This is very dangerous:

+DROP DATABASE db_db2; -- doesn't exist
+DROP DATABASE IF EXISTS db_db2; -- doesn't exist with IF EXISTS;
+DROP DATABASE template1; -- can't drop a template database
+DROP DATABASE regression; -- can't drop a database in use

I think we should drop the first three of these.  The first one risks
dropping actual user data in the "make installcheck" case, as does the
second one.  We can reduce the risk of death and destruction by choosing
a better database name, but I don't think it's really worth it.  If DROP
DATABASE gets broken, we'll notice that soon enough.  I don't think the
third one is a good test, either.  There's no requirement that the user
keep template1 around, although nearly everyone probably does.

Please see if you can't also do something to minimize the number of
create/drop role cycles this patch performs - like maybe to just one.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
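A shape that would address both review points above might look like this (a sketch only; the names are hypothetical, not taken from any posted patch):

```sql
-- One role, created once and reused across all permission checks:
CREATE ROLE regress_dbcmd_user;
-- Global objects carry the "regress" prefix, so "make installcheck"
-- cannot collide with a real user's database:
CREATE DATABASE regress_dbcmd_db OWNER regress_dbcmd_user;
-- (the various ALTER DATABASE tests against regress_dbcmd_db go here)
DROP DATABASE regression;        -- the one kept failure case: database in use
DROP DATABASE regress_dbcmd_db;
DROP ROLE regress_dbcmd_user;
```

Only the fourth of the original "dangerous" DROP tests survives, since failing to drop the in-use regression database cannot destroy anything.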