On Wed, Aug 12, 2015 at 06:46:19PM +0100, Greg Stark wrote:
> On Wed, Aug 12, 2015 at 3:10 AM, Noah Misch <noah@leadboat.com> wrote:
> > Committers press authors to delete tests more often than we press them to
> > resubmit with more tests. No wonder so many patches have insufficient tests;
> > we treat those patches more favorably, on average. I have no objective
> > principles for determining whether a test is pointlessly redundant, but I
> > think the principles should become roughly 10x more permissive than the
> > (unspecified) ones we've been using.
>
> I would suggest the metric should be "if this test fails is it more
> likely to be noise due to an intentional change in behaviour or more
> likely to be a bug?"

When I've just spent a while implementing a behavior change, the test diffs are
a comforting sight: they confirm that the test suite exercises the topic I
just changed. Furthermore, most tests today do not qualify under the
stringent metric you suggest. The nature of pg_regress opposes it.
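To illustrate why: pg_regress compares raw psql output against a checked-in
expected file, so any intentional change to that output surfaces as a test
diff whether or not a bug is involved. A minimal sketch (file names and
contents are illustrative, not from our tree):

    -- sql/arith.sql
    SELECT 2 + 2 AS sum;

    -- expected/arith.out
    SELECT 2 + 2 AS sum;
     sum
    -----
       4
    (1 row)

Change anything about how that result is computed or displayed, and the diff
appears; the harness cannot tell an intended change from a bug.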

I sometimes hear a myth that tests catch the bugs their authors anticipated.
We have tests like that (opr_sanity comes to mind), but much test-induced bug
discovery is serendipitous. To give a recent example, Peter Eisentraut didn't
write src/bin tests to reveal the bug that led to commit d73d14c.