On 06/29/2013 03:57 PM, Josh Berkus wrote:
>> I see two problems with this report:
>> 1. it creates a new installation for each run,
> Yes, I'm running "make check"
>
>> 2. it only uses the serial schedule.
> Um, no:
>
> parallel group (19 tests): limit prepare copy2 plancache xml returning
> conversion rowtypes largeobject temp truncate polymorphism with
> without_oid sequence domain rangefuncs alter_table plpgsql
>
> Out of curiosity, I tried a serial run (MAX_CONNECTIONS=1), which took
> about 39s (with patches).
>
>> I care more about the parallel schedule than the serial one: since it
>> obviously runs in less time, I can run it often and not worry about
>> how long it takes. On the other hand, the cost of the extra initdb
>> obviously means that the percentage is a bit lower than if you were
>> to compare test run times without considering the initdb step.
> Possibly, but I know I run "make check" not "make installcheck" when I'm
> testing new code. And the buildfarm, afaik, runs "make check". And,
> for that matter, who the heck cares?
It runs both :-) We run "make check" early in the process to make sure
we can at least get that far, and "make installcheck" later, among
other things, to check that the tests work in different locales.
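
For reference, the invocations under discussion look roughly like this
(a sketch; MAX_CONNECTIONS is the knob Josh used for his serial run):

    # Temporary installation: initdb, start a throwaway server, and run
    # the parallel schedule against it.
    make check

    # Same, but cap concurrency; MAX_CONNECTIONS=1 effectively gives a
    # serial run.
    make check MAX_CONNECTIONS=1

    # Run the tests against an already-installed, running server, which
    # is what picks up that server's locale settings.
    make installcheck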
I think we need to have a better understanding of just what our standard
regression tests do.
AIUI: they do test feature use, along with errors that have cropped up
in the past that we need to guard against. They don't test every bug
we've ever had, nor do they exercise every piece of code.
Maybe there is a good case for covering those last two in a different
set of tests.
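
If we did grow such a set, the mechanics would be the same as for the
existing tests: a script, its expected output, and a schedule entry,
roughly like this (the "bughistory" name here is made up):

    src/test/regress/sql/bughistory.sql        # the SQL to run
    src/test/regress/expected/bughistory.out   # its expected output

    # plus an entry in serial_schedule and/or parallel_schedule:
    test: bughistory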
cheers
andrew