Fabien COELHO <coelho@cri.ensmp.fr> writes:
>> I'm not following this argument. The test case is basically useless
>> for its intended purpose with that parameter, because it's highly
>> likely that the failure mode it's supposedly checking for will be
>> masked by the "random" function's tendency to spit out the same
>> value all the time.
> The first value is taken about 75% of the time for N=1000 and s=2.5, which
> means that a non-deterministic implementation would succeed about 0.75² ~
> 56% of the time on that one.
Right, that's about what we've been seeing on OpenBSD.
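
For reference, here's a quick standalone check of those figures (just a
back-of-envelope sketch, not pgbench code): it computes the mass of the most
likely value, 1/H(N,s) with H(N,s) = sum over k = 1..N of 1/k^s, plus the
probability that two independent Zipfian draws coincide, sum over k of p_k^2.
With N = 1000 and s = 2.5 it prints roughly 0.75 and 0.58, consistent with the
75%/56% above (squaring only the first term ignores collisions on the other
ranks, hence the small difference).

#include <math.h>
#include <stdio.h>
#include <stdlib.h>

int
main(int argc, char **argv)
{
	int			n = 1000;			/* number of distinct values */
	double		s = (argc > 1) ? atof(argv[1]) : 2.5;	/* Zipfian parameter */
	double		h = 0.0;			/* generalized harmonic number H(n, s) */
	double		collide = 0.0;		/* P(two independent draws are equal) */

	for (int k = 1; k <= n; k++)
		h += pow((double) k, -s);

	for (int k = 1; k <= n; k++)
	{
		double		p = pow((double) k, -s) / h;	/* mass of rank k */

		collide += p * p;
	}

	printf("s = %.2f: P(rank 1) = %.3f, P(two draws equal) = %.3f\n",
		   s, 1.0 / h, collide);
	return 0;
}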
> Also, the drawing procedure is less efficient when the parameter is close
> to 1 because it is more likely to loop,
That might be something to fix, but I agree it's a reason not to go
overboard trying to flatten the test case's distribution right now.
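
(For context, the s > 1 draw is, as far as I recall, along the lines of
Devroye's rejection method for the Zipf distribution; a rough standalone
sketch follows, not the actual pgbench code. Both the acceptance test and the
bound check fail more often as s approaches 1, which is where the extra
looping comes from.)

#include <math.h>
#include <stdio.h>
#include <stdlib.h>

/* placeholder uniform source in (0, 1); pgbench uses its own PRNG */
static double
uniform01(void)
{
	return (rand() + 1.0) / ((double) RAND_MAX + 2.0);
}

/* one bounded-Zipfian draw by rejection (Devroye); requires s > 1, n >= 1 */
static long
zipfian_draw(long n, double s)
{
	double		b = pow(2.0, s - 1.0);

	for (;;)
	{
		double		u = uniform01();
		double		v = uniform01();
		double		x = floor(pow(u, -1.0 / (s - 1.0)));	/* candidate rank */
		double		t = pow(1.0 + 1.0 / x, s - 1.0);

		/*
		 * Accept per Devroye's test, and also reject candidates above n.
		 * Both rejections become more frequent as s approaches 1, so the
		 * loop runs longer there.
		 */
		if (v * x * (t - 1.0) / (b - 1.0) <= t / b && x <= (double) n)
			return (long) x;
	}
}

int
main(void)
{
	for (int i = 0; i < 10; i++)
		printf("%ld\n", zipfian_draw(1000, 2.5));
	return 0;
}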
> If you want something more drastic, using 1.5 instead of 2.5 would reduce
> the probability of accidentally passing the test by chance to about 20%, so
> it would fail 80% of the time.
I think your math is off; 1.5 works quite well here. I saw one failure
to produce distinct values in 20 attempts. It's not demonstrably slower
than 2.5 either. (1.1 is measurably slower; probably not by enough for
anyone to care, but 1.5 is good enough for me.)
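(For reference, plugging s = 1.5 into the little check upthread gives a rank-1
mass of about 0.39 and a two-draw collision probability of about 0.19; how
that translates into an observed failure rate depends on how many values the
test actually compares.)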
regards, tom lane