The comment in the code for the random() function indicates that its
author thought it'd produce output in the range 0..1, which seems like
a pretty reasonable definition:
    /* result 0.0-1.0 */
    result = ((double) random()) / RAND_MAX;
Unfortunately, at least on my box, it produces no such thing. random()
actually yields values in the range 0..2^31-1, while RAND_MAX is only
32767, since RAND_MAX describes the output of rand(), not random().
So what I actually get is floating-point output in the range 0 to
about 65538.
regression=# select random();
random
------------------
35771.3981139561
(1 row)
regression=# select random();
random
------------------
58647.5821405683
(1 row)
This is, to say the least, a bizarre definition.
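To see the mismatch outside the backend, here's a trivial standalone
test (just an illustration of the C-library behavior, not proposed
code); it should print a RAND_MAX of 32767 next to random() values
that run up to 2^31-1:

    #include <stdio.h>
    #include <stdlib.h>
    #include <limits.h>

    int
    main(void)
    {
        /* RAND_MAX documents the range of rand(), not random() */
        printf("RAND_MAX = %d\n", RAND_MAX);
        /* random() returns long values in 0..2^31-1 on this platform */
        printf("random() = %ld\n", random());
        /* dividing by RAND_MAX, as the backend does, can overshoot 1.0 */
        printf("random()/RAND_MAX = %g\n", ((double) random()) / RAND_MAX);
        /* dividing by INT_MAX keeps the quotient within 0.0..1.0 */
        printf("random()/INT_MAX  = %g\n", ((double) random()) / INT_MAX);
        return 0;
    }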
I would like to propose changing the code to
    /* result 0.0-1.0 */
    result = ((double) random()) / INT_MAX;
(and making the corresponding change in setseed()). But I wonder if
anyone out there has applications that depend on the current behavior.
As far as I can find, random() isn't mentioned in the documentation
currently, so there probably aren't many people using it...
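For concreteness, the setseed() side of the change would presumably
wind up looking about like this (a sketch only: I'm quoting the fmgr
boilerplate from memory, so don't hold me to the details):

    Datum
    setseed(PG_FUNCTION_ARGS)
    {
        float8      seed = PG_GETARG_FLOAT8(0);
        /* scale the 0..1 seed by the same constant random() is divided
         * by (INT_MAX rather than RAND_MAX) so the two functions agree */
        int         iseed = (int) (seed * INT_MAX);

        srandom((unsigned int) iseed);

        PG_RETURN_INT32(iseed);
    }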
regards, tom lane