uniform floating point random number generator - summary
Clark O. Morgan
morgan at ogicse.cse.ogi.edu
Thu Jan 31 05:17:28 AEST 1991
A few days ago I requested help on generating uniform random numbers
in the range (0..1] on a BSD 4.2 system. Here is a summary of the help
I received and the method I finally chose.
Note: the information described here is also directly applicable to
BSD 4.3. This method could also be adapted for use on SYSV, but I
don't have access to such an environment and so cannot provide a
specific implementation.
Note: in the description below, I will not quote any contributors by
name, since I have neither asked for nor received their permission to
do so.
To begin with, it was pointed out to me that I had misstated the desired
range of the random number generator. It should be [0..1). Oops :-).
Several people sent me references for random number generation.
1. P. L'Ecuyer, "Efficient and portable combined random number
generators", Communications of the ACM 31 (1988) 742-749.
2. P. Bratley, B.L. Fox and L.E. Schrage, "A guide to simulation",
(Springer-Verlag, New York, NY, 1987).
3. "Efficient and Portable Combined Random Number Generators",
Pierre L'Ecuyer, Communications of the ACM, Vol. 31, No. 6 (June 1989).
I suspect the 1st and 3rd references are the same; it's probable that
one of the cited dates is incorrect.
I can also add the reference for the algorithm that I mentioned
when I posted my query (which overflows when implemented with unsigned
arithmetic in C):
4. J. Banks and J. S. Carson, "Discrete-Event System Simulation",
Prentice-Hall, 1984, ISBN 0-13-215582-6, Chapter 7, Example 7.12,
pages 266-267. Chapter 7 of this book describes random number
generation in detail and the methods used to test generators for
uniformity and independence.
Continuing, one individual told me to check out Knuth's books and
another advised me to consult the Numerical Recipes in
C/Pascal/Fortran books. A third individual sent me his implementation
of an algorithm cited in reference #1 above. If you'd like a copy of
his implementation, drop me a note.
Finally, two individuals told me to use the generator provided by the
"random()" library function. At first, I discounted these suggestions
because random() returns integers. But I carefully re-read reference
4 above (specifically example 7.12) and immediately noticed that
if random() returns uniform integers in a _known_ range, then it
can be used to generate uniform floating point values in the range
[0..1) with one arithmetic operation (division). So here's the
method I chose to use:
/*
 * Return a "uniformly" distributed double precision random
 * number in the range [0..1).
 *
 * CAVEAT: only works on a 32-bit architecture.
 */
double
drandom()
{
	return (((double) random()) / ((unsigned) 0x80000000));
}
The key to note here is that random() returns integers in the range
0 to 2^31-1 on MY MACHINE (your mileage WILL vary). To obtain the
desired floating point values, it was only necessary to divide
random()'s result by 2^31 (0x80000000).
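One nit: the (unsigned) 0x80000000 cast is what ties the function to
32-bit machines. If you'd rather not depend on the width of unsigned,
you can spell the divisor out as a floating point constant instead
(this still assumes random() returns values in [0, 2^31-1], of course):

```c
#include <stdlib.h>   /* declares random(); declare it as long yourself on 4.2 */

/*
 * Same mapping as drandom() above, but 2^31 is written as a
 * double constant so no integer type ever needs to hold it.
 */
double
drandom2()
{
	return random() / 2147483648.0;
}
```

The integer result is promoted to double before the division, so the
quotient is always in [0..1) regardless of the machine's word size.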
But does it work (is the result uniform)? That, of course, depends
upon whether or not random() is uniform. I asked a statistician how
to test the results. She said, "There are many methods. Three of
the simplest are:
- generate a large sample size, bin the results, and look at the
distribution. If the bins look skewed, you probably have problems.
If the bins look uniformly distributed, verify the results with one
of the following two tests.
- Chi-squared goodness of fit test.
- Kolmogorov-Smirnov test."
So, I generated 10,000 samples and binned the results:
Bin Total
0: 504
1: 504
2: 514
3: 495
4: 515
5: 489
6: 478
7: 511
8: 469
9: 521
10: 493
11: 525
12: 498
13: 477
14: 532
15: 479
16: 473
17: 520
18: 491
19: 512
As far as I was concerned, this was plenty uniform for my purposes.
I did not pursue any other tests.