As performance in theoretical tests is *no guarantee*, but *only an indicator*, of what we may expect in practice, empirical testing of pseudorandom number generators is an absolute necessity. A *popular misconception* is to equate testing pseudorandom numbers with testing ``randomness''. The latter term is undefined in statistics: no random number generator is ``more random'' than any other. We propose to forget about the misleading term ``randomness'' and to concentrate upon the original purpose of pseudorandom number generation: to obtain reliable results in stochastic simulation. No pseudorandom number generator is appropriate for all tasks. As a consequence, we shall try to identify statistical tests that are similar to our simulation problem. If a generator passes these tests, we may expect ``good'' simulation results from it. For our notion of ``goodness'', see Wegenkittl (1995). Certain statistical tests have proven their relevance for a large number of problems encountered in practice.

As a first example of such a test, we would like to check whether the bits in the binary representation of pseudorandom numbers simulate realizations of independent random variables, equidistributed on the set {0,1}. The following test design is due to Leeb (1995), who has also contributed Figures 3 and 4. From the binary representation of every coordinate of a nonoverlapping *s*-tuple we take a block of *l* digits that starts at the *k*-th digit. We perform this operation on *s*-tuples. This procedure yields a quantity that simulates the upper tail probability of a χ²-distributed random variable; under the null hypothesis, this tail probability is an equidistributed random variable on [0,1[. In a second step, we compute this value for 64 distinct consecutive samples of size *N*. We compare the empirical distribution of these 64 numbers to the uniform distribution by means of a two-sided Kolmogorov-Smirnov test statistic. The distribution of the KS statistic is known; to a significance level of 0.01 there corresponds a critical region. The following figures illustrate the results for the Ansi-C generator, see Figure 3, and for ICG(, see Figure 4. The parameters are *s*=4, and .
Large values of the KS statistic have been truncated to keep the graphics in scale. The ICG is clearly superior; the LCG performs poorly.
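The first level of such a bit-block test can be sketched as follows. This is a minimal illustration with hypothetical helper names, not Leeb's exact design; the bit-extraction formula and the Pearson statistic are standard, but the sample sizes and parameters are placeholders.

```python
# Sketch of the first-level statistic of a bit-block test
# (helper names are ours; Leeb (1995) may differ in details).
import random

def bit_block(u, k, l):
    """Bits k..k+l-1 of the binary expansion of u in [0,1[, as an integer."""
    return int(u * 2 ** (k + l - 1)) % (2 ** l)

def chi2_stat(blocks, cells):
    """Pearson chi-square statistic for equidistribution on {0,...,cells-1}."""
    counts = [0] * cells
    for b in blocks:
        counts[b] += 1
    expected = len(blocks) / cells
    return sum((c - expected) ** 2 / expected for c in counts)

# One first-level sample: N numbers, blocks of l bits starting at digit k.
k, l, N = 1, 4, 1024
blocks = [bit_block(random.random(), k, l) for _ in range(N)]
stat = chi2_stat(blocks, 2 ** l)  # approx. chi-square with 2**l - 1 d.f.
```

The second level of the test would convert `stat` into its upper tail probability and compare 64 such values with the uniform distribution via a KS statistic.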

Many models in stochastic simulation have the form f(x_n, ..., x_{n+M-1}), where f is some given function and the x_i are independent, identically *U*([0,1[)-distributed random variables. As the *M*-tuples overlap, the usual χ²-test cannot be applied. This approach leads to the important *overlapping M-tuple test* of Marsaglia (1985).
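The distinction between overlapping and nonoverlapping tuples can be made concrete in a few lines. The function names below are ours, purely for illustration; only the disjoint tuples would feed an ordinary χ²-test.

```python
# Minimal illustration of overlapping vs. nonoverlapping M-tuples.
def overlapping_tuples(xs, M):
    """Tuples (x_n, ..., x_{n+M-1}) for n = 0, 1, ...;
    neighbouring tuples share M-1 components, so they are not independent."""
    return [tuple(xs[i:i + M]) for i in range(len(xs) - M + 1)]

def nonoverlapping_tuples(xs, M):
    """Disjoint tuples; these would satisfy the independence assumption
    of the usual chi-square test."""
    return [tuple(xs[i:i + M]) for i in range(0, len(xs) - M + 1, M)]
```

The dependence between neighbouring overlapping tuples is exactly what requires the modified distribution theory of the M-tuple test.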

The M-tuple test is a stringent test. Wegenkittl (1995) gave a detailed proof of the distribution of its test statistic that is missing in Marsaglia's paper. The following test design and Figures 5 and 6 stem from Wegenkittl (1995); it is an application of the M-tuple test. From every component of an overlapping 5-tuple of pseudorandom numbers in [0,1[, we take the first four bits of its binary representation. Then, for a given sample size *N*, we compute 32 values of the (theoretically equidistributed) upper tail probability of the M-tuple test. In the following figures, the sample size ranges between and . In Figure 5, we plot the 32 values of this test statistic. The resulting patterns should be irregular: if, for a given sample size *N*, the corresponding box is either totally white or totally black, the generator has failed miserably. For example, the Fishman and Moore LCG begins to break down from onwards. In Figure 6, we show the result of a two-sided KS-test applied to these 32 values. Values of the KS-test statistic greater than the critical value 1.59, which corresponds to the significance level of 1%, are shown in dark grey. We compare the following PRN generators:

EICG(, short ``EICG1''

EICG(, short ``EICG7''

ICG(, short ``ICG"

LCG(, short ``FISH''

LCG(, short ``ANSIC''

LCG(, short ``MINSTND''

LCG(, short ``RANDU''

``FISH'' was recommended by Fishman and Moore (1986) because of its excellent lattice structure. ``ANSIC'' is the Ansi-C system generator; the call `rand(0)` is equivalent to our initialization. ``MINSTND'' is the ``minimal standard'' generator of Park and Miller (1988), where ``RANDU'' is also discussed. The latter is an unlucky product of IBM.
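The second-level KS comparison used in Figure 6 is a standard construction; a minimal sketch of the statistic against the uniform null distribution might look as follows (the batch size 32 and the critical value 1.59 are taken from the text above).

```python
import math

def ks_statistic(sample):
    """Two-sided KS statistic sqrt(N) * sup_x |F_N(x) - x| against U([0,1[)."""
    xs = sorted(sample)
    n = len(xs)
    # Standard one-sided deviations of the empirical CDF from the identity.
    d_plus = max((i + 1) / n - x for i, x in enumerate(xs))
    d_minus = max(x - i / n for i, x in enumerate(xs))
    return math.sqrt(n) * max(d_plus, d_minus)

# A batch of 32 tail probabilities would be flagged (dark grey in Figure 6)
# when ks_statistic(batch) exceeds the 1% critical value 1.59.
```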

Figure 5: The Distribution of the Upper Tail Probabilities of the M-Tuple Test Statistic

Figure 6: The Values of the KS-Test Statistic

Our third example is the *run test*. Since Knuth (1981), the run test has proven to be a very reliable test for detecting correlations within a sample of pseudorandom numbers. We refer to Fishman and Moore (1982) for empirical results concerning LCGs.

Entacher (1995a, 1995b) has studied the *runs up statistic*. For a given sample of size *N*, 100 values of this asymptotically χ²-distributed quantity have been generated. In a second step, a two-sided KS-test was applied to these values to check the goodness of fit. The following figures show the results for the minimal standard generator and for EICG(). The two horizontal lines represent the critical values of the KS test statistic that correspond to the significance levels 0.05 and 0.01. The sample size ranges from to . We have considered the *subsequence* . Similar results hold for other LCGs and EICGs; see Entacher (1995b). Apparently, large samples from the LCG have an increasing tendency to fall into the critical region.
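The raw run-length counts underlying the runs-up statistic can be tallied as follows. This is a minimal sketch under our own naming; Knuth's full runs-up statistic combines these counts in a covariance-weighted quadratic form, which we omit here.

```python
def runs_up_counts(xs, max_len=6):
    """Counts of maximal ascending runs by length;
    runs of length >= max_len are pooled into the last cell (Knuth's convention)."""
    counts = [0] * max_len
    run = 1
    for a, b in zip(xs, xs[1:]):
        if b > a:
            run += 1          # the ascending run continues
        else:
            counts[min(run, max_len) - 1] += 1  # run ends; record its length
            run = 1
    counts[min(run, max_len) - 1] += 1          # record the final run
    return counts
```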

As a final remark in this section, we would like to draw the reader's attention to the fact that all our tests are *two-level tests* in the sense of L'Ecuyer (1992) and that we have *varied* the sample size. This careful test design is not as common in the published literature on pseudorandom number generation as one would wish.