RANDU is a pseudorandom number generator from the early 1960s. It is a linear congruential generator, specifically a Lehmer random number generator.
Formula
RANDU has a very short formula. Starting from the seed value, each subsequent value is calculated as follows.
V_{j+1} = V_j × 65539 mod 2^31
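For example, with seed V_0 = 1:
V_1 = 1 × 65539 mod 2^31 = 65539
V_2 = 65539 × 65539 mod 2^31 = 4295360521 mod 2147483648 = 393225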
LCG parameters
As RANDU is a linear congruential generator (LCG), it can be defined by its LCG parameters.
RANDU = LCG(mod=2^31, mult=2^16 + 3, inc=0)
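A general LCG steps via V_{j+1} = (mult × V_j + inc) mod m, so with inc = 0 and mult = 2^16 + 3 = 65539 this reduces to exactly the formula above.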
Recommendations
Using RANDU in any new project is highly discouraged. Its flaws have been known for a very long time, and there are many alternatives that are superior in every way.
Below is a (short) list of valid reasons to use RANDU.
- Historical interest
- Educational; specifically, how to recognise a bad RNG
- Replicating old studies
- Re-creating old software that needs to have identical behaviour
Python implementation
Below is a simple implementation.
def RANDU(n):
    n *= 65539
    n %= 0x80000000  # 0x80000000 == 2**31
    return n
RANDU(1)
RANDU(RANDU(1))
RANDU(RANDU(RANDU(1)))
65539
393225
1769499
Or using the LCG parameters from above.
RANDU = LCG(2**31, 2**16 + 3, 0)
RANDU(1)
RANDU(RANDU(1))
RANDU(RANDU(RANDU(1)))
65539
393225
1769499
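The LCG helper used above, and the seed_urandom / next_float methods used throughout the analysis below, are not shown in this post; the plotting snippets additionally assume numpy (np), matplotlib.pyplot (plt), and a colormap cm are already in scope. A minimal sketch of such a helper, under those assumptions, could look like this:

import os

class LCG:
    # Minimal sketch of an LCG helper; not necessarily the original implementation.
    def __init__(self, mod, mult, inc):
        self.mod = mod
        self.mult = mult
        self.inc = inc
        self.state = 1

    def __call__(self, n):
        # Stateless step: advance an arbitrary value by one iteration.
        return (n * self.mult + self.inc) % self.mod

    def seed_urandom(self):
        # Seed the internal state from the OS entropy source,
        # forcing it odd since RANDU is traditionally seeded with an odd value.
        self.state = int.from_bytes(os.urandom(4), 'big') % self.mod | 1

    def next_float(self):
        # Advance the internal state and scale it into [0, 1).
        self.state = (self.state * self.mult + self.inc) % self.mod
        return self.state / self.mod

With this sketch, RANDU = LCG(2**31, 2**16 + 3, 0) behaves as in the examples above: RANDU(1) returns 65539.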
Analysis
In this section, I visually inspect the output of RANDU to see how obvious its flaws are, looking at 1-D, 2-D, and 3-D outputs in that order.
Due to the known flaws of RANDU, 3-D outputs should completely give away how broken it is: consecutive values satisfy V_{k+2} = 6 × V_{k+1} - 9 × V_k (mod 2^31), which forces every triple of consecutive outputs onto one of just 15 parallel planes in the unit cube. But let's go through all the steps and see if we can identify any signs earlier.
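A few lines of Python are enough to confirm that relation on the raw integer sequence:

# Check V_{k+2} == (6*V_{k+1} - 9*V_k) mod 2**31 for the first few RANDU values.
seq = [1]
for _ in range(10):
    seq.append(seq[-1] * 65539 % 2**31)
assert all(c == (6 * b - 9 * a) % 2**31
           for a, b, c in zip(seq, seq[1:], seq[2:]))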
1-D analysis
# Histogram of the sum of three floats, centred on zero; should look roughly bell-shaped.
vals = []
RANDU.seed_urandom()
for _ in range(2_000_000):
    x = 0
    x += RANDU.next_float()
    x += RANDU.next_float()
    x += RANDU.next_float()
    vals.append(x - 1.5)
_ = plt.hist(vals, bins=200)
# Raw floats rendered as a 1024x1024 image.
RES = 1024
RANDU.seed_urandom()
vals = [RANDU.next_float() for _ in range(RES * RES)]
vals = np.array(vals).reshape((RES, RES))
_ = plt.imshow(vals, cmap=cm)
# Bin 10 million floats into RES*RES buckets and show the hit counts as an image.
RES = 1024
vals = [0 for _ in range(RES * RES)]
RANDU.seed_urandom()
for _ in range(10_000_000):
    index = int(RANDU.next_float() * (RES * RES))
    vals[index] += 1
vals = np.array(vals).reshape((RES, RES))
_ = plt.imshow(vals, cmap=cm)
# Accumulate 200 passes of floats over a 512x512 raster and show the totals.
RES = 512
vals = [0 for _ in range(RES * RES)]
RANDU.seed_urandom()
for _ in range(200):
    for index in range(RES * RES):
        vals[index] += RANDU.next_float()
vals = np.array(vals).reshape((RES, RES))
_ = plt.imshow(vals, cmap=cm)
_ = plt.colorbar()
Those vertical lines are bad news.
2-D analysis
# 2-D binning: use consecutive output pairs as (x, y) coordinates and count hits.
RES = 256
vals = np.zeros((RES, RES))
RANDU.seed_urandom()
for _ in range(10_000_000):
    x = int(RANDU.next_float() * RES)
    y = int(RANDU.next_float() * RES)
    vals[x][y] += 1
_ = plt.imshow(vals, cmap=cm)
_ = plt.colorbar()
# Use one output to pick a bucket and the next output as the value added to it.
RES = 2**11
vals = [0 for _ in range(RES * RES)]
RANDU.seed_urandom()
for _ in range(5_000_000):
    index = int(RANDU.next_float() * (RES * RES))
    vals[index] += RANDU.next_float()
vals = np.array(vals).reshape((RES, RES))
_ = plt.imshow(vals, cmap=cm)
3-D analysis
This is where the whole thing breaks down.
# Consecutive triples: two outputs pick a pixel, the third is accumulated into it.
RES = 1024
values = np.zeros((RES, RES))
RANDU.seed_urandom()
for _ in range(10_000_000):
    x = RANDU.next_float()
    y = RANDU.next_float()
    x = int(x * RES)
    y = int(y * RES)
    values[y][x] += RANDU.next_float()
_ = plt.imshow(values, cmap=cm)
# 3-D scatter of 100,000 consecutive output triples, viewed at azim=0, elev=0.
x = []
y = []
z = []
RANDU.seed_urandom()
for i in range(100_000):
    x.append(RANDU.next_float())
    y.append(RANDU.next_float())
    z.append(RANDU.next_float())
fig = plt.figure()
ax = fig.add_subplot(projection='3d')
ax.azim = 0
ax.elev = 0
_ = ax.scatter(x, y, z, s=0.05)
# Same scatter rotated to azim=57, roughly the angle at which the 15 planes appear edge-on.
x = []
y = []
z = []
RANDU.seed_urandom()
for i in range(100_000):
    x.append(RANDU.next_float())
    y.append(RANDU.next_float())
    z.append(RANDU.next_float())
fig = plt.figure()
ax = fig.add_subplot(projection='3d')
ax.azim = 57
ax.elev = 0
_ = ax.scatter(x, y, z, s=0.05)