Multiplayer gaming, at its core, fosters replayability through competition. However, one of my favourite examples of competitive gaming exists in the single-player roguelike Spelunky. In its Daily Challenge feature, players get a single attempt to set a high score in a randomly generated level which is available for one day. While randomised, the level and its inhabitants are identical for all players.

Wanting to include such a feature in one of my own projects, the first thing I needed was a consistent source of randomisation. While there are plenty of nice-looking pseudo-random number generators (PRNGs) available in JavaScript, I recently came across an interesting concept while reading Complexity: A Guided Tour and instead sought to exercise some theory.

The logistic map is a well-known model in dynamical systems theory. One of its uses is to make predictions about population levels over time. Given a starting population, and taking into account birth and death rates, you can iterate the model to get an estimated population size at each step.

```
function logisticMap (birthAndDeathRate, populationSize) {
  return birthAndDeathRate * populationSize * (1 - populationSize)
}
```

`populationSize`

A number between 0 and 1 representing the size of the population as a ratio of maximum capacity, i.e. the *fraction of carrying capacity* for the environment the population is growing in.

`birthAndDeathRate`

A positive number: the combined rate of reproduction and starvation for the population.

The below demonstrates how modifying the `birthAndDeathRate` (R) affects the `populationSize` (X) over time (T).

- With low values of R, the birthrate is not sufficient to sustain the population which quickly falls to 0.
- As R increases to more fruitful levels you will see the population stabilise.
- As R exceeds 3, overcrowding causes instability and you begin to see oscillations in the population levels. These oscillations have short and regular intervals.
- Approaching R = 4, these oscillations vary tremendously in their regularity.
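These regimes are easy to reproduce. Below is a small sketch (the `iterate` helper is my own, not from the original code) that runs the verbose logistic map at a few values of R:

```
function logisticMap (birthAndDeathRate, populationSize) {
  return birthAndDeathRate * populationSize * (1 - populationSize)
}

// Run the map for a number of steps, feeding each result back in as the next x.
function iterate (r, x, steps) {
  const values = []
  for (let i = 0; i < steps; i++) {
    x = logisticMap(r, x)
    values.push(x)
  }
  return values
}

iterate(0.5, 0.1, 50).pop() // collapses towards 0
iterate(2.8, 0.1, 50).pop() // settles near the fixed point 1 - 1/R ≈ 0.643
iterate(3.2, 0.1, 50)       // ends up oscillating between two values
iterate(4.0, 0.1, 50)       // chaotic, aperiodic behaviour
```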

As you might have guessed, R = 4 is looking like a pretty good candidate for our randomisation. To simplify the more verbose definition above, here is how the logistic map will look when tailored for my own purposes:

```
function logisticMap (x) {
  return 4 * x * (1 - x)
}
```

To start generating some sequences, all that's necessary is to pass an initial value (our seed) to the `logisticMap` as `x` (previously the population size); that result is then passed to the next invocation, ad infinitum.
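Sketched in code (the `generateSequence` name is my own), that feedback loop looks like this:

```
function logisticMap (x) {
  return 4 * x * (1 - x)
}

// Build a sequence by repeatedly feeding each result back in as the next x.
function generateSequence (seed, length) {
  const sequence = []
  let x = seed
  for (let i = 0; i < length; i++) {
    x = logisticMap(x)
    sequence.push(x)
  }
  return sequence
}

generateSequence(0.1, 5) // first value: 4 * 0.1 * 0.9 ≈ 0.36
```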

An important factor to consider is how much the sequences produced by different seeds diverge from one another, irrespective of the seeds' numerical distance. That is, if you were to create two sequences of random numbers from similar seeds, ideally the sequences would not correlate with one another. This *sensitive dependence on initial conditions* is a hallmark of chaotic systems and is often referred to as the Butterfly Effect.
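This sensitivity is easy to demonstrate. Below, two seeds differing by one part in a million (seed values of my own choosing) are iterated side by side while tracking how far apart they drift:

```
function logisticMap (x) {
  return 4 * x * (1 - x)
}

// Two seeds differing by one part in a million.
let a = 0.1
let b = 0.100001
let maxDiff = 0

for (let i = 0; i < 50; i++) {
  a = logisticMap(a)
  b = logisticMap(b)
  maxDiff = Math.max(maxDiff, Math.abs(a - b))
}

// The tiny initial difference is amplified on each iteration until the
// two sequences bear no resemblance to one another.
console.log(maxDiff)
```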

As shown below, while two slightly different seeds initially follow a near-identical pattern, after a number of iterations they diverge and produce wildly differing results.

Graph demonstrating the onset of chaos

After around 30 iterations the two sequences diverge and bear little resemblance to one another. While this is likely a sufficient level of distinction, I'm going to prime my sequencer with 100 iterations for good measure.
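A primed sequencer along those lines might look like this (a sketch; `createSequencer` is my own naming):

```
function logisticMap (x) {
  return 4 * x * (1 - x)
}

// Returns a function producing the next value in the seeded sequence,
// after first burning through 100 priming iterations.
function createSequencer (seed) {
  let x = seed
  for (let i = 0; i < 100; i++) {
    x = logisticMap(x)
  }
  return function next () {
    x = logisticMap(x)
    return x
  }
}

const next = createSequencer(0.1)
next() // deterministic: the same seed always yields the same sequence
```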

For a quick and dirty implementation this will just about suffice. Using the above, I generated a landscape for a theoretical infinite-runner game. All the attributes of the city and its buildings are derived using the sequence generator, so each cityscape can be regenerated exactly from the original seed.

I think this approach to generating numbers does a fairly good job considering the simplicity of the logic involved. But just how random is it? After seeing an article by Random.org on the analysis of random numbers, I wanted to produce some images and see if I could find any visual artefacts which would suggest the existence of predictability (here's how I did it).

Random.org's TRNG

PHP's `rand()`

on Windows

Logistic map `x = 0.1`

Pretty good, right? Well, I was actually hoping for it to fail a little more dramatically. Unfortunately, failure in random number generation is generally much more subtle than the example provided by PHP. In the hope of seeing some visual evidence of predictability, I drew each pixel in greyscale to give the visualisation an extra dimension.

Logistic map greyscaled `x = 0.1`

A zoomed region

The first image is a little hard to read at the magnification used to show the errors in PHP's algorithm. Looking at the zoomed region, it's now fairly clear that very light pixels are almost always followed by very dark ones, and the lighter the initial pixel, the longer the tail of dark pixels. These chains of dark pixels often taper off into brighter shades, creating shadow-like artefacts. You may have noticed this effect back in the graph demonstrating the onset of chaos, where after almost reaching 1 the values plummet and then slowly climb, loosely resembling a knee in the line graph.
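The arithmetic behind these shadows is easy to verify: a value near 1 maps to a value near 0, and for small x the map behaves like 4x, so the climb back up is gradual.

```
function logisticMap (x) {
  return 4 * x * (1 - x)
}

// A very light pixel (x near 1) is always followed by a very dark one:
logisticMap(0.99) // 4 * 0.99 * 0.01 ≈ 0.0396

// ...and small values roughly quadruple each step, so the recovery is slow:
logisticMap(0.0396) // ≈ 0.152
```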

The nail in the coffin, however, is delivered by some analysis courtesy of Wolfram Alpha:

The first 20,000 iterates when `x = 0.1`

This shows that the vast majority of numbers generated fall in the top and bottom deciles. Looking back at both the graph demonstrating the onset of chaos and the greyscaled logistic map image, it is rather apparent that both instances favour their relative extremities.
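The same decile analysis can be reproduced directly (a sketch mirroring the Wolfram Alpha query, not code from the post):

```
function logisticMap (x) {
  return 4 * x * (1 - x)
}

// Bucket the first 20,000 iterates into deciles.
const counts = new Array(10).fill(0)
let x = 0.1
for (let i = 0; i < 20000; i++) {
  x = logisticMap(x)
  counts[Math.min(9, Math.floor(x * 10))]++
}

// The first and last buckets dominate: the map's invariant density,
// 1 / (π · √(x(1 − x))), spikes towards both ends of the interval.
console.log(counts)
```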

It goes without saying that, due to the above flaws, this arrangement of the logistic map in its current state isn't fit for any vaguely cryptographic application. Its use even in games is itself fairly questionable given the algorithm's *flavour*: a tendency to prefer the extremities over middle-ground values. This does make me curious, however: given the patterns evident in the sequences, could the flavour of this particular implementation be taken advantage of?

My earlier cityscape used a single seed to drive many aspects of its generation. I wonder, if multiple seeds were used for different attributes, would the flavour of each seed give an organic feel to the end product? Furthermore, my graphs and analysis have so far all used the same parameters (`birthAndDeathRate = 4` and `populationSize = 0.1`); would adjusting these yield alternative flavours?

Anyone seeking to formally quantify the facets of predictability should look at the barrage of tests created by NIST, which can be used to identify specific attributes of non-randomness.