In the image below you see four strange yet mesmerizing patterns. Some areas of these patterns are extremely bright, while others are rather dark. The bright areas seem to follow well-defined lines and curves, but these lines and curves don’t seem to follow any obvious logic like a circle or a spiral. The patterns have something “organic” about them, and seen from a distance they look like star dust in the night sky. What makes these patterns even stranger is the fact that they arise from rather simple systems of equations. The structures these systems trace out are called strange attractors, and they are among the most fascinating objects in all of mathematics. In this tutorial, you will learn the basic theory behind these structures and how you can use R to search for them and produce some truly wonderful images.
Practically all of the theory and code that follow are based on Julien Sprott’s book Strange Attractors: Creating Patterns in Chaos. The book is a very good primer on the fundamentals of chaos theory and on how to use computers to search for strange attractors. Unfortunately, all of the book’s code examples are written in BASIC, a language which virtually nobody uses today. This is why I decided to write this tutorial. Should you find any errors in the code or theory below, they are mine and not Julien Sprott’s.
Consider the following two equations: \[x_{t+1} = r \cdot x_t \tag{1} \] \[x_{t+1} = r \cdot x_t \cdot (1 - x_t) \tag{2} \] Given a particular starting value \(x_0\) and a parameter \(r\), these two equations tell you how the variable \(x\) behaves over time \(t\). Let’s set \(r=2\) and have a look. The two figures below show the trajectories for three starting values \(x_0 = \{.1,.2,.3\}\). Unsurprisingly, the trajectories of equation (1) follow a pattern of exponential growth, and the greater \(x_0\), the greater \(x_t\) after a given number of iterations. Conversely, the trajectories of equation (2) all converge to a fixed point or steady state of 0.5. The starting values only determine how quickly the trajectories reach that steady state.
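If you want to reproduce these two trajectory plots yourself, a minimal sketch could look like the following. The helper functions expGrowth and logGrowth are hypothetical names of my own; they simply iterate equations (1) and (2) (the logEq function defined further below does the same job for equation (2)).
# sketch: iterate equations (1) and (2) for r = 2 and x0 = .1, .2, .3
expGrowth <- function(x0, r, n){ x <- x0; for(i in 2:n) x[i] <- r*x[i-1]; x }
logGrowth <- function(x0, r, n){ x <- x0; for(i in 2:n) x[i] <- r*x[i-1]*(1-x[i-1]); x }
starts <- c(.1,.2,.3)
par(mfrow=c(1,2))
matplot(sapply(starts, expGrowth, r=2, n=10), type="l", lty=1,
        xlab="t", ylab="x", main="equation (1)")
matplot(sapply(starts, logGrowth, r=2, n=10), type="l", lty=1,
        xlab="t", ylab="x", main="equation (2)")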
Now what would happen if we were to increase \(r\)? Needless to say, the values of \(x\) in equation (1) would simply grow quicker. The image would still be one of explosive growth. But what happens with equation (2)? Well, let’s have a look at the next figure! If we increase \(r\) to 2.8, the system still converges to a steady state, which is now a little higher than the initial 0.5. However, before doing so, the system “overshoots” this steady state and then “wiggles” around it for a while before finally settling down. If we increase \(r\) to 3.2, the system never settles down on a single steady state. Instead, the system constantly switches between two points (roughly 0.80 and 0.51). If \(r=3.5\) or \(r=3.8\), the system shows similarly restless behavior, except that it now jumps back and forth between more than two values. At first these patterns might appear random. But remember that the system is entirely deterministic, and as long as \(r \leq 4\), the system is also bounded, i.e., \(x\) does not grow arbitrarily large or small. If \(r=4.1\), the system suddenly explodes to minus infinity.
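To reproduce this figure, you can reuse the hypothetical logGrowth helper from the sketch above and simply loop over the growth rates discussed here:
# sketch: logistic trajectories for increasing growth rates
rates <- c(2.8, 3.2, 3.5, 3.8)
par(mfrow=c(2,2))
for(r in rates) plot(logGrowth(x0=.1, r=r, n=50), type="l",
                     xlab="t", ylab="x", main=paste("r =", r))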
As we have just seen, our second equation, which is called the logistic equation, produces some strange patterns for \(2\leq r \leq4\). Let’s study this phenomenon more rigorously. To do so, we will look at a fine grid of values for \(r\) between 2 and 4 and examine on which values of \(x\) the system eventually settles. To this end, we will iterate the equation 500 times and only retain the last 50 values, which we then plot for each value of \(r\).
# function to run logistic equation
logEq <- function(x0=.1, r=2, n=10){
  # x0: starting value
  # r: growth rate parameter
  # n: number of iterations
  x <- rep(NA,n)
  x[1] <- x0
  for(i in 2:n) x[i] <- r*x[i-1]*(1-x[i-1])
  return(x)
}
# plot the system's final values for different values of r
r <- seq(2,4,.001)
x <- sapply(r,logEq,x0=.1,n=500)[451:500,]
plot(as.numeric(x)~rep(r,each=nrow(x)),
     pch=".", xlab="r", ylab="x")
The resulting bifurcation diagram, also called a Feigenbaum diagram, looks incredibly weird. At first, i.e., for \(r < 3\), there is a simple upward-sloping curve. At around \(r=3\) the curve splits in two. Just short of \(r = 3.5\) these two curves split again. And somewhere after \(r=3.6\), the image starts to look quite bizarre. Let’s start at the left of the diagram, where the image is a simple curve. This curve means that our system converges to a steady state. If \(r = 2\), this steady state is equal to 0.5. If we increase \(r\), this steady state also increases. This is why the curve rises. However, as we have seen before, if \(r\) grows large enough, the system develops a cyclic behavior. Instead of reaching a unique steady state, the system switches back and forth between two particular values. This is why the initial curve in the bifurcation diagram splits in two at around \(r=3\). The system now settles on two points instead of a single one. These two points grow further apart as \(r\) increases. Once \(r\) is large enough, somewhere before 3.5, the system starts to settle on four different points. If \(r\) grows even further, the system becomes increasingly chaotic and constantly switches between many different points which are, however, all between 0 and 1. We are now in the chaotic region. The system is entirely deterministic, i.e., it is not random, but it does not reach a fixed point or a stable cyclic solution either, nor does the system explode. The system is chaotic.
Notice, however, that within all the chaos in the Feigenbaum diagram, there are small non-chaotic interludes. If you look closely, you can see that somewhere around \(r=3.8\) the chaos is intersected by a white vertical stripe. Here, the system settles back into a stable cyclic solution, before plunging right back into chaos. In fact, there are more such instances. To mathematically distinguish between chaos and order, we can use the Lyapunov exponent.
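To see one of these non-chaotic windows more clearly, we can redraw the bifurcation diagram on a finer grid around it. The stripe near \(r=3.8\) is the well-known period-3 window, which opens at \(r = 1 + \sqrt{8} \approx 3.83\). The sketch below simply reuses the logEq function defined above.
# zoom into the period-3 window that opens at r = 1 + sqrt(8)
r <- seq(3.8, 3.9, 1e-4)
x <- sapply(r,logEq,x0=.1,n=500)[451:500,]
plot(as.numeric(x)~rep(r,each=nrow(x)),
     pch=".", xlab="r", ylab="x")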
\[ L = \frac{1}{T} \sum_{t=1}^{T} \log_2 \left| r \cdot (1 - 2x_t) \right| \]
If this exponent is greater than zero, the system behaves chaotically. Let’s compute this exponent for different growth rates.
# function to compute lyapunov exponent
lyapunov <- function(x,r) sum(log2(abs(r*(1-2*x))))/length(x)
# plot lyapunov exponent over different growth rates
r <- seq(2,4,.001)
x <- sapply(r,logEq,x0=.1,n=500)
l <- rep(NA,length(r))
for(i in 1:length(l)) l[i] <- lyapunov(x=x[,i],r=r[i])
plot(l ~ r,
     type="l",
     ylim=c(-1,1),
     ylab="Lyapunov exponent")
abline(h = 0, col="red")
As we can see, the Lyapunov exponent is less than zero until we reach a value just above 3.5. Thereafter, i.e., for greater values of \(r\), the system generally behaves chaotically. However, from time to time, i.e., for certain values of \(r\), the Lyapunov exponent drops below zero, indicating that the system is again non-chaotic.
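As a quick sanity check, you can also print the exponent for a few individual growth rates; negative values indicate order, positive values chaos. This is just a small sketch of my own, reusing the logEq and lyapunov functions defined above.
# sign of the lyapunov exponent for a few selected growth rates
for(ri in c(2.8, 3.2, 3.5, 3.8)){
  x <- logEq(x0=.1, r=ri, n=500)
  cat("r =", ri, " L =", round(lyapunov(x, ri), 3), "\n")
}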
So far we have only concerned ourselves with a single variable \(x\), and we have already found some very interesting types of chaotic behavior. But things get a whole lot more interesting when we consider more than one variable. So let’s move from a single equation to a two-dimensional system of equations with two variables \(x\) and \(y\). In particular, we will focus on what are called two-dimensional quadratic maps, which in their most general form look like this:
\[ x_{t+1} = \alpha_1 + \alpha_2 x_t + \alpha_3 x^2_t + \alpha_4 x_t y_t + \alpha_5 y_t + \alpha_6 y^2_t \tag{3a} \] \[ y_{t+1} = \alpha_7 + \alpha_8 x_t + \alpha_9 x^2_t + \alpha_{10} x_t y_t + \alpha_{11} y_t + \alpha_{12} y^2_t \tag{3b} \]
The two variables \(x\) and \(y\) depend on their previous-period values, both in levels and squared, as well as on the product of the previous-period \(x\) and \(y\). Let’s write a function that takes a starting point \((x_0,y_0)\) and traces its position through \(n\) iterations. Moreover, we will also write a function to plot these points. In this function, we will include an option to drop the first batch of points, during which the trajectory might not yet have settled onto the attractor, i.e., the region that, once reached, all future points keep wandering around in.
# function to run two-dimensional system
eqSys2d <- function(a=rep(0,12), n=1e5, p0=rep(1e-6,2)){
  # matrix to store points
  p <- matrix(NA,ncol=2,nrow=n)
  # starting point
  p[1,] <- p0
  # iterations
  for(i in 2:n){
    x <- p[i-1,1]
    y <- p[i-1,2]
    p[i,1] <- a[1] + a[2]*x + a[3]*x^2 + a[4]*x*y + a[5]*y + a[6]*y^2
    p[i,2] <- a[7] + a[8]*x + a[9]*x^2 + a[10]*x*y + a[11]*y + a[12]*y^2
  }
  # return
  return(list("parameters"=a,"start"=p0,"points"=p))
}
# function to plot two-dimensional system
plotSys2d <- function(p, dropfirst=TRUE,
                      pch=".", cex=1,
                      color="white"){
  # drop initial observations (roughly the first tenth of the points)
  if(dropfirst) p <- p[((nrow(p) %/% 9) : nrow(p)), ]
  par(mar=rep(1,4), bg="black", bty="n")
  plot(p, cex=cex,
       pch=pch, col=color,
       xlab="", ylab="",
       xaxt="n", yaxt="n")
}
# example
eqSys2d(a=1:12, n=10)
## $parameters
## [1] 1 2 3 4 5 6 7 8 9 10 11 12
##
## $start
## [1] 1e-06 1e-06
##
## $points
## [,1] [,2]
## [1,] 1.000000e-06 1.000000e-06
## [2,] 1.000007e+00 7.000019e+00
## [3,] 3.630020e+02 7.590043e+02
## [4,] 4.958439e+06 1.086544e+07
## [5,] 9.976083e+14 2.176726e+15
## [6,] 4.010057e+31 8.752985e+31
## [7,] 6.483300e+64 1.415102e+65
## [8,] 1.694588e+131 3.698767e+131
## [9,] 1.157717e+264 2.526940e+264
## [10,] Inf Inf
Unsurprisingly, setting \(\alpha_1\) through \(\alpha_{12}\) equal to 1 through 12 causes the system to explode, as indicated by the rapidly increasing values in the output, which quickly surpass R’s range of representable numbers. So plotting this attractor would throw us an error (or at least a warning). The question now is which values for the 12 coefficients produce strange attractors …
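We can confirm this directly (a small sketch of my own, not from the book): wrapping the plotting call in try() lets the error surface without halting the script.
# the non-finite values make plot() fail, so wrap the call in try()
try(plotSys2d(eqSys2d(a=1:12, n=10)$points))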
One of the most prominent strange attractors based on a quadratic map is the so-called Hénon map, named after the French mathematician Michel Hénon. The Hénon map is typically written as \[ x_{t+1} = 1 - a x^2_t + y_t \] \[ y_{t+1} = b x_t \] with \(a=1.4\) and \(b=0.3\). Another well-known chaotic quadratic map is the Tinkerbell map, which is commonly written as \[ x_{t+1} = ax_t + by_t + x_t^2 - y_t^2 \] \[ y_{t+1} = cx_t + dy_t + 2 x_ty_t \] with \(a=0.9\), \(b=-0.6013\), \(c=2\) and \(d=0.5\). Now, all we need to do is to substitute these coefficients into equation (3) and we are ready to plot the first of many hypnotic images. Since we programmed a coloring option into our plot function, we can even produce plots in different colors.
# henon map: x' = 1 - 1.4*x^2 + y, y' = 0.3*x
henon <- eqSys2d(a=c(1,0,-1.4,0,1,0,0,.3,0,0,0,0),n=1e4)
# tinkerbell map: x' = 0.9*x + x^2 - 0.6013*y - y^2, y' = 2*x + 2*x*y + 0.5*y
tinkerbell <- eqSys2d(a=c(0,.9,1,0,-.6013,-1,0,2,0,2,.5,0),n=1e5)
# plots
par(mfrow=c(2,2))
plotSys2d(henon$points)
plotSys2d(tinkerbell$points)
# plots with color
plotSys2d(henon$points, color="maroon")
plotSys2d(tinkerbell$points, color="limegreen")
We are now ready to find our own strange attractors. The following code snippet searches and plots ten strange attractors. It does so by checking whether random combinations of our twelve coefficients constitute strange attractors.
n_attractors <- 10
counter <- 0
while(TRUE){
  # run new system of equations for 10,000 steps
  a <- runif(12,-1,1)
  p <- eqSys2d(a=a, n=1e4)$points
  # check if all points are unique
  if(nrow(p) != nrow(unique(p))) next
  # check if trajectory does not explode
  if(any(is.infinite(p))) next
  if(any(is.nan(p))) next
  # check if lyapunov exponent is positive
  d <- rep(NA,nrow(p)-1)
  for(i in 2:nrow(p)) d[i-1] <- dist(p[(i-1):i,])
  l <- sum(log2(d[-1]/d[-length(d)]))/nrow(p)
  if(is.na(l) | l <= 0) next
  # run again with more steps and plot
  attractor <- eqSys2d(a=a, n=1e5)
  plotSys2d(attractor$points)
  # breaking condition
  counter <- counter + 1
  if(counter == n_attractors) break
}
rm(a,p,d,l)
Every time you run this code you will get a whole new set of strange attractors. Some might look a little boring, others will produce the most fascinating patterns. Here are some examples:
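One practical note (my own addition, not from the book): since the coefficients are drawn at random, fixing R’s random number generator before running the search loop makes a particular batch of attractors reproducible. You can also hold on to the coefficients of a favourite attractor to re-plot it later with more points.
# fix the RNG seed so the same attractors are found on every run (seed is arbitrary)
set.seed(123)
# after the search, attractor$parameters holds the coefficients of the last find,
# which can be re-plotted with a longer trajectory:
# plotSys2d(eqSys2d(a=attractor$parameters, n=1e6)$points)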
The Hénon map and the Tinkerbell map are both situated in two-dimensional space. Many other famous strange attractors are, however, three-dimensional. Moreover, they are often formulated as differential equations: \[ dx/dt = f(x,y,z) \] \[ dy/dt = g(x,y,z) \] \[ dz/dt = h(x,y,z) \] If we stick with quadratic right-hand sides, we only need to make a few small changes to our existing functions to also simulate and visualize these three-dimensional attractors:
# function to run three-dimensional system
eqSys3d <- function(a=rep(0,30), n=1e5, p0=rep(1e-5,3), dt=1e-3){
  # matrix to store points
  p <- matrix(NA,ncol=3,nrow=n)
  # starting point
  p[1,] <- p0
  # Euler integration steps
  for(i in 2:n){
    x <- p[i-1,1]
    y <- p[i-1,2]
    z <- p[i-1,3]
    dx <- a[1] +a[2]*x +a[3]*y +a[4]*z +a[5]*x^2 +a[6]*y^2 +a[7]*z^2 +a[8]*x*y +a[9]*x*z+a[10]*y*z
    dy <- a[11]+a[12]*x+a[13]*y+a[14]*z+a[15]*x^2+a[16]*y^2+a[17]*z^2+a[18]*x*y+a[19]*x*z+a[20]*y*z
    dz <- a[21]+a[22]*x+a[23]*y+a[24]*z+a[25]*x^2+a[26]*y^2+a[27]*z^2+a[28]*x*y+a[29]*x*z+a[30]*y*z
    p[i,1] <- p[i-1,1] + dt*dx
    p[i,2] <- p[i-1,2] + dt*dy
    p[i,3] <- p[i-1,3] + dt*dz
  }
  # return
  return(list("parameters"=a,"start"=p0,"points"=p))
}
# function to plot three-dimensional systems
library(plot3D)
plotSys3d <- function(p, dropfirst=TRUE,
                      pch=".",
                      color=rainbow(nrow(p)),
                      theta=20, phi=10){
  # drop initial observations
  if(dropfirst) p <- p[((nrow(p) %/% 9) : nrow(p)), ]
  # plot
  par(bg = "black", bty="n")
  scatter3D(p[,1],p[,2],p[,3],
            pch=pch, bty="n",
            colkey=FALSE, axes=FALSE,
            theta=theta, phi=phi,
            col=color
            )
}
With these tools at hand, we are now able to visualize some of the most famous attractors of all, including the butterfly-like Lorenz attractor and the Rössler attractor.
# lorenz attractor
lorenz <- eqSys3d(a=c(0, -10, 10, 0, 0, 0, 0, 0, 0, 0,
0, 28, -1, 0, 0, 0, 0, 0, -1, 0,
0, 0, 0, -8/3, 0, 0, 0, 1, 0, 0))
plotSys3d(lorenz$points)
# roessler attractor
roessler <- eqSys3d(a=c(0, 0, -1, -1, 0, 0, 0, 0, 0, 0,
0, 1, 0.2, 0, 0, 0, 0, 0, 0, 0,
0.2, 0, 0, -5.8, 0, 0, 0, 0, 1, 0))
plotSys3d(roessler$points, color="goldenrod2")