Brownian motion, or pedesis, is the random motion of particles suspended in a medium (a liquid or a gas). The pattern of motion typically consists of random fluctuations in a particle's position. It is named after the botanist Robert Brown, who first described the phenomenon in 1827 while looking through a microscope at pollen of the plant Clarkia pulchella immersed in water.
Here we are going to define Brownian motion and develop its basic properties. For us, the most important properties of Brownian motion are:
Brownian motion is a martingale.
It accumulates quadratic variation at rate one per unit time.
If you haven't heard of terms like martingale or quadratic variation, don't worry: they are quite simple, and we will learn them along the way.
To construct a Brownian motion, we start with a symmetric random walk. To construct a symmetric random walk, we repeatedly toss a fair coin. We denote the successive outcomes of the tosses by \(\omega = \omega_1\omega_2\omega_3...\). In other words, this is an infinite sequence of tosses, where \(\omega_n\) is the outcome of the n-th toss. Let \[\begin{equation} X_j = \begin{cases} +1 & \text{if $\omega_j$ = H} \\ -1 & \text{if $\omega_j$ = T} \\ \end{cases} \end{equation}\] Define \(M_0=0\), and \[M_k = \sum_{j=1}^{k}X_j\qquad \text{for } k = 1,2,3,...\] The process \(M_k\) is called a symmetric random walk.
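Before going further, it may help to see one. Here is a minimal simulation of a symmetric random walk in Python (a sketch, assuming numpy and matplotlib are installed; the seed and the number of tosses are arbitrary choices):

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)

k = 100                                   # number of coin tosses
X = rng.choice([-1, 1], size=k)           # X_j: +1 for heads, -1 for tails
M = np.concatenate(([0], np.cumsum(X)))   # M_0 = 0, M_k = X_1 + ... + X_k

plt.step(range(k + 1), M, where="post")
plt.xlabel("step $k$")
plt.ylabel("$M_k$")
plt.show()
```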
Random walks have independent increments. Now, what does "independent increments" mean?
Mathematically: if we choose non-negative integers \(0 = k_0 < k_1 < k_2 <... < k_m\), the random variables \[(M_{k_1}- M_{k_0}), (M_{k_2}- M_{k_1}),... , (M_{k_m}- M_{k_{m-1}})\]
are independent. The general form of these random variables, \[M_{k_{i+1}}- M_{k_{i}} = \sum_{j = k_{i}+1}^{k_{i+1}}X_j,\] is nothing but the increment of the random walk \(M_k\) between steps \(k_i\) and \(k_{i+1}\). Increments over non-overlapping time intervals are independent, because they are generated from different (and independent) coin tosses.
\[\begin{aligned} E(M_{k_{i+1}}- M_{k_{i}}) &= E(\sum_{j = k_{i}+1}^{k_{i+1}}X_j) \\ &= \sum_{j = k_{i}+1}^{k_{i+1}}E(X_j) = 0\\ \end{aligned}\]
\[\begin{aligned} Var(M_{k_{i+1}}- M_{k_{i}}) &= Var(\sum_{j = k_{i}+1}^{k_{i+1}}X_j) \\ &= \sum_{j = k_{i}+1}^{k_{i+1}}Var(X_j) \\ &= \sum_{j = k_{i}+1}^{k_{i+1}}1 = k_{i+1} - k_{i}\\ \end{aligned}\]
So the expected increment of the symmetric random walk is 0, and its variance accumulates at rate one per unit time: the increment over an interval of \(k_{i+1}-k_i\) steps has variance exactly \(k_{i+1}-k_i\).
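A quick Monte Carlo sanity check of these two facts (a sketch, assuming numpy; the indices \(k_i = 10\) and \(k_{i+1} = 35\) are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n_paths, k_i, k_i1 = 100_000, 10, 35       # arbitrary choice of steps

X = rng.choice([-1, 1], size=(n_paths, k_i1))
M = np.cumsum(X, axis=1)                   # M_1, ..., M_{k_{i+1}} on each path

inc = M[:, k_i1 - 1] - M[:, k_i - 1]       # the increment M_{k_{i+1}} - M_{k_i}
print(inc.mean())                          # close to 0
print(inc.var())                           # close to k_{i+1} - k_i = 25
```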
Next, we show that the symmetric random walk is a martingale. But first, what is a martingale?
Martingale: In probability theory, a martingale is a sequence of random variables (i.e., a stochastic process) for which, at a particular time, the conditional expectation of the next value in the sequence equals the present value, regardless of all previous values.
Let \(\{X_n\}\) be a sequence of random variables, \(n = 1,2,3,...\). If for all values of n \[\begin{aligned} & E(|X_n|) < \infty \\ & E(X_{n+1}|X_n,X_{n-1},X_{n-2},...) = X_n \end{aligned}\] then \(\{X_n\}\) is called a discrete-time martingale.
Filtration: Let \(\Omega\) be a nonempty set. Let \(T\) be a fixed positive number, and assume that for each \(t \in [0, T]\) there is a \(\sigma\)-algebra \(F(t)\). Assume further that if \(s \leq t\), then every set in \(F(s)\) is also in \(F(t)\). Then we call the collection of \(\sigma\)-algebras \(F(t)\), \(0 \leq t \leq T\), a filtration.
Now we prove that the symmetric random walk is a martingale. Suppose \(k \leq l\) are two non-negative integers. Then \[\begin{aligned} E[M_l|F_k] &= E[M_l-M_k+M_k|F_k] \\ &= E[M_l-M_k|F_k] + E[M_k|F_k] \\ &= E[M_l-M_k] + M_k = 0 + M_k = M_k\\ \end{aligned}\] Here \(E[M_l-M_k|F_k] = E[M_l-M_k] = 0\), because the increment \(M_l-M_k\) is independent of \(F_k\) and has zero expectation; and \(E[M_k|F_k] = M_k\), because \(M_k\) is already known at time \(k\) (it is \(F_k\)-measurable).
Intuitively: we know the value of the walk at time \(k\), namely \(M_k\). For any later time \(l > k\), the value \(M_l\) varies symmetrically around the known value \(M_k\), so on average the walk goes nowhere, and our best prediction of \(M_l\) given the information at time \(k\) is \(M_k\) itself.
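We can check this martingale property numerically as well: simulate many paths, group them by the observed value of \(M_k\), and average \(M_l\) within each group. A minimal sketch, assuming numpy, with the arbitrary choices \(k = 5\) and \(l = 20\):

```python
import numpy as np

rng = np.random.default_rng(1)
n_paths, k, l = 500_000, 5, 20

X = rng.choice([-1, 1], size=(n_paths, l))
M = np.cumsum(X, axis=1)
M_k, M_l = M[:, k - 1], M[:, l - 1]

for v in (-3, -1, 1, 3):                   # possible values of M_5 (odd integers)
    print(v, M_l[M_k == v].mean())         # E[M_l | M_k = v] comes out close to v
```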
The quadratic variation of the random walk up to time \(k\) is defined as \[[M,M]_k =\sum_{j = 1}^{k}(M_{j}-M_{j-1})^2\]
Since \(M\) is a symmetric random walk, we must have \((M_{j}-M_{j-1})^2 = X_j^2 = 1\) for every \(j\). So
\[[M,M]_k =\sum_{j = 1}^{k}(M_{j}-M_{j-1})^2 = k\] Note that the quadratic variation is computed along a single path. Also notice that \[[M,M]_k = k = Var(M_k)\] Still, the computations of variance and quadratic variation are quite different. When calculating the variance we take probabilities into account: if the building blocks \(X_j\) of the random walk had \(P[X_j = 1] = p \neq \frac{1}{2}\), then \(Var(M_k)\) would change.
But in the case of quadratic variation, even if \(P[X_j = 1] = p \neq \frac{1}{2}\), we always have \((M_j- M_{j-1})^2 = X_j^2 = 1\), so \([M,M]_k = k\) on every path.
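This contrast is easy to see numerically: with a biased coin, the variance of \(M_k\) drops below \(k\), while the quadratic variation is still exactly \(k\) on every path (a sketch, assuming numpy; the bias \(p = 0.7\) is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
n_paths, k, p = 100_000, 1000, 0.7         # biased coin: P[X_j = +1] = 0.7

X = rng.choice([1, -1], size=(n_paths, k), p=[p, 1 - p])
qv = (X**2).sum(axis=1)                    # [M,M]_k, computed path by path

print(qv.min(), qv.max())                  # exactly k = 1000 on every path
print(X.sum(axis=1).var())                 # Var(M_k) ~ 4p(1-p)k = 840, not k
```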
To approximate a Brownian motion, we speed up time and scale down the step size of the symmetric random walk.
Now what is meant by “speed up time” and “scale down the step size”?
Suppose we have constructed a symmetric random walk, starting from zero, using 100 of the \(X_j\)'s defined before.
In this case the step size is \(|X_j| = 1\). Now we increase the number of steps from 100 to 400, i.e. 400 coin tosses, but decrease the step size from 1 to \(\frac{1}{2}\) (I shall explain later why I have taken step size \(\frac{1}{2}\)). This gives a scaled random walk. Mathematically, the scaled symmetric random walk is \[W^{(n)}(t) = \frac{1}{\sqrt{n}}M_{nt} = \frac{1}{\sqrt{n}}\sum_{i=1}^{nt}X_i\] provided \(nt\) is an integer, with the \(X_i\) as before.
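As a sketch of this construction (reading the example above as \(n = 4\) on the time interval \([0, 100]\), so that 400 tosses of step size \(\frac{1}{\sqrt{4}} = \frac{1}{2}\) are used; numpy assumed):

```python
import numpy as np

def scaled_walk(n, T, rng):
    """W^(n)(t) = M_{nt} / sqrt(n), on the time grid t = 0, 1/n, 2/n, ..., T."""
    X = rng.choice([-1, 1], size=int(n * T))
    W = np.concatenate(([0.0], np.cumsum(X) / np.sqrt(n)))
    t = np.arange(n * T + 1) / n
    return t, W

rng = np.random.default_rng(3)
t, W = scaled_walk(n=4, T=100, rng=rng)    # 400 tosses, each step of size 1/2
```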
Then for \(0= t_0 < t_1 < t_2 <... <t_m\), such that each \(nt_j\) is an integer, \[(W^{(n)}(t_1)-W^{(n)}(t_0)), (W^{(n)}(t_2)-W^{(n)}(t_1)),..., (W^{(n)}(t_m)-W^{(n)}(t_{m-1}))\] are independent, because these random variables depend on different coin tosses.
Expectation and variance:
Let \(0\leq s \leq t\) such that ns and nt are integers. \[\begin{aligned} E[W^{(n)}(t)-W^{(n)}(s)] &= \frac{1}{\sqrt{n}}E[M_{nt}-M_{ns}] \\ &= \frac{1}{\sqrt{n}}E[\sum_{j=ns+1}^{nt}X_j] = \frac{1}{\sqrt{n}}\sum_{j=ns+1}^{nt}E[X_j]\\ &= \frac{1}{\sqrt{n}}\sum_{j=ns+1}^{nt}0 = 0 \end{aligned}\]
\[\begin{aligned} Var(W^{(n)}(t)-W^{(n)}(s)) &= \frac{1}{n}Var[M_{nt}-M_{ns}] \\ &= \frac{1}{n}Var[\sum_{j=ns+1}^{nt}X_j] = \frac{1}{n}\sum_{j=ns+1}^{nt}Var[X_j]\\ &= \frac{1}{n}\sum_{j=ns+1}^{nt}1 = \frac{1}{n}n(t-s) = t-s \end{aligned}\]
We can write \(W^{(n)}(t) = [W^{(n)}(t)-W^{(n)}(s)] + [W^{(n)}(s)-W^{(n)}(0)]\). These two increments are independent of each other (as explained before). Let \(F_s\) be the \(\sigma\)-algebra of information available at time \(s\). Then
\[\begin{aligned} E[W^{(n)}(t)|F_s] &= E[(W^{(n)}(t)-W^{(n)}(s)) + W^{(n)}(s)| F_s] \\ &= E[(W^{(n)}(t)-W^{(n)}(s))|F_s] + E[W^{(n)}(s)| F_s] \\ &= 0 + W^{(n)}(s) = W^{(n)}(s) \end{aligned}\]
\(E[(W^{(n)}(t)-W^{(n)}(s))|F_s] = 0\), because the increment \(W^{(n)}(t)-W^{(n)}(s)\) is independent of \(F_s\) and has expectation zero. So the scaled symmetric random walk is also a martingale.
Quadratic variation of scaled symmetric random walk:
\[\begin{aligned} {[W^{(n)},W^{(n)}]}(t) &= \sum_{j=1}^{nt}[W^{(n)}(\frac{j}{n})-W^{(n)}(\frac{j-1}{n})]^2 \\ &= \sum_{j=1}^{nt}[\frac{1}{\sqrt{n}}X_j]^2 \\ &= \sum_{j=1}^{nt}[\frac{1}{n}1] = t \end{aligned}\]
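Unlike the variance, this is exact on every path, not just on average: each squared increment is \((\frac{1}{\sqrt{n}}X_j)^2 = \frac{1}{n}\), and there are \(nt\) of them. A quick check (a sketch, assuming numpy; \(n = 100\) and \(t = 0.25\) are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
n, t = 100, 0.25
X = rng.choice([-1, 1], size=int(n * t))
W = np.concatenate(([0.0], np.cumsum(X) / np.sqrt(n)))

print(np.sum(np.diff(W) ** 2))             # exactly nt * (1/n) = t = 0.25
```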
The step diagram below shows the simulated distribution of \(W^{(100)}(0.25)\). It can be approximated by a normal distribution, drawn as the blue line.
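The figure can be reproduced with a short simulation (a sketch, assuming numpy and matplotlib): draw \(W^{(100)}(0.25)\) many times, histogram the outcomes, and overlay the \(N(0, 0.25)\) density.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(5)
n, t, n_sims = 100, 0.25, 200_000

X = rng.choice([-1, 1], size=(n_sims, int(n * t)))
W_t = X.sum(axis=1) / np.sqrt(n)           # samples of W^(100)(0.25)

# One bin per attainable value (odd multiples of 0.1, spaced 0.2 apart).
bins = np.arange(-2.6, 2.7, 0.2)
plt.hist(W_t, bins=bins, density=True, histtype="step")

xs = np.linspace(-2, 2, 400)
plt.plot(xs, np.exp(-xs**2 / (2 * t)) / np.sqrt(2 * np.pi * t))  # N(0, t) pdf
plt.show()
```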
Theorem (Central limit): Fix \(t \geq 0\). As \(n \rightarrow \infty\), the distribution of the scaled random walk \(W^{(n)}(t)\) evaluated at time \(t\) converges to the normal distribution with mean 0 and variance \(t\).
Proof: Here is a very useful trick. Whenever you have to prove that two distributions are the same, a standard way is via moment generating functions (MGFs), if they exist, or via characteristic functions.
The MGF of \(X \sim N(\mu,\sigma^2)\) is- \[M_{X}(u) = \exp({u\mu+\frac{1}{2}u^2\sigma^2})\]
In this case- \(\mu = 0\) and \(\sigma^2 = t\). Hence, \(M_X(u) = \exp(\frac{1}{2}u^2t)\).
Now, if \(t\) is such that \(nt\) is an integer, the MGF of \(W^{(n)}(t)\) is \[\begin{aligned} \phi_n(u) &= E[\exp(uW^{(n)}(t))] = E[\exp(\tfrac{u}{\sqrt{n}}M_{nt})]\\ &= E[\exp(\tfrac{u}{\sqrt{n}}\sum_{j=1}^{nt}X_j)] \\ &= E[\prod_{j=1}^{nt}\exp(\tfrac{u}{\sqrt{n}}X_j)] = \prod_{j=1}^{nt}E[\exp(\tfrac{u}{\sqrt{n}}X_j)] \qquad \text{due to independence} \\ &= \Big(\frac{1}{2}e^{-\frac{u}{\sqrt{n}}}+\frac{1}{2}e^{\frac{u}{\sqrt{n}}}\Big)^{nt} \end{aligned}\]
Now, taking the logarithm of both sides, we get \[\log(\phi_n(u)) = nt \cdot \log\Big(\frac{1}{2}e^{-\frac{u}{\sqrt{n}}}+\frac{1}{2}e^{\frac{u}{\sqrt{n}}}\Big)\]
As \(n \rightarrow \infty\), \(\frac{u}{\sqrt{n}} \rightarrow 0\). Substituting \(x = \frac{1}{\sqrt{n}}\), so that \(x \downarrow 0\) as \(n \rightarrow \infty\) and \(nt = \frac{t}{x^2}\), we have
\[\begin{aligned} & \lim_{n \to \infty} \log\phi_n(u) = t \cdot \lim_{x \downarrow 0} \frac{\log(\frac{1}{2}e^{-ux}+\frac{1}{2}e^{ux})}{x^2} \qquad \text{[$\frac{0}{0}$ form]} \\ \end{aligned}\]
Applying L'Hôpital's rule (differentiating numerator and denominator with respect to \(x\)), we get
\[\begin{aligned} \lim_{n \rightarrow \infty} \log\phi_n(u) &= t \cdot \lim_{x \downarrow 0} \frac{-\frac{u}{2}e^{-ux}+\frac{u}{2}e^{ux}}{2x\,(\frac{1}{2}e^{-ux}+\frac{1}{2}e^{ux})}\\ &= \frac{t}{2} \cdot \lim_{x \downarrow 0} \frac{-\frac{u}{2}e^{-ux}+\frac{u}{2}e^{ux}}{x} \qquad \text{[the factor $\frac{1}{2}e^{-ux}+\frac{1}{2}e^{ux} \rightarrow 1$]} \\ \end{aligned}\] This is still of the \(\frac{0}{0}\) form, so we apply L'Hôpital's rule again:
\[\begin{aligned} \lim_{n \rightarrow \infty} \log\phi_n(u) &= \frac{t}{2} \cdot \lim_{x \downarrow 0} \Big(\frac{u^2}{2}e^{-ux}+\frac{u^2}{2}e^{ux}\Big) = \frac{u^2t}{2} \end{aligned}\] Hence we have \[\lim_{n \rightarrow \infty} \phi_n(u) = e^{\frac{1}{2}u^2t},\] which is the moment generating function of \(N(0,t)\), as desired.
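We can also watch this convergence numerically. Since \(\frac{1}{2}e^{-u/\sqrt{n}}+\frac{1}{2}e^{u/\sqrt{n}} = \cosh(\frac{u}{\sqrt{n}})\), the MGF \(\phi_n(u)\) is easy to evaluate directly (a sketch, assuming numpy; \(u = 1.5\) and \(t = 2\) are arbitrary):

```python
import numpy as np

def phi_n(u, n, t):
    """MGF of W^(n)(t): (cosh(u / sqrt(n)))^(nt), valid when nt is an integer."""
    return np.cosh(u / np.sqrt(n)) ** (n * t)

u, t = 1.5, 2.0
print(np.exp(0.5 * u**2 * t))              # the limit: MGF of N(0, t), ~ 9.4877
for n in (1, 10, 100, 10_000):
    print(n, phi_n(u, n, t))               # approaches the limit as n grows
```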
Alternative: There is also a much simpler route. We know \[ e^x+e^{-x} = 2+2\frac{x^2}{2!}+2\frac{x^4}{4!}+...\] So we can write
\[\begin{aligned} \lim_{n \to \infty} \log\phi_n(u) &= t \cdot \lim_{x \downarrow 0} \frac{\log(\frac{1}{2}e^{-ux}+\frac{1}{2}e^{ux})}{x^2} \\ &= t \cdot \lim_{x \downarrow 0} \frac{\log(1+\frac{u^2x^2}{2!}+\frac{u^4x^4}{4!}+...)}{x^2} \\ &= t \cdot \lim_{x \downarrow 0} \frac{\log(1+\frac{1}{2}u^2x^2)}{x^2} \qquad \text{[the higher-order terms are $o(x^2)$]} \\ &= t \cdot \lim_{y \downarrow 0} \frac{\log(1+\frac{1}{2}u^2y)}{y} \qquad \text{[substituting $y = x^2$]} \\ &= \frac{1}{2}u^2t \qquad \text{[since $\log(1+z)/z \rightarrow 1$ as $z \rightarrow 0$]} \end{aligned}\]
Hence we have- \[\lim_{n \rightarrow \infty} \phi_n(u) = e^{\frac{1}{2}u^2t}\]
From here we will gradually work towards the connection between Brownian motion, stochastic processes, and finance.