Till now we have seen Brownian motion as the limit of the scaled random walk \(W^{(n)}(t)\). Brownian motion inherits its properties from the random walk.
Definition: Let \((\Omega,\mathbb{F},\mathbb{P})\) be a probability space. For each \(\omega \in \Omega\), suppose there is a continuous function \(W(t)\) of \(t \geq 0\) that satisfies \(W(0) = 0\) and depends on \(\omega\).
Then \(W(t),\ t \geq 0\), is a Brownian motion if for every partition \(0 = t_0 < t_1 < \cdots < t_m\), the increments \[W(t_1)-W(t_0),\ W(t_2)-W(t_1),\ \cdots,\ W(t_m)-W(t_{m-1})\] are independent and each increment is normally distributed with
\[\begin{aligned} &E[W(t_{i+1})-W(t_i)] = 0 \\ &Var[W(t_{i+1})-W(t_i)] = t_{i+1}-t_i \qquad \text{for}\quad i = 0(1)(m-1) \\ \end{aligned}\]
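Before moving on, here is a minimal sketch of how a Brownian path can be simulated directly from this definition using numpy. The time horizon, step count, and seed below are my own illustrative choices, not part of the definition:

```python
import numpy as np

rng = np.random.default_rng(42)

T, n = 1.0, 1000                      # time horizon and number of steps
dt = T / n
t = np.linspace(0.0, T, n + 1)        # the grid 0 = t_0 < t_1 < ... < t_n = T

# Independent N(0, dt) increments over each subinterval, as in the definition.
dW = rng.normal(0.0, np.sqrt(dt), size=n)

# W(0) = 0, and W(t_k) is the cumulative sum of the increments up to t_k;
# the pair (t, W) traces one simulated path.
W = np.concatenate(([0.0], np.cumsum(dW)))
```

Because the increments are drawn independently with variance \(dt\), the simulated path satisfies the two moment conditions of the definition up to sampling error.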
But there is one difference between Brownian motion \(W(t)\) and the scaled random walk \(W^{(n)}(t)\): the increments of Brownian motion are exactly normally distributed, whereas the increments of the scaled random walk are only approximately normal.
In the definition above I started with \(\omega \in \Omega\). This \(\omega\) can be viewed in one of two ways.
First, \(\omega\) is the path of the Brownian motion itself. A random experiment is performed, and its outcome is the path of the Brownian motion. Then \(W(t)\) is the value of the path at time \(t\), and this value depends on the path, which results from the random experiment.
Secondly, we can think of \(\omega\) as something more primitive than the path itself: the outcome of a sequence of coin tosses, with the tosses performed infinitely fast. Once the tossing is done and an \(\omega\) is obtained, the path of the Brownian motion can be drawn. If we toss again, we will get a different \(\omega\), and the path will be different.
In either case, \(\Omega\) is the sample space, \(\mathbb{F}\) is the \(\sigma \text{-algebra}\) of subsets of \(\Omega\) whose probabilities are defined, and \(\mathbb{P}\) is the probability measure.
In the definition, for \(0 = t_0 < t_1 < \cdots < t_m\) the increments \[W(t_1)-W(t_0),\ W(t_2)-W(t_1),\ \cdots,\ W(t_m)-W(t_{m-1})\] are independent and each increment is normally distributed. Consequently, the random variables \(\{W(t_1), W(t_2), \cdots, W(t_m)\}\) are jointly normally distributed.
For any integer \(k\) with \(1 \leq k \leq m\), the telescoping sum gives \[\begin{aligned} &W(t_k) = [W(t_k)-W(t_{k-1})]+[W(t_{k-1})-W(t_{k-2})]+ \cdots + [W(t_1)-W(t_{0})] \\ \implies &E[W(t_k)] = E[W(t_k)-W(t_{k-1})]+E[W(t_{k-1})-W(t_{k-2})]+ \cdots + E[W(t_1)-W(t_{0})] \\ \implies &E[W(t_k)] = 0 \\ \end{aligned}\] Hence for any \(t > 0\) we have \(E[W(t)] = 0\).
In the definition of Brownian motion, if we take \(t_k = t\) and \(t_{k-1} = 0\), we have \(Var[W(t_{k})-W(t_{k-1})] = t_{k}-t_{k-1}\), i.e. \(Var[W(t)-W(0)] = Var[W(t)] = t\). Since \(E[W(t)] = 0\), this also gives \(E[W(t)^2] = t\) (to be used later).
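These two moment facts are easy to check by simulation. Here is a quick Monte Carlo sketch of the telescoping construction above; the partition and path count are arbitrary choices of mine:

```python
import numpy as np

rng = np.random.default_rng(1)
t_grid = np.array([0.0, 0.4, 1.1, 2.0])   # arbitrary partition with t_m = 2
n_paths = 500_000

# Build W(t_m) as the telescoping sum of independent normal increments.
dt = np.diff(t_grid)                       # t_k - t_{k-1} for each subinterval
W_tm = rng.normal(0.0, np.sqrt(dt), size=(n_paths, len(dt))).sum(axis=1)

print(W_tm.mean())        # should be near 0   = E[W(t_m)]
print((W_tm**2).mean())   # should be near 2.0 = Var[W(t_m)] = t_m
```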
For \(0 \leq s < t\), we can write \[\begin{aligned} E[W(t).W(s)] &= E[(W(t)-W(s)+W(s)).W(s)] \\ &= E[(W(t)-W(s)).W(s)]+ E[W(s)^2] \\ &= E[W(t)-W(s)].E[W(s)]+ E[W(s)^2] \qquad \text{since } W(t)-W(s) \text{ and } W(s) \text{ are independent} \\ &= 0 \cdot 0 + E[W(s)^2] = s = \min(t,s) \end{aligned}\]
Hence the covariance matrix of the Brownian motion, i.e. of \(\{W(t_1),W(t_2), \cdots, W(t_m) \}\), is \[\begin{aligned} \Sigma &= \begin{bmatrix} E[W^2(t_1)] & E[W(t_1)W(t_2)] & \cdots & E[W(t_1)W(t_m)] \\ E[W(t_1)W(t_2)] & E[W^2(t_2)] & \cdots & E[W(t_2)W(t_m)] \\ \vdots & \vdots & \ddots & \vdots \\ E[W(t_1)W(t_m)] & E[W(t_2)W(t_m)] & \cdots & E[W^2(t_m)] \\ \end{bmatrix} \\ &= \begin{bmatrix} t_1 & t_1 & \cdots & t_1 \\ t_1 & t_2 & \cdots & t_2 \\ \vdots & \vdots & \ddots & \vdots \\ t_1 & t_2 & \cdots & t_m \\ \end{bmatrix} = \left[\min(t_i,t_j)\right]_{i,j=1(1)m} \end{aligned}\]
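The \(\min(t_i,t_j)\) structure is easy to see numerically. Below is a small numpy sketch that builds \(\Sigma\) for a hypothetical partition of my choosing and cross-checks it against the empirical covariance of simulated paths:

```python
import numpy as np

t = np.array([0.5, 1.0, 2.0, 3.0])         # hypothetical partition points t_1..t_4

# Sigma[i, j] = min(t_i, t_j), exactly the matrix derived above.
Sigma = np.minimum.outer(t, t)
print(Sigma)

# Cross-check against the empirical covariance of simulated values.
rng = np.random.default_rng(2)
dt = np.diff(np.concatenate(([0.0], t)))   # increment lengths t_k - t_{k-1}
W = rng.normal(0.0, np.sqrt(dt), size=(200_000, len(t))).cumsum(axis=1)
print(np.cov(W, rowvar=False))             # should be close to Sigma
```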
Now we are going to find the distribution of the \(m\)-dimensional vector \(\{W(t_1),W(t_2), \cdots, W(t_m) \}\) using the same trick we used to find the limiting distribution of the scaled random walk: the moment generating function technique.
For \(\vec{u} = (u_1,u_2, \cdots, u_m) \in \mathbb{R}^m\), we can write- \[\begin{aligned} \vec{u}'\vec{W} &= \begin{bmatrix}u_1 & u_2 & \cdots & u_m \end{bmatrix}. \begin{bmatrix} W(t_1) \\ W(t_2) \\ . \\ . \\ . \\ W(t_m) \\ \end{bmatrix}= u_1.W(t_1)+u_2.W(t_2)+ \cdots +u_m.W(t_m) \\ &= u_m.W(t_m) + u_{m-1}.W(t_{m-1}) + \cdots + u_1.W(t_1) \\ &= u_m.(W(t_m)-W(t_{m-1}))+ (u_m+u_{m-1}).(W(t_{m-1})-W(t_{m-2}))+ \cdots + (u_m+u_{m-1}+ \cdots + u_1).W(t_1) \\ \end{aligned}\]
Hence the joint moment generating function of \(\{W(t_1),W(t_2), \cdots, W(t_m) \}\) can be written as
\[\begin{aligned} M_{\vec{W}}(\vec{u}) &= E[\exp\{u_m.W(t_m) + u_{m-1}.W(t_{m-1}) + \cdots + u_1.W(t_1)\}] \\ &= E[\exp\{u_m.(W(t_m)-W(t_{m-1}))+ (u_m+u_{m-1}).(W(t_{m-1})-W(t_{m-2}))+ \cdots + (u_m+u_{m-1}+ \cdots + u_1).W(t_1)\}] \\ &= E[\exp\{u_m.(W(t_m)-W(t_{m-1}))\}].E[\exp\{(u_m+u_{m-1}).(W(t_{m-1})-W(t_{m-2}))\}] \cdots E[\exp\{(u_m+u_{m-1}+ \cdots + u_1).W(t_1)\}] \\ \end{aligned}\]
This is due to the independence of increments of Brownian motion.
Each factor is the moment generating function of a normal random variable with mean \(0\); since \(E[e^{aZ}] = e^{a^2\sigma^2/2}\) for \(Z \sim N(0,\sigma^2)\), and the increment over \([t_{k-1}, t_k]\) has variance \(t_k - t_{k-1}\), we get \[\begin{aligned} M_{\vec{W}}(\vec{u}) &= E[\exp\{u_m.(W(t_m)-W(t_{m-1}))\}] \cdots E[\exp\{(u_m+u_{m-1}+ \cdots + u_1).W(t_1)\}] \\ &= \exp\{\frac{1}{2}u_m^2(t_m-t_{m-1})\}\cdots \exp\{\frac{1}{2}(u_m+u_{m-1}+\cdots+u_1)^2 t_1\} \end{aligned}\]
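This closed form can be sanity-checked by Monte Carlo. Here is a sketch, with a partition, a (deliberately small) vector \(\vec{u}\), and a path count that are all my own choices:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.array([0.5, 1.0, 2.0])              # partition 0 < t_1 < t_2 < t_3
u = np.array([0.3, -0.2, 0.1])             # a fixed u in R^3, kept small
n_paths = 1_000_000

# Simulate (W(t_1), W(t_2), W(t_3)) from independent normal increments.
dt = np.diff(np.concatenate(([0.0], t)))
W = rng.normal(0.0, np.sqrt(dt), size=(n_paths, len(t))).cumsum(axis=1)

# Monte Carlo estimate of the joint MGF at u.
mgf_mc = np.exp(W @ u).mean()

# Closed form from the derivation:
# product over k of exp(0.5 * (u_k + ... + u_m)^2 * (t_k - t_{k-1})).
tail_sums = np.cumsum(u[::-1])[::-1]       # (u_k + ... + u_m) for k = 1..m
mgf_exact = np.exp(0.5 * np.sum(tail_sums**2 * dt))

print(mgf_mc, mgf_exact)                   # the two should agree closely
```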
The distribution of the Brownian increments \(W(t_1)-W(t_0),\ W(t_2)-W(t_1),\ \cdots,\ W(t_m)-W(t_{m-1})\) can therefore be pinned down either through the joint density or through the joint moment generating function of the random variables \(\{W(t_1),W(t_2), \cdots, W(t_m) \}\).
Theorem: Let \((\Omega, \mathbb{F}, \mathbb{P})\) be a probability space. For each \(\omega \in \Omega\), suppose there is a continuous function \(W(t)\) for \(t \geq 0\) that satisfies \(W(0) = 0\) and depends on \(\omega\). The following three properties are equivalent.
(i) For all \(0 = t_0 \leq t_1 \leq \cdots \leq t_m\), the increments \[W(t_1) = W(t_1)-W(t_0),\ W(t_2)-W(t_1),\ \cdots,\ W(t_m)-W(t_{m-1})\] are independent and each increment is normally distributed with \[\begin{aligned} &E[W(t_{i+1})-W(t_i)] = 0 \\ &Var[W(t_{i+1})-W(t_i)] = t_{i+1}-t_i \qquad \text{for}\quad i = 0(1)(m-1)\\ \end{aligned}\]
(ii) For all \(0 = t_0 \leq t_1 \leq \cdots \leq t_m\), the random variables \(\{W(t_1),W(t_2), \cdots, W(t_m) \}\) are jointly normally distributed with \[\begin{aligned} &E[W(t_i)] = 0 \qquad \text{for}\quad i = 1(1)m\\ &\text{Covariance matrix } \Sigma = \begin{bmatrix} t_1 & t_1 & \cdots & t_1 \\ t_1 & t_2 & \cdots & t_2 \\ \vdots & \vdots & \ddots & \vdots \\ t_1 & t_2 & \cdots & t_m \\ \end{bmatrix} \end{aligned}\]
(iii) For all \(0 = t_0 \leq t_1 \leq \cdots \leq t_m\) and all \(\vec{u} \in \mathbb{R}^m\), the random variables \(\{W(t_1),W(t_2), \cdots, W(t_m) \}\) have joint moment generating function \[\begin{aligned} M_{\vec{W}}(\vec{u}) &= \exp\{\frac{1}{2}u_m^2(t_m-t_{m-1})\}\cdots \exp\{\frac{1}{2}(u_m+u_{m-1}+\cdots+u_1)^2 t_1\} \end{aligned}\]
If any one of these three properties holds, then all three hold, and \(W(t), \ t\geq 0\), is a Brownian motion.
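To illustrate the equivalence of (i) and (ii), here is a small numpy sketch that generates the vector \((W(t_1), W(t_2), W(t_3))\) both ways: by independent increments and by sampling the joint normal with covariance \(\min(t_i,t_j)\). The partition, seed, and path count are my own choices:

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.array([0.5, 1.0, 2.0])
Sigma = np.minimum.outer(t, t)             # covariance matrix from property (ii)
n_paths = 200_000

# Property (ii): draw (W(t_1), W(t_2), W(t_3)) jointly from N(0, Sigma).
W_joint = rng.multivariate_normal(np.zeros(len(t)), Sigma, size=n_paths)

# Property (i): build the same vector from independent normal increments.
dt = np.diff(np.concatenate(([0.0], t)))
W_incr = rng.normal(0.0, np.sqrt(dt), size=(n_paths, len(t))).cumsum(axis=1)

# Both samples should show (approximately) the same covariance matrix.
print(np.cov(W_joint, rowvar=False))
print(np.cov(W_incr, rowvar=False))
```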
Today we have seen the definition and the distribution of Brownian motion; more to come later… Happy reading!!