Brownian Motion

Quadratic Variation of Brownian Motion, Part 3

In the previous blog, I showed you how to calculate the quadratic variation of a function \(f\) over an interval \(0 \leq t \leq T\). Today we will see what the quadratic variation of Brownian motion is.

Theorem: Let \(W\) be a Brownian motion. Then the quadratic variation of \(W\) up to time \(T\) satisfies \([W,W](T) = T\) for all \(T \geq 0\).

I am sure you know about the modes of convergence. We will prove convergence in probability via \(L^2\)-convergence, and then, with a little more assumption, almost sure convergence. Let’s get started.

Proof: Let \(\Pi = \{0=t_0 < t_1 < t_2 < \cdots < t_n = T\}\) be a partition of \([0,T]\).

Let us define the sampled quadratic variation corresponding to the partition \(\Pi\) by \[QV_{\Pi} = \sum_{j=0}^{n-1}(W(t_{j+1})-W(t_{j}))^2\]

We have to show that, as the mesh \(||\Pi|| = \max_{0 \leq j \leq n-1}(t_{j+1}-t_j)\) goes to zero, the sampled quadratic variation converges to \(T\); i.e. we need to prove \[\lim_{||\Pi|| \rightarrow 0} QV_{\Pi} = \lim_{||\Pi|| \rightarrow 0} \sum_{j=0}^{n-1}(W(t_{j+1})-W(t_{j}))^2 = T\]
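Before the proof, here is a small numerical sketch of the claim (my own illustration, assuming numpy is available; it is not part of the proof): simulate independent \(N(0, t_{j+1}-t_j)\) increments on finer and finer uniform partitions of \([0,T]\) and watch \(QV_{\Pi}\) settle near \(T\).

```python
import numpy as np

rng = np.random.default_rng(0)
T = 2.0

for n in [10, 100, 1_000, 10_000, 100_000]:
    dt = T / n                                   # uniform partition: ||Pi|| = T/n
    # Brownian increments W(t_{j+1}) - W(t_j) are independent N(0, dt)
    dW = rng.normal(0.0, np.sqrt(dt), size=n)
    qv = np.sum(dW ** 2)                         # sampled quadratic variation QV_Pi
    print(f"n = {n:>6}  ||Pi|| = {dt:.1e}  QV_Pi = {qv:.4f}  (T = {T})")
```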

Now, \(QV_{\Pi}\) is a random variable, so we have to specify in what sense the limit above holds. If we can show \(L^2\)-convergence, i.e. \(QV_{\Pi} \overset{L^2}\to T\) as \(||\Pi|| \rightarrow 0\), that will also give convergence in probability: \[QV_{\Pi} \overset{L^2}\to T \implies QV_{\Pi} \overset{P}\to T\]

If you are familiar with convergence in \(r\)-th mean, recall: a sequence \(\{ y_n \}_{n\geq 1}\) converges to a constant \(c\) in \(r\)-th mean, denoted \(y_n\overset{L^r}\to c\), if \[\lim_{n \rightarrow \infty}E(|y_n-c|^r)= 0\]

These are the basics; if you want to know more about convergence, go to https://www.probabilitycourse.com/chapter7/7_0_0_intro.php and the subsequent pages.

Now, the condition for \(L^2\)-convergence, \(y_n\overset{L^2}\to c\), is

\[\lim_{n \rightarrow \infty}E(y_n-c)^2 = 0\]

Since \(E(y_n-c)^2 = Var(y_n) + (E(y_n)-c)^2\), this holds exactly when \(E(y_n) \rightarrow c\) and \(Var(y_n) \rightarrow 0\).

Now, there is a beautiful set of relations between the different types of convergence. One of them is that \(L^r\)-convergence implies convergence in probability, i.e. \[y_n \overset{L^2}\to y \implies y_n \overset{P}\to y\] The proof is very simple; it comes from Chebyshev’s inequality, and we are going to use the same idea here.
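For completeness, here is that one-line argument: for any \(\epsilon > 0\), Chebyshev’s inequality gives \[\Pr(|y_n - y| > \epsilon) \leq \frac{E(y_n - y)^2}{\epsilon^2}\] and the right-hand side goes to \(0\) under \(L^2\)-convergence, which is exactly convergence in probability.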

First we show that \(E(QV_{\Pi}) = T\) for every partition, and then that \(\lim_{||\Pi|| \rightarrow 0}Var(QV_{\Pi}) = 0\). Let’s start.

In the previous blogs, I showed you that the increments of Brownian motion follow a normal distribution with mean \(0\) and variance equal to the time difference, i.e. for \(0 \leq s \leq t\), \(W(t)-W(s)\sim N(0, t-s)\).

Now, for the partition \(\Pi = \{0=t_0 < t_1 < t_2 < \cdots < t_n = T\}\) and for \(j = 0, 1, \ldots, n-1\), the increment \(W(t_{j+1})-W(t_{j})\) has mean zero, so \[\begin{aligned} Var(W(t_{j+1})-W(t_{j})) &= E(W(t_{j+1})-W(t_{j}))^2 \\ &= t_{j+1}-t_j \end{aligned}\] Hence we can write:

\[\begin{aligned} E(QV_{\Pi}) &= E(\sum_{j=0}^{n-1}(W(t_{j+1})-W(t_{j}))^2) \\ &= \sum_{j=0}^{n-1}E((W(t_{j+1})-W(t_{j}))^2) \\ &= \sum_{j=0}^{n-1}(t_{j+1}-t_j) = T \\ \end{aligned}\]
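As a quick sanity check of this expectation (again a sketch assuming numpy, with variable names of my own), we can average \(QV_{\Pi}\) over many independent paths:

```python
import numpy as np

rng = np.random.default_rng(1)
T, n, n_paths = 2.0, 500, 20_000
dt = T / n                                   # uniform partition

# One row per path: n independent N(0, dt) Brownian increments
dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n))
qv = np.sum(dW ** 2, axis=1)                 # QV_Pi for each path
print(f"sample mean of QV_Pi = {qv.mean():.4f}   (theory: T = {T})")
```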

So we have proved the first part: \(E(QV_{\Pi}) = T\).

Now,

\[\begin{aligned} Var((W(t_{j+1})-W(t_{j}))^2) &= E\big((W(t_{j+1})-W(t_{j}))^2-(t_{j+1}-t_j)\big)^2 \\ &= E(W(t_{j+1})-W(t_{j}))^4 -2(t_{j+1}-t_j)E(W(t_{j+1})-W(t_{j}))^2 + (t_{j+1}-t_j)^2 \end{aligned}\] Now recall the fourth moment of a centered normal: if \(X \sim N(0,\sigma^2)\), then \(E(X^4) = 3\sigma^4\) (write \(X = \sigma Z\) with \(Z \sim N(0,1)\) and use \(E(Z^4) = 3\)). Substituting:

\[\begin{aligned} Var((W(t_{j+1})-W(t_{j}))^2) &= E(W(t_{j+1})-W(t_{j}))^4 -2(t_{j+1}-t_j)E(W(t_{j+1})-W(t_{j}))^2 + (t_{j+1}-t_j)^2 \\ &= 3(t_{j+1}-t_j)^2-2(t_{j+1}-t_j)(t_{j+1}-t_j) + (t_{j+1}-t_j)^2 \\ &= 2(t_{j+1}-t_j)^2 \end{aligned}\]

Now, since we have already seen that Brownian increments over non-overlapping intervals are independent, i.e. \(cov(W(t)-W(s),\, W(u)-W(v)) = 0\) for \(v \leq u \leq s \leq t\), the variance of their sum is the sum of the variances:

\[\begin{aligned} Var(QV_{\Pi}) &= Var\Big(\sum_{j=0}^{n-1}(W(t_{j+1})-W(t_{j}))^2\Big) \\ &= \sum_{j=0}^{n-1}Var\big[(W(t_{j+1})-W(t_{j}))^2\big] \\ &= \sum_{j=0}^{n-1} 2(t_{j+1}-t_j)^2 \\ &\leq \max_{0\leq j \leq n-1}(t_{j+1}-t_j)\sum_{j=0}^{n-1} 2(t_{j+1}-t_j) \\ &= 2\,||\Pi|| \sum_{j=0}^{n-1}(t_{j+1}-t_j) \\ &= 2\,||\Pi||\,T \end{aligned}\]
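The same Monte Carlo setup also lets us check this variance calculation (a sketch of mine, assuming numpy). Note that for a uniform partition \(||\Pi|| = \frac{T}{n}\), so \(\sum_{j} 2(t_{j+1}-t_j)^2 = \frac{2T^2}{n}\) and the bound \(2||\Pi||T\) is attained with equality:

```python
import numpy as np

rng = np.random.default_rng(2)
T, n, n_paths = 2.0, 500, 20_000
dt = T / n                                   # ||Pi|| for a uniform partition

dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n))
qv = np.sum(dW ** 2, axis=1)                 # QV_Pi for each path

print(f"sample Var(QV_Pi)  = {qv.var():.6f}")
print(f"exact sum 2*(dt)^2 = {2 * n * dt ** 2:.6f}")
print(f"bound 2*||Pi||*T   = {2 * dt * T:.6f}")
```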

In particular, \(\lim_{||\Pi|| \rightarrow 0} Var(QV_{\Pi}) = 0\). Since \(E(QV_{\Pi}) = T\), we have \(E(QV_{\Pi}-T)^2 = Var(QV_{\Pi})\), so this is enough to prove \[QV_{\Pi} \overset{L^2}\to T \quad \text{as } ||\Pi|| \rightarrow 0\] which in turn gives \[QV_{\Pi} \overset{P}\to T \quad \text{as } ||\Pi|| \rightarrow 0\]

Now, let’s move towards almost sure convergence with some additional condition.

Additional condition (for a.s. convergence): Suppose we have a sequence of partitions \(\{\Pi_n\}_{n\geq1}\) of \([0,T]\) with \(\lim_{n\rightarrow \infty}||\Pi_n|| = 0\). In addition to the previous assumptions, suppose \[\lim_{n \rightarrow \infty} n^2||\Pi_n|| = 0\] i.e. the maximum step size (the mesh) of the partition goes to zero faster than \(\frac{1}{n^2}\). Then \[QV_{\Pi_n} \overset{a.s.}\to T \quad \text{as } n \rightarrow \infty\] In other words, standard Brownian motion almost surely has finite quadratic variation on \([0,T]\), equal to \(T\).
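To see this along a single path, here is a minimal sketch (assuming numpy; the dyadic Brownian-bridge midpoint refinement is my choice of construction for the demo, not part of the proof). With \(||\Pi_n|| = \frac{T}{2^n}\) we have \(n^2||\Pi_n|| \rightarrow 0\), so the condition holds, and \(QV_{\Pi_n}\) should settle at \(T\) for the one fixed path:

```python
import numpy as np

rng = np.random.default_rng(3)
T = 2.0

# Level 0: the path sampled only at {0, T}
W = np.array([0.0, rng.normal(0.0, np.sqrt(T))])

for level in range(1, 16):
    dt = T / 2 ** (level - 1)                # spacing before this refinement
    # Brownian-bridge midpoints: average of the endpoints plus N(0, dt/4) noise
    mid = 0.5 * (W[:-1] + W[1:]) + rng.normal(0.0, np.sqrt(dt / 4), size=W.size - 1)
    refined = np.empty(2 * W.size - 1)
    refined[0::2] = W                        # keep the old sample points
    refined[1::2] = mid                      # interleave the new midpoints
    W = refined
    qv = np.sum(np.diff(W) ** 2)             # QV along this finer partition
    print(f"level {level:>2}  ||Pi|| = {T / 2 ** level:.2e}  QV = {qv:.4f}")
```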

Now, the best way to show almost sure convergence is by using the Borel-Cantelli lemma. We will go that way.

Since we are given \(\lim_{n \rightarrow \infty} n^2||\Pi_n|| = 0\), we can define \(\epsilon_n := n^2||\Pi_n||\); then \(\lim_{n \rightarrow \infty} \epsilon_n = 0\) and \(||\Pi_n|| = \frac{\epsilon_n}{n^2}\).

So, using Chebyshev’s inequality (i.e. Markov’s inequality applied to \((QV_{\Pi_n}-T)^2\)), we get \[\begin{aligned} \Pr[(QV_{\Pi_n}-T)^2 > 2\epsilon_n] &\leq \frac{E(QV_{\Pi_n}-T)^2}{2\epsilon_n} \\ &\leq \frac{2||\Pi_n||\,T}{2\epsilon_n} = \frac{T}{n^2} \end{aligned}\]

Let us define a sequence of events \(E_n := \{(QV_{\Pi_n}-T)^2 > 2\epsilon_n\}\).

Then the Borel-Cantelli lemma tells us that if \(\sum_{n=1}^{\infty}\Pr(E_n) < \infty\), then \[\Pr[\limsup_{n \rightarrow \infty} E_n] = 0\]

In this case, \(\sum_{n=1}^{\infty}\Pr(E_n) \leq \sum_{n=1}^{\infty}\frac{T}{n^2} < \infty\).

Hence \[\begin{aligned} \Pr[\limsup_{n \rightarrow \infty} E_n] &= \Pr\Big[\bigcap_{n=1}^{\infty} \bigcup_{k=n}^{\infty} E_k\Big] \\ &= 0 \\ \end{aligned}\]

This means that, with probability \(1\), only finitely many of the events \(E_n\) occur: for all large \(n\) we have \((QV_{\Pi_n}-T)^2 \leq 2\epsilon_n\), and since \(\epsilon_n \rightarrow 0\), this forces \[QV_{\Pi_n} \overset{a.s.}\to T\] So we have proved almost sure convergence, and with it every mode of convergence we set out to prove (the last one under the extra assumption).

Note: We can generalize the condition for a.s. convergence by using the simple fact that, for any \(\delta > 0\), \[\sum_{n=1}^{\infty}\frac{1}{n^{1+\delta}} < \infty\] So the same argument goes through if we instead assume there exists \(\delta > 0\) such that \[\lim_{n \rightarrow \infty} n^{1+\delta}||\Pi_n|| = 0\] i.e. the mesh of the partition goes to zero faster than \(\frac{1}{n^{1+\delta}}\).

I hope you found this helpful. If you know a simpler way to prove these convergences, let me know. Thank you, and happy reading.