Stochastic Calculus

In the previous blog we introduced the Ito integral and saw how to construct it. Ito calculus, invented by the famous Japanese mathematician Kiyosi Ito, gives us the tools to work with stochastic processes.

We also saw that the Ito integral \(I(t), \ t \geq 0\) is a martingale. Now we are going to look at some other interesting properties of the Ito integral and move forward.

2.2 Properties of the Ito integral

Theorem: The Ito integral, defined for \(t_k \leq t < t_{k+1}\) on the partition \(\Pi = \{t_0, t_1, t_2, \cdots, t_n \}\) as \[I(t) = \sum_{j = 0}^{k-1}\Delta(t_j)[W(t_{j+1})-W(t_j)] + \Delta(t_k)[W(t)-W(t_k)],\] satisfies \[E[I^2(t)] = E\int_{0}^{t}\Delta^2(u)\,du.\]

Proof: This is a very interesting and important result, called the Ito isometry. An isometry is nothing but a distance-preserving transformation between metric spaces. The proof is simple. To simplify the notation, we set:

\[\begin{aligned} D_j &= W(t_{j+1})-W(t_j) \qquad \text{for} \ j = 0,1, \cdots, k-1, \\ D_k &= W(t)-W(t_k). \\ \end{aligned}\] Then we can write the Ito integral as:

\[\begin{aligned} I(t) &= \sum_{j = 0}^{k-1}\Delta(t_j)[W(t_{j+1})-W(t_j)] + \Delta(t_k)[W(t)-W(t_k)] \\ &= \sum_{j=0}^{k}\Delta(t_j)\,D_j. \end{aligned}\]

So we can also write:

\[\begin{aligned} I^2(t) &= \Big[\sum_{j=0}^{k}\Delta(t_j)\,D_j\Big]^2 \\ &= \sum_{j=0}^{k}\Delta^2(t_j)\,D^2_j + 2\sum_{0 \leq i < j \leq k}\Delta(t_i)\,\Delta(t_j)\,D_i\,D_j. \\ \end{aligned}\]

First, we will show that the expected value of each of the cross-product terms is zero, i.e. \[E[\Delta(t_i)\,\Delta(t_j)\,D_i\,D_j] = 0 \qquad \text{for} \quad 0 \leq i < j \leq k.\]

For \(i<j\),

the random variable \(\Delta(t_i)\,\Delta(t_j)\,D_i\) is \(\mathcal{F}(t_j)\)-measurable;

and the Brownian increment \(D_j = W(t_{j+1})-W(t_j)\) is independent of \(\mathcal{F}(t_j)\); furthermore, \(E[D_j] = 0\).

Therefore, we can write: \[\begin{aligned} E[\Delta(t_i)\,\Delta(t_j)\,D_i\,D_j] &= E[\Delta(t_i)\,\Delta(t_j)\,D_i]\cdot E[D_j] \\ &= E[\Delta(t_i)\,\Delta(t_j)\,D_i]\cdot 0 \\ &= 0. \\ \end{aligned}\]

Next, we look at the squared terms, i.e. \(\Delta^2(t_j)\,D^2_j\).

Like \(\Delta(t_j)\), \(\Delta^2(t_j)\) is also \(\mathcal{F}(t_j)\)-measurable;

and the squared Brownian increment \(D^2_j\) is independent of \(\mathcal{F}(t_j)\). Furthermore, we have \[\begin{aligned} & D_j = W(t_{j+1})-W(t_j) \sim \mathcal{N}(0, t_{j+1}-t_j) \quad \text{for} \ j = 0,1, \cdots, k-1, \quad \text{and} \\ & D_k = W(t)-W(t_k) \sim \mathcal{N}(0, t-t_k). \end{aligned}\]

Hence we can write \[\begin{aligned} &E[D^2_j] = t_{j+1}-t_j \quad \text{for} \ j = 0,1, \cdots, k-1, \quad \text{and} \\ &E[D^2_k] = t-t_k. \end{aligned}\]

Therefore,

\[\begin{aligned} E[I^2(t)] &= E\Big[\sum_{j=0}^{k}\Delta(t_j)\,D_j\Big]^2 \\ &= E\Big[\sum_{j=0}^{k}\Delta^2(t_j)\,D^2_j + 2\sum_{0 \leq i < j \leq k}\Delta(t_i)\,\Delta(t_j)\,D_i\,D_j\Big] \\ &= E\Big[\sum_{j=0}^{k}\Delta^2(t_j)\,D^2_j\Big] = \sum_{j=0}^{k}E[\Delta^2(t_j)]\cdot E[D^2_j] \\ &= \sum_{j=0}^{k-1}E[\Delta^2(t_j)]\,(t_{j+1}-t_j)+E[\Delta^2(t_k)]\,(t-t_k). \\ \end{aligned}\]

But \(\Delta(t_j)\) is constant over the interval \([t_j,t_{j+1})\), which lets us write \[\begin{aligned} &\Delta^2(t_j)\,(t_{j+1}-t_j) = \int_{t_j}^{t_{j+1}}\Delta^2(u)\,du, \\ &\Delta^2(t_k)\,(t-t_k) = \int_{t_k}^{t}\Delta^2(u)\,du. \\ \end{aligned}\]

Therefore,

\[\begin{aligned} E[I^2(t)] &= \sum_{j=0}^{k-1}E[\Delta^2(t_j)]\,(t_{j+1}-t_j)+E[\Delta^2(t_k)]\,(t-t_k) \\ &= \sum_{j=0}^{k-1}E\int_{t_j}^{t_{j+1}}\Delta^2(u)\,du + E\int_{t_k}^{t}\Delta^2(u)\,du \\ &= E\Big[\sum_{j=0}^{k-1}\int_{t_j}^{t_{j+1}}\Delta^2(u)\,du + \int_{t_k}^{t}\Delta^2(u)\,du\Big] \\ &= E \int_{0}^{t}\Delta^2(u)\,du. \qquad \text{[Q.E.D.]} \end{aligned}\]
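To make the isometry concrete, here is a minimal Monte Carlo sketch. The particular integrand \(\Delta(u) = W(t_j)\) on \([t_j, t_{j+1})\) and all variable names are illustrative choices of mine, not from the theorem: we simulate many Brownian paths, form \(I(T)\) as the sum defining the Ito integral, and compare \(E[I^2(T)]\) with \(E\int_0^T \Delta^2(u)\,du\).

```python
# A minimal numerical sketch of the Ito isometry, assuming the simple
# adapted integrand Delta(u) = W(t_j) on [t_j, t_{j+1}).
import numpy as np

rng = np.random.default_rng(0)

T, n_steps, n_paths = 1.0, 500, 20_000
dt = T / n_steps

# Brownian increments and paths: W has shape (n_paths, n_steps + 1)
dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
W = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(dW, axis=1)], axis=1)

# Adapted step integrand evaluated at the left endpoint of each interval:
# Delta(t_j) = W(t_j), which is F(t_j)-measurable.
Delta = W[:, :-1]

I_T = np.sum(Delta * dW, axis=1)              # I(T) = sum_j Delta(t_j) D_j
lhs = np.mean(I_T**2)                         # E[I^2(T)]
rhs = np.mean(np.sum(Delta**2, axis=1) * dt)  # E[ int_0^T Delta^2(u) du ]

print(f"E[I^2(T)] ~ {lhs:.4f},  E[int Delta^2 du] ~ {rhs:.4f}")
# Both should be close to T^2/2 = 0.5 here, since E[W^2(u)] = u.
```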

Now, we turn to the quadratic variation of the Ito integral \(I(t)\), thought of as a process in its upper limit of integration \(t\). We have previously seen that Brownian motion accumulates quadratic variation at rate one per unit time.

However, as it enters the Ito integral \[I(t)=\int_{0}^{t}\Delta(u)\,dW(u),\] the Brownian motion is scaled in a time- and path-dependent way by the integrand \(\Delta(u)\).

Now, because increments are squared in the quadratic variation calculation, the quadratic variation of the Brownian motion will be multiplied by \(\Delta^2(u)\) as it enters the Ito integral. We will see this in the following theorem.
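Before the theorem, a quick numerical sanity check of the rate-one fact. This is a rough sketch with illustrative names, using the sum of squared increments over a fine grid as the realized quadratic variation:

```python
# Sketch: the realized quadratic variation of a Brownian path over [0, t]
# is close to t, i.e. Brownian motion accumulates QV at rate one per unit time.
import numpy as np

rng = np.random.default_rng(1)

t, n_steps = 2.0, 1_000_000
dt = t / n_steps

dW = rng.normal(0.0, np.sqrt(dt), size=n_steps)
realized_qv = np.sum(dW**2)   # sum of squared increments

print(f"realized QV ~ {realized_qv:.4f},  t = {t}")   # ~ 2.0
```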

Theorem: Let us define the Ito integral as \[I(t) = \sum_{j = 0}^{k-1}\Delta(t_j)[W(t_{j+1})-W(t_j)] + \Delta(t_k)[W(t)-W(t_k)].\] Then the quadratic variation accumulated by the Ito integral up to time \(t\) is given by \[[I,I](t) = \int_{0}^{t}\Delta^2(u)\,du.\]

Proof: In the definition of the Ito integral, we start with the partition \(\Pi = \{t_0, t_1, t_2, \cdots, t_n \}\), with \[0 = t_0 \leq t_1 \leq t_2 \leq \cdots \leq t_n = T.\]

Consider any \(j\) for which \(\Delta(u)\) takes a constant value on the subinterval \([t_j,t_{j+1}]\), and take a much finer partition of that subinterval:

\[t_j = s_0 < s_1 < s_2 < \cdots < s_m = t_{j+1}.\]

Then we can write:

\[\begin{aligned} \sum_{i=0}^{m-1}[I(s_{i+1})-I(s_i)]^2 &= \sum_{i=0}^{m-1}[\Delta(t_j)(W(s_{i+1})-W(s_{i}))]^2 \\ &= \Delta^2(t_j)\sum_{i=0}^{m-1}[W(s_{i+1})-W(s_{i})]^2. \\ \end{aligned}\] As \(m \rightarrow \infty\) and the step size \(\max_i(s_{i+1}-s_i) \rightarrow 0\), the sum \(\sum_{i=0}^{m-1}[W(s_{i+1})-W(s_{i})]^2\) converges to the quadratic variation accumulated by the Brownian motion between times \(t_{j}\) and \(t_{j+1}\), i.e. \[\lim_{\max_i(s_{i+1}-s_i) \rightarrow 0}\sum_{i=0}^{m-1}[W(s_{i+1})-W(s_{i})]^2 = t_{j+1}-t_j.\]

Hence, in this limit, \[\begin{aligned} \sum_{i=0}^{m-1}[I(s_{i+1})-I(s_i)]^2 &= \Delta^2(t_j)\sum_{i=0}^{m-1}[W(s_{i+1})-W(s_{i})]^2 \\ &\rightarrow \Delta^2(t_j)\,(t_{j+1}-t_j). \\ \end{aligned}\]

Since \(\Delta^2(u)\) is constant on \([t_j, t_{j+1})\), we can write \[\Delta^2(t_j)\,(t_{j+1}-t_j) = \int_{t_j}^{t_{j+1}}\Delta^2(u)\,du.\] The same argument applies on the final interval \([t_k,t]\).

So \([I,I](t)\) can be written as \[\begin{aligned} [I,I](t) &= \sum_{j=0}^{k-1}\int_{t_j}^{t_{j+1}}\Delta^2(u)\,du+\int_{t_k}^{t}\Delta^2(u)\,du \\ &= \int_{0}^{t}\Delta^2(u)\,du. \qquad \text{[Q.E.D.]} \end{aligned}\]
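As a pathwise sanity check of this theorem, here is a short sketch (the integrand \(\Delta(u) = W(t_j)\) on each subinterval is again an assumed, illustrative choice): on a single simulated path, the sum of squared increments of \(I\) should track \(\int_0^T \Delta^2(u)\,du\) computed on that same path.

```python
# Pathwise sketch: [I,I](T) ~ int_0^T Delta^2(u) du on a single path,
# assuming the illustrative integrand Delta(u) = W(t_j) on [t_j, t_{j+1}).
import numpy as np

rng = np.random.default_rng(2)

T, n_steps = 1.0, 200_000
dt = T / n_steps

dW = rng.normal(0.0, np.sqrt(dt), size=n_steps)
W = np.concatenate([[0.0], np.cumsum(dW)])

Delta = W[:-1]                      # adapted step integrand
dI = Delta * dW                     # increments of I(t)

qv_I = np.sum(dI**2)                # [I,I](T): sum of squared increments of I
int_delta2 = np.sum(Delta**2) * dt  # int_0^T Delta^2(u) du on the same path

print(f"[I,I](T) ~ {qv_I:.4f},  int Delta^2 du ~ {int_delta2:.4f}")
# The two agree path by path as the mesh goes to zero.
```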

In the above two theorems, the Ito isometry and the quadratic variation of the Ito integral, we see how the quadratic variation and the variance of a process can differ. The quadratic variation is computed path by path, so it depends on the path. If along one path we take a large position \(\Delta(u)\), the quadratic variation will be large; if we take small positions, i.e. small values of \(\Delta(u)\), the quadratic variation will be small. The quadratic variation can therefore be regarded as a measure of risk, and it depends on the size of the positions we take. The variance of \(I(t)\) is an average of the quadratic variation over all possible paths. Because it is an expectation, it cannot be random. As an average over all possible paths, realized and unrealized, it is a more theoretical concept than the quadratic variation.
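This contrast can also be seen numerically. In the rough sketch below (same illustrative integrand as in the earlier snippets), the pathwise quadratic variation spreads out across paths, while its average matches the single non-random number \(\mathrm{Var}[I(T)]\):

```python
# Sketch: quadratic variation is random (one value per path),
# while Var[I(T)] = E[I^2(T)] is a single number, its average over paths.
import numpy as np

rng = np.random.default_rng(3)

T, n_steps, n_paths = 1.0, 1_000, 2_000
dt = T / n_steps

dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
W = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(dW, axis=1)], axis=1)
Delta = W[:, :-1]                              # illustrative Delta(u) = W(t_j)

qv_per_path = np.sum(Delta**2, axis=1) * dt    # [I,I](T) = int Delta^2 du, per path
I_T = np.sum(Delta * dW, axis=1)

print(f"spread of QV across paths: std ~ {np.std(qv_per_path):.4f}")   # clearly > 0
print(f"mean QV ~ {np.mean(qv_per_path):.4f},  Var[I(T)] ~ {np.var(I_T):.4f}")
# The average of the pathwise QV matches the (non-random) variance of I(T).
```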

In the next blog, we are going to use some of the tricks we found while studying Brownian motion. Happy reading.