This article discusses the basic concepts of Google's PageRank algorithm, along with a few examples.
By the Perron–Frobenius theorem, if the underlying web graph is strongly connected (i.e., the transition matrix is irreducible) and aperiodic, then the power-iteration algorithm used to compute the PageRanks is guaranteed to converge to a unique steady-state vector, which is precisely the vector of PageRank values of all the nodes.
The convergence rate of power iteration is governed by the ratio of the two dominant eigenvalues \(\frac{\lambda_2}{\lambda_1}\) of the matrix M: the error after k iterations shrinks roughly like \(\left(\frac{|\lambda_2|}{|\lambda_1|}\right)^k\). Since \(\lambda_1 = 1\) for a stochastic transition matrix, the PageRank computation converges faster when the second eigenvalue \(\lambda_2\) is smaller.
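To make the procedure concrete, here is a minimal power-iteration sketch. It assumes NumPy and a column-stochastic transition matrix M without a damping/teleportation term; the function name and the L1 stopping rule are illustrative choices, not details taken from the article.

```python
import numpy as np

def pagerank_power_iteration(M, tol=1e-5, max_iter=1000):
    """Power iteration on a column-stochastic matrix M.

    Returns the (approximate) steady-state PageRank vector and the
    number of iterations taken to reach an L1 change below tol.
    """
    n = M.shape[0]
    r = np.full(n, 1.0 / n)              # start from the uniform distribution
    for k in range(1, max_iter + 1):
        r_next = M @ r                   # one power-iteration step
        if np.linalg.norm(r_next - r, 1) < tol:
            return r_next, k
        r = r_next
    return r, max_iter
```

The stopping rule here measures the change between successive iterates, which is a common practical proxy for the error against the (unknown) true steady-state vector.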
The next figures show the results of an experiment that starts with an Erdős–Rényi random graph with \(n = 100\) nodes and edge probability \(p = 1/2\), then repeatedly removes 10 edges uniformly at random. After each removal step, the power-iteration algorithm is run, the number of iterations needed to converge to an error of at most \(10^{-5}\) is recorded, and the ratio \(\frac{\lambda_2}{\lambda_1}\) is noted.
As the figures show, the ratio of the dominant eigenvalues (equivalently \(\lambda_2\), since \(\lambda_1 = 1\)) decreases as edges are removed, and, as expected, the number of iterations taken to converge (with the same accuracy threshold) decreases as well, i.e., convergence becomes faster as the second eigenvalue becomes smaller.
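A rough sketch of how such an experiment could be reproduced with NetworkX and NumPy is shown below. The number of removal rounds, the random seeds, and the isolated-node guard are assumptions on my part, not details from the article; it re-uses the pagerank_power_iteration sketch above.

```python
import numpy as np
import networkx as nx

# Re-uses pagerank_power_iteration from the sketch above.
rng = np.random.default_rng(0)                   # arbitrary seed
G = nx.erdos_renyi_graph(n=100, p=0.5, seed=0)   # starting graph

ratios, num_iters = [], []
for _ in range(100):                             # number of removal rounds is an assumption
    # Remove 10 edges chosen uniformly at random.
    edges = list(G.edges())
    for idx in rng.choice(len(edges), size=10, replace=False):
        G.remove_edge(*edges[idx])

    A = nx.to_numpy_array(G)
    degrees = A.sum(axis=0)
    if (degrees == 0).any():                     # stop if a node becomes isolated
        break
    M = A / degrees                              # column-stochastic transition matrix

    lams = np.sort(np.abs(np.linalg.eigvals(M)))[::-1]
    ratios.append(lams[1] / lams[0])             # |lambda_2| / |lambda_1|
    _, k = pagerank_power_iteration(M, tol=1e-5)
    num_iters.append(k)
```

Plotting ratios against num_iters (for example with matplotlib) would then reproduce the kind of comparison shown in the figures.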