1 Problem Set 1

1.1 Item 1: Compute the matrix-transpose products and their eigenvalues and eigenvectors

In this section, we will set up the \(2 \times 3\) matrix \(A\), define \(X = AA^{T}\) and \(Y=A^{T}A\), and compute their eigenvalues and eigenvectors using native R functions.
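
The output below was produced by a chunk along these lines (a minimal sketch; the original code is not echoed in the rendered document):

A <- matrix( c(  1, 2, 3,
                -1, 0, 4 ), nrow = 2, byrow = TRUE )  # the 2 x 3 matrix A
X <- A %*% t(A)                                       # X = A A^T is 2 x 2
A
X
eigen(X)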

##      [,1] [,2] [,3]
## [1,]    1    2    3
## [2,]   -1    0    4
##      [,1] [,2]
## [1,]   14   11
## [2,]   11   17
## eigen() decomposition
## $values
## [1] 26.601802  4.398198
## 
## $vectors
##           [,1]       [,2]
## [1,] 0.6576043 -0.7533635
## [2,] 0.7533635  0.6576043

Thus we conclude that \[ X = \left[ \begin{array}{rr} 14 & 11 \\ 11 & 17 \end{array} \right] \text{ and } \lambda_1 = 26.601802 \text{ , } \lambda_2 = 4.398198 \text{ with eigenvectors } U_1 = \left[ \begin{array}{r} 0.6576043 \\ 0.7533635 \end{array}\right] \text{ , } U_2 = \left[ \begin{array}{r} -0.7533635 \\ 0.6576043 \end{array}\right] \]

A similar computation for \(Y\) will yield
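
A sketch of that computation, assuming A from the chunk above:

Y <- t(A) %*% A   # Y = A^T A is 3 x 3
Y
eigen(Y)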

##      [,1] [,2] [,3]
## [1,]    2    2   -1
## [2,]    2    4    6
## [3,]   -1    6   25
## eigen() decomposition
## $values
## [1] 2.660180e+01 4.398198e+00 1.058982e-16
## 
## $vectors
##             [,1]       [,2]       [,3]
## [1,] -0.01856629 -0.6727903  0.7396003
## [2,]  0.25499937 -0.7184510 -0.6471502
## [3,]  0.96676296  0.1765824  0.1849001

\[ Y = \left[ \begin{array}{rrr} 2 & 2 & -1 \\ 2 & 4 & 6 \\ -1 & 6 & 25 \\ \end{array} \right] \text{ and } \lambda_1 = 26.601802 \text{ , } \lambda_2 = 4.398198 \text{ , } \lambda_3 = 1.058982 \times 10^{-16} \approx 0 \]

\[ \text{ with eigenvectors } V_1 = \left[ \begin{array}{r} -0.01856629 \\ 0.25499937 \\ 0.96676296 \end{array}\right] \text{ , } V_2 = \left[ \begin{array}{r} -0.6727903 \\ -0.7184510 \\ 0.1765824 \end{array}\right] \text{ , } V_3 = \left[ \begin{array}{r} 0.7396003 \\ -0.6471502 \\ 0.1849001 \end{array}\right] \] We will discard the last eigenvector \(V_3\), as it is associated with the final eigenvalue of 0.

1.2 Item 2: Compute the Singular Value Decomposition

Now we will compute the singular value decomposition using native R functionality and examine the left singular vectors, singular values, and right singular vectors of \(A\). In this section we rely on the standard notation:

\[ A = U \Sigma V^{T} \]
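
The decomposition below was obtained along these lines (a sketch; the object name s is an assumption):

s <- svd(A)   # s$u, s$d, s$v hold U, the singular values, and V
s
s$v[, 1]      # right singular vectors, printed one at a time
s$v[, 2]
s$d[1]        # singular values, printed one at a time
s$d[2]
s$u[, 1]      # left singular vectors, printed one at a time
s$u[, 2]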

## $d
## [1] 5.157693 2.097188
## 
## $u
##            [,1]       [,2]
## [1,] -0.6576043 -0.7533635
## [2,] -0.7533635  0.6576043
## 
## $v
##             [,1]       [,2]
## [1,]  0.01856629 -0.6727903
## [2,] -0.25499937 -0.7184510
## [3,] -0.96676296  0.1765824
## [1]  0.01856629 -0.25499937 -0.96676296
## [1] -0.6727903 -0.7184510  0.1765824
## [1] 5.157693
## [1] 2.097188
## [1] -0.6576043 -0.7533635
## [1] -0.7533635  0.6576043

1.3 Item 3: Verify the relationship of the SVD to X and Y

We verify that the left singular vectors are eigenvectors of \(X\), that the right singular vectors are eigenvectors of \(Y\), and that each \(v_i\) maps to \(u_i\) under \(A\).

\[ Xu_{i} = \sigma_{i}^2u_{i} \text{ and } Yv_{i} = \sigma_{i}^2v_{i} \]

\[ Av_{i} = \sigma_{i}u_{i} \implies Av_{i} - \sigma_{i}u_{i} = 0 \] The left singular vectors are eigenvectors of \(X\):
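
A sketch of the check for both left singular vectors, assuming s <- svd(A) and X from above:

X %*% s$u[, 1]         # X u_1 ...
s$d[1]^2 * s$u[, 1]    # ... equals sigma_1^2 u_1
X %*% s$u[, 2]         # X u_2 ...
s$d[2]^2 * s$u[, 2]    # ... equals sigma_2^2 u_2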

##           [,1]
## [1,] -17.49346
## [2,] -20.04083
## [1] -17.49346 -20.04083
##           [,1]
## [1,] -3.313442
## [2,]  2.892274
## [1] -3.313442  2.892274

Moreover, the right singular vectors map to the left singular vectors under \(A\):
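
A sketch of the mapping check, under the same assumptions:

A %*% s$v[, 1]      # A v_1 ...
s$d[1] * s$u[, 1]   # ... equals sigma_1 u_1
A %*% s$v[, 2]      # A v_2 ...
s$d[2] * s$u[, 2]   # ... equals sigma_2 u_2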

##           [,1]
## [1,] -3.391721
## [2,] -3.885618
## [1] -3.391721 -3.885618
##           [,1]
## [1,] -1.579945
## [2,]  1.379120
## [1] -1.579945  1.379120

Doing the same calculations for the eigenvectors of \(Y\):
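
A sketch of the analogous check for Y:

Y %*% s$v[, 1]         # Y v_1 ...
s$d[1]^2 * s$v[, 1]    # ... equals sigma_1^2 v_1
Y %*% s$v[, 2]         # Y v_2 ...
s$d[2]^2 * s$v[, 2]    # ... equals sigma_2^2 v_2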

##             [,1]
## [1,]   0.4938968
## [2,]  -6.7834426
## [3,] -25.7176364
## [1]   0.4938968  -6.7834426 -25.7176364
##            [,1]
## [1,] -2.9590650
## [2,] -3.1598902
## [3,]  0.7766445
## [1] -2.9590650 -3.1598902  0.7766445

Finally, we verify that the nonzero eigenvalues of \(X\) and \(Y\) are the same and are the squares of the singular values of \(A\). In the differences below, the residuals are close to zero, confirming equality of the non-zero eigenvalues.
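
One plausible form of the comparison that produced the residuals below (an assumption about the unechoed chunk):

eigen(X)$values - eigen(Y)$values[1:2]   # nonzero eigenvalues of X vs. those of Y
eigen(X)$values[1] - s$d[1]^2            # lambda_1 vs. sigma_1^2
eigen(X)$values[2] - s$d[2]^2            # lambda_2 vs. sigma_2^2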

## [1]  4.973799e-14 -2.664535e-15
## [1] 3.552714e-15
## [1] -2.664535e-15

2 Problem Set 2

2.1 The code

The solution code below computes the matrix inverse using cofactor expansion. We implement it in four successive steps, building up from the helper functions.

  • first, we calculate the minor of a matrix at cell i,j using get_minor
  • second, we calculate the cofactor of cell i,j, which adds the sign term
  • third, we calculate the matrix of cofactors with a double for loop
  • fourth, the inverse is defined as the transpose of the cofactor matrix divided by the determinant

Lastly, we test the code and validate it using cases of dimension \(n=1,2\).
We also try to break the code using an ill-conditioned matrix.

#  FUNCTION:  get_minor
#  PURPOSE:   Calculate the minor of a matrix with error checking for dimensions
#  INPUTS:    M      a square matrix (the 1 x 1 case is handled specially)
#             rowi   a row index between 1 and nrow(M)
#             colj   a column index between 1 and ncol(M)
#  OUTPUT:    status = 1 if input parameters satisfy contract.  Otherwise status = 0
#             minor  = determinant of matrix M where row i and col j are removed.
# ------------------------------------------------------------------------------------
get_minor <- function(M, rowi, colj )
{
    status = 1  # Valid =>1 ,   Failure => 0
    minor = 0
    
    if( nrow(M) < 1 || ncol(M) < 1 || rowi <= 0 || colj <= 0 || nrow(M) != ncol(M) )
    {
        status = 0  # matrix is too small, indices are non-positive, or matrix is not square
        print("matrix dimensions are too small, rowi/colj are non-positive, or matrix is not square")
    }
    
    if(nrow(M)< rowi ) 
    {
        print(paste0("rowi parameter exceeds matrix M: ", rowi, " > ", "nrow(M)= ", nrow(M)))
        status = 0
    }

    if(ncol(M)< colj)
    {
        print(paste0("colj parameter exceeds matrix M: ", colj, " > ", "ncol(M)= ", ncol(M)))
        status = 0
    }
    if( status > 0 )
    {
        if( nrow(M) == 1 )
        {
            minor = 1  # for a 1 x 1 matrix, we define the minor to be 1
        }
        else
        {
            minor = det( matrix( M[ -rowi, -colj ], nrow = nrow(M) - 1 ) )
        }
    }
    c( status, minor )  # return the status flag and the minor explicitly
}


# FUNCTION:  cofactor
# PURPOSE:   Calculate the cofactor of M at cell (rowi, colj), i.e. the signed minor
# OUTPUT:    c( status, cofactor ) with status passed through from get_minor
# -------------------------------------------------------------------
cofactor <- function(M, rowi, colj ){

     output = get_minor( M, rowi, colj )
     cofactor_sign_term = (-1) ^ ( ( rowi + colj ) %% 2 )  # sign depends only on the parity of rowi + colj
     c( output[1], cofactor_sign_term * output[2] )  # pass along the status from get_minor AND the signed minor
}

# FUNCTION:   cofactor_matrix
# PURPOSE:    Calculate the n x n matrix of cofactors of an n x n matrix M
# INPUT:      matrix M
# OUTPUT:     List of status, matrix where
#              status = 1 if the calculations and inputs are valid
#              status = 0 if otherwise
#              matrix output should not be used if status = 0
#              matrix[i,j] = (-1)^(i+j) times minor(i,j) of M
# -------------------------------------------------------------------
cofactor_matrix <- function(M)
{
    status = 1
    cfm = matrix(c(0),nrow=1, ncol=1 )
  
    if( nrow(M) < 1 || ncol(M) < 1 || nrow(M) != ncol(M) ){
        status = 0
        print("Matrix size is too small or matrix is not square")
    }
    
    if( status != 0 )
    {
        cfm = matrix( rep(0, nrow(M)^2), nrow = nrow(M) )  # allocate an all-zero matrix of the same size as M
        for( i in 1:nrow(M) )
        {
            for(j in 1:ncol(M))
            {
                 x = cofactor(M, i, j )  # calculate the cofactor here and do error checking
                 if( x[1] != 0 )
                 {
                    cfm[i,j] = x[2]
                 }
                 else
                 {
                    print(paste0("error in cofactor for entry row ", i , " column ", j ) )
                    status = 0
                    break  # break exits only the inner loop ...
                 }
            }
            if( status == 0 ) break  # ... so propagate the failure out of the outer loop as well
        }
    }
    list( status, cfm  )
}

# FUNCTION:  myinverse
# PURPOSE:   Computes a matrix inverse using the cofactor expansion method.
# INPUTS:    a square matrix A (assumed non-singular)
# OUTPUTS:   a square matrix representing the inverse of A, or NULL if the inputs are invalid.
# -----------------------------------------------------------------------------
myinverse <- function(A)
{
    retList = cofactor_matrix(A)
    status = retList[[1]]
    CF = retList[[2]]

    if( status == 0 )
    {
        print("myinverse: invalid input matrix - returning NULL")
        return( NULL )
    }

    t(CF)/det(A)   # transpose of the cofactor matrix (the adjugate) divided by the determinant gives the inverse
}

2.2 Normal Test Cases

The following test cases all pass. They are well conditioned.
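
The harness that produced the output below is not echoed; here is a hypothetical sketch for the 1 x 1 case (the name A1 and the use of all.equal for both checks are assumptions):

A1 <- matrix( 0.6, nrow = 1 )   # 1 x 1 test matrix
solve(A1)                       # stable reference solution
myinverse(A1)                   # cofactor solution
print(paste0(" Does solution pass the numerical tolerance test? ",
             all.equal( solve(A1), myinverse(A1) )))
print(paste0(" Does the inverse times matrix equal identity? ",
             all.equal( A1 %*% myinverse(A1), diag(nrow(A1)) )))

The 2 x 2 case follows the same pattern.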

##      [,1]
## [1,]  0.6
##      [,1] [,2]
## [1,]    1    2
## [2,]    3   -2
## [1] "Test Case 1 x 1 matrix should be reciprocal of value"
##      [,1]
## [1,]  0.6
## [1] " stable solution using R's solve "
##          [,1]
## [1,] 1.666667
## [1] " cofactor solution using myinverse "
##          [,1]
## [1,] 1.666667
## [1] " Does solution pass the numerical tolerance test? TRUE"
## [1] " Does the inverse times matrix equal identity? TRUE"
##      [,1]
## [1,]    1
## [1] "Test Case 2 x 2 matrix should pass"
##      [,1] [,2]
## [1,]    1    2
## [2,]    3   -2
## [1] " stable solution using R's solve "
##       [,1]   [,2]
## [1,] 0.250  0.250
## [2,] 0.375 -0.125
## [1] " cofactor solution using myinverse "
##       [,1]   [,2]
## [1,] 0.250  0.250
## [2,] 0.375 -0.125
## [1] " Does solution pass the numerical tolerance test? Mean relative difference: 2.220446e-16"
## [1] " Does the inverse times matrix equal identity? TRUE"
##      [,1] [,2]
## [1,]    1    0
## [2,]    0    1

2.3 Ill-Conditioned Test Case

The last test case arises from an online example of an ill-conditioned matrix, which I have adapted. [http://www.onmyphd.com/?p=invertible.singular.ill.conditioned.matrix]

\[ G = \frac{1}{2} \left[ \begin{array}{rr} 1 & 1 \\ 1 + 10^{-7} & 1 - 10^{-7} \\ \end{array} \right] \text{ , } G^{-1} = \left[ \begin{array}{rr} 1 - 10^{7} & 10^{7} \\ 1 + 10^{7} & -10^{7} \\ \end{array} \right] \] Note that we know the inverse of \(G\) analytically. We also know the inverse of powers of \(G\).
For example, the inverse of \(G^2\) is \((G^{-1})^2\). Since taking powers of a matrix progressively degrades the accuracy of its entries, we will show that squaring \(G\) gives an observable degradation in the accuracy of the computed inverse.
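
A sketch of the tested matrix; dropping the \(1/2\) scale factor to match the printed output below is an assumption about the unechoed chunk (rescaling does not change the conditioning):

eps <- 1e-7
G <- matrix( c( 1,       1,
                1 + eps, 1 - eps ), nrow = 2, byrow = TRUE )
G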

##      [,1]      [,2]
## [1,]    1 1.0000000
## [2,]    1 0.9999999
## [1] "Test Case 2 x 2 matrix is ill conditioned - could fail"
##      [,1]      [,2]
## [1,]    1 1.0000000
## [2,]    1 0.9999999
## [1] " stable solution using R's solve "
##          [,1]   [,2]
## [1,] -4999999  5e+06
## [2,]  5000000 -5e+06
## [1] " cofactor solution using myinverse "
##          [,1]   [,2]
## [1,] -4999999  5e+06
## [2,]  5000000 -5e+06
## [1] " Does solution pass the numerical tolerance test? Mean relative difference: 1.862645e-16"
## [1] " Does the inverse times matrix equal identity? Mean relative difference: 1.862645e-09"
##              [,1]          [,2]
## [1,] 1.000000e+00 -9.313226e-10
## [2,] 9.313226e-10  1.000000e+00

Looking at the singular values of \(G\) below, we see that \(\sigma_{1}\) and \(\sigma_{2}\) differ by seven orders of magnitude. The condition number is defined as: \[\kappa(G) = \frac{\sigma_{max}(G)}{\sigma_{min}(G)}\]
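
A sketch of the computation, assuming G from above:

svd(G)$d                        # singular values sigma_1, sigma_2 of G
max(svd(G)$d) / min(svd(G)$d)   # condition number kappa(G)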

## [1] 2e+00 1e-07
## [1] 2e+07
## [1] "Test Case 2 x 2 matrix ill condition - 2th power of G"
##      [,1] [,2]
## [1,]    2    2
## [2,]    2    2
## [1] " stable solution using R's solve "
##               [,1]          [,2]
## [1,]  4.949010e+13 -4.949010e+13
## [2,] -4.949011e+13  4.949011e+13
## [1] " cofactor solution using myinverse "
##               [,1]          [,2]
## [1,]  4.949010e+13 -4.949010e+13
## [2,] -4.949011e+13  4.949011e+13
## [1] " Does solution pass the numerical tolerance test? Mean relative difference: 5.130445e-16"
## [1] " Does the inverse times matrix equal identity? Mean relative difference: 0.03076923"
##          [,1]      [,2]
## [1,] 0.984375 -0.015625
## [2,] 0.015625  1.015625

Clearly, the above shows that \[G^2 \cdot G^{-2} = \left[ \begin{array}{rr} 0.9844 & -0.0156 \\ 0.0156 & 1.0156 \end{array} \right] \neq \left[ \begin{array}{rr} 1 & 0 \\ 0 & 1 \end{array} \right] \]