# Define sections for each chapter sequentially
sections <- c(
"2.1", "2.2", "2.3", "2.4",
"3.1", "3.2", "3.3", "3.4",
"4.1", "4.2", "4.3",
"5.1", "5.2", "5.3", "5.4", "5.5",
"6.1", "6.2", "6.3",
"7.1", "7.2", "7.3", "7.4", "7.5", "7.6",
"8.1", "8.2", "8.3"
)
# Initialize variables
weeks_list <- list()
sec_index <- 1
total_sections <- length(sections)
week_number <- 1
# Fill in the list
while (sec_index <= total_sections) {
week_schedule <- data.frame(
Day = c("Mon", "Tues", "Wed", "Thurs", "Fri"),
cover_computation = character(5),
cover_studying = character(5),
stringsAsFactors = FALSE
)
for (day in 1:5) {
if (sec_index <= total_sections) {
# Assign current section to computation
week_schedule$cover_computation[day] <- sections[sec_index]
# If there's another section, assign it to studying
if (sec_index + 1 <= total_sections) {
week_schedule$cover_studying[day] <- sections[sec_index + 1]
}
# Increment section index
sec_index <- sec_index + 1
}
}
# Add the week's schedule to the list
weeks_list[[paste("wk", week_number)]] <- week_schedule
week_number <- week_number + 1
}
# Display the weeks list
weeks_list
## $`wk 1`
## Day cover_computation cover_studying
## 1 Mon 2.1 2.2
## 2 Tues 2.2 2.3
## 3 Wed 2.3 2.4
## 4 Thurs 2.4 3.1
## 5 Fri 3.1 3.2
##
## $`wk 2`
## Day cover_computation cover_studying
## 1 Mon 3.2 3.3
## 2 Tues 3.3 3.4
## 3 Wed 3.4 4.1
## 4 Thurs 4.1 4.2
## 5 Fri 4.2 4.3
##
## $`wk 3`
## Day cover_computation cover_studying
## 1 Mon 4.3 5.1
## 2 Tues 5.1 5.2
## 3 Wed 5.2 5.3
## 4 Thurs 5.3 5.4
## 5 Fri 5.4 5.5
##
## $`wk 4`
## Day cover_computation cover_studying
## 1 Mon 5.5 6.1
## 2 Tues 6.1 6.2
## 3 Wed 6.2 6.3
## 4 Thurs 6.3 7.1
## 5 Fri 7.1 7.2
##
## $`wk 5`
## Day cover_computation cover_studying
## 1 Mon 7.2 7.3
## 2 Tues 7.3 7.4
## 3 Wed 7.4 7.5
## 4 Thurs 7.5 7.6
## 5 Fri 7.6 8.1
##
## $`wk 6`
## Day cover_computation cover_studying
## 1 Mon 8.1 8.2
## 2 Tues 8.2 8.3
## 3 Wed 8.3
## 4 Thurs
## 5 Fri
Over-fitted:
Under-fitted:
Linear Transformation:
Basis:
Image:
Kernel:
Nullity:
Total Space (Whole Vector Space):
(Linear) Vector Space:
This is the set of all vectors that can be written as a linear combination of the basis vectors,
or, \(V_{ector \ Space}=\{ \vec{v}=c_1\vec{v}_1+...+c_n\vec{v}_n \mid c_1,c_2,...,c_n \in \mathbb{R} \} \therefore \vec{v} \in V\)
Subset:
A set \(A\) is a subset of a set \(B\) when every element of \(A\) is also an element of \(B\). We write:
\(A \subset B \ \ \text{i.e. A is 'contained' by B}\)
Subspace:
A subspace is a smaller space within a larger vector space that follows the same rules of vector addition and scalar multiplication. Imagine a flat surface, like a sheet of paper, floating in 3D space. Any line through the origin drawn on that paper is a subspace of the paper's 2D plane, which itself is a subspace of the 3D space that holds the paper. Equivalently,
we say a subspace \(A\) of a vector space \(B\) is a subset of \(B\) that is itself closed under vector addition and scalar multiplication (and contains \(\vec{0}\)).
\(A \subset B \ \ \text{i.e. A is 'contained' by B}\)
Dimension:
Rank:
\(\subset\) and \(\in\) are different.
Subset of: \(\subset\)
In : \(\in\)
We all know that if we combine red and blue, we get purple. But how does this relate to the kernel or image of a linear transformation? Well, you can simply think about the image as a representation of what is kept from one vector space to another, and the kernel as what information is lost. So, the image above tells us the image of \(V \rightarrow W\) is red and the kernel is blue, since going from purple to red implies that we lost the blue-color information and were left with the red-color information.
Mathematically, we might say:
The image of a linear transformation is the set of all outputs the transformation can produce: a subspace of the vector space you land in.
The kernel of a linear transformation is the set of input vectors that the transformation sends to the zero vector.
In other words, think of the concept of “span” in linear algebra as similar to a toolbox with a set of tools. Each tool in your toolbox represents a vector, and the span is like all the different things you can build using combinations of those tools.
For instance, if you only have a hammer and nails (representing two specific vectors), the span would be all the possible structures you could create using just those two tools. You might be able to build simple things like a small box (analogous to certain vectors in the span), but you wouldn’t be able to build a complex piece of furniture (which might require more tools or different vectors).
If you add more tools (more vectors), the number of things you can create (the span) increases.
\[ \text{image}(T) = \text{span}(\vec{A}_{1},...,\vec{A}_{m})\\ \text{image}(T)=\{\vec{y} \in \mathbb{R}^{2}\mid\vec{y}=A\vec{x},\ \vec{x}\in \mathbb{R}^{2} \} \\ \text{span}(\vec{A}_{1},\vec{A}_{2})=\{c_1\vec{A}_{1}+c_2\vec{A}_{2}\mid c_1,c_2\in \mathbb{R} \} \]
The image asks what kinds of projects can be built with the tools in the toolbox, while the span asks how those tools can be combined to create different projects. Although one focuses on the final builds and the other on the tools themselves, both describe the same range of projects that can be completed. They tell the same story from different perspectives: what can be constructed and how it’s achieved using the available tools.
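To see the image-as-span idea concretely, here is a minimal R sketch (the matrix and input vector are arbitrary illustrations): whatever input we feed in, the output is always a combination of the columns.

# Sketch: the image of A is the span of its columns, so A %*% x is always
# a linear combination of those columns (illustrative values).
A <- matrix(c(1, 3,
              2, 4), nrow = 2)   # columns c(1, 3) and c(2, 4) are the "tools"
x <- c(5, -2)                    # any input vector
A %*% x                          # output of the transformation: c(1, 7)
x[1] * A[, 1] + x[2] * A[, 2]    # the same vector, written as a combination of the columns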
In other words, the image() and kernel() of a linear transformation from one space to another each have the three properties of a subspace:
Whenever a linear transformation occurs, \(\vec{0}\) is in both the information lost (the kernel) and the information kept (the image).
Both the image() & kernel() are closed under scalar multiplication and addition.
In other words, we lose the least information when the rank() equals the number of variables. That is, when the transformation matrix is square, \(A^{-1}\) exists, and \(\text{rank}(A)==\text{ncol}(A)\), we should expect to lose no information. However, when our matrix is "under-fitted" (rank-deficient), we will have lost information we had before.
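We can check this in R with base qr() (a minimal sketch; the two matrices below are just illustrations, the second being rank-deficient):

# Sketch: comparing a full-rank matrix with a rank-deficient ("under-fitted") one.
full <- matrix(c(1, 2,
                 3, 4), nrow = 2, byrow = TRUE)
flat <- matrix(c(1, 2,
                 2, 4), nrow = 2, byrow = TRUE)   # second row is 2x the first
qr(full)$rank == ncol(full)   # TRUE: invertible, no information lost
qr(flat)$rank == ncol(flat)   # FALSE: rank 1 < 2, some inputs collapse to the same output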
For each matrix A in Exercises 1 through 13, find vectors that span the kernel of A. Use paper and pencil.
\[Let: \\ A=\begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} \]
Notice the matrix is of the form:
\[
\begin{bmatrix}
a & b
\\
c & d
\end{bmatrix} \therefore \text{non-unique Transformation}
\]
We therefore can't use geometric intuition to consider the \(\text{kernel}()\).
\[ \therefore \\ \text{kernel}(A)\implies A\vec{x}=\vec{0} \\ \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}=\begin{bmatrix} 0 \\ 0 \end{bmatrix} \\ \implies \\ eq_1:x_1+2x_2=0 \\ eq_2: 3x_1+4x_2=0 \\ \implies \]
\[ \begin{bmatrix} 1 & 2 & 0 \\ 3 & 4 & 0 \end{bmatrix} \implies \text{rref}\left(\begin{bmatrix} 1 & 2 & 0 \\ 3 & 4 & 0 \end{bmatrix}\right) \\ \implies \\ \begin{bmatrix} 1 & 2 & 0 \\ 3 & 4 & 0 \end{bmatrix} \rightarrow \begin{bmatrix} 1 & 2 & 0 \\ 0 & -2 & 0 \end{bmatrix} \rightarrow \begin{bmatrix} 1 & 2 & 0 \\ 0 & 1 & 0 \end{bmatrix} \rightarrow \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix} \\ \therefore \\ \]
\[ x_1=0=x_2 \\ \implies \\ \text{span}\left(\begin{bmatrix} 0 \\ 0 \end{bmatrix}\right) \\ \text{recall: } \\ \text{span}(\vec{A}_{1},\vec{A}_{2})=\{c_1\vec{A}_{1}+c_2\vec{A}_{2}|c_1,c_2\in \mathbb{R} \} \\ \therefore \\ \text{span}(\vec{x})=\{\vec{0}\} \\ \implies \\ \text{kernel}(A)= \{{\vec{0}}\} \]Meaning, we don't lose any information in this linear transformation, since the span indicates that the only vector sent to the null space is \(\vec{0}\).
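A quick numerical check of this result (a sketch using base R's solve(); A is the exercise matrix):

# Sketch: solving A %*% x = 0 for the exercise matrix returns only the zero vector,
# i.e. kernel(A) = {0}.
A <- matrix(c(1, 2,
              3, 4), nrow = 2, byrow = TRUE)
solve(A, c(0, 0))   # unique solution: c(0, 0)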
Consider the image above. To produce a .jpg file, your computer uses a range of numbers to describe the intensity of each color. It then merges the colored matrices, averaging across them, to determine the color of the cat's fur (white).
So, suppose we had those 3 color matrices describing the image above. In the previous homework example, we got \(\{\vec{0}\}\) as the kernel of \(A\), meaning we lost no information in the linear transformation, just like how we lost no color information in the rightward horizontal shear of this building.
(For those more interested in the subject look up “computer vision”)
\[ \text{Let: } \\ A= \begin{bmatrix} 1 & 2 & 3\end{bmatrix} \implies \text{rref}(A)=A \\ \therefore \\ A\vec{x}=\vec{0} \\ \implies \\ eq:x_1+2x_2+3x_3=0 \\ \implies \\ \text{solve in terms-of target variable: } \] \[ x_1=-2x_2-3x_3 \]
\[ x_2 =\frac{-3x_3-x_1}{2} \\ = \frac{-3x_3-(-2x_2-3x_3)}{2} \\ =\frac{0+2x_2}{2} \\ =0+x_2 \] \[ x_3 = \frac{-x_1-2x_2}{3} \\ =\frac{-(-2x_2-3x_3)-2x_2}{3} \\ =\frac{2x_2+3x_3-2x_2}{3} \\ =\frac{0+3x_3}{3} \\ =0+x_3 \\ \implies \]
\[ \therefore \\ \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} -2 \\1\\0 \end{bmatrix}t_1 +\begin{bmatrix} -3 \\0\\1 \end{bmatrix}t_2 \text{ for: } t_1,t_2\in \mathbb{R} \\ \implies \\ \text{kernel}(A)=\text{span}\left(\begin{bmatrix} -2 \\1\\0 \end{bmatrix}, \begin{bmatrix} -3 \\0\\1 \end{bmatrix}\right) \]
\[ \text{span}\left(\begin{bmatrix} -2 \\ 1 \\ 0 \end{bmatrix}, \begin{bmatrix} -3 \\ 0 \\ 1\end{bmatrix}\right)=\{ \begin{bmatrix} -2 \\1\\0 \end{bmatrix}t_1 +\begin{bmatrix} -3 \\0\\1 \end{bmatrix}t_2 \mid t_1,t_2\in \mathbb{R} \} \\ =\text{image}\left( \begin{bmatrix} -2 &-3\\1&0\\0&1 \end{bmatrix}\right) \]
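A quick check in R that these spanning vectors really land in the kernel (a sketch; K holds the two vectors found above):

# Sketch: both spanning vectors of the kernel are mapped to 0 by A.
A <- matrix(c(1, 2, 3), nrow = 1)
K <- cbind(c(-2, 1, 0),
           c(-3, 0, 1))   # columns span the proposed kernel
A %*% K                   # a 1 x 2 matrix of zeros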
\[ \text{Let: }\\ A= \begin{bmatrix} 0 & 0 \\0 & 0 \end{bmatrix}=0^{2 \times2} \\ A\vec{x}=\vec{0} \\ \text{Note: this sends every vector to the zero vector,} \\ \text{so ALL the information in the original vector space is lost.} \\ \implies \]
We should expect:
\[ \text{kernel}(A)= \text{span}\left(\begin{bmatrix} 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 1 \end{bmatrix}\right) = \text{span}(\hat{i}, \hat{j}) \]
\[ \text{Proof: } \\ A\vec{x}=\vec{0} \\ \implies \\ \begin{bmatrix} 0 & 0 \\0 & 0 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}=\begin{bmatrix} 0 \\ 0 \end{bmatrix} \\ \begin{bmatrix} 0 \\ 0 \end{bmatrix}=\begin{bmatrix} 0 \\ 0 \end{bmatrix} \]
In other words, because this transformation \(T(\vec{x})\) sends every 2-dimensional input vector to the zero vector, the \(\text{kernel}()\) of \(T\) is the \(\text{span}()\) of both dimensions, described by the basis vectors \(\hat{i}\) and \(\hat{j}\). To put it simply, this is like the first problem in reverse: we are losing all the information instead of keeping it.
For each matrix A in Exercises 14 through 16, find vectors that span the image of A. Give as few vectors as possible. Use paper and pencil.
\[\text{Let: }\\A= \begin{bmatrix} 1 & 2 & 3\\1 & 2 & 3\\1 & 2 & 3 \end{bmatrix} \implies rref(A)= \begin{bmatrix} 1 & 2 & 3\\0 & 0 & 0\\0 & 0 & 0 \end{bmatrix} \]
\[ \text{consider: } \\ A=\begin{bmatrix} \vec{v}_1 &\vec{v}_2 &\vec{v}_3 \end{bmatrix} \\ \implies \\ A\vec{x}= \begin{bmatrix} 1 \\ 1\\1 \end{bmatrix} x_1 + \begin{bmatrix} 2 \\ 2\\2 \end{bmatrix} x_2 + \begin{bmatrix} 3 \\ 3\\ 3 \end{bmatrix} x_3 \\ \text{recall that we don't want 'redundancy', }\\ \text{so, all spanning vectors must be unique}\\\text{however, note: }\\ \]
\[ \vec{v}_1=\frac{1}{2}\vec{v}_2=\frac{1}{3}\vec{v}_3 \\ \text{meaning, } \vec{v}_2 \text{ and } \vec{v}_3 \text{ are linearly dependent on } \vec{v}_1, \\ \text{so we can describe the image with the single basis vector } \vec{v}_1: \\ \text{image}(A)=\text{span}\left(\begin{bmatrix}1\\1\\1\end{bmatrix}\right) \]
\[ A=\begin{bmatrix} 1 & 1 &1 &1 \\ 1 & 2 & 3&4 \end{bmatrix} \\ \text{rref}(A):\ \begin{bmatrix} 1 & 1& 1& 1 \\ 1& 2& 3& 4 \end{bmatrix} \rightarrow \begin{bmatrix} 1 & 1& 1& 1 \\ 0& 1& 2& 3 \end{bmatrix} \rightarrow \begin{bmatrix} 1 & 0& -1& -2 \\ 0& 1& 2& 3 \end{bmatrix} \implies \\ \therefore \\ \text{the pivots sit in the first and second columns, so take those columns of the original matrix} \\ \text{image(A)}=\text{span}(A[\ ,1], A[\ ,2])=\text{span}\left(\begin{bmatrix}1 \\ 1\end{bmatrix}, \begin{bmatrix}1 \\ 2\end{bmatrix}\right) \]
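In R, the rank tells us how many spanning vectors each image needs (a sketch using base qr(); both matrices are the ones from these exercises):

# Sketch: the rank is the number of columns needed to span the image.
A1 <- matrix(rep(c(1, 2, 3), each = 3), nrow = 3)   # every row is 1 2 3
A2 <- rbind(rep(1, 4), 1:4)                          # the 2 x 4 exercise matrix
qr(A1)$rank   # 1: a single column, e.g. c(1, 1, 1), spans the image
qr(A2)$rank   # 2: two columns, e.g. the first two, span the image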
Find the \(\text{image}() \text{ and kernel}()\) of the 7 main linear transformations.
\[ \text{image}(S) = \begin{cases} \text{span}\left(\begin{bmatrix} C \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ C \end{bmatrix}\right) & \text{, if } C \neq 0 \\ \{\vec{0}\} & \text{, if } C = 0 \end{cases} \]
\[ \text{kernel}(S)=\ ? \\ \begin{bmatrix} C & 0 \\ 0 & C\end{bmatrix} \begin{bmatrix} x_1 \\x_2\end{bmatrix}=\begin{bmatrix} 0 \\0\end{bmatrix} \]
\[ \text{kernel}(S) = \begin{cases} \{\vec{0}\} & \text{, if } C \neq 0 \\ \text{span}\left(\begin{bmatrix} 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 1 \end{bmatrix}\right) & \text{, if } C = 0 \end{cases} \]
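A numerical sanity check of both cases (a sketch; the values of \(C\) are arbitrary):

# Sketch: for C != 0 the scaling matrix is invertible (kernel = {0});
# for C = 0 it sends everything to 0 (kernel = all of R^2).
scaling <- function(C) diag(C, 2)
qr(scaling(3))$rank   # 2: full rank, kernel is only the zero vector
qr(scaling(0))$rank   # 0: every vector is in the kernel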
\[ \text{Let: } \perp_{proj}=\begin{bmatrix} u_1^2 & u_1u_2 \\ u_1u_2 & u_2^2\end{bmatrix} \text{ for: } \hat{u}=<u_1,u_2>\implies||\hat{u}||:=1 \\ \text{image}(\perp_{proj})=? \\ \implies \\ \text{rref}\left(\begin{bmatrix} u_1^2 & u_1u_2 \\ u_1u_2 & u_2^2\end{bmatrix}\right) \\ =\begin{bmatrix} u_1^2 & u_1u_2 \\ u_1u_2 & u_2^2\end{bmatrix}^{R_1:\frac{R_1}{u^2_1}} \\ =\begin{bmatrix} 1 & \frac{u_2}{u_1} \\ u_1u_2 & u_2^2\end{bmatrix}_{R_2:\ R_2-u_1u_2R_1} \\ =\begin{bmatrix} 1 & \frac{u_2}{u_1} \\ 0 & u_2^2-\frac{u_2}{u_1}u_1u_2\end{bmatrix} \\ =\begin{bmatrix} 1 & \frac{u_2}{u_1} \\ 0 & u_2^2-u_2^2\end{bmatrix} \\ =\begin{bmatrix} 1 & \frac{u_2}{u_1} \\ 0 & 0\end{bmatrix} \\ \therefore \\ A[\ ,1] = \begin{bmatrix}u_1^2 \\ u_1u_2\end{bmatrix} = u_1\begin{bmatrix}u_1 \\ u_2\end{bmatrix} \\ \therefore\\ \text{image}(\perp_{proj})=\text{span}\left(\begin{bmatrix}u_1 \\ u_2\end{bmatrix}\right) \]
Now, how much information are we losing due to the transformation?
\[ \text{kernel}(\perp_{proj})=\ ? \\ \perp_{proj}\vec{x}=\vec{0} \\ \text{recall: rref}(\perp_{proj})= \begin{bmatrix} 1 & \frac{u_2}{u_1} \\ 0 & 0\end{bmatrix} \\ \implies \\ \begin{bmatrix} 1 & \frac{u_2}{u_1} \\ 0 & 0\end{bmatrix} \begin{bmatrix}x_1\\x_2\end{bmatrix}=\begin{bmatrix}0\\0\end{bmatrix} \\ eq:\ x_1+\frac{u_2}{u_1}x_2=0 \\ \implies \\ x_1=-\frac{u_2}{u_1}x_2,\quad x_2=x_2 \\\implies\\ \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = t\begin{bmatrix} -\frac{u_2}{u_1} \\ 1 \end{bmatrix},\ t\in\mathbb{R} \\ \implies \\ \text{span}\left(\begin{bmatrix} -\frac{u_2}{u_1} \\ 1 \end{bmatrix}\right) = \text{span}\left(\begin{bmatrix} -u_2\\ u_1 \end{bmatrix}\right) \\ \therefore \\ \text{kernel}(\perp_{proj})=\text{span}\left(\begin{bmatrix} -u_2\\ u_1 \end{bmatrix}\right)_\text{for: }u_1^2 +u_2^2 := 1\]
Furthermore, consider that the vectors spanning the image and the kernel are orthogonal:
\[ \begin{bmatrix}u_1 \\ u_2\end{bmatrix} \cdot \begin{bmatrix} -\frac{u_2}{u_1} \\ 1 \end{bmatrix} = \ ? \\ <u_1,u_2> \cdot <-\frac{u_2}{u_1},1>=-\frac{u_2}{u_1}u_1+u_2=-u_2+u_2=0 \]
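The same relationships can be checked numerically (a sketch with an arbitrarily chosen unit vector \(\hat{u}\)):

# Sketch: for an orthogonal projection onto the line through the unit vector u,
# the image is spanned by u and the kernel by the perpendicular c(-u2, u1).
u <- c(3, 5) / sqrt(34)          # arbitrary unit vector
P <- u %*% t(u)                  # projection matrix [u1^2 u1u2; u1u2 u2^2]
P %*% c(-u[2], u[1])             # ~ c(0, 0): the perpendicular direction is lost
sum(u * c(-u[2], u[1]))          # 0: image and kernel directions are orthogonal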
Consider that an orthogonal projection is a method for decomposing a vector into perpendicular components. So a transformation that inherently decomposes vectors has a structure of orthogonality encoded in the angle between the vectors spanning() the image() and the kernel(). Why is this? And why do we care?
Why do we care?
Principal Component Analysis (PCA) is probably one of the last things we will “talk about” in this class, as it is well above the difficulty of this class. At its core, it's a way of simplifying data. Specifically, PCA is a statistical technique used to reduce the dimensionality of data by identifying the directions (i.e. principal components) along which the data varies the most. In other words, we can accurately describe our data by considering how much it varies; once we have a measure of that, we create principal components which describe that attribute of the data with less computational complexity (i.e. lower dimensionality). Orthogonal projections in particular are integral to PCA because they project the data onto these principal components, which are orthogonal to each other, thereby ensuring that the most significant patterns in the data are preserved while reducing its dimensionality.
Imagine you’re working for a government agency responsible for monitoring ocean life, and you’re managing a massive dataset with 10 million rows of fish height and width measurements. Using a machine learning model with PCA helps you simplify this enormous dataset by focusing on the most critical patterns, like overall fish size and shape. This makes it easier to analyze the data, classify fish species, and monitor population trends, leading to more effective management of marine resources.
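A toy version of that workflow (a sketch: the fish measurements are simulated, and prcomp() is base R's PCA function):

# Sketch: PCA on simulated fish height/width data. The first principal
# component (an orthogonal direction of maximum variance) captures "overall size".
set.seed(1)
n      <- 1000
height <- rnorm(n, mean = 10, sd = 2)
width  <- 0.5 * height + rnorm(n, sd = 0.3)   # width tracks height, plus noise
fish   <- data.frame(height, width)

pca <- prcomp(fish, center = TRUE, scale. = TRUE)
summary(pca)    # PC1 explains most of the variance
pca$rotation    # the orthogonal directions (principal components)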
Why is this?
Recall that when we do an orthogonal projection, we create the vector \(\vec{x}^{||}\) parallel to \(L\), while the orthogonal component \(\vec{x}^{\perp}\) is lost in the transformation. So we should expect the kernel() and image() to be orthogonal, since the image() represents the information we kept (the parallel component) and the kernel() the information we lost (the orthogonal component, which, when summed with the parallel component, gives us the original vector).
Reflections: \(\begin{bmatrix} a & b \\ b & -a\end{bmatrix} \text{for: }a^2+b^2:=1\)
Rotation: \(\begin{bmatrix} a & -b \\ b & a\end{bmatrix} \text{for: }a^2+b^2:=1\)
Rotation and Scaling: \(\begin{bmatrix} Ca & -Cb \\ Cb & Ca\end{bmatrix} \text{for: }a^2+b^2:=1\)
Horizontal Shear: \(\begin{bmatrix} 1 & C \\ 0 & 1\end{bmatrix}\)
Vertical Shear: \(\begin{bmatrix} 1 & 0 \\ C & 1\end{bmatrix}\)
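For these remaining named transformations we can check the kernel numerically: each matrix below has full rank, so, unlike the projection, only \(\vec{0}\) is lost (a sketch; the parameter values \(a, b, C\) are arbitrary):

# Sketch: reflection, rotation, rotation+scaling, and shears all have rank 2,
# so their kernels contain only the zero vector (parameter values are arbitrary).
a <- 0.6; b <- 0.8; C <- 2   # a^2 + b^2 = 1
mats <- list(
  reflection     = matrix(c(a,  b,  b, -a), 2, byrow = TRUE),
  rotation       = matrix(c(a, -b,  b,  a), 2, byrow = TRUE),
  rotation_scale = C * matrix(c(a, -b,  b,  a), 2, byrow = TRUE),
  h_shear        = matrix(c(1,  C,  0,  1), 2, byrow = TRUE),
  v_shear        = matrix(c(1,  0,  C,  1), 2, byrow = TRUE)
)
sapply(mats, function(M) qr(M)$rank)   # all 2: full rank, kernel = {0}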
Imagine a team of data analysts working on a complex project where each analyst specializes in a different type of data. The entire team’s skill set represents the vector space \(\mathbb{R^{n}}\), while smaller teams within, focused on specific tasks like data visualization or statistical modeling, represent subspaces of the vector space. A basis for these subspaces is like having the minimal number of unique skill sets needed to accomplish their specialized task without redundancy. Linear independence means that each analyst’s contribution is essential—no one’s skills are just a repeat of someone else’s.
Recall that the image() and kernel() of a linear transformation both contain \(\vec{0}\).
A subset of a vector space that is closed under addition and scalar multiplication is called a linear subspace of \(\mathbb{R^{n}}\).
In other words, the kernel is a subspace of the vector space you leave via the linear transformation (the domain); the image is a subspace of the space you transition to (the codomain).
In other words, in the real world we would like to get rid of redundant information. When we describe information in terms of vectors, we can maximize efficiency by keeping only a basis for all the other vectors we may want to describe. A vector counts as redundant if it is a linear combination of the other vectors we use to describe the vector space. The set of non-redundant vectors in a subspace forms a basis; these are the linearly independent vectors in your linear subspace.
In other words, when the rref of a matrix shows the identity pattern, each pivot column corresponds to one of the linearly independent columns spanning the image().
In other words, when a non-trivial relation exists, this means that one or more vectors describing your space are redundant and, therefore, linearly dependent, so you don't yet have a set of basis vectors. This is because, if some vectors can be written as linear combinations of the others, then there is a combination of them, with not all coefficients zero, that sums to \(\vec{0}\).
Meaning, non-trivially related vectors are considered “linearly dependent”; the linearly dependent directions are described by the kernel(), and the linearly independent ones live inside the image(). We should expect the redundant directions to go to the kernel(), since they are not required to span the image() of the new vector space we create through the linear transformation and are therefore discarded into the kernel() pile.
In other words, basis vectors are the minimal set of vectors that uniquely describe the space they span, i.e. the vectors spanning the image().
In other words, the dimension is the number of independent pieces of information required to describe a vector space.
In other words, if the vectors spanning a subspace are independent, the only information lost is what we always lose, \(\vec{0}\):
\[ \text{kernel}(T)=\{\vec{0}\} \ \text{or, } \\ A\vec{v}=\vec{0} \ \text{for: }\vec{v}:=\vec{0} \]
The dimension of the image is the number of independent directions spanned by the columns of the matrix.
In other words, the dimensions of the image and kernel of a transformation sum to the number of columns:
\[ \dim(\text{Ker}(A))+\text{Rank}(A)=\text{ncol}(A) \\ \dim(\text{Ker}(A))+\dim(\text{im}(A))=\text{ncol}(A) \]
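We can illustrate the rank–nullity relationship with the earlier \(1\times3\) example (a sketch; the kernel basis is the one found by hand above):

# Sketch: rank-nullity for A = [1 2 3]: rank 1 + kernel dimension 2 = 3 columns.
A <- matrix(c(1, 2, 3), nrow = 1)
K <- cbind(c(-2, 1, 0),
           c(-3, 0, 1))             # kernel basis found by hand earlier
qr(A)$rank + ncol(K) == ncol(A)      # TRUE: 1 + 2 == 3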
In other words, we describe a vector space by a basis:
\[ \text{Let: } \mathfrak{B} \subset V_{ectorSpace} \\ \mathfrak{B}:= \text{Basis}=\{ \vec{a}, \vec{b}\} \ \\ \text{for: }\{ \vec{a}, \vec{b}\}:= \text{vectors spanning the vector space} \]
We can determine whether or not some vector \(\vec{c}\) is present in the vector space spanned by \(\{ \vec{a}, \vec{b}\}\) by creating an augmented matrix of the following form:
\[ \begin{bmatrix} a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \end{bmatrix} \\ \therefore \\ \text{rref}\left(\begin{bmatrix} a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \end{bmatrix}\right) \\ \implies\\ \text{either } \vec{x} :=(x_1, x_2) \text{ with } \vec{c}=x_1\vec{a}+x_2\vec{b} \text{ or, } \\\implies \\ \vec{c} \notin V \]
In other words, by solving for the coefficients \(x_1, x_2\) such that \(x_1\vec{a}+x_2\vec{b}=\vec{c}\), we are effectively finding out whether or not our new vector belongs to the vector space spanned by \(\vec{a}\) and \(\vec{b}\).
If our vector \(\vec{c}\) belongs to the same vector space then we can interpret the results in 2 ways:
As coordinates on the plane spanned by both \(\vec{a} \text{ & }\vec{b}\) –i.e. \((x_1, x_2)\) or,
as a set of instructions using some linear combination of both \(\vec{a} \text{ & }\vec{b}\) to go from the origin to the point in space– i.e. \(x_1\vec{a}+x_2\vec{b}\)
\[ \text{i.e. } \begin{bmatrix} \vec{x}\end{bmatrix}_\mathfrak{B}=(x_1,x_2) =\mathfrak{B}_{Coordinates} \\ \therefore \\ \begin{bmatrix} \vec{a} & \vec{b}\end{bmatrix}\begin{bmatrix} \vec{x}\end{bmatrix}_\mathfrak{B}=\vec{x} \]
In other words, multiplying the matrix whose columns are the spanning vectors by the coordinates relative to that basis yields the position, relative to the origin, of the vector with those coordinates.
In other words, linearity holds in the linear transformations involved in coordinate transformations.
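In R, finding \(\mathfrak{B}\)-coordinates amounts to solving that augmented system with solve() (a sketch; the basis vectors \(\vec{a}, \vec{b}\) and the target vector \(\vec{x}\) are arbitrary):

# Sketch: B-coordinates of x with respect to the basis {a, b} (arbitrary vectors).
a <- c(1, 2)
b <- c(3, 1)
x <- c(7, 9)
B      <- cbind(a, b)    # basis matrix [a b]
coords <- solve(B, x)    # solves B %*% coords == x
coords                   # the B-coordinates: c(4, 1)
B %*% coords             # reconstructs x, i.e. x1*a + x2*b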
Imagine you and a team of scientists are currently studying astrobiology, and there’s been some exciting news about potential life in space! However, your team remains cautiously skeptical about the validity of this claim. Fortunately, you work for a NASA laboratory that captures data from the exoplanet in question. But disaster strikes—NASA encounters significant issues with data transmission!
The satellites are incapable of sending the large files across space; signal degradation is the cause! After reverse engineering the issue, your team of scientists and engineers has diagnosed and solved it. Can you figure it out?
Standard Basis Approach:
Suppose we want as much detail as possible, so we attempt to describe all colors emitted by the planet. We do this by considering 3 dimensions described by the standard basis vectors \(\hat{i}, \hat{j}, \text{ and } \hat{k}\), where each standard basis vector represents one color channel ( \(\vec{R}_{ed}, \vec{G}_{reen}, \vec{B}_{lue}\) ) whose intensities range from \(0\rightarrow1\) and which, when combined, return the actual color.
\[ \text{Basis Vectors}= \{ \vec{R}_{ed}, \vec{G}_{reen}, \vec{B}_{lue} \} \\ \implies \\ \text{Basis Matrix } = \begin{bmatrix}1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1\end{bmatrix} \\ \implies \\ \text{DATA :} \begin{bmatrix}0.12 & 0.57 & 0.83 \\0.75 & 0.34 & 0.60 \\0.51 & 0.18 & 0.92\end{bmatrix},\ \begin{bmatrix}0.09 & 0.67 & 0.28 \\0.41 & 0.88 & 0.15 \\0.53 & 0.25 & 0.70\end{bmatrix},\ \begin{bmatrix}0.33 & 0.61 & 0.78 \\0.47 & 0.10 & 0.55 \\0.86 & 0.22 & 0.31\end{bmatrix} \\ \text{Total Intensities} = \{4.82,3.96,4.23\} \]
Specific Basis Approach:
\[ \text{Basis Vectors}= \{ 1.5\vec{R}_{ed}, 0.8\vec{G}_{reen}, 0.1\vec{B}_{lue} \} \\ \implies \\ \text{Basis Matrix } = \begin{bmatrix}1.5 & 0 & 0 \\ 0 & 0.8 & 0 \\ 0 & 0 & 0.1\end{bmatrix}\\ \implies \\\text{DATA :}\begin{bmatrix}0.18 & 0.46 & 0.08 \\ 1.13 & 0.27 & 0.06 \\ 0.77 & 0.14 & 0.09\end{bmatrix},\ \begin{bmatrix}0.14 & 0.54 & 0.03 \\ 0.62 & 0.70 & 0.02 \\ 0.80 & 0.20 & 0.07\end{bmatrix},\ \begin{bmatrix}0.50 & 0.49 & 0.08 \\ 0.71 & 0.08 & 0.06 \\ 1.29 & 0.18 & 0.03\end{bmatrix} \\ \text{Total Intensities} = \{2.348,2.244,2.435\} \\ \triangle_{\text{Intensity}}=\{2.472,1.716,1.795\} \]
By shifting our perspective to consider the three dimensions of color from a different basis, we effectively address both challenges facing NASA. This new approach increases our sensitivity to detecting water while also reducing the overall size of our images by minimizing sensitivity to non-red colors (green and blue). This change is evident in the mathematics: using the same data, we obtain significantly different results, with fewer detections outside the red spectrum and an overall decrease in the total intensity recorded for each color.
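A minimal sketch of the idea in R, using the “specific” basis matrix from above applied to a single hypothetical pixel (the intensity values here are made up for illustration):

# Sketch: re-weighting one hypothetical pixel's RGB intensities under the
# "specific" basis: red is amplified (1.5x) while blue is nearly dropped (0.1x).
basis_matrix <- diag(c(1.5, 0.8, 0.1))           # Red, Green, Blue weights
pixel_rgb    <- c(R = 0.40, G = 0.50, B = 0.90)  # hypothetical raw intensities
basis_matrix %*% pixel_rgb                       # 0.60, 0.40, 0.09
sum(pixel_rgb)                                   # total intensity before: 1.80
sum(basis_matrix %*% pixel_rgb)                  # total intensity after:  1.09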