.row[

.col-7[

.title[
# Linear Regression
]

.subtitle[
## Linear regression with multiple features
]

.author[
### Laxmikant Soni <br> [Web-Site](https://laxmikants.github.io) <br> [<i class="fab fa-github"></i>](https://github.com/laxmiaknts) [<i class="fab fa-twitter"></i>](https://twitter.com/laxmikantsoni09)
]

.affiliation[
]

]

.col-5[

.logo[
<!-- -->
]

]

]

---
class: very-large-body

# Multiple features

.pull-top[

Linear regression with multiple variables is also known as “multivariate linear regression”. We now introduce notation for equations that can have any number of input variables.

`\(\begin{align*} x_j^{(i)} &= \text{value of feature } j \text{ in the } i^{th} \text{ training example} \newline x^{(i)} &= \text{the column vector of all the feature inputs of the } i^{th} \text{ training example} \newline m &= \text{the number of training examples} \newline n &= \left| x^{(i)} \right| \; \text{(the number of features)} \end{align*}\)`

]

---
class: large-body

# Hypothesis function

.pull-top[

`\(h_\theta(x) = \theta_0 + \theta_1 x_1 + \theta_2 x_2 + \theta_3 x_3 + \dots + \theta_n x_n\)`

To develop intuition about this function, we can think of `\(\theta_0\)` as the basic price of a house, `\(\theta_1\)` as the price per square meter, `\(\theta_2\)` as the price per floor, and so on; `\(x_1\)` is then the number of square meters in the house, `\(x_2\)` the number of floors, etc.

Using the definition of matrix multiplication, our multivariate hypothesis function can be concisely represented as:

`\(h_\theta(x) = \begin{bmatrix} \theta_0 & \theta_1 & \cdots & \theta_n \end{bmatrix} \begin{bmatrix} x_0 \\ x_1 \\ \vdots \\ x_n \end{bmatrix} = \theta^T x\)`

]

---
class: large-body

# Hypothesis function

.pull-top[

This is a vectorization of our hypothesis function for one training example; see the lessons on vectorization to learn more.

`\(x_0^{(i)} = 1 \text{ for } i \in 1, \dots, m\)`

`\(h_\theta(x) = \theta_0 + \theta_1 x_1 + \theta_2 x_2 + \theta_3 x_3 + \dots + \theta_n x_n\)`

[Note: So that we can do matrix operations with `\(\theta\)` and `\(x\)`, we set `\(x_0^{(i)} = 1\)` for all values of `\(i\)`. This makes the two vectors `\(\theta\)` and `\(x^{(i)}\)` match each other element-wise, that is, have the same number of elements: `\(n + 1\)`.]
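As a minimal NumPy sketch of this single-example hypothesis (the feature values and `\(\theta\)` below are made-up numbers for illustration), the prediction is just a dot product once `\(x_0 = 1\)` is prepended:

```python
import numpy as np

theta = np.array([50.0, 0.2, 25.0])   # [theta_0, theta_1, theta_2], made-up values
x = np.array([1.0, 104.0, 2.0])       # x_0 = 1 prepended to the two feature values

h = theta @ x                         # h_theta(x) = theta^T x
```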
The training examples are stored in `\(X\)` row-wise, like so:

`\(X = \begin{bmatrix} x_0^{(1)} & x_1^{(1)} \\ x_0^{(2)} & x_1^{(2)} \\ x_0^{(3)} & x_1^{(3)} \end{bmatrix}, \quad \theta = \begin{bmatrix} \theta_0 \\ \theta_1 \end{bmatrix}\)`

You can then calculate the hypothesis for all training examples at once, as a column vector of size `\(m \times 1\)`, with:

`\(h_\theta(X) = X\theta\)`

]

---
class: large-body

# Cost function

.pull-top[

For the parameter vector `\(\theta\)`, the cost function is

`\(J(\theta) = \frac{1}{2m} \sum_{i=1}^{m} (h_\theta(x^{(i)}) - y^{(i)})^2\)`

The vectorized version is:

`\(J(\theta) = \frac{1}{2m} (X\theta - \bar{y})^T (X\theta - \bar{y})\)`

where `\(\bar{y}\)` denotes the vector of all the y values.

]

---
class: large-body

# Gradient Descent for Multiple Variables

.pull-top[

The gradient descent equation has the same form as before; we simply repeat it for all `\(n + 1\)` parameters, updating them simultaneously.

repeat until convergence:

`\(\theta_0 := \theta_0 - \alpha \frac{1}{m} \sum_{i = 1}^{m} (h_\theta(x^{(i)}) - y^{(i)}) \cdot x_0^{(i)}\)`

`\(\theta_1 := \theta_1 - \alpha \frac{1}{m} \sum_{i = 1}^{m} (h_\theta(x^{(i)}) - y^{(i)}) \cdot x_1^{(i)}\)`

`\(\theta_2 := \theta_2 - \alpha \frac{1}{m} \sum_{i = 1}^{m} (h_\theta(x^{(i)}) - y^{(i)}) \cdot x_2^{(i)}\)`

In other words, repeat until convergence:

`\(\theta_j := \theta_j - \alpha \frac{1}{m} \sum_{i = 1}^{m} (h_\theta(x^{(i)}) - y^{(i)}) \cdot x_j^{(i)} \quad \text{for } j := 0 \dots n\)`

]

---
class: large-body

# Feature normalization

.pull-top[

Gradient descent converges more quickly when the input features are on roughly the same scale. Two techniques help with this: feature scaling and mean normalization.

Feature scaling involves dividing the values of an input variable by its range (max - min), resulting in a new range of just 1. Mean normalization involves subtracting the average value of an input variable from its values, resulting in a new average for that variable of just zero.

To implement both of these techniques, adjust your input values as shown in this formula:

`\(x_i := \frac{x_i - \mu_i}{s_i}\)`

where `\(\mu_i\)` is the average of all the values for feature `\(i\)` and `\(s_i\)` is either the range of values (max - min) or the standard deviation.

]

---
class: large-body

# Features and polynomial regression

.pull-top[

We can improve our features and the form of our hypothesis function in a couple of different ways: we can combine existing features into new ones (for example, `\(x_3 = x_1 \cdot x_2\)`), and, since the hypothesis need not be linear (a straight line) if that does not fit the data well, we can change its behavior or curve by making it a quadratic, cubic or square root function (or any other form).

For example, a cubic hypothesis in a single feature `\(x_1\)`:

`\(h_\theta(x) = \theta_0 + \theta_1 x_1 + \theta_2 x_1^2 + \theta_3 x_1^3\)`

To make it a square root function, we could use:

`\(h_\theta(x) = \theta_0 + \theta_1 x_1 + \theta_2 \sqrt{x_1}\)`

]

---
class: large-body

# Normal Equation

.pull-top[

The “Normal Equation” is a method of finding the optimal `\(\theta\)` analytically, without iteration:

`\(\theta = (X^T X)^{-1} X^T y\)`

There is no need to do feature scaling with the normal equation.

]

---
class: inverse, center, middle

# Thanks

---
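class: large-body

# Appendix: vectorized implementation sketch

.pull-top[

A minimal NumPy sketch of the vectorized formulas from the cost-function, gradient-descent, feature-normalization and normal-equation slides. The training data, learning rate `alpha` and iteration count are made-up values for illustration only.

```python
import numpy as np

def normalize_features(X):
    """Mean-normalize and scale each column: (x - mu) / s, with s = standard deviation."""
    mu = X.mean(axis=0)
    sigma = X.std(axis=0)
    return (X - mu) / sigma, mu, sigma

def cost(X, y, theta):
    """J(theta) = 1/(2m) * (X theta - y)^T (X theta - y)."""
    m = len(y)
    residual = X @ theta - y
    return (residual @ residual) / (2 * m)

def gradient_descent(X, y, theta, alpha=0.3, iterations=2000):
    """Simultaneously update all theta_j: theta := theta - (alpha/m) * X^T (X theta - y)."""
    m = len(y)
    for _ in range(iterations):
        theta = theta - (alpha / m) * (X.T @ (X @ theta - y))
    return theta

def normal_equation(X, y):
    """theta = (X^T X)^{-1} X^T y; pinv also handles a non-invertible X^T X."""
    return np.linalg.pinv(X.T @ X) @ X.T @ y

# Made-up training set: 3 examples, 2 features (square meters, floors)
X_raw = np.array([[104.0, 2.0], [150.0, 3.0], [80.0, 1.0]])
y = np.array([300.0, 420.0, 250.0])

X_norm, mu, sigma = normalize_features(X_raw)
X = np.c_[np.ones(len(y)), X_norm]                            # prepend the x_0 = 1 column

theta_gd = gradient_descent(X, y, np.zeros(X.shape[1]))
theta_ne = normal_equation(np.c_[np.ones(len(y)), X_raw], y)  # no feature scaling needed here

print("J(theta) after gradient descent:", cost(X, y, theta_gd))
```

Using `np.linalg.pinv` rather than a plain inverse keeps the normal-equation sketch working even when `\(X^T X\)` is non-invertible (for example, with redundant features).

]

---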
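class: large-body

# Appendix: polynomial features sketch

.pull-top[

To illustrate polynomial regression, a minimal sketch with a made-up single input `\(x_1\)`: the cubic hypothesis is still linear regression, just on the constructed columns `\(1, x_1, x_1^2, x_1^3\)`.

```python
import numpy as np

x1 = np.array([1.0, 2.0, 3.0, 4.0])                  # made-up single feature
X_poly = np.c_[np.ones_like(x1), x1, x1**2, x1**3]   # columns: 1, x1, x1^2, x1^3

theta = np.zeros(X_poly.shape[1])                    # one parameter per column
h = X_poly @ theta                                   # h_theta(X) = X theta, as before
```

Because `\(x_1^3\)` grows much faster than `\(x_1\)`, feature scaling becomes especially important when polynomial features are used with gradient descent.

]

---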