Introduction to Machine Learning Part I

Basic Terms and Concepts

Ziwei Ma

6/15/2021

What is Machine Learning?

Why should machines have to learn?

Wellsprings of Machine Learning

Varieties of Machine Learning

• Functions

• Logic programs and rule sets

• Finite-state machines

• Grammars

• Problem solving systems

Types of Learning

Key ML Terminology

Examples to illustrate ML terms

For example, the following table shows 5 labeled examples from a data set containing information about housing prices in California:

Labeled examples

| housingMedianAge (feature) | totalRooms (feature) | totalBedrooms (feature) | medianHouseValue (label) |
|---|---|---|---|
| 15 | 5612 | 1283 | 66900 |
| 19 | 7650 | 1901 | 80100 |
| 17 | 720 | 174 | 85700 |
| 14 | 1501 | 337 | 73400 |
| 20 | 1454 | 326 | 65500 |

Examples to illustrate ML terms

Unlabeled examples

| housingMedianAge (feature) | totalRooms (feature) | totalBedrooms (feature) |
|---|---|---|
| 42 | 1686 | 361 |
| 34 | 1226 | 180 |
| 33 | 1077 | 271 |

Descending into ML: Linear Regression

Background: It has long been known that crickets (an insect species) chirp more frequently on hotter days than on cooler days. For decades, professional and amateur scientists have cataloged data on chirps-per-minute and temperature. As a birthday gift, your Aunt Ruth gives you her cricket database and asks you to learn a model that predicts temperature from chirps-per-minute. Using this data, you want to explore this relationship.

Descending into ML: Linear Regression

Using the equation for a line, you could write down this relationship as follows: \[y=mx+b\] where:

- \(y\) is the temperature in Celsius—the value we’re trying to predict.
- \(m\) is the slope of the line.
- \(x\) is the number of chirps per minute—the value of our input feature.
- \(b\) is the y-intercept.

By convention in machine learning, you’ll write the equation for a model slightly differently: \[y'=b+w_1 x_1\] where:

- \(y'\) is the temperature in Celsius—the value we’re trying to predict.
- \(w_1\) is the weight of feature 1. Weight is the same concept as the “slope” \(m\) in the traditional equation of a line.
- \(x_1\) is a feature (a known input).
- \(b\) is the bias (the y-intercept), sometimes referred to as \(w_0\).

Although this model uses only one feature, a more sophisticated model might rely on multiple features, each having a separate weight (\(w_1\), \(w_2\), etc.). For example, a model that relies on three features might look as follows: \[y'=b+w_1 x_1+w_2 x_2+w_3 x_3\]
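To make this concrete, here is a minimal sketch of how such a three-feature model computes a prediction; the weights and bias below are illustrative placeholders, not trained values:

```python
# Minimal sketch of a three-feature linear model: y' = b + w1*x1 + w2*x2 + w3*x3.
# The weights and bias below are illustrative placeholders, not trained values.

def predict(features, weights, bias):
    """Return y' = bias + sum of weight_i * feature_i."""
    return bias + sum(w * x for w, x in zip(weights, features))

# One housing example: housingMedianAge, totalRooms, totalBedrooms.
x = [15, 5612, 1283]
w = [120.0, 8.0, -5.0]   # hypothetical weights, one per feature
b = 50000.0              # hypothetical bias (y-intercept)

print(predict(x, w, b))  # a single predicted medianHouseValue
```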

Descending into ML: Training and Loss

Training a model simply means learning (determining) good values for all the weights and the bias from labeled examples. Loss is a number indicating how bad the model’s prediction was on a single example; if the model’s prediction is perfect, the loss is zero.

Descending into ML: Squared Loss

The linear regression models we’ll examine here use a loss function called squared loss (also known as \(L_2\) loss). The squared loss for a single example is \((y-y')^2\); averaging the squared loss over all \(N\) examples in a data set gives the mean squared error (MSE): \[MSE=\frac{1}{N}\sum_{(x,y)}(y-y')^2\]

Remark: Although MSE is commonly used in machine learning, it is neither the only practical loss function nor the best loss function for all circumstances.
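As a quick illustration, the following sketch computes MSE for a one-feature model; the chirp and temperature values are made up for demonstration:

```python
# Mean squared error: the average of (y - y')^2 over all labeled examples.

def mse(xs, ys, w1, b):
    """MSE of the one-feature model y' = b + w1 * x1."""
    return sum((y - (b + w1 * x)) ** 2 for x, y in zip(xs, ys)) / len(xs)

chirps = [20, 16, 24, 18]          # chirps per minute (feature)
temps = [31.4, 22.0, 35.0, 25.3]   # temperature in Celsius (label)

print(mse(chirps, temps, w1=1.3, b=5.0))
```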

Reducing Loss: An Iterative Approach

In ML, the most commonly used methods for reducing loss proceed iteratively.

Iterative learning might remind you of the “Hot and Cold” kids’ game for finding a hidden object like a thimble. In this game, the “hidden object” is the best possible model. You’ll start with a wild guess (“the value of \(w_1\) is 0”) and wait for the system to tell you what the loss is. Then, you’ll try another guess (“the value of \(w_1\) is 0.5”) and see what the loss is. Aah, you’re getting warmer. Actually, if you play this game right, you’ll usually be getting warmer. The real trick to the game is trying to find the best possible model as efficiently as possible.

Reduce Loss: Gradient Descent

Suppose we had the time and the computing resources to calculate the loss for all possible values of \(w_1\). For the kind of regression problems we’ve been examining, the resulting plot of loss vs. \(w_1\) will always be convex. In other words, the plot will always be bowl-shaped, kind of like this:

Reduce Loss: Gradient Descent

Convex problems have only one minimum; that is, only one place where the slope is exactly 0. That minimum is where the loss function converges.

Calculating the loss function for every conceivable value of \(w_1\) over the entire data set would be an inefficient way of finding the convergence point. Let’s examine a better mechanism—very popular in machine learning—called gradient descent.

  1. The first stage in gradient descent is to pick a starting value (a starting point) for \(w_1\).

  2. The gradient descent algorithm then calculates the gradient of the loss curve at the starting point.

Note: a gradient is a vector, so it has both of the following characteristics: a direction and a magnitude.

Note: The gradient always points in the direction of steepest increase in the loss function. The gradient descent algorithm takes a step in the direction of the negative gradient in order to reduce loss as quickly as possible.

  3. To determine the next point along the loss function curve, the gradient descent algorithm adds some fraction of the gradient’s magnitude to the starting point as shown in the following figure:
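Putting the three steps together, here is a minimal sketch of gradient descent on the MSE of the one-feature model \(y'=b+w_1 x_1\); the starting value, learning rate, and data are all illustrative:

```python
# Minimal gradient descent on the MSE of y' = b + w1*x1, updating w1 only.

def grad_w1(xs, ys, w1, b):
    """d(MSE)/d(w1) = (2/N) * sum(-(y - y') * x) over all examples."""
    n = len(xs)
    return (2 / n) * sum(-(y - (b + w1 * x)) * x for x, y in zip(xs, ys))

xs = [20, 16, 24, 18]              # chirps per minute
ys = [31.4, 22.0, 35.0, 25.3]      # temperature in Celsius

w1, b = 0.0, 5.0                   # step 1: pick a starting value for w1
learning_rate = 0.001

for _ in range(1000):
    g = grad_w1(xs, ys, w1, b)     # step 2: gradient at the current point
    w1 -= learning_rate * g        # step 3: step in the negative gradient direction

print(w1)   # settles near the w1 that minimizes MSE (b held fixed)
```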

Reduce Loss: Learning Rate

As noted, the gradient vector has both a direction and a magnitude. Gradient descent algorithms multiply the gradient by a scalar known as the learning rate (also sometimes called step size) to determine the next point. For example, if the gradient magnitude is 2.5 and the learning rate is 0.01, then the gradient descent algorithm will pick the next point 0.025 away from the previous point.
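In symbols, with \(\eta\) denoting the learning rate and \(L\) the loss, each update steps against the gradient; with the numbers above:

\[w_1 \leftarrow w_1 - \eta\,\frac{\partial L}{\partial w_1}, \qquad |\Delta w_1| = 0.01 \times 2.5 = 0.025\]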

Hyperparameters are the knobs that programmers tweak in machine learning algorithms. Most machine learning programmers spend a fair amount of time tuning the learning rate.

Reduce Loss: Stochastic Gradient Descent

When the loss function is NOT convex (for complicated models, such as neural networks), gradient descent depends heavily on the initial values and may converge only to a local minimum. In addition, computing the gradient over the full data set at every step can be expensive; stochastic gradient descent (SGD) reduces this cost by estimating the gradient from a single randomly chosen example (or a small batch of examples) per iteration.
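A minimal sketch of the stochastic variant of the loop above, estimating the gradient from one randomly chosen example per step (all values illustrative):

```python
import random

# Stochastic gradient descent: estimate the gradient from one example per update.

def grad_w1_single(x, y, w1, b):
    """Gradient of the single-example squared loss (y - y')^2 w.r.t. w1."""
    return -2 * (y - (b + w1 * x)) * x

xs = [20, 16, 24, 18]
ys = [31.4, 22.0, 35.0, 25.3]

w1, b = 0.0, 5.0
learning_rate = 0.001

for _ in range(5000):
    i = random.randrange(len(xs))        # pick one example at random
    w1 -= learning_rate * grad_w1_single(xs[i], ys[i], w1, b)

print(w1)   # noisier than full-batch gradient descent, but far cheaper per step
```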

Model Evaluation


In modern times, we’ve formalized Ockham’s razor into the fields of statistical learning theory and computational learning theory. These fields have developed generalization bounds, statistical descriptions of a model’s ability to generalize to new data based on factors such as:

  1. the complexity of the model
  2. the model’s performance on training data

A machine learning model aims to make good predictions on new, previously unseen data. But if you are building a model from your data set, how would you get the previously unseen data? Well, one way is to divide your data set into two subsets:

  1. training set — a subset to train a model.
  2. test set — a subset to test the model.
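One common way to make this split in Python is scikit-learn’s train_test_split (assuming scikit-learn is available; any random partition would do). The features and labels here are illustrative:

```python
from sklearn.model_selection import train_test_split

# Illustrative features and labels (e.g., rows from the housing table above).
X = [[15, 5612, 1283], [19, 7650, 1901], [17, 720, 174],
     [14, 1501, 337], [20, 1454, 326]]
y = [66900, 80100, 85700, 73400, 65500]

# Hold out 20% of the examples as a test set; fix the seed for reproducibility.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)
```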

Training and Test Sets: Splitting Data

You could imagine slicing the single data set as follows:

Make sure that your test set meets the following two conditions:

  1. Is large enough to yield statistically meaningful results.
  2. Is representative of the data set as a whole. In other words, don’t pick a test set with different characteristics than the training set.

Validation Set: Another Partition

This partitioning enabled you to train on one set of examples and then to test the model against a different set of examples. With two partitions, the workflow could look as follows:

You can greatly reduce your chances of overfitting by partitioning the data set into the three subsets shown in the following figure:
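One way to carve out all three subsets is to apply the same kind of random split twice (again assuming scikit-learn; the 60/20/20 proportions and data are illustrative):

```python
from sklearn.model_selection import train_test_split

X = [[15, 5612, 1283], [19, 7650, 1901], [17, 720, 174],
     [14, 1501, 337], [20, 1454, 326]]
y = [66900, 80100, 85700, 73400, 65500]

# First split off the test set, then split the remainder into train/validation.
X_rest, X_test, y_rest, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)
X_train, X_val, y_train, y_val = train_test_split(
    X_rest, y_rest, test_size=0.25, random_state=42)   # 0.25 * 0.8 = 0.2 overall
```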

Representation: Feature Engineering

Many machine learning models must represent the features as real-numbered vectors since the feature values must be multiplied by the model weights.
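For instance, a categorical string feature must be mapped to numbers before it can be multiplied by a weight. A minimal sketch using one-hot encoding (the street names are illustrative):

```python
# One-hot encoding: map each category to a real-valued vector containing a single 1.

vocabulary = ["Charleston Road", "Shorebird Way", "Rengstorff Avenue"]

def one_hot(value, vocab):
    """Return a vector with 1.0 at the category's index and 0.0 elsewhere."""
    return [1.0 if v == value else 0.0 for v in vocab]

print(one_hot("Shorebird Way", vocabulary))   # [0.0, 1.0, 0.0]
```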