k-Nearest Neighbors (KNN) algorithm for classification and regression.
The model representation for KNN is the entire training dataset. KNN has no model other than storing the entire dataset, so there is no learning required. Efficient implementations can store the data using complex data structures like k-d trees to make look-up and matching of new patterns during prediction efficient.
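As a rough illustration of the k-d tree idea, the sketch below builds a tree over a toy dataset with SciPy's cKDTree and queries it for the nearest neighbors of a new point; the data and variable names are made up for the example.

```python
import numpy as np
from scipy.spatial import cKDTree

# Toy training data: 6 instances with 2 real-valued input attributes (illustrative only).
X_train = np.array([
    [1.0, 2.0],
    [1.5, 1.8],
    [5.0, 8.0],
    [8.0, 8.0],
    [1.0, 0.6],
    [9.0, 11.0],
])

tree = cKDTree(X_train)           # build the k-d tree once over the stored dataset
new_point = np.array([1.2, 1.5])  # a new pattern to match during prediction
distances, indices = tree.query(new_point, k=3)  # 3 most similar training instances
print(indices)    # positions of the 3 nearest neighbors in X_train
print(distances)  # their Euclidean distances to the new point
```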
KNN makes predictions using the training dataset directly. Predictions are made for a new data point by searching through the entire training set for the K most similar instances (the neighbors) and summarizing the output variable for those K instances. For regression, this might be the mean output value; for classification, it might be the mode (most common) class value.
To determine which of the K instances in the training dataset are most similar to a new input, a distance measure is used. For real-valued input variables, the most popular distance measure is Euclidean distance, calculated as the square root of the sum of the squared differences between point a and point b across all n input attributes i.
\[ \text{Euclidean Distance}(a, b) = \sqrt{\sum_{i=1}^{n}{(a_i - b_i)}^{2}} \]

KNN has been around for a long time and has been very well studied. As such, different disciplines have different names for it, for example instance-based learning (the raw training instances themselves form the model) and lazy learning (computation is deferred until a prediction is required).
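As a concrete illustration of the distance formula and the neighbor search it drives, here is a minimal pure-Python sketch; the function names and toy rows are made up for the example, and training rows are assumed to be lists of real-valued attributes.

```python
import math

def euclidean_distance(a, b):
    """Square root of the sum of squared differences across all input attributes."""
    return math.sqrt(sum((a_i - b_i) ** 2 for a_i, b_i in zip(a, b)))

def get_neighbors(train, new_row, k):
    """Return the k training rows most similar (smallest distance) to new_row."""
    by_distance = sorted(train, key=lambda row: euclidean_distance(new_row, row))
    return by_distance[:k]

# Example: the 2 most similar rows to [1.2, 1.5] in a toy dataset.
train = [[1.0, 2.0], [1.5, 1.8], [5.0, 8.0], [1.0, 0.6]]
print(get_neighbors(train, [1.2, 1.5], k=2))
```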
KNN can be used for regression and classification problems.
KNN for Regression
When KNN is used for regression problems, the prediction is based on the mean or the median of the K most similar instances.
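A minimal sketch of such a regression prediction, reusing the hypothetical euclidean_distance helper from the earlier sketch; the mean of the neighbors' outputs is returned, and the median could be used instead.

```python
def knn_predict_regression(train_X, train_y, new_row, k):
    """Predict a real value as the mean output of the k most similar instances."""
    order = sorted(range(len(train_X)),
                   key=lambda i: euclidean_distance(new_row, train_X[i]))
    neighbor_outputs = [train_y[i] for i in order[:k]]
    return sum(neighbor_outputs) / k  # or statistics.median(neighbor_outputs)
```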
KNN for Classification
When KNN is used for classification, the output can be calculated as the class with the highest frequency among the K most similar instances. Each instance in essence votes for its class, and the class with the most votes is taken as the prediction. Class probabilities can be calculated as the normalized frequency of samples that belong to each class in the set of K most similar instances for a new data instance. For example, in a binary classification problem (class is 0 or 1):
\[ p(class = 0) = \frac{count(class = 0)}{count(class = 0) + count(class = 1)} \]

If you have an even number of classes (e.g. 2), it is a good idea to choose an odd value for K to avoid a tie, and conversely an even value for K when you have an odd number of classes. Ties can also be broken consistently by expanding K by 1 and looking at the class of the next most similar instance in the training dataset.
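The voting, class-probability, and tie-breaking logic described above might be sketched as follows; the names are again illustrative, and euclidean_distance is the helper assumed in the earlier sketch.

```python
from collections import Counter

def knn_predict_classification(train_X, train_y, new_row, k):
    """Predict the majority class among the k most similar instances."""
    order = sorted(range(len(train_X)),
                   key=lambda i: euclidean_distance(new_row, train_X[i]))
    votes = Counter(train_y[i] for i in order[:k])
    top = votes.most_common(2)
    # Break a tie between the two leading classes by expanding K by 1,
    # i.e. letting the next most similar training instance cast a vote.
    if len(top) > 1 and top[0][1] == top[1][1] and k < len(train_X):
        return knn_predict_classification(train_X, train_y, new_row, k + 1)
    # Class probabilities as normalized frequencies within the k neighbors.
    probabilities = {cls: count / k for cls, count in votes.items()}
    return top[0][0], probabilities
```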
KNN works well with a small number of input variables (p), but struggles when the number of inputs is very large. Each input variable can be considered a dimension of a p-dimensional input space; as the number of dimensions grows, the training points become increasingly sparse and distances between them become less informative (the curse of dimensionality).
Rescale Data: KNN performs much better if all of the data has the same scale. Normalizing your data to the range between 0 and 1 is a good idea (see the sketch after this list of tips). It may also be a good idea to standardize your data if it has a Gaussian distribution.
Address Missing Data: Missing data will mean that the distance between samples cannot be calculated. These samples could be excluded or the missing values could be imputed.
Lower Dimensionality: KNN is suited for lower dimensional data. You can try it on high dimensional data (hundreds or thousands of input variables) but be aware that it may not perform as well as other techniques. KNN can benefit from feature selection that reduces the dimensionality of the input feature space.
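A minimal sketch of the min-max rescaling suggested in the first tip above; it assumes the dataset is a list of rows of numeric attributes and is meant only as an illustration.

```python
def minmax_rescale(dataset):
    """Rescale each column of a list-of-rows dataset to the range [0, 1]."""
    col_min = [min(col) for col in zip(*dataset)]
    col_max = [max(col) for col in zip(*dataset)]
    return [
        [(value - lo) / (hi - lo) if hi > lo else 0.0
         for value, lo, hi in zip(row, col_min, col_max)]
        for row in dataset
    ]

# Example: rescale two attributes measured on very different scales.
data = [[50, 30000], [20, 90000], [30, 60000]]
print(minmax_rescale(data))  # each column now lies in [0, 1]
```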
KNN stores the entire training dataset which it uses as its representation.
KNN does not learn any model.
KNN makes predictions just-in-time by calculating the similarity between an input sample and each training instance.
There are many distance measures to choose from to match the structure of your input data.
It is a good idea to rescale your data, for example using normalization, when using KNN.