---
title: "Keras"
output:
  html_notebook:
    highlight: textmate
  html_document:
    df_print: paged
    theme: cerulean
---


```{r setup, include=FALSE}
knitr::opts_chunk$set(warning = FALSE, message = FALSE)
```

### What is Keras?

Keras is a deep learning framework that provides a convenient way to define and train almost any kind of deep learning model. It has a user-friendly API, and the same code can run on both CPUs and GPUs.

Keras is a model-level library, meaning it provides high-level building blocks for developing deep learning models. It does not handle low-level operations such as tensor manipulation or differentiation. Instead, Keras relies on a well-optimized tensor library to do so, which serves as the _backend engine_ for Keras. Several backend engines can be plugged into Keras: TensorFlow (developed by Google), CNTK (developed by Microsoft), and Theano.

So, Keras depends on a backend such as TensorFlow to do the low-level computations. When run on a CPU, TensorFlow itself wraps a low-level library for tensor operations called Eigen. On a GPU, TensorFlow wraps the NVIDIA CUDA Deep Neural Network library (cuDNN).

The default and recommended backend is TensorFlow. In R, when you install the keras package it will automatically install TensorFlow, and the default installation is CPU-only.

```{r}
install.packages("keras")  # install the R package (only needed once)
library(keras)
install_keras()            # installs Keras + TensorFlow into a Python environment (only needed once)
```

If you are using a GPU, then do - 

```{r}
#install_keras(tensorflow = "gpu")
```


### MNIST Dataset

The MNIST dataset is often considered the "hello world" of deep learning. It's basically a set of grayscale images (28 x 28 pixels) of handwritten single digits that fall into 10 categories: 0 through 9.

Sample = Image;
Feature = Pixels;
Outcome = 10 categories;

This is a standard dataset, so it comes with _train_ and _test_ split. There are 60,000 images in train data and 10,000 images in test data.

The MNIST dataset comes preloaded in Keras, in the form of train and test lists, each of which includes a set of images (x) and their associated labels (y).

Check out some other datasets available in the keras package:

```{r}
# ?dataset_ has no single help page; list the available dataset_* loaders instead
ls("package:keras", pattern = "^dataset_")
```

#### Load the MNIST Dataset
```{r}
mnist <- dataset_mnist() #mnist is a list
train_images <- mnist$train$x 
train_labels <- mnist$train$y
test_images <- mnist$test$x
test_labels <- mnist$test$y
```


Understand your data structure.

What is the structure of training data?

```{r}
class(train_images)
str(train_images)

class(train_labels)
str(train_labels)
```


What is the structure of test data?

```{r}
class(test_images)
str(test_images)

class(test_labels)
str(test_labels)
```

The training data, for example, is an array of 60,000 matrices of 28 x 28 integers. Since the images are grayscale, the pixel values range from 0 to 255.

```{r}
min(train_images)
max(train_images)
```


Let's look at a digit!


```{r}
digit <- train_images[1, , ]       # the first image, as a 28 x 28 matrix
plot(as.raster(digit, max = 255))  # render the pixel intensities as an image
```


##### Where are the tensors?

The array train_images is actually a tensor! And so is train_labels. Tensors are multidimensional arrays. We've all dealt with tensors before! 

A scalar is a 0D tensor; a vector is a 1D tensor; a matrix is a 2D tensor.

If you pack a bunch of matrices in an array then it's a 3D tensor; if you pack a bunch of 3D tensors in an array then you will get a 4D tensor and so on.
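
For example, here is a quick illustration of tensor ranks using plain R arrays. The objects below are made up for illustration; they are not part of the MNIST data.

```{r}
x0 <- 3.14                           # a scalar: 0D tensor
x1 <- c(1, 2, 3)                     # a vector: 1D tensor
x2 <- matrix(1:6, nrow = 2)          # a matrix: 2D tensor
x3 <- array(1:24, dim = c(2, 3, 4))  # a stack of matrices: 3D tensor
length(dim(x3))                      # number of axes = 3
```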

You can think of grayscale images as a 3D tensor, color images as a 4D tensor, and videos as a 5D tensor.

What are the dimensions in each of those?

- 3D - Grayscale images = (samples, height, width)
- 4D - Color images = (samples, height, width, channels [R, G, B])
- 5D - Videos = (samples, frames, height, width, channels)

```{r}
paste("The number of axes in our training data")
length(dim(train_images))
```


### Building the Neural Network

There are two broad things we need to do: first, pick a network architecture; second, the compilation step, where we configure the optimization.


#### Define the network architecture

```{r}
network <- keras_model_sequential() %>%  # a linear stack of layers
  layer_dense(units = 512, activation = "relu", input_shape = c(28 * 28)) %>% # each layer is made of units; each unit applies a function called an activation
  layer_dense(units = 10, activation = "softmax") # output layer: one unit per class
```

Layers are the core building blocks of a NN. Think of a layer as a data filter: some data goes in, and it comes out in a more useful form. This useful form is fed into another layer, and you get progressively more refined data.

Layers extract _representations_ out of the data fed into them. Most of deep learning consists of chaining together simple layers that will implement a form of progressive _data distillation_.

Our network consists of 2 densely connected (i.e. fully connected) layers. The last layer is a 10-way softmax layer: it returns an array of 10 probability scores, one for each class, that sum to 1. Think of softmax as a generalized logistic function: instead of 2 classes we have 10 classes.
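
As a minimal sketch, softmax exponentiates the scores and normalizes them so they sum to 1. The helper below is written here purely for illustration; it is not part of the keras package (keras applies softmax inside the layer for you).

```{r}
softmax <- function(z) exp(z) / sum(exp(z))  # illustrative only
softmax(c(2, 1, 0.1))                        # three arbitrary scores -> probabilities that sum to 1
```

(A production implementation subtracts max(z) before exponentiating for numerical stability.)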


#### Compilation Step

There are 3 things that we need to pick within the compilation step: an optimizer, a loss function, and the metrics to monitor.

```{r}
network %>% compile(
  optimizer = "rmsprop", #network will update itself based on the training data & loss
  loss = "categorical_crossentropy", #measure mismatch between y_pred and y
  metrics = c("accuracy") #measure of performance - fraction of correctly classified images
)
```

#### Pre-Processing the data

Just like with several other ML models, we need to preprocess the data into the right shape and scale the pixel values so they range from 0 to 1 instead of 0 to 255.

```{r}
train_images <- array_reshape(train_images, c(60000, 28 * 28))  # flatten each 28 x 28 image into a vector of 784 values
train_images <- train_images / 255                              # scale pixel values to [0, 1]

test_images <- array_reshape(test_images, c(10000, 28 * 28))
test_images <- test_images / 255

dim(train_images)
dim(test_images)
```


#### Prepare the labels

```{r}
train_labels <- to_categorical(train_labels)  # one-hot encode, e.g. 5 -> c(0,0,0,0,0,1,0,0,0,0)
test_labels <- to_categorical(test_labels)

dim(train_labels)
dim(test_labels)
```


#### Fit the model

```{r results='hide'}
# An epoch is one full pass over the training data; within each epoch, batches of 128 images are passed through the network
history <- network %>% fit(train_images, train_labels, epochs = 10, batch_size = 128, validation_split = 0.1)
```

```{r}
plot(history)
```


#### Metrics

```{r results='hide'}
metrics <- network %>% evaluate(test_images, test_labels)
```

```{r}
metrics
```

The test accuracy is lower than the training accuracy - overfitting! What should we do when there's overfitting? One common remedy is regularization, for example the dropout layers we try in the next architecture.

```{r}
network %>% predict_classes(test_images[1:10,])  # predicted digit for the first 10 test images
```


#### Try another architecture!

```{r}
model <- keras_model_sequential()
model %>%
  layer_dense(units = 512, activation = "relu", input_shape = c(28 * 28)) %>%
  layer_dropout(rate = 0.4) %>%   # dropout randomly zeroes 40% of this layer's outputs during training
  layer_dense(units = 512, activation = "relu") %>%
  layer_dropout(rate = 0.3) %>%
  layer_dense(units = 512, activation = "relu") %>%
  layer_dropout(rate = 0.2) %>%
  layer_dense(units = 10, activation = "softmax")

model %>% compile(
  optimizer = "rmsprop", 
  loss = "categorical_crossentropy", 
  metrics = c("accuracy")) 
```


#### Fit the model


```{r results='hide'}
history <- model %>% fit(
  train_images, train_labels, 
  epochs = 10, batch_size = 128, 
  validation_split = 0.1
)
```


```{r}
plot(history)
```

Since the training and validation accuracy are almost equal, this model has a good fit. There's no obvious overfitting or underfitting, so this model should generalize well.

```{r results='hide'}
metrics <- model %>% evaluate(test_images, test_labels)
```

```{r}
metrics
```

### Break down the architecture

What we've seen so far in designing and fitting a neural network is a whole bunch of tensor operations: adding tensors, multiplying tensors, and so on.

Consider the following:

`layer_dense(units = 512, activation = "relu")`

This is the first layer. It contains 512 units, and each unit applies a function called an 'activation'; the activation chosen here is ReLU, which stands for Rectified Linear Unit. It's just a fancy way of saying:

relu(z) = max(z,0)
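
As a minimal sketch in R (this helper is written here only for illustration; keras applies ReLU inside the layer for you):

```{r}
relu <- function(z) pmax(z, 0)  # element-wise max with 0; works on vectors and matrices
relu(c(-2, -0.5, 0, 1.5))       # returns 0.0 0.0 0.0 1.5
```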

When we use tensors (stacked matrices) and want to perform operations on them, we don't have to use for loops; we can just apply matrix algebra. This significantly improves performance and efficiency when running complex networks with tens or hundreds of layers. This is called a _vectorized_ implementation.


z = W.X + b, where X = input, W = weights, b = bias

At every unit of the first layer we are computing:

max(W.X + b, 0)
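
Here is a minimal sketch of that computation for a whole batch at once, written with plain R matrix algebra. The objects X, W, and b below are made up for illustration; they are not the weights keras actually trains.

```{r}
X <- matrix(runif(128 * 784), nrow = 128)             # a batch of 128 flattened images
W <- matrix(rnorm(784 * 512, sd = 0.01), nrow = 784)  # weights: 784 inputs -> 512 units
b <- rep(0, 512)                                      # one bias per unit
Z <- sweep(X %*% W, 2, b, `+`)                        # vectorized W.X + b for the whole batch
A <- pmax(Z, 0)                                       # ReLU applied element-wise
dim(A)                                                # 128 x 512: one activation per unit per sample
```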


### Gradient based optimization

W and b are tensors that are attributes of a layer. Initially, the weights are given random values - this is called _random initialization_.

What comes next is to gradually adjust these weights, based on a feedback signal. This gradual adjustment, also called training, is basically the learning that machine learning is all about.

What happens in a training loop?

1. Draw a batch of training samples x and corresponding targets y.
2. Run the network on x to obtain predictions y_pred (the _forward pass_, or _forward propagation_).
3. Compute the _loss_ of the network on the batch, a measure of the mismatch between y_pred and y.
4. Update all the weights in a way that slightly reduces the loss on this batch (_backward propagation_).

Steps 1-3 are straightforward; step 4, updating the network's weights, is the challenge. Given an individual weight coefficient in the network, how can you compute whether the coefficient should be increased or decreased, and by how much?

All operations used in the network are _differentiable_, so we can compute the _gradient_ of the loss with regard to the network's coefficients and adjust the coefficients in the direction that reduces the loss.

A gradient is the derivative of a function whose input is multidimensional, such as a tensor. You can think of backpropagation as applying the chain rule to these partial derivatives, layer by layer. If a function is differentiable, we can search for a minimum of that function.

In a NN framework, what this means is that we need to find the combination of weights that yields the lowest possible loss.

1. Draw a batch of training samples x and corresponding targets y.
2. Run the network on x to obtain predictions y_pred.
3. Compute the loss of the network on the batch, a measure of the mismatch between y_pred and y.
4. Compute the gradient of the loss with regard to the network's parameters (a backward pass).
5. Move the parameters a little in the opposite direction from the gradient, for example W = W - (step * gradient), thus reducing the loss on the batch a bit.

This is called _mini-batch stochastic gradient descent_ (mini-batch SGD).
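
To make the training loop concrete, here is a minimal sketch of mini-batch gradient descent on a toy linear regression, written in plain base R. This is illustrative only: it is not how keras/TensorFlow implement the update, and all objects below (X, y, W, lr) are made up for the example.

```{r}
set.seed(1)
X <- matrix(rnorm(1000 * 2), ncol = 2)                  # 1,000 samples, 2 features
y <- as.vector(X %*% c(3, -2)) + rnorm(1000, sd = 0.1)  # targets from the "true" weights c(3, -2)
W <- c(0, 0)                                            # initialization of the weights
lr <- 0.1                                               # the "step" factor: learning rate
for (i in 1:200) {
  idx    <- sample(nrow(X), 128)                        # 1. draw a mini-batch
  y_pred <- as.vector(X[idx, ] %*% W)                   # 2. forward pass
  error  <- y_pred - y[idx]                             # 3. mismatch used by the loss (mean squared error)
  grad   <- as.vector(2 * t(X[idx, ]) %*% error) / 128  # 4. gradient of the loss w.r.t. W
  W      <- W - lr * grad                               # 5. move against the gradient
}
W  # ends up close to the true weights c(3, -2)
```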

The _step_ factor is called the learning rate. Picking a reasonable learning rate is important in DL models: if it's too small, the model will learn very slowly; if it's too large, the updates can overshoot and the optimizer may miss good minima of the loss function!
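
In keras you control the learning rate by passing an optimizer object instead of the string "rmsprop". A hedged sketch: depending on your keras version the argument is `lr` or `learning_rate`, and 0.001 is rmsprop's usual default.

```{r}
network %>% compile(
  optimizer = optimizer_rmsprop(lr = 0.001),  # explicit learning rate (use `learning_rate =` on newer versions)
  loss = "categorical_crossentropy",
  metrics = c("accuracy")
)
```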







