To illustrate the use of a single neuron classifier we use the columns Petal.Length and Petal.Width, rows 1:100, from the “iris” data set.

A scatterplot of our data set is given below. Red points represent the pairs (Petal.Length, Petal.Width) for species 0, and blue points represent the pairs for species 1. A single neuron classifier partitions the plane with a line so that one can predict the class (species) of a point simply from which side of the line it falls on.
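
The data preparation step is not shown in the original write-up; below is a minimal sketch, under the assumption that the training data live in a data frame named flowert with the two petal columns and a 0/1 target column t (in the built-in iris data, rows 1:50 are setosa, coded 0 here, and rows 51:100 are versicolor, coded 1).

# Sketch (assumption): build the training data frame used in the code below
# and reproduce the scatterplot described above.
flowert <- data.frame(
  Petal.Length = iris$Petal.Length[1:100],
  Petal.Width  = iris$Petal.Width[1:100],
  t            = rep(c(0, 1), each = 50)   # 0 = setosa, 1 = versicolor (assumed coding)
)
plot(flowert[, 1], flowert[, 2], col = ifelse(flowert$t == 0, "red", "blue"))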

Code for the Single Neuron Classifier

Some Values and Vectors

The initial weights w and bias b are chosen randomly. We collect the weights and biases from each epoch in the vectors “weight” and “bias”, and the errors between target and prediction in the vector “errors”. The gradient step length (learning rate) is eta = 0.1.

w <- runif(2, 1e-3, 1e-2)   # initial weights, drawn uniformly from [0.001, 0.01]
b <- runif(1)               # initial bias, drawn uniformly from [0, 1]
bias <- b                   # history of biases, one entry per epoch
weight <- w                 # history of weights, one row per epoch
errors <- c()               # history of errors (target minus prediction)
eta <- .1                   # gradient step length

Epoch Loop

for (j in 1:10) {                            # 10 passes over the training data
  for (i in 1:100) {                         # one update (epoch) per training point
    x <- c(flowert[i, 1], flowert[i, 2])     # input: (Petal.Length, Petal.Width)
    a <- sum(x * w)                          # weighted sum of the inputs
    y <- 1 / (1 + exp(-a - b))               # sigmoid output of the neuron
    e <- flowert$t[i] - y                    # the estimated error (target minus prediction)
    g <- -e * x                              # gradient with respect to the weights
    g.bias <- -e                             # gradient with respect to the bias
    b <- b - eta * g.bias                    # gradient step for the bias
    w[1] <- w[1] - eta * g[1]                # gradient step for w1
    w[2] <- w[2] - eta * g[2]                # gradient step for w2
    weight <- rbind(weight, w)               # record the updated weights
    bias <- append(bias, b)                  # record the updated bias
    errors <- append(errors, e)              # record the error
  }
}
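
After the loop runs, weight should hold 1001 rows (the initial weights followed by 10 × 100 updates), bias 1001 entries, and errors 1000 entries; these lengths are what make the index 1001 meaningful below. A quick sanity check, under that assumption:

dim(weight)      # expected: 1001 x 2
length(bias)     # expected: 1001
length(errors)   # expected: 1000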

Moving Towards a “Classifying” Line

Add New Columns to the Dataframe

flowert$w1 <- weight[1001, 1] * flowert[, 1]   # final weight w1 times Petal.Length
flowert$w2 <- weight[1001, 2] * flowert[, 2]   # final weight w2 times Petal.Width
flowert$prediction <- 1 / (1 + exp(-(flowert$w1 + flowert$w2 + bias[1001])))   # sigmoid output using the final weights and bias
flowert$error <- flowert$prediction - flowert$t   # prediction minus target
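
One way to summarize how well the final weights separate the training set is to threshold the predictions at 0.5 and tabulate them against the target. This check is not part of the original code; it is only a suggested sketch.

# Sketch (assumption): confusion table for the final weights,
# counting predictions above 0.5 as species 1.
table(target = flowert$t, predicted = as.numeric(flowert$prediction > 0.5))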

Plot of Differences Between Prediction and Target

The plot below shows the difference between the prediction and the target for each epoch. There are 1000 epochs in total. Notice that the graph is bunched into 10 clusters of 100 points each. These clusters represent the 10 iterations of the loop indexed by j given above. Within every cluster we see that there are two components; they represent the errors in predicting species 0 and species 1, respectively.
From the plot one sees that the variance of the errors decreases as the index j increases. It appears that the errors are converging.
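
The plot itself is not reproduced here; the following is a minimal sketch of how it might be drawn from the errors vector collected in the loop (the dashed vertical lines marking the boundaries between the 10 passes are an added touch, not part of the original).

# Sketch (assumption): one point per update, 1000 points in total.
plot(errors, xlab = "epoch (update)", ylab = "target - prediction")
abline(v = seq(100, 900, by = 100), lty = 2)   # boundaries between the 10 passes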

Scatterplot of Weights

When examining the scatterplot of weights, many of the same observations from the previous plot apply. First, we see there are 10 clusters of points. Each cluster contains 100 points and is associated with an iteration of the loop indexed by j given above.
Each of the 10 clusters can be decomposed into two subcomponents, each a set of points that looks roughly linear. For each j, the first subcomponent consists of the 50 points corresponding to updates on species 0, and the second subcomponent corresponds to updates on species 1.
In the first subcomponent (species 0) both weights (w1, w2) decrease; in the second subcomponent (species 1) both weights increase.
We note that, as j increases, the variance of w1 and of w2 within a cluster decreases.
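
Again, the plot is not reproduced here; a minimal sketch, assuming the weight matrix collected in the loop (its first row holds the initial, randomly chosen weights):

# Sketch (assumption): one point per recorded weight vector.
plot(weight[, 1], weight[, 2], xlab = "w1", ylab = "w2")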

Classifying Lines

In the scatterplot below, red points represent the pairs (Petal.Length, Petal.Width) for species 0, and blue points represent the pairs for species 1. The lines are “classifying” lines that partition the plane so one can predict the class (species). Each line is derived from the weights and bias of an epoch: the line for an epoch is the set of points where w1*Petal.Length + w2*Petal.Width + b = 0, i.e. Petal.Width = -(w1*Petal.Length + b)/w2. Specifically, lines are drawn for epochs 300, 500, 800, 900, and 1001.
Lines on the plot for epochs greater than 800 correctly partition the training set.
Overall one sees an improvement in the ability to classify species on the training set as the epochs increase. It is interesting to note that at epoch 1001 the classifying line is closer to the centroid of species 0 than to the centroid of species 1. This may be due to there being less variance in the set of points associated with species 0 than in the set associated with species 1. More investigation is needed to prove this.

# Scatterplot of the training data: species 0 in red, species 1 in blue.
plot(flowert[, 1], flowert[, 2], col = ifelse(flowert$t == 0, "red", "blue"))

x <- c(1:5)                          # Petal.Length values used to draw each line
nums <- c(300, 500, 800, 900, 1001)  # epochs whose weights and biases define the lines
colors <- c(28, 53, 78, 103, 128)    # one color per line

# Draw the classifying line Petal.Width = -(w1 * Petal.Length + b) / w2
# for each selected epoch.
for (i in 1:5) {
  y2 <- -(weight[nums[i], 1] * x + bias[nums[i]]) / weight[nums[i], 2]
  lines(x, y2, type = "l", col = colors[i])
}
legend(2, 1.8, nums, text.col = colors, title = "epochs")