Lecture 29: Single Layer Neural Nets#
# Everyone's favorite standard imports
import numpy as np
import pandas as pd
import matplotlib as mpl
import matplotlib.pyplot as plt
%matplotlib inline
import time
Building your own basic neural network#
We’re going to play with building some simple pieces of the neural nets described in class.
First up, let’s build the example we worked through in class:
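In symbols, writing \(g\) for the activation function and treating the first column of \(w\) as the bias terms (as we set it up in class), the network computes

\[A_i = g\big(w_{i0} + w_{i1} X_1 + w_{i2} X_2\big), \qquad Y = \beta_0 + \sum_{i=1}^{3} \beta_i A_i.\]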
✅ Do this: Add code where noted below to create an automatic computation of the things we did in class. Note: You can replace this code with the matrix version if you prefer, but this structure is built to work with the individual equations version.
def MyFirstNN(X1, X2, w, beta):
    A = []
    for i in range(3):
        Ai = np.nan # <----- your code here
        A.append(Ai)
    # print(A)
    A = np.array(A)
    Y = np.nan # <----- your code here
    return Y
w = np.array([(1,2,1),(-1,0,1),(3,-1,-1)])
print(w)
beta = np.array((-1,2,1,-2))
print('(1,0) should give 1 => ', MyFirstNN(1,0,w,beta))
print('(0,1) should give -1 => ', MyFirstNN(0,1,w,beta))
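If you want to check your work, here is one possible way to fill in the blanks. It is a sketch that treats the first column of w as the bias for each hidden unit and beta[0] as the output bias; you can verify it against the expected outputs above.

# One possible solution sketch: the first column of w is assumed to hold
# the bias for each hidden unit, and beta[0] the output bias.
def MyFirstNN(X1, X2, w, beta):
    A = []
    for i in range(3):
        # Hidden unit i: affine combination of the inputs, then ReLU
        Ai = max(0, w[i, 0] + w[i, 1] * X1 + w[i, 2] * X2)
        A.append(Ai)
    A = np.array(A)
    # Output: bias plus weighted sum of the hidden activations
    Y = beta[0] + np.dot(beta[1:], A)
    return Y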
✅ Do this: Play with the following code by changing around your \(w\) and \(\beta\) matrix values.
What sorts of changes occur in the resulting function value outputs?
We’re using the ReLU as our activation function above, which replaces each negative entry with zero: \(\mathrm{ReLU}(x) = \max(0, x)\). Given that, why am I able to get negative results out of the neural net?
w = np.array([(1,2,1),(-1,0,1),(3,-1,-1)]) #<----- Mess with these
beta = np.array((-1,2,1,-2)) #<----- These, too
#--- below here plots the output values for many combinations of
#--- X1,X2
a = np.linspace(-10,10,30)
b = np.linspace(-10,10,30)
x,y = np.meshgrid(a,b)
M = np.zeros((len(b),len(a)))
for i in range(len(a)):
    for j in range(len(b)):
        # Rows of M correspond to the y-axis (X2), so the value for
        # inputs (a[i], b[j]) goes in M[j, i]
        M[j,i] = MyFirstNN(a[i],b[j],w, beta)
# --- Make plots----
fig,(ax1,ax2) = plt.subplots(1,2, sharey = True)
# Heatmap version
ax1.set(aspect='equal')
colormesh = ax1.pcolormesh(x,y,M)
ax1.set_xlabel('X1')
ax1.set_ylabel('X2')
fig.colorbar(colormesh,ax = ax1, shrink = 0.5)
# Contour plot version
ax2.set(aspect='equal')
contour = ax2.contour(x,y,M)
ax2.set_xlabel('X1')
ax2.set_ylabel('X2')
fig.colorbar(contour,ax = ax2, shrink = 0.5)
plt.show()
✅ Do this: Write a modified version of your MyFirstNN function that uses a sigmoid function, \(\sigma(x) = 1/(1+e^{-x})\), instead of ReLU. Use the block below to draw a 2D heatmap and/or contour plot like above. What sort of patterns can you get in the resulting output function?
# Your code here #
w = np.array([(1,2,1),(-1,0,1),(3,-1,-1)]) #<----- Original choices
beta = np.array((-1,2,1,-2)) #<----- of matrices
a = np.linspace(-10,10,30)
b = np.linspace(-10,10,30)
x,y = np.meshgrid(a,b)
M = np.zeros((len(b),len(a)))
for i in range(len(a)):
    for j in range(len(b)):
        # Same indexing fix as above: (a[i], b[j]) goes in M[j, i]
        M[j,i] = MyFirstNN(a[i],b[j],w, beta)
# --- Make plots----
fig,(ax1,ax2) = plt.subplots(1,2, sharey = True)
# Heatmap version
ax1.set(aspect='equal')
colormesh = ax1.pcolormesh(x,y,M)
ax1.set_xlabel('X1')
ax1.set_ylabel('X2')
fig.colorbar(colormesh,ax = ax1, shrink = 0.5)
# Contour plot version
ax2.set(aspect='equal')
contour = ax2.contour(x,y,M)
ax2.set_xlabel('X1')
ax2.set_ylabel('X2')
fig.colorbar(contour,ax = ax2, shrink = 0.5)
plt.show()
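For reference, here is one possible sigmoid version (a sketch; the helper name MyFirstNN_sigmoid and the weight layout are assumptions carried over from the ReLU sketch above). If you instead define it under the original name MyFirstNN, the plotting scaffold above will use it directly.

# A possible sigmoid version (same assumed weight layout as before)
def MyFirstNN_sigmoid(X1, X2, w, beta):
    A = []
    for i in range(3):
        # Hidden unit i: affine combination of the inputs...
        Zi = w[i, 0] + w[i, 1] * X1 + w[i, 2] * X2
        # ...then the sigmoid instead of ReLU
        Ai = 1 / (1 + np.exp(-Zi))
        A.append(Ai)
    A = np.array(A)
    return beta[0] + np.dot(beta[1:], A)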
The functions you have created are what the neural net would predict given a new input data point.
✅ Q: For the neural net using \(w\) and \(\beta\) from class, and using the sigmoid activation function, what output prediction would you get for an input \((-10,10)\)?
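You can check your answer numerically; for example, using the sigmoid sketch above (MyFirstNN_sigmoid is our assumed helper name):

print(MyFirstNN_sigmoid(-10, 10, w, beta))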
So of course, if we were actually building this beast, our bigger job would be to find good choices of \(w\) and \(\beta\) based on our available training data. Let’s pretend for the moment I have the following prediction data.
data = np.loadtxt('../../DataSets/DL-toy-data.csv')
X = data[:,:2]
y = data[:,2]
plt.scatter(X[:,0],X[:,1], c= y)
plt.colorbar()
plt.show()
✅ Do this: Given this particular data set, what is the mean squared error for the neural net using \(w\) and \(\beta\) from class, and using the sigmoid activation function?
# Your code here
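If you’re not sure where to start, here is a minimal sketch. It assumes the sigmoid network sketch from above (MyFirstNN_sigmoid) and the \(w\) and \(\beta\) values from class.

# Sketch: mean squared error of the network on the toy data.
# Assumes MyFirstNN_sigmoid and the w, beta values defined earlier.
preds = np.array([MyFirstNN_sigmoid(X[i, 0], X[i, 1], w, beta) for i in range(len(y))])
MSE = np.mean((y - preds)**2)
print('MSE:', MSE)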
We’re not going to train the model* ourselves, so let’s switch now to the following online tool for training models.
Open this website in another window: https://playground.tensorflow.org
*Translation: choosing better weights \(w\) and \(\beta\) to improve this score.
✅ Do this:
Start by working with a neural network with a single hidden layer on the data set labeled “Exclusive or”. The initial setup has random weights chosen so the test loss on the right isn’t very good. Hit the play button to watch the model find better and better weights to improve the prediction.
Hover over the neurons on your trained model. What does the colormap shown on the right mean?
What does the thickness/color of the edges between features/neurons mean? How does it align with our notation from class?
What happens when you train the model using more or fewer neurons (hidden units) in the layer? What happens if you increase the number of hidden layers?
What happens when you train the same model on the spiral data set?
If your settings are like mine, the spiral data set results are not good. Play with the parameters until you can get the test loss below 10% on the spiral data set.
Congratulations, we’re done!#
Written by Dr. Liz Munch, Michigan State University
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.