Lecture 25: Random Forests#
In this module we are going to set up random forest models.
# Everyone's favorite standard imports
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import time
# ML imports we've used previously
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
Loading the data#
Since we’re doing regression trees again, we’re going to use the Carseats data, where we will predict Sales from the rest of the columns. I’ll do a bit of cleanup for you so we can get to the good stuff.
carseats = pd.read_csv('../../DataSets/Carseats.csv').drop('Unnamed: 0', axis=1)
# Encode the categorical columns as integers
carseats.ShelveLoc = pd.factorize(carseats.ShelveLoc)[0]
carseats.Urban = carseats.Urban.map({'No':0, 'Yes':1})
carseats.US = carseats.US.map({'No':0, 'Yes':1})
carseats.info()
X = carseats.drop(['Sales'], axis = 1)
y = carseats.Sales
X.head()
Bagging#
Last time, we figured out how to use sklearn.tree.DecisionTreeRegressor to build a single tree to do our regression, but we’d like to be able to use ensemble models to create a pile of trees that gives a more robust prediction.
Let’s try doing this the hard way first. We’re going to set up bagging on our carseats data. We’ll use \(B=3\) to create our bagged model. So, step one is to generate three bootstrap samples, meaning I generate data sets by sampling \(n=400\) points with replacement.
X1 = carseats.sample(frac = 1, replace = True, random_state = 42)
y1 = X1.Sales
X1 = X1.drop(['Sales'], axis = 1)
X2 = carseats.sample(frac = 1, replace = True, random_state = 43)
y2 = X2.Sales
X2 = X2.drop(['Sales'], axis = 1)
X3 = carseats.sample(frac = 1, replace = True, random_state = 44)
y3 = X3.Sales
X3 = X3.drop(['Sales'], axis = 1)
✅ Do this: Fit one regression tree of depth 2 on each of the bootstrap samples we just created. You’ll end up with three regression trees: call them reg_tree1, reg_tree2, and reg_tree3. How similar are the tree structures?
from sklearn import tree
from sklearn.tree import DecisionTreeRegressor
# Your code to generate the three trees here #
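If you get stuck, here is one possible sketch, fitting a DecisionTreeRegressor with max_depth = 2 on each bootstrap sample (the random_state value is an arbitrary choice, and the names match the plotting cell below):
# A possible sketch: one depth-2 tree per bootstrap sample
reg_tree1 = DecisionTreeRegressor(max_depth = 2, random_state = 42).fit(X1, y1)
reg_tree2 = DecisionTreeRegressor(max_depth = 2, random_state = 42).fit(X2, y2)
reg_tree3 = DecisionTreeRegressor(max_depth = 2, random_state = 42).fit(X3, y3)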
# If you named your models reg_tree1, reg_tree2, and reg_tree3 above,
# this block will plot the three trees
fig, axs = plt.subplots(3, 1, figsize = (20,30))
for i, reg_treeX in enumerate([reg_tree1, reg_tree2, reg_tree3]):
    _ = tree.plot_tree(reg_treeX, feature_names = X.columns,
                       filled = True,
                       fontsize = 22,
                       ax = axs[i])
    axs[i].set_title(f"Tree {i+1}", fontsize = 30)
plt.show()
✅ Do this: Predict the Sales
value for the first data point in the carseats data set using each of the three trees just generated. What is the average value of the three numbers? This is the bagged prediction for this input data.
X.head().iloc[0,:]
# Your code to get the bagged prediction for the first data point here.
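One possible sketch (the names first_point and preds are just illustrative choices):
# A possible sketch: average the three trees' predictions for the first data point
first_point = X.iloc[[0], :]   # keep a one-row DataFrame so the feature names line up
preds = [reg_tree1.predict(first_point)[0],
         reg_tree2.predict(first_point)[0],
         reg_tree3.predict(first_point)[0]]
print(preds, np.mean(preds))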
Great, you got to here! Hang out for a bit, there’s more lecture before we go on to the next portion.
Random Forests#
Did you really need to do all that bagging by hand? Well, no actually, but this goes under the “eating your vegetables exactly once” part of the lab. Of course, sklearn has built-in tools that will do bagging for us. Here we’ll use its random forest class, and if what we really want is plain bagging, we can cheat and configure the forest to do exactly that.
Remember, for random forests we essentially do bagging, but we only allow a subset of \(m \leq p\) variables to be considered at each splitting step. So if we want bagging, we set max_features (our \(m\)) equal to the total number of features \(p\). That means this next code actually replicates what we just did above with bagging.
from sklearn.ensemble import RandomForestRegressor
# Note we have p=10 input variables
print(X.shape)
bagged_carseats = RandomForestRegressor(max_features = 10, oob_score = True )
bagged_carseats.fit(X,y)
✅ Do this: Build a random forest model instead where the maximum number of features used at each step is \(m = \sqrt {p}\).
# Your code here #
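One possible sketch (the name forest_carseats matches the attribute referenced in the next paragraph; the random_state is an arbitrary choice):
# A possible sketch: a random forest considering m = sqrt(p) features at each split
forest_carseats = RandomForestRegressor(max_features = 'sqrt', oob_score = True, random_state = 42)
forest_carseats.fit(X, y)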
Because we have that oob_score = True set, we get access to the out-of-bag predictions (i.e., \(\hat y\)) from forest_carseats.oob_prediction_. That is, for each data point, its \(\hat y\) is the averaged prediction using only the trees that weren’t fit on a bootstrap sample containing that point.
✅ Do this: Determine the OOB error on the forest model you built just above using the mean_squared_error
command.
Note: The class does also keep track of the OOB score as model.oob_score_; however, this appears to be reporting \(R^2\) (where close to 1 is better), and I don’t seem to have the ability to change this. See the documentation for further details.
# Your code here
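One possible sketch, assuming your forest from above is named forest_carseats:
# A possible sketch: OOB mean squared error using the stored OOB predictions
oob_mse = mean_squared_error(y, forest_carseats.oob_prediction_)
print(oob_mse)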
✅ Do this: How does the number of trees used (n_estimators
in this code) affect the error? Generate a plot like Fig 8.10 in the book for our carseats
data. How many trees should we use?
# Modify the code below to fit a bunch of models and keep track of the MSE.
p = 10
m_list = [int(np.sqrt(p)), int(p/2), int(p)]
print('m_list:', m_list)
n_tree_list = np.arange(10,201,10)
Errors = []
for m in m_list:
    M_error = []
    for i in range(len(n_tree_list)):
        n_trees = n_tree_list[i]
        #--------------------------------------------------
        # Add code here to train a random forest model with
        # max_features = m, n_estimators = n_trees,
        # and the MSE saved as the error below.
        error = 0 # <----- your error goes here.
        #--------------------------------------------------
        M_error.append(error)
    Errors.append(np.array(M_error))
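If you get stuck, here is one possible sketch of the training step for a single combination of settings; inside the scaffold you would swap the fixed values for the loop variables m and n_trees. I’ve used the OOB MSE as the error (a held-out test MSE would also be reasonable), and the name rf and the random_state are arbitrary choices:
# A possible sketch of one iteration of the inner loop, with m = 3 and n_trees = 50
rf = RandomForestRegressor(max_features = 3, n_estimators = 50, oob_score = True, random_state = 42)
rf.fit(X, y)
error = mean_squared_error(y, rf.oob_prediction_)
print(error)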
# If your code above works, below you'll get a plot of the different choices of m
colors = ['orange','blue','green']
labels = ['sqrt(p)', 'p/2', 'p']
for i in range(3):
    M_error = Errors[i]
    plt.plot(n_tree_list, M_error, label = labels[i], color = colors[i])
plt.legend()
plt.title('Error in random forest for different values of $m$')
plt.show()
Congratulations, we’re done!#
Written by Dr. Liz Munch, Michigan State University
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.