
Lecture 25: Decision Trees#

In this module we are going to test out the tree-based methods we discussed in class from Chapter 8.

# Everyone's favorite standard imports
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import time


# ML imports we've used previously
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

Fitting Regression Trees#

We can now turn to setting up a basic regression tree. For this example, we’re going to use the Carseats data set, where we will predict Sales from the rest of the columns. I’ll do a bit of cleanup for you so we can get to the good stuff.

# Load the data and drop the leftover index column
carseats = pd.read_csv('../../DataSets/Carseats.csv').drop('Unnamed: 0', axis=1)

# Encode the categorical columns as integers
carseats.ShelveLoc = pd.factorize(carseats.ShelveLoc)[0]
carseats.Urban = carseats.Urban.map({'No':0, 'Yes':1})
carseats.US = carseats.US.map({'No':0, 'Yes':1})
carseats.info()
carseats.head()

# Separate the input features from the Sales output
X = carseats.drop(['Sales'], axis = 1)
y = carseats.Sales
X.head()

The regression tree function we will use is DecisionTreeRegressor.

from sklearn import tree
from sklearn.tree import DecisionTreeRegressor
reg_tree = DecisionTreeRegressor(max_depth = 3)
reg_tree.fit(X,y)
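
Before drawing anything, we can poke at the fitted tree object directly. Here's a minimal sketch using a few of the inspection methods and attributes that sklearn's tree estimators provide:

# Quick checks on the fitted tree
print(reg_tree.get_depth())           # actual depth (at most the max_depth of 3 we set)
print(reg_tree.get_n_leaves())        # number of leaf nodes
print(reg_tree.feature_importances_)  # relative importance of each feature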

We can draw the resulting tree to visualize what’s happening.

Visualization 1: Text-based#

Ok, so this isn’t the prettiest of the options, but at least it will always work. It prints a text-based tree from which we can figure out what decision was made at each step.

X.columns
print( tree.export_text(reg_tree, feature_names = list(X.columns)) )

Do this: Given a new data point with the entries below, use the visualization to determine the choices made by the decision tree at each step. What will your decision tree predict for Sales?

CompPrice      117
Income         100
Advertising      4
Population     466
Price           97
ShelveLoc        2
Age             55
Education       14
Urban            1
US               1

Your answer here

Visualization 2: Probably should work#

Here’s another option for visualization. There is a plotting function built into sklearn.tree, but I’ve had issues with people’s Python versions before. Let’s try it and see; it’s a bit clunky, but it gets the job done.

fig = plt.figure(figsize = (25,20))
_= tree.plot_tree(reg_tree, feature_names = X.columns, 
               filled = True, 
              fontsize = 20)
plt.show()

Do this: Given a new data point with the following entries, use the visualization to determine the choices made by the decision tree at each step. What will your decision tree predict for Sales?

CompPrice      141
Income          64
Advertising      3
Population     340
Price          128
ShelveLoc        0
Age             38
Education       13
Urban            1
US               0

Your answer here

Other visualization tools#

There are nicer visualization tools. In particular, the outputs requiring graphviz are quite a bit better than these options. However, installing graphviz is non-trivial, so we won’t use it in this lecture. Examples of code using it can be found here.
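
That said, even without a full graphviz install, sklearn can produce the dot-format description of the tree; it's only the rendering step that needs graphviz. A minimal sketch, with the rendering lines commented out since they assume the graphviz Python package is installed:

from sklearn.tree import export_graphviz

# Export the fitted regression tree to graphviz's dot format as a string
dot_data = export_graphviz(reg_tree, out_file = None,
                           feature_names = list(X.columns),
                           filled = True, rounded = True)

# If the graphviz package is installed, this would render the tree:
# import graphviz
# graphviz.Source(dot_data)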

Predicting on the tree#

As with all the other sklearn models we’ve seen, we can predict values on our input X matrix and compare the results using MSE.

yhat = reg_tree.predict(X)
yhat[:5]
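
As a reminder, the mean_squared_error function we imported at the top computes the average of the squared differences between the true and predicted values. A minimal sketch on made-up toy numbers, just to show the call:

# ((3 - 2.5)**2 + (2 - 2)**2) / 2 = 0.125
mean_squared_error([3, 2], [2.5, 2])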

Do this: Use the regression tree you just built to predict the Sales value for the full data set.

  • Check your answers from above. The first data point example was the third row of X; the second data point example was the fourth row. Do you get the same answer from the prediction as by hand with the visualization?

  • What is the resulting MSE on the full data set?

# Your code here #


Great, you got to here! Hang out for a bit; there’s more lecture before we go on to the next portion.

Classification trees#

Loading in the data#

Let’s start with the palmerpenguins data set.

# Install the package if you don't already have it
%pip install palmerpenguins
import palmerpenguins
penguins_df = palmerpenguins.load_penguins()

#I'm shuffling the data to make this a bit more interesting
penguins_df = penguins_df.sample(frac=1, random_state=1236) 

penguins_df = penguins_df.dropna()
penguins_df.head()
# One-hot encode the categorical columns (island and sex), dropping one level of each
X_df = pd.get_dummies(penguins_df.drop(columns = ['species']), drop_first = True)
X_df.head()
y_df = penguins_df.species
y_df

Fitting Classification Trees#

We’ll use sklearn’s built-in modules for this. As always, the user guide is an excellent place to get started.

Now we fit the decision tree classifier. All we need is two lines:

from sklearn import tree 
clf_tree = tree.DecisionTreeClassifier(max_depth = 3)
clf_tree = clf_tree.fit(X_df, y_df)
y_df.head()
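
One quick sanity check before the exercises: every sklearn classifier has a .score method, which reports the accuracy on the data you hand it (here, the training accuracy, since we fit on the full data set):

# Fraction of training points the fitted tree labels correctly
clf_tree.score(X_df, y_df)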

Do this: Using the .predict function, what is the species predicted for the first five data points in X_df? Which of these predicted values are the same as the original labels?

# Your code here

Do this: Use whichever visualization method you prefer from above to see the resulting tree. What is the sequence of decisions for predicting the first data point?

#your code here 

Pruning the tree#

The simplest method we have for pruning the tree is to limit the maximum depth, that is, the largest number of consecutive splits allowed between the root and any leaf.

Do this: Change the value of max_depth below to see how the resulting tree changes.

clf_tree = tree.DecisionTreeClassifier(max_depth =10) #<-- mess with this


clf_tree = clf_tree.fit(X_df, y_df)
fig = plt.figure(figsize = (25,20))
_= tree.plot_tree(clf_tree, feature_names = X_df.columns, 
               filled = True, 
              fontsize = 20)

plt.show()

If you are interested in more complex pruning techniques like we discussed in class, you can try to mess around with Minimal Cost-Complexity Pruning, but I’ll leave that for another day.
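
If you want a head start on that, here is a minimal sketch of sklearn's cost-complexity pruning interface (the ccp_alpha value is arbitrary here; in practice you would choose it by cross-validation):

# The sequence of effective alphas at which the tree gets pruned
path = tree.DecisionTreeClassifier().cost_complexity_pruning_path(X_df, y_df)
print(path.ccp_alphas)

# Refitting with a nonzero ccp_alpha gives a smaller, pruned tree
pruned_tree = tree.DecisionTreeClassifier(ccp_alpha = 0.01).fit(X_df, y_df)
print(pruned_tree.get_n_leaves())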

Visualizing the parameter splits#

Now, if we want to visualize the parameter splits that are being represented by the trees, we can do that. However, I can’t (easily) draw these sorts of figures when using more than two variables, so let’s just grab the first two variables and build a classifier off of those.

X_pair = X_df[['bill_length_mm', 'bill_depth_mm']].values
y_pair = y_df

clf_pair = tree.DecisionTreeClassifier(max_depth = 5).fit(X_pair, y_pair)

Do this: Use whatever worked for you above to plot your decision tree.

# Your code here #

Do this: Below is some code that will draw the regions of parameter space that get each different prediction.

  • Which labels do the colors red, yellow, and blue match to?

  • What split in the figure does the first split in your tree above correspond to?

  • What changes in this figure if you change the max_depth in your tree model above?

# Bounds for the figure 
X = X_pair  # just the two bill measurements used to fit clf_pair
x0_min = X[:,0].min()-1
x0_max = X[:,0].max()+1
x1_min = X[:,1].min()-1
x1_max = X[:,1].max()+1

# Parameters
n_classes = 3
plot_colors = "ryb"
plot_step = 0.02

xx, yy = np.meshgrid(
    np.arange(x0_min, x0_max, plot_step), np.arange(x1_min, x1_max, plot_step)
)
plt.tight_layout(h_pad=0.5, w_pad=0.5, pad=2.5)

Z = clf_pair.predict(np.c_[xx.ravel(), yy.ravel()])

# Convert the string labels to integers so contourf can color them
def numReplace(label):
    if label == 'Adelie':
        return 0
    elif label == 'Gentoo':
        return 1
    else: # 'Chinstrap'
        return 2
Z = np.array([numReplace(label) for label in Z])
Z = Z.reshape(xx.shape)
cs = plt.contourf(xx, yy, Z, cmap=plt.cm.RdYlBu)

plt.xlabel(X_df.columns[0])
plt.ylabel(X_df.columns[1])

# Plot the training points
for label, color, symbol in zip(['Adelie','Gentoo','Chinstrap'], plot_colors, ['o','x','^']):
    idx = np.where(y_df == label)
    plt.scatter(
        X[idx, 0],
        X[idx, 1],
        c=color,
        label=label,
        edgecolor="black",
        s=15,
        marker=symbol,
    )
plt.legend()
plt.show()

Congratulations, we’re done!#

Written by Dr. Liz Munch, Michigan State University

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.