
Lec 18: The Lasso

In this module we are going to test out the lasso method, discussed in class.

# Everyone's favorite standard imports
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import time


# ML imports we've used previously
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import Ridge, RidgeCV
from sklearn.pipeline import make_pipeline


# Fix for deprecation warnings, though really we should fix the underlying issue so this isn't needed
import warnings 
warnings.filterwarnings('ignore')

Loading in the data

Ok, here we go, let’s play with a baseball data set again. Note this cleanup is all the same as the last lab.

hitters_df = pd.read_csv('../../DataSets/Hitters.csv')

# Print the dimensions of the original Hitters data (322 rows x 20 columns)
print("Dimensions of original data:", hitters_df.shape)

# Drop any rows that contain missing values, along with the player names
hitters_df = hitters_df.dropna().drop('Player', axis=1)

# Replace any categorical variables with dummy variables
hitters_df = pd.get_dummies(hitters_df, drop_first = True)

hitters_df.head()

y = hitters_df.Salary

# Drop the column with the dependent variable (Salary)
X = hitters_df.drop(['Salary'], axis = 1).astype('float64')

X.info()

Finally, here’s a list of \(\alpha\)’s to test for our Lasso.

# List of alphas
alphas = 10**np.linspace(4,-2,100)*0.5
alphas

Lasso

Thanks to the wonders of scikit-learn, now that we know how to do all this with ridge regression, translation to lasso is super easy.

from sklearn.linear_model import Lasso, LassoCV

Here’s an example computing the lasso regression.

a = 1 #<------ this is me picking an alpha value

# normalize the input
transformer = StandardScaler().fit(X)
# transformer.set_output(transform = 'pandas') #<---- some older versions of sklearn
                                               #      have issues with this
X_norm = pd.DataFrame(transformer.transform(X), columns = X.columns)

# Fit the regression
lasso = Lasso(alpha = a) 
lasso.fit(X_norm, y)

# Get all the coefficients
print('intercept:', lasso.intercept_)
print('\n')
print(pd.Series(lasso.coef_, index = X_norm.columns))
print('\nTraining MSE:',mean_squared_error(y,lasso.predict(X_norm)))

And here's the version using make_pipeline along with StandardScaler, which gives back the same answer.

a = 1 #<------ this is me picking an alpha value


# The make_pipeline command takes care of the normalization and the 
# Lasso regression for you. Note that I am passing in the un-normalized
# matrix X everywhere in here, since the normalization happens internally.
model = make_pipeline(StandardScaler(), Lasso(alpha = a))
model.fit(X, y)
 
# Get all the coefficients. Notice that in order to get 
# them out of the lasso portion, we have to ask the pipeline 
# for the specific bit we want with model.named_steps['lasso']
# in place of just lasso from above.
print('intercept:', model.named_steps['lasso'].intercept_)
print('\n')
print(pd.Series(model.named_steps['lasso'].coef_, index = X.columns))
print('\nTraining MSE:',mean_squared_error(y,model.predict(X)))

Do this: Mess with the values for \(a\) in the code, trying for example \(a=1, 10, 100, 1000\). What do you notice about the coefficients?

Your answer here
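If you want to try several values in one go, here's a minimal sketch of one possible approach, reusing the pipeline setup from above:

# Refit the lasso for several alpha values and count the zeroed-out coefficients
for a in [1, 10, 100, 1000]:
    model = make_pipeline(StandardScaler(), Lasso(alpha = a)).fit(X, y)
    n_zero = np.sum(model.named_steps['lasso'].coef_ == 0)
    print(f'alpha = {a}: {n_zero} of {X.shape[1]} coefficients are exactly 0')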

Do this: Make a graph of the coefficients from lasso as \(\alpha\) changes.

Note: we did similar things in the last class with ridge regression, so I've included the code from there for you to borrow and modify. Also note that I got a bunch of convergence warnings, but they can safely be ignored for the purposes of drawing these graphs. The command you ran in the first cell suppresses warnings, so they may not show up at all.

# Your code for computing the coefficients goes here 
# I've included the code from last class that did this for the ridge version
# so you should just need to update it to do lasso instead.


coefs = []

for a in alphas:
    model = make_pipeline(StandardScaler(), Ridge(alpha = a))
    model.fit(X, y)
    coefs.append(model.named_steps['ridge'].coef_)
    

coefs = pd.DataFrame(coefs,columns = X_norm.columns)
coefs.head()
   
# If that worked above, you'll get a graph in this code. 

for var in coefs.columns:
    # I'm greying out the ones that have magnitude below 200 for easier visualization
    if np.abs(coefs[var].iloc[-1]) < 200:
        plt.plot(alphas, coefs[var], color = 'grey', linewidth = .5)
    else:
        plt.plot(alphas, coefs[var], label = var)

plt.xscale('log')
plt.axis('tight')
plt.xlabel('alpha')
plt.ylabel('coefficients')

plt.legend()
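If you get stuck, here's a minimal sketch of the lasso version of the coefficient loop; the only change is swapping Ridge for Lasso (and 'ridge' for 'lasso' in named_steps). The plotting cell above can then be reused with lasso_coefs in place of coefs.

# Compute the lasso coefficients across the alpha list
lasso_coefs = []

for a in alphas:
    model = make_pipeline(StandardScaler(), Lasso(alpha = a))
    model.fit(X, y)
    lasso_coefs.append(model.named_steps['lasso'].coef_)

lasso_coefs = pd.DataFrame(lasso_coefs, columns = X.columns)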

Do this: Make a graph of the test mean squared error as \(\alpha\) changes for a fixed train/test split.

Note: again I’ve included code from last time that you should just have to update.

# Update this code to get the MSE for lasso instead of Ridge
X_train, X_test , y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1)

errors = []

for a in alphas:
    model = make_pipeline(StandardScaler(), Ridge(alpha = a))
    model.fit(X_train, y_train)

    pred = model.predict(X_test)
    errors.append(mean_squared_error(y_test, pred))
# If that worked, you should see your test error here.
i = np.argmin(errors) # Index of minimum 
print(f'Min occurs at alpha = {alphas[i]:.2f}')
print(f'Min MSE is {errors[i]:.2f}')

plt.title('Testing MSE')
plt.plot(alphas,errors)
plt.scatter(alphas[i],errors[i])
ax=plt.gca()
ax.set_xscale('log')
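If you get stuck, here's a minimal sketch of the lasso version of the error loop; the plotting code above works unchanged once errors holds the lasso values.

# Compute the test MSE for lasso across the alpha list
errors = []

for a in alphas:
    model = make_pipeline(StandardScaler(), Lasso(alpha = a))
    model.fit(X_train, y_train)
    errors.append(mean_squared_error(y_test, model.predict(X_test)))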

Do this: make a plot showing the number of non-zero coefficients as a function of the \(\alpha\) choice.

Hint: I used the np.count_nonzero command on the coefs data frame we already built.

# Your code here
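One possible sketch, assuming coefs now holds the lasso coefficients computed above (one row per \(\alpha\)):

# Count the non-zero coefficients for each alpha and plot
num_nonzero = [np.count_nonzero(coefs.iloc[i]) for i in range(coefs.shape[0])]

plt.plot(alphas, num_nonzero)
plt.xscale('log')
plt.xlabel('alpha')
plt.ylabel('Number of non-zero coefficients')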

Q: Say your goal was to end up with a model with 5 variables used. What choice of \(\alpha\) gives us that and what are the variables used?

# Your code here 
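A sketch of one way to search, again assuming coefs holds the lasso coefficients:

# Look for alphas whose fitted model keeps exactly 5 variables
for i, a in enumerate(alphas):
    if np.count_nonzero(coefs.iloc[i]) == 5:
        kept = list(coefs.columns[coefs.iloc[i] != 0])
        print(f'alpha = {a:.2f} keeps:', kept)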

Lasso with Cross Validation

Do this: Now repeat what we did with RidgeCV, but using LassoCV. What choice of \(\alpha\) does it recommend?

I would actually recommend either not passing in any \(\alpha\) list at all, or explicitly passing alphas = None. RidgeCV can't do this, but LassoCV will automatically try to find good choices of \(\alpha\) for you.

# Here's the ridge code from last time that you should update

X_train, X_test , y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1)

# To make sure my normalization isn't snooping, I fit the transformer only 
# on the training set 
transformer = StandardScaler().fit(X_train)
# transformer.set_output(transform = 'pandas')
X_train_norm = pd.DataFrame(transformer.transform(X_train), columns = X_train.columns)

# but in order for my output results to make sense, I have to apply the same 
# transformation to the testing set. 
X_test_norm = transformer.transform(X_test)

# Last time we dropped the 0 from the end of the alphas list because
# it made RidgeCV cranky. Our alphas list here doesn't include a 0,
# so this line just trims the smallest value.
alphas = alphas[:-1]


ridgecv = RidgeCV(alphas = alphas, 
                  scoring = 'neg_mean_squared_error', 
                  )
ridgecv.fit(X_train_norm, y_train)

print('alpha chosen is', ridgecv.alpha_)
# Your code here
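A minimal sketch of the LassoCV version, with alphas = None so it builds its own grid (cv = 10 and max_iter = 10000 are just reasonable choices here, not the only ones):

# Let LassoCV pick its own alpha grid and cross-validate over it
lassocv = LassoCV(alphas = None, cv = 10, max_iter = 10000)
lassocv.fit(X_train_norm, y_train)

print('alpha chosen is', lassocv.alpha_)
print('Test MSE:', mean_squared_error(y_test, lassocv.predict(X_test_norm)))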

Now let’s take a look at some of the coefficients.

# Some of the coefficients are now reduced to exactly zero.
pd.Series(lassocv.coef_, index=X.columns)

Q: We’ve been repeating over and over that lasso gives us coefficients that are actually 0. At least in my code, I’m not seeing many that are 0. What happened? Can I change something to get more 0 entries?

Your answer here

# You might also want some code in here to try to figure it out
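One thing worth checking is how the count of exact zeros changes as \(\alpha\) grows past the CV-chosen value; a sketch along these lines might help:

# Compare the number of exact zeros at the CV-chosen alpha vs larger ones
for a in [lassocv.alpha_, 10, 100]:
    las = Lasso(alpha = a, max_iter = 10000).fit(X_train_norm, y_train)
    print(f'alpha = {a:8.2f}: {np.sum(las.coef_ == 0)} coefficients are exactly 0')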

Congratulations, we're done!

Written by Dr. Liz Munch, Michigan State University

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.