10 In-Class Assignment: Eigenproblems
Image from: https://campusinvolvement.umich.edu/
1. Pre Class Review
Definition. Let \(A\) be an \(n\times n\) matrix. A non-zero vector \(x\) in \(\mathbb{R}^n\) is called an eigenvector of \(A\) if

$$Ax = \lambda x$$

for some scalar \(\lambda\), which is called the eigenvalue corresponding to \(x\).

The above can be rewritten as the following homogeneous equation:

$$(A - \lambda I_n)x = 0$$

The trivial solution is \(x=0\). To find nontrivial solutions (the eigenvectors), you need the determinant of \(A - \lambda I_n\) to be zero, i.e.

$$|A-\lambda I_n| = 0.$$

Once you know the eigenvalues \(\lambda\), you use \((A-\lambda I_n)x = 0\) to find the corresponding eigenvectors.
✅QUESTION: Find the eigenvalues of

$$A=\begin{bmatrix} 2 & 3 \\ 3 & -6 \end{bmatrix}$$

and the corresponding eigenvectors.
Put your answer here
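If you want to sanity-check your hand computation afterwards, here is a minimal sketch using numpy.linalg.eig. This is only a numerical check, not the intended by-hand solution.

import numpy as np

A = np.array([[2, 3],
              [3, -6]])

# np.linalg.eig returns the eigenvalues and a matrix whose columns are
# the corresponding (unit-length) eigenvectors.
eigenvalues, eigenvectors = np.linalg.eig(A)

print(eigenvalues)   # compare with your hand-computed eigenvalues
print(eigenvectors)  # column i is an eigenvector for eigenvalues[i]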
2. Markov Models
In probability theory, a Markov model is a stochastic model used to model randomly changing systems. It is assumed that future states depend only on the current state, not on the events that occurred before it.
A diagram representing a two-state Markov process, with the states labelled E and A.
Each number represents the probability of the Markov process changing from one state to another state, with the direction indicated by the arrow. For example, if the Markov process is in state A, then the probability it changes to state E is 0.4, while the probability it remains in state A is 0.6.
From: Wikipedia
The above state model can be represented by a transition matrix.
In other words, we can write the above as follows:
%matplotlib inline
import matplotlib.pylab as plt
import numpy as np
import sympy as sym
sym.init_printing(use_unicode=True)
A = np.matrix([[0.6, 0.7],[0.4, 0.3]])
sym.Matrix(A)
Notice how the columns in the matrix all add to one. This is because all of the transition probabilities out of a state must add up to 100 percent.
A Markov matrix \(P\) always has an eigenvalue of \(1\). All other eigenvalues have absolute value less than or equal to \(1\).
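As a quick numerical illustration of this property, here is a minimal sketch using the two-state transition matrix defined above:

import numpy as np

A = np.matrix([[0.6, 0.7],[0.4, 0.3]])

# One eigenvalue of a Markov (column-stochastic) matrix is 1;
# every other eigenvalue has absolute value at most 1.
eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)                  # one entry should be (numerically) 1
print(np.max(np.abs(eigenvalues)))  # should be 1.0, up to round-off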
Now, consider the following house map with cats in each room…
At each time step, there is an equal probability of a cat staying in its current room or moving to a new room. If a cat chooses to leave a room, then there is an equal chance of that cat picking any of the doors in the room to leave through.
✅DO THIS : Try to draw a Markov chain (Markov matrix) for the above system of equations. Be prepared to share your diagram with the class.
A Markov chain can be represented as a Markov transition model of the form \(Ax=b\), where \(A\) is the probability transition matrix (often written as \(P\) instead of \(A\)), \(x\) is the state before the transition, and \(b\) is the state after the transition.
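For example, here is a minimal sketch of a single transition step using the two-state matrix from above; the starting state vector (100 individuals in the first state, 0 in the second) is just an illustrative assumption:

import numpy as np

A = np.matrix([[0.6, 0.7],[0.4, 0.3]])

# Hypothetical starting state: 100 individuals in the first state, 0 in the second.
x = np.matrix([[100],
               [0]])

# One transition step: the state after the step is b = A x.
b = A*x
print(b)   # expected: 60 in the first state, 40 in the second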
✅QUESTION: Generate a Markov transition model represented as a matrix \(P\) of the form:

$$
\begin{array}{cc}
 & \text{Current Room} \\
P = & \begin{bmatrix} p_{11} & p_{12} & p_{13} \\ p_{21} & p_{22} & p_{23} \\ p_{31} & p_{32} & p_{33} \end{bmatrix}
\end{array}
\quad \text{Next Room}
$$

where \(p_{ij}\) is the probability of the cat moving from room \(j\) to room \(i\) (columns index the current room, rows index the next room):
##put your answer here
from answercheck import checkanswer
checkanswer.matrix(P,'1001a6fa07727caf8ce05226b765542c');
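Whatever room layout you come up with, one useful sanity check is that every column of a transition matrix must sum to one. Here is a minimal sketch; P_example is a made-up stand-in, not the answer for the house above.

import numpy as np

# Made-up stand-in matrix; replace with your own P from above.
P_example = np.matrix([[0.50, 0.25, 0.25],
                       [0.25, 0.50, 0.25],
                       [0.25, 0.25, 0.50]])

# A cat in any room must end up somewhere with probability 1,
# so every column must sum to 1.
print(np.allclose(np.asarray(P_example).sum(axis=0), 1.0))   # True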
✅QUESTION: Let’s assume that the system starts with 6 cats in room 1, 15 cats in room 2, and 3 cats in room 3. On average, how many cats will be in each room after one time step? (Store the values in a vector called current_state.)
#Put your answer to the above question here.
from answercheck import checkanswer
checkanswer.vector(current_state,'98d5519be82a0585654de5eda3a7f397');
✅QUESTION: The following code will plot the expected number of cats as a function of time (\(t\)). When this system converges, what is the steady state?
#Define Start State
room1 = [6]
room2 = [15]
room3 = [3]
current_state = np.matrix([room1, room2, room3])
for i in range(10):
    #update Current State
    current_state = P*current_state

    #Store history for each room (extract the scalar value for plotting)
    room1.append(current_state[0,0])
    room2.append(current_state[1,0])
    room3.append(current_state[2,0])
plt.plot(room1, label="room1");
plt.plot(room2, label="room2");
plt.plot(room3, label="room3");
plt.legend();
print(current_state)
Inspect the generated graph and put your answer to the above question here
✅QUESTION: Calculate the eigenvalues and eigenvectors of your \(P\) transition matrix.
##put your answer here
The eigenvector associated with the eigenvalue of 1 represents the steady state probability for our original transition matrix \(P\).
✅DO THIS: Make a new vector called steadystate from the eigenvector of your \(P\) matrix with an eigenvalue of 1.
## Put your answer here
Since the steadystate vector represents long-term probabilities, its entries should sum to one (1). However, many programming libraries (including numpy) return eigenvectors normalized to length 1 (i.e. norm(e)==1).
✅DO THIS: Correct for the normalization by multiplying the steadystate eigenvector by a constant such that the vector elements sum to 1.
#Put your answer here
✅DO THIS: The steady state represents the proportion of cats that will be in each room. Since we know the initial number of cats, we can find how the cats arrange themselves in the steady state by multiplying steadystate by the total number of cats. The resulting vector should be identical to the steady state in the above chart.
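Here is a minimal sketch of those last few steps (picking the eigenvector for the eigenvalue 1, rescaling it to sum to one, and multiplying by the 24 total cats); P_example is again a made-up stand-in, not the answer for the house above.

import numpy as np

# Made-up stand-in matrix; replace with your own P from above.
P_example = np.matrix([[0.50, 0.25, 0.25],
                       [0.25, 0.50, 0.25],
                       [0.25, 0.25, 0.50]])

eigenvalues, eigenvectors = np.linalg.eig(P_example)

# Pick the eigenvector whose eigenvalue is (numerically) 1.
index = np.argmin(np.abs(eigenvalues - 1.0))
steadystate = np.asarray(eigenvectors[:, index]).flatten()

# Rescale so the entries sum to 1 (this also fixes an overall minus sign
# if numpy happened to return the eigenvector pointing the other way).
steadystate = steadystate / steadystate.sum()

# Scale by the total number of cats (6 + 15 + 3 = 24).
print(steadystate * 24)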
✅QUESTION: Why does the sum of the numbers at every stage remain the same?
Put your answer here
3. Another Markov Model Example
A sports broadcaster wishes to predict how many Michigan residents prefer University of Michigan teams and how many prefer Michigan State teams. She noticed that, year after year, most people stick with their preferred team; however, about 5% of Michigan fans switch to Michigan State, and about 3% of Michigan State fans switch to Michigan each year. Despite this, the preferences of the state’s population of 10 million show no noticeable change at large; in other words, it seems Michigan sports fans have reached a stationary distribution. What might that be?
This problem is from https://brilliant.org/wiki/stationary-distributions/
✅DO THIS: Try to draw a Markov chain for the above system of equations. Discuss your diagram with your classmate.
✅ QUESTION: Write a system of linear equations that represents how the populations change each year. Check your equations by writing the matrix P for the probability transition matrix in your equations. Make sure your first row/column represents MSU and the second row/column represents UofM.
Put your answer here
from answercheck import checkanswer
checkanswer.vector(P,'1d3f7cbebef4b610f3b0a2d97609c81f');
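As a sketch of how switch rates turn into a column-stochastic matrix, here is an illustration with placeholder values; s_msu, s_um, and P_sketch are made-up names and numbers, so substitute the percentages from the problem statement.

import numpy as np

# Placeholder switch rates (made-up values, not the ones from the problem):
s_msu = 0.10   # fraction of MSU fans who switch to UofM each year
s_um  = 0.20   # fraction of UofM fans who switch to MSU each year

# Column j holds the outgoing probabilities for fan base j,
# with row/column 1 = MSU and row/column 2 = UofM.
P_sketch = np.matrix([[1 - s_msu, s_um    ],
                      [s_msu,     1 - s_um]])

print(P_sketch.sum(axis=0))   # each column should sum to 1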
✅ QUESTION: Use Numpy to calculate the eigenvalues and eigenvectors of your \(P\) transition matrix.
#Put the answer to the above question here.
✅ QUESTION: Assuming both teams start with 500,000 fans each (so 1,000,000 fans total), what is the steady state of this model? (I.e. in the long term how many Spartan and Wolverine fans will there be?).
#Put your answer here
steadystate
from answercheck import checkanswer
checkanswer.float(spartans,'06d263de629f4dbe51eafd524b69ddd9');
from answercheck import checkanswer
checkanswer.float(wolverines,'62d63699c8f7b886ec9b3cb651bba753');
4. Page Rank
Google’s famous “Page Rank” algorithm is just a calculation of the likelihood that a person will reach a particular web page.
To explain the algorithm, we model the web as an oriented graph. The vertices are the pages, and the oriented edges are the links between pages. As we just explained, each page corresponds to a different url. Hence, a website may contain many pages. The model makes no distinction between the individual pages of a website and its front page. But, most likely, the algorithm will rank the front page of an important website higher.
From: http://blog.kleinproject.org/?p=280
Using the vocabulary from this course, a simplified version of Google’s Page Rank is the steady-state probabilities of the Markov state model where each node in the network is a page and each transition probability is calculated as an equal probability from the current page to each of its linked pages.
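As a sketch of that construction, here is how one might build such a transition matrix from a link structure; the links dictionary below is a made-up example, not the webpage model used in this problem.

import numpy as np

# Made-up link structure: links[j] lists the pages that page j links to.
links = {0: [1, 2],    # page 0 links to pages 1 and 2
         1: [0],       # page 1 links back to page 0
         2: [0, 1]}    # page 2 links to pages 0 and 1

n = len(links)
P_web = np.zeros((n, n))
for j, targets in links.items():
    for i in targets:
        # Equal probability of following any of page j's outgoing links.
        P_web[i, j] = 1.0 / len(targets)

print(P_web)
print(P_web.sum(axis=0))   # every column sums to 1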
For this problem, consider the following Markov model of a generic webpage.
✅ QUESTION: Find the state transition matrix for the above system. Let’s consider Home = state 1, About = state 2, Resume = state 3, Blog = state 4, and let \(p_{ij}\) be the probability transition from state \(j\) to state \(i\).
# Define the matrix here
✅ QUESTION: Using numpy, calculate the eigenvalues and eigenvectors of the model.
# Calculate the eigenvalues and eigenvectors here
✅ QUESTION: Again, calculate the steady state of this model. Remember that the steady state vector is an eigenvector corresponding to the eigenvalue \(1\) which is scaled so that the sum of its entries is \(1\).
# Put your code here
✅ QUESTION: Based on the long-term steady state vector that you calculated, on which page are people likely to spend the most time? Explain how you came up with the answer.
Put the answer to the above question here.
✅ QUESTION: If exactly 10,000 people are on the website at a time (assume this is constant), estimate how many people will be on the About page over the long term. Explain how you came up with your answer.
Put the answer to the above question here.
Written by Dr. Dirk Colbry, Michigan State University
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.