Homework

Last homework!

HW 12 (Apr 26)

Do

  1. 3.11
  2. 3.15

HW 11 (Apr 19)

A little longer as promised, but hopefully not too hard.

Do

  1. 3.3
  2. 3.4
  3. 3.8

HW 10 (Apr 13)

Do

  1. 3.1

HW 9 (Apr 5)

Do

  1. 5.12
  2. 5.13. You need to use the definition of uniform integrability, and the following bounded convergence theorem: if $|A_n| \leq C$ for all $n$, and $A_n \to B$, then $\E[A_n] \to \E[B]$.

    This will not be graded!

    It’s called the bounded convergence theorem because the $|A_n|$ are all uniformly bounded by $C$.

    Start by reviewing equation 5.12 in the textbook, and use it to split the expectation into two terms.

    One of these terms can be proven to be small, and the other should converge to $\E[ Y 1_{Y \geq t} ]$. Prove it, and then send $t$ to $\infty$.

    I can’t see how to do it without the bounded convergence theorem right now. Any thoughts?
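
    Here's a quick numerical illustration of the bounded convergence theorem, if it helps (my own toy example, with $A_n = X^{1+1/n}$ for $X$ uniform on $[0,1]$; nothing here is specific to problem 5.13):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.uniform(0.0, 1.0, size=10**6)  # X ~ Uniform[0,1]

    # A_n = X^(1 + 1/n) satisfies |A_n| <= 1 for all n and A_n -> X pointwise,
    # so bounded convergence gives E[A_n] -> E[X] = 1/2.
    for n in [1, 2, 5, 10, 100, 1000]:
        print(n, np.mean(X ** (1 + 1 / n)))  # approaches 0.5 as n grows
    ```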

HW 8 (Mar 29)

Do problems

  1. 1.7
  2. 5.7
  3. 5.8
  4. 2.4

HW 7 (due Mar 8)

Do problems

  1. 5.1
  2. 5.4
  3. 5.5

The midterm is next week, on Thursday, Mar 8. The homework problem on branching processes should be useful for the midterm. For midterm practice, also do problems 1.11, 1.12, 2.13, 2.14, 1.14 from the textbook, and the following problem: A professor continually gives exams to her students. She can give three possible types of exams, and her class is graded as either having done well or badly. Let $p_i$ denote the probability that the class does well on a type $i$ exam, and suppose that $p_1 = 0.3$, $p_2 = 0.6$, and $p_3 = 0.9$. If the class does well on an exam, then the next exam is equally likely to be any of the three types. If the class does badly, then the next exam is always type $1$.

Draw the graph of the Markov chain, with arrows labeled by the transition probabilities. Alternatively, you may just write down the transition matrix.
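
If you'd like to check your matrix, here is a small Python sketch; the row construction follows directly from the problem statement (with probability $p_i$ the next exam is uniform over the three types, and with probability $1 - p_i$ it is type $1$):

```python
import numpy as np

p = np.array([0.3, 0.6, 0.9])  # p_i = P(class does well on a type-i exam)

# Row i: with prob p_i the next exam is uniform over types {1,2,3};
# with prob 1 - p_i it is type 1. So row i is
# [(1 - p_i) + p_i/3, p_i/3, p_i/3].
P = np.column_stack([(1 - p) + p / 3, p / 3, p / 3])
print(P)              # the transition matrix
print(P.sum(axis=1))  # sanity check: each row sums to 1
```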

HW 6

Do problems

  1. 2.3
  2. 2.8

HW 5 (due Thu 22)

Do problems

  1. 2.1
  2. 2.2

HW 4 (due Thu 15)

Do problems

  1. 1.14
  2. 1.15

  3. 1.16

  4. 1.18
  5. 1.19
  6. Let $P$ be the transition matrix of a Markov chain. In the notation used in class, if $Q$ is the matrix that represents the transitions between transient states, show that $Q^n(i,j) \to 0$ for all $(i,j)$.

    Solution

    I tried to show directly that, since $Q$ is a sub-stochastic matrix, it must have all of its eigenvalues strictly less than $1$. But this doesn't seem easy. I thought that there might be a trick using Perron-Frobenius, but I've never been good with tricks, so I couldn't find it.

    So it's probably best to show directly that you cannot stay in the transient states forever. We will derive a contradiction by counting returns. The proof implicitly assumes the strong Markov property, which we haven't proved in class, but this is quite easy to swallow.

    Let $T = \cup_{i=1}^k T_i$ be the disjoint union of all the transient communicating classes, and let $R$ contain all the recurrent states.

  1. Fix a state $i \in T_1$. We show first that there must be a way for the chain to transition from $i$ to a state $j \in R$; i.e., we show that this has non-zero probability. To establish a contradiction, we assume that you can never get from $i$ to the set $R$.

      Solution: Let $T_1,\ldots,T_k$ be all the transient communicating classes. Let $R$ represent the union of all the recurrent classes.

      Suppose you start in transient class $T_1$. There must be a way to leave class $T_1$; for if not, $T_1$ is a communicating class that you can never leave, which means it is recurrent. Let us suppose we've ordered $T_1,\ldots,T_k$ so that one of the (possibly many) classes that you can reach is $T_2$.

      We claim that you can never go back from $T_2$ to $T_1$. If it were possible to go back, then $T_1$ and $T_2$ would communicate; this violates our assumption that they are distinct communicating classes. Arguing this way, we find that there must be a path from $i$ to a final communicating class $T_k$. Now, we cannot have an edge from $T_k$ to any earlier class $T_i$ for $i=1,\ldots,k-1$: if there were, $T_i$ and $T_k$ would communicate, and this is not allowed. This means that $T_k$ must have a state from which there is a positive probability of reaching $R$. In other words, there is a $k_i$ such that $p^{k_i}(i,r) > 0$ for some $r \in R$.

    2. Next we must show that if $i \in T$, $\Prob_i( \exists\, n > 0 \text{ s.t. } X_n \in R ) = 1$. To arrive at a contradiction, suppose again that we never exit the transient states. Since $T$ is a finite set, there must be at least one state that is repeated infinitely many times. Let us show this semi-rigorously. Let $N_j = \#\{ n \geq 0 : X_n = j \}$ be the number of visits to state $j$.

      Then, if we never leave the transient classes, we must certainly visit at least one of the states in the finite set $T$ infinitely many times with positive probability: $\Prob_i\big( N_j = \infty \text{ for some } j \in T \big) > 0$.

      Let $\rho(j,j) = \Prob_j\big( \exists\, n > 0 \text{ s.t. } X_n = j \big)$ be the probability that the chain started at $j$ returns to $j$.

      Let $R_j^{(k)}$ be the $k^\text{th}$ return to $j$. Then, $\Prob_j\big( N_j = \infty \big) = \lim_{k \to \infty} \Prob_j\big( R_j^{(k)} < \infty \big) = \lim_{k \to \infty} \rho(j,j)^k$.

      This is non-zero iff $\rho(j,j) = 1$ for at least one $j \in T$. Here again, we've used the strong Markov property to say that the successive return times to $j$ are independent.

      However, by part 1 there is a positive probability of reaching $R$ from $j$, and once the chain enters $R$ it can never return to $j$; hence $\rho(j,j) < 1$. This is a contradiction.
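
    Here is a quick numerical sanity check of the claim (the three-state chain below is a toy example of mine, with states $0$ and $1$ transient and state $2$ absorbing):

    ```python
    import numpy as np

    # Toy chain: states 0 and 1 are transient, state 2 is absorbing.
    P = np.array([[0.5, 0.3, 0.2],
                  [0.4, 0.4, 0.2],
                  [0.0, 0.0, 1.0]])
    Q = P[:2, :2]  # sub-stochastic block of transitions between transient states

    for n in [1, 5, 10, 50]:
        print(n, np.linalg.matrix_power(Q, n).max())
    # The largest entry of Q^n tends to 0, as the argument above shows.
    ```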

HW 3

Problem 1: Conditional probability redux.

Let $A_0, A_1, \ldots, A_n$ be events of non-zero probability.

  • Show that $\Prob(A_0 \cap A_1 \cap \cdots \cap A_n) = \Prob(A_0) \, \Prob(A_1 \mid A_0) \, \Prob(A_2 \mid A_0 \cap A_1) \cdots \Prob(A_n \mid A_0 \cap \cdots \cap A_{n-1})$.

  • Suppose we have some stochastic process with state space $S = \{ a, b, c \}$. What does the equation in the previous part say about the event $\{ X_0 = x_0, X_1 = x_1, \ldots, X_n = x_n \}$, where each $x_i \in S$?

  • Suppose the process is Markov; simplify the equation you obtained above.
  • Give me a simple (but mathematical) example of a process that is not Markovian. (If you say “Stock Market” or “the weather”, you get zero points).

Problem 2:

Consider a two-state Markov chain with $S = \{0,1\}$. Let $p(0,1) = 1/2$ and $p(1,0) = 2/3$. Let $T$ be the first return time to $0$ given that you started in state $0$.

  • Compute $\E[T]$ using the stationary distribution of the chain.

  • Find $\Prob(T \geq n)$ by computing $\Prob(X_1 = 1, X_2 = 1, \ldots, X_{n-1} = 1)$.

  • Compute $\E[T]$ using the formula $\E[T] = \sum_{n=1}^{\infty} \Prob(T \geq n)$.
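
If you want to sanity-check your answers, here is a small simulation of the return time (the scaffolding is mine; compare its output with the value you compute):

```python
import numpy as np

rng = np.random.default_rng(0)
p01, p10 = 1 / 2, 2 / 3  # p(0,1) and p(1,0)

def return_time():
    # Start at 0 and count steps until the chain first comes back to 0.
    state, steps = 0, 0
    while True:
        if state == 0:
            state = 1 if rng.random() < p01 else 0
        else:
            state = 0 if rng.random() < p10 else 1
        steps += 1
        if state == 0:
            return steps

print(np.mean([return_time() for _ in range(10**5)]))  # estimate of E[T]
```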

Problem 3:

  • Suppose $a_n$ is a sequence of numbers such that $a_n \to a$. Show that $\frac{1}{n} \sum_{k=1}^{n} a_k \to a$. Hint: Use the $\epsilon$-$\delta$ definition of a limit to prove this; a numerical check of the statement appears below.

  • Suppose $X$ is a Markov chain that's irreducible and has period $d > 1$. Let $Y(n,i)$ be the amount of time spent in state $i$ in $n$ steps of the chain. We've seen that $p^{md}(i,i) \to d\,\pi(i)$ as $m \to \infty$. Show that $A = \E[Y(n,i)]/n \to \pi(i)$, where $\pi$ is the unique stationary probability distribution.
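
For intuition on the first part, here is a quick numerical check of the averaging statement, with the example sequence $a_n = 1 + 1/n$ (my choice):

```python
import numpy as np

n = np.arange(1, 10**5 + 1)
a = 1 + 1 / n            # a_n -> 1
avg = np.cumsum(a) / n   # running averages (1/n) * sum_{k <= n} a_k
print(a[-1], avg[-1])    # both are close to the limit, 1
```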

Problem 4

A thinker who owns 4 umbrellas travels back and forth between home and office, taking along an umbrella (if there is one at hand) in rain (probability $p$) but not in shine (probability $q = 1 - p$). Let the state be the number of umbrellas at hand, irrespective of whether the thinker is at home or at the office. Suppose he has two umbrellas at home and two at work, and he's at home; then the state is $X_0 = 2$. If it rains that day, he takes one umbrella from home to work, and so he has three umbrellas at work; the state is now $3$.

  1. Set up the transition matrix and find the stationary probabilities.

  2. What is the average number of umbrellas you’d have on hand in the long term?

  3. Suppose you start with $0$ umbrellas. What is the average amount of time until you have $0$ umbrellas again?
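
If you want to check your stationary probabilities numerically, here is a direct simulation of the umbrella dynamics as stated (the value $p = 0.6$ is an arbitrary example):

```python
import numpy as np

rng = np.random.default_rng(0)
p = 0.6          # chance of rain on any given trip (example value)
at_hand = 2      # X_0 = 2: umbrellas at the current location
counts = np.zeros(5)

for _ in range(10**6):
    if at_hand > 0 and rng.random() < p:
        # Rain: carry one umbrella along, so at the destination there are
        # (4 - at_hand) + 1 umbrellas at hand.
        at_hand = (4 - at_hand) + 1
    else:
        # Shine, or no umbrella available: travel empty-handed.
        at_hand = 4 - at_hand
    counts[at_hand] += 1

print(counts / counts.sum())  # empirical long-run fraction of time in each state
```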

HW 2 (due Thu Feb 1)

Homework 2 is mostly from the textbook. I've copied the problems here for your convenience, so that you have time to get your own copy.

Solution: Some Mathematica calculations may be found here.

Problem A.

Read Example 5 from the textbook, which describes a random walk on a graph. Define a random walk on a directed graph as follows. Label the vertices of the graph $\{1,\ldots,n\}$. The ordered pair $(i,j)$ of vertices $i$ and $j$ corresponds to an edge from $i$ to $j$. Let $S_i$ be the set of vertices $j$ such that $(i,j)$ is an edge (it's the set of vertices that $i$ points to). Let $p(i,j) = 1/|S_i|$ if $j \in S_i$, and $p(i,j) = 0$ otherwise. Formulate the simple random walk with reflecting and absorbing barriers as a random walk on a directed graph.
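
Here is a minimal Python sketch of the transition rule (the adjacency lists are a made-up example; note that vertex $3$, which points only to itself, behaves like an absorbing barrier):

```python
import numpy as np

# Made-up directed graph on vertices {0, 1, 2, 3}: S[i] lists the
# vertices that i points to.
S = {0: [1, 2], 1: [2], 2: [0, 3], 3: [3]}
n = len(S)

# From i, jump to a uniformly chosen vertex of S_i,
# i.e. p(i,j) = 1/|S_i| for j in S_i and 0 otherwise.
P = np.zeros((n, n))
for i, targets in S.items():
    P[i, targets] = 1 / len(targets)
print(P)
```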

HW 1 (due Thu Jan 25)

  1. Plot the distribution function

    • Determine the corresponding density function $f(x)$ in the three regions.
    • What’s the mean of the distribution?
    • If $X$ is a random variable with distribution $F$, then evaluate $\mathbb{P}(1/4 \leq X \leq 3/4)$.
  2. Let $X$ and $Y$ be independent random variables having distributions $F_X$ and $F_Y$ respectively.

    • Let $Z = \max(X,Y)$. Express $F_Z(t)$ in terms of $F_X$ and $F_Y$.
    • Let $W = \min(X,Y)$. Express $F_W(t)$ in terms of $F_X$ and $F_Y$.
  3. Determine the distribution function, mean and variance corresponding to the triangular density

  4. Let $1_{A}$ be the indicator random variable associated with an event $A$, defined to be one if $A$ occurs, and zero otherwise. Show

    • $1_{A^c} = 1 - 1_{A}$
    • $1_{A \cap B} = 1_{A} 1_{B} = \min( 1_{A}, 1_{B} )$
    • $1_{A \cup B} = \max( 1_{A}, 1_{B} )$.
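
    You can sanity-check these identities numerically, with $0/1$ arrays standing in for the indicators (this scaffolding is mine, not part of the problem):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.integers(0, 2, size=1000)  # sampled values of 1_A
    B = rng.integers(0, 2, size=1000)  # sampled values of 1_B

    union = ((A == 1) | (B == 1)).astype(int)  # 1_{A union B}
    inter = ((A == 1) & (B == 1)).astype(int)  # 1_{A intersect B}
    comp = (A == 0).astype(int)                # 1_{A^c}

    assert np.array_equal(comp, 1 - A)
    assert np.array_equal(inter, A * B)
    assert np.array_equal(inter, np.minimum(A, B))
    assert np.array_equal(union, np.maximum(A, B))
    print("all three identities hold on this sample")
    ```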
  5. Let $N$ cards carry the distinct numbers $x_1,\ldots,x_N$. If two cards are drawn at random without replacement, show that the correlation coefficient $\rho$ between the numbers appearing on the two cards is $-1/(N-1)$.
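
    A quick empirical check of the claimed value (the simulation is my own scaffolding, with $N = 10$ and the numbers $1,\ldots,N$ as an example):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    N = 10
    x = np.arange(1, N + 1)  # any distinct numbers work; use 1..N for concreteness

    # Draw two cards without replacement, many times.
    draws = np.array([rng.choice(x, size=2, replace=False) for _ in range(10**5)])
    rho = np.corrcoef(draws[:, 0], draws[:, 1])[0, 1]
    print(rho, -1 / (N - 1))  # empirical correlation vs. the claimed -1/(N-1)
    ```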

  6. A pair of dice is tossed. If the two outcomes are equal, the dice are tossed again, and the process is repeated. If the dice are unequal, their sum is recorded. Determine the probability mass function for the sum.
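
    If you want to check your answer, here is a simulation of the experiment (mine, not required for the homework):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def recorded_sum():
        # Toss two dice until they differ, then record the sum.
        while True:
            a, b = rng.integers(1, 7, size=2)
            if a != b:
                return a + b

    sums = np.array([recorded_sum() for _ in range(10**5)])
    for s in range(3, 12):  # sums 2 and 12 are impossible (they need equal dice)
        print(s, np.mean(sums == s))  # empirical pmf of the recorded sum
    ```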