MATH 5050 Homework


HW 4

Problem 0

Simulate a trajectory of Brownian motion on the interval $[0,1]$ on a mesh with $10000$ to $100000$ points. Figure out how to do this by reading the textbook.

Simulate a random walk trajectory with $\pm 1$ increments for $n$ steps. Scale the random walk trajectory so that it compares with a Brownian motion trajectory on $[0,1]$. Try it for $(n=100,1000,10000)$.

Plot trajectories of Brownian motion and the random walk on the same figure.
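One possible way to set this up (a sketch, not the textbook's recipe; it assumes numpy is available, and the plotting calls assume matplotlib, so they are left commented out):

```python
import numpy as np

rng = np.random.default_rng(0)

# Brownian motion on [0, 1]: cumulative sum of N(0, dt) increments.
m = 10_000                      # mesh points (anywhere from 10^4 to 10^5)
dt = 1.0 / m
t = np.linspace(0.0, 1.0, m + 1)
B = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), m))])

# Random walk with +/-1 increments; scaling space by 1/sqrt(n) and time by
# 1/n puts it on [0, 1] at the same scale as B.
n = 1_000
S = np.concatenate([[0.0], np.cumsum(rng.choice([-1.0, 1.0], size=n))]) / np.sqrt(n)
t_walk = np.linspace(0.0, 1.0, n + 1)

# Uncomment to plot both trajectories on the same figure (needs matplotlib):
# import matplotlib.pyplot as plt
# plt.plot(t, B, label="Brownian motion")
# plt.step(t_walk, S, where="post", label="scaled random walk")
# plt.legend(); plt.show()
```

Rerun with $n = 100, 1000, 10000$ to see the walk converge visually to the Brownian trajectory's roughness.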

Problem 1

Let $B(t)$ be standard Brownian motion.

a. Calculate numerically $\int_0^t B(s)\,dB(s)$ for $0\le t\leq 10$. Use the leftmost points of the partition intervals and then the rightmost points. Plot both solutions (using the same copy of Brownian motion, not two independent copies).

b. Based on the plots, which of the two is the right one and why?

c. Calculate $\int_0^t B(s)\,dB(s)$ explicitly (not numerically).

Hint: Use Ito’s formula with $f(x)=x^2$.

d. If you use the same Brownian motion for the plots in (a) as well as to plot the explicit solution you got in (c), then the latter should match one of the former two. Is this the case? Does this agree with your answer to (b)?
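For part (a), a minimal numpy sketch of the two endpoint choices on one shared path (the plotting lines assume matplotlib and are commented out); the same arrays can be reused for the comparison in (d):

```python
import numpy as np

rng = np.random.default_rng(1)

# One Brownian path on [0, 10] and the two Riemann-type sums for
# int_0^t B(s) dB(s): left endpoints vs right endpoints of each interval.
n = 100_000
T = 10.0
dt = T / n
t = np.linspace(0.0, T, n + 1)
dB = rng.normal(0.0, np.sqrt(dt), n)
B = np.concatenate([[0.0], np.cumsum(dB)])

left = np.concatenate([[0.0], np.cumsum(B[:-1] * dB)])    # left endpoints
right = np.concatenate([[0.0], np.cumsum(B[1:] * dB)])    # right endpoints

# Uncomment to plot both curves against t on the same figure:
# import matplotlib.pyplot as plt
# plt.plot(t, left, label="left endpoints")
# plt.plot(t, right, label="right endpoints")
# plt.legend(); plt.show()
```

Note that the gap `right - left` is exactly the running sum of $(\Delta B)^2$, the quadratic variation of the path, which is what makes the endpoint choice matter here.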

Problem 2

Let $B(t)$ be standard Brownian motion. Let $\lambda$ be a positive constant and let $X(t)=e^{\lambda B(t)}$.

a. Apply Ito’s formula to the function $f(x)=e^{\lambda x}$ to prove that $X(t)$ solves $dX(t) = \lambda X(t)\,dB(t) + \frac{\lambda^2}{2} X(t)\,dt$.

b. Let $A(t)=E[X(t)]$. Observe that $A(0)=1$. Using the fact that a stochastic integral is a martingale, prove that $A(t)=1+\frac{\lambda^2}2\int_0^t A(s)\,ds$.

c. The equation in (b) says that $A’(t)/A(t)=\frac{\lambda^2}2$. Is this a stochastic differential equation? Integrate both sides (and use $A(0)=1$) to find the formula for $A(t)$.

d. What is the distribution of $B(t)$? Can you compute $E[e^{\lambda B(t)}]$ directly? Does your answer agree with (c)?
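A small Monte Carlo sanity check for part (d) (a sketch assuming numpy; the constants $\lambda$ and $t$ below are arbitrary choices): since $B(t) \sim N(0,t)$, we can sample it directly and compare the empirical mean of $e^{\lambda B(t)}$ against the formula you obtain.

```python
import math
import numpy as np

rng = np.random.default_rng(2)

# B(t) ~ N(0, t), so sample it directly and average exp(lambda * B(t)).
lam, t = 0.5, 2.0          # arbitrary illustrative constants
samples = np.exp(lam * rng.normal(0.0, math.sqrt(t), size=200_000))
estimate = samples.mean()
exact = math.exp(lam ** 2 * t / 2)   # the closed form from (c)
print(estimate, exact)
```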

HW 3

Due Apr 10

Question 1

Prove the Chapman-Kolmogorov equation. Let $p_t(x,y)$ be the transition density of Brownian motion.

  1. Show, using the explicit formula for $p_t(x,y)$, that $p_{s+t}(x,y) = \int_{-\infty}^{\infty} p_s(x,z)\,p_t(z,y)\,dz$.
  2. Can you show this using the Markov property of Brownian motion? (See page 177 of Lawler.) Hint: the expectation turns into a probability. To find the density, it may be easier to work with the CDF instead. For example, the probability that the BM is in a set $A$ at time $t$ is an integral of the transition density over $A$.

Question 2

Brownian scaling.

  1. If $B(t)$ is a BM, what is the transition density of $a B(t/b)$ for constants $a \neq 0$ and $b > 0$?
  2. For any constant $s > 0$, show that $\frac{1}{\sqrt{s}} B(s t)$ is a BM by checking that it satisfies all the defining properties of Brownian motion. Note: this shows BM is self-similar. If you scale time and space appropriately (think of this as zooming into the BM trajectory), BM looks the same.
  3. We saw the law of large numbers for BM. First use BM scaling to show that $W(t) = t B(1/t)$ for $t > 0$, with $W(0) = 0$, is a BM. Give an alternate proof of the law of large numbers using $W(t)$.
  4. We saw that Use Brownian scaling to find an expression for Which proof did this formula appear in?
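A quick empirical check of part 2 at a single time (a sketch assuming numpy; $s$ and $t$ below are arbitrary): $W(t) = B(st)/\sqrt{s}$ should have mean $0$ and variance $t$, just like Brownian motion, and since $B(st) \sim N(0, st)$ we can sample it directly rather than simulate a path.

```python
import numpy as np

rng = np.random.default_rng(3)

# W(t) = B(s*t)/sqrt(s); check its mean and variance empirically.
s, t = 4.0, 1.5
W_t = rng.normal(0.0, np.sqrt(s * t), size=200_000) / np.sqrt(s)
print(W_t.mean(), W_t.var())   # compare with 0 and t
```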

Question 3

Show that $B^2_t - t$ is a martingale using the Markov property of Brownian motion. Suppose $B(0) = x$ with $x$ in the interval $[0,a]$, where $0$ and $a$ are absorbing boundaries. Let $T$ be the absorption time. Use the stopping theorem to find $\E[ T ]$ and $\Prob( B(T) = 0)$. Justify all your steps and the use of the stopping theorem.
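Since the question asks for $\E[T]$ and $\Prob(B(T)=0)$, here is a Monte Carlo sketch (assuming numpy; $a = 1$, $x = 0.3$ and the time step are arbitrary choices) that you can check your optional-stopping answers against:

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulate BM paths started at x on [0, a] on a fine time grid until
# absorption, and estimate P(B(T) = 0) and E[T].
a, x, dt = 1.0, 0.3, 1e-4
n_paths = 2_000
b = np.full(n_paths, x)
t = np.zeros(n_paths)
alive = np.ones(n_paths, dtype=bool)
while alive.any():
    b[alive] += rng.normal(0.0, np.sqrt(dt), size=alive.sum())
    t[alive] += dt
    alive &= (b > 0.0) & (b < a)      # paths outside [0, a] are absorbed
p_zero = (b <= 0.0).mean()
mean_T = t.mean()
print(p_zero, mean_T)
```

The estimates carry both Monte Carlo noise and a small discretization bias from the time grid, so expect agreement only to a couple of decimal places.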

Question 4

Let $B(t) = ( B_1(t), \ldots, B_n(t))$ be a vector of independent one-dimensional BMs, let $f \colon \R^n \to \R$ be a smooth function, and let $\E_x$ denote the expectation when $B(0) = x$. Show using Taylor’s theorem that $\lim_{t \to 0} \frac{\E_x[ f(B(t)) ] - f(x)}{t} = \frac12 \Delta f(x)$, where $\Delta$ is the Laplacian.

Question 5

Multidimensional Gaussians. Let $Z = (Z_1,\ldots,Z_n)$ be a vector of $n$ iid standard Gaussian random variables.

  1. Show that their joint density function is given by $f_Z(\vec{z}) = \frac{1}{(2\pi)^{n/2}} e^{-\frac12 \vec{z}^T \vec{z}}$.
  2. Consider the random vector $Y = A Z$, where $A$ is a symmetric, invertible matrix. Show that the joint density function of $Y$ is given by $f_Y(\vec{y}) = \frac{1}{(2\pi)^{n/2} |\det A|} e^{-\frac12 \vec{y}^T A^{-2} \vec{y}}$, where $\vec{y}^T$ is the transpose of the column vector $\vec{y} = (y_1,\ldots,y_n)$. We say that such a random vector $Y$ is jointly normal. Hints: you will need the multidimensional change of variables formula. Also use the fact that $A^T = A$. Sometimes it’s useful to write the $i^{th}$ component of $\vec{y}$ as $\vec{y}_i = A_{ij} \vec{z}_j$, where the repeated index $j$ on the right-hand side indicates that it’s being summed over (Einstein’s summation notation).
  3. The covariance matrix of a vector $Z = (Z_1,\ldots,Z_n)$ has entries $\Sigma_{ij} = \E[ Z_i Z_j ] - \E[ Z_i ]\,\E[ Z_j ]$. What is the covariance matrix of the vector $Y$ defined in the previous part? Hint: use the independence of $Z$ from part $1$ and $Y = AZ$.
  4. What is the covariance matrix of the vector $Y = (B(t_1),\ldots,B(t_n))$, where $B$ is a Brownian motion and $0 < t_1 < \cdots < t_n$? What is the joint density function of $Z = \{ B(t_{i+1}) - B(t_{i}) \}_{i=1}^{n-1}$? Is $Y$ a jointly normal random variable?
  5. Do problem 8.6 now.
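Parts 2 and 3 can be checked numerically; a sketch assuming numpy, with an arbitrary symmetric $2\times 2$ matrix $A$ standing in for the general case:

```python
import numpy as np

rng = np.random.default_rng(5)

# For Y = A Z with Z iid standard normal and A symmetric, the covariance
# matrix of Y should be A A^T = A^2.
A = np.array([[2.0, 1.0], [1.0, 3.0]])   # arbitrary symmetric example
Z = rng.standard_normal((2, 500_000))
Y = A @ Z
print(np.cov(Y))    # empirical covariance of Y
print(A @ A)        # compare with A^2
```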

Question 6

Can you have a discontinuous BM with the right distributions? Let $U$ be a uniform random variable on $[0,1]$ and let it be independent of a standard BM $B(t)$. Define $X(t) = B(t) + 1_{\{t = U\}}$. Show that $X(0) = 0$ with probability $1$, and show that for any fixed times $0 < t_1 < \cdots < t_n$ the increments of $X$ are independent and jointly Gaussian with the same density function as those of Brownian motion. Hint: what is the probability that $U = t_i$ for some $i = 1,\ldots,n$?

Question 7

Do problems 8.1, 8.3 and 8.4

Question 8

Do problems 8.10, 8.11, 8.12 and 8.15

HW 2

Due Feb 27.

Problem 1

Do problem 1.7; this will be useful later.

Problem 2 (sigma algebras)

A $\sigma$-algebra $\mathcal{F}$ is a collection of subsets of a space $\Omega$ that is closed under countable unions, intersections, and complements. It also contains the empty set $\emptyset$ and the entire space $\Omega$ itself. That’s all there is to it.

A $\sigma$-algebra generated by a random variable $X$ ($\sigma(X)$) contains all sets of the form $\{ X \leq a \}$.

  1. Show that $\sigma(X)$ contains sets of the form $\{ X > a \}$.
  2. What about $\{ X = \infty \}$? Hint: can you write this as an intersection of a countable number of sets?
  3. If $Y$ is independent of $X$, and $E$ is an event (set) in $\sigma(X)$, what can we say about
  4. $Y$ is “determined by” or “measurable with respect to” some $\sigma$-algebra $\sigma(X)$ if events of the form $\{ Y \leq a \} \in \sigma(X)$. In other words, all the information about $Y$ is contained in $\sigma(X)$. If $Y$ is independent of $\sigma(X)$, can $\{ Y \leq a \} \in \sigma(X)$ for any $a$?

Let $Y$ be independent of mean-zero random variables $X_1,\ldots$, and define Let $\mathcal{F}_n$ be the $\sigma$-algebra generated by the random variables $Z_1,\ldots,Z_n$.

  1. Is $Y$ measurable with respect to $\sigma(Z_1,\ldots,Z_n) = \mathcal{F}_n$?
  2. What happens to
  3. Let $\mathcal{F}_\infty = \sigma(X_1,X_2,\ldots)$. Is $Y$ $\mathcal{F}_\infty$-measurable?

Problem 3 (example 2 on page 113)

Redo the example, and fill in the details.

  1. Show in particular that $\E[ M_T ] < \infty$
  2. Show that $\rho^n (N^2 + n) \to 0$
  3. Show that the time to absorption in Gambler’s ruin is given by $\E[T] = a(N-a)$.
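A quick simulation check of part 3 (a sketch assuming numpy; $N = 10$ and $a = 3$ are arbitrary illustrative values):

```python
import numpy as np

rng = np.random.default_rng(6)

# Symmetric gambler's ruin on {0, ..., N} started from a: estimate the
# expected absorption time and compare with a*(N - a).
N, a, n_games = 10, 3, 20_000
total_steps = 0
for _ in range(n_games):
    x = a
    while 0 < x < N:
        x += 1 if rng.random() < 0.5 else -1
        total_steps += 1
print(total_steps / n_games)   # theory: a*(N - a) = 21
```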

Problem 4 (biased gambler’s ruin)

In gambler’s ruin with $p \neq 1/2$, remind yourself how $\Prob(X_T = N)$ and $\E[T]$ were found in chapter $1$. Compare with the martingale methods we’ve developed to handle this.

  • Do problems 5.7 and 5.8 from the textbook.

Problem 5

Do problems 5.3, 5.4 and 5.6 from the textbook. Please submit Polya’s urn code with the problem.

Problem 6

Do 5.11 and 5.13

HW 1

Due Fri, Feb 3.

Problem 1

Implement an algorithm to find $v(x)$ in the cases when

  1. There is only a payoff function.
  2. There is a payoff function and a cost to play.
  3. There is a payoff function $f$, a cost $g$ to play, and a discounting factor $\alpha$.

Do problems $4.1$, $4.2$, $4.3$ and $4.4$ from the textbook. In each case, compare your theoretical answer to the numerical answer.

Draw pictures of $v(x)$ in all problems, and compare it with $f(x)$. Use your favorite programming language to do this.

Submit your code.
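One possible starting point is plain value iteration (a sketch, not the official solution); the payoff vector below is invented for illustration and is not the textbook's data, while the cost and discount arguments cover cases 2 and 3:

```python
import numpy as np

# Value iteration for optimal stopping of a simple random walk on
# {0, ..., N} with absorbing boundaries: v = max(f, alpha * Pv - g).
def optimal_value(f, g=None, alpha=1.0, tol=1e-10):
    N = len(f) - 1
    g = np.zeros(N + 1) if g is None else g
    v = f.astype(float).copy()
    while True:
        Pv = v.copy()
        Pv[1:N] = 0.5 * (v[:N - 1] + v[2:])     # random-walk average at interior states
        v_new = np.maximum(f, alpha * Pv - g)   # stop (take f) or pay g and continue
        if np.max(np.abs(v_new - v)) < tol:
            return v_new
        v = v_new

f = np.array([0.0, 2.0, 4.0, 3.0, 10.0, 0.0])   # hypothetical payoff
v = optimal_value(f)
print(v)
```

Plot `v` against `f` to read off the STOP set (where `v == f`) and the GO set.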

I will only do theoretical solutions for 4.1 and 4.2. For 4.3 and 4.4 I will just implement the algorithm.

Solution We will update the STOP and GO sets after a few computations. Using $v(x) \geq Pv(x)$, we get the following observations:

  1. $v(4) = 10$ and hence it must be in the STOP set.
  2. $v(3) \geq Pv(3) = (10+4)/2 = 7 > 3$. Hence it must be in the GO set.
  3. $v(2) \geq (7 + 2)/2 = 4.5 > 4$. GO.
  4. $v(1) \geq v(2)/2 = 4.5/2 = 2.25 > 2$. GO.
  5. $v(5) \geq (10 + 6)/2 = 8$. Obviously GO.
  6. $v(6) \geq (v(5) + 4)/2 = (8 + 4)/2 = 6$. Unclear at this point.
  7. $v(7) \geq (v(6) + v(8))/2 = (6+3)/2 = 4.5 > 3$. GO.
  8. $v(8) \geq ( v(7) + v(9) ) /2 \geq (4.5 + 3)/2 = 3.75 > 3$. GO.
  9. $v(9) \geq v(8)/2$. Unclear.
  10. Recomputing $v(6) \geq ( v(5) + v(7))/2 \geq (8 + 4.5)/2 = 6.25 > 6$. GO.

At this stage, we have It’s only sort of unclear whether one should have $9$ in the STOP set or the GO set. Suppose $9$ were in the GO set; then we would have, for $x \in \{ 5,6,7,8,9 \}$, Doing a similar computation with $9$ in the STOP set gives, for $x \in \{5,6,7,8\}$, Both these functions start at $10$ at $x=4$ and decrease linearly. So it’s enough to test them at one value, say $x=9$. It’s easily seen that it’s better to have $9$ in the STOP set.

For the states $x \in \{ 0,1,2,3,4 \}$ we get To summarize, we get

Solution 4.2

Here we have two dice rolls $A + B$, with mean $7$. But here, if $A + B = 7$, the payoff is $0$. So let’s compute the average winnings if we roll once and then stop. Again, the optimal strategy is clear: Since $Pv(x) = u = 5 \frac{5}{6}$, it’s clear that We need to write down the probabilities of different winnings: Then the expected winnings are This means that $6$ is in the GO set too! Then, we must have Could $8$ be in the GO set too? Note that for the remaining states $\{8,9,10,11,12\}$, if $x \in \text{STOP}$, then it must be true that $x + 1 \in \text{STOP}$ too. So let’s start by assuming that $8$ is in the GO set. In this case, Then, we must have that for $x = 2,\ldots,6,8$ (using the characterization $v(x) = \max(Pv(x),f(x))$). Then, computing Solving for $u$ gives $u = 5$. This certainly cannot be true. So assuming $8$ is in the STOP set, we get Using $Pv(x) = u$ for $x \leq 6$ again gives $u = 20/3 \approx 6.67$. One simply verifies using $v(x) = \max(Pv(x),f(x))$ that this is the correct answer.
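The $u = 20/3$ computation above can be double-checked by value iteration (a sketch; it assumes the reading that a roll of $7$ ends the game with payoff $0$):

```python
# States are the dice sums 2..12; a 7 ends the game with payoff 0, and
# otherwise you may stop and keep the sum or roll again (value u).
p = {s: (6 - abs(s - 7)) / 36 for s in range(2, 13)}    # P(A + B = s)
f = {s: 0.0 if s == 7 else float(s) for s in range(2, 13)}

v = dict(f)
for _ in range(100):                             # value iteration
    u = sum(p[s] * v[s] for s in range(2, 13))   # value of rolling again
    v = {s: f[s] if s == 7 else max(f[s], u) for s in range(2, 13)}
print(u)   # converges to 20/3
```

The first iterate is exactly the $5\frac56$ above, and the iteration then climbs past $6$ (putting $6$ in the GO set) before settling at $20/3$.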

Problems 4.3 and 4.4 I’ll simply do using computer simulation.

4.3

(a)

(b)

(c)

4.4

(a)

(b)

(c)

Problem 2

Do 4.8 and 4.10. Simulate 4.8 first as instructed.

Solution

4.8

Let us divide the payout by 100 so that you get paid $1,2,\ldots,11$ each time you play. We will view this as a Markov process on the non-negative integers. $0$, or bankruptcy, is viewed as an absorbing state. Suppose the amount of money you hold right now is $x$; then after the next turn, you go to $x + i$ for $i=1,\ldots,11$, each with probability $1/12$, and you go bankrupt with probability $1/12$. If you choose to stop right now, then you get to leave with whatever you have; i.e., $f(x) = x$. Clearly, there must be a threshold $x_0$: below it you are in the GO set, and from $x_0$ on you stop. (If you stop at $x_0$, you’ll certainly stop at $x_0 + 1$.) At the threshold we must have $Pv(x_0) = \frac{1}{12} \sum_{i=1}^{11} (x_0 + i) = x_0$. Solving this shows that $x_0 = 66$. Clearly, $x_0 + 1 > Pv(x_0 + 1)$. So we know our optimal strategy: we keep rolling until we hit $66$ and then stop. If we go bankrupt, “too bad, so sad, next contestant please”.

Now you somehow have to figure out the value of this strategy, where $T$ is the first time you reach the stopping set $\{ x \geq 66 \}$ starting from $x$. Note that we must have $v(x) = x$ for $x \geq 66$, and $v(x) = Pv(x)$ for $x \leq 65$. This is shown in the accompanying Python code. What I do is solve for $v(x)$ for $x < 66$ backwards: for $x = 65$, note that we know $v(65 + i) = 65 + i$ for $i > 0$! Then we iteratively solve for $v(64)$, $v(63)$ and so on.
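A sketch of that backward iteration (an independent reconstruction, not the accompanying code itself):

```python
# On the STOP set v(x) = x; for the GO states x <= 65 we have
# v(x) = (1/12) * sum_{i=1..11} v(x + i), with bankruptcy contributing 0.
x0 = 66
v = {x: float(x) for x in range(x0, x0 + 12)}   # v(x) = x for x >= 66
for x in range(x0 - 1, -1, -1):                 # solve v(65), v(64), ...
    v[x] = sum(v[x + i] for i in range(1, 12)) / 12.0
print(v[0])   # expected winnings starting from nothing
```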

Can anyone find an analytic solution for this? It should be possible.

If we started with no money, then we expect to make on average. There is another way to figure out this threshold. Suppose we’re allowed to keep the money we make until we hit the threshold. Let $T$ be the number of rolls until we hit the first $12$. Clearly $T \sim \Geometric(1/12)$; i.e., Then the payout is After figuring out that $\E[ X_i | T = k ] = 6$ (Hint: compute the conditional distribution of $X_i$ given that it doesn’t take the value $12$), we see that the payout is If we used the prophet function instead, this is the payout we’d receive on average!
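A vectorized Monte Carlo sketch of this “keep everything until the first 12” payout (assuming numpy); the average winnings per completed game should match the threshold computation above:

```python
import numpy as np

rng = np.random.default_rng(7)

# Roll a fair 12-sided die many times; each game banks every face value
# until a 12 appears, which ends the game.
rolls = rng.integers(1, 13, size=2_000_000)      # faces 1..12
last = np.flatnonzero(rolls == 12)[-1]           # end of the last complete game
rolls = rolls[: last + 1]
n_games = np.count_nonzero(rolls == 12)
avg_payout = rolls[rolls != 12].sum() / n_games
print(avg_payout)   # compare with the threshold found above
```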

4.10

Let $T$ be the number of rolls before the first $6$ is rolled in the die game. Clearly $\Prob(T = k) = (1-p)^k\,p$ for $k = 0,1,2,\ldots$, where $p = 1/6$; it’s a geometric random variable.

Now, the payoff function is clearly This is because we stop right before the first time a $6$ is rolled. To compute the conditional expectation, let us compute Here we’ve used the independence of the die rolls. Recall that $u = 3$ in the original dice game. See the first part of chapter 3 of Lawler’s book for a discussion of the dice game.

4.10 (b)

This is a messier problem, but still entertaining. Recall Let’s start at $1$. You can get absorbed into state $0$ or $4$, since we know that we must stop when we get to $4$. Then We’ve abused notation a little bit, and treated $v(\cdot)$ as a random variable above. But you guys know what I mean, right? Notice that for example In problem 4.1, $9$ was in the STOP set, and therefore $\E[v(9)] = 3$. Therefore the “prophet” function obviously does better. The other expectations can be calculated, of course. To summarize

Problem 3

Show that if $Y$ is independent of $X_1,X_2,\ldots,X_n$, then $\E[ Y \mid X_1, \ldots, X_n ] = \E[ Y ]$.

Solution We know from the definition of conditional expectation that for any set $A$ that depends only on the variables $X_1,\ldots,X_n$, $\E[ Y 1_A ] = \E[ Y ]\,\Prob(A)$, since $Y$ and $1_A(X_1,\ldots,X_n)$ are independent. Suppose there is a set $B \in \sigma(X_1,\ldots,X_n)$ and some $\e > 0$ such that $\E[ Y \mid X_1,\ldots,X_n ] \geq \E[ Y ] + \e$ on $B$. Then, using the above, we have $\E[ Y ]\,\Prob(B) = \E[ Y 1_B ] = \E[ \E[ Y \mid X_1,\ldots,X_n ]\, 1_B ] \geq (\E[ Y ] + \e)\,\Prob(B)$. Notice that $B$ depends only on the $X_i$. This is a contradiction unless $\Prob(B) = 0$, so $B$ must have measure $0$ for every $\e$. Similarly we consider the set where $\E[ Y \mid X_1,X_2,\ldots,X_n ] < \E[ Y ] - \e$ and show that it has probability $0$ too. This proves the statement.

Problem 4

Show that $v(x)$ must be concave if you have a simple random walk with absorbing boundaries. What happens when you have reflecting boundaries?

Solution See my handwritten notes here

Problem 5

Complete the proof of the fact that if $u_i(x)$ are superharmonic, then so is $u(x) = \inf_i u_i(x)$.

Solution We’ve done this several times in class now.