\documentclass[a4paper,12pt]{article}
\newcommand{\ds}{\displaystyle}
\newcommand{\un}{\underline}
\newcommand{\undb}{\underbrace}
\newcommand{\df}{\ds\frac}
\newcommand{\pl}{\partial}
\parindent=0pt
\begin{document}


{\bf Question}

The telephone directory for Hirstville consists of three volumes,
$I$, $II$, $III$. The local Albanian take-away keeps its set of
volumes in a neat pile by the telephone. When a volume is
consulted it is always replaced on the top of the pile. The
proprietor observes that over a long period of time volume $I$ is
consulted with probability $p$, volume $II$ with probability $q$
and volume $III$ with probability $r$, where

$$p+q+r=1,\ 0<p<1,\ 0<q<1,\ 0<r<1.$$

There are six possible orders for the pile of three volumes.
Explain briefly why these form the six states of a Markov chain.

Draw a transition diagram and use it to explain why the Markov
chain is ergodic. Write down the transition matrix. Find the
equilibrium distribution. What are the mean recurrence times for
each state?


\vspace{.25in}

{\bf Answer}

The six states, each listed as a state number followed by the pile
order from top to bottom, are:

$\begin{array}{cccc} 1 & I & II & III\\ 2 & III & I & II\\ 3 & II
& III & I\\ 4 & I & III & II\\ 5 & II & I & III\\ 6 & III & II & I
\end{array}$

The probability of transition between states does not depend on
the past history of the pile: it depends only on the current
ordering and on which volume is chosen at the next consultation.
This is the Markov property, and so the six orderings form the
states of a Markov chain.

Transition diagram:

PICTURE \vspace{2in}

Since $p_{ii}>0$ for each $i$ the chain is aperiodic. The circuit
1--2--4--5--6--3--1 shows that all states intercommunicate, so the
chain is irreducible. Irreducible and aperiodic on a finite state
space means ergodic, and so all states are positive recurrent.
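The irreducibility claim can also be checked mechanically. The
following sketch (Python; not part of the original answer) enumerates
the move-to-front successors of every ordering and confirms that each
of the six states reaches all the others:

```python
from collections import deque
from itertools import permutations

def successors(s):
    """Orderings reachable in one step: consulting volume v moves it to the top."""
    return {(v,) + tuple(x for x in s if x != v) for v in (1, 2, 3)}

states = list(permutations((1, 2, 3)))  # the six pile orderings
for start in states:
    # breadth-first search over the transition graph from each state
    seen, todo = {start}, deque([start])
    while todo:
        for t in successors(todo.popleft()):
            if t not in seen:
                seen.add(t)
                todo.append(t)
    assert len(seen) == 6  # every ordering is reachable: the chain is irreducible
```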


\newpage
Transition Matrix

$$P= \bordermatrix{  & 1 & 2 & 3 & 4 & 5 & 6 \cr
                   1 & p & r & 0 & 0 & q & 0 \cr
                   2 & 0 & r & q & p & 0 & 0 \cr
                   3 & p & 0 & q & 0 & 0 & r \cr
                   4 & 0 & r & 0 & p & q & 0 \cr
                   5 & p & 0 & 0 & 0 & q & r \cr
                   6 & 0 & 0 & q & p & 0 & r \cr}$$
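The matrix can also be generated directly from the move-to-front
rule. A sketch (Python; the function name is mine) that builds $P$
for given $p,q,r$ and reproduces the rows above:

```python
def transition_matrix(p, q, r):
    """6x6 move-to-front transition matrix, states numbered as in the table:
    1: I II III, 2: III I II, 3: II III I, 4: I III II, 5: II I III, 6: III II I.
    Volumes I, II, III are labelled 1, 2, 3 with probabilities p, q, r."""
    prob = {1: p, 2: q, 3: r}
    states = [(1, 2, 3), (3, 1, 2), (2, 3, 1),
              (1, 3, 2), (2, 1, 3), (3, 2, 1)]
    index = {s: i for i, s in enumerate(states)}
    P = [[0.0] * 6 for _ in range(6)]
    for i, s in enumerate(states):
        for v in (1, 2, 3):
            # consulting volume v moves it to the top of the pile
            t = (v,) + tuple(x for x in s if x != v)
            P[i][index[t]] += prob[v]
    return P

P = transition_matrix(0.5, 0.3, 0.2)        # example values with p + q + r = 1
assert P[0] == [0.5, 0.2, 0.0, 0.0, 0.3, 0.0]  # row 1 of the matrix: p, r, 0, 0, q, 0
```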

Equilibrium distribution. Because the Markov chain is ergodic
there is a unique stationary distribution equal to the equilibrium
distribution. The stationary distribution satisfies
$\un{\pi}P=\un{\pi}$.

Thus we have

\begin{eqnarray} p(\pi_1+\pi_3+\pi_5) & = &
\pi_1\\r(\pi_1+\pi_2+\pi_4) & = & \pi_2\\q(\pi_2+\pi_3+\pi_6) & =
& \pi_3\\p(\pi_2+\pi_4+\pi_6) & = & \pi_4\\q(\pi_1+\pi_4+\pi_5) &
= & \pi_5\\r(\pi_3+\pi_5+\pi_6) & = & \pi_6 \end{eqnarray}

\begin{eqnarray}
\mbox{Adding (1) to (4) and using}\ \ds\sum \pi_i=1\ \mbox{gives}\ \pi_1+\pi_4=p\\
\mbox{Adding (3) to (5) and using}\ \ds\sum \pi_i=1\ \mbox{gives}\ \pi_3+\pi_5=q\\
\mbox{Adding (2) to (6) and using}\ \ds\sum \pi_i=1\ \mbox{gives}\ \pi_2+\pi_6=r
\end{eqnarray}

Substituting in (1) using (8) gives $p(\pi_1+q)=\pi_1$, so
$\pi_1=\df{pq}{1-p}$.

Similarly we obtain

$\pi_2=\df{pr}{1-r},\ \pi_3=\df{qr}{1-q},\ \pi_4=\df{pr}{1-p},\
\pi_5=\df{pq}{1-q},\ \pi_6=\df{qr}{1-r}$
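These formulas can be checked numerically. A minimal sketch (Python,
with arbitrary example values for $p,q,r$) verifies that
$\un{\pi}$ sums to one and satisfies $\un{\pi}P=\un{\pi}$ for the
transition matrix written down above:

```python
p, q, r = 0.5, 0.3, 0.2  # example values with p + q + r = 1

# transition matrix P from the answer, rows/columns in state order 1..6
P = [[p, r, 0, 0, q, 0],
     [0, r, q, p, 0, 0],
     [p, 0, q, 0, 0, r],
     [0, r, 0, p, q, 0],
     [p, 0, 0, 0, q, r],
     [0, 0, q, p, 0, r]]

# closed-form equilibrium distribution derived above
pi = [p*q/(1-p), p*r/(1-r), q*r/(1-q),
      p*r/(1-p), p*q/(1-q), q*r/(1-r)]

assert abs(sum(pi) - 1) < 1e-12                # probabilities sum to one
for j in range(6):                             # pi P = pi, column by column
    assert abs(sum(pi[i]*P[i][j] for i in range(6)) - pi[j]) < 1e-12
```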

The mean recurrence times for an ergodic Markov chain are the
reciprocals of the equilibrium probabilities,

so $\mu_i=\df{1}{\pi_i}$; in particular $\mu_1=\df{1-p}{pq}$, and
similarly for the other states.
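As a quick numerical illustration (the symmetric values
$p=q=r=\frac{1}{3}$ are an assumption, not given in the question):
every ordering then has equilibrium probability $\frac{1}{6}$, so
each state recurs on average every $6$ consultations.

```python
# Mean recurrence time mu_i = 1 / pi_i for an ergodic chain.
# Symmetric example values (hypothetical, not from the question):
p = q = r = 1/3

# equilibrium probabilities derived above
pi = [p*q/(1-p), p*r/(1-r), q*r/(1-q),
      p*r/(1-p), p*q/(1-q), q*r/(1-r)]
mu = [1/x for x in pi]  # mean recurrence times

assert all(abs(m - 6) < 1e-9 for m in mu)  # each state recurs every 6 steps on average
```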



\end{document}
