
For Markov chains, can I find somewhere, briefly and in plain language, the proofs about the stationary distribution (uniqueness, convergence), i.e. roughly how the proof goes?

Followed question

Feb 23, 2014, 19:44

1/7 Anonymous answer:
It's only on the net, in English.
Feb 23, 2014, 20:34
2/7 Comment from the asker:
English and the net are fine, but I haven't found it anywhere; everywhere they just state it, nowhere do they prove it.
Feb 23, 2014, 20:50
3/7 Anonymous answer:

In these Lecture Notes, we shall study the limiting behavior of Markov chains as time n → ∞. In particular, under suitable easy-to-check conditions, we will see that a Markov chain possesses a limiting probability distribution, π = (π_j)_{j∈S}, and that the chain, if started off initially with such a distribution, will be a stationary stochastic process. We will also see that we can find π by merely solving a set of linear equations.
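The "solve a set of linear equations" claim can be sketched concretely: π satisfies πP = π together with Σ_j π_j = 1. A minimal Python illustration (the 3-state matrix P below is made up for the example; any finite irreducible chain works the same way):

```python
import numpy as np

# Hypothetical 3-state transition matrix (rows sum to 1); the values
# are made up for illustration -- any finite irreducible chain would do.
P = np.array([
    [0.5, 0.3, 0.2],
    [0.2, 0.6, 0.2],
    [0.1, 0.4, 0.5],
])

def stationary_distribution(P):
    """Solve pi P = pi together with sum(pi) = 1.

    Rearranging pi (P - I) = 0 and transposing gives the linear system
    (P - I)^T x = 0; one equation is redundant, so we replace the last
    one with the normalization constraint sum(pi) = 1.
    """
    n = P.shape[0]
    A = (P - np.eye(n)).T
    A[-1, :] = 1.0              # last equation: pi_0 + ... + pi_{n-1} = 1
    b = np.zeros(n)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

pi = stationary_distribution(P)
print(pi)        # the stationary distribution
print(pi @ P)    # same vector again: pi is invariant under P
```

Replacing one row of the singular system with the normalization is a standard trick; for an irreducible chain the resulting matrix is nonsingular, so `np.linalg.solve` returns the unique stationary π.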

1.1 Communication classes and irreducibility for Markov chains

For a Markov chain with state space S, consider a pair of states (i, j). We say that j is reachable from i, denoted by i → j, if there exists an integer n ≥ 0 such that P^n_{ij} > 0. This means that starting in state i, there is a positive probability (but not necessarily equal to 1) that the chain will be in state j at time n (that is, n steps later): P(X_n = j | X_0 = i) > 0. If j is reachable from i, and i is reachable from j, then the states i and j are said to communicate, denoted by i ↔ j. The relation defined by communication satisfies the following conditions:

1. All states communicate with themselves: P^0_{ii} = 1 > 0.
2. Symmetry: if i ↔ j, then j ↔ i.
3. Transitivity: if i ↔ k and k ↔ j, then i ↔ j.

The above conditions imply that communication is an example of an equivalence relation, meaning that it shares its properties with the more familiar equality relation "=": i = i; if i = j, then j = i; if i = k and k = j, then i = j.

Only condition 3 above needs some justification, so we now prove it for completeness. Suppose there exist integers n, m such that P^n_{ik} > 0 and P^m_{kj} > 0. Letting l = n + m, we conclude that P^l_{ij} ≥ P^n_{ik} P^m_{kj} > 0, where we have formally used the Chapman-Kolmogorov equations. The point is that the chain can (with positive probability) go from i to j by first going from i to k (n steps) and then (independently of the past) going from k to j (an additional m steps).
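The Chapman-Kolmogorov identity P^{n+m} = P^n P^m, and the single-term lower bound used in the transitivity proof, are easy to check numerically. A quick sketch (the 3-state matrix and the indices n, m, i, k, j are made up for illustration):

```python
import numpy as np

# An illustrative 3-state transition matrix (made up; rows sum to 1).
P = np.array([
    [0.9, 0.1, 0.0],
    [0.5, 0.0, 0.5],
    [0.0, 0.2, 0.8],
])

n, m = 2, 3
Pn = np.linalg.matrix_power(P, n)
Pm = np.linalg.matrix_power(P, m)
Pl = np.linalg.matrix_power(P, n + m)

# Chapman-Kolmogorov: P^{n+m}_{ij} = sum_k P^n_{ik} P^m_{kj},
# i.e. the (n+m)-step matrix is the product of the n- and m-step ones.
assert np.allclose(Pl, Pn @ Pm)

# The transitivity proof keeps just one term of that sum; since every
# term is nonnegative, that term is a lower bound on P^{n+m}_{ij}.
i, k, j = 0, 1, 2
assert Pl[i, j] >= Pn[i, k] * Pm[k, j]
```

This is exactly the proof's argument in matrix form: if the k-term is positive, the whole sum is positive, so i → j.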

If we consider the rat in the open maze, we easily see that the states in C1 = {1, 2, 3, 4} all communicate with one another, but state 0 communicates only with itself (since it is an absorbing state). Whereas state 0 is reachable from the other states, i → 0, no other state can be reached from state 0. We conclude that the state space S = {0, 1, 2, 3, 4} can be broken up into two disjoint subsets, C1 = {1, 2, 3, 4} and C2 = {0}, whose union equals S, and such that each of these subsets has the property that all states within it communicate. Disjoint means that their intersection contains no elements: C1 ∩ C2 = ∅.

A little thought reveals that this kind of disjoint decomposition can be done with any Markov chain:

Proposition 1.1 For each Markov chain, there exists a unique decomposition of the state space S into a sequence of disjoint subsets C1, C2, …,

S = ∪_{i=1}^{∞} C_i,

in which each subset has the property that all states within it communicate. Each such subset is called a communication class of the Markov chain.
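The decomposition into communication classes can be computed mechanically from the transition graph via mutual reachability. A Python sketch; the adjacency lists are an assumed reconstruction of the open-maze example (only which transitions have positive probability matters here, not their values):

```python
from collections import deque

# Assumed transition structure of the "rat in the open maze":
# states {0,1,2,3,4}, state 0 absorbing; an edge i -> j means the
# one-step transition probability P_{ij} is positive.
succ = {
    0: [0],            # absorbing state
    1: [2, 3, 0],
    2: [1, 4],
    3: [1, 4],
    4: [2, 3, 0],
}

def reachable(start):
    """BFS: all states j with i -> j (n >= 0 steps), including start."""
    seen = {start}
    q = deque([start])
    while q:
        i = q.popleft()
        for j in succ[i]:
            if j not in seen:
                seen.add(j)
                q.append(j)
    return seen

def communication_classes(states):
    reach = {i: reachable(i) for i in states}
    classes, assigned = [], set()
    for i in states:
        if i in assigned:
            continue
        # i and j communicate iff each is reachable from the other
        cls = {j for j in states if j in reach[i] and i in reach[j]}
        classes.append(cls)
        assigned |= cls
    return classes

print(communication_classes(sorted(succ)))   # [{0}, {1, 2, 3, 4}]
```

For this graph the output reproduces the two classes from the notes, C2 = {0} and C1 = {1, 2, 3, 4}. (For large state spaces one would use a strongly-connected-components algorithm such as Tarjan's instead of per-state BFS.)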



Feb 23, 2014, 21:24

4/7 Anonymous answer:

[…] the Markov chain is recurrent; transient otherwise.

³ The rat in the closed maze yields a recurrent Markov chain. The rat in the open maze yields a Markov chain that is not irreducible; there are two communication classes, C1 = {1, 2, 3, 4} and C2 = {0}. C1 is transient, whereas C2 is recurrent.

Clearly, if the state space is finite for a given Markov chain, then not all the states can be transient (for otherwise, after a finite number of steps (time), the chain would leave every state never to return; where would it go?). Thus we conclude that

Proposition 2.3 An irreducible Markov chain with a finite state space is always recurrent: all states are recurrent.

Finally, observe (from the argument that if two states communicate and one is recurrent, then so is the other) that for an irreducible recurrent chain, even if we start in some other state X_0 ≠ i, the chain will still visit state i an infinite number of times: for an irreducible recurrent Markov chain, each state j will be visited over and over again (an infinite number of times) regardless of the initial state X_0 = i.

For example, if the rat in the closed maze starts off in cell 3, it will still return over and over again to cell 1.

2.2 Expected return time to a given state: positive recurrence and null recurrence

A recurrent state j is called positive recurrent if the time to return to state j, given that the chain started in state j, has a finite first moment:

E(τ_{jj}) < ∞.

A recurrent state j for which E(τ_{jj}) = ∞ is called null recurrent.

In general, even for i ≠ j, we define τ_{ij} := min{n ≥ 1 : X_n = j | X_0 = i}, the time (after time 0) until reaching state j given X_0 = i.

Proposition 2.4 Suppose i ≠ j are both recurrent. If i and j communicate and if j is positive recurrent (E(τ_{jj}) < ∞), then i is positive recurrent (E(τ_{ii}) < ∞) and also E(τ_{ij}) < ∞. In particular, all states in a recurrent communication class are either all positive recurrent or all null recurrent.

Proof: Assume that E(τ_{jj}) < ∞ and that i and j communicate. Choose the smallest n ≥ 1 such that P^n_{ji} > 0. With X_0 = j, let A = {X_k ≠ j, 1 ≤ k ≤ n; X_n = i}; then P(A) > 0. Now E(τ_{jj}) ≥ E(τ_{jj} | A) P(A) = (n + E(τ_{ij})) P(A); hence E(τ_{ij}) < ∞ (for otherwise E(τ_{jj}) = ∞, a contradiction).

With X_0 = j, let {Y_m : m ≥ 1} be iid, distributed as τ_{jj}, denoting the interarrival times between visits to state j. Thus the n-th revisit of the chain to state j is at time t_n = Y_1 + ⋯ + Y_n, and E(Y) = E(τ_{jj}) < ∞. Let

p = P({X_n} visits state i before returning to state j | X_0 = j).

Then p ≥ P(A) > 0, where A is defined above. Every time the chain revisits state j, there is, independently of the past, this probability p that the chain will visit state i before revisiting state j again. Letting N denote the number of revisits the chain makes to state j until first visiting […]
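The quantity E(τ_{jj}) in the definition of positive recurrence can be illustrated by Monte Carlo: start the chain in j, count steps until the first return to j, and average over many excursions. A sketch with a made-up irreducible 3-state chain (positive recurrent by Proposition 2.3, since the state space is finite):

```python
import random

# Made-up irreducible 3-state chain: P[i] lists (next_state, prob).
P = {
    0: [(0, 0.5), (1, 0.5)],
    1: [(0, 0.3), (1, 0.2), (2, 0.5)],
    2: [(0, 0.4), (2, 0.6)],
}

def step(i, rng):
    """Sample the next state from row i of the transition matrix."""
    u = rng.random()
    acc = 0.0
    for j, p in P[i]:
        acc += p
        if u < acc:
            return j
    return P[i][-1][0]   # guard against floating-point rounding

def mean_return_time(j, trials=100_000, rng=None):
    """Monte Carlo estimate of E(tau_jj): start in j, count steps
    until the first return to j, average over many excursions."""
    rng = rng or random.Random(0)
    total = 0
    for _ in range(trials):
        i = step(j, rng)
        n = 1
        while i != j:
            i = step(i, rng)
            n += 1
        total += n
    return total / trials

print(mean_return_time(0))
```

For this particular chain the stationary distribution can be solved by hand, giving π_0 = 1.6/3.85, and by the standard positive-recurrence theorem E(τ_{00}) = 1/π_0 ≈ 2.41; the simulated average should land close to that.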



Feb 23, 2014, 21:25
6/7 Anonymous answer:




I hope this will be enough.

Feb 23, 2014, 21:26
7/7 Comment from the asker:
Thanks, but I would prefer a link, so that the formulas display nicely too.
Feb 24, 2014, 10:20


All rights reserved © 2024, www.gyakorikerdesek.hu