Deck 16: Markov Processes

1
The probability that a system is in a particular state after a large number of periods is

A)independent of the beginning state of the system.
B)dependent on the beginning state of the system.
C)equal to one half.
D)the same for every ending system.
Answer: A
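For intuition, here is a minimal numeric sketch (the two-state transition matrix is hypothetical, not part of this deck) showing that the state probabilities after many periods are the same no matter which state the system starts in:

```python
import numpy as np

# Hypothetical two-state transition matrix (each row sums to 1).
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])

# 50-step transition probabilities.
P50 = np.linalg.matrix_power(P, 50)

# Row 0 (starting in state 1) and row 1 (starting in state 2) are the same.
print(P50[0])  # ~[0.8, 0.2]
print(P50[1])  # ~[0.8, 0.2]
```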
2
If an absorbing state exists, then the probability that a unit will ultimately move into the absorbing state is given by the steady state probability.
Answer: False
3
In Markov analysis, we are concerned with the probability that the

A)state is part of a system.
B)system is in a particular state at a given time.
C)time has reached a steady state.
D)transition will occur.
Answer: B
4
A unique matrix of transition probabilities should be developed for each customer.
5
All Markov chains have steady-state probabilities.
6
The probability of going from state 1 in period 2 to state 4 in period 3 is

A)p12
B)p23
C)p14
D)p43
7
Analysis of a Markov process

A)describes future behavior of the system.
B)optimizes the system.
C)leads to higher order decision making.
D)All of the alternatives are true.
8
A transition probability describes

A)the probability of a success in repeated, independent trials.
B)the probability a system in a particular state now will be in a specific state next period.
C)the probability of reaching an absorbing state.
D)None of the alternatives is correct.
9
The probability of reaching an absorbing state is given by the

A)R matrix.
B)NR matrix.
C)Q matrix.
D)(I - Q)⁻¹ matrix.
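As background for the matrices named in these options: for an absorbing chain the transition matrix is partitioned into Q (transitions among non-absorbing states) and R (transitions from non-absorbing states into absorbing states), and the fundamental matrix is N = (I - Q)⁻¹. A minimal sketch with a hypothetical three-state chain in which state 3 is absorbing:

```python
import numpy as np

# Hypothetical partition: states 1 and 2 are non-absorbing, state 3 is absorbing.
Q = np.array([[0.5, 0.2],    # non-absorbing -> non-absorbing
              [0.1, 0.6]])
R = np.array([[0.3],         # non-absorbing -> absorbing
              [0.3]])

N = np.linalg.inv(np.eye(2) - Q)  # fundamental matrix N = (I - Q)^-1
print(N @ R)  # probabilities of ending up in the absorbing state
              # (both entries are 1.0 here, since state 3 is the only absorbing state)
```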
10
The probability that the system is in state 2 in the 5th period is π5(2).
11
At steady state

A) π1(n+1) > π1(n)
B) π1 = π2
C) π1 + π2 > 1
D) π1(n+1) = π1
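At steady state the state probabilities stop changing from period to period, so they satisfy π = πP together with the probabilities summing to 1. A minimal sketch, reusing the hypothetical two-state matrix from above:

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.4, 0.6]])

# Solve pi P = pi with pi_1 + pi_2 = 1:
# use one balance equation from (P^T - I) pi = 0 plus the normalization row.
A = np.vstack([(P.T - np.eye(2))[0], np.ones(2)])
b = np.array([0.0, 1.0])
pi = np.linalg.solve(A, b)
print(pi)  # ~[0.8, 0.2]; from here on, pi_1(n+1) = pi_1(n) = pi_1
```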
12
Steady state probabilities are independent of initial state.
13
A Markov chain cannot consist of all absorbing states.
14
If the probability of making a transition from a state is 0, then that state is called a(n)

A)steady state.
B)final state.
C)origin state.
D)absorbing state.
15
All entries in a matrix of transition probabilities sum to 1.
16
The fundamental matrix is used to calculate the probability of the process moving into each absorbing state.
17
For a situation with weekly dining at either an Italian or Mexican restaurant,

A)the weekly visit is the trial and the restaurant is the state.
B)the weekly visit is the state and the restaurant is the trial.
C)the weekly visit is the trend and the restaurant is the transition.
D)the weekly visit is the transition and the restaurant is the trend.
18
All Markov chain transition matrices have the same number of rows as columns.
19
Absorbing state probabilities are the same as

A)steady state probabilities.
B)transition probabilities.
C)fundamental probabilities.
D)None of the alternatives is true.
20
Markov processes use historical probabilities.
21
All entries in a row of a matrix of transition probabilities sum to 1.
22
When absorbing states are present, each row of the transition matrix corresponding to an absorbing state will have a single 1 and all other probabilities will be 0.
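For illustration, a hypothetical three-state transition matrix in which state 3 is absorbing; its row contains a single 1 (for remaining in state 3) and zeros elsewhere:

```python
import numpy as np

P = np.array([[0.5, 0.2, 0.3],
              [0.1, 0.6, 0.3],
              [0.0, 0.0, 1.0]])  # absorbing state 3: stays put with probability 1
```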
23
A state i is a transient state if there exists a state j that is reachable from i, but the state i is not reachable from state j.
24
For Markov processes having the memoryless property, the prior states of the system must be considered in order to predict the future behavior of the system.
25
A state i is an absorbing state if pii = 0.