Exam 16: Markov Processes
Exam 1: Introduction (53 Questions)
Exam 2: An Introduction to Linear Programming (56 Questions)
Exam 3: Linear Programming: Sensitivity Analysis and Interpretation of Solution (44 Questions)
Exam 4: Linear Programming Applications in Marketing, Finance, and OM (52 Questions)
Exam 5: Advanced Linear Programming Applications (39 Questions)
Exam 6: Distribution and Network Models (62 Questions)
Exam 7: Integer Linear Programming (52 Questions)
Exam 8: Nonlinear Optimization Models (45 Questions)
Exam 9: Project Scheduling: PERT/CPM (60 Questions)
Exam 10: Inventory Models (60 Questions)
Exam 11: Waiting Line Models (56 Questions)
Exam 12: Simulation (53 Questions)
Exam 13: Decision Analysis (80 Questions)
Exam 14: Multicriteria Decisions (42 Questions)
Exam 15: Time Series Analysis and Forecasting (53 Questions)
Exam 16: Markov Processes (36 Questions)
Exam 17: Linear Programming: Simplex Method (45 Questions)
Exam 18: Simplex-Based Sensitivity Analysis and Duality (32 Questions)
Exam 19: Solution Procedures for Transportation and Assignment Problems (39 Questions)
Exam 20: Minimal Spanning Tree (19 Questions)
Exam 21: Dynamic Programming (41 Questions)
For a situation with weekly dining at either an Italian or Mexican restaurant,
(Multiple Choice)
Correct Answer:
A
All Markov chains have steady-state probabilities.
(True/False)
Correct Answer:
False
Bark Bits Company is planning an advertising campaign to raise the brand loyalty of its customers to 0.80.
a. The former transition matrix is as follows: (matrix not preserved in this listing). What is the new one?
b. What are the new steady-state probabilities?
c. If each point of market share increases profit by $15,000, what is the most you would pay for the advertising?
(Essay)
Correct Answer:
a. (matrix not preserved in this listing)
b. π1 = 0.5, π2 = 0.5
c. The increase in market share is 0.5 − 0.444 = 0.056 (5.6 points); 5.6 points × $15,000/point = $84,000 value for the campaign.
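The steady-state calculation in part (b) can be sketched in Python. The 0.80-loyalty transition matrix below is a hypothetical reconstruction (the original matrices were not preserved in this listing); the old market share of 0.444 is taken from the stated answer.

```python
import numpy as np

# Hypothetical new transition matrix: 0.80 brand loyalty on the diagonal.
P_new = np.array([[0.80, 0.20],
                  [0.20, 0.80]])

# Steady state: solve pi @ P = pi together with pi summing to 1,
# i.e. the overdetermined system (P.T - I) pi = 0, sum(pi) = 1.
A = np.vstack([P_new.T - np.eye(2), np.ones(2)])
b = np.array([0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

# Campaign value: share gain over the old steady state (0.444),
# expressed in points, at $15,000 per point.
value = (pi[0] - 0.444) * 100 * 15_000
print(pi, round(value))  # pi ≈ [0.5, 0.5], value = 84000
```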
A state i is a transient state if there exists a state j that is reachable from i, but the state i is not reachable from state j.
(True/False)
The fundamental matrix is derived from the matrix of transition probabilities and is relatively easy to compute for Markov processes with a small number of states.
(True/False)
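The claim above can be illustrated with a small sketch: for an absorbing chain with transient submatrix Q, the fundamental matrix is N = (I − Q)⁻¹, a single matrix inversion for a small number of states. The three-state transition matrix below is hypothetical, chosen only to show the computation.

```python
import numpy as np

# Hypothetical absorbing Markov chain: states 0 and 1 are transient,
# state 2 is absorbing (ordered so absorbing states come last).
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.4, 0.4],
              [0.0, 0.0, 1.0]])

Q = P[:2, :2]                       # transitions among transient states
R = P[:2, 2:]                       # transient-to-absorbing transitions
N = np.linalg.inv(np.eye(2) - Q)    # fundamental matrix

# N @ R gives the probabilities of eventual absorption; with a single
# absorbing state every row is 1.
print(N)
print(N @ R)
```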
In Markov analysis, we are concerned with the probability that the
(Multiple Choice)
All entries in a row of a matrix of transition probabilities sum to 1.
(True/False)
The probability that a system is in state 2 in the fifth period is π5(2).
(True/False)
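The period-n state probabilities that this question's notation refers to can be computed as π(n) = π(0) Pⁿ. A minimal sketch, with a hypothetical two-state transition matrix and starting state:

```python
import numpy as np

# Hypothetical two-state transition matrix and initial distribution.
P = np.array([[0.9, 0.1],
              [0.3, 0.7]])
pi0 = np.array([1.0, 0.0])   # system starts in state 1

# State probabilities in period 5: pi(5) = pi(0) @ P^5.
pi5 = pi0 @ np.linalg.matrix_power(P, 5)
print(pi5)  # ≈ [0.76944, 0.23056]
```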
If a Markov chain has at least one absorbing state, steady-state probabilities cannot be calculated.
(True/False)
The probability of reaching an absorbing state is given by the
(Multiple Choice)
If an absorbing state exists, then the probability that a unit will ultimately move into the absorbing state is given by the steady-state probability.
(True/False)
A unique matrix of transition probabilities should be developed for each customer.
(True/False)
The probability that a system is in a particular state after a large number of periods is
(Multiple Choice)