Abstract
We consider Markov games of general form, characterized by the property that, for every pair of stationary strategies of the players, the set of game states is partitioned into several ergodic sets and a transient set, and this partition may vary with the players' strategies. As the payoff criterion, we take the mean payoff of the first player per unit time. It is proved that a general Markov game with finite sets of states and decisions of both players has a value and that both players have ε-optimal stationary strategies. The statement is illustrated by Blackwell's well-known example (“Big Match”).
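The mean-payoff criterion and the notion of ε-optimality used above can be formalized as follows (a standard formulation with illustrative notation; the symbols are not taken from the paper itself):

```latex
% Mean payoff of the first player per unit time under stationary strategies
% x, y, starting from state s; r denotes the one-stage payoff function.
\[
  \varphi(s, x, y) \;=\; \liminf_{N \to \infty} \frac{1}{N}\,
  \mathbb{E}_{s}^{\,x,y}\!\left[\sum_{n=1}^{N} r(s_n, a_n, b_n)\right].
\]
% The game has value v(s), and the players have eps-optimal stationary
% strategies, if for every eps > 0 there exist stationary x_eps, y_eps with
\[
  \varphi(s, x_\varepsilon, y) \,\ge\, v(s) - \varepsilon
  \quad\text{and}\quad
  \varphi(s, x, y_\varepsilon) \,\le\, v(s) + \varepsilon
  \qquad \text{for all strategies } x,\ y.
\]
```

With several ergodic classes the value v(s) generally depends on the initial state s, which is what distinguishes this setting from the single-ergodic-class case.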
REFERENCES
L. S. Shapley, “Stochastic games,” Proc. Nat. Acad. Sci. USA, No. 39, 1095-1100 (1953).
D. Gillette, “Stochastic games with zero stop probabilities,” Contributions to the Theory of Games, Vol. 3 (Ann. Math. Stud., No. 39), 179-187 (1957).
L. G. Gubenko, “On multistep stochastic games,” Teor. Ver. Mat. Statist., Issue 8, 35-49 (1973).
A. J. Hoffman and R. M. Karp, “On nonterminating stochastic games,” Manag. Sci., No. 12, 359-370 (1966).
A. A. Ibragimov, “Iterative method for solving a Markov game with single ergodic class,” Probl. Informat. Énerget., No. 4, 10-14 (1999).
T. Parthasarathy and T. E. S. Raghavan, Some Topics in Two-Person Games, Elsevier, New York (1971).
D. Blackwell, “The big match,” SIAM J. Appl. Math., No. 19, 473-476 (1970).
A. A. Ibragimov, “On the Markov 'Big Match' game,” Avtomat. Telemekh., No. 11, 104-113 (2000).
L. A. Petrosyan and G. V. Tomskii, Dynamic Games and Their Applications [in Russian], Leningrad University, Leningrad (1982).
R. E. Bellman, Dynamic Programming, Princeton University Press, Princeton (1957).
R. D. Luce and H. Raiffa, Games and Decisions, Wiley, New York (1957).
H. Mine and S. Osaki, Markovian Decision Processes [Russian translation], Nauka, Moscow (1977).
J. G. Kemeny and J. L. Snell, Finite Markov Chains, Springer, New York (1976).
A. N. Shiryaev, Probability [in Russian], Nauka, Moscow (1980).
G. M. Fikhtengol'ts, A Course of Differential and Integral Calculus [in Russian], Nauka, Moscow (1970).
T. A. Sarymsakov, Principles of the Theory of Markov Processes [in Russian], Fan, Tashkent (1988).
G. Pólya and G. Szegö, Aufgaben und Lehrsätze aus der Analysis. Erster Band: Reihen, Integralrechnung, Funktionentheorie [Russian translation], Nauka, Moscow (1978).
V. V. Baranov, “Model and methods of uniformly optimal stochastic control,” Avtomat. Telemekh., No. 5, 42-52 (1992).
G. N. Dyubin and V. G. Suzdal', Introduction to Applied Game Theory [in Russian], Nauka, Moscow (1981).
Ibragimov, A.A. Markov Games with Several Ergodic Classes. Ukrainian Mathematical Journal 55, 921–941 (2003). https://doi.org/10.1023/B:UKMA.0000010593.59199.88