Markov Games with Several Ergodic Classes

Published in: Ukrainian Mathematical Journal

Abstract

We consider Markov games of general form, characterized by the property that, for every pair of stationary strategies of the players, the set of game states is partitioned into several ergodic sets and a transient set, both of which may vary with the players' strategies. As the criterion, we take the mean payoff of the first player per unit time. It is proved that a general Markov game with finite sets of states and decisions for both players has a value and that both players have ε-optimal stationary strategies. The validity of this statement is demonstrated on Blackwell's well-known "Big Match" example.
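The Big Match mentioned above can be stated concretely: at each stage, player 2 secretly writes 0 or 1, and player 1 tries to guess it, receiving a stage payoff of 1 for a correct guess. The first time player 1 guesses "1", the game absorbs: that stage's payoff is repeated forever, so the mean payoff per unit time is fixed from that moment on. The following sketch (not from the paper; strategy parameters `p` and `q` and the function name are illustrative assumptions) simulates the mean payoff when both players use stationary mixed strategies:

```python
import random

def big_match_mean_payoff(p, q, horizon=10_000, rng=None):
    """Simulate one play of the Big Match over `horizon` stages.

    Each stage, player 1 guesses 1 with probability p and player 2
    writes 1 with probability q (both stationary strategies).  A correct
    guess pays 1.  The first time player 1 guesses 1, the state becomes
    absorbing and that stage's payoff repeats for all remaining stages.
    Returns the average payoff per stage.
    """
    rng = rng or random.Random()
    total = 0.0
    for t in range(horizon):
        guess = 1 if rng.random() < p else 0
        hidden = 1 if rng.random() < q else 0
        payoff = 1 if guess == hidden else 0
        if guess == 1:
            # Absorbing move: the current payoff is frozen for the
            # remaining horizon - t stages, including this one.
            total += payoff * (horizon - t)
            break
        total += payoff
    return total / horizon
```

The deterministic corner cases are instructive: with `p = 0, q = 0` player 1 guesses correctly every stage (mean payoff 1), while with `p = 1, q = 0` the very first guess absorbs the game at payoff 0, illustrating how a single ergodic class can dominate the mean-payoff criterion.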


REFERENCES

  1. L. S. Shapley, “Stochastic games,” Proc. Nat. Acad. Sci. USA, No. 39, 1095-1100 (1953).

  2. D. Gillette, “Stochastic games with zero stop probabilities,” Contrib. Theor. Games (Ann. Math. Stud.), 3, No. 39, 179-187 (1957).

  3. L. G. Gubenko, “On multistep stochastic games,” Teor. Ver. Mat. Statist., Issue 8, 35-49 (1973).

  4. A. J. Hoffman and R. M. Karp, “On nonterminating stochastic games,” Manag. Sci., No. 12, 359-370 (1966).

  5. A. A. Ibragimov, “Iterative method for solving a Markov game with single ergodic class,” Probl. Informat. Énerget., No. 4, 10-14 (1999).

  6. T. Parthasarathy and T. E. S. Raghavan, Some Topics in Two-Person Games, Elsevier, New York (1971).

  7. D. Blackwell, “The big match,” SIAM J. Appl. Math., No. 19, 473-476 (1970).

  8. A. A. Ibragimov, “On the Markov 'Big Match' game,” Avtomat. Telemekh., No. 11, 104-113 (2000).

  9. L. A. Petrosyan and G. V. Tomskii, Dynamic Games and Their Applications [in Russian], Leningrad University, Leningrad (1982).

  10. R. E. Bellman, Dynamic Programming, Princeton University Press, Princeton (1957).

  11. R. D. Luce and H. Raiffa, Games and Decisions, Wiley, New York (1957).

  12. H. Mine and S. Osaki, Markovian Decision Processes [Russian translation], Nauka, Moscow (1977).

  13. J. G. Kemeny and J. L. Snell, Finite Markov Chains, Springer, New York (1976).

  14. A. N. Shiryaev, Probability [in Russian], Nauka, Moscow (1980).

  15. G. M. Fikhtengol'ts, A Course of Differential and Integral Calculus [in Russian], Nauka, Moscow (1970).

  16. T. A. Sarymsakov, Principles of the Theory of Markov Processes [in Russian], Fan, Tashkent (1988).

  17. G. Pólya and G. Szegö, Aufgaben und Lehrsätze aus der Analysis. Erster Band: Reihen, Integralrechnung, Funktionentheorie [Russian translation], Nauka, Moscow (1978).

  18. V. V. Baranov, “Model and methods of uniformly optimal stochastic control,” Avtomat. Telemekh., No. 5, 42-52 (1992).

  19. G. N. Dyubin and V. G. Suzdal', Introduction to Applied Game Theory [in Russian], Nauka, Moscow (1981).

Cite this article

Ibragimov, A.A. Markov Games with Several Ergodic Classes. Ukrainian Mathematical Journal 55, 921–941 (2003). https://doi.org/10.1023/B:UKMA.0000010593.59199.88
