Large Deviation Principle for Markov Chains in Discrete Time

Guy FAYOLLE & Arnaud de LA FORTELLE

November 1999

Abstract: Let E be a denumerable state space and X a homogeneous Markov chain on E with kernel P. Then the chain X satisfies a weak Sanov's theorem, i.e. a weak large deviation principle (LDP) holds for the law of the pair empirical measure. In our opinion this is an improvement on the existing literature, insofar as the LDP in the Markov case usually requires either the finiteness of E or strong uniformity conditions, which important classes of chains do not satisfy (e.g. classical queueing networks with bounded jumps). Moreover, this LDP holds for any Markov chain with a discrete state space, possibly non-ergodic.
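To fix notation (which is standard and may differ slightly from that used in the full text), write (X_k)_{k \ge 0} for the chain; the pair empirical measure and the weak LDP with rate function I then read:
\[
  L_n \;=\; \frac{1}{n}\sum_{k=0}^{n-1}\delta_{(X_k,\,X_{k+1})} \;\in\; \mathcal{P}(E\times E),
\]
\[
  \liminf_{n\to\infty}\frac{1}{n}\log \mathbb{P}\bigl(L_n\in G\bigr) \;\ge\; -\inf_{\nu\in G} I(\nu)
  \quad\text{for all open } G\subset\mathcal{P}(E\times E),
\]
\[
  \limsup_{n\to\infty}\frac{1}{n}\log \mathbb{P}\bigl(L_n\in K\bigr) \;\le\; -\inf_{\nu\in K} I(\nu)
  \quad\text{for all compact } K\subset\mathcal{P}(E\times E).
\]
The restriction of the upper bound to compact sets is what distinguishes a weak LDP from a full one; the precise form of the rate function I obtained here is given in the paper.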
The result is obtained by a new method which allows the LDP to be extended from a finite state space setting to a denumerable one, somewhat in the spirit of the projective limit approach. The analysis presented here offers some by-products, among which are an analogue of Varadhan's integral lemma and, under restrictive conditions, a contraction principle leading directly to a weak Sanov's theorem for the one-dimensional empirical measure; the latter step is sketched in standard form below.
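The contraction step rests on the usual mechanism (stated here in the same standard notation; the conditions under which it applies are the restrictive conditions mentioned above): the one-dimensional empirical measure is the image of the pair empirical measure under the continuous marginal map,
\[
  \widehat{L}_n \;=\; \frac{1}{n}\sum_{k=0}^{n-1}\delta_{X_k} \;=\; \pi(L_n),
  \qquad \pi(\nu)(i) \;=\; \sum_{j\in E}\nu(i,j),
\]
so that the rate function governing the weak Sanov's theorem for the one-dimensional empirical measure is the contracted one,
\[
  J(\mu) \;=\; \inf\bigl\{\, I(\nu) \;:\; \nu\in\mathcal{P}(E\times E),\ \pi(\nu)=\mu \,\bigr\}.
\]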

Keywords: Large deviations, Markov chain, pair empirical measure, Sanov, entropy, information, cycle.

PostScript file (198 KB, in English)
PDF file (434 KB, in English)