We consider discounted Markov decision processes (MDPs) with countably-infinite state spaces, finite action spaces, and unbounded rewards. Typical examples of such MDPs are inventory management and ...