Mathematical Model for Sequential Decision Making Under Uncertainty
The Markov decision process (MDP) is a framework for sequential decision making when outcomes are uncertain. Think of it as a formalized way of saying, "I do this, then something may happen, and I have to decide what to do next based on what happened." It is also known as a stochastic dynamic program or a stochastic control problem. [1]
MDPs originated in operations research in the 1950s. [2] [3] Since then, they have spread into a diverse collection of fields, including ecology, economics, healthcare, and telecommunications, and, perhaps most notably, they form the backbone of reinforcement learning, where the MDP defines the environment a learning agent interacts with through states, actions, and rewards. The framework is a deliberately simplified representation of key challenges in artificial intelligence: understanding cause and effect, coping with uncertainty and nondeterminism, and pursuing explicit goals. [4]
The name itself, "Markov decision process," is a nod to the mathematician Andrey Markov and his chains. The "Markov" part signifies that the state transitions adhere to the Markov property – the future depends only on the present state, not the entire history. The "decision process" bit is where it gets interesting, or at least, less passive. It means someone, or something, is actually making choices that nudge these state transitions, elevating a simple chain into a realm of deliberate action amidst uncertainty.
Definition
[Diagram: example of a simple MDP, with states drawn as green circles, actions as orange circles, and rewards as orange arrows.]
A Markov decision process is formally defined by a 4-tuple:
$$(S, A, P_a, R_a)$$
Let's dissect this.
- $S$: The set of states, the "state space." It can be a neatly defined, discrete list of possibilities, or something continuous, such as a subset of the real numbers. Think of it as the landscape of possible configurations.
- $A$: The set of actions, the "action space." Sometimes only a subset $A_s \subseteq A$ of actions is available from a particular state $s$. Like the state space, actions can be discrete choices or continuous adjustments.
- $P_a(s, s')$: The engine of uncertainty. Intuitively, it is the probability that taking action $a$ in state $s$ at time $t$ leads to state $s'$ at time $t+1$. Formally, the state transition probability satisfies

  $$\Pr(s_{t+1} \in S' \mid s_t = s,\ a_t = a) = \int_{S'} P_a(s, s')\, ds'$$

  for any measurable subset $S' \subseteq S$. If the state space is discrete, the integral reduces to a sum:

  $$P_a(s, s') = \Pr(s_{t+1} = s' \mid s_t = s,\ a_t = a).$$

  If $S$ is a subset of $\mathbb{R}^d$, the integral is typically understood with respect to the Lebesgue measure. It is the rulebook for how the world shifts, or fails to shift, in response to your choices.
- $R_a(s, s')$: The immediate reward (or expected immediate reward) received after taking action $a$ in state $s$ and transitioning to state $s'$. It is the tangible consequence, good or bad, delivered right after the action.
A "policy function," denoted by , is essentially a strategy. It's a mapping, potentially probabilistic, from the state space to the action space . It tells you, or guides you, on what action to take in any given state.
Optimization Objective
The ultimate goal in this labyrinth of states and actions is to discover a "policy": a function $\pi(s)$ that dictates which action to choose when you find yourself in state $s$. Once an MDP is combined with a policy in this way, the action taken in each state is fixed, and the resulting process behaves like a Markov chain, since the action chosen in state $s$ is completely determined by $\pi(s)$.
The objective is to unearth a policy $\pi$ that maximizes some cumulative measure of these random rewards. Typically, this is the expected discounted sum over an infinite horizon:

$$\operatorname{E}\!\left[\sum_{t=0}^{\infty} \gamma^{t} R_{a_t}(s_t, s_{t+1})\right]$$

Here, $a_t = \pi(s_t)$ means the actions are dictated by the policy. The expectation is taken over the possible next states, $s_{t+1} \sim P_{a_t}(s_t, s_{t+1})$. The term $\gamma$ is the discount factor, where $0 \le \gamma \le 1$. When $\gamma$ is close to 1, the decision-maker is far-sighted, considering future rewards with significant weight. A lower $\gamma$ makes the decision-maker more myopic, prioritizing immediate gains.
Another objective, closely related, is the finite-horizon ($H$-step) return, where the agent focuses only on the first $H$ steps, giving each reward equal weight:

$$\operatorname{E}\!\left[\sum_{t=0}^{H-1} R_{a_t}(s_t, s_{t+1})\right]$$

Here, $a_t = \pi(s_t)$, the expectation is over $s_{t+1} \sim P_{a_t}(s_t, s_{t+1})$, and $H$ is the time horizon. This version is more common in learning theory.
A policy that achieves this maximum is called an "optimal policy," usually denoted $\pi^{*}$. It's important to note that there might be several distinct optimal policies. Thanks to the Markov property, it can be shown that the optimal policy is indeed a function of the current state only, just as we've assumed.
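As a rough illustration of the discounted objective, the following sketch estimates the expected discounted return of a fixed policy on the toy MDP above by averaging sampled trajectories. The truncation horizon and the number of samples are arbitrary choices, and the estimate is only approximate.

```python
import random

def sample_discounted_return(P, R, policy, s0, gamma=0.95, horizon=200, rng=random):
    """Sample one trajectory from state s0 under `policy` and return
    sum_t gamma^t * R_{a_t}(s_t, s_{t+1}), truncated at `horizon` steps."""
    s, total, discount = s0, 0.0, 1.0
    for _ in range(horizon):
        a = policy[s]
        successors = list(P[(s, a)].items())
        states = [sp for sp, _ in successors]
        probs = [p for _, p in successors]
        s_next = rng.choices(states, weights=probs, k=1)[0]
        total += discount * R[(s, a)][s_next]
        discount *= gamma
        s = s_next
    return total

# Estimate the expected discounted return by averaging many sampled trajectories.
estimate = sum(sample_discounted_return(P, R, policy, "low") for _ in range(1000)) / 1000
```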
Simulator Models
Sometimes, the precise probabilities are elusive, buried in complexity. In such cases, a simulator can step in to implicitly model the MDP by providing samples from those elusive transition distributions.
One common form is an episodic environment simulator. You start it in an initial state, feed it an action, and it spits back a subsequent state and reward. This allows you to generate trajectories of states, actions, and rewards, often referred to as "episodes."
Another type is a generative model: a single-step simulator that, given any state and action, can generate a sample of the next state and the associated reward. [5] (This is distinct from the term "generative model" as used in statistical classification.) In pseudocode, a generative model is often written $G$, so that $s', r \leftarrow G(s, a)$ means sampling a next state $s'$ and reward $r$ from the generative model given the current state $s$ and action $a$. Unlike an episodic simulator, a generative model can provide data from any state, not just those encountered in a specific trajectory.
These model classes form a hierarchy. An explicit model can easily generate a generative model by sampling, and repeated use of a generative model yields an episodic simulator. The reverse is trickier; you can only learn approximate models through regression. The type of model available dictates which solution algorithms are suitable. Dynamic programming algorithms, for instance, demand an explicit model. Monte Carlo tree search requires a generative model (or a replicable episodic simulator). Most reinforcement learning algorithms, however, are content with just an episodic simulator.
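This hierarchy can be made concrete. Assuming the explicit tables from the toy example above, sampling from them gives a generative model, and calling that generative model in a loop gives an episodic simulator; the helper names below are hypothetical.

```python
import random

def generative_step(s, a, rng=random):
    """A generative model: sample (s', r) for a single (state, action) query,
    here derived from the explicit tables P and R defined earlier."""
    succ = list(P[(s, a)].items())
    s_next = rng.choices([sp for sp, _ in succ], weights=[p for _, p in succ], k=1)[0]
    return s_next, R[(s, a)][s_next]

def run_episode(policy, s0, horizon=50):
    """An episodic simulator built by repeatedly calling the generative model."""
    episode, s = [], s0
    for _ in range(horizon):
        a = policy[s]
        s_next, r = generative_step(s, a)
        episode.append((s, a, r))
        s = s_next
    return episode
```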
Example
Consider the pole-balancing (cart-pole) model, a classic from control theory; a brief interaction sketch follows the list below. [Image: a pole-balancing simulation from OpenAI Gym.]
- $S$: The state is a tuple of the pole's angle, the pole's angular velocity, the cart's position, and the cart's velocity. It's a snapshot of the physical configuration.
- $A$: There are two actions: push the cart to the left, or push it to the right. Binary choices for a messy reality.
- $P_a(s, s')$: The system's transition is deterministic, governed by the equations of motion of the cart-pole system. No randomness here, just the inexorable march of mechanics.
- $R_a(s, s')$: The reward is 1 if the pole remains upright after the transition, and 0 otherwise. The reward depends solely on the resulting state $s'$; the action itself doesn't directly grant a reward, only influences the outcome.
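For readers who want to experiment with this example, the sketch below drives the CartPole environment from the third-party Gymnasium package with random actions. The package name, environment id, API, and observation layout are assumptions about that library, not part of the MDP formalism itself.

```python
import gymnasium as gym  # assumed third-party package providing CartPole-v1

env = gym.make("CartPole-v1")
obs, info = env.reset(seed=0)  # obs ~ (cart position, cart velocity, pole angle, pole angular velocity)

total_reward, done = 0.0, False
while not done:
    action = env.action_space.sample()      # 0 = push left, 1 = push right
    obs, reward, terminated, truncated, info = env.step(action)
    total_reward += reward                   # +1 for every step the pole stays up
    done = terminated or truncated
env.close()
```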
Algorithms
For MDPs with finite state and action spaces, solutions can be found through various methods, notably dynamic programming. The algorithms I'll outline here are for finite MDPs with explicitly defined transition probabilities and reward functions, though the core ideas can be extended, often using function approximation. Sometimes, even processes with countably infinite state and action spaces can be cleverly reduced to finite ones. [6]
The standard approach to finding optimal policies in finite MDPs involves storing two state-indexed arrays: value $V$, holding real numbers (the expected future reward), and policy $\pi$, holding actions. At the algorithm's conclusion, $\pi$ will hold the optimal policy, and $V(s)$ will represent the discounted sum of rewards expected from state $s$ onwards.

The algorithm alternates between two steps: (1) a value update and (2) a policy update. These are iterated for all states until no further changes occur. Both steps recursively refine estimates of the optimal state value and policy using older estimates:

$$V(s) := \sum_{s'} P_{\pi(s)}(s, s') \left( R_{\pi(s)}(s, s') + \gamma V(s') \right)$$

$$\pi(s) := \operatorname{argmax}_a \left\{ \sum_{s'} P_a(s, s') \left( R_a(s, s') + \gamma V(s') \right) \right\}$$
The order and frequency of these updates can vary. As long as no state is permanently ignored, the algorithm is guaranteed to converge to the correct solution. [7]
Notable Variants
Value Iteration
In value iteration, also known as backward induction (Bellman, 1957), the $\pi$ array isn't explicitly used. Instead, the value of $\pi(s)$ is computed inside the value update whenever needed. Substituting the policy update into the value update yields a combined step:

$$V_{i+1}(s) := \max_{a} \left\{ \sum_{s'} P_a(s, s') \left( R_a(s, s') + \gamma V_i(s') \right) \right\}$$

Here, $i$ is the iteration number. Value iteration begins with an initial guess $V_0$, and then repeatedly calculates $V_{i+1}$ for all states until $V$ converges, meaning the left-hand side equals the right-hand side (this is the "Bellman equation" for this problem). Lloyd Shapley's 1953 work on stochastic games included value iteration as a special case for MDPs, though this wasn't widely recognized immediately. [8] [9]
Policy Iteration
Policy iteration (Howard, 1960) alternates the two steps differently: the value update (step one) is repeated until the values converge for the current policy (policy evaluation), and then the policy update (step two) is performed once (policy improvement). This cycle continues until the policy itself stops changing. Howard actually developed this method to optimize Sears catalogue mailings, having previously used value iteration. [10]

Instead of repeating the value update until convergence, policy evaluation can be formulated and solved as a system of linear equations: fixing the action to $\pi(s)$ in the value update and treating the values $V(s)$ as unknowns gives one linear equation per state. Repeating the value update until convergence can thus be viewed as solving these linear equations through relaxation.
A distinct advantage of policy iteration is its clear stopping condition: when the policy array remains unchanged after updating all states, the algorithm has finished.
However, policy iteration typically converges more slowly than value iteration when dealing with a large number of possible states.
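For comparison, here is a policy-iteration sketch for the same toy MDP, with policy evaluation performed by solving the linear system described above using NumPy; the initial policy is an arbitrary choice.

```python
import numpy as np

def policy_iteration(S, A, P, R, gamma=0.95):
    """Alternate exact policy evaluation (a linear solve) with greedy policy improvement."""
    idx = {s: i for i, s in enumerate(S)}
    pi = {s: A[0] for s in S}                    # arbitrary initial policy
    while True:
        # Policy evaluation: solve (I - gamma * P_pi) V = r_pi for V.
        P_pi = np.zeros((len(S), len(S)))
        r_pi = np.zeros(len(S))
        for s in S:
            for sp, p in P[(s, pi[s])].items():
                P_pi[idx[s], idx[sp]] = p
                r_pi[idx[s]] += p * R[(s, pi[s])][sp]
        V = np.linalg.solve(np.eye(len(S)) - gamma * P_pi, r_pi)
        # Policy improvement: greedy policy with respect to V.
        new_pi = {
            s: max(A, key=lambda a: sum(p * (R[(s, a)][sp] + gamma * V[idx[sp]])
                                        for sp, p in P[(s, a)].items()))
            for s in S
        }
        if new_pi == pi:                         # stopping condition: policy unchanged
            return V, pi
        pi = new_pi
```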
Modified Policy Iteration
Modified policy iteration (van Nunen, 1976; Puterman & Shin, 1978) strikes a balance: the policy update is performed once, followed by a limited number of value-update sweeps (a partial policy evaluation), and the cycle then repeats. [11] [12]
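A sketch of the modified-iteration idea, reusing the structures above: in place of the exact linear solve, run a fixed number of value-update sweeps under the current policy (the sweep count k below is an arbitrary choice).

```python
def partial_policy_evaluation(S, P, R, pi, V, gamma=0.95, k=5):
    """Approximate policy evaluation: k sweeps of the value update under a fixed policy pi."""
    V = dict(V)
    for _ in range(k):
        V = {
            s: sum(p * (R[(s, pi[s])][sp] + gamma * V[sp]) for sp, p in P[(s, pi[s])].items())
            for s in S
        }
    return V
```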
Prioritized Sweeping
This variant focuses the updates on states deemed "important." Importance can be defined by recent significant changes in $V$ or $\pi$ around those states, or by their proximity to the starting state or to other states of interest to the person or program using the algorithm.
Computational Complexity
For finite MDPs, algorithms exist that can find optimal policies in time polynomial in the size of the problem representation. This places the related decision problems within the computational complexity class P. [13] However, the "curse of dimensionality" means that the representation size is often itself exponential in the number of state and action variables. This severely limits exact solution techniques to problems with a compact representation. In practice, online planning methods like Monte Carlo tree search can yield useful solutions for larger problems. Theoretically, it's possible to devise online planning algorithms that approach arbitrarily near-optimal policies with no computational complexity dependence on the size of the state space. [14]
Extensions and Generalizations
A Markov decision process can be seen as a stochastic game with a single player. It’s the simplest form.
Partial Observability
The solutions described above assume perfect knowledge of the state $s$ when making a decision; otherwise $\pi(s)$ cannot be calculated. If this assumption does not hold, the problem becomes a partially observable Markov decision process, or POMDP.
Constrained Markov Decision Processes
Constrained Markov Decision Processes (CMDPs) extend MDPs in a few crucial ways. [15]
- Instead of a single reward, there are multiple costs associated with actions.
- CMDPs are solved using linear programs; standard dynamic programming won't suffice.
- The resulting optimal policy can depend on the initial state.
The method of Lagrange multipliers is applicable here, and various Lagrangian-based algorithms have been developed, including the natural policy gradient primal-dual method. [16] CMDPs find applications in areas like motion planning for robotics. [17]
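As an illustration of the linear-programming view, the sketch below solves a tiny discounted CMDP through its occupancy-measure LP using SciPy. The rewards, the single cost function, the budget, the discount factor, and the initial distribution are all invented for the example, and the occupancy-measure formulation is one standard way of writing such an LP rather than a prescription from the source.

```python
import numpy as np
from scipy.optimize import linprog

# A tiny CMDP: 2 states, 2 actions, invented numbers.
nS, nA, gamma = 2, 2, 0.9
P = np.array([  # P[s, a, s'] = transition probability
    [[1.0, 0.0], [0.3, 0.7]],
    [[0.2, 0.8], [0.0, 1.0]],
])
r = np.array([[0.0, 5.0], [1.0, 2.0]])   # reward r(s, a)
c = np.array([[0.0, 3.0], [0.0, 1.0]])   # cost  c(s, a), to be kept below a budget
budget = 1.5
mu = np.array([0.5, 0.5])                # initial state distribution

# Occupancy-measure LP: variables x(s, a) >= 0, maximize sum r(s,a) x(s,a)
# subject to  sum_a x(s',a) - gamma * sum_{s,a} P(s'|s,a) x(s,a) = mu(s')  for all s'
#        and  sum_{s,a} c(s,a) x(s,a) <= budget.
A_eq = np.zeros((nS, nS * nA))
for sp in range(nS):
    for s in range(nS):
        for a in range(nA):
            A_eq[sp, s * nA + a] = (1.0 if s == sp else 0.0) - gamma * P[s, a, sp]
b_eq = mu
A_ub = c.reshape(1, -1)
b_ub = np.array([budget])
objective = -r.reshape(-1)               # linprog minimizes, so negate the reward

res = linprog(objective, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, method="highs")
x = res.x.reshape(nS, nA)
# The (possibly randomized) optimal policy: pi(a|s) proportional to x(s, a).
opt_policy = x / x.sum(axis=1, keepdims=True)
```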
Continuous-Time Markov Decision Process
In discrete-time MDPs, decisions are made at distinct time intervals. Continuous-time Markov decision processes allow decisions to be made at any point in time. This makes them better suited for modeling systems with continuous dynamics, often described by ordinary differential equations (ODEs). These models are useful for queueing systems, epidemic processes, and population processes.
Like their discrete-time counterparts, continuous-time MDPs aim to find an optimal policy that maximizes the expected cumulative reward. The key difference is that the summation is replaced by an integral due to the continuous nature of time:

$$\max_{\pi}\ \operatorname{E}_{\pi}\!\left[\int_{0}^{\infty} \gamma^{t}\, r\big(s(t), \pi(s(t))\big)\, dt \;\middle|\; s_0\right]$$

where $0 \le \gamma < 1$.
Discrete Space: Linear Programming Formulation
When both state and action spaces are finite, linear programming offers an early approach to finding the optimal policy. This formulation typically assumes an ergodic model, meaning the continuous-time MDP behaves like an ergodic continuous-time Markov chain under a stationary policy. Under this assumption, decisions are made only when the system is about to transition between states. If the optimal value function $\bar{V}^{*}$ is state-independent, we have the inequality:

$$g \ge R(i, a) + \sum_{j \in S} q(j \mid i, a)\, h(j), \qquad \forall\, i \in S,\ a \in A(i)$$

If a function $h$ exists satisfying this, then $\bar{V}^{*}$ will be the smallest such $g$. The linear programming formulation to find $\bar{V}^{*}$ is as follows:

- Primal Linear Program (P-LP)

$$\begin{aligned}
\text{minimize}\quad & g \\
\text{subject to}\quad & g - \sum_{j \in S} q(j \mid i, a)\, h(j) \ge R(i, a), \qquad \forall\, i \in S,\ a \in A(i)
\end{aligned}$$

- Dual Linear Program (D-LP)

$$\begin{aligned}
\text{maximize}\quad & \sum_{i \in S} \sum_{a \in A(i)} R(i, a)\, y(i, a) \\
\text{subject to}\quad & \sum_{i \in S} \sum_{a \in A(i)} q(j \mid i, a)\, y(i, a) = 0, \qquad \forall\, j \in S, \\
& \sum_{i \in S} \sum_{a \in A(i)} y(i, a) = 1, \qquad y(i, a) \ge 0,\ \ \forall\, i \in S,\ a \in A(i)
\end{aligned}$$

A feasible solution $y(i, a)$ to the D-LP is optimal if it maximizes the objective function. Once an optimal solution $y^{*}(i, a)$ is found, it can be used to establish the optimal policies.
Continuous Space: Hamilton–Jacobi–Bellman Equation
For continuous-time MDPs with continuous state and action spaces, the optimal criterion is found by solving the Hamilton–Jacobi–Bellman (HJB) partial differential equation. The problem is reformulated as:

$$V\big(s(0), 0\big) = \max_{a(t) = \pi(s(t))} \int_{0}^{T} r\big(s(t), a(t)\big)\, dt + D\big[s(T)\big]
\qquad \text{subject to} \qquad \frac{ds(t)}{dt} = f\big(s(t), a(t)\big)$$

Here, $D(\cdot)$ is the terminal reward function, $s(t)$ is the system state vector, $a(t)$ is the control vector we seek, and $f(\cdot)$ describes how the state vector evolves over time. The HJB equation is:

$$0 = \max_{a} \left\{ r(s, a) + \frac{\partial V(s, t)}{\partial t} + \frac{\partial V(s, t)}{\partial s}\, f(s, a) \right\}$$

with terminal condition $V(s, T) = D(s)$. Solving this equation yields the optimal value function $V^{*}$, which in turn determines the optimal control at any time $t$:

$$a(t) = \pi\big(s(t), t\big) = \arg\max_{a} \left\{ r(s, a) + \frac{\partial V^{*}(s, t)}{\partial s}\, f(s, a) \right\}$$
Reinforcement Learning
Reinforcement learning is a branch of machine learning and optimal control focused on finding approximately optimal policies for MDPs when transition probabilities and rewards are unknown. [19]
Reinforcement learning allows us to tackle MDPs without explicitly defining the transition probabilities needed for policy iteration. Instead, these probabilities and rewards are learned through interaction with the MDP. The agent gathers experience by acting within the environment, aiming to maximize sample efficiency – minimizing the amount of experience needed to find a policy that performs close to optimally. Perfect optimality is generally unattainable with finite samples due to the inherent stochasticity.
Reinforcement Learning for Discrete MDPs
For discrete MDPs, a useful auxiliary function is $Q(s, a)$, the value of taking action $a$ in state $s$ and then proceeding optimally (or according to the current policy):

$$Q(s, a) = \sum_{s'} P_a(s, s') \left( R_a(s, s') + \gamma V(s') \right)$$

While this function is also unknown, learning experience is based on $(s, a)$ pairs together with their outcomes $s'$ and rewards; that is, "I was in state $s$, I tried action $a$, and $s'$ happened." Therefore, one can maintain a table of $Q$ values and update it directly from experience. This is the essence of Q-learning.
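A minimal tabular Q-learning sketch that learns from the generative_step simulator sketched earlier; the learning rate, discount factor, ε-greedy exploration, start state, and episode counts are arbitrary illustrative choices.

```python
import random
from collections import defaultdict

def q_learning(S, A, step, episodes=500, horizon=50, alpha=0.1, gamma=0.95, eps=0.1, start="low"):
    """Tabular Q-learning: update Q(s, a) from sampled transitions obtained via step(s, a) -> (s', r)."""
    Q = defaultdict(float)                       # Q[(s, a)], implicitly initialized to 0
    for _ in range(episodes):
        s = start                                # "low" is the toy MDP's start state in this example
        for _ in range(horizon):
            # Epsilon-greedy action selection.
            if random.random() < eps:
                a = random.choice(A)
            else:
                a = max(A, key=lambda a_: Q[(s, a_)])
            s_next, r = step(s, a)               # one call to the generative model / simulator
            # Q-learning update: move Q(s, a) toward r + gamma * max_a' Q(s', a').
            target = r + gamma * max(Q[(s_next, a_)] for a_ in A)
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s = s_next
    return Q

# Example usage with the generative_step sketch defined earlier:
# Q = q_learning(S, A, generative_step)
```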
Other Scopes
Learning Automata
The theory of learning automata offers another perspective on MDPs within machine learning. If the environment is stochastic, it can be viewed as a type of reinforcement learning. Early work by Narendra and Thathachar (1974) described learning automata as finite-state automata. [20] Similar to reinforcement learning, learning automata can solve problems without known probabilities or rewards. The key difference from Q-learning is that learning automata update action probabilities directly, eschewing the memory of Q-values. This scheme comes with rigorous convergence proofs. [21]
A stochastic automaton in this context consists of:
- A set $x$ of possible inputs.
- A set $\Phi = \{\Phi_1, \ldots, \Phi_s\}$ of internal states.
- A set $\alpha = \{\alpha_1, \ldots, \alpha_r\}$ of possible outputs or actions, where $r \le s$.
- An initial state probability vector $p(0) = \langle p_1(0), \ldots, p_s(0) \rangle$.
- A computable function $A$ that updates $p(t)$ to $p(t+1)$ based on the current input and state at time $t$.
- A function $G: \Phi \to \alpha$ that generates the output (action) at each time step.
The states of such an automaton align with the states of a "discrete-state discrete-parameter Markov process". [22] At each time step $t$, the automaton receives an input from its environment, updates $p(t)$ to $p(t+1)$ using $A$, randomly selects a successor state according to the probabilities $p(t+1)$, and outputs the corresponding action via $G$. The environment then processes this action and provides the next input. [21]
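A sketch of a simple learning automaton, using the classic linear reward-inaction rule as the update function A; this particular rule is one common choice rather than something prescribed above, and the environment in the usage example is hypothetical.

```python
import random

class LinearRewardInactionAutomaton:
    """A stochastic learning automaton: keeps a probability vector over actions
    and reinforces the chosen action whenever the environment signals a reward."""

    def __init__(self, actions, step_size=0.1):
        self.actions = list(actions)
        self.step = step_size
        n = len(self.actions)
        self.p = [1.0 / n] * n                  # initial action probability vector p(0)

    def act(self):
        """G: randomly select an action index according to the current probabilities."""
        return random.choices(range(len(self.actions)), weights=self.p, k=1)[0]

    def update(self, chosen, rewarded):
        """A: linear reward-inaction update of p(t) -> p(t+1).
        On reward, shift probability mass toward the chosen action; on penalty, leave p unchanged."""
        if rewarded:
            for i in range(len(self.p)):
                if i == chosen:
                    self.p[i] += self.step * (1.0 - self.p[i])
                else:
                    self.p[i] -= self.step * self.p[i]

# Hypothetical usage: action index 1 is rewarded more often, so p drifts toward it.
automaton = LinearRewardInactionAutomaton(["left", "right"])
for _ in range(1000):
    i = automaton.act()
    automaton.update(i, rewarded=(random.random() < (0.8 if i == 1 else 0.2)))
```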
Category Theoretic Interpretation
Beyond rewards, a Markov decision process $(S, A, P)$ can be framed within category theory. Let $\mathcal{A}$ be the free monoid with generating set $A$, and let $\mathbf{Dist}$ denote the Kleisli category of the Giry monad. Then a functor $\mathcal{A} \to \mathbf{Dist}$ encodes both the state space $S$ and the probability function $P$.

This allows a generalization of MDPs from monoids (categories with a single object) to arbitrary categories. Such a structure, a functor $\mathcal{C} \to \mathbf{Dist}$ for an arbitrary category $\mathcal{C}$, could be termed a context-dependent Markov decision process, because transitions between objects in $\mathcal{C}$ can alter the available actions and the set of possible states.
Alternative Notations
The terminology and notation for MDPs are not entirely settled. Two main schools of thought exist:
- One, prevalent in fields like economics, focuses on maximization problems, using terms like "action," "reward," and "value," and often writing the discount factor as $\beta$ or $\gamma$.
- The other, common in engineering and navigation, deals with minimization problems, using terms like "control," "cost," and "cost-to-go," and frequently writing the discount factor as $\alpha$.
The common correspondences, including the notation for transition probabilities, are summarized below:

| In this article | Alternative | Comment |
|---|---|---|
| action $a$ | control $u$ | |
| reward $R$ | cost $g$ | $g$ is typically the negative of $R$ |
| value $V$ | cost-to-go $J$ | $J$ is typically the negative of $V$ |
| policy $\pi$ | policy $\mu$ | |
| discounting factor $\gamma$ | discounting factor $\alpha$ | |
| transition probability $P_a(s, s')$ | transition probability $p_{s s'}(a)$ | |

Additionally, the transition probability might appear as $\Pr(s, a, s')$, $\Pr(s' \mid s, a)$, or, less commonly, $p_{s' s}(a)$.
See Also
- Probabilistic automata
- Odds algorithm
- Quantum finite automata
- Partially observable Markov decision process
- Dynamic programming
- Bellman equation (for applications in economics)
- Hamilton–Jacobi–Bellman equation
- Optimal control
- Recursive economics
- Mabinogion sheep problem
- Stochastic games
- Q-learning
- Markov chain