Multi-armed bandit

A multi-armed bandit, also sometimes called a K-armed bandit, is a simple machine learning problem based on an analogy with a traditional slot machine (one-armed bandit) but with more than one lever. When pulled, each lever provides a reward drawn from a distribution associated with that specific lever. The objective of the gambler is to maximize the sum of rewards collected through iterative pulls. It is classically assumed that the gambler has no initial knowledge about the levers. The crucial tradeoff the gambler faces at each trial is between "exploitation" of the lever that has the highest expected payoff and "exploration" to get more information about the expected payoffs of the other levers.

Empirical motivation
The multi-armed bandit problem, originally described by Herbert Robbins in 1952, is a simple model of an agent that simultaneously attempts to acquire new knowledge and to optimize its decisions based on existing knowledge. Practical examples include clinical trials where the effects of different experimental treatments need to be investigated while minimizing patient losses, and adaptive routing efforts for minimizing delays in a network. The questions arising in these cases are related to the problem of balancing reward maximization based on the knowledge already acquired with attempting new actions to further increase knowledge. This is known as the exploitation vs. exploration tradeoff in reinforcement learning.

The multi-armed bandit model
The multi-armed bandit (bandit for short) can be seen as a set of real distributions $$B = \{R_1, \dots ,R_K\}$$, each distribution being associated with the rewards delivered by one of the K levers. Let $$\mu_1, \dots, \mu_K$$ be the mean values associated with these reward distributions. The gambler iteratively plays one lever per round and observes the associated reward. The objective is to maximize the sum of the collected rewards. The horizon H is the number of rounds that remain to be played. The bandit problem is formally equivalent to a one-state Markov decision process. The regret $$\rho$$ after T rounds is defined as the difference between the reward sum associated with an optimal strategy and the sum of the collected rewards: $$\rho = T \mu^* - \sum_{t=1}^T \widehat{r}_t$$, where $$\mu^*$$ is the maximal reward mean, $$\mu^* = \max_k \{ \mu_k \}$$, and $$\widehat{r}_t$$ is the reward at time t. A strategy whose average regret per round $$\rho / T$$ tends to zero with probability 1 when the number of played rounds tends to infinity is a zero-regret strategy. Intuitively, zero-regret strategies are guaranteed to converge to an optimal strategy, not necessarily unique, if enough rounds are played.
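The regret definition above can be checked in a short simulation. The following is a minimal sketch, assuming Bernoulli reward distributions; the function name and simulation setup are illustrative, not part of the problem definition:

```python
import random

def simulate_regret(mu, policy, n_rounds, seed=0):
    """Return the regret T*mu_star - sum of collected rewards.

    mu: true mean rewards of the K levers (Bernoulli rewards here;
    hidden from the policy and used only to simulate pulls).
    policy: callable mapping the round index t to a lever index.
    """
    rng = random.Random(seed)
    mu_star = max(mu)  # best achievable mean reward per round
    collected = 0
    for t in range(n_rounds):
        arm = policy(t)
        # Simulate pulling the chosen lever: reward 1 w.p. mu[arm]
        collected += 1 if rng.random() < mu[arm] else 0
    return n_rounds * mu_star - collected
```

For instance, a round-robin policy `lambda t: t % 2` on two levers with means 0.3 and 0.7 collects about 0.5 per round, so its average regret per round stays near 0.2 rather than tending to zero: it is not a zero-regret strategy.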

Common bandit strategies
Many strategies exist that provide an approximate solution to the bandit problem; they can be grouped into the three broad categories detailed below.

Semi-uniform strategies
Semi-uniform strategies were the earliest (and simplest) strategies discovered to approximately solve the bandit problem. These strategies all share a greedy behaviour in which the best lever (based on previous observations) is always pulled except when a (uniformly) random action is taken.

 * Epsilon-greedy strategy: The best lever is selected for a proportion $$1 - \epsilon$$ of the trials, and another lever is randomly selected (with uniform probability) for a proportion $$\epsilon$$. A typical parameter value might be $$\epsilon = 0.1$$, but this can vary widely depending on circumstances and predilections.

 * Epsilon-first strategy: A pure exploration phase is followed by a pure exploitation phase. For $$N$$ trials in total, the exploration phase occupies $$\epsilon N$$ trials and the exploitation phase $$(1 - \epsilon) N$$ trials. During the exploration phase a lever is randomly selected (with uniform probability); during the exploitation phase the best lever is always selected.

 * Epsilon-decreasing strategy: Similar to the epsilon-greedy strategy, except that the value of $$\epsilon$$ decreases as the experiment progresses, resulting in highly explorative behaviour at the start and highly exploitative behaviour at the finish.
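As an illustration, here is a minimal sketch of an epsilon-greedy agent on a simulated Bernoulli bandit; the function name, reward model, and parameter values are assumptions made for the example:

```python
import random

def epsilon_greedy(arm_means, n_rounds, epsilon=0.1, seed=0):
    """Epsilon-greedy play on a simulated Bernoulli bandit.

    arm_means: true success probabilities of the levers (unknown to
    the agent; used here only to simulate rewards).
    Returns (total collected reward, pulls per lever).
    """
    rng = random.Random(seed)
    k = len(arm_means)
    counts = [0] * k       # number of pulls per lever
    estimates = [0.0] * k  # empirical mean reward per lever
    total = 0
    for _ in range(n_rounds):
        if rng.random() < epsilon:
            arm = rng.randrange(k)  # explore: uniform random lever
        else:
            arm = max(range(k), key=lambda a: estimates[a])  # exploit
        reward = 1 if rng.random() < arm_means[arm] else 0  # simulate pull
        counts[arm] += 1
        # Incremental update of the empirical mean for this lever
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        total += reward
    return total, counts
```

Replacing the per-round coin flip with a fixed initial exploration phase yields the epsilon-first strategy, and decaying epsilon with the round index yields the epsilon-decreasing strategy.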

Probability matching strategies
Probability matching strategies reflect the idea that the number of pulls for a given lever should match its actual probability of being the optimal lever.
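Thompson sampling is the best-known probability matching strategy: each lever keeps a posterior distribution over its mean reward, one sample is drawn from every posterior each round, and the lever with the highest sample is pulled, so each lever is pulled roughly in proportion to the posterior probability that it is optimal. A minimal Beta-Bernoulli sketch (the reward model and function name are illustrative):

```python
import random

def thompson_bernoulli(arm_means, n_rounds, seed=0):
    """Beta-Bernoulli Thompson sampling on a simulated bandit.

    Each lever keeps a Beta(successes + 1, failures + 1) posterior
    over its mean reward, starting from a uniform Beta(1, 1) prior.
    Returns (total collected reward, pulls per lever).
    """
    rng = random.Random(seed)
    k = len(arm_means)
    succ = [0] * k
    fail = [0] * k
    total = 0
    for _ in range(n_rounds):
        # Sample one plausible mean from each lever's posterior
        samples = [rng.betavariate(succ[a] + 1, fail[a] + 1) for a in range(k)]
        arm = max(range(k), key=lambda a: samples[a])
        reward = 1 if rng.random() < arm_means[arm] else 0  # simulate pull
        if reward:
            succ[arm] += 1
        else:
            fail[arm] += 1
        total += reward
    return total, [succ[a] + fail[a] for a in range(k)]
```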

Pricing strategies
Pricing strategies establish a price for each lever. The lever with the highest price is always pulled.
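One common concrete choice of price is an upper confidence bound on a lever's mean reward, as in the UCB1 index of Auer, Cesa-Bianchi and Fischer: the empirical mean plus an exploration bonus $$\sqrt{2 \ln t / n_k}$$ that shrinks as lever k accumulates pulls. A sketch on a simulated Bernoulli bandit (the function name and simulation setup are illustrative):

```python
import math
import random

def ucb1(arm_means, n_rounds, seed=0):
    """UCB1: price each lever by mean + sqrt(2 ln t / pulls), pull the max.

    arm_means: true success probabilities, used only to simulate rewards.
    Returns (total collected reward, pulls per lever).
    """
    rng = random.Random(seed)
    k = len(arm_means)
    counts = [0] * k
    means = [0.0] * k
    total = 0
    for t in range(1, n_rounds + 1):
        if t <= k:
            arm = t - 1  # pull each lever once to initialize its price
        else:
            arm = max(range(k),
                      key=lambda a: means[a]
                      + math.sqrt(2 * math.log(t) / counts[a]))
        reward = 1 if rng.random() < arm_means[arm] else 0  # simulate pull
        counts[arm] += 1
        means[arm] += (reward - means[arm]) / counts[arm]
        total += reward
    return total, counts
```

Because the bonus grows for neglected levers and shrinks for well-sampled ones, the price mechanism itself handles the exploitation vs. exploration tradeoff without any explicit randomization.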