Q-learning

Q-learning is a reinforcement learning technique that works by learning an action-value function giving the expected utility of taking a given action in a given state and following a fixed policy thereafter. A strength of Q-learning is that it can compare the expected utility of the available actions without requiring a model of the environment. A recent variant called delayed Q-learning has shown substantial improvements, bringing PAC (probably approximately correct) guarantees to learning in Markov decision processes.

Algorithm
The core of the algorithm is a simple value iteration update. For each state $$s$$ in the state set $$S$$ and each action $$a$$ in the action set $$A$$, the expected discounted reward is updated with the following expression:


 * $$Q(s_t,a_t) \leftarrow Q(s_t,a_t) + \alpha_t(s_t,a_t) \left[r_t + \gamma \max_{a}Q(s_{t+1}, a) - Q(s_t,a_t)\right]$$

where $$r_t$$ is the reward observed at time $$t$$, $$\alpha_t(s, a)$$ is the learning rate, with $$0 \le \alpha_t(s, a) \le 1$$, and $$\gamma$$ is the discount factor, with $$0 \le \gamma < 1$$.
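
The update rule above can be sketched in code. The following is a minimal tabular example on a hypothetical toy environment (a four-state chain where moving right from state 2 reaches a terminal state and yields reward 1); the environment, learning rate, discount factor, and episode count are illustrative assumptions, not part of the algorithm itself.

```python
import random

def q_update(Q, s, a, r, s_next, actions, alpha, gamma):
    # Q(s,a) <- Q(s,a) + alpha * [r + gamma * max_a' Q(s',a') - Q(s,a)]
    best_next = max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

# Toy chain MDP (an illustrative assumption): states 0..3, actions move
# left (-1) or right (+1), reward 1.0 on reaching the terminal state 3.
STATES = range(4)
ACTIONS = (-1, +1)
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

random.seed(0)
for _ in range(500):  # episodes
    s = 0
    while s != 3:
        a = random.choice(ACTIONS)        # purely exploratory behavior policy
        s_next = min(max(s + a, 0), 3)    # clamp movement to the chain
        r = 1.0 if s_next == 3 else 0.0
        q_update(Q, s, a, r, s_next, ACTIONS, alpha=0.5, gamma=0.9)
        s = s_next

# After training, Q should prefer moving right in every non-terminal state.
```

Note that the behavior policy here is uniformly random: because Q-learning bootstraps from $$\max_{a}Q(s_{t+1}, a)$$ rather than from the action actually taken next, it still learns the values of the greedy policy (it is off-policy).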