Marcus Hutter

Marcus Hutter (born 1967) was raised and educated in Munich, where he studied physics and computer science. In 2000 he joined Jürgen Schmidhuber's group at the Swiss AI lab IDSIA, where he developed the first mathematical theory of optimal Universal Artificial Intelligence, based on Kolmogorov complexity and Ray Solomonoff's theory of universal inductive inference. In 2006 he also accepted a professorship at the Australian National University in Canberra.

The central idea of Hutter's type of universal AI is this: Suppose you want to maximize your future expected reward in some unknown dynamic environment, up to some future horizon. This is the general reinforcement learning problem. Solomonoff/Hutter's only assumption is that the reactions of the environment in response to your actions follow some unknown but computable probability distribution. At any time, given the limited observation sequence so far, what is the Bayes-optimal way of selecting the next action? Hutter proved that the answer is: use Solomonoff's universal prior to predict the future, and execute the first action of the action sequence that will maximize the predicted reward up to the horizon.
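
Since Solomonoff's universal prior is incomputable, this policy cannot be implemented directly, but its expectimax structure can be illustrated on a toy model class. In the sketch below, the environments, their "complexities", and all names are invented for illustration only; a weight of $$2^{-\text{complexity}}$$ stands in for the universal prior over all computable environments, and the agent picks the first action of the plan with highest predicted reward up to the horizon.

```python
import itertools

# Toy illustration (NOT Hutter's actual AIXI, which mixes over *all*
# computable environments via the incomputable Solomonoff prior): a tiny
# hand-picked hypothesis class of deterministic reward-giving environments,
# each weighted by 2^(-complexity) as a stand-in for 2^(-program length).
# Each environment maps the action history (a tuple of 0/1 actions) to a
# reward in {0, 1}.  Names and complexity values are purely illustrative.
ENVS = {
    "reward_action_1": (1, lambda hist: 1 if hist[-1] == 1 else 0),
    "reward_action_0": (2, lambda hist: 1 if hist[-1] == 0 else 0),
    "alternate":       (3, lambda hist: 1 if hist[-1] == len(hist) % 2 else 0),
}

def posterior(history):
    """Bayesian posterior over ENVS given an (action, reward) history."""
    weights = {}
    for name, (k, env) in ENVS.items():
        w = 2.0 ** (-k)  # prior ~ 2^(-complexity)
        for t in range(len(history)):
            actions = tuple(a for a, _ in history[: t + 1])
            if env(actions) != history[t][1]:
                w = 0.0  # environment contradicts an observed reward
                break
        weights[name] = w
    total = sum(weights.values())
    return {n: w / total for n, w in weights.items()} if total else weights

def expected_return(history, actions_so_far, plan):
    """Expected total reward of an action plan under the posterior mixture."""
    post = posterior(history)
    total = 0.0
    for name, p in post.items():
        if p == 0.0:
            continue
        env = ENVS[name][1]
        acts = list(actions_so_far)
        r = 0
        for a in plan:
            acts.append(a)
            r += env(tuple(acts))
        total += p * r
    return total

def best_action(history, horizon=3):
    """Expectimax: first action of the plan maximizing predicted reward."""
    actions_so_far = tuple(a for a, _ in history)
    plans = itertools.product([0, 1], repeat=horizon)
    best = max(plans, key=lambda p: expected_return(history, actions_so_far, p))
    return best[0]

# After seeing that action 1 earned reward 1, the agent keeps choosing 1.
print(best_action([(1, 1)]))  # -> 1
```

Replacing the three-element hypothesis class with an enumeration of all programs (and the horizon-bounded expectimax with its limit) recovers the incomputable ideal agent.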

This is mainly a theoretical result. To overcome the problem that Solomonoff's prior is incomputable, in 2002 he also published a remarkable asymptotically fastest algorithm for all well-defined problems. Given some formal description of a problem class, the algorithm systematically generates all proofs in a sufficiently powerful axiomatic system that allows for proving time bounds of solution-computing programs. Simultaneously, whenever a proof has been found showing that a particular program has a better time bound than the previous best, a clever resource allocation scheme assigns most of the remaining search time to this program. Hutter showed that his method is essentially as fast as the unknown fastest program for solving problems from the given class, save for a constant factor and an additive constant independent of the problem instance. For example, if the problem size is $$n$$, and there exists an initially unknown program that solves any problem in the class within $$n^7$$ computational steps, then Hutter's method will solve it within $$5n^7 + O(1)$$ steps. It should be noted, however, that the additive constant hidden in the $$O$$ notation may be large enough to render the algorithm practically infeasible despite its amazing theoretical properties.
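
Hutter's actual algorithm interleaves three processes: enumerating proofs, extracting proven time bounds, and running the provably fastest program found so far. The sketch below illustrates only the resource-allocation half, in the style of Levin's related universal search: candidate $$i$$ receives roughly a $$2^{-i}$$ share of the computation, so the total slowdown relative to the best candidate is a constant factor. All candidate "solvers" and names here are invented for illustration.

```python
# Toy resource-allocation sketch in the style of Levin search (Hutter's
# method additionally enumerates proofs of correctness and time bounds,
# which is omitted here).  Each candidate is a generator; advancing it by
# one step models one unit of computation.  In phase k, candidate i is
# granted 2^(k-i) steps, so candidate i runs with a 2^(-i) time share.

def universal_search(candidates, check, max_phase=20):
    """Interleave candidate solvers until one yields a verified answer."""
    gens = [c() for c in candidates]
    done = [False] * len(gens)
    for phase in range(max_phase):
        for i, g in enumerate(gens):
            if done[i]:
                continue
            for _ in range(2 ** max(phase - i, 0)):
                try:
                    result = next(g)
                except StopIteration:
                    done[i] = True
                    break
                if result is not None and check(result):
                    return result
    return None

# Hypothetical candidates for finding a nontrivial factor of 91:
# trial division counting up, and counting down.  Each yields None
# while still working and a candidate factor when it finds one.
def up():
    n, d = 91, 2
    while True:
        yield d if n % d == 0 else None
        d += 1

def down():
    n, d = 91, 90
    while d > 1:
        yield d if n % d == 0 else None
        d -= 1

print(universal_search([up, down], lambda d: 91 % d == 0))  # -> 7
```

Because the upward search finds the factor 7 quickly, the scheduler returns it after only a few phases; a slow candidate cannot delay the answer by more than a constant factor, which is the essence of the optimality argument.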

Hutter Prize for Lossless Compression of Human Knowledge
On August 6, 2006, Professor Hutter announced the Hutter Prize for Lossless Compression of Human Knowledge, with an initial purse of 50,000 euros, intended to encourage the advancement of artificial intelligence through the exploitation of Hutter's theory of optimal universal artificial intelligence.

Partial bibliography

 * Universal Artificial Intelligence: Sequential Decisions Based On Algorithmic Probability. ISBN 3-540-22139-5

 * On Generalized Computable Universal Priors and their Convergence. Theoretical Computer Science, 2005.

 * Optimality of Universal Bayesian Sequence Prediction for General Loss and Alphabet. Journal of Machine Learning Research, 4:971-1000, 2003.

 * The Fastest and Shortest Algorithm for All Well-Defined Problems. International Journal of Foundations of Computer Science, 13:3, 431-443, 2002.