Predictive complexity


Predictive complexity is a generalization of Kolmogorov complexity to a wide class of on-line prediction games (including both prediction and decision-making scenarios). Some of the most popular on-line prediction games are:

The log-loss game.
It formalizes the processes of probability forecasting and lossless compression. Predictive complexity for this game is a variant of Kolmogorov complexity (the negative logarithm of the a priori semimeasure).
The square-loss game.
This game formalizes statistical regression performed in an on-line fashion.
Cover's game.
It formalizes the process of investing in a stock market when only long positions are allowed.
The long-short game.
A version of the previous game in which short positions are also allowed.
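Roughly, the loss functions usually associated with these games are as follows (a sketch in generic notation, with prediction γ and outcome ω; the exact conventions vary from paper to paper):

  \lambda_{\log}(\gamma,\omega)           = -\omega\log\gamma - (1-\omega)\log(1-\gamma), \qquad \gamma\in[0,1],\ \omega\in\{0,1\},
  \lambda_{\mathrm{sq}}(\gamma,\omega)    = (\gamma-\omega)^2,                            \qquad \gamma\in[0,1],\ \omega\in\{0,1\},
  \lambda_{\mathrm{Cover}}(\gamma,\omega) = -\log\bigl((1-\gamma)+\gamma\omega\bigr),     \qquad \gamma\in[0,1],\ \omega\ge 0.

In the simplest (one risky asset plus cash) version of Cover's game written above, γ is the fraction of capital invested in the asset and ω is the asset's price relative over the trial, so the cumulative loss is minus the logarithm of the final capital; the general game uses portfolio vectors over several assets, and the long-short game lets γ range over an interval extending beyond [0,1]. For the log-loss game the cumulative loss of a strategy on a string is the code length it assigns to that string, which is the source of the connection with lossless compression and Kolmogorov complexity mentioned above.
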
The area of potential applications of predictive complexity is very wide (pattern recognition, regression, financial portfolio management, etc.), but the research carried out so far has been mainly directed towards solving fundamental theoretical problems. I find the results listed below (mainly by Yuri Kalnishkan and Michael Vyugin) most interesting.

Downloads

Papers (zipped postscript):

  1. Probability theory for the Brier game by Volodya Vovk. This paper and the next one introduce predictive complexity. Published in the ALT 1997 proceedings and Theoretical Computer Science 261, 57–79, 2001.
  2. Universal portfolio selection by Vovk and Chris Watkins. This paper and the previous one introduce predictive complexity. Published in the COLT 1998 proceedings, pp. 12–13.
  3. General linear relations between different types of predictive complexity by Yura Kalnishkan. This paper gives a general method for establishing tight linear inequalities between predictive complexities. Published in the ALT 1999 proceedings and Theoretical Computer Science 271, 181–200, 2002.
  4. Loss functions, complexities and the Legendre transformation by Kalnishkan, Vovk, and Misha Vyugin. This paper shows how the loss function can be extracted (up to the parameterization) from the predictive complexity. It appears that convex analysis (via, e.g., the notion of generalized entropy) plays a fundamental role in the theory of predictive complexity. Published in the ALT 2001 proceedings and Theoretical Computer Science 313, 195–207, 2004.
  5. Mixability and the existence of weak complexities by Kalnishkan and Vyugin. By definition, predictive complexity is the smallest, to within an additive constant O(1), upper semicomputable superloss process (a rough formal sketch of these notions is given after this list). For some important games predictive complexity in this sense does not exist. This paper shows that for a wide class of games there is "weak predictive complexity", where O(1) is replaced by O(sqrt(n)), n being the length of the data sequence. Published in the COLT 2002 proceedings.
  6. On the absence of predictive complexity for some games by Kalnishkan and Vyugin. This paper proves the absence of predictive complexity (defined, as usual, to within an additive constant) in the case where the curvature of the loss function vanishes at an interior point. This result is less general than the criterion in item 8 below, but it gives an estimate of the accuracy to which predictive complexity exists. Published in the ALT 2002 proceedings.
  7. How many strings are easy to predict? by Kalnishkan, Vovk, and Vyugin. The result of this paper generalizes the incompressibility property of Kolmogorov complexity. Published in the COLT 2003 proceedings and Information and Computation 201, 55–71, 2005.
  8. A criterion for the existence of predictive complexity for binary games by Kalnishkan, Vovk, and Vyugin.
    It is well known that there exists a universal (i.e., optimal to within an additive constant if allowed to work infinitely long) algorithm for lossless data compression (Kolmogorov, Levin). The game of lossless compression is an example of an on-line prediction game; for some other on-line prediction games (such as the simple prediction game) a universal algorithm is known not to exist. In this paper we give an analytic characterisation of those binary on-line prediction games for which a universal prediction algorithm exists. Published in the ALT 2004 proceedings.
  9. Generalized entropy and asymptotic complexities of languages by Kalnishkan, Vovk, and Vyugin.
    In this paper the concept of asymptotic complexity of languages is introduced. This concept formalises the notion of learnability in a particular environment and generalises Lutz and Fortnow's concepts of predictability and dimension. Then asymptotic complexities in different prediction environments are compared by describing the set of all pairs of asymptotic complexities with respect to different environments. A geometric characterisation in terms of generalised entropies is obtained and thus the results of Lutz and Fortnow are generalised. The conference version of this paper is published in the COLT 2007 proceedings.
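
Roughly, the definitions behind items 4 and 5 read as follows in the binary case (a sketch in generic notation; see the papers for the exact statements). A function L mapping binary strings to the extended non-negative reals, equal to 0 on the empty string, is a superloss process if for every string x there is a prediction γ such that

  L(x\omega) \ge L(x) + \lambda(\gamma,\omega) \qquad \text{for both outcomes } \omega\in\{0,1\}.

Predictive complexity is an upper semicomputable superloss process K that is smallest to within an additive constant, i.e., K(x) \le L(x) + O(1) for every upper semicomputable superloss process L; weak predictive complexity replaces O(1) by O(\sqrt{|x|}), |x| being the length of x. The generalized entropy mentioned in item 4 is the optimal expected loss

  H(p) = \inf_{\gamma}\bigl(p\,\lambda(\gamma,1) + (1-p)\,\lambda(\gamma,0)\bigr), \qquad p\in[0,1].
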
Important note: the copyright for some of these papers belongs to the publishers. All papers may be downloaded for personal or research purposes only.

Talks:

This page is maintained by Vladimir Vovk.   Last modified on 26 March 2008