Linear combination of one-step predictive information with an external reward

Abstract

One of the main challenges in the field of embodied artificial intelligence is the open-ended autonomous learning of complex behaviors. Our approach is to use task-independent, information-driven intrinsic motivations to support task-dependent learning. The work presented here is a preliminary step in which we investigate the predictive information (the mutual information between the past and the future of the sensor stream) as an intrinsic drive, ideally supporting any kind of task acquisition. Previous experiments have shown that the predictive information (PI) is a good candidate to support autonomous, open-ended learning of complex behaviors, because a maximization of the PI corresponds to an exploration of morphology- and environment-dependent behavioral regularities. The idea is that these regularities can then be exploited in order to solve any given task. Three different experiments are presented, and their results lead to the conclusion that the linear combination of the one-step PI with an external reward function is not generally recommended in an episodic policy gradient setting. Only for hard tasks can a considerable speed-up be achieved, at the cost of a loss in asymptotic performance.

http://journal.frontiersin.org/article/10.3389/fpsyg.2013.00801/full
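As a rough illustration (not code from the paper), the sketch below estimates the one-step predictive information of a one-dimensional sensor stream with a simple binning (plug-in) estimator and adds it linearly to an external episode return. The function names, the number of bins, and the weight parameter are illustrative assumptions, not taken from the publication.

import numpy as np

def one_step_pi(sensor_stream, num_bins=8):
    """Estimate the one-step predictive information I(S_t; S_{t+1}) of a
    1-D sensor stream by discretizing it into bins and computing the
    mutual information of the empirical joint distribution (in nats)."""
    s = np.asarray(sensor_stream, dtype=float)
    # Discretize the sensor values into equally spaced bins.
    edges = np.linspace(s.min(), s.max(), num_bins + 1)
    symbols = np.clip(np.digitize(s, edges[1:-1]), 0, num_bins - 1)

    # Empirical joint distribution over consecutive time steps (S_t, S_{t+1}).
    joint = np.zeros((num_bins, num_bins))
    for a, b in zip(symbols[:-1], symbols[1:]):
        joint[a, b] += 1.0
    joint /= joint.sum()

    p_t = joint.sum(axis=1)    # marginal of S_t
    p_t1 = joint.sum(axis=0)   # marginal of S_{t+1}
    mask = joint > 0
    # I(S_t; S_{t+1}) = sum p(a,b) * log( p(a,b) / (p(a) * p(b)) )
    return float(np.sum(joint[mask] *
                        np.log(joint[mask] / (p_t[:, None] * p_t1[None, :])[mask])))

def combined_episode_return(external_return, sensor_stream, weight=0.1):
    """Linear combination of the external episode return with the
    one-step predictive information of the episode's sensor stream."""
    return external_return + weight * one_step_pi(sensor_stream)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy episode: a noisy sine wave as a stand-in for a sensor reading.
    stream = np.sin(np.linspace(0, 8 * np.pi, 500)) + 0.1 * rng.standard_normal(500)
    print("one-step PI (nats):", one_step_pi(stream))
    print("combined return   :", combined_episode_return(2.5, stream))

In the setting studied in the paper, such a combined value would serve as the return signal for an episodic policy gradient learner; here it is only computed for a toy sensor stream to make the linear combination concrete.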

Reference

  • [PDF] K. Zahedi, G. Martius, and N. Ay, “Linear combination of one-step predictive information with an external reward in an episodic policy gradient setting: a critical analysis,” Frontiers in Psychology, vol. 4, iss. 801, 2013.
    [Bibtex]
    @article{Zahedi2013aLinear,
    Author = {Zahedi, Keyan and Martius, Georg and Ay, Nihat},
    Issn = {1664-1078},
    Journal = {Frontiers in Psychology},
    Number = {801},
    Pdf = {http://www.frontiersin.org/cognitive_science/10.3389/fpsyg.2013.00801/abstract},
    Title = {Linear combination of one-step predictive information with an external reward in an episodic policy gradient setting: a critical analysis},
    Volume = {4},
    Year = {2013}}

In a nutshell

To be continued …
