Higher Coordination with Less Control

  • [PDF] K. Zahedi, N. Ay, and R. Der, “Higher coordination with less control — a result of information maximization in the sensori-motor loop,” Adaptive Behavior, vol. 18, no. 3–4, pp. 338–355, 2010.
    [Bibtex]
    @article{Zahedi2010aHigher,
    Author = {Zahedi, Keyan and Ay, Nihat and Der, Ralf},
    Eprint = {http://adb.sagepub.com/content/18/3-4/338.full.pdf+html},
    PDF = {http://adb.sagepub.com/content/18/3-4/338.full.pdf+html},
    URL = {http://adb.sagepub.com/content/18/3-4/338.full.pdf+html},
    Journal = {Adaptive Behavior},
    Number = {3--4},
    Pages = {338--355},
    Title = {Higher coordination with less control -- A result of information maximization in the sensori-motor loop},
    Volume = {18},
    Year = {2010}}

Abstract

This work presents a novel learning method in the context of embodied artificial intelligence and self-organization, which has as few assumptions and restrictions as possible about the world and the underlying model. The learning rule is derived from the principle of maximizing the predictive information in the sensorimotor loop. It is evaluated on robot chains of varying length with individually controlled, non-communicating segments. The comparison of the results shows that maximizing the predictive information per wheel leads to a higher coordinated behavior of the physically connected robots compared to a maximization per robot. Another focus of this paper is the analysis of the effect of the robot chain length on the overall behavior of the robots. It will be shown that longer chains with less capable controllers outperform those of shorter length and more complex controllers. The reason is found and discussed in the information-geometric interpretation of the learning process.

http://journals.sagepub.com/doi/10.1177/1059712310375314

Primary Question asked in this paper

How can a system with no prior knowledge about itself or the environment gather information so that it is able to perform a task? This is the underlying question of this work.

The paper in a nutshell

Predictive information is the mutual information between the past and the future of a random variable. We applied it to the sensor values S of an autonomous agent. In this case, it can be written in the following form

     \begin{align*} I(\stackrel{\leftarrow}{S};\stackrel{\rightarrow}{S}) = H(\stackrel{\rightarrow}{S}) - H(\stackrel{\rightarrow}{S}|\stackrel{\leftarrow}{S}), \end{align*}

where \stackrel{\leftarrow}{S} is the past and \stackrel{\rightarrow}{S} is the future of the sensor values. The following image depicts the predictive information (PI):




We use the one-step approximation, which is given by

     \begin{align*} I(S_t;S_{t+1}) = H(S_{t+1}) - H(S_{t+1}|S_t). \end{align*}
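As a concrete illustration, the one-step PI can be estimated from a discretised sensor sequence by counting joint occurrences of consecutive values. The following sketch is not from the paper (function name and binning convention are my own); it computes I(S_t;S_{t+1}) in bits from empirical frequencies:

```python
import numpy as np

def one_step_pi(seq, n_bins):
    """Estimate I(S_t; S_{t+1}) = H(S_{t+1}) - H(S_{t+1}|S_t) in bits
    from a discretised sensor sequence (integer values in 0..n_bins-1)."""
    # joint counts of consecutive pairs (s_t, s_{t+1})
    joint = np.zeros((n_bins, n_bins))
    for s, s_next in zip(seq[:-1], seq[1:]):
        joint[s, s_next] += 1.0
    joint /= joint.sum()
    p_s = joint.sum(axis=1)      # marginal p(s)
    p_next = joint.sum(axis=0)   # marginal p(s')
    pi = 0.0
    for s in range(n_bins):
        for s_next in range(n_bins):
            if joint[s, s_next] > 0:
                pi += joint[s, s_next] * np.log2(
                    joint[s, s_next] / (p_s[s] * p_next[s_next]))
    return pi
```

A constant sequence yields zero bits, while a deterministic alternation between two values approaches one bit.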

The predictive information can be written in the following form

     \begin{align*} I(S_t;S_{t+1}) = \sum_{s,s'\in\mathcal{S}} p(s',s) \log_2\frac{p(s'|s)}{p(s')}. \end{align*}

In this form, maximising predictive information would require information that is not intrinsically available to the agent. Hence, we rewrite it in the following form:

     \begin{align*} I(S_t;S_{t+1}) & = \sum_{s,s'\in\mathcal{S}} \sum_{a\in\mathcal{A}} p(s',s,a) \log_2\frac{\sum_{a'\in\mathcal{A}}p(s',s,a')}{p(s)\sum_{s''\in\mathcal{S},a'\in\mathcal{A}}p(s',s'',a')}\\ & = \sum_{s',s,a}p(s'|s,a)p(a|s)p(s)\log_2\frac{\sum_{a'}p(s'|s,a')p(a'|s)p(s)}{p(s)\sum_{s'',a'} p(s'|s'',a')p(a'|s'')p(s'')}, \end{align*}

where p(s'|s,a) is the intrinsic world model, p(a|s) is the policy, and p(s) is the input distribution.
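Given tabular estimates of these three quantities, the PI can be computed from intrinsically available terms alone. A minimal NumPy sketch under that assumption (array names and shapes — p_s of shape (|S|,), policy of shape (|S|,|A|), world of shape (|S|,|A|,|S|) — are my own convention):

```python
import numpy as np

def intrinsic_pi(p_s, policy, world):
    """PI from quantities the agent can estimate itself:
    p_s[s] = p(s), policy[s, a] = p(a|s), world[s, a, s1] = p(s'|s,a)."""
    # p(s'|s) = sum_a p(s'|s,a) p(a|s)
    p_s1_given_s = np.einsum('sat,sa->st', world, policy)
    # joint p(s, s') and marginal p(s')
    joint = p_s1_given_s * p_s[:, None]
    p_s1 = joint.sum(axis=0)
    mask = joint > 0  # skip zero-probability terms (0 log 0 := 0)
    return np.sum(joint[mask] * np.log2(
        joint[mask] / (p_s[:, None] * p_s1[None, :])[mask]))
```

With a world model that copies the current state to the next one regardless of the action, the PI equals the entropy of the input distribution; with a uniformly random world model it is zero.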

Applying Amari’s natural gradient method, we obtain a policy gradient that maximises the predictive information (Eq. 3 below). The required distributions are estimated online as follows.

The input distribution is updated through sampling:

(1)    \begin{align*} p^{(0)}(s) & = \frac{1}{|\mathcal{S}|}\\ p^{(n+1)}(s) & = \left\{\begin{array}{cl}   \displaystyle\frac{n}{n+1}p^{(n)}(s)+\frac{1}{n+1} & \text{if } S_{n+1} = s\\[5ex]   \displaystyle\frac{n}{n+1}p^{(n)}(s) & \text{if } S_{n+1} \not= s   \end{array}\right. \end{align*}

just as the world model is:

(2)    \begin{align*} p^{(0)}(s'|s,a) & = \frac{1}{|\mathcal{S}|}\\ p^{\left(n_{a}^s\right)}(s'|s,{a}) & :=  \left\{\begin{array}{ll}   \displaystyle\frac{n_{a}^s}{n_{a}^s+1}p^{(n_{a}^s-1)}(s'|s,{a})+\frac{1}{n_{a}^s+1}                    & {\text{if } S_{n+1} = s',\, S_n=s,\,A_n={a}}\\[3ex]   \displaystyle\frac{n_{a}^s}{n_{a}^s+1}p^{(n_{a}^s-1)}(s'|s,a)                    & {\text{if } S_{n+1} \not= s',\, S_n=s,\,A_n={a}}\\[3ex]   p^{(n_{a}^s-1)}(s'|s,{a})   & {\text{if } S_n\not=s \text{ or } A_n\not={a}}   \end{array}\right. \end{align*}

where n_{a}^s denotes the number of times the state-action pair (s,a) has been observed so far.
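Equations (1) and (2) are running averages that start from the uniform distribution. A minimal sketch of both updates, assuming discrete sensor and action values (class and attribute names are my own, not from the paper):

```python
import numpy as np

class FrequencyEstimator:
    """Online updates for p(s) and p(s'|s,a), following Eqs. (1) and (2):
    shrink the old estimate by n/(n+1) and add 1/(n+1) mass at the
    observation, starting from the uniform distribution."""

    def __init__(self, n_s, n_a):
        self.p_s = np.full(n_s, 1.0 / n_s)                # p^{(0)}(s)
        self.world = np.full((n_s, n_a, n_s), 1.0 / n_s)  # p^{(0)}(s'|s,a)
        self.n = 0                                        # samples for p(s)
        self.n_sa = np.zeros((n_s, n_a), dtype=int)       # visits of (s,a)

    def update(self, s, a, s_next):
        # Eq. (1): update p(s) with the newly observed state
        self.p_s *= self.n / (self.n + 1)
        self.p_s[s] += 1.0 / (self.n + 1)
        self.n += 1
        # Eq. (2): same scheme, applied only to the visited (s,a) row
        k = self.n_sa[s, a]
        self.world[s, a] *= k / (k + 1)
        self.world[s, a, s_next] += 1.0 / (k + 1)
        self.n_sa[s, a] = k + 1
```

Both estimates remain normalised after every update, and rows of the world model that were never visited keep their uniform prior.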

The policy is updated in the following way

(3)    \begin{align*} \pi^{(0)}({a}|s) & :=  \frac{1}{|\mathcal{A}|} \nonumber\\   \pi^{(n)}({a}|s) & = \pi^{(n-1)}({a}|s) +   \frac{1}{n+1} \pi^{(n-1)}({a}|s)   \left(F(s,a) - \sum_{a'} \pi^{(n-1)}(a'|s) F(s,a')\right)\\   F(s,a) & := p^{(n)}(s)\sum_{s'}p^{(n)}(s'|s,{a})   \log_2\frac{\sum_{{a'}}\pi^{(n-1)}({a'}|s)p^{(n)}(s'|s,{a'})}     {\sum_{s''}p^{(n)}(s'') \sum_{a'} \pi^{(n-1)}(a' | s'') \, p^{(n)}(s' | s'',a')} \end{align*}
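Under the same tabular conventions as above, one step of this policy update can be sketched as follows (the eps term, my own addition, only guards against taking the log of zero; otherwise the code follows Eq. (3) term by term):

```python
import numpy as np

def policy_step(policy, p_s, world, n, eps=1e-12):
    """One step of the policy update of Eq. (3).
    policy[s, a] = pi(a|s), p_s[s] = p(s), world[s, a, s1] = p(s'|s,a)."""
    # p(s'|s) under the current policy, and the marginal p(s')
    p_s1_given_s = np.einsum('sat,sa->st', world, policy)
    p_s1 = p_s @ p_s1_given_s
    # F(s,a) = p(s) sum_{s'} p(s'|s,a) log2( p(s'|s) / p(s') )
    log_ratio = np.log2((p_s1_given_s + eps) / (p_s1[None, :] + eps))
    F = p_s[:, None] * np.einsum('sat,st->sa', world, log_ratio)
    # subtract the policy-weighted baseline, then take a 1/(n+1) step
    baseline = np.sum(policy * F, axis=1, keepdims=True)
    new_policy = policy + (1.0 / (n + 1)) * policy * (F - baseline)
    # renormalise against numerical drift
    return new_policy / new_policy.sum(axis=1, keepdims=True)
```

Because the baseline term \sum_a \pi(a|s)F(s,a) is subtracted, each row of the updated policy still sums to one; the final renormalisation only guards against numerical drift. The update increases the probability of actions whose predicted next-state distribution contributes more to the PI than the current policy average.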

Experiments

We conducted our experiments with the YARS simulator, which is freely available [here]. Installation instructions are found [here]. Examples are found [here].

We simulated a two-wheeled, differential-drive robot that was loosely inspired by the Khepera robot, which we also passively couple into a chain of robots, as the following image shows:




Shown below are robot chains of length one, three, and five. For each of these robot chains, we evaluated two different control strategies, which we refer to as combined and split control:

   

The results are shown below. For a discussion, please read the paper.

Videos

For details on the equations, please read the publication (see above).

Single robot with combined control

Single robot with split control

Three robots with combined control

Three robots with split control

Five robots with combined control

Five robots with split control
