Abstract
In many robotic applications, softness leads to improved performance, robustness, and safety, while lowering manufacturing cost, increasing versatility, and simplifying control. The advantages of soft robots derive from the fact that their behavior partially results from interactions of the robot’s morphology with its environment, which is commonly referred to as morphological computation (MC). But not all MC is good in the sense that it supports the desired behavior. One of the challenges in soft robotics is to build systems that exploit the morphology (good MC) while avoiding body-environment interactions that are harmful with respect to the desired functionality (bad MC). Up to this point, constructing a competent soft robot design requires experience and intuition from the designer. This work is the first to propose a systematic approach that can be used in an automated design process. It is based on calculating a low-dimensional representation of an observed behavior, which can be used to distinguish between good and bad MC. We evaluate our method based on a set of grasping experiments, with variations in hand design, controller, and objects. Finally, we show that the information contained in the low-dimensional representation is comprehensive in the sense that it can be used to guide an automated design process.
Reference
- K. Ghazi-Zahedi, R. Deimel, G. Montúfar, V. Wall, and O. Brock, “Morphological computation: the good, the bad, and the ugly,” in IROS 2017, 2017.
[Bibtex]
@inproceedings{Ghazi-Zahedi2017aMorphological,
  Author = {Ghazi-Zahedi, Keyan and Deimel, Raphael and Mont{\'u}far, Guido and Wall, Vincent and Brock, Oliver},
  Booktitle = {IROS 2017},
  PDF = {https://ieeexplore.ieee.org/document/8202194/},
  Title = {Morphological Computation: The Good, the Bad, and the Ugly},
  Year = {2017}}
In a nutshell
Soft robotics is a successful branch of robotics. In many applications, softness leads to improved performance, robustness, and safety, while lowering manufacturing cost, increasing versatility, and simplifying control. In spite of these advantages, there currently is no systematic method for exploiting the benefits of softness in robot design. At the moment, human designers rely on experience and intuition to design competent soft robots.
The advantages of soft robots derive from the way their behavior is generated. As with traditional robots, the behavior of soft robots is affected by the control commands a robot receives. However, this control-based behavior is modified through compliant interactions of the robot with its environment. These compliant interactions adapt the behavior to a particular context, without the need for explicit control. It is therefore important to note that the behavior of soft robots is not exclusively the result of control; it partially results from interactions of the robot’s morphology with its environment. This latter part of the robot’s behavior, stemming from interactions, is referred to as morphological computation (MC).
But not all MC is good. The interactions between a soft robot and the environment can also be harmful, for example, if they undo what control accomplished or simply cause failure. We call this bad or ugly MC. Of course, MC can be good if the control-based behavior is modified in a favorable way (these informal definitions of good, bad, and ugly MC will be stated more precisely in Sec. II). To illustrate this with an example from soft manipulation: If MC leads to the adaptation of a soft hand to the shape of an object that results in a good grasp, we call that good MC. If the compliance of the fingers leads to a less firm grasp, we consider this bad MC. Both forms of MC describe hand-object interactions, but only the former is desirable, while the latter is to be avoided.
The automated design of soft robots must minimize bad MC and maximize good MC, relative to a particular task. In this paper, we propose for the first time a method to identify good and bad MC from observed behavior. If the observed behavior can be represented in some high-dimensional space, our method identifies sub-spaces associated with good MC and sub-spaces associated with bad MC. Such a criterion is a first and important step towards a quantitative design of soft robots.
In this work, we propose a method in four steps, which are outlined in the following and demonstrated with the RBO Hand 2 (by the RBO Lab at the Technical University of Berlin, Oliver Brock and Raphael Deimel).
In the video shown above, the controller of the RBO Hand 2 always executes the same prescriptive synergy, i.e., the fixed set of motor commands that closes the hand. There is no perception of the object’s shape, size, or position. The grasp results from the interplay of the prescriptive synergy, the morphology of the hand, and the materials of the hand and the object. In the context of this work, we refer to these body-environment interactions as morphological computation. Good morphological computation consists of body-environment interactions that support the task and simplify control. Bad morphological computation consists of body-environment interactions that make the task harder to solve or increase the required control complexity. An example of the latter would be excessively soft fingers: they would lead to more interactions between the hand and its environment, but would make it more difficult to grasp an object firmly. This is explained in more detail below.
The key question of this work is:
How can we identify good and bad MC from observation alone and use the information for an automated design process?
Our process requires four steps.
Optimisation Process
Step 1: Experiments
In the first step, we conducted a series of experiments with variations in
- Hand morphology
- Object’s shape and size
- Object’s initial position and orientation
- Controller parameters
The following image gives an impression of the type of variations that were used (images by Raphael Deimel, TU Berlin):
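The experimental variations listed above can be thought of as a parameter grid over which the grasps are executed. A minimal sketch of such a grid, with purely hypothetical placeholder values (the concrete morphologies, objects, poses, and controller settings are not those used in the paper):

```python
from itertools import product

# Hypothetical experimental variations; all values are illustrative placeholders.
hand_morphologies = ["stiff_fingers", "soft_fingers"]
object_shapes = ["cylinder", "box", "ball"]
object_poses = [(0.0, 0.0, 0.0), (0.02, 0.0, 15.0)]  # (dx, dy, yaw_deg) offsets
controller_params = [0.5, 1.0]                        # e.g. actuation scaling

# Every combination of the four variation axes yields one grasp experiment.
experiments = list(product(hand_morphologies, object_shapes,
                           object_poses, controller_params))
print(len(experiments))  # 2 * 3 * 2 * 2 = 24 runs
```

Enumerating the full Cartesian product in this way makes it easy to run and later label every grasp by its experimental conditions.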
The following video shows what we mean by good and bad morphological computation:
The RBO Hand 2 is simulated by the movement of 31 coordinate frames (see image below, image by Raphael Deimel, TU Berlin). The 3D coordinates of these coordinate frames will be used in the next steps.
Step 2: Extracting behaviour-environment interactions
The second step is to extract the component of the behaviour that can be attributed to hand-object interactions alone. We do this by subtracting the prescriptive behaviour from the observed grasp. The prescriptive synergies are the movements of the coordinate frames that result from the motor commands only; to record them, collision detection was turned off in the simulation, so that the hand’s movements are not influenced by hand-object and hand-environment interactions (next videos below, video on the right-hand side). We subtract these coordinates from those of the actual grasps, i.e., the hand’s movements resulting from both the motor commands and the hand-object and hand-environment interactions (next videos below, video on the left-hand side).
The following three videos show the visualisation of the difference. Left: Grasp, Centre: Prescriptive Synergies, Right: Grasp – Prescriptive Synergies.
The video on the left-hand side shows the grasp displayed in the video above. The video in the centre is the prescriptive synergy (right-hand video above), and the video on the right-hand side shows the difference between the two. The movements in this video (right-hand side) are the contribution of body-environment interactions. These movements are what we want to analyse and understand to improve the morphology of a soft robot in an automated design process.
In what follows, the data visualised on the right-hand side is used, i.e., the component of the behaviour that can be attributed to body-environment interactions alone.
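The subtraction described in this step can be sketched as follows. This is a minimal illustration with NumPy; the function name is ours, and the array shapes follow the description in Step 3 (300 time steps, 93 stacked x, y, z coordinates):

```python
import numpy as np

def interaction_component(grasp, prescriptive):
    """Per-time-step difference between the observed grasp trajectory and the
    prescriptive (collision-free) trajectory.

    Both arrays have shape (T, D): T time steps and D stacked x, y, z
    coordinates of the tracked frames. What remains after subtraction is the
    part of the movement caused by body-environment interactions alone.
    """
    grasp = np.asarray(grasp, dtype=float)
    prescriptive = np.asarray(prescriptive, dtype=float)
    assert grasp.shape == prescriptive.shape
    return grasp - prescriptive

# Toy sanity check: identical trajectories leave no interaction component.
T, D = 300, 93
traj = np.zeros((T, D))
assert np.allclose(interaction_component(traj, traj), 0.0)
```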
Step 3: Dimensionality Reduction
The data obtained from a single grasp is a matrix with 93 columns (x, y, z coordinates for 31 coordinate frames) and 300 rows (each row is a single time step). The next step is to reduce the dimensionality significantly (for the clustering in the next step), but in a meaningful way: the reduced data should still reflect important properties of the corresponding grasps. We chose to calculate the covariance matrix of each grasp, which is given by:

C_{ij} = \frac{1}{T} \sum_{t=1}^{T} \big(x_i(t) - \bar{x}_i\big)\big(x_j(t) - \bar{x}_j\big)
where the index i refers to a coordinate of a coordinate frame (e.g. i=1 refers to the x coordinate of the first coordinate frame and i=4 to the x coordinate of the second), t is the time step, and \bar{x}_i is the temporal mean of coordinate i. Large positive covariance coefficients correspond to coordinate frames that move together; this could indicate that the link between these frames should be strengthened. A value close to zero means that the movements of the two frames are unrelated, and a large negative value means that the corresponding coordinate frames move in opposite directions, i.e., the movement of one frame undoes the movement of the other. The following video is a visualisation of the three cases:
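A minimal sketch of this per-grasp covariance computation (the function name is ours; the input is the interaction component from Step 2 as a NumPy array):

```python
import numpy as np

def grasp_covariance(diff):
    """Covariance matrix of one grasp's interaction component.

    `diff` has shape (T, D): T time steps, D coordinates (x, y, z of each
    tracked frame). Returns the (D, D) matrix with entry (i, j) equal to
    (1/T) * sum_t (x_i(t) - mean_i) * (x_j(t) - mean_j).
    """
    diff = np.asarray(diff, dtype=float)
    centred = diff - diff.mean(axis=0)   # subtract the temporal mean per coordinate
    T = diff.shape[0]
    return centred.T @ centred / T

# Toy check of the three cases: coordinates 0 and 1 move together
# (large positive coefficient), 0 and 2 in opposite directions
# (large negative coefficient).
t = np.linspace(0, 2 * np.pi, 300)
data = np.stack([np.sin(t), np.sin(t), -np.sin(t)], axis=1)
C = grasp_covariance(data)
assert C[0, 1] > 0 and C[0, 2] < 0
```

For the data described above (300 rows, 93 columns), this yields one 93×93 symmetric matrix per grasp.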
Step 4: Clustering
The final step completed in this publication is the clustering of the covariance matrices and an analysis of the clusters with respect to their usefulness in an automated design process. We used t-SNE (van der Maaten & Hinton, 2008) to embed the matrices in two dimensions. The result is shown in the following plot:
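The preparation for this step can be sketched as follows: each symmetric covariance matrix is flattened into a feature vector before being embedded in 2D. Both helper names are ours, and the final projection here uses PCA via SVD purely as a lightweight, dependency-free stand-in; the paper used t-SNE, for which `sklearn.manifold.TSNE(n_components=2)` would be the usual choice:

```python
import numpy as np

def vectorise(cov_matrices):
    """Flatten each symmetric (D, D) covariance matrix into its upper
    triangle (including the diagonal), one feature vector per grasp."""
    iu = np.triu_indices(cov_matrices[0].shape[0])
    return np.stack([C[iu] for C in cov_matrices])

def embed_2d(features):
    """Project feature vectors to two dimensions. PCA via SVD is a simple
    stand-in here; the paper used t-SNE (e.g. sklearn.manifold.TSNE)."""
    X = features - features.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:2].T  # coordinates along the first two principal axes

# Toy data: ten random 5x5 covariance matrices instead of the 93x93
# matrices from Step 3.
rng = np.random.default_rng(0)
covs = [np.cov(rng.standard_normal((5, 300))) for _ in range(10)]
points = embed_2d(vectorise(covs))
assert points.shape == (10, 2)
```

Each resulting 2D point corresponds to one grasp and can then be coloured post hoc by experimental condition, as in the plots below.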
We now ask how the resulting clusters can be explained. For this purpose, we coloured the dots (each of which represents a single covariance matrix) post hoc by the object’s shape and the object’s initial position:
Neither the object’s shape nor its initial position and orientation explains the clustering. The next two plots show the same clusters, coloured post hoc by grasp success (smaller values are better) and morphological computation (larger values are better):
Conclusions
This is the first proposal for an automated design process for soft robotics.
Acknowledgements
We gratefully acknowledge financial support by the European Commission (SOMA, H2020-ICT-645599) and the German Priority Program DFG-SPP 1527 “Autonomous Learning”.