Tuesday, September 9, 2014

ICDL-EpiRob 2014 Paper Preview

How can robots learn their own goals? How can even human babies, who start out with no particular goals at hand, develop such intentions and the abstractions thereof? In my upcoming ICDL paper I approach this question, which has been a longstanding one (not only) in the field of developmental robotics:
  • Rolf, M., and M. Asada, "Autonomous Development of Goals: From Generic Rewards to Goal and Self Detection", IEEE Int. Conf. on Development and Learning and on Epigenetic Robotics (ICDL-EpiRob), Genoa, 10/2014. (in press)
Abstract — Goals are abstractions that express agents’ intention and allow them to organize their behavior appropriately. How can agents develop such goals autonomously? This paper proposes a conceptual and computational account to this longstanding problem. We argue to consider goals as abstractions of lower-level intention mechanisms such as rewards and values, and point out that goals need to be considered alongside a detection of one’s own actions’ effects. Then, both goals and self-detection can be learned from generic rewards. We show experimentally that task-unspecific rewards induced by visual saliency lead to self and goal representations that constitute goal-directed reaching.
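To give a rough flavour of what learning goals from generic rewards could look like computationally, here is a purely illustrative sketch. It is not the algorithm from the paper; the reward function, the toy rollout, and all names in it (generic_reward, goal_prototype, self_score) are assumptions made up for this post.
```python
# Illustrative sketch only, not the paper's method: pick candidate "goal"
# observations as those with the highest task-unspecific reward (think visual
# saliency), and compute a crude "self-detection" score from how strongly the
# observed effects follow the agent's own motor commands.
import numpy as np

rng = np.random.default_rng(0)

def generic_reward(observation):
    # Placeholder for a task-unspecific reward such as visual saliency.
    return float(np.linalg.norm(observation))

# Toy rollout: random motor commands; observations partly caused by them.
T = 500
commands = rng.normal(size=(T, 2))
observations = 0.8 * commands + 0.2 * rng.normal(size=(T, 2))

rewards = np.array([generic_reward(o) for o in observations])

# Goal abstraction: summarize the highest-reward observations.
top = np.argsort(rewards)[-10:]
goal_prototype = observations[top].mean(axis=0)

# Crude self-detection: correlation between own commands and observed effects.
self_score = abs(np.corrcoef(commands[:, 0], observations[:, 0])[0, 1])

print("candidate goal (mean of high-reward observations):", goal_prototype)
print("self-detection score (command/effect correlation):", round(self_score, 2))
```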
In the following weeks I will detail some of the paper's main points here. Stay tuned!

Wednesday, August 20, 2014

Talk @ Developmental robotics WS, Ro-Man 2014

Next week I will attend the Ro-Man 2014 conference in Edinburgh. On Monday, August 25, I will give an invited talk in the workshop "Developmental and bio-inspired approaches for memory and emotion modelling in cognitive robotics", co-organized by Katrin. I will talk about my latest work on the autonomous development of goal systems in developmental robotics, part of which will appear at ICDL.

Wednesday, July 23, 2014

ICIS Workshop Summary and Thoughts

On July 2 we held the workshop "Computational Models of Infant Development" as a pre-conference event of ICIS 2014 in Berlin. The workshop was a follow-up to the 2012 workshop "Developmental Robotics" at the same venue. Our general goal, of course, was to bring experimental research in psychology and modelling research in machine learning and developmental robotics closer together. This time we focused more on computational aspects than on immediate robotics efforts, and tried to compare approaches such as connectionist and dynamical-systems models.
The workshop was attended by 40 people and sponsored by FIAS and our research project. We also had a poster session with 10 posters, which I am not going to introduce individually, but you can check out the titles here. We had some remarkable keynote speeches, which I summarize below. More than that, we heard some important arguments and had a very insightful discussion about the science in "Constructive Developmental Science", "Developmental Robotics", "Autonomous Mental Development", or whatever you'd like to call it. Below I try to wrap up these arguments and situate them in the context of recent debates in the ICDL community.


Wednesday, July 9, 2014

The image-scratch paradigm: a new paradigm for evaluating infants' motivated gaze control

Our project's most recent paper, which I had the pleasure to co-author, has appeared in Nature's Scientific Reports. It introduces a new experimental paradigm to assess infants' ability to grasp the effects of their actions and use them appropriately. That is not an easy thing, since infants' motor abilities are not very versatile or precise. So even if their brains have figured out the effects of their actions and when to take them, they might not be able to execute them well enough - well enough, that is, for an experimenter to figure out what they are up to. Their eyes, however, they can control on a low level very early, which opens up the chance to measure their general ability to grasp and utilize their actions' effects.
  • Miyazaki, M., H. Takahashi, M. Rolf, H. Okada, and T. Omori, "The image-scratch paradigm: a new paradigm for evaluating infants' motivated gaze control", Scientific Reports 4(5498), 06/2014. (online, pdf)
Abstract — Human infants show spontaneous behaviours such as general movement, goal-directed behaviour, and self-motivated behaviour from a very early age. However, it is unclear how these behaviours are organised throughout development. A major hindrance to empirical investigation is that there is no common paradigm for all ages that can circumvent infants' underdeveloped verbal and motor abilities. Here, we propose a new paradigm, named the image-scratch task, using a gaze-contingent technique that is adaptable to various extents of motor ability. In this task, participants scratch off a black layer on a display to uncover pictures beneath it by using their gaze. We established quantitative criteria for spontaneous eye-movement based on adults' gaze-data and demonstrated that our task is useful for evaluating eye-movements motivated by outcome attractiveness in 8-month-olds. Finally, we discuss the potential of this paradigm for revealing the mechanisms and developmental transitions underlying infants' spontaneous and intentional behaviours.
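The gaze-contingent mechanic at the core of the task is easy to picture: a black layer covers a picture, and every gaze sample "scratches" away a small region around the point of regard. The snippet below is only a minimal sketch of that logic, not the authors' experimental software; the display size, scratch radius, and gaze samples are made-up placeholders.
```python
# Minimal sketch of a gaze-contingent "scratch" display (illustration only).
# A black opacity mask covers the picture; each gaze sample clears a small
# disc around the gaze position. RADIUS and the gaze samples are assumptions.
import numpy as np

H, W = 600, 800          # display resolution in pixels (assumed)
RADIUS = 25              # region revealed per gaze sample, in pixels (assumed)
mask = np.ones((H, W))   # 1 = opaque black layer, 0 = picture visible

ys, xs = np.mgrid[0:H, 0:W]

def scratch(gaze_x, gaze_y):
    """Reveal the picture within RADIUS pixels of the current gaze position."""
    dist2 = (xs - gaze_x) ** 2 + (ys - gaze_y) ** 2
    mask[dist2 <= RADIUS ** 2] = 0.0

# Feed in a few simulated gaze samples and report how much is uncovered.
for gx, gy in [(400, 300), (420, 310), (450, 330)]:
    scratch(gx, gy)

print(f"fraction of picture uncovered: {1.0 - mask.mean():.3%}")
```
Quantities like the uncovered fraction over time are the kind of measure on which criteria for spontaneous versus outcome-motivated eye movements can then be defined.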

Wednesday, April 2, 2014

Computational Models of Infant Development @ ICIS

Together with Jochen Triesch, Matt Schlesinger, and Minoru Asada, I am currently co-organizing the workshop "Computational Models of Infant Development" as a pre-conference event for the 19th Biennial Int. Conf. on Infant Studies. The workshop will be held on July 2nd, and the main conference from July 3 to July 5, both in the Maritim Hotel, Berlin, Germany.
The workshop will bring together a wide set of perspectives on computational development and learning:
Recent years have seen a growing interest in using computational modeling to complement classic experimental approaches for studying infant development. This workshop is aimed at modelers and non-modelers alike. It showcases recent successes of the fruitful interaction of empirical and theoretical/computational research in infant development. Leading theorists will highlight how careful theoretical work can lead to a better understanding of empirical data and generate testable predictions for future experiments. A second aim of the workshop is to compare different modeling approaches or "schools" including neural networks, Bayesian reasoning, and dynamical systems, emphasizing their respective strengths and weaknesses.
Check out the full information here, and participate :-)

Thursday, March 13, 2014

Goal Babbling on the BHA featured in New Scientist

Last week, during the HRI conference in Bielefeld, New Scientist reporter Paul Marks came to see Goal Babbling on the Bionic Handling Assistant during the lab tours at Bielefeld University. He wrote down his impressions for the latest issue of the magazine. What he reports is, like most people who see it, that the robot seems somewhat alive, in particular when it moves by means of Goal Babbling, and even more so when one physically touches and guides the robot during that learning. Read the full report here: Robot elephant trunk learns motor skills like a baby.

Monday, March 3, 2014

HRI 2014 and CODEFROR Kick-Off

This week I am attending the HRI 2014 conference in Bielefeld. I will give a talk in the workshop "Attention Models in Robotics" about my work on attention via synchrony. On Friday, the kick-off workshop of CODEFROR will take place; CODEFROR is a new EU FP7 "International Research Staff Exchange Scheme" project for collaboration between IIT, Bielefeld University, Osaka University, and the University of Tokyo.