Recent advances in sensors and algorithms have given robots improved perception abilities. However, effective perception alone is not sufficient for human-robot interaction, since the robot's reaction should depend on an understanding of the human's intention. Hence, my research interests lie at the strategic level of human-robot interaction, which bridges the perception of human action and the planning of the robot's reaction. On one side, the robot needs to infer the underlying intention of the human. On the other side, efficient planning for reaction can be achieved by building on motor skills, with reactive policies learned to choose the right skill at the right time.
I have been developing and implementing machine learning algorithms for intention inference and for learning reactive policies. I have chosen robot table tennis as a benchmark, as it is sufficiently complex for meaningful evaluation while remaining intuitive enough that the results can still be interpreted. We have achieved promising experimental results, which demonstrate the potential of these algorithms in many other human-robot interaction scenarios.
Our long-term goal is to understand the principles of perception, action, and learning in autonomous systems that successfully interact with complex environments, and to use this understanding to design future systems.