Learning Objective Functions for Manipulation

2013

Conference Paper


We present an approach to learning objective functions for robotic manipulation based on inverse reinforcement learning. Our path integral inverse reinforcement learning algorithm can deal with high-dimensional continuous state-action spaces, and only requires local optimality of demonstrated trajectories. We use L1 regularization in order to achieve feature selection, and propose an efficient algorithm to minimize the resulting convex objective function. We demonstrate our approach by applying it to two core problems in robotic manipulation. First, we learn a cost function for redundancy resolution in inverse kinematics. Second, we use our method to learn a cost function over trajectories, which is then used in optimization-based motion planning for grasping and manipulation tasks. Experimental results show that our method outperforms previous algorithms in high-dimensional settings.
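The abstract mentions minimizing a convex objective with L1 regularization to achieve feature selection. As a generic illustration (not the paper's actual algorithm), a standard way to handle such objectives is proximal gradient descent (ISTA), where the L1 term is applied via soft-thresholding; the least-squares data term below is a stand-in assumption, not the paper's IRL objective:

```python
import numpy as np

def soft_threshold(x, tau):
    # Proximal operator of tau * ||x||_1: shrinks each coordinate toward zero,
    # which is what drives small feature weights exactly to zero.
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def ista(A, b, lam, iters=1000):
    """Minimize 0.5 * ||A w - b||^2 + lam * ||w||_1 by proximal gradient.

    A : (n_samples, n_features) feature matrix (hypothetical placeholder
        for whatever smooth convex data term the objective uses).
    lam : L1 regularization strength; larger lam -> sparser w.
    """
    # Step size 1/L, where L = ||A||_2^2 is the Lipschitz constant
    # of the gradient of the smooth part.
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    w = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ w - b)          # gradient of the smooth term
        w = soft_threshold(w - step * grad, step * lam)
    return w
```

With noise-free data generated from a sparse weight vector, the irrelevant features come out (near) exactly zero, which is the feature-selection effect the L1 penalty provides.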

Author(s): Kalakrishnan, M. and Pastor, P. and Righetti, L. and Schaal, S.
Book Title: 2013 IEEE International Conference on Robotics and Automation
Year: 2013
Publisher: IEEE

Department(s): Autonomous Motion, Movement Generation and Control
Bibtex Type: Conference Paper (inproceedings)

DOI: 10.1109/ICRA.2013.6630743

Address: Karlsruhe, Germany
URL: https://ieeexplore.ieee.org/abstract/document/6630743/

BibTeX

@inproceedings{kalakrishnan_learning_2013,
  title = {Learning {Objective} {Functions} for {Manipulation}},
  author = {Kalakrishnan, M. and Pastor, P. and Righetti, L. and Schaal, S.},
  booktitle = {2013 {IEEE} {International} {Conference} on {Robotics} and {Automation}},
  publisher = {IEEE},
  address = {Karlsruhe, Germany},
  year = {2013},
  url = {https://ieeexplore.ieee.org/abstract/document/6630743/},
  doi = {10.1109/ICRA.2013.6630743}
}