Purposeful and robust manipulation requires good hand-eye coordination. To a certain extent this can be achieved using information from joint encoders and known kinematics. However, for many robots a significant error of several centimeters remains in the pose of the end-effector and fingers. Especially for fine manipulation tasks...
In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), IEEE, May 2016 (inproceedings)
To achieve accurate vision-based control with a robotic arm, good hand-eye coordination is required. However, knowing the current configuration of the arm can be very difficult due to noisy readings from joint encoders or an inaccurate hand-eye calibration. We propose an approach to robot arm pose estimation that uses depth images of the arm as input to directly estimate angular joint positions. This is a frame-by-frame method which does not rely on good initialisation of the solution from previous frames or on knowledge from the joint encoders. For estimation, we employ a random regression forest which is trained on synthetically generated data. We compare different training objectives of the forest and also analyse the influence of prior segmentation of the arm on accuracy. We show that this approach improves on previous work both in terms of computational complexity and accuracy. Despite being trained on synthetic data only, we demonstrate that the estimation also works on real depth images.
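The core idea of the abstract above can be sketched with a small regression-forest example. This is a hedged illustration only, not the authors' implementation: real depth-image features (e.g. depth-difference descriptors) are stood in for by synthetic random vectors, and the mapping, dimensions, and variable names are all invented for the sketch.

```python
# Minimal sketch of frame-by-frame joint-angle regression with a random forest.
# The features stand in for depth-image descriptors; the data is synthetic, mirroring
# the paper's training setup. All names and dimensions are illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n_samples, n_features, n_joints = 2000, 32, 7

# Synthetic training data: random "depth features" and a smooth mapping to joint angles.
W = rng.normal(size=(n_features, n_joints))
X = rng.normal(size=(n_samples, n_features))
y = np.tanh(X @ W)  # stand-in joint angles, bounded like real joint limits

forest = RandomForestRegressor(n_estimators=50, max_depth=12, random_state=0)
forest.fit(X, y)

# Frame-by-frame estimation: each depth frame is processed independently,
# with no initialisation from previous frames or from joint encoders.
x_new = rng.normal(size=(1, n_features))
theta_hat = forest.predict(x_new)[0]
print(theta_hat.shape)  # one angle estimate per joint
```

Because the forest regresses all joint angles directly from a single frame, a bad estimate on one frame cannot corrupt the next, which is what makes the method robust without temporal initialisation.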
Eberhard-Karls-Universität Tübingen, May 2015 (mastersthesis)
For grasping and manipulation with robot arms, knowing the current pose of the arm is crucial
for successfully controlling its motion. Often, pose estimates can be acquired from encoders
inside the arm, but these can be significantly inaccurate, which makes the use of additional
sensing necessary.
In this master thesis, a novel approach to robot arm pose estimation is presented that works on
single depth images, without the need for prior foreground segmentation or other preprocessing.
A random regression forest is used, which is trained only on synthetically generated data.
The approach improves on former work by Bohg et al. by considerably reducing the computational
effort at both training and test time. The forest in the new method directly estimates the
desired joint angles, while in the former approach the forest casts 3D position votes for the
joints, which then have to be clustered and fed into an iterative inverse kinematics process to
finally obtain the joint angles.
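The former pipeline described above can be sketched as follows. This is a hedged illustration, not the code of Bohg et al.: the planar 2-link arm, its link lengths, and the use of simple vote averaging (standing in for clustering) and damped-least-squares iterations (standing in for their inverse kinematics step) are all assumptions made for the sketch.

```python
# Hedged sketch of the earlier voting pipeline this thesis improves on:
# per-pixel 3D position votes are aggregated (here: simple averaging in place of
# clustering), then joint angles are recovered by an iterative inverse-kinematics
# fit. The planar 2-link arm and link lengths are hypothetical.
import numpy as np

L1, L2 = 0.4, 0.3  # hypothetical link lengths (metres)

def fk(theta):
    """End-effector position of a planar 2-link arm."""
    t1, t2 = theta
    return np.array([L1 * np.cos(t1) + L2 * np.cos(t1 + t2),
                     L1 * np.sin(t1) + L2 * np.sin(t1 + t2)])

def jacobian(theta):
    t1, t2 = theta
    return np.array([[-L1*np.sin(t1) - L2*np.sin(t1+t2), -L2*np.sin(t1+t2)],
                     [ L1*np.cos(t1) + L2*np.cos(t1+t2),  L2*np.cos(t1+t2)]])

# Noisy per-pixel votes for the end-effector position, aggregated by their mean.
true_theta = np.array([0.5, -0.3])
votes = fk(true_theta) + 0.01 * np.random.default_rng(1).normal(size=(200, 2))
target = votes.mean(axis=0)

# Iterative IK (damped least squares) recovers joint angles from the target point.
theta = np.zeros(2)
for _ in range(100):
    err = target - fk(theta)
    J = jacobian(theta)
    theta += np.linalg.solve(J.T @ J + 1e-3 * np.eye(2), J.T @ err)
```

Even in this toy form, the extra machinery is visible: votes must be aggregated and an iterative solver run per frame, whereas the thesis's forest outputs the joint angles in a single prediction.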
To improve estimation accuracy, the standard training objective of the forest is
replaced by a specialized function that makes use of a model-dependent distance metric, called
Experimental results show that the specialized objective indeed improves pose estimation, and
that the method, despite being trained on synthetic data only, is able to
provide reasonable estimates for real data at test time.
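The idea of a model-dependent distance metric can be illustrated with a small sketch. This is a hedged example, not the metric from the thesis: it assumes a hypothetical planar 2-link arm and compares joint configurations by how far the arm's link points actually move, rather than by raw angle differences.

```python
# Hedged illustration of a model-dependent distance between joint configurations:
# instead of comparing raw joint angles, compare where the arm's links end up.
# The planar 2-link arm and its link lengths are hypothetical.
import numpy as np

LINK_LENGTHS = [0.4, 0.3]  # metres, hypothetical

def link_points(theta):
    """Forward kinematics: positions of the joints and end-effector of a planar arm."""
    pts, pos, ang = [np.zeros(2)], np.zeros(2), 0.0
    for length, t in zip(LINK_LENGTHS, theta):
        ang += t
        pos = pos + length * np.array([np.cos(ang), np.sin(ang)])
        pts.append(pos)
    return np.stack(pts)

def model_distance(theta_a, theta_b):
    """Mean displacement of corresponding link points between two configurations."""
    return np.linalg.norm(link_points(theta_a) - link_points(theta_b), axis=1).mean()

# Two configurations with the same angular change applied to different joints:
# rotating the base joint displaces the whole arm, rotating the last joint only
# the tip, so the model-dependent metric weights them differently, while a plain
# angle-space metric would treat them as equally distant.
base = np.array([0.0, 0.0])
d_base = model_distance(base, np.array([0.2, 0.0]))
d_last = model_distance(base, np.array([0.0, 0.2]))
print(d_base > d_last)  # True
```

A training objective built on such a metric groups samples by their effect on the arm's actual pose, which is what makes it better suited to pose estimation than a plain variance-reduction criterion in angle space.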
Our goal is to understand the principles of Perception, Action and Learning in autonomous systems that successfully interact with complex environments, and to use this understanding to design future systems.