

2010


Learning as a key ability for Human-Friendly Robots

Peters, J., Kober, J., Mülling, K., Krömer, O., Nguyen-Tuong, D., Wang, Z., Rodriguez Gomez, M., Grosse-Wentrup, M.

In 3rd Workshop for Young Researchers on Human-Friendly Robotics (HFR), pages: 1-2, October 2010 (inproceedings)

ei

Web [BibTex]

Closing the sensorimotor loop: Haptic feedback facilitates decoding of arm movement imagery

Gomez Rodriguez, M., Peters, J., Hill, J., Schölkopf, B., Gharabaghi, A., Grosse-Wentrup, M.

In Proceedings of the IEEE International Conference on Systems, Man and Cybernetics (SMC 2010), pages: 121-126, IEEE, Piscataway, NJ, USA, IEEE International Conference on Systems, Man and Cybernetics (SMC), October 2010 (inproceedings)

Abstract
Brain-Computer Interfaces (BCIs) in combination with robot-assisted physical therapy may become a valuable tool for neurorehabilitation of patients with severe hemiparetic syndromes due to cerebrovascular brain damage (stroke) and other neurological conditions. A key aspect of this approach is reestablishing the disrupted sensorimotor feedback loop, i.e., determining the intended movement using a BCI and helping a human with impaired motor function to move the arm using a robot. It has not been studied yet, however, how artificially closing the sensorimotor feedback loop affects the BCI decoding performance. In this article, we investigate this issue in six healthy subjects, and present evidence that haptic feedback facilitates the decoding of arm movement intention. The results provide evidence of the feasibility of future rehabilitative efforts combining robot-assisted physical therapy with BCIs.

ei

PDF Web DOI [BibTex]

Learning Probabilistic Discriminative Models of Grasp Affordances under Limited Supervision

Erkan, A., Kroemer, O., Detry, R., Altun, Y., Piater, J., Peters, J.

In Proceedings of the 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2010), pages: 1586-1591, IEEE, Piscataway, NJ, USA, 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), October 2010 (inproceedings)

Abstract
This paper addresses the problem of learning and efficiently representing discriminative probabilistic models of object-specific grasp affordances, particularly when the number of labeled grasps is extremely limited. The proposed method does not require an explicit 3D model but rather learns an implicit manifold on which it defines a probability distribution over grasp affordances. We obtain hypothetical grasp configurations from visual descriptors that are associated with the contours of an object. While these hypothetical configurations are abundant, labeled configurations are very scarce as these are acquired via time-costly experiments carried out by the robot. Kernel logistic regression (KLR) via joint kernel maps is trained to map the hypothesis space of grasps into continuous class-conditional probability values indicating their achievability. We propose a soft-supervised extension of KLR and a framework to combine the merits of semi-supervised and active learning approaches to tackle the scarcity of labeled grasps. Experimental evaluation shows that combining active and semi-supervised learning is favorable when an oracle is available. Furthermore, semi-supervised learning outperforms supervised learning, particularly when the labeled data is very limited.

ei

PDF Web DOI [BibTex]

Discriminative frequent subgraph mining with optimality guarantees

Thoma, M., Cheng, H., Gretton, A., Han, J., Kriegel, H., Smola, A., Song, L., Yu, P., Yan, X., Borgwardt, K.

Journal of Statistical Analysis and Data Mining, 3(5):302–318, October 2010 (article)

Abstract
The goal of frequent subgraph mining is to detect subgraphs that frequently occur in a dataset of graphs. In classification settings, one is often interested in discovering discriminative frequent subgraphs, whose presence or absence is indicative of the class membership of a graph. In this article, we propose an approach to feature selection on frequent subgraphs, called CORK, that combines two central advantages. First, it optimizes a submodular quality criterion, which means that greedy feature selection yields a near-optimal solution. Second, our submodular quality criterion can be integrated into gSpan, the state-of-the-art tool for frequent subgraph mining, and help to prune the search space for discriminative frequent subgraphs even during frequent subgraph mining.
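
Sketch (not the paper's implementation): the near-optimality claim above comes from greedily maximizing a submodular set function. The Python snippet below shows generic greedy forward selection over binary subgraph-indicator features; the correspondence-style quality function is a stand-in assumption, and the integration into gSpan is not reproduced here.

    import numpy as np

    def greedy_select(X, y, quality, k):
        """Greedily pick k columns of X by maximizing a (submodular) set function."""
        selected, remaining = [], list(range(X.shape[1]))
        for _ in range(k):
            gains = [quality(selected + [j], X, y) for j in remaining]
            best = remaining[int(np.argmax(gains))]
            selected.append(best)
            remaining.remove(best)
        return selected

    def correspondence_quality(S, X, y):
        # stand-in criterion: penalize pairs of differently labeled graphs whose
        # selected indicator patterns coincide; fewer such pairs is better
        pos, neg = X[y == 1][:, S], X[y == 0][:, S]
        return -float(sum(np.array_equal(p, n) for p in pos for n in neg))

    rng = np.random.default_rng(0)
    X = rng.integers(0, 2, size=(40, 200))   # binary subgraph-occurrence matrix
    y = rng.integers(0, 2, size=40)          # class labels
    print(greedy_select(X, y, correspondence_quality, k=5))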

ei

Web DOI [BibTex]

A biomimetic approach to robot table tennis

Mülling, K., Kober, J., Peters, J.

In Proceedings of the 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2010), pages: 1921-1926, IEEE, Piscataway, NJ, USA, 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), October 2010 (inproceedings)

Abstract
Although human beings see and move slower than table tennis or baseball robots, they manage to outperform such robot systems. One important aspect of this better performance is the human movement generation. In this paper, we study trajectory generation for table tennis from a biomimetic point of view. Our focus lies on generating efficient stroke movements capable of mastering variations in the environmental conditions, such as changing ball speed, spin and position. We study table tennis from a human motor control point of view. To make headway towards this goal, we construct a trajectory generator for a single stroke using the discrete movement stages hypothesis and the virtual hitting point hypothesis to create a model that produces a human-like stroke movement. We verify the functionality of the trajectory generator for a single forehand stroke both in a simulation and using a real Barrett WAM.

ei

Web DOI [BibTex]

Scene Representation and Object Grasping Using Active Vision

Gratal, X., Bohg, J., Björkman, M., Kragic, D.

In IROS’10 Workshop on Defining and Solving Realistic Perception Problems in Personal Robotics, October 2010 (inproceedings)

Abstract
Object grasping and manipulation pose major challenges for perception and control and require rich interaction between these two fields. In this paper, we concentrate on the plethora of perceptual problems that have to be solved before a robot can be moved in a controlled way to pick up an object. A vision system is presented that integrates a number of different computational processes, e.g. attention, segmentation, recognition or reconstruction to incrementally build up a representation of the scene suitable for grasping and manipulation of objects. Our vision system is equipped with an active robotic head and a robot arm. This embodiment enables the robot to perform a number of different actions like saccading, fixating, and grasping. By applying these actions, the robot can incrementally build a scene representation and use it for interaction. We demonstrate our system in a scenario for picking up known objects from a table top. We also show the system’s extendibility towards grasping of unknown and familiar objects.

am

video pdf slides [BibTex]

Strategies for multi-modal scene exploration

Bohg, J., Johnson-Roberson, M., Björkman, M., Kragic, D.

In Intelligent Robots and Systems (IROS), 2010 IEEE/RSJ International Conference on, pages: 4509-4515, October 2010 (inproceedings)

Abstract
We propose a method for multi-modal scene exploration where initial object hypotheses formed by active visual segmentation are confirmed and augmented through haptic exploration with a robotic arm. We update the current belief about the state of the map with the detection results and predict yet unknown parts of the map with a Gaussian Process. We show that through the integration of different sensor modalities, we achieve a more complete scene model. We also show that the prediction of the scene structure leads to a valid scene representation even if the map is not fully traversed. Furthermore, we propose different exploration strategies and evaluate them both in simulation and on our robotic platform.
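
As a rough illustration of the map-prediction step described above (not the authors' implementation), observed map cells can be regressed with a Gaussian Process and the unexplored cells queried; the kernel, its length scale, and the toy measurements below are assumptions.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    # cells already observed by vision/haptics: (x, y) position and measured height
    X_obs = np.array([[0.0, 0.0], [0.1, 0.0], [0.4, 0.3], [0.8, 0.9]])
    z_obs = np.array([0.02, 0.03, 0.15, 0.01])

    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2) + WhiteKernel(1e-4))
    gp.fit(X_obs, z_obs)

    # query the yet unknown part of the map; the predictive uncertainty (std) is
    # what an exploration strategy could use to pick the next cell to visit
    xx, yy = np.meshgrid(np.linspace(0, 1, 20), np.linspace(0, 1, 20))
    X_query = np.c_[xx.ravel(), yy.ravel()]
    mean, std = gp.predict(X_query, return_std=True)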

am

video pdf DOI Project Page [BibTex]

Attention-based active 3D point cloud segmentation

Johnson-Roberson, M., Bohg, J., Björkman, M., Kragic, D.

In Intelligent Robots and Systems (IROS), 2010 IEEE/RSJ International Conference on, pages: 1165-1170, October 2010 (inproceedings)

Abstract
In this paper we present a framework for the segmentation of multiple objects from a 3D point cloud. We extend traditional image segmentation techniques into a full 3D representation. The proposed technique relies on a state-of-the-art min-cut framework to perform a fully 3D global multi-class labeling in a principled manner. Thereby, we extend our previous work in which a single object was actively segmented from the background. We also examine several seeding methods to bootstrap the graphical model-based energy minimization and these methods are compared over challenging scenes. All results are generated on real-world data gathered with an active vision robotic head. We present quantitative results over aggregate sets as well as visual results on specific examples.

am

pdf DOI [BibTex]

Combining active learning and reactive control for robot grasping

Kroemer, O., Detry, R., Piater, J., Peters, J.

Robotics and Autonomous Systems, 58(9):1105-1116, September 2010 (article)

Abstract
Grasping an object is a task that inherently needs to be treated in a hybrid fashion. The system must decide both where and how to grasp the object. While selecting where to grasp requires learning about the object as a whole, the execution only needs to reactively adapt to the context close to the grasp's location. We propose a hierarchical controller that reflects the structure of these two sub-problems, and attempts to learn solutions that work for both. A hybrid architecture is employed by the controller to make use of various machine learning methods that can cope with the large amount of uncertainty inherent to the task. The controller's upper level selects where to grasp the object using a reinforcement learner, while the lower level comprises an imitation learner and a vision-based reactive controller to determine appropriate grasping motions. The resulting system is able to quickly learn good grasps of a novel object in an unstructured environment, by executing smooth reaching motions and preshaping the hand depending on the object's geometry. The system was evaluated both in simulation and on a real robot.

ei

PDF Web DOI [BibTex]

Weakly-Paired Maximum Covariance Analysis for Multimodal Dimensionality Reduction and Transfer Learning

Lampert, C., Kroemer, O.

In Computer Vision – ECCV 2010, pages: 566-579, (Editors: Daniilidis, K. , P. Maragos, N. Paragios), Springer, Berlin, Germany, 11th European Conference on Computer Vision, September 2010 (inproceedings)

Abstract
We study the problem of multimodal dimensionality reduction assuming that data samples can be missing at training time, and not all data modalities may be present at application time. Maximum covariance analysis, as a generalization of PCA, has many desirable properties, but its application to practical problems is limited by its need for perfectly paired data. We overcome this limitation by a latent variable approach that allows working with weakly paired data and is still able to efficiently process large datasets using standard numerical routines. The resulting weakly paired maximum covariance analysis often finds better representations than alternative methods, as we show in two exemplary tasks: texture discrimination and transfer learning.

ei

PDF Web DOI [BibTex]

Simple algorithmic modifications for improving blind steganalysis performance

Schwamberger, V., Franz, M.

In Proceedings of the 12th ACM workshop on Multimedia and Security (MM&Sec 2010), pages: 225-230, (Editors: Campisi, P. , J. Dittmann, S. Craver), ACM Press, New York, NY, USA, 12th ACM Workshop on Multimedia and Security (MM&Sec), September 2010 (inproceedings)

Abstract
Most current algorithms for blind steganalysis of images are based on a two-stage approach: First, features are extracted in order to reduce dimensionality and to highlight potential manipulations; second, a classifier trained on pairs of clean and stego images finds a decision rule for these features to detect stego images. Thereby, vector components might vary significantly in their values, hence normalization of the feature vectors is crucial. Furthermore, most classifiers contain free parameters, and an automatic model selection step has to be carried out for adapting these parameters. However, the commonly used cross-validation destroys some information needed by the classifier because of the arbitrary splitting of image pairs (stego and clean version) in the training set. In this paper, we propose simple modifications to the normalization and to standard cross-validation. In our experiments, we show that these methods lead to a significant improvement of the standard blind steganalyzer of Lyu and Farid.
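
One simple way to keep each clean/stego pair inside the same fold, the cross-validation issue raised above, is to key the folds on the source image, e.g. with scikit-learn's GroupKFold. The sketch below uses synthetic features and illustrates the idea rather than the authors' exact modification.

    import numpy as np
    from sklearn.model_selection import GroupKFold
    from sklearn.svm import SVC

    X = np.random.randn(200, 50)            # feature vectors for clean + stego images
    y = np.repeat([0, 1], 100)              # 0 = clean, 1 = stego
    groups = np.tile(np.arange(100), 2)     # clean/stego versions of image i share group i

    clf = SVC(kernel="rbf", C=1.0, gamma="scale")
    for train_idx, test_idx in GroupKFold(n_splits=5).split(X, y, groups):
        clf.fit(X[train_idx], y[train_idx])
        print(clf.score(X[test_idx], y[test_idx]))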

ei

PDF Web DOI [BibTex]

Nonparametric Regression between General Riemannian Manifolds

Steinke, F., Hein, M., Schölkopf, B.

SIAM Journal on Imaging Sciences, 3(3):527-563, September 2010 (article)

Abstract
We study nonparametric regression between Riemannian manifolds based on regularized empirical risk minimization. Regularization functionals for mappings between manifolds should respect the geometry of input and output manifold and be independent of the chosen parametrization of the manifolds. We define and analyze the three most simple regularization functionals with these properties and present a rather general scheme for solving the resulting optimization problem. As application examples we discuss interpolation on the sphere, fingerprint processing, and correspondence computations between three-dimensional surfaces. We conclude with characterizing interesting and sometimes counterintuitive implications and new open problems that are specific to learning between Riemannian manifolds and are not encountered in multivariate regression in Euclidean space.

ei

Web DOI [BibTex]

Semi-supervised Remote Sensing Image Classification via Maximum Entropy

Erkan, A., Camps-Valls, G., Altun, Y.

In Proceedings of the 2010 IEEE International Workshop on Machine Learning for Signal Processing (MLSP 2010), pages: 313-318, IEEE, Piscataway, NJ, USA, 2010 IEEE International Workshop on Machine Learning for Signal Processing (MLSP), September 2010 (inproceedings)

Abstract
Remote sensing image segmentation requires multi-category classification, typically with a limited number of labeled training samples. While semi-supervised learning (SSL) has emerged as a sub-field of machine learning to tackle the scarcity of labeled samples, most SSL algorithms to date have had trade-offs in terms of scalability and/or applicability to multi-categorical data. In this paper, we evaluate semi-supervised logistic regression (SLR), a recent information theoretic semi-supervised algorithm, for remote sensing image classification problems. SLR is a probabilistic discriminative classifier and a specific instance of the generalized maximum entropy framework with a convex loss function. Moreover, the method is inherently multi-class and easy to implement. These characteristics make SLR a strong alternative to the widely used semi-supervised variants of SVM for the segmentation of remote sensing images. We demonstrate the competitiveness of SLR in multispectral, hyperspectral and radar image classification.

ei

PDF Web DOI [BibTex]

MLSP Competition, 2010: Description of first place method

Leiva, JM., Martens, SMM.

In Proceedings of the 2010 IEEE International Workshop on Machine Learning for Signal Processing (MLSP 2010), pages: 112-113, IEEE, Piscataway, NJ, USA, 2010 IEEE International Workshop on Machine Learning for Signal Processing (MLSP), September 2010 (inproceedings)

Abstract
Our winning approach to the 2010 MLSP Competition is based on a generative method for P300-based BCI decoding, successfully applied to visual spellers. Here, generative has a double meaning. On the one hand, we work with a probability density model of the data given the target/non target labeling, as opposed to discriminative (e.g. SVM-based) methods. On the other hand, the natural consequence of this approach is a decoding based on comparing the observation to templates generated from the data.

ei

Web DOI [BibTex]

Multiframe Blind Deconvolution, Super-Resolution, and Saturation Correction via Incremental EM

Harmeling, S., Sra, S., Hirsch, M., Schölkopf, B.

In Proceedings of the 17th International Conference on Image Processing (ICIP 2010), pages: 3313-3316, IEEE, Piscataway, NJ, USA, 17th International Conference on Image Processing (ICIP), September 2010 (inproceedings)

Abstract
We formulate the multiframe blind deconvolution problem in an incremental expectation maximization (EM) framework. Beyond deconvolution, we show how to use the same framework to address: (i) super-resolution despite noise and unknown blurring; (ii) saturation correction of overexposed pixels that confound image restoration. The abundance of data allows us to address both of these without using explicit image or blur priors. The end result is a simple but effective algorithm with no hyperparameters. We apply this algorithm to real-world images from astronomy and to super-resolution tasks: for both, our algorithm yields increased resolution and deconvolved images simultaneously.

ei

PDF Web DOI [BibTex]

Gaussian Mixture Modeling with Gaussian Process Latent Variable Models

Nickisch, H., Rasmussen, C.

In Pattern Recognition, pages: 271-282, (Editors: Goesele, M. , S. Roth, A. Kuijper, B. Schiele, K. Schindler), Springer, Berlin, Germany, 32nd Annual Symposium of the German Association for Pattern Recognition (DAGM), September 2010 (inproceedings)

Abstract
Density modeling is notoriously difficult for high dimensional data. One approach to the problem is to search for a lower dimensional manifold which captures the main characteristics of the data. Recently, the Gaussian Process Latent Variable Model (GPLVM) has successfully been used to find low dimensional manifolds in a variety of complex data. The GPLVM consists of a set of points in a low dimensional latent space, and a stochastic map to the observed space. We show how it can be interpreted as a density model in the observed space. However, the GPLVM is not trained as a density model and therefore yields bad density estimates. We propose a new training strategy and obtain improved generalisation performance and better density estimates in comparative evaluations on several benchmark data sets.

ei

PDF Web DOI [BibTex]

A Nearest Neighbor Data Structure for Graphics Hardware

Cayton, L.

In Proceedings of the First International Workshop on Accelerating Data Management Systems Using Modern Processor and Storage Architectures (ADMS 2010), pages: 1-6, First International Workshop on Accelerating Data Management Systems Using Modern Processor and Storage Architectures (ADMS), September 2010 (inproceedings)

Abstract
Nearest neighbor search is a core computational task in database systems and throughout data analysis. It is also a major computational bottleneck, and hence an enormous body of research has been devoted to data structures and algorithms for accelerating the task. Recent advances in graphics hardware provide tantalizing speedups on a variety of tasks and suggest an alternate approach to the problem: simply run brute force search on a massively parallel system. In this paper we marry the approaches with a novel data structure that can effectively make use of parallel systems such as graphics cards. The architectural complexities of graphics hardware - the high degree of parallelism, the small amount of memory relative to instruction throughput, and the single instruction, multiple data design - present significant challenges for data structure design. Furthermore, the brute force approach applies perfectly to graphics hardware, leading one to question whether an intelligent algorithm or data structure can even hope to outperform this basic approach. Despite these challenges and misgivings, we demonstrate that our data structure - termed a Random Ball Cover - provides significant speedups over the GPU-based brute force approach.
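
A toy, CPU-side sketch of the two-level idea behind such a representative-based structure follows: brute force against a small random set of representatives, then brute force inside the balls they own. The ball size, the number of probed representatives, and the ownership rule are illustrative assumptions rather than the paper's construction, and the actual speedups come from running both stages on the GPU.

    import numpy as np

    rng = np.random.default_rng(0)
    data = rng.standard_normal((10000, 32))

    n_reps, ball_size = 100, 300
    reps = data[rng.choice(len(data), n_reps, replace=False)]

    # squared distances representatives <-> database via the usual expansion
    d2 = (reps**2).sum(1)[:, None] + (data**2).sum(1)[None, :] - 2.0 * reps @ data.T
    balls = np.argsort(d2, axis=1)[:, :ball_size]   # each representative "owns" a ball

    def query(q, n_probe=3):
        d_rep = ((reps - q) ** 2).sum(1)
        probe = np.argsort(d_rep)[:n_probe]         # nearest representatives
        cand = np.unique(balls[probe])              # union of their balls
        d_cand = ((data[cand] - q) ** 2).sum(1)
        return cand[int(np.argmin(d_cand))]         # (approximate) nearest neighbor

    print(query(rng.standard_normal(32)))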

ei

PDF Web [BibTex]

Visibility Maps for Improving Seam Carving

Mansfield, A., Gehler, P., Van Gool, L., Rother, C.

In Media Retargeting Workshop, European Conference on Computer Vision (ECCV), September 2010 (inproceedings)

ps

webpage pdf slides supplementary code [BibTex]

A 2D human body model dressed in eigen clothing

Guan, P., Freifeld, O., Black, M. J.

In European Conf. on Computer Vision, (ECCV), pages: 285-298, Springer-Verlag, September 2010 (inproceedings)

Abstract
Detection, tracking, segmentation and pose estimation of people in monocular images are widely studied. Two-dimensional models of the human body are extensively used; however, they are typically fairly crude, representing the body either as a rough outline or in terms of articulated geometric primitives. We describe a new 2D model of the human body contour that combines an underlying naked body with a low-dimensional clothing model. The naked body is represented as a Contour Person that can take on a wide variety of poses and body shapes. Clothing is represented as a deformation from the underlying body contour. This deformation is learned from training examples using principal component analysis to produce eigen clothing. We find that the statistics of clothing deformations are skewed and we model the a priori probability of these deformations using a Beta distribution. The resulting generative model captures realistic human forms in monocular images and is used to infer 2D body shape and pose under clothing. We also use the coefficients of the eigen clothing to recognize different categories of clothing on dressed people. The method is evaluated quantitatively on synthetic and real images and achieves better accuracy than previous methods for estimating body shape under clothing.

ps

pdf data poster Project Page [BibTex]

Analyzing and Evaluating Markerless Motion Tracking Using Inertial Sensors

Baak, A., Helten, T., Müller, M., Pons-Moll, G., Rosenhahn, B., Seidel, H.

In European Conference on Computer Vision (ECCV Workshops), September 2010 (inproceedings)

ps

pdf [BibTex]

Statistical image analysis and percolation theory

Davies, P., Langovoy, M., Wittich, O.

73rd Annual Meeting of the Institute of Mathematical Statistics (IMS), August 2010 (talk)

Abstract
We develop a novel method for detection of signals and reconstruction of images in the presence of random noise. The method uses results from percolation theory. We specifically address the problem of detection of objects of unknown shapes in the case of nonparametric noise. The noise density is unknown and can be heavy-tailed. We view the object detection problem as hypothesis testing for discrete statistical inverse problems. We present an algorithm that detects objects of various shapes in noisy images. We prove results on consistency and algorithmic complexity of our procedures.
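
Illustrative sketch only (the paper's tests and thresholds are more refined): threshold the noisy image and check whether any connected cluster of above-threshold pixels is larger than pure noise would typically produce, which is the percolation-style reasoning described above. Both thresholds below are assumptions.

    import numpy as np
    from scipy import ndimage

    rng = np.random.default_rng(1)
    image = rng.standard_normal((128, 128))
    image[40:60, 40:60] += 1.5                 # hypothetical object of unknown shape

    binary = image > 1.0                       # pixel-wise threshold
    labels, n = ndimage.label(binary)          # connected clusters of marked pixels
    sizes = ndimage.sum(binary, labels, index=range(1, n + 1))

    cluster_threshold = 30                     # assumed critical cluster size
    largest = sizes.max() if n > 0 else 0
    print("signal detected:", largest > cluster_threshold)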

ei

Web [BibTex]

Hybrid PET/MRI of Intracranial Masses: Initial Experiences and Comparison to PET/CT

Boss, A., Bisdas, S., Kolb, A., Hofmann, M., Ernemann, U., Claussen, C., Pfannenberg, C., Pichler, B., Reimold, M., Stegger, L.

Journal of Nuclear Medicine, 51(8):1198-1205, August 2010 (article)

ei

Web DOI [BibTex]

Epidural ECoG Online Decoding of Arm Movement Intention in Hemiparesis

Gomez Rodriguez, M., Grosse-Wentrup, M., Peters, J., Naros, G., Hill, J., Schölkopf, B., Gharabaghi, A.

In Proceedings of the 1st ICPR Workshop on Brain Decoding: Pattern Recognition Challenges in Neuroimaging (ICPR WBD 2010), pages: 36-39, (Editors: J. Richiardi and D Van De Ville and C Davatzikos and J Mourao-Miranda), IEEE, Piscataway, NJ, USA, 1st Workshop on Brain Decoding (WBD), August 2010 (inproceedings)

Abstract
Brain-Computer Interfaces (BCI) that rely upon epidural electrocorticographic signals may become a promising tool for neurorehabilitation of patients with severe hemiparetic syndromes due to cerebrovascular, traumatic or tumor-related brain damage. Here, we show in a patient-based feasibility study that online classification of arm movement intention is possible. The intention to move or to rest can be identified with high accuracy (~90%), which is sufficient for BCI-guided neurorehabilitation. The observed spatial distribution of relevant features on the motor cortex indicates that cortical reorganization has been induced by the brain lesion. Low- and high-frequency components of the electrocorticographic power spectrum provide complementary information towards classification of arm movement intention.

ei

PDF Web DOI [BibTex]

libDAI: A Free and Open Source C++ Library for Discrete Approximate Inference in Graphical Models

Mooij, JM.

Journal of Machine Learning Research, 11, pages: 2169-2173, August 2010 (article)

Abstract
This paper describes the software package libDAI, a free & open source C++ library that provides implementations of various exact and approximate inference methods for graphical models with discrete-valued variables. libDAI supports directed graphical models (Bayesian networks) as well as undirected ones (Markov random fields and factor graphs). It offers various approximations of the partition sum, marginal probability distributions and maximum probability states. Parameter learning is also supported. A feature comparison with other open source software packages for approximate inference is given. libDAI is licensed under the GPL v2+ license and is available at http://www.libdai.org.

ei

PDF PDF [BibTex]

Convolutive blind source separation by efficient blind deconvolution and minimal filter distortion

Zhang, K., Chan, L.

Neurocomputing, 73(13-15):2580-2588, August 2010 (article)

Abstract
Convolutive blind source separation (BSS) usually encounters two difficulties: the filter indeterminacy in the recovered sources and the relatively high computational load. In this paper we propose an efficient method for convolutive BSS that deals with these two issues. It consists of two stages, namely, multichannel blind deconvolution (MBD) and learning the post-filters with the minimum filter distortion (MFD) principle. We present a computationally efficient approach to MBD in the first stage: a vector autoregression (VAR) model is first fitted to the data, admitting a closed-form solution and giving temporally independent errors; traditional independent component analysis (ICA) is then applied to these errors to produce the MBD results. In the second stage, the least linear reconstruction error (LLRE) constraint of the separation system, which was previously used to regularize the solutions to nonlinear ICA, enforces a MFD principle of the estimated mixing system for convolutive BSS. One can then easily learn the post-filters to preserve the temporal structure of the sources. We show that with this principle, each recovered source is approximately the principal component of the contributions of this source to all observations. Experimental results on both synthetic data and real room recordings show the good performance of this method.
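
A minimal sketch of the first stage only (a VAR fitted by least squares, then ordinary ICA on the temporally independent residuals); the toy convolutive mixture, the model order, and the omission of the MFD post-filter stage are simplifications, not the paper's full procedure.

    import numpy as np
    from sklearn.decomposition import FastICA

    def var_residuals(x, p):
        """x: (T, n) observations; residuals of an order-p VAR fitted by least squares."""
        T, n = x.shape
        lagged = np.hstack([x[p - k - 1:T - k - 1] for k in range(p)])
        target = x[p:]
        coef, *_ = np.linalg.lstsq(lagged, target, rcond=None)
        return target - lagged @ coef

    rng = np.random.default_rng(0)
    s = rng.laplace(size=(5000, 3))            # toy independent sources
    A0, A1 = rng.standard_normal((3, 3)), 0.5 * rng.standard_normal((3, 3))
    mix = s @ A0.T                             # crude convolutive mixture:
    mix[1:] += s[:-1] @ A1.T                   # instantaneous part plus a delayed copy

    resid = var_residuals(mix, p=5)
    s_hat = FastICA(n_components=3, random_state=0, max_iter=1000).fit_transform(resid)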

ei

PDF PDF DOI [BibTex]


Simulating Human Table Tennis with a Biomimetic Robot Setup

Mülling, K., Kober, J., Peters, J.

In From Animals to Animats 11, pages: 273-282, (Editors: Doncieux, S. , B. Girard, A. Guillot, J. Hallam, J.-A. Meyer, J.-B. Mouret), Springer, Berlin, Germany, 11th International Conference on Simulation of Adaptive Behavior (SAB), August 2010 (inproceedings)

Abstract
Playing table tennis is a difficult motor task which requires fast movements, accurate control and adaptation to task parameters. Although human beings see and move slower than most robot systems, they outperform all table tennis robots significantly. In this paper we study human table tennis and present a robot system that mimics human striking behavior. To this end, we model the human movements involved in hitting a table tennis ball using discrete movement stages and the virtual hitting point hypothesis. The resulting model is implemented on an anthropomorphic robot arm with 7 degrees of freedom using robotics methods. We verify the functionality of the model both in a physically realistic simulation of an anthropomorphic robot arm and on a real Barrett WAM.

ei

PDF Web DOI [BibTex]

Statistical image analysis and percolation theory

Langovoy, M., Wittich, O.

28th European Meeting of Statisticians (EMS), August 2010 (talk)

ei

PDF Web [BibTex]

Adapting Preshaped Grasping Movements Using Vision Descriptors

Kroemer, O., Detry, R., Piater, J., Peters, J.

In From Animals to Animats 11, pages: 156-166, (Editors: Doncieux, S. , B. Girard, A. Guillot, J. Hallam, J.-A. Meyer, J.-B. Mouret), Springer, Berlin, Germany, 11th International Conference on Simulation of Adaptive Behavior (SAB), August 2010 (inproceedings)

Abstract
Grasping is one of the most important abilities needed for future service robots. In the task of picking up an object from between clutter, traditional robotics approaches would determine a suitable grasping point and then use a movement planner to reach the goal. The planner would require precise and accurate information about the environment and long computation times, both of which are often not available. Therefore, methods are needed that execute grasps robustly even with imprecise information gathered only from standard stereo vision. We propose techniques that reactively modify the robot's learned motor primitives based on non-parametric potential fields centered on the Early Cognitive Vision descriptors. These allow both obstacle avoidance and the adapting of finger motions to the object's local geometry. The methods were tested on a real robot, where they led to improved adaptability and quality of grasping actions.

ei

PDF Web DOI [BibTex]

Trainable, Vision-Based Automated Home Cage Behavioral Phenotyping

Jhuang, H., Garrote, E., Edelman, N., Poggio, T., Steele, A., Serre, T.

In Measuring Behavior, August 2010 (inproceedings)

ps

pdf [BibTex]

Biased Feedback in Brain-Computer Interfaces

Barbero, A., Grosse-Wentrup, M.

Journal of NeuroEngineering and Rehabilitation, 7(34):1-4, July 2010 (article)

Abstract
Even though feedback is considered to play an important role in learning how to operate a brain-computer interface (BCI), to date no significant influence of feedback design on BCI-performance has been reported in literature. In this work, we adapt a standard motor-imagery BCI-paradigm to study how BCI-performance is affected by biasing the belief subjects have on their level of control over the BCI system. Our findings indicate that subjects already capable of operating a BCI are impeded by inaccurate feedback, while subjects normally performing on or close to chance level may actually benefit from an incorrect belief on their performance level. Our results imply that optimal feedback design in BCIs should take into account a subject's current skill level.

ei

PDF DOI [BibTex]

Inferring Networks of Diffusion and Influence

Gomez Rodriguez, M., Leskovec, J., Krause, A.

In Proceedings of the 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD 2010), pages: 1019-1028, (Editors: Rao, B. , B. Krishnapuram, A. Tomkins, Q. Yang), ACM Press, New York, NY, USA, 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), July 2010 (inproceedings)

Abstract
Information diffusion and virus propagation are fundamental processes taking place in networks. While it is often possible to directly observe when nodes become infected, observing individual transmissions (i.e., who infects whom or who influences whom) is typically very difficult. Furthermore, in many applications, the underlying network over which the diffusions and propagations spread is actually unobserved. We tackle these challenges by developing a method for tracing paths of diffusion and influence through networks and inferring the networks over which contagions propagate. Given the times when nodes adopt pieces of information or become infected, we identify the optimal network that best explains the observed infection times. Since the optimization problem is NP-hard to solve exactly, we develop an efficient approximation algorithm that scales to large datasets and in practice gives provably near-optimal performance. We demonstrate the effectiveness of our approach by tracing information cascades in a set of 170 million blogs and news articles over a one year period to infer how information flows through the online media space. We find that the diffusion network of news tends to have a core-periphery structure with a small set of core media sites that diffuse information to the rest of the Web. These sites tend to have stable circles of influence with more general news media sites acting as connectors between them.

ei

PDF Web DOI [BibTex]

Relative Entropy Policy Search

Peters, J., Mülling, K., Altun, Y.

In Proceedings of the Twenty-Fourth National Conference on Artificial Intelligence, pages: 1607-1612, (Editors: Fox, M. , D. Poole), AAAI Press, Menlo Park, CA, USA, Twenty-Fourth National Conference on Artificial Intelligence (AAAI-10), July 2010 (inproceedings)

Abstract
Policy search is a successful approach to reinforcement learning. However, policy improvements often result in the loss of information. Hence, it has been marred by premature convergence and implausible solutions. As first suggested in the context of covariant policy gradients (Bagnell and Schneider 2003), many of these problems may be addressed by constraining the information loss. In this paper, we continue this path of reasoning and suggest the Relative Entropy Policy Search (REPS) method. The resulting method differs significantly from previous policy gradient approaches and yields an exact update step. It works well on typical reinforcement learning benchmark problems.
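
As a rough, episode-based illustration of the KL-bounded update (the full formulation also involves state features and a value function), one can choose the temperature by minimizing a dual and reweight sampled returns; epsilon and the toy returns below are assumptions, not values from the paper.

    import numpy as np
    from scipy.optimize import minimize_scalar

    def reps_weights(returns, epsilon=0.1):
        r = returns - returns.max()                       # shift for numerical stability
        def dual(eta):                                    # dual in the temperature eta
            return eta * epsilon + eta * np.log(np.mean(np.exp(r / eta)))
        eta = minimize_scalar(dual, bounds=(1e-6, 1e6), method="bounded").x
        w = np.exp(r / eta)
        return w / w.sum()

    returns = np.random.default_rng(0).normal(size=50)    # one return per sampled episode
    w = reps_weights(returns)                             # weights for a weighted policy update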

am ei

PDF Web [BibTex]

Varieties of Justification in Machine Learning

Corfield, D.

Minds and Machines, 20(2):291-301, July 2010 (article)

Abstract
Forms of justification for inductive machine learning techniques are discussed and classified into four types. This is done with a view to introduce some of these techniques and their justificatory guarantees to the attention of philosophers, and to initiate a discussion as to whether they must be treated separately or rather can be viewed consistently from within a single framework.

ei

PDF DOI [BibTex]

Dirichlet Process Gaussian Mixture Models: Choice of the Base Distribution

Görür, D., Rasmussen, C.

Journal of Computer Science and Technology, 25(4):653-664, July 2010 (article)

Abstract
In the Bayesian mixture modeling framework it is possible to infer the necessary number of components to model the data and therefore it is unnecessary to explicitly restrict the number of components. Nonparametric mixture models sidestep the problem of finding the “correct” number of mixture components by assuming infinitely many components. In this paper Dirichlet process mixture (DPM) models are cast as infinite mixture models and inference using Markov chain Monte Carlo is described. The specification of the priors on the model parameters is often guided by mathematical and practical convenience. The primary goal of this paper is to compare the choice of conjugate and non-conjugate base distributions on a particular class of DPM models which is widely used in applications, the Dirichlet process Gaussian mixture model (DPGMM). We compare computational efficiency and modeling performance of DPGMM defined using a conjugate and a conditionally conjugate base distribution. We show that better density models can result from using a wider class of priors with no or only a modest increase in computational effort.
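
The paper studies MCMC inference under different base distributions; as a quick, unrelated illustration of the "infinitely many components, few of them active" behavior, scikit-learn's variational Dirichlet-process mixture can be used (truncation level and toy data below are arbitrary choices).

    import numpy as np
    from sklearn.mixture import BayesianGaussianMixture

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(-3, 0.5, (200, 2)),
                   rng.normal(3, 0.7, (200, 2))])

    dpgmm = BayesianGaussianMixture(
        n_components=20,                                  # truncation level, not the "true" K
        weight_concentration_prior_type="dirichlet_process",
        covariance_type="full",
    ).fit(X)

    print(np.round(dpgmm.weights_, 3))                    # mass concentrates on few components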

ei

PDF PDF DOI [BibTex]

Robust probabilistic superposition and comparison of protein structures

Mechelke, M., Habeck, M.

BMC Bioinformatics, 11(363):1-13, July 2010 (article)

ei

PDF DOI [BibTex]

Inferring deterministic causal relations

Daniusis, P., Janzing, D., Mooij, J., Zscheischler, J., Steudel, B., Zhang, K., Schölkopf, B.

In Proceedings of the 26th Conference on Uncertainty in Artificial Intelligence, pages: 143-150, (Editors: P Grünwald and P Spirtes), AUAI Press, Corvallis, OR, USA, UAI, July 2010 (inproceedings)

Abstract
We consider two variables that are related to each other by an invertible function. While it has previously been shown that the dependence structure of the noise can provide hints to determine which of the two variables is the cause, we presently show that even in the deterministic (noise-free) case, there are asymmetries that can be exploited for causal inference. Our method is based on the idea that if the function and the probability density of the cause are chosen independently, then the distribution of the effect will, in a certain sense, depend on the function. We provide a theoretical analysis of this method, showing that it also works in the low noise regime, and link it to information geometry. We report strong empirical results on various real-world data sets from different domains.
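
Hedged sketch of a slope-based estimator in the spirit of this method: rescale both variables to [0, 1] and prefer the direction with the smaller average log-slope. The reference measures and entropy-based variants follow the paper; the cubic toy mechanism is an assumption.

    import numpy as np

    def scale(v):
        return (v - v.min()) / (v.max() - v.min())

    def slope_score(x, y):
        order = np.argsort(x)
        x, y = x[order], y[order]
        dx, dy = np.diff(x), np.diff(y)
        keep = (dx > 0) & (dy != 0)
        return np.mean(np.log(np.abs(dy[keep]) / dx[keep]))

    rng = np.random.default_rng(0)
    x = rng.uniform(size=1000)
    y = x ** 3                                  # deterministic, invertible toy mechanism

    c_xy, c_yx = slope_score(scale(x), scale(y)), slope_score(scale(y), scale(x))
    print("inferred direction:", "X -> Y" if c_xy < c_yx else "Y -> X")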

ei

PDF Web [BibTex]

Cooperative Cuts: Graph Cuts with Submodular Edge Weights

Jegelka, S., Bilmes, J.

24th European Conference on Operational Research (EURO XXIV), July 2010 (talk)

Abstract
We introduce cooperative cut, a minimum cut problem whose cost is a submodular function on sets of edges: the cost of an edge that is added to a cut set depends on the edges in the set. Applications include probabilistic graphical models and image processing. We prove NP-hardness and a polynomial lower bound on the approximation factor, and upper bounds via four approximation algorithms based on different techniques. Our additional heuristics have attractive practical properties, e.g., relying only on standard min-cut computations. Both our algorithms and heuristics appear to do well in practice.

ei

PDF Web [BibTex]

Recent trends in classification of remote sensing data: active and semisupervised machine learning paradigms

Bruzzone, L., Persello, C.

In IEEE International Geoscience and Remote Sensing Symposium (IGARSS), pages: 3720-3723, IEEE, Piscataway, NJ, USA, July 2010 (inproceedings)

Abstract
This paper addresses the recent trends in machine learning methods for the automatic classification of remote sensing (RS) images. In particular, we focus on two new paradigms: semisupervised and active learning. These two paradigms allow one to address classification problems in the critical conditions where the available labeled training samples are limited. These operational conditions are very common in RS problems, due to the high cost and time associated with the collection of labeled samples. Semisupervised and active learning techniques allow one to enrich the initial training set information and to improve classification accuracy by exploiting unlabeled samples or requiring additional labeling phases from the user, respectively. The two aforementioned strategies are theoretically and experimentally analyzed considering SVM-based techniques in order to highlight advantages and disadvantages of both strategies.

ei

Web DOI [BibTex]

Results of the GREAT08 Challenge: An image analysis competition for cosmological lensing

Bridle, S., Balan, S., Bethge, M., Gentile, M., Harmeling, S., Heymans, C., Hirsch, M., Hosseini, R., Jarvis, M., Kirk, D., Kitching, T., Kuijken, K., Lewis, A., Paulin-Henriksson, S., Schölkopf, B., Velander, M., Voigt, L., Witherick, D., Amara, A., Bernstein, G., Courbin, F., Gill, M., Heavens, A., Mandelbaum, R., Massey, R., Moghaddam, B., Rassat, A., Refregier, A., Rhodes, J., Schrabback, T., Shawe-Taylor, J., Shmakova, M., van Waerbeke, L., Wittman, D.

Monthly Notices of the Royal Astronomical Society, 405(3):2044-2061, July 2010 (article)

Abstract
We present the results of the GREAT08 Challenge, a blind analysis challenge to infer weak gravitational lensing shear distortions from images. The primary goal was to stimulate new ideas by presenting the problem to researchers outside the shear measurement community. Six GREAT08 Team methods were presented at the launch of the Challenge and five additional groups submitted results during the 6 month competition. Participants analyzed 30 million simulated galaxies with a range in signal to noise ratio, point-spread function ellipticity, galaxy size, and galaxy type. The large quantity of simulations allowed shear measurement methods to be assessed at a level of accuracy suitable for currently planned future cosmic shear observations for the first time. Different methods perform well in different parts of simulation parameter space and come close to the target level of accuracy in several of these. A number of fresh ideas have emerged as a result of the Challenge including a re-examination of the process of combining information from different galaxies, which reduces the dependence on realistic galaxy modelling. The image simulations will become increasingly sophisticated in future GREAT challenges; meanwhile, the GREAT08 simulations remain as a benchmark for additional developments in shear measurement algorithms.

ei

Web DOI [BibTex]

Source Separation and Higher-Order Causal Analysis of MEG and EEG

Zhang, K., Hyvärinen, A.

In Uncertainty in Artificial Intelligence: Proceedings of the Twenty-Sixth Conference (UAI 2010), pages: 709-716, (Editors: Grünwald, P. , P. Spirtes), AUAI Press, Corvallis, OR, USA, 26th Conference on Uncertainty in Artificial Intelligence (UAI), July 2010 (inproceedings)

Abstract
Separation of the sources and analysis of their connectivity have been an important topic in EEG/MEG analysis. To solve this problem in an automatic manner, we propose a two-layer model, in which the sources are conditionally uncorrelated from each other, but not independent; the dependence is caused by the causality in their time-varying variances (envelopes). The model is identified in two steps. We first propose a new source separation technique which takes into account the autocorrelations (which may be time-varying) and time-varying variances of the sources. The causality in the envelopes is then discovered by exploiting a special kind of multivariate GARCH (generalized autoregressive conditional heteroscedasticity) model. The resulting causal diagram gives the effective connectivity between the separated sources; in our experimental results on MEG data, sources with similar functions are grouped together, with negative influences between groups, and the groups are connected via some interesting sources.

ei

PDF Web [BibTex]

Invariant Gaussian Process Latent Variable Models and Application in Causal Discovery

Zhang, K., Schölkopf, B., Janzing, D.

In Proceedings of the 26th Conference on Uncertainty in Artificial Intelligence, pages: 717-724, (Editors: P Grünwald and P Spirtes), AUAI Press, Corvallis, OR, USA, UAI, July 2010 (inproceedings)

Abstract
In nonlinear latent variable models or dynamic models, if we consider the latent variables as confounders (common causes), the noise dependencies imply further relations between the observed variables. Such models are then closely related to causal discovery in the presence of nonlinear confounders, which is a challenging problem. However, generally in such models the observation noise is assumed to be independent across data dimensions, and consequently the noise dependencies are ignored. In this paper we focus on the Gaussian process latent variable model (GPLVM), from which we develop an extended model called invariant GPLVM (IGPLVM), which can adapt to arbitrary noise covariances. With the Gaussian process prior put on a particular transformation of the latent nonlinear functions, instead of the original ones, the algorithm for IGPLVM involves almost the same computational loads as that for the original GPLVM. Besides its potential application in causal discovery, IGPLVM has the advantage that its estimated latent nonlinear manifold is invariant to any nonsingular linear transformation of the data. Experimental results on both synthetic and real-world data show its encouraging performance in nonlinear manifold learning and causal discovery.

ei

PDF Web [BibTex]

Remote Sensing Feature Selection by Kernel Dependence Estimation

Camps-Valls, G., Mooij, J., Schölkopf, B.

IEEE Geoscience and Remote Sensing Letters, 7(3):587-591, July 2010 (article)

Abstract
This letter introduces a nonlinear measure of independence between random variables for remote sensing supervised feature selection. The so-called Hilbert–Schmidt independence criterion (HSIC) is a kernel method for evaluating statistical dependence and it is based on computing the Hilbert–Schmidt norm of the cross-covariance operator of mapped samples in the corresponding Hilbert spaces. The HSIC empirical estimator is easy to compute and has good theoretical and practical properties. Rather than using this estimate for maximizing the dependence between the selected features and the class labels, we propose the more sensitive criterion of minimizing the associated HSIC p-value. Results in multispectral, hyperspectral, and SAR data feature selection for classification show the good performance of the proposed approach.
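
For reference, the (biased) empirical HSIC statistic underlying this criterion has the simple closed form tr(KHLH)/(n-1)^2 with centering matrix H; a short numpy sketch follows. Kernel bandwidths and toy data are assumptions, and the paper's selection criterion additionally works with the associated p-value rather than the raw statistic.

    import numpy as np

    def rbf_kernel(a, b, sigma):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))

    def hsic(x, y, sigma_x=1.0, sigma_y=1.0):
        n = x.shape[0]
        K, L = rbf_kernel(x, x, sigma_x), rbf_kernel(y, y, sigma_y)
        H = np.eye(n) - np.ones((n, n)) / n               # centering matrix
        return np.trace(K @ H @ L @ H) / (n - 1) ** 2

    rng = np.random.default_rng(0)
    x = rng.standard_normal((200, 1))
    print(hsic(x, x ** 2))                                # dependent: clearly nonzero
    print(hsic(x, rng.standard_normal((200, 1))))         # independent: close to zero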

ei

PDF Web DOI [BibTex]

Clustering stability: an overview

von Luxburg, U.

Foundations and Trends in Machine Learning, 2(3):235-274, July 2010 (article)

Abstract
A popular method for selecting the number of clusters is based on stability arguments: one chooses the number of clusters such that the corresponding clustering results are "most stable". In recent years, a series of papers has analyzed the behavior of this method from a theoretical point of view. However, the results are very technical and difficult to interpret for non-experts. In this paper we give a high-level overview about the existing literature on clustering stability. In addition to presenting the results in a slightly informal but accessible way, we relate them to each other and discuss their different implications.
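
Toy sketch of one common variant of the heuristic surveyed here: cluster two random subsamples for each candidate k, compare the labelings on their overlap, and prefer the most stable k. The subsample fraction, the agreement score, and k-means itself are illustrative choices, not prescriptions from the survey.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.metrics import adjusted_rand_score

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(c, 0.3, (100, 2)) for c in ((0, 0), (3, 0), (0, 3))])

    def stability(X, k, n_pairs=10, frac=0.8):
        scores = []
        for _ in range(n_pairs):
            i = rng.choice(len(X), int(frac * len(X)), replace=False)
            j = rng.choice(len(X), int(frac * len(X)), replace=False)
            overlap = np.intersect1d(i, j)
            a = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X[i])
            b = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X[j])
            scores.append(adjusted_rand_score(a.predict(X[overlap]),
                                              b.predict(X[overlap])))
        return float(np.mean(scores))

    print({k: round(stability(X, k), 2) for k in (2, 3, 4, 5)})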

ei

PDF DOI [BibTex]

Multi-Label Learning by Exploiting Label Dependency

Zhang, M., Zhang, K.

In Proceedings of the 16th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD 2010), pages: 999-1008, (Editors: Rao, B. , B. Krishnapuram, A. Tomkins, Q. Yang), ACM Press, New York, NY, USA, 16th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD), July 2010 (inproceedings)

Abstract
In multi-label learning, each training example is associated with a set of labels and the task is to predict the proper label set for the unseen example. Due to the tremendous (exponential) number of possible label sets, the task of learning from multi-label examples is rather challenging. Therefore, the key to successful multi-label learning is how to effectively exploit correlations between different labels to facilitate the learning process. In this paper, we propose to use a Bayesian network structure to efficiently encode the conditional dependencies of the labels as well as the feature set, with the feature set as the common parent of all labels. To make it practical, we give an approximate yet efficient procedure to find such a network structure. With the help of this network, multi-label learning is decomposed into a series of single-label classification problems, where a classifier is constructed for each label by incorporating its parental labels as additional features. Label sets of unseen examples are predicted recursively according to the label ordering given by the network. Extensive experiments on a broad range of data sets validate the effectiveness of our approach against other well-established methods.

ei

PDF Web DOI [BibTex]

Decoding complete reach and grasp actions from local primary motor cortex populations

(Featured in Nature’s Research Highlights (Nature, Vol 466, 29 July 2010))

Vargas-Irwin, C. E., Shakhnarovich, G., Yadollahpour, P., Mislow, J., Black, M. J., Donoghue, J. P.

J. of Neuroscience, 39(29):9659-9669, July 2010 (article)

ps

pdf pdf from publisher Movie 1 Movie 2 Project Page [BibTex]

VerroTouch: High-Frequency Acceleration Feedback for Telerobotic Surgery

Kuchenbecker, K. J., Gewirtz, J., McMahan, W., Standish, D., Martin, P., Bohren, J., Mendoza, P. J., Lee, D. I.

In Haptics: Generating and Perceiving Tangible Sensations, Proc. EuroHaptics, Part I, 6191, pages: 189-196, Lecture Notes in Computer Science, Springer, Amsterdam, Netherlands, July 2010, Oral presentation given by Kuchenbecker (inproceedings)

hi

[BibTex]

Laser cooling of a magnetically guided ultracold atom beam

Aghajani-Talesh, A., Falkenau, M., Volchkov, V., Trafford, L., Griesmaier, A., Pfau, T.

New Journal of Physics, 12, pages: 065018, IOP Publishing and Deutsche Physikalische Gesellschaft, June 2010 (article)

Abstract
We report on the transverse laser cooling of a magnetically guided beam of ultracold chromium atoms. Radial compression by a tapering of the guide is employed to adiabatically heat the beam. Inside the tapered section, heat is extracted from the atom beam by a two-dimensional (2D) optical molasses perpendicular to it, resulting in a significant increase in atomic phase space density. A magnetic offset field is applied to prevent optical pumping to untrapped states. Our results demonstrate that, by a suitable choice of the magnetic offset field, the cooling beam intensity and detuning, atom losses and longitudinal heating can be avoided. Final temperatures below 65 μK have been achieved, corresponding to an increase in phase space density in the guided beam by more than a factor of 30.

sf

DOI [BibTex]

Efficient Filter Flow for Space-Variant Multiframe Blind Deconvolution

Hirsch, M., Sra, S., Schölkopf, B., Harmeling, S.

In Proceedings of the 23rd IEEE Conference on Computer Vision and Pattern Recognition, pages: 607-614, IEEE, Piscataway, NJ, USA, CVPR, June 2010 (inproceedings)

Abstract
Motivated by the goal of facilitating space-variant blind deconvolution, we present a class of linear transformations that are expressive enough for space-variant filters but at the same time especially designed for efficient matrix-vector multiplications. Successful results on astronomical imaging through atmospheric turbulence and on noisy magnetic resonance images of constantly moving objects demonstrate the practical significance of our approach.

ei

PDF Web DOI [BibTex]

Grasping with Vision Descriptors and Motor Primitives

Kroemer, O., Detry, R., Piater, J., Peters, J.

In Proceedings of the 7th International Conference on Informatics in Control, Automation and Robotics (ICINCO 2010), pages: 47-54, (Editors: Filipe, J. , J. Andrade-Cetto, J.-L. Ferrier), SciTePress , Lisboa, Portugal, 7th International Conference on Informatics in Control, Automation and Robotics (ICINCO), June 2010 (inproceedings)

Abstract
Grasping is one of the most important abilities needed for future service robots. Given the task of picking up an object from between clutter, traditional robotics approaches would determine a suitable grasping point and then use a movement planner to reach the goal. The planner would require precise and accurate information about the environment and long computation times, both of which may not always be available. Therefore, methods for executing grasps are required, which perform well with information gathered from only standard stereo vision, and make only a few necessary assumptions about the task environment. We propose techniques that reactively modify the robot's learned motor primitives based on information derived from Early Cognitive Vision descriptors. The proposed techniques employ non-parametric potential fields centered on the Early Cognitive Vision descriptors to allow for curving hand trajectories around objects, and finger motions that adapt to the object's local geometry. The methods were tested on a real robot and found to allow for easier imitation learning of human movements and give a considerable improvement to the robot's performance in grasping tasks.

ei

PDF Web [BibTex]

An efficient divide-and-conquer cascade for nonlinear object detection

Lampert, CH.

In Proceedings of the Twenty-Third IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2010), pages: 1022-1029, IEEE, Piscataway, NJ, USA, Twenty-Third IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2010 (inproceedings)

Abstract
We introduce a method to accelerate the evaluation of object detection cascades with the help of a divide-and-conquer procedure in the space of candidate regions. Compared to the exhaustive procedure that thus far is the state-of-the-art for cascade evaluation, the proposed method requires fewer evaluations of the classifier functions, thereby speeding up the search. Furthermore, we show how the recently developed efficient subwindow search (ESS) procedure [11] can be integrated into the last stage of our method. This allows us to use our method to act not only as a faster procedure for cascade evaluation, but also as a tool to perform efficient branch-and-bound object detection with nonlinear quality functions, in particular kernelized support vector machines. Experiments on the PASCAL VOC 2006 dataset show an acceleration of more than 50% by our method compared to standard cascade evaluation.

ei

PDF Web DOI [BibTex]
