

2010


Reinforcement learning of full-body humanoid motor skills

Stulp, F., Buchli, J., Theodorou, E., Schaal, S.

In Humanoid Robots (Humanoids), 2010 10th IEEE-RAS International Conference on, pages: 405-410, December 2010, clmc (inproceedings)

Abstract
Applying reinforcement learning to humanoid robots is challenging because humanoids have a large number of degrees of freedom and state and action spaces are continuous. Thus, most reinforcement learning algorithms would become computationally infeasible and require a prohibitive amount of trials to explore such high-dimensional spaces. In this paper, we present a probabilistic reinforcement learning approach, which is derived from the framework of stochastic optimal control and path integrals. The algorithm, called Policy Improvement with Path Integrals (PI2), has a surprisingly simple form, has no open tuning parameters besides the exploration noise, is model-free, and performs numerically robustly in high dimensional learning problems. We demonstrate how PI2 is able to learn full-body motor skills on a 34-DOF humanoid robot. To demonstrate the generality of our approach, we also apply PI2 in the context of variable impedance control, where both planned trajectories and gain schedules for each joint are optimized simultaneously.
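
For readers who want a feel for the core of PI2, the following is a minimal, illustrative sketch of its probability-weighted parameter update in a generic black-box cost setting; the function names, the temperature, and the toy cost are placeholders and not taken from the paper.

```python
import numpy as np

def pi2_update(theta, cost_fn, n_rollouts=20, noise_std=0.1, temperature=10.0):
    """One PI2-style parameter update: explore with Gaussian noise, then average
    the perturbations weighted by the exponentiated (normalized) negative cost."""
    eps = noise_std * np.random.randn(n_rollouts, theta.size)   # exploration noise
    costs = np.array([cost_fn(theta + e) for e in eps])         # one cost per rollout
    s = (costs - costs.min()) / (costs.max() - costs.min() + 1e-12)
    w = np.exp(-temperature * s)                                # low-cost rollouts dominate
    w /= w.sum()
    return theta + w @ eps                                      # probability-weighted averaging

# Toy usage: minimize a quadratic "movement cost" over 34 hypothetical parameters.
theta = np.zeros(34)
for _ in range(100):
    theta = pi2_update(theta, cost_fn=lambda th: np.sum((th - 1.0) ** 2))
```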

am

link (url) [BibTex]

Learning Table Tennis with a Mixture of Motor Primitives

Mülling, K., Kober, J., Peters, J.

In Proceedings of the 10th IEEE-RAS International Conference on Humanoid Robots (Humanoids 2010), pages: 411-416, IEEE, Piscataway, NJ, USA, 10th IEEE-RAS International Conference on Humanoid Robots (Humanoids), December 2010 (inproceedings)

Abstract
Table tennis is a sufficiently complex motor task for studying complete skill learning systems. It consists of several elementary motions and requires fast movements, accurate control, and online adaptation. To represent the elementary movements needed for robot table tennis, we rely on dynamic systems motor primitives (DMP). While such DMPs have been successfully used for learning a variety of simple motor tasks, they only represent single elementary actions. In order to select and generalize among different striking movements, we present a new approach, called Mixture of Motor Primitives that uses a gating network to activate appropriate motor primitives. The resulting policy enables us to select among the appropriate motor primitives as well as to generalize between them. In order to obtain a fully learned robot table tennis setup, we also address the problem of predicting the necessary context information, i.e., the hitting point in time and space where we want to hit the ball. We show that the resulting setup was capable of playing rudimentary table tennis using an anthropomorphic robot arm.
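
A hedged sketch of the gating idea described above: context-dependent weights select and blend among stored primitives. The Gaussian gating and the flat primitive parameterization used here are simplifying assumptions for illustration, not the paper's exact model.

```python
import numpy as np

def gate_weights(context, centers, bandwidth=1.0):
    """Gating network: soft responsibility of each stored primitive for the
    current context (e.g., the predicted hitting point in time and space)."""
    d2 = np.sum((centers - context) ** 2, axis=1)
    w = np.exp(-0.5 * d2 / bandwidth ** 2)
    return w / w.sum()

def mixed_primitive_output(context, centers, primitive_params):
    """Generalize between striking movements by blending primitive parameters
    with the gating weights."""
    w = gate_weights(context, centers)
    return w @ primitive_params          # weighted combination of primitive parameters

# Toy usage: three primitives stored for three hitting contexts, 10 parameters each.
centers = np.array([[0.0, 0.2, 0.3], [0.1, 0.0, 0.4], [-0.1, 0.1, 0.5]])
params = np.random.randn(3, 10)
blend = mixed_primitive_output(np.array([0.05, 0.1, 0.35]), centers, params)
```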

ei

Web DOI [BibTex]

Learning an interactive segmentation system

Nickisch, H., Rother, C., Kohli, P., Rhemann, C.

In Proceedings of the Seventh Indian Conference on Computer Vision, Graphics and Image Processing (ICVGIP 2010), pages: 274-281, (Editors: Chellapa, R. , P. Anandan, A. N. Rajagopalan, P. J. Narayanan, P. Torr), ACM Press, New York, NY, USA, Seventh Indian Conference on Computer Vision, Graphics and Image Processing (ICVGIP), December 2010 (inproceedings)

Abstract
Many successful applications of computer vision to image or video manipulation are interactive by nature. However, parameters of such systems are often trained neglecting the user. Traditionally, interactive systems have been treated in the same manner as their fully automatic counterparts. Their performance is evaluated by computing the accuracy of their solutions under some fixed set of user interactions. This paper proposes a new evaluation and learning method which brings the user in the loop. It is based on the use of an active robot user -- a simulated model of a human user. We show how this approach can be used to evaluate and learn parameters of state-of-the-art interactive segmentation systems. We also show how simulated user models can be integrated into the popular max-margin method for parameter learning and propose an algorithm to solve the resulting optimisation problem.

ei

PDF Web DOI [BibTex]

Using an Infinite Von Mises-Fisher Mixture Model to Cluster Treatment Beam Directions in External Radiation Therapy

Bangert, M., Hennig, P., Oelfke, U.

In pages: 746-751, (Editors: Draghici, S. , T.M. Khoshgoftaar, V. Palade, W. Pedrycz, M.A. Wani, X. Zhu), IEEE, Piscataway, NJ, USA, Ninth International Conference on Machine Learning and Applications (ICMLA), December 2010 (inproceedings)

Abstract
We present a method for fully automated selection of treatment beam ensembles for external radiation therapy. We reformulate the beam angle selection problem as a clustering problem of locally ideal beam orientations distributed on the unit sphere. For this purpose we construct an infinite mixture of von Mises-Fisher distributions, which is suited in general for density estimation from data on the D-dimensional sphere. Using a nonparametric Dirichlet process prior, our model infers probability distributions over both the number of clusters and their parameter values. We describe an efficient Markov chain Monte Carlo inference algorithm for posterior inference from experimental data in this model. The performance of the suggested beam angle selection framework is illustrated for one intra-cranial, pancreas, and prostate case each. The infinite von Mises-Fisher mixture model (iMFMM) creates between 18 and 32 clusters, depending on the patient anatomy. This suggests using the iMFMM directly for beam ensemble selection in robotic radio surgery, or generating low-dimensional input for both subsequent optimization of trajectories for arc therapy and beam ensemble selection for conventional radiation therapy.
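
As a small illustration of the building block used here, the von Mises-Fisher density on the unit sphere in R^3 has a simple closed-form normalizer (the general D-dimensional case involves Bessel functions); the function and parameter names below are illustrative only.

```python
import numpy as np

def vmf_pdf_3d(x, mu, kappa):
    """von Mises-Fisher density on the unit sphere in R^3.
    x, mu: unit vectors; kappa: concentration (kappa -> 0 approaches uniform)."""
    c3 = kappa / (4.0 * np.pi * np.sinh(kappa))      # closed-form normalizer for D = 3
    return c3 * np.exp(kappa * np.dot(mu, x))

# Toy usage: density of a candidate beam direction under one mixture component.
mu = np.array([0.0, 0.0, 1.0])
x = np.array([0.0, np.sin(0.1), np.cos(0.1)])
print(vmf_pdf_3d(x, mu, kappa=20.0))
```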

ei pn

Web DOI [BibTex]

Online algorithms for submodular minimization with combinatorial constraints

Jegelka, S., Bilmes, J.

In pages: 1-6, NIPS Workshop on Discrete Optimization in Machine Learning: Structures, Algorithms and Applications (DISCML), December 2010 (inproceedings)

Abstract
Building on recent results for submodular minimization with combinatorial constraints, and on online submodular minimization, we address online approximation algorithms for submodular minimization with combinatorial constraints. We discuss two types of algorithms and outline approximation algorithms that integrate into those.

ei

PDF Web [BibTex]

Multi-agent random walks for local clustering

Alamgir, M., von Luxburg, U.

In Proceedings of the IEEE International Conference on Data Mining (ICDM 2010), pages: 18-27, (Editors: Webb, G. I., B. Liu, C. Zhang, D. Gunopulos, X. Wu), IEEE, Piscataway, NJ, USA, IEEE International Conference on Data Mining (ICDM), December 2010 (inproceedings)

Abstract
We consider the problem of local graph clustering where the aim is to discover the local cluster corresponding to a point of interest. The most popular algorithms to solve this problem start a random walk at the point of interest and let it run until some stopping criterion is met. The vertices visited are then considered the local cluster. We suggest a more powerful alternative, the multi-agent random walk. It consists of several “agents” connected by a fixed rope of length l. All agents move independently like a standard random walk on the graph, but they are constrained to have distance at most l from each other. The main insight is that for several agents it is harder to simultaneously travel over the bottleneck of a graph than for just one agent. Hence, the multi-agent random walk has less tendency to mistakenly merge two different clusters than the original random walk. In our paper we analyze the multi-agent random walk theoretically and compare it experimentally to the major local graph clustering algorithms from the literature. We find that our multi-agent random walk consistently outperforms these algorithms.
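
The following is a minimal, assumption-laden sketch of a multi-agent random walk on a toy graph: one agent at a time proposes a standard random-walk step, which is accepted only if all pairwise graph distances stay within the rope length l. The graph, the one-agent-at-a-time schedule, and the parameters are illustrative, not those analyzed in the paper.

```python
import numpy as np

def shortest_paths(adj):
    """All-pairs shortest-path lengths (Floyd-Warshall) for a small graph."""
    n = adj.shape[0]
    dist = np.where(adj > 0, 1.0, np.inf)
    np.fill_diagonal(dist, 0.0)
    for k in range(n):
        dist = np.minimum(dist, dist[:, [k]] + dist[[k], :])
    return dist

def multi_agent_step(positions, adj, dist, rope_length, rng):
    """Move one randomly chosen agent to a random neighbor, accepting the move
    only if every pairwise graph distance stays within the rope length."""
    i = rng.integers(len(positions))
    proposal = rng.choice(np.flatnonzero(adj[positions[i]]))
    if all(dist[proposal, positions[j]] <= rope_length
           for j in range(len(positions)) if j != i):
        positions[i] = proposal
    return positions

# Toy usage: two agents tied by a rope of length 2 on a ring graph with one chord.
rng = np.random.default_rng(0)
n = 12
adj = np.zeros((n, n), dtype=int)
for i in range(n):
    adj[i, (i + 1) % n] = adj[(i + 1) % n, i] = 1
adj[0, 6] = adj[6, 0] = 1
dist = shortest_paths(adj)
positions = [0, 0]
for _ in range(5000):
    positions = multi_agent_step(positions, adj, dist, rope_length=2, rng=rng)
```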

ei

PDF Web DOI [BibTex]

Effects of Packet Losses to Stability in Bilateral Teleoperation Systems

Hong, A., Cho, JH., Lee, DY.

In pages: 1043-1044, Korean Society of Mechanical Engineers, Seoul, South Korea, KSME Fall Annual Meeting, November 2010 (inproceedings)

ei

[BibTex]

Combining Real-Time Brain-Computer Interfacing and Robot Control for Stroke Rehabilitation

Gomez Rodriguez, M., Peters, J., Hill, J., Gharabaghi, A., Schölkopf, B., Grosse-Wentrup, M.

In Proceedings of SIMPAR 2010 Workshops, pages: 59-63, Brain-Computer Interface Workshop at SIMPAR: 2nd International Conference on Simulation, Modeling, and Programming for Autonomous Robots, November 2010 (inproceedings)

Abstract
Brain-Computer Interfaces based on electrocorticography (ECoG) or electroencephalography (EEG), in combination with robot-assisted active physical therapy, may support traditional rehabilitation procedures for patients with severe motor impairment due to cerebrovascular brain damage caused by stroke. In this short report, we briefly review the state of the art in this exciting new field, give an overview of the work carried out at the Max Planck Institute for Biological Cybernetics and the University of Tübingen, and discuss challenges that need to be addressed in order to move from basic research to clinical studies.

ei

PDF Web [BibTex]

Closing the sensorimotor loop: Haptic feedback facilitates decoding of arm movement imagery

Gomez Rodriguez, M., Peters, J., Hill, J., Schölkopf, B., Gharabaghi, A., Grosse-Wentrup, M.

In Proceedings of the IEEE International Conference on Systems, Man and Cybernetics (SMC 2010), pages: 121-126, IEEE, Piscataway, NJ, USA, IEEE International Conference on Systems, Man and Cybernetics (SMC), October 2010 (inproceedings)

Abstract
Brain-Computer Interfaces (BCIs) in combination with robot-assisted physical therapy may become a valuable tool for neurorehabilitation of patients with severe hemiparetic syndromes due to cerebrovascular brain damage (stroke) and other neurological conditions. A key aspect of this approach is reestablishing the disrupted sensorimotor feedback loop, i.e., determining the intended movement using a BCI and helping a human with impaired motor function to move the arm using a robot. It has not been studied yet, however, how artificially closing the sensorimotor feedback loop affects the BCI decoding performance. In this article, we investigate this issue in six healthy subjects, and present evidence that haptic feedback facilitates the decoding of arm movement intention. The results provide evidence of the feasibility of future rehabilitative efforts combining robot-assisted physical therapy with BCIs.

ei

PDF Web DOI [BibTex]

Learning as a key ability for Human-Friendly Robots

Peters, J., Kober, J., Mülling, K., Krömer, O., Nguyen-Tuong, D., Wang, Z., Gomez Rodriguez, M., Grosse-Wentrup, M.

In pages: 1-2, 3rd Workshop for Young Researchers on Human-Friendly Robotics (HFR), October 2010 (inproceedings)

ei

Web [BibTex]

Learning Probabilistic Discriminative Models of Grasp Affordances under Limited Supervision

Erkan, A., Kroemer, O., Detry, R., Altun, Y., Piater, J., Peters, J.

In Proceedings of the 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2010), pages: 1586-1591, IEEE, Piscataway, NJ, USA, 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), October 2010 (inproceedings)

Abstract
This paper addresses the problem of learning and efficiently representing discriminative probabilistic models of object-specific grasp affordances particularly when the number of labeled grasps is extremely limited. The proposed method does not require an explicit 3D model but rather learns an implicit manifold on which it defines a probability distribution over grasp affordances. We obtain hypothetical grasp configurations from visual descriptors that are associated with the contours of an object. While these hypothetical configurations are abundant, labeled configurations are very scarce as these are acquired via time-costly experiments carried out by the robot. Kernel logistic regression (KLR) via joint kernel maps is trained to map the hypothesis space of grasps into continuous class-conditional probability values indicating their achievability. We propose a soft-supervised extension of KLR and a framework to combine the merits of semi-supervised and active learning approaches to tackle the scarcity of labeled grasps. Experimental evaluation shows that combining active and semi-supervised learning is favorable in the existence of an oracle. Furthermore, semi-supervised learning outperforms supervised learning, particularly when the labeled data is very limited.

ei

PDF Web DOI [BibTex]

A biomimetic approach to robot table tennis

Mülling, K., Kober, J., Peters, J.

In Proceedings of the 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2010), pages: 1921-1926, IEEE, Piscataway, NJ, USA, 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), October 2010 (inproceedings)

Abstract
Although human beings see and move slower than table tennis or baseball robots, they manage to outperform such robot systems. One important aspect of this better performance is the human movement generation. In this paper, we study trajectory generation for table tennis from a biomimetic point of view. Our focus lies on generating efficient stroke movements capable of mastering variations in the environmental conditions, such as changing ball speed, spin and position. We study table tennis from a human motor control point of view. To make headway towards this goal, we construct a trajectory generator for a single stroke using the discrete movement stages hypothesis and the virtual hitting point hypothesis to create a model that produces a human-like stroke movement. We verify the functionality of the trajectory generator for a single forehand stroke both in a simulation and using a real Barrett WAM.

ei

Web DOI [BibTex]

Weakly-Paired Maximum Covariance Analysis for Multimodal Dimensionality Reduction and Transfer Learning

Lampert, C., Kroemer, O.

In Computer Vision – ECCV 2010, pages: 566-579, (Editors: Daniilidis, K. , P. Maragos, N. Paragios), Springer, Berlin, Germany, 11th European Conference on Computer Vision, September 2010 (inproceedings)

Abstract
We study the problem of multimodal dimensionality reduction assuming that data samples can be missing at training time, and not all data modalities may be present at application time. Maximum covariance analysis, as a generalization of PCA, has many desirable properties, but its application to practical problems is limited by its need for perfectly paired data. We overcome this limitation by a latent variable approach that allows working with weakly paired data and is still able to efficiently process large datasets using standard numerical routines. The resulting weakly paired maximum covariance analysis often finds better representations than alternative methods, as we show in two exemplary tasks: texture discrimination and transfer learning.

ei

PDF Web DOI [BibTex]

Simple algorithmic modifications for improving blind steganalysis performance

Schwamberger, V., Franz, M.

In Proceedings of the 12th ACM workshop on Multimedia and Security (MM&Sec 2010), pages: 225-230, (Editors: Campisi, P. , J. Dittmann, S. Craver), ACM Press, New York, NY, USA, 12th ACM Workshop on Multimedia and Security (MM&Sec), September 2010 (inproceedings)

Abstract
Most current algorithms for blind steganalysis of images are based on a two-stage approach: first, features are extracted in order to reduce dimensionality and to highlight potential manipulations; second, a classifier trained on pairs of clean and stego images finds a decision rule for these features to detect stego images. The feature vector components might vary significantly in their values; hence, normalization of the feature vectors is crucial. Furthermore, most classifiers contain free parameters, and an automatic model selection step has to be carried out to adapt these parameters. However, the commonly used cross-validation destroys some information needed by the classifier because of the arbitrary splitting of image pairs (stego and clean version) in the training set. In this paper, we propose simple modifications to the normalization and to standard cross-validation. In our experiments, we show that these methods lead to a significant improvement of the standard blind steganalyzer of Lyu and Farid.
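
A small sketch of the kind of pair-preserving split the abstract argues for, assuming features are stored with each clean/stego pair in adjacent rows; the layout and fold assignment are illustrative, not the authors' implementation.

```python
import numpy as np

def paired_folds(n_pairs, n_folds=5, seed=0):
    """Assign each (clean, stego) image pair to a single fold, so both versions
    of an image always end up on the same side of the train/validation split."""
    rng = np.random.default_rng(seed)
    pair_fold = rng.permutation(n_pairs) % n_folds
    to_rows = lambda pairs: np.concatenate([2 * pairs, 2 * pairs + 1])
    for k in range(n_folds):
        # Pair i occupies rows 2*i (clean) and 2*i + 1 (stego) in the feature matrix.
        yield to_rows(np.flatnonzero(pair_fold != k)), to_rows(np.flatnonzero(pair_fold == k))

# Toy usage: 100 image pairs -> 200 feature vectors.
X = np.random.randn(200, 50)
for train_idx, val_idx in paired_folds(n_pairs=100):
    X_train, X_val = X[train_idx], X[val_idx]    # model selection happens here
```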

ei

PDF Web DOI [BibTex]

Semi-supervised Remote Sensing Image Classification via Maximum Entropy

Erkan, A., Camps-Valls, G., Altun, Y.

In Proceedings of the 2010 IEEE International Workshop on Machine Learning for Signal Processing (MLSP 2010), pages: 313-318, IEEE, Piscataway, NJ, USA, 2010 IEEE International Workshop on Machine Learning for Signal Processing (MLSP), September 2010 (inproceedings)

Abstract
Remote sensing image segmentation requires multi-category classification typically with a limited number of labeled training samples. While semi-supervised learning (SSL) has emerged as a sub-field of machine learning to tackle the scarcity of labeled samples, most SSL algorithms to date have had trade-offs in terms of scalability and/or applicability to multi-categorical data. In this paper, we evaluate semi-supervised logistic regression (SLR), a recent information theoretic semi-supervised algorithm, for remote sensing image classification problems. SLR is a probabilistic discriminative classifier and a specific instance of the generalized maximum entropy framework with a convex loss function. Moreover, the method is inherently multi-class and easy to implement. These characteristics make SLR a strong alternative to the widely used semi-supervised variants of SVM for the segmentation of remote sensing images. We demonstrate the competitiveness of SLR in multispectral, hyperspectral and radar image classification.

ei

PDF Web DOI [BibTex]

MLSP Competition, 2010: Description of first place method

Leiva, JM., Martens, SMM.

In Proceedings of the 2010 IEEE International Workshop on Machine Learning for Signal Processing (MLSP 2010), pages: 112-113, IEEE, Piscataway, NJ, USA, 2010 IEEE International Workshop on Machine Learning for Signal Processing (MLSP), September 2010 (inproceedings)

Abstract
Our winning approach to the 2010 MLSP Competition is based on a generative method for P300-based BCI decoding, successfully applied to visual spellers. Here, generative has a double meaning. On the one hand, we work with a probability density model of the data given the target/non-target labeling, as opposed to discriminative (e.g. SVM-based) methods. On the other hand, the natural consequence of this approach is a decoding based on comparing the observation to templates generated from the data.

ei

Web DOI [BibTex]

Multiframe Blind Deconvolution, Super-Resolution, and Saturation Correction via Incremental EM

Harmeling, S., Sra, S., Hirsch, M., Schölkopf, B.

In Proceedings of the 17th International Conference on Image Processing (ICIP 2010), pages: 3313-3316, IEEE, Piscataway, NJ, USA, 17th International Conference on Image Processing (ICIP), September 2010 (inproceedings)

Abstract
We formulate the multiframe blind deconvolution problem in an incremental expectation maximization (EM) framework. Beyond deconvolution, we show how to use the same framework to address: (i) super-resolution despite noise and unknown blurring; (ii) saturation correction of overexposed pixels that confound image restoration. The abundance of data allows us to address both of these without using explicit image or blur priors. The end result is a simple but effective algorithm with no hyperparameters. We apply this algorithm to real-world images from astronomy and to super-resolution tasks: for both, our algorithm yields increased resolution and deconvolved images simultaneously.

ei

PDF Web DOI [BibTex]

Gaussian Mixture Modeling with Gaussian Process Latent Variable Models

Nickisch, H., Rasmussen, C.

In Pattern Recognition, pages: 271-282, (Editors: Goesele, M. , S. Roth, A. Kuijper, B. Schiele, K. Schindler), Springer, Berlin, Germany, 32nd Annual Symposium of the German Association for Pattern Recognition (DAGM), September 2010 (inproceedings)

Abstract
Density modeling is notoriously difficult for high dimensional data. One approach to the problem is to search for a lower dimensional manifold which captures the main characteristics of the data. Recently, the Gaussian Process Latent Variable Model (GPLVM) has successfully been used to find low dimensional manifolds in a variety of complex data. The GPLVM consists of a set of points in a low dimensional latent space, and a stochastic map to the observed space. We show how it can be interpreted as a density model in the observed space. However, the GPLVM is not trained as a density model and therefore yields bad density estimates. We propose a new training strategy and obtain improved generalisation performance and better density estimates in comparative evaluations on several benchmark data sets.

ei

PDF Web DOI [BibTex]

A Nearest Neighbor Data Structure for Graphics Hardware

Cayton, L.

In Proceedings of the First International Workshop on Accelerating Data Management Systems Using Modern Processor and Storage Architectures (ADMS 2010), pages: 1-6, First International Workshop on Accelerating Data Management Systems Using Modern Processor and Storage Architectures (ADMS), September 2010 (inproceedings)

Abstract
Nearest neighbor search is a core computational task in database systems and throughout data analysis. It is also a major computational bottleneck, and hence an enormous body of research has been devoted to data structures and algorithms for accelerating the task. Recent advances in graphics hardware provide tantalizing speedups on a variety of tasks and suggest an alternate approach to the problem: simply run brute force search on a massively parallel system. In this paper we marry the approaches with a novel data structure that can effectively make use of parallel systems such as graphics cards. The architectural complexities of graphics hardware - the high degree of parallelism, the small amount of memory relative to instruction throughput, and the single instruction, multiple data design - present significant challenges for data structure design. Furthermore, the brute force approach applies perfectly to graphics hardware, leading one to question whether an intelligent algorithm or data structure can even hope to outperform this basic approach. Despite these challenges and misgivings, we demonstrate that our data structure - termed a Random Ball Cover - provides significant speedups over the GPU-based brute force approach.
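
A rough, CPU-only sketch of the Random-Ball-Cover idea (random representatives, per-ball assignment, and queries that brute-force only the closest few balls). The actual data structure and its GPU kernels differ; everything here is a simplified assumption for illustration.

```python
import numpy as np

def build_rbc(X, n_reps=32, seed=0):
    """Simplified Random-Ball-Cover-style index: pick random representatives and
    assign every point to its nearest representative (one "ball" each)."""
    rng = np.random.default_rng(seed)
    reps = rng.choice(len(X), size=n_reps, replace=False)
    d = np.linalg.norm(X[:, None, :] - X[reps][None, :, :], axis=2)
    owner = d.argmin(axis=1)                     # ball membership of each point
    return reps, owner

def query_rbc(q, X, reps, owner, n_probe=4):
    """Query: brute-force only over the few balls whose representatives are closest."""
    d_rep = np.linalg.norm(X[reps] - q, axis=1)
    probe = np.argsort(d_rep)[:n_probe]
    cand = np.flatnonzero(np.isin(owner, probe))
    return cand[np.argmin(np.linalg.norm(X[cand] - q, axis=1))]

# Toy usage
X = np.random.randn(5000, 16)
reps, owner = build_rbc(X)
nn = query_rbc(np.random.randn(16), X, reps, owner)
```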

ei

PDF Web [BibTex]

Epidural ECoG Online Decoding of Arm Movement Intention in Hemiparesis

Gomez Rodriguez, M., Grosse-Wentrup, M., Peters, J., Naros, G., Hill, J., Schölkopf, B., Gharabaghi, A.

In Proceedings of the 1st ICPR Workshop on Brain Decoding: Pattern Recognition Challenges in Neuroimaging (ICPR WBD 2010), pages: 36-39, (Editors: J. Richiardi and D Van De Ville and C Davatzikos and J Mourao-Miranda), IEEE, Piscataway, NJ, USA, 1st Workshop on Brain Decoding (WBD), August 2010 (inproceedings)

Abstract
Brain-Computer Interfaces (BCI) that rely upon epidural electrocorticographic signals may become a promising tool for neurorehabilitation of patients with severe hemiparetic syndromes due to cerebrovascular, traumatic or tumor-related brain damage. Here, we show in a patient-based feasibility study that online classification of arm movement intention is possible. The intention to move or to rest can be identified with high accuracy (~90 %), which is sufficient for BCI-guided neurorehabilitation. The observed spatial distribution of relevant features on the motor cortex indicates that cortical reorganization has been induced by the brain lesion. Low- and high-frequency components of the electrocorticographic power spectrum provide complementary information towards classification of arm movement intention.

ei

PDF Web DOI [BibTex]

Simulating Human Table Tennis with a Biomimetic Robot Setup

Mülling, K., Kober, J., Peters, J.

In From Animals to Animats 11, pages: 273-282, (Editors: Doncieux, S. , B. Girard, A. Guillot, J. Hallam, J.-A. Meyer, J.-B. Mouret), Springer, Berlin, Germany, 11th International Conference on Simulation of Adaptive Behavior (SAB), August 2010 (inproceedings)

Abstract
Playing table tennis is a difficult motor task which requires fast movements, accurate control and adaptation to task parameters. Although human beings see and move slower than most robot systems, they outperform all table tennis robots significantly. In this paper we study human table tennis and present a robot system that mimics human striking behavior. Therefore we model the human movements involved in hitting a table tennis ball using discrete movement stages and the virtual hitting point hypothesis. The resulting model is implemented on an anthropomorphic robot arm with 7 degrees of freedom using robotics methods. We verify the functionality of the model both in a physically realistic simulation of an anthropomorphic robot arm and on a real Barrett WAM.

ei

PDF Web DOI [BibTex]

Adapting Preshaped Grasping Movements Using Vision Descriptors

Kroemer, O., Detry, R., Piater, J., Peters, J.

In From Animals to Animats 11, pages: 156-166, (Editors: Doncieux, S. , B. Girard, A. Guillot, J. Hallam, J.-A. Meyer, J.-B. Mouret), Springer, Berlin, Germany, 11th International Conference on Simulation of Adaptive Behavior (SAB), August 2010 (inproceedings)

Abstract
Grasping is one of the most important abilities needed for future service robots. In the task of picking up an object from between clutter, traditional robotics approaches would determine a suitable grasping point and then use a movement planner to reach the goal. The planner would require precise and accurate information about the environment and long computation times, both of which are often not available. Therefore, methods are needed that execute grasps robustly even with imprecise information gathered only from standard stereo vision. We propose techniques that reactively modify the robot's learned motor primitives based on non-parametric potential fields centered on the Early Cognitive Vision descriptors. These allow both obstacle avoidance and the adaptation of finger motions to the object's local geometry. The methods were tested on a real robot, where they led to improved adaptability and quality of grasping actions.

ei

PDF Web DOI [BibTex]

Inferring Networks of Diffusion and Influence

Gomez Rodriguez, M., Leskovec, J., Krause, A.

In Proceedings of the 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD 2010), pages: 1019-1028, (Editors: Rao, B. , B. Krishnapuram, A. Tomkins, Q. Yang), ACM Press, New York, NY, USA, 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), July 2010 (inproceedings)

Abstract
Information diffusion and virus propagation are fundamental processes taking place in networks. While it is often possible to directly observe when nodes become infected, observing individual transmissions (i.e., who infects whom or who influences whom) is typically very difficult. Furthermore, in many applications, the underlying network over which the diffusions and propagations spread is actually unobserved. We tackle these challenges by developing a method for tracing paths of diffusion and influence through networks and inferring the networks over which contagions propagate. Given the times when nodes adopt pieces of information or become infected, we identify the optimal network that best explains the observed infection times. Since the optimization problem is NP-hard to solve exactly, we develop an efficient approximation algorithm that scales to large datasets and in practice gives provably near-optimal performance. We demonstrate the effectiveness of our approach by tracing information cascades in a set of 170 million blogs and news articles over a one-year period to infer how information flows through the online media space. We find that the diffusion network of news tends to have a core-periphery structure with a small set of core media sites that diffuse information to the rest of the Web. These sites tend to have stable circles of influence with more general news media sites acting as connectors between them.

ei

PDF Web DOI Project Page [BibTex]

Relative Entropy Policy Search

Peters, J., Mülling, K., Altun, Y.

In Proceedings of the Twenty-Fourth National Conference on Artificial Intelligence, pages: 1607-1612, (Editors: Fox, M. , D. Poole), AAAI Press, Menlo Park, CA, USA, Twenty-Fourth National Conference on Artificial Intelligence (AAAI-10), July 2010 (inproceedings)

Abstract
Policy search is a successful approach to reinforcement learning. However, policy improvements often result in the loss of information. Hence, it has been marred by premature convergence and implausible solutions. As first suggested in the context of covariant policy gradients (Bagnell and Schneider 2003), many of these problems may be addressed by constraining the information loss. In this paper, we continue this path of reasoning and suggest the Relative Entropy Policy Search (REPS) method. The resulting method differs significantly from previous policy gradient approaches and yields an exact update step. It works well on typical reinforcement learning benchmark problems.
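
A hedged sketch of the sample reweighting at the heart of relative-entropy-bounded policy updates, in the simplified episodic/bandit setting (the full REPS formulation additionally matches state-distribution features); epsilon, the grid search over the temperature, and the function names are illustrative.

```python
import numpy as np

def reps_weights(returns, epsilon=0.5):
    """Episodic REPS-style reweighting: choose the temperature eta minimizing the
    dual of the KL-bounded problem, then weight each sampled rollout by exp(R/eta)."""
    r = returns - returns.max()                       # shift for numerical stability
    def dual(eta):                                    # g(eta) = eta*eps + eta*log E[exp(R/eta)]
        return eta * epsilon + eta * np.log(np.mean(np.exp(r / eta)))
    etas = np.logspace(-3, 3, 500)
    eta = etas[np.argmin([dual(e) for e in etas])]
    w = np.exp(r / eta)
    return w / w.sum()

# Toy usage: reweight 200 sampled rollout returns; a policy would then be refit
# (e.g., by weighted maximum likelihood) using these weights.
weights = reps_weights(np.random.randn(200))
```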

am ei

PDF Web Project Page [BibTex]

Inferring deterministic causal relations

Daniusis, P., Janzing, D., Mooij, J., Zscheischler, J., Steudel, B., Zhang, K., Schölkopf, B.

In Proceedings of the 26th Conference on Uncertainty in Artificial Intelligence, pages: 143-150, (Editors: P Grünwald and P Spirtes), AUAI Press, Corvallis, OR, USA, UAI, July 2010 (inproceedings)

Abstract
We consider two variables that are related to each other by an invertible function. While it has previously been shown that the dependence structure of the noise can provide hints to determine which of the two variables is the cause, we presently show that even in the deterministic (noise-free) case, there are asymmetries that can be exploited for causal inference. Our method is based on the idea that if the function and the probability density of the cause are chosen independently, then the distribution of the effect will, in a certain sense, depend on the function. We provide a theoretical analysis of this method, showing that it also works in the low noise regime, and link it to information geometry. We report strong empirical results on various real-world data sets from different domains.

ei

PDF Web [BibTex]

Source Separation and Higher-Order Causal Analysis of MEG and EEG

Zhang, K., Hyvärinen, A.

In Uncertainty in Artificial Intelligence: Proceedings of the Twenty-Sixth Conference (UAI 2010), pages: 709-716, (Editors: Grünwald, P. , P. Spirtes), AUAI Press, Corvallis, OR, USA, 26th Conference on Uncertainty in Artificial Intelligence (UAI), July 2010 (inproceedings)

Abstract
Separation of the sources and analysis of their connectivity have been an important topic in EEG/MEG analysis. To solve this problem in an automatic manner, we propose a two-layer model, in which the sources are conditionally uncorrelated from each other, but not independent; the dependence is caused by the causality in their time-varying variances (envelopes). The model is identified in two steps. We first propose a new source separation technique which takes into account the autocorrelations (which may be time-varying) and time-varying variances of the sources. The causality in the envelopes is then discovered by exploiting a special kind of multivariate GARCH (generalized autoregressive conditional heteroscedasticity) model. The resulting causal diagram gives the effective connectivity between the separated sources; in our experimental results on MEG data, sources with similar functions are grouped together, with negative influences between groups, and the groups are connected via some interesting sources.

ei

PDF Web [BibTex]

Recent trends in classification of remote sensing data: active and semisupervised machine learning paradigms

Bruzzone, L., Persello, C.

In pages: 3720-3723, IEEE, Piscataway, NJ, USA, IEEE International Geoscience and Remote Sensing Symposium (IGARSS), July 2010 (inproceedings)

Abstract
This paper addresses the recent trends in machine learning methods for the automatic classification of remote sensing (RS) images. In particular, we focus on two new paradigms: semisupervised and active learning. These two paradigms allow one to address classification problems in the critical conditions where the available labeled training samples are limited. These operational conditions are very usual in RS problems, due to the high cost and time associated with the collection of labeled samples. Semisupervised and active learning techniques allow one to enrich the initial training set information and to improve classification accuracy by exploiting unlabeled samples or requiring additional labeling phases from the user, respectively. The two aforementioned strategies are theoretically and experimentally analyzed considering SVM-based techniques in order to highlight advantages and disadvantages of both strategies.

ei

Web DOI [BibTex]

Invariant Gaussian Process Latent Variable Models and Application in Causal Discovery

Zhang, K., Schölkopf, B., Janzing, D.

In Proceedings of the 26th Conference on Uncertainty in Artificial Intelligence, pages: 717-724, (Editors: P Grünwald and P Spirtes), AUAI Press, Corvallis, OR, USA, UAI, July 2010 (inproceedings)

Abstract
In nonlinear latent variable models or dynamic models, if we consider the latent variables as confounders (common causes), the noise dependencies imply further relations between the observed variables. Such models are then closely related to causal discovery in the presence of nonlinear confounders, which is a challenging problem. However, generally in such models the observation noise is assumed to be independent across data dimensions, and consequently the noise dependencies are ignored. In this paper we focus on the Gaussian process latent variable model (GPLVM), from which we develop an extended model called invariant GPLVM (IGPLVM), which can adapt to arbitrary noise covariances. With the Gaussian process prior put on a particular transformation of the latent nonlinear functions, instead of the original ones, the algorithm for IGPLVM involves almost the same computational loads as that for the original GPLVM. Besides its potential application in causal discovery, IGPLVM has the advantage that its estimated latent nonlinear manifold is invariant to any nonsingular linear transformation of the data. Experimental results on both synthetic and real-world data show its encouraging performance in nonlinear manifold learning and causal discovery.

ei

PDF Web [BibTex]

Multi-Label Learning by Exploiting Label Dependency

Zhang, M., Zhang, K.

In Proceedings of the 16th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD 2010), pages: 999-1008, (Editors: Rao, B. , B. Krishnapuram, A. Tomkins, Q. Yang), ACM Press, New York, NY, USA, 16th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD), July 2010 (inproceedings)

Abstract
In multi-label learning, each training example is associated with a set of labels and the task is to predict the proper label set for the unseen example. Due to the tremendous (exponential) number of possible label sets, the task of learning from multi-label examples is rather challenging. Therefore, the key to successful multi-label learning is how to effectively exploit correlations between different labels to facilitate the learning process. In this paper, we propose to use a Bayesian network structure to efficiently encode the conditional dependencies of the labels as well as the feature set, with the feature set as the common parent of all labels. To make it practical, we give an approximate yet efficient procedure to find such a network structure. With the help of this network, multi-label learning is decomposed into a series of single-label classification problems, where a classifier is constructed for each label by incorporating its parental labels as additional features. Label sets of unseen examples are predicted recursively according to the label ordering given by the network. Extensive experiments on a broad range of data sets validate the effectiveness of our approach against other well-established methods.
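
A minimal sketch of the decomposition described above: one classifier per label, with the label's parents (under an assumed dependency structure) appended as extra features, and recursive prediction in a topological order. It assumes scikit-learn is available and uses a hypothetical three-label structure purely for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_label_chain(X, Y, parents):
    """One classifier per label; the label's parental labels are appended to the features."""
    models = []
    for j in range(Y.shape[1]):
        Xj = np.hstack([X, Y[:, parents[j]]]) if parents[j] else X
        models.append(LogisticRegression(max_iter=1000).fit(Xj, Y[:, j]))
    return models

def predict_label_chain(X, models, parents, order):
    """Predict labels recursively in an order consistent with the dependency graph."""
    Y = np.zeros((X.shape[0], len(models)), dtype=int)
    for j in order:
        Xj = np.hstack([X, Y[:, parents[j]]]) if parents[j] else X
        Y[:, j] = models[j].predict(Xj)
    return Y

# Toy usage: 3 labels, where label 2 depends on labels 0 and 1 (a hypothetical structure).
X = np.random.randn(300, 8)
Y = (np.random.rand(300, 3) > 0.5).astype(int)
parents, order = {0: [], 1: [], 2: [0, 1]}, [0, 1, 2]
models = fit_label_chain(X, Y, parents)
Y_hat = predict_label_chain(X, models, parents, order)
```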

ei

PDF Web DOI [BibTex]

Efficient Filter Flow for Space-Variant Multiframe Blind Deconvolution

Hirsch, M., Sra, S., Schölkopf, B., Harmeling, S.

In Proceedings of the 23rd IEEE Conference on Computer Vision and Pattern Recognition, pages: 607-614, IEEE, Piscataway, NJ, USA, CVPR, June 2010 (inproceedings)

Abstract
Ultimately being motivated by facilitating space-variant blind deconvolution, we present a class of linear transformations, that are expressive enough for space-variant filters, but at the same time especially designed for efficient matrix-vector-multiplications. Successful results on astronomical imaging through atmospheric turbulences and on noisy magnetic resonance images of constantly moving objects demonstrate the practical significance of our approach.

ei

PDF Web DOI [BibTex]

Grasping with Vision Descriptors and Motor Primitives

Kroemer, O., Detry, R., Piater, J., Peters, J.

In Proceedings of the 7th International Conference on Informatics in Control, Automation and Robotics (ICINCO 2010), pages: 47-54, (Editors: Filipe, J. , J. Andrade-Cetto, J.-L. Ferrier), SciTePress , Lisboa, Portugal, 7th International Conference on Informatics in Control, Automation and Robotics (ICINCO), June 2010 (inproceedings)

Abstract
Grasping is one of the most important abilities needed for future service robots. Given the task of picking up an object from between clutter, traditional robotics approaches would determine a suitable grasping point and then use a movement planner to reach the goal. The planner would require precise and accurate information about the environment and long computation times, both of which may not always be available. Therefore, methods for executing grasps are required, which perform well with information gathered from only standard stereo vision, and make only a few necessary assumptions about the task environment. We propose techniques that reactively modify the robot's learned motor primitives based on information derived from Early Cognitive Vision descriptors. The proposed techniques employ non-parametric potential fields centered on the Early Cognitive Vision descriptors to allow for curving hand trajectories around objects, and finger motions that adapt to the object's local geometry. The methods were tested on a real robot and found to allow for easier imitation learning of human movements and give a considerable improvement to the robot's performance in grasping tasks.

ei

PDF Web [BibTex]

An efficient divide-and-conquer cascade for nonlinear object detection

Lampert, CH.

In Proceedings of the Twenty-Third IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2010), pages: 1022-1029, IEEE, Piscataway, NJ, USA, Twenty-Third IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2010 (inproceedings)

Abstract
We introduce a method to accelerate the evaluation of object detection cascades with the help of a divide-and-conquer procedure in the space of candidate regions. Compared to the exhaustive procedure that thus far is the state-of-the-art for cascade evaluation, the proposed method requires fewer evaluations of the classifier functions, thereby speeding up the search. Furthermore, we show how the recently developed efficient subwindow search (ESS) procedure [11] can be integrated into the last stage of our method. This allows us to use our method to act not only as a faster procedure for cascade evaluation, but also as a tool to perform efficient branch-and-bound object detection with nonlinear quality functions, in particular kernelized support vector machines. Experiments on the PASCAL VOC 2006 dataset show an acceleration of more than 50% by our method compared to standard cascade evaluation.

ei

PDF Web DOI [BibTex]

Non-parametric estimation of integral probability metrics

Sriperumbudur, B., Fukumizu, K., Gretton, A., Schölkopf, B., Lanckriet, G.

In Proceedings of the IEEE International Symposium on Information Theory (ISIT 2010), pages: 1428-1432, IEEE, Piscataway, NJ, USA, IEEE International Symposium on Information Theory (ISIT), June 2010 (inproceedings)

Abstract
In this paper, we develop and analyze a nonparametric method for estimating the class of integral probability metrics (IPMs), examples of which include the Wasserstein distance, Dudley metric, and maximum mean discrepancy (MMD). We show that these distances can be estimated efficiently by solving a linear program in the case of Wasserstein distance and Dudley metric, while MMD is computable in a closed form. All these estimators are shown to be strongly consistent and their convergence rates are analyzed. Based on these results, we show that IPMs are simple to estimate and the estimators exhibit good convergence behavior compared to φ-divergence estimators.
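
As an illustration of the closed-form case mentioned above, the (biased) empirical estimate of the squared MMD with a Gaussian kernel reduces to averaging kernel evaluations; the kernel choice and bandwidth below are arbitrary assumptions.

```python
import numpy as np

def mmd2_biased(X, Y, sigma=1.0):
    """Biased closed-form estimate of the squared maximum mean discrepancy
    between samples X and Y, using a Gaussian kernel."""
    def k(A, B):
        d2 = np.sum(A ** 2, 1)[:, None] + np.sum(B ** 2, 1)[None, :] - 2 * A @ B.T
        return np.exp(-d2 / (2 * sigma ** 2))
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

# Toy usage: two samples from slightly shifted Gaussians.
X = np.random.randn(200, 5)
Y = np.random.randn(200, 5) + 0.3
print(mmd2_biased(X, Y))
```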

ei

PDF Web DOI Project Page [BibTex]

Causal Markov condition for submodular information measures

Steudel, B., Janzing, D., Schölkopf, B.

In Proceedings of the 23rd Annual Conference on Learning Theory, pages: 464-476, (Editors: AT Kalai and M Mohri), OmniPress, Madison, WI, USA, COLT, June 2010 (inproceedings)

Abstract
The causal Markov condition (CMC) is a postulate that links observations to causality. It describes the conditional independences among the observations that are entailed by a causal hypothesis in terms of a directed acyclic graph. In the conventional setting, the observations are random variables and the independence is a statistical one, i.e., the information content of observations is measured in terms of Shannon entropy. We formulate a generalized CMC for any kind of observations on which independence is defined via an arbitrary submodular information measure. Recently, this has been discussed for observations in terms of binary strings where information is understood in the sense of Kolmogorov complexity. Our approach enables us to find computable alternatives to Kolmogorov complexity, e.g., the length of a text after applying existing data compression schemes. We show that our CMC is justified if one restricts the attention to a class of causal mechanisms that is adapted to the respective information measure. Our justification is similar to deriving the statistical CMC from functional models of causality, where every variable is a deterministic function of its observed causes and an unobserved noise term. Our experiments on real data demonstrate the performance of compression based causal inference.

ei

PDF Web [BibTex]

UDP Communication channel design of master-slave robot system

Hong, A., Cho, JH., Wang, H., Lee, DY.

In pages: 231-232, 2010 KSME Conference, June 2010 (inproceedings)

ei

[BibTex]

Telling cause from effect based on high-dimensional observations

Janzing, D., Hoyer, P., Schölkopf, B.

In Proceedings of the 27th International Conference on Machine Learning, pages: 479-486, (Editors: J Fürnkranz and T Joachims), International Machine Learning Society, Madison, WI, USA, ICML, June 2010 (inproceedings)

Abstract
We describe a method for inferring linear causal relations among multi-dimensional variables. The idea is to use an asymmetry between the distributions of cause and effect that occurs if the covariance matrix of the cause and the structure matrix mapping the cause to the effect are independently chosen. The method applies to both stochastic and deterministic causal relations, provided that the dimensionality is sufficiently high (in some experiments, 5 was enough). It is applicable to Gaussian as well as non-Gaussian data.

ei

PDF Web [BibTex]

A scalable trust-region algorithm with application to mixed-norm regression

Kim, D., Sra, S., Dhillon, I.

In Proceedings of the 27th International Conference on Machine Learning (ICML 2010), pages: 519-526, (Editors: Fürnkranz, J. , T. Joachims), International Machine Learning Society, Madison, WI, USA, 27th International Conference on Machine Learning (ICML), June 2010 (inproceedings)

Abstract
We present a new algorithm for minimizing a convex loss-function subject to regularization. Our framework applies to numerous problems in machine learning and statistics; notably, for sparsity-promoting regularizers such as ℓ1 or ℓ1, ∞ norms, it enables efficient computation of sparse solutions. Our approach is based on the trust-region framework with nonsmooth objectives, which allows us to build on known results to provide convergence analysis. We avoid the computational overheads associated with the conventional Hessian approximation used by trust-region methods by instead using a simple separable quadratic approximation. This approximation also enables use of proximity operators for tackling nonsmooth regularizers. We illustrate the versatility of our resulting algorithm by specializing it to three mixed-norm regression problems: group lasso [36], group logistic regression [21], and multi-task lasso [19]. We experiment with both synthetic and real-world large-scale data—our method is seen to be competitive, robust, and scalable.
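
To illustrate the role of proximity operators for nonsmooth regularizers mentioned above, here is a generic proximal-gradient step with the ℓ1 soft-thresholding operator; this is a plain ISTA-style sketch under standard assumptions, not the paper's trust-region algorithm.

```python
import numpy as np

def prox_l1(v, t):
    """Proximity operator of t * ||.||_1 (soft thresholding), the kind of operator
    a separable quadratic approximation lets one apply cheaply."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_gradient_step(w, grad, step, lam):
    """One separable-quadratic / proximal step for a smooth loss with l1 regularization."""
    return prox_l1(w - step * grad(w), step * lam)

# Toy usage: lasso-style steps for 0.5*||Xw - y||^2 + lam*||w||_1.
X, y, lam = np.random.randn(100, 20), np.random.randn(100), 0.1
grad = lambda w: X.T @ (X @ w - y)
w = np.zeros(20)
for _ in range(200):
    w = prox_gradient_step(w, grad, step=1.0 / np.linalg.norm(X, 2) ** 2, lam=lam)
```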

ei

PDF Web Project Page [BibTex]

The Influence of the Image Basis on Modeling and Steganalysis Performance

Schwamberger, V., Le, P., Schölkopf, B., Franz, M.

In Information Hiding, pages: 133-144, (Editors: R Böhme and PWL Fong and R Safavi-Naini), Springer, Berlin, Germany, 12th international Workshop (IH), June 2010 (inproceedings)

Abstract
We compare two image bases with respect to their capabilities for image modeling and steganalysis. The first basis consists of wavelets, the second is a Laplacian pyramid. Both bases are used to decompose the image into subbands where the local dependency structure is modeled with a linear Bayesian estimator. Similar to existing approaches, the image model is used to predict coefficient values from their neighborhoods, and the final classification step uses statistical descriptors of the residual. Our findings are counter-intuitive on first sight: Although Laplacian pyramids have better image modeling capabilities than wavelets, steganalysis based on wavelets is much more successful. We present a number of experiments that suggest possible explanations for this result.

ei

PDF Web DOI [BibTex]

A PAC-Bayesian Analysis of Co-clustering, Graph Clustering, and Pairwise Clustering

Seldin, Y.

In ICML 2010 Workshop on Social Analytics: Learning from human interactions, pages: 1-5, ICML Workshop on Social Analytics: Learning from human interactions, June 2010 (inproceedings)

Abstract
We review briefly the PAC-Bayesian analysis of co-clustering (Seldin and Tishby, 2008, 2009, 2010), which provided generalization guarantees and regularization terms absent in the preceding formulations of this problem and achieved state-of-the-art prediction results in the MovieLens collaborative filtering task. Inspired by this analysis we formulate weighted graph clustering as a prediction problem: given a subset of edge weights we analyze the ability of graph clustering to predict the remaining edge weights. This formulation enables practical and theoretical comparison of different approaches to graph clustering as well as comparison of graph clustering with other possible ways to model the graph. Following the lines of (Seldin and Tishby, 2010) we derive PAC-Bayesian generalization bounds for graph clustering. The bounds show that graph clustering should optimize a trade-off between empirical data fit and the mutual information that clusters preserve on the graph nodes. A similar trade-off derived from information-theoretic considerations was already shown to produce state-of-the-art results in practice (Slonim et al., 2005; Yom-Tov and Slonim, 2009). This paper supports the empirical evidence by providing a better theoretical foundation, suggesting formal generalization guarantees, and offering a more accurate way to deal with finite sample issues.

ei

PDF Web [BibTex]

Reinforcement learning of motor skills in high dimensions: A path integral approach

Theodorou, E., Buchli, J., Schaal, S.

In Robotics and Automation (ICRA), 2010 IEEE International Conference on, pages: 2397-2403, May 2010, clmc (inproceedings)

Abstract
Reinforcement learning (RL) is one of the most general approaches to learning control. Its applicability to complex motor systems, however, has been largely impossible so far due to the computational difficulties that reinforcement learning encounters in high dimensional continuous state-action spaces. In this paper, we derive a novel approach to RL for parameterized control policies based on the framework of stochastic optimal control with path integrals. While solidly grounded in optimal control theory and estimation theory, the update equations for learning are surprisingly simple and have no danger of numerical instabilities as neither matrix inversions nor gradient learning rates are required. Empirical evaluations demonstrate significant performance improvements over gradient-based policy learning and scalability to high-dimensional control problems. Finally, a learning experiment on a robot dog illustrates the functionality of our algorithm in a real-world scenario. We believe that our new algorithm, Policy Improvement with Path Integrals (PI2), offers currently one of the most efficient, numerically robust, and easy to implement algorithms for RL in robotics.

am

link (url) [BibTex]

Inverse dynamics control of floating base systems using orthogonal decomposition

Mistry, M., Buchli, J., Schaal, S.

In Robotics and Automation (ICRA), 2010 IEEE International Conference on, pages: 3406-3412, May 2010, clmc (inproceedings)

Abstract
Model-based control methods can be used to enable fast, dexterous, and compliant motion of robots without sacrificing control accuracy. However, implementing such techniques on floating base robots, e.g., humanoids and legged systems, is non-trivial due to under-actuation, dynamically changing constraints from the environment, and potentially closed loop kinematics. In this paper, we show how to compute the analytically correct inverse dynamics torques for model-based control of sufficiently constrained floating base rigid-body systems, such as humanoid robots with one or two feet in contact with the environment. While our previous inverse dynamics approach relied on an estimation of contact forces to compute an approximate inverse dynamics solution, here we present an analytically correct solution by using an orthogonal decomposition to project the robot dynamics onto a reduced dimensional space, independent of contact forces. We demonstrate the feasibility and robustness of our approach on a simulated floating base bipedal humanoid robot and an actual robot dog locomoting over rough terrain.

am

link (url) [BibTex]

Fast, robust quadruped locomotion over challenging terrain

Kalakrishnan, M., Buchli, J., Pastor, P., Mistry, M., Schaal, S.

In Robotics and Automation (ICRA), 2010 IEEE International Conference on, pages: 2665-2670, May 2010, clmc (inproceedings)

Abstract
We present a control architecture for fast quadruped locomotion over rough terrain. We approach the problem by decomposing it into many sub-systems, in which we apply state-of-the-art learning, planning, optimization and control techniques to achieve robust, fast locomotion. Unique features of our control strategy include: (1) a system that learns optimal foothold choices from expert demonstration using terrain templates, (2) a body trajectory optimizer based on the Zero-Moment Point (ZMP) stability criterion, and (3) a floating-base inverse dynamics controller that, in conjunction with force control, allows for robust, compliant locomotion over unperceived obstacles. We evaluate the performance of our controller by testing it on the LittleDog quadruped robot, over a wide variety of rough terrain of varying difficulty levels. We demonstrate the generalization ability of this controller by presenting test results from an independent external test team on terrains that have never been shown to us.

am

link (url) [BibTex]

Apprenticeship learning via soft local homomorphisms

Boularias, A., Chaib-Draa, B.

In Proceedings of the 2010 IEEE International Conference on Robotics and Automation (ICRA 2010), pages: 2971-2976, IEEE, Piscataway, NJ, USA, 2010 IEEE International Conference on Robotics and Automation (ICRA), May 2010 (inproceedings)

Abstract
We consider the problem of apprenticeship learning when the expert's demonstration covers only a small part of a large state space. Inverse Reinforcement Learning (IRL) provides an efficient solution to this problem based on the assumption that the expert is optimally acting in a Markov Decision Process (MDP). However, past work on IRL requires an accurate estimate of the frequency of encountering each feature of the states when the robot follows the expert's policy. Given that the complete policy of the expert is unknown, the feature frequencies can only be empirically estimated from the demonstrated trajectories. In this paper, we propose to use a transfer method, known as soft homomorphism, in order to generalize the expert's policy to unvisited regions of the state space. The generalized policy can be used either as the robot's final policy, or to calculate the feature frequencies within an IRL algorithm. Empirical results show that our approach is able to learn good policies from a small number of demonstrations.

ei

PDF Web DOI [BibTex]

Using Model Knowledge for Learning Inverse Dynamics

Nguyen-Tuong, D., Peters, J.

In Proceedings of the 2010 IEEE International Conference on Robotics and Automation (ICRA 2010), pages: 2677-2682, IEEE, Piscataway, NJ, USA, 2010 IEEE International Conference on Robotics and Automation (ICRA), May 2010 (inproceedings)

Abstract
In recent years, learning models from data has become an increasingly interesting tool for robotics, as it allows straightforward and accurate model approximation. However, in most robot learning approaches, the model is learned from scratch disregarding all prior knowledge about the system. For many complex robot systems, available prior knowledge from advanced physics-based modeling techniques can entail valuable information for model learning that may result in faster learning speed, higher accuracy and better generalization. In this paper, we investigate how parametric physical models (e.g., obtained from rigid body dynamics) can be used to improve the learning performance, and, especially, how semiparametric regression methods can be applied in this context. We present two possible semiparametric regression approaches, where the knowledge of the physical model can either become part of the mean function or of the kernel in a nonparametric Gaussian process regression. We compare the learning performance of these methods first on sampled data and, subsequently, apply the obtained inverse dynamics models in tracking control on a real Barrett WAM. The results show that the semiparametric models learned with rigid body dynamics as prior outperform the standard rigid body dynamics models on real data while generalizing better for unknown parts of the state space.
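
A compact sketch of the "parametric model as GP mean" variant described above: a Gaussian process is fit to the residuals of an assumed rigid-body-style model. The toy model, kernel, and hyperparameters are illustrative, not the paper's.

```python
import numpy as np

def rbf(A, B, ell=1.0):
    d2 = np.sum(A ** 2, 1)[:, None] + np.sum(B ** 2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-0.5 * d2 / ell ** 2)

def fit_residual_gp(X, y, parametric_model, sigma_n=0.1, ell=1.0):
    """Semiparametric regression: the parametric (e.g., rigid-body dynamics)
    prediction acts as the GP mean; the GP models only the residual."""
    resid = y - parametric_model(X)
    K = rbf(X, X, ell) + sigma_n ** 2 * np.eye(len(X))
    return np.linalg.solve(K, resid)

def predict(Xs, X, alpha, parametric_model, ell=1.0):
    return parametric_model(Xs) + rbf(Xs, X, ell) @ alpha

# Toy usage: a "rigid-body" prior that is linear in the state plus an unmodeled nonlinearity.
rbd = lambda X: X @ np.array([0.5, -0.2])
X = np.random.randn(200, 2)
y = rbd(X) + 0.3 * np.sin(3 * X[:, 0]) + 0.05 * np.random.randn(200)
alpha = fit_residual_gp(X, y, rbd)
y_hat = predict(X[:5], X, alpha, rbd)
```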

ei

PDF Web DOI [BibTex]


Incremental Sparsification for Real-time Online Model Learning

Nguyen-Tuong, D., Peters, J.

In JMLR Workshop and Conference Proceedings Volume 9: AISTATS 2010, pages: 557-564, (Editors: Teh, Y.W., M. Titterington), JMLR, Cambridge, MA, USA, Thirteenth International Conference on Artificial Intelligence and Statistics, May 2010 (inproceedings)

Abstract
Online model learning in real time is required by many applications, such as robot tracking control. This poses a difficult problem, as it demands fast, incremental online regression on large data sets, which cannot be achieved by straightforward use of off-the-shelf machine learning methods (such as Gaussian process regression or support vector regression). In this paper, we propose a framework for online, incremental sparsification with a fixed budget, designed for large-scale real-time model learning. The proposed approach combines a sparsification method based on an independence measure with a large-scale database. In combination with an incremental learning approach such as sequential support vector regression, we obtain a regression method that is applicable to real-time online learning. It exhibits competitive learning accuracy compared with standard regression techniques. Implementation on a real robot demonstrates the applicability of the proposed approach to real-time online model learning for real-world systems.
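The sparsification idea can be illustrated with a standard kernel linear-independence test: a new point enters the dictionary only if it cannot be represented well by the current dictionary elements. The threshold, kernel, and naive budget handling below are illustrative assumptions rather than the exact procedure of the paper.

```python
import numpy as np

def rbf(a, b, ell=1.0):
    return np.exp(-0.5 * np.sum((a - b) ** 2) / ell ** 2)

class SparseDictionary:
    """Fixed-budget dictionary grown with a kernel linear-independence test."""

    def __init__(self, threshold=0.1, budget=100, ell=1.0):
        self.threshold, self.budget, self.ell = threshold, budget, ell
        self.points = []

    def consider(self, x):
        if not self.points:
            self.points.append(x)
            return True
        K = np.array([[rbf(p, q, self.ell) for q in self.points] for p in self.points])
        k = np.array([rbf(p, x, self.ell) for p in self.points])
        # Independence measure: residual of projecting k(., x) onto the
        # span of the current dictionary elements in feature space.
        delta = rbf(x, x, self.ell) - k @ np.linalg.solve(K + 1e-8 * np.eye(len(K)), k)
        if delta > self.threshold:
            if len(self.points) >= self.budget:
                self.points.pop(0)        # naive budget handling, for illustration only
            self.points.append(x)
            return True
        return False

rng = np.random.default_rng(1)
d = SparseDictionary(threshold=0.2, budget=50)
kept = sum(d.consider(x) for x in rng.normal(size=(500, 2)))
print(kept, "of 500 points kept in the dictionary")
```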

ei

PDF Web [BibTex]


Coherent Inference on Optimal Play in Game Trees

Hennig, P., Stern, D., Graepel, T.

In JMLR Workshop and Conference Proceedings Volume 9: AISTATS 2010, pages: 326-333, (Editors: Teh, Y.W., M. Titterington), JMLR, Cambridge, MA, USA, Thirteenth International Conference on Artificial Intelligence and Statistics, May 2010 (inproceedings)

Abstract
Round-based games are an instance of discrete planning problems. Some of the best contemporary game tree search algorithms use random roll-outs as data. Relying on a good policy, they learn on-policy values by propagating information upwards in the tree, but not between sibling nodes. Here, we present a generative model and a corresponding approximate message passing scheme for inference on the optimal, off-policy value of nodes in smooth AND/OR trees, given random roll-outs. The crucial insight is that the distribution of values in game trees is not completely arbitrary. We define a generative model of the on-policy values using a latent score for each state, representing the value under the random roll-out policy. Inference on the values under the optimal policy separates into an inductive, pre-data step and a deductive, post-data part. Both can be solved approximately with Expectation Propagation, allowing off-policy value inference for any node in the (exponentially big) tree in linear time.
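The distinction between on-policy (random roll-out) values and off-policy (optimal-play) values can be made concrete on a toy tree: the former back up averages, the latter back up alternating max/min. The brute-force sketch below only illustrates that distinction; it does not implement the paper's generative model or the Expectation Propagation scheme.

```python
import random

def onpolicy_value(node, leaf_values):
    """Expected leaf value under the uniform-random roll-out policy (average backup)."""
    if node not in tree:                      # leaf node
        return leaf_values[node]
    children = tree[node]
    return sum(onpolicy_value(c, leaf_values) for c in children) / len(children)

def optimal_value(node, leaf_values, maximizing=True):
    """Value under optimal play, i.e. alternating max/min (minimax) backup."""
    if node not in tree:
        return leaf_values[node]
    vals = [optimal_value(c, leaf_values, not maximizing) for c in tree[node]]
    return max(vals) if maximizing else min(vals)

# Small max/min (AND/OR) tree: root -> a, b; each with two leaves.
tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
random.seed(0)
leaves = {l: random.random() for l in ["a1", "a2", "b1", "b2"]}
print("on-policy value:", onpolicy_value("root", leaves))
print("optimal value  :", optimal_value("root", leaves))
```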

ei pn

PDF Web [BibTex]


Multitask Learning for Brain-Computer Interfaces

Alamgir, M., Grosse-Wentrup, M., Altun, Y.

In JMLR Workshop and Conference Proceedings Volume 9: AISTATS 2010, pages: 17-24, (Editors: Teh, Y.W., M. Titterington), JMLR, Cambridge, MA, USA, Thirteenth International Conference on Artificial Intelligence and Statistics, May 2010 (inproceedings)

Abstract
Brain-computer interfaces (BCIs) are limited in their applicability in everyday settings by the current necessity to record subject-specific calibration data prior to actual use of the BCI for communication. In this paper, we utilize the framework of multitask learning to construct a BCI that can be used without any subject-specific calibration process. We discuss how this out-of-the-box BCI can be further improved in a computationally efficient manner as subject-specific data becomes available. The feasibility of the approach is demonstrated on two sets of experimental EEG data recorded during a standard two-class motor imagery paradigm from a total of 19 healthy subjects. Specifically, we show that satisfactory classification results can be achieved with zero training data, and that combining prior recordings with subject-specific calibration data substantially outperforms using subject-specific data only. Our results further show that transfer between recordings under slightly different experimental setups is feasible.
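A deliberately simple sketch of the zero-training idea: start from a classifier fit on pooled recordings from other subjects, then refit with up-weighted subject-specific calibration trials as they arrive. The pooling and weighting scheme below is a plain stand-in for the paper's multitask formulation, and the random "EEG features" are purely synthetic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def out_of_the_box_classifier(other_subjects):
    """Zero-training classifier: pool labelled trials from other subjects."""
    X = np.vstack([X_s for X_s, _ in other_subjects])
    y = np.concatenate([y_s for _, y_s in other_subjects])
    return LogisticRegression(max_iter=1000).fit(X, y)

def adapt(other_subjects, X_cal, y_cal, weight=5.0):
    """Combine pooled data with up-weighted subject-specific calibration trials."""
    X = np.vstack([X_s for X_s, _ in other_subjects] + [X_cal])
    y = np.concatenate([y_s for _, y_s in other_subjects] + [y_cal])
    w = np.concatenate([np.ones(len(y) - len(y_cal)), weight * np.ones(len(y_cal))])
    return LogisticRegression(max_iter=1000).fit(X, y, sample_weight=w)

# Synthetic "recordings": 3 prior subjects, two classes, 10 features each.
rng = np.random.default_rng(0)
others = []
for _ in range(3):
    Xs = np.vstack([rng.normal(size=(20, 10)), rng.normal(size=(20, 10)) + 1.0])
    ys = np.concatenate([np.zeros(20), np.ones(20)])
    others.append((Xs, ys))
X_cal = np.vstack([rng.normal(size=(5, 10)), rng.normal(size=(5, 10)) + 1.0])
y_cal = np.array([0] * 5 + [1] * 5)
clf0 = out_of_the_box_classifier(others)          # usable with zero subject data
clf1 = adapt(others, X_cal, y_cal)                # improved with calibration trials
print(clf0.score(X_cal, y_cal), clf1.score(X_cal, y_cal))
```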

ei

PDF Web [BibTex]


Identifying Cause and Effect on Discrete Data using Additive Noise Models

Peters, J., Janzing, D., Schölkopf, B.

In JMLR Workshop and Conference Proceedings Volume 9: AISTATS 2010, pages: 597-604, (Editors: YW Teh and M Titterington), JMLR, Cambridge, MA, USA, 13th International Conference on Artificial Intelligence and Statistics, May 2010 (inproceedings)

Abstract
Inferring the causal structure of a set of random variables from a finite sample of the joint distribution is an important problem in science. Recently, methods using additive noise models have been suggested to approach the case of continuous variables. In many situations, however, the variables of interest are discrete or even have only finitely many states. In this work we extend the notion of additive noise models to these cases. Whenever the joint distribution P(X, Y) admits such a model in one direction, e.g. Y = f(X) + N with N ⊥ X, it does not admit the reversed model X = g(Y) + Ñ with Ñ ⊥ Y, as long as the model is chosen in a generic way. Based on these deliberations we propose an efficient new algorithm that is able to distinguish between cause and effect for a finite sample of discrete variables. We show that this algorithm works on both synthetic and real data sets.
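A heavily simplified sketch of the resulting decision rule: fit a regression function in each direction, form the residuals, and prefer the direction whose residuals appear independent of the input (checked here with a chi-square contingency test). This only illustrates the additive-noise idea and is not the paper's algorithm; the regressor, independence test, and toy data are our assumptions.

```python
import numpy as np
from scipy.stats import chi2_contingency

def residual_independence(x, y):
    """Fit y ≈ f(x) by the conditional mode, form residuals n = y - f(x),
    and return the chi-square p-value for independence of x and n."""
    f = {v: np.bincount(y[x == v]).argmax() for v in np.unique(x)}
    n = y - np.array([f[v] for v in x])
    xs, ns = np.unique(x), np.unique(n)
    table = np.array([[np.sum((x == a) & (n == b)) for b in ns] for a in xs])
    return chi2_contingency(table)[1]

def infer_direction(x, y):
    p_xy = residual_independence(x, y)   # additive noise model X -> Y
    p_yx = residual_independence(y, x)   # additive noise model Y -> X
    return "X -> Y" if p_xy > p_yx else "Y -> X"

# Toy data generated as Y = f(X) + N with N independent of X.
rng = np.random.default_rng(0)
x = rng.integers(0, 5, 2000)
noise = rng.choice([0, 1], size=2000, p=[0.7, 0.3])
y = (x ** 2 % 5) + noise
print(infer_direction(x, y))   # typically recovers "X -> Y"
```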

ei

PDF Web [BibTex]


Semi-supervised Learning via Generalized Maximum Entropy

Erkan, A., Altun, Y.

In JMLR Workshop and Conference Proceedings Volume 9: AISTATS 2010, pages: 209-216, (Editors: Teh, Y.W., M. Titterington), JMLR, Cambridge, MA, USA, Thirteenth International Conference on Artificial Intelligence and Statistics, May 2010 (inproceedings)

Abstract
Various supervised inference methods can be analyzed as convex duals of the generalized maximum entropy (MaxEnt) framework. Generalized MaxEnt aims to find a distribution that maximizes an entropy function while respecting prior information represented as potential functions in miscellaneous forms of constraints and/or penalties. We extend this framework to semi-supervised learning by incorporating unlabeled data via modifications to these potential functions, reflecting structural assumptions on the data geometry. The proposed approach leads to a family of discriminative semi-supervised algorithms that are convex, scalable, inherently multi-class, easy to implement, and naturally kernelizable. Experimental evaluation of special cases shows the competitiveness of our methodology.
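As a rough, related illustration of how unlabeled data can enter a discriminative objective, the sketch below adds an entropy penalty on unlabeled predictions to a plain logistic-regression loss (entropy regularization). This is a simpler, different construction than the paper's generalized MaxEnt framework and is shown only to convey the flavor; all data and hyperparameters are made up.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_entropy_regularized(Xl, yl, Xu, lam=0.5, lr=0.1, steps=2000):
    """Binary logistic regression plus an entropy penalty on unlabeled points,
    encouraging confident (low-entropy) predictions on Xu."""
    w = np.zeros(Xl.shape[1])
    for _ in range(steps):
        pl = sigmoid(Xl @ w)
        grad = Xl.T @ (pl - yl) / len(yl)                        # supervised log-loss gradient
        zu = Xu @ w
        pu = sigmoid(zu)
        grad += lam * Xu.T @ (-zu * pu * (1 - pu)) / len(zu)     # gradient of mean prediction entropy
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
Xl = np.vstack([rng.normal(-1, 1, (5, 2)), rng.normal(1, 1, (5, 2))])   # few labeled points
yl = np.array([0] * 5 + [1] * 5)
Xu = np.vstack([rng.normal(-1, 1, (100, 2)), rng.normal(1, 1, (100, 2))])  # many unlabeled points
w = fit_entropy_regularized(Xl, yl, Xu)
print(sigmoid(Xl @ w).round(2))
```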

ei

PDF Web [BibTex]


A New Algorithm for Improving the Resolution of Cryo-EM Density Maps

Hirsch, M., Schölkopf, B., Habeck, M.

In Research in Computational Molecular Biology, Lecture Notes in Bioinformatics, Vol. 6044 , pages: 174-188, (Editors: B Berger), Springer, Berlin, Germany, 14th International Conference on Research in Computational Molecular Biology (RECOMB), May 2010 (inproceedings)

Abstract
Cryo-electron microscopy (cryo-EM) plays an increasingly prominent role in structure elucidation of macromolecular assemblies. Advances in experimental instrumentation and computational power have spawned numerous cryo-EM studies of large biomolecular complexes resulting in the reconstruction of three-dimensional density maps at intermediate and low resolution. In this resolution range, identification and interpretation of structural elements and modeling of biomolecular structure with atomic detail becomes problematic. In this paper, we present a novel algorithm that enhances the resolution of intermediate- and low-resolution density maps. Our underlying assumption is to model the low-resolution density map as a blurred and possibly noise-corrupted version of an unknown high-resolution map that we seek to recover by deconvolution. By exploiting the nonnegativity of both the high-resolution map and the blur kernel, we derive multiplicative updates reminiscent of those used in nonnegative matrix factorization. Our framework allows for easy incorporation of additional prior knowledge, such as smoothness and sparseness, on both the sharpened density map and the blur kernel. A probabilistic formulation enables us to derive updates for the hyperparameters; therefore, our approach has no parameters that need manual adjustment. We apply the algorithm to simulated three-dimensional electron microscopy data. We show that our method provides better-resolved density maps than B-factor sharpening, especially in the presence of noise. Moreover, our method can use additional information provided by homologous structures, which helps to improve the resolution even further.
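The flavor of multiplicative, nonnegativity-preserving updates can be conveyed by a 1D toy deconvolution with a known, fixed blur kernel, using the NMF-style least-squares update x ← x · (Hᵀy) / (HᵀHx). The paper additionally estimates the blur kernel, hyperparameters, and priors; none of that is attempted here, and the kernel, signal, and boundary handling below are illustrative assumptions.

```python
import numpy as np

def blur(x, h):
    return np.convolve(x, h, mode="same")

def blur_adjoint(r, h):
    # Adjoint of 'same'-mode convolution is convolution with the flipped kernel
    # (up to boundary effects, which this sketch ignores).
    return np.convolve(r, h[::-1], mode="same")

def multiplicative_deconv(y, h, iters=500, eps=1e-12):
    """NMF-style multiplicative updates for nonnegative deconvolution:
       x <- x * (H^T y) / (H^T H x).  Nonnegativity is preserved automatically."""
    x = np.full_like(y, y.mean())
    Hty = blur_adjoint(y, h)
    for _ in range(iters):
        x *= Hty / (blur_adjoint(blur(x, h), h) + eps)
    return x

# Toy example: two sharp peaks blurred by a Gaussian kernel plus mild noise.
rng = np.random.default_rng(0)
truth = np.zeros(128)
truth[40], truth[80] = 1.0, 0.6
h = np.exp(-0.5 * (np.arange(-10, 11) / 3.0) ** 2)
h /= h.sum()
y = np.clip(blur(truth, h) + 0.001 * rng.standard_normal(128), 0, None)
x_hat = multiplicative_deconv(y, h)
print(truth.argmax(), x_hat.argmax())   # recovered peak position should roughly agree
```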

ei

Web DOI [BibTex]