

2011


Mind the gap - robotic grasping under incomplete observation

Bohg, J., Johnson-Roberson, M., Leon, B., Felip, J., Gratal, X., Bergstrom, N., Kragic, D., Morales, A.

In Robotics and Automation (ICRA), 2011 IEEE International Conference on, pages: 686-693, May 2011 (inproceedings)

Abstract
We consider the problem of grasp and manipulation planning when the state of the world is only partially observable. Specifically, we address the task of picking up unknown objects from a table top. The proposed approach to object shape prediction aims at closing the knowledge gaps in the robot's understanding of the world. A completed state estimate of the environment can then be provided to a simulator in which stable grasps and collision-free movements are planned. The proposed approach is based on the observation that many objects commonly in use in a service robotic scenario possess symmetries. We search for the optimal parameters of these symmetries given visibility constraints. Once found, the point cloud is completed and a surface mesh reconstructed. Quantitative experiments show that the predictions are valid approximations of the real object shape. Demonstrating the approach on two very different robotic platforms emphasizes the generality of the method.
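The core completion step described in the abstract — mirroring the observed points across an estimated symmetry plane — can be sketched in a few lines (the plane parameters and point cloud below are hypothetical; the paper additionally optimizes the plane under visibility constraints and reconstructs a mesh):

```python
import numpy as np

def mirror_points(points, plane_point, plane_normal):
    """Reflect a point cloud across a plane given by a point and a normal."""
    n = plane_normal / np.linalg.norm(plane_normal)
    d = (points - plane_point) @ n          # signed distance of each point
    return points - 2.0 * np.outer(d, n)    # reflect across the plane

def complete_by_symmetry(partial, plane_point, plane_normal):
    """Hypothesize the hidden half of an object as the mirror image of the
    observed points (plane parameters would be optimized in practice)."""
    mirrored = mirror_points(partial, plane_point, plane_normal)
    return np.vstack([partial, mirrored])

# Observed points in the x > 0 half-space, mirrored across the plane x = 0.
observed = np.array([[1.0, 0.0, 0.0], [0.5, 1.0, 0.0]])
full = complete_by_symmetry(observed, np.zeros(3), np.array([1.0, 0.0, 0.0]))
print(full)
```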

am

pdf video code data DOI Project Page [BibTex]



VerroTouch: Vibrotactile Feedback for Robotic Minimally Invasive Surgery

McMahan, W., Gewirtz, J., Standish, D., Martin, P., Kunkel, J., Lilavois, M., Wedmid, A., Lee, D. I., Kuchenbecker, K. J.

Journal of Urology, 185(4, Supplement):e373, May 2011, Poster presentation given by McMahan at the Annual Meeting of the American Urological Association in Washington, D.C., USA (article)

hi

[BibTex]



Loose-limbed People: Estimating 3D Human Pose and Motion Using Non-parametric Belief Propagation

Sigal, L., Isard, M., Haussecker, H., Black, M. J.

International Journal of Computer Vision, 98(1):15-48, Springer Netherlands, May 2011 (article)

Abstract
We formulate the problem of 3D human pose estimation and tracking as one of inference in a graphical model. Unlike traditional kinematic tree representations, our model of the body is a collection of loosely-connected body-parts. In particular, we model the body using an undirected graphical model in which nodes correspond to parts and edges to kinematic, penetration, and temporal constraints imposed by the joints and the world. These constraints are encoded using pairwise statistical distributions that are learned from motion-capture training data. Human pose and motion estimation is formulated as inference in this graphical model and is solved using Particle Message Passing (PaMPas). PaMPas is a form of non-parametric belief propagation that uses a variation of particle filtering applicable to general graphical models with loops. The loose-limbed model and decentralized graph structure allow us to incorporate information from "bottom-up" visual cues, such as limb and head detectors, into the inference process. These detectors enable automatic initialization and aid recovery from transient tracking failures. We illustrate the method by automatically tracking people in multi-view imagery using a set of calibrated cameras and present quantitative evaluation using the HumanEva dataset.
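A minimal illustration of particle-based message passing between two loosely connected parts (a 1-D toy with made-up detector responses and joint offset, not the PaMPas implementation):

```python
import numpy as np

rng = np.random.default_rng(4)

def likelihood(x, mu, sigma=0.3):
    """Toy bottom-up detector response for a part located near mu."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

# Two parts connected by a pairwise constraint: B sits one unit right of A.
n = 2000
a = rng.normal(0.0, 1.0, n)                 # prior particles for part A
wa = likelihood(a, mu=0.0)                  # part-A detector fires near 0
wa /= wa.sum()
idx = rng.choice(n, n, p=wa)                # resample belief over A
msg = a[idx] + 1.0 + rng.normal(0, 0.1, n)  # propagate through the joint
wb = likelihood(msg, mu=1.2)                # weak part-B detector near 1.2
wb /= wb.sum()
b_est = (msg * wb).sum()                    # posterior mean for part B
print(b_est)
```

The estimate fuses the kinematic message (part B near 1.0) with the detector evidence (near 1.2), landing in between — the same fusion PaMPas performs over loopy graphs of body parts.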

ps

pdf publisher's site link (url) Project Page Project Page [BibTex]



Continuous Loading of a Conservative Potential Trap from an Atomic Beam

Falkenau, M., Volchkov, V., Rührig, J., Griesmaier, A., Pfau, T.

Physical Review Letters, 106, pages: 163002, American Physical Society (APS), April 2011 (article)

Abstract
We demonstrate the fast accumulation of 52Cr atoms in a conservative potential from a guided atomic beam. Without laser cooling on a cycling transition, a dissipative step involving optical pumping allows us to load atoms at a rate of 2×10^7  s^−1 in the trap. Within less than 100 ms we reach the collisionally dense regime, from which we produce a Bose-Einstein condensate with subsequent evaporative cooling. This constitutes a new approach to degeneracy where Bose-Einstein condensation can be reached without a closed cycling transition, provided that a slow beam of particles can be produced.

sf

DOI [BibTex]



Improving quantification of functional networks with EEG inverse problem: Evidence from a decoding point of view

Besserve, M., Martinerie, J., Garnero, L.

NeuroImage, 55(4):1536-1547, April 2011 (article)

Abstract
Decoding experimental conditions from single trial Electroencephalographic (EEG) signals is becoming a major challenge for the study of brain function and real-time applications such as Brain Computer Interfaces. EEG source reconstruction offers principled ways to estimate the cortical activities from EEG signals. But to what extent it can enhance informative brain signals in single trials has not been addressed in a general setting. We tested this using the minimum norm estimate solution (MNE) to estimate spectral power and coherence features at the cortical level. With a fast implementation, we computed a support vector machine (SVM) classifier output from these quantities in real-time, without prior assumptions on the relevant functional networks. We applied this approach to single trial decoding of ongoing mental imagery tasks using EEG data recorded in 5 subjects. Our results show that reconstructing the underlying cortical network dynamics significantly outperforms a usual electrode level approach in terms of information transfer and also reduces redundancy between coherence and power features, supporting a decrease of volume conduction effects. Additionally, the classifier coefficients reflect the most informative features of network activity, showing an important contribution of localized motor and sensory brain areas, and of coherence between areas up to 6 cm apart. This study provides a computationally efficient and interpretable strategy to extract information from functional networks at the cortical level in single trials. Moreover, this sets a general framework to evaluate the performance of EEG source reconstruction methods by their decoding abilities.
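The MNE step underlying this pipeline amounts to a Tikhonov-regularized pseudoinverse of the lead field; a toy sketch with a random (hypothetical) lead field matrix:

```python
import numpy as np

def minimum_norm_estimate(L, y, lam=0.1):
    """Minimum norm estimate (MNE): the smallest-norm source activity
    consistent with the measurements y = L @ s + noise, obtained via a
    Tikhonov-regularized pseudoinverse of the lead field L."""
    G = L @ L.T + lam * np.eye(L.shape[0])
    return L.T @ np.linalg.solve(G, y)

rng = np.random.default_rng(5)
L = rng.normal(size=(32, 200))               # 32 electrodes, 200 cortical sources
s = np.zeros(200); s[17] = 5.0               # one active source
y = L @ s + 0.01 * rng.normal(size=32)
s_hat = minimum_norm_estimate(L, y)
print(np.argmax(np.abs(s_hat)))
```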

ei

Web DOI [BibTex]


Fast Convergent Algorithms for Expectation Propagation Approximate Bayesian Inference

Seeger, M., Nickisch, H.

In JMLR Workshop and Conference Proceedings Volume 15: AISTATS 2011, pages: 652-660, (Editors: Gordon, G. , D. Dunson, M. Dudík ), MIT Press, Cambridge, MA, USA, 14th International Conference on Artificial Intelligence and Statistics, April 2011 (inproceedings)

Abstract
We propose a novel algorithm to solve the expectation propagation relaxation of Bayesian inference for continuous-variable graphical models. In contrast to most previous algorithms, our method is provably convergent. By marrying convergent EP ideas from (Opper&Winther, 2005) with covariance decoupling techniques (Wipf&Nagarajan, 2008; Nickisch&Seeger, 2009), it runs at least an order of magnitude faster than the most common EP solver.

ei

PDF Web [BibTex]



Active Exploration for Robot Parameter Selection in Episodic Reinforcement Learning

Kroemer, O., Peters, J.

In Proceedings of the 2011 IEEE Symposium on Adaptive Dynamic Programming and Reinforcement Learning (ADPRL 2011), pages: 25-31, IEEE, Piscataway, NJ, USA, IEEE Symposium on Adaptive Dynamic Programming and Reinforcement Learning (ADPRL), April 2011 (inproceedings)

Abstract
As the complexity of robots and other autonomous systems increases, it becomes more important that these systems can adapt and optimize their settings actively. However, such optimization is rarely trivial. Sampling from the system is often expensive in terms of time and other costs, and excessive sampling should therefore be avoided. The parameter space is also usually continuous and multi-dimensional. Given the inherent exploration-exploitation dilemma of the problem, we propose treating it as an episodic reinforcement learning problem. In this reinforcement learning framework, the policy is defined by the system's parameters and the rewards are given by the system's performance. The rewards accumulate during each episode of a task. In this paper, we present a method for efficiently sampling and optimizing in continuous multidimensional spaces. The approach is based on Gaussian process regression, which can represent continuous non-linear mappings from parameters to system performance. We employ an upper confidence bound policy, which explicitly manages the trade-off between exploration and exploitation. Unlike many other policies for this kind of problem, we do not rely on a discretization of the action space. The presented method was evaluated on a real robot. The robot had to learn grasping parameters in order to adapt its grasping execution to different objects. The proposed method was also tested on a more general gain tuning problem. The results of the experiments show that the presented method can quickly determine suitable parameters and is applicable to real online learning applications.
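The core loop — Gaussian process regression plus an upper-confidence-bound policy over a continuous parameter space — can be sketched as follows (a 1-D toy parameter and reward, not the robot grasping setup of the paper):

```python
import numpy as np

def rbf_kernel(A, B, length=0.3):
    """Squared-exponential kernel between two sets of 1-D parameters."""
    d = A[:, None] - B[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def gp_ucb_next(X, y, candidates, beta=2.0, noise=1e-3):
    """Pick the candidate maximizing mean + beta * std under the
    Gaussian-process posterior fit to the episodes seen so far."""
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(candidates, X)
    alpha = np.linalg.solve(K, y)
    mean = Ks @ alpha
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    ucb = mean + beta * np.sqrt(np.maximum(var, 0.0))
    return candidates[np.argmax(ucb)]

reward = lambda p: -(p - 0.6) ** 2           # unknown performance, peak at 0.6
X = np.array([0.1, 0.9]); y = reward(X)
for _ in range(10):                          # episodic learning loop
    p = gp_ucb_next(X, y, np.linspace(0, 1, 101))
    X, y = np.append(X, p), np.append(y, reward(p))
best = X[np.argmax(y)]
print(best)
```

No discretization of the action space is needed beyond the candidate grid used to maximize the acquisition function.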

ei

PDF Web DOI [BibTex]



Using brain–computer interfaces to induce neural plasticity and restore function

Grosse-Wentrup, M., Mattia, D., Oweiss, K.

Journal of Neural Engineering, 8(2):1-5, April 2011 (article)

Abstract
Analyzing neural signals and providing feedback in real-time is one of the core characteristics of a brain-computer interface (BCI). As this feature may be employed to induce neural plasticity, utilizing BCI-technology for therapeutic purposes is increasingly gaining popularity in the BCI-community. In this review, we discuss the state-of-the-art of research on this topic, address the principles of and challenges in inducing neural plasticity by means of a BCI, and delineate the problems of study design and outcome evaluation arising in this context. The review concludes with a list of open questions and recommendations for future research in this field.

ei

PDF DOI [BibTex]



EPIBLASTER-fast exhaustive two-locus epistasis detection strategy using graphical processing units

Kam-Thong, T., Czamara, D., Tsuda, K., Borgwardt, K., Lewis, C., Erhardt-Lehmann, A., Hemmer, B., Rieckmann, P., Daake, M., Weber, F., Wolf, C., Ziegler, A., Pütz, B., Holsboer, F., Schölkopf, B., Müller-Myhsok, B.

European Journal of Human Genetics, 19(4):465-471, April 2011 (article)

Abstract
Detection of epistatic interaction between loci has been postulated to provide a more in-depth understanding of the complex biological and biochemical pathways underlying human diseases. Studying the interaction between two loci is the natural progression following traditional and well-established single locus analysis. However, the added costs and time duration required for the computation involved have thus far deterred researchers from pursuing a genome-wide analysis of epistasis. In this paper, we propose a method allowing such analysis to be conducted very rapidly. The method, dubbed EPIBLASTER, is applicable to case–control studies and consists of a two-step process in which the difference in Pearson's correlation coefficients is computed between controls and cases across all possible SNP pairs as an indication of significant interaction warranting further analysis. For the subset of interactions deemed potentially significant, a second-stage analysis is performed using the likelihood ratio test from the logistic regression to obtain the P-value for the estimated coefficients of the individual effects and the interaction term. The algorithm is implemented using the parallel computational capability of commercially available graphical processing units to greatly reduce the computation time involved. In the current setup and example data sets (211 cases, 222 controls, 299468 SNPs; and 601 cases, 825 controls, 291095 SNPs), this coefficient evaluation stage can be completed in roughly 1 day. Our method allows for exhaustive and rapid detection of significant SNP pair interactions without imposing significant marginal effects of the single loci involved in the pair.
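The stage-1 screen reduces to a difference of correlation matrices, which is why it vectorizes so well on GPUs; a CPU/NumPy sketch on synthetic genotypes (the sizes and the planted interaction below are made up):

```python
import numpy as np

def epistasis_screen(cases, controls):
    """Stage-1 EPIBLASTER-style screen: for every SNP pair, the difference
    between Pearson correlations computed in cases and in controls.
    Rows are individuals, columns are SNPs (genotype codes 0/1/2)."""
    r_cases = np.corrcoef(cases, rowvar=False)
    r_controls = np.corrcoef(controls, rowvar=False)
    return r_cases - r_controls

rng = np.random.default_rng(0)
n, snps = 400, 5
controls = rng.integers(0, 3, size=(n, snps)).astype(float)
cases = rng.integers(0, 3, size=(n, snps)).astype(float)
cases[:, 1] = cases[:, 0]    # toy interaction: SNPs 0 and 1 correlated in cases only
delta = epistasis_screen(cases, controls)
i, j = divmod(np.argmax(np.abs(np.triu(delta, k=1))), snps)
print(i, j)
```

Pairs with a large |delta| would then go to the second-stage logistic-regression test.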

ei

PDF PDF DOI [BibTex]



Model learning for robot control: a survey

Nguyen-Tuong, D., Peters, J.

Cognitive Processing, 12(4):319-340, April 2011 (article)

Abstract
Models are among the most essential tools in robotics, such as kinematics and dynamics models of the robot's own body and controllable external objects. It is widely believed that intelligent mammals also rely on internal models in order to generate their actions. However, while classical robotics relies on manually generated models that are based on human insights into physics, future autonomous, cognitive robots need to be able to automatically generate models that are based on information which is extracted from the data streams accessible to the robot. In this paper, we survey the progress in model learning with a strong focus on robot control on a kinematic as well as dynamical level. Here, a model describes essential information about the behavior of the environment and the influence of an agent on this environment. In the context of model-based learning control, we view the model from three different perspectives. First, we need to study the different possible model learning architectures for robotics. Second, we discuss what kind of problems these architectures and the domain of robotics imply for the applicable learning methods. From this discussion, we deduce future directions of real-time learning algorithms. Third, we show where these scenarios have been used successfully in several case studies.

ei

PDF PDF DOI [BibTex]



Relative Entropy Inverse Reinforcement Learning

Boularias, A., Kober, J., Peters, J.

In JMLR Workshop and Conference Proceedings Volume 15: AISTATS 2011, pages: 182-189, (Editors: Gordon, G. , D. Dunson, M. Dudík ), MIT Press, Cambridge, MA, USA, Fourteenth International Conference on Artificial Intelligence and Statistics, April 2011 (inproceedings)

Abstract
We consider the problem of imitation learning where the examples, demonstrated by an expert, cover only a small part of a large state space. Inverse Reinforcement Learning (IRL) provides an efficient tool for generalizing the demonstration, based on the assumption that the expert is optimally acting in a Markov Decision Process (MDP). Most of the past work on IRL requires that a (near)-optimal policy can be computed for different reward functions. However, this requirement can hardly be satisfied in systems with a large, or continuous, state space. In this paper, we propose a model-free IRL algorithm, where the relative entropy between the empirical distribution of the state-action trajectories under a uniform policy and their distribution under the learned policy is minimized by stochastic gradient descent. We compare this new approach to well-known IRL algorithms using approximate MDP models. Empirical results on simulated car racing, gridworld and ball-in-a-cup problems show that our approach is able to learn good policies from a small number of demonstrations.

ei

PDF Web [BibTex]



Removing noise from astronomical images using a pixel-specific noise model

Burger, H., Schölkopf, B., Harmeling, S.

In pages: 8, (Editors: H Lensch and SL Narasimhan and ME Testorf), IEEE, Piscataway, NJ, USA, IEEE International Conference on Computational Photography (ICCP), April 2011 (inproceedings)

Abstract
For digital photographs of astronomical objects, where exposure times are usually long and ISO settings high, the so-called dark-current is a significant source of noise. Dark-current refers to thermally generated electrons and is therefore present even in the absence of light. This paper presents a novel approach for denoising astronomical images that have been corrupted by dark-current noise. Our method relies on a probabilistic description of the dark-current of each pixel of a given camera. The noise model is then combined with an image prior which is adapted to astronomical images. In a laboratory environment, we use a black and white CCD camera containing a cooling unit and show that our method is superior to existing methods in terms of root mean squared error. Furthermore, we show that our method is practically relevant by providing visually more appealing results on astronomical photographs taken with a single lens reflex CMOS camera.
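The calibration part of such a pixel-specific model can be sketched as follows (synthetic dark-current map and frames; the paper additionally combines the per-pixel noise model with an image prior rather than a plain subtraction):

```python
import numpy as np

def calibrate_dark_current(dark_frames):
    """Per-pixel mean and standard deviation of the dark current,
    estimated from a stack of exposures taken with the shutter closed."""
    stack = np.stack(dark_frames)
    return stack.mean(axis=0), stack.std(axis=0)

def denoise(image, dark_mean):
    """Simplest pixel-specific correction: subtract each pixel's expected
    dark current and clip to nonnegative intensities."""
    return np.clip(image - dark_mean, 0.0, None)

rng = np.random.default_rng(1)
dark = rng.gamma(shape=2.0, scale=3.0, size=(16, 16))   # hypothetical hot-pixel map
dark_frames = [rng.poisson(dark).astype(float) for _ in range(50)]
scene = np.zeros((16, 16)); scene[8, 8] = 100.0         # a single star
noisy = scene + rng.poisson(dark).astype(float)
mean, std = calibrate_dark_current(dark_frames)
clean = denoise(noisy, mean)
print(float(np.abs(clean - scene).mean()), float(np.abs(noisy - scene).mean()))
```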

ei

Web DOI [BibTex]



Critical issues in state-of-the-art brain–computer interface signal processing

Krusienski, D., Grosse-Wentrup, M., Galan, F., Coyle, D., Miller, K., Forney, E., Anderson, C.

Journal of Neural Engineering, 8(2):1-8, April 2011 (article)

Abstract
This paper reviews several critical issues facing signal processing for brain–computer interfaces (BCIs) and suggests several recent approaches that should be further examined. The topics were selected based on discussions held during the 4th International BCI Meeting at a workshop organized to review and evaluate the current state of, and issues relevant to, feature extraction and translation of field potentials for BCIs. The topics presented in this paper include the relationship between electroencephalography and electrocorticography, novel features for performance prediction, time-embedded signal representations, phase information, signal non-stationarity, and unsupervised adaptation.

ei

PDF DOI [BibTex]



Point-and-Click Cursor Control With an Intracortical Neural Interface System by Humans With Tetraplegia

Kim, S., Simeral, J. D., Hochberg, L. R., Donoghue, J. P., Friehs, G. M., Black, M. J.

IEEE Transactions on Neural Systems and Rehabilitation Engineering, 19(2):193-203, April 2011 (article)

Abstract
We present a point-and-click intracortical neural interface system (NIS) that enables humans with tetraplegia to volitionally move a 2D computer cursor in any desired direction on a computer screen, hold it still and click on the area of interest. This direct brain-computer interface extracts both discrete (click) and continuous (cursor velocity) signals from a single small population of neurons in human motor cortex. A key component of this system is a multi-state probabilistic decoding algorithm that simultaneously decodes neural spiking activity and outputs either a click signal or the velocity of the cursor. The algorithm combines a linear classifier, which determines whether the user is intending to click or move the cursor, with a Kalman filter that translates the neural population activity into cursor velocity. We present a paradigm for training the multi-state decoding algorithm using neural activity observed during imagined actions. Two human participants with tetraplegia (paralysis of the four limbs) performed a closed-loop radial target acquisition task using the point-and-click NIS over multiple sessions. We quantified point-and-click performance using various human-computer interaction measurements for pointing devices. We found that participants were able to control the cursor motion accurately and click on specified targets with a small error rate (< 3% in one participant). This study suggests that signals from a small ensemble of motor cortical neurons (~40) can be used for natural point-and-click 2D cursor control of a personal computer.
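The decoder structure — a linear click classifier gating a Kalman filter over cursor velocity — can be sketched as follows (all matrices and thresholds are illustrative, not the fitted values from the study):

```python
import numpy as np

class CursorDecoder:
    """Toy multi-state decoder: a linear classifier gates between 'click'
    and 'move'; a Kalman filter maps firing rates to 2-D cursor velocity."""
    def __init__(self, H, w, q=0.01, r=0.1):
        self.H, self.w = H, w              # observation model, click weights
        self.x = np.zeros(2)               # velocity estimate
        self.P = np.eye(2)
        self.Q, self.R = q * np.eye(2), r * np.eye(H.shape[0])

    def step(self, z):
        if self.w @ z > 1.0:               # classifier says "click"
            return "click", np.zeros(2)
        self.P = self.P + self.Q           # predict (random-walk velocity)
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(2) - K @ self.H) @ self.P
        return "move", self.x

H = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])   # 3 neurons, 2-D velocity
dec = CursorDecoder(H, w=np.array([0.0, 0.0, 1.0]))
for _ in range(20):                                   # constant rightward intent
    state, v = dec.step(np.array([0.5, 0.0, 0.0]))
print(state, v)
```

With a constant observation, the velocity estimate settles near the least-squares solution of the observation model, illustrating how continuous intent and the discrete click signal are read from the same population.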

ps

pdf publisher's site pub med link (url) Project Page [BibTex]



A Blind Deconvolution Approach for Improving the Resolution of Cryo-EM Density Maps

Hirsch, M., Schölkopf, B., Habeck, M.

Journal of Computational Biology, 18(3):335-346, March 2011 (article)

Abstract
Cryo-electron microscopy (cryo-EM) plays an increasingly prominent role in structure elucidation of macromolecular assemblies. Advances in experimental instrumentation and computational power have spawned numerous cryo-EM studies of large biomolecular complexes resulting in the reconstruction of three-dimensional density maps at intermediate and low resolution. In this resolution range, identification and interpretation of structural elements and modeling of biomolecular structure with atomic detail becomes problematic. In this article, we present a novel algorithm that enhances the resolution of intermediate- and low-resolution density maps. Our underlying assumption is to model the low-resolution density map as a blurred and possibly noise-corrupted version of an unknown high-resolution map that we seek to recover by deconvolution. By exploiting the nonnegativity of both the high-resolution map and blur kernel, we derive multiplicative updates reminiscent of those used in nonnegative matrix factorization. Our framework allows for easy incorporation of additional prior knowledge such as smoothness and sparseness, on both the sharpened density map and the blur kernel. A probabilistic formulation enables us to derive updates for the hyperparameters; therefore, our approach has no parameter that needs adjustment. We apply the algorithm to simulated three-dimensional electron microscopic data. We show that our method provides better resolved density maps when compared with B-factor sharpening, especially in the presence of noise. Moreover, our method can use additional information provided by homologous structures, which helps to improve the resolution even further.
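The multiplicative updates are analogous to those of nonnegative matrix factorization; a 1-D sketch of the idea (the paper operates on 3-D density maps and also derives hyperparameter updates):

```python
import numpy as np

def blind_deconvolve(y, n_iter=500, ksize=5):
    """Multiplicative-update blind deconvolution of a 1-D nonnegative
    signal y = x * k with x, k >= 0, alternating NMF-style updates that
    preserve nonnegativity (a toy stand-in for the 3-D map sharpening)."""
    rng = np.random.default_rng(0)
    x = rng.random(len(y) - ksize + 1) + 0.1
    k = rng.random(ksize) + 0.1
    eps = 1e-12
    for _ in range(n_iter):
        r = np.convolve(x, k)                        # current reconstruction
        x *= np.correlate(y, k, mode="valid") / (np.correlate(r, k, mode="valid") + eps)
        k *= np.correlate(y, x, mode="valid") / (np.correlate(r, x, mode="valid") + eps)
        k /= k.sum()                                 # fix the scale ambiguity
    return x, k

truth = np.zeros(40); truth[[10, 25]] = [1.0, 2.0]   # sharp 'density'
kernel = np.array([0.1, 0.2, 0.4, 0.2, 0.1])         # blur
y = np.convolve(truth, kernel)
x, k = blind_deconvolve(y)
print(np.argmax(x), len(k))
```

Normalizing the kernel each iteration resolves the scale ambiguity between map and blur; a shift ambiguity within the kernel support remains, as in any blind scheme.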

ei

Web DOI [BibTex]



Dynamics of excitable neural networks with heterogeneous connectivity

Chavez, M., Besserve, M., Le Van Quyen, M.

Progress in Biophysics and Molecular Biology, 105(1-2):29-33, March 2011 (article)

Abstract
A central issue of neuroscience is to understand how neural units integrate internal and external signals to create coherent states. Recently, it has been shown that the sensitivity and dynamic range of neural assemblies are optimal at a critical coupling among their elements. Complex architectures of connections seem to play a constructive role in the reliable coordination of neural units. Here we show that the synchronizability and sensitivity of excitable neural networks can be tuned by diversity in the connection strengths. We illustrate our findings for weighted networks with regular, random and complex topologies. Additional comparisons of real brain networks support previous studies suggesting that heterogeneity in the connectivity may play a constructive role in information processing. These findings provide insights into the relationship between structure and function of neural circuits.

ei

PDF DOI [BibTex]



Combining computational modeling with sparse and low-resolution data

Habeck, M., Nilges, M.

Journal of Structural Biology, 173(3):419, March 2011 (article)

Abstract
Structural biology is moving into a new era by shifting its focus from static structures of single proteins and protein domains to large and often fragile multi-component complexes. Over the past decade, structural genomics initiatives aimed to fill the voids in fold space and to provide a census of all protein structures. Completion of such an atlas of protein structures is still ongoing, but not sufficient for a mechanistic understanding of how living cells function. One of the great challenges is to bridge the gap between atomic resolution detail and the more fuzzy description of the molecular complexes that govern cellular processes or host–pathogen interactions. We want to move from cartoon-like representations of multi-component complexes to atomic resolution structures. To characterize the structures of the increasingly large and often flexible complexes, high resolution structure determination (as was possible for example for the ribosome) will likely stay the exception. Rather, data from many different methods providing information on the shape (X-ray crystallography, electron microscopy, SAXS, AFM, etc.) or on contacts between components (mass spectrometry, co-purification, or spectroscopic methods) need to be integrated with prior structural knowledge to build a consistent model of the complex. A particular difficulty is that the ratio between the number of conformational degrees of freedom and the number of measurements becomes unfavorable as we work with large complexes: data become increasingly sparse. Structural characterization of large molecular assemblies often involves a loss in resolution as well as in number and quality of data. We are good at solving structures of single proteins, but classical high-resolution structure determination by X-ray crystallography and NMR spectroscopy is often facing its limits as we move to higher molecular mass and increased flexibility. 
Therefore, structural studies on large complexes rely on new experimental techniques that complement the classical high resolution methods. But computational approaches are also becoming more important when it comes to integrating and analyzing structural information of often heterogeneous nature. Cryoelectron microscopy may serve as an example of how experimental methods can benefit from computation. Low-resolution data from cryo-EM show their true power when combined with modeling and bioinformatics methods such as rigid docking and secondary structure hunting. Even in high resolution structure determination, molecular modeling is always necessary to calculate structures from data, to complement the missing information and to evaluate and score the obtained structures. With sparse data, all three of these aspects become increasingly difficult, and the quality of the modeling approach becomes more important. With data alone, algorithms may not converge any more; scoring against data becomes meaningless; and the potential energy function becomes central, not only as a help in making algorithms converge but also to score and evaluate the structures. In addition to the sparsity of the data, hybrid approaches bring the additional difficulty that the different sources of data may have rather different quality, and may in the extreme case be incompatible with each other. In addition to scoring the structures, modeling should also score in some way the data going into the calculation. This special issue brings together some of the numerous efforts to solve the problems that come from sparsity of data and from integrating data from different sources in hybrid approaches. The methods range from predominantly force-field based to mostly data based. Systems of very different sizes, ranging from single domains to multi-component complexes, are treated. We hope that you will enjoy reading the issue and find it a useful and inspiring resource.

ei

PDF DOI [BibTex]



Batch-Mode Active-Learning Methods for the Interactive Classification of Remote Sensing Images

Demir, B., Persello, C., Bruzzone, L.

IEEE Transactions on Geoscience and Remote Sensing, 49(3):1014-1031, March 2011 (article)

Abstract
This paper investigates different batch-mode active-learning (AL) techniques for the classification of remote sensing (RS) images with support vector machines. This is done by generalizing techniques defined for binary classifiers to the multiclass problem. The investigated techniques exploit different query functions, which are based on the evaluation of two criteria: uncertainty and diversity. The uncertainty criterion is associated with the confidence of the supervised algorithm in correctly classifying the considered sample, while the diversity criterion aims at selecting a set of unlabeled samples that are as diverse (i.e., distant from one another) as possible, thus reducing the redundancy among the selected samples. The combination of the two criteria results in the selection of the potentially most informative set of samples at each iteration of the AL process. Moreover, we propose a novel query function that is based on a kernel-clustering technique for assessing the diversity of samples and a new strategy for selecting the most informative representative sample from each cluster. The investigated and proposed techniques are theoretically and experimentally compared with state-of-the-art methods adopted for RS applications. This is accomplished by considering very high resolution multispectral and hyperspectral images. By this comparison, we observed that the proposed method resulted in better accuracy than the other investigated and state-of-the-art methods on both of the considered data sets. Furthermore, we derived some guidelines on the design of AL systems for the classification of different types of RS images.
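A stripped-down version of the uncertainty-plus-diversity batch selection (plain k-means in input space stands in for the paper's kernel clustering; the decision values are synthetic):

```python
import numpy as np

def select_batch(scores, X, h, batch=3):
    """Rank unlabeled samples by uncertainty (|decision value| near 0),
    keep the h most uncertain, cluster them with k-means, and take the
    most uncertain sample per cluster so the batch is also diverse."""
    uncertain = np.argsort(np.abs(scores))[:h]
    pts = X[uncertain]
    rng = np.random.default_rng(0)
    centers = pts[rng.choice(len(pts), batch, replace=False)]
    for _ in range(20):                                  # k-means iterations
        labels = np.argmin(((pts[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for c in range(batch):
            if np.any(labels == c):
                centers[c] = pts[labels == c].mean(axis=0)
    chosen = []
    for c in range(batch):
        members = uncertain[labels == c]
        if len(members):
            chosen.append(members[np.argmin(np.abs(scores[members]))])
    return np.array(chosen)

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 2))
scores = X[:, 0]                     # toy SVM decision values: boundary at x0 = 0
batch = select_batch(scores, X, h=30, batch=3)
print(batch)
```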

ei

Web DOI [BibTex]



Statistical mechanics analysis of sparse data

Habeck, M.

Journal of Structural Biology, 173(3):541-548, March 2011 (article)

Abstract
Inferential structure determination uses Bayesian theory to combine experimental data with prior structural knowledge into a posterior probability distribution over protein conformational space. The posterior distribution encodes everything one can say objectively about the native structure in the light of the available data and additional prior assumptions and can be searched for structural representatives. Here an analogy is drawn between the posterior distribution and the canonical ensemble of statistical physics. A statistical mechanics analysis assesses the complexity of a structure calculation globally in terms of ensemble properties. Analogs of the free energy and density of states are introduced; partition functions evaluate the consistency of prior assumptions with data. Critical behavior is observed with dwindling restraint density, which impairs structure determination with too sparse data. However, prior distributions with improved realism ameliorate the situation by lowering the critical number of observations. An in-depth analysis of various experimentally accessible structural parameters and force field terms will facilitate a statistical approach to protein structure determination with sparse data that avoids bias as much as possible.

ei

PDF DOI [BibTex]



Large Scale Bayesian Inference and Experimental Design for Sparse Linear Models

Seeger, M., Nickisch, H.

SIAM Journal on Imaging Sciences, 4(1):166-199, March 2011 (article)

Abstract
Many problems of low-level computer vision and image processing, such as denoising, deconvolution, tomographic reconstruction or super-resolution, can be addressed by maximizing the posterior distribution of a sparse linear model (SLM). We show how higher-order Bayesian decision-making problems, such as optimizing image acquisition in magnetic resonance scanners, can be addressed by querying the SLM posterior covariance, unrelated to the density's mode. We propose a scalable algorithmic framework, with which SLM posteriors over full, high-resolution images can be approximated for the first time, solving a variational optimization problem which is convex iff posterior mode finding is convex. These methods successfully drive the optimization of sampling trajectories for real-world magnetic resonance imaging through Bayesian experimental design, which has not been attempted before. Our methodology provides new insight into similarities and differences between sparse reconstruction and approximate Bayesian inference, and has important implications for compressive sensing of real-world images.

ei

Web DOI [BibTex]



Learning grasp affordance densities

Detry, R., Kraft, D., Kroemer, O., Peters, J., Krüger, N., Piater, J.

Paladyn: Journal of Behavioral Robotics, 2(1):1-17, March 2011 (article)

Abstract
We address the issue of learning and representing object grasp affordance models. We model grasp affordances with continuous probability density functions (grasp densities) which link object-relative grasp poses to their success probability. The underlying function representation is nonparametric and relies on kernel density estimation to provide a continuous model. Grasp densities are learned and refined from exploration, by letting a robot "play" with an object in a sequence of grasp-and-drop actions: the robot uses visual cues to generate a set of grasp hypotheses, which it then executes, recording their outcomes. When a satisfactory amount of grasp data is available, an importance-sampling algorithm turns it into a grasp density. We evaluate our method in a largely autonomous learning experiment, run on three objects with distinct shapes. The experiment shows how learning increases success rates. It also measures the success rate of grasps chosen to maximize the probability of success, given reaching constraints.
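The nonparametric representation is essentially a kernel density estimate over successful grasp poses; a sketch in R^2 rather than the full 6-D pose space (all data below are synthetic):

```python
import numpy as np

def grasp_density(successful_poses, bandwidth=0.1):
    """Kernel density estimate over grasp parameters built from the poses
    whose executions succeeded; returns a function evaluating the
    (unnormalized, isotropic-Gaussian) density at a query pose.
    A real grasp density lives on SE(3); this sketch uses R^2."""
    def density(q):
        d2 = ((successful_poses - q) ** 2).sum(axis=1)
        return np.exp(-0.5 * d2 / bandwidth ** 2).sum()
    return density

rng = np.random.default_rng(3)
# hypothetical grasp-and-drop log: poses near (1, 0) tended to succeed
successes = np.array([1.0, 0.0]) + 0.05 * rng.normal(size=(40, 2))
density = grasp_density(successes)
good, bad = density(np.array([1.0, 0.0])), density(np.array([0.0, 1.0]))
print(good, bad)
```

Maximizing this density over reachable poses corresponds to choosing the grasp with the highest predicted success probability.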

ei

PDF DOI [BibTex]



A Database and Evaluation Methodology for Optical Flow

Baker, S., Scharstein, D., Lewis, J. P., Roth, S., Black, M. J., Szeliski, R.

International Journal of Computer Vision, 92(1):1-31, March 2011 (article)

Abstract
The quantitative evaluation of optical flow algorithms by Barron et al. (1994) led to significant advances in performance. The challenges for optical flow algorithms today go beyond the datasets and evaluation methods proposed in that paper. Instead, they center on problems associated with complex natural scenes, including nonrigid motion, real sensor noise, and motion discontinuities. We propose a new set of benchmarks and evaluation methods for the next generation of optical flow algorithms. To that end, we contribute four types of data to test different aspects of optical flow algorithms: (1) sequences with nonrigid motion where the ground-truth flow is determined by tracking hidden fluorescent texture, (2) realistic synthetic sequences, (3) high frame-rate video used to study interpolation error, and (4) modified stereo sequences of static scenes. In addition to the average angular error used by Barron et al., we compute the absolute flow endpoint error, measures for frame interpolation error, improved statistics, and results at motion discontinuities and in textureless regions. In October 2007, we published the performance of several well-known methods on a preliminary version of our data to establish the current state of the art. We also made the data freely available on the web at http://vision.middlebury.edu/flow/ . Subsequently a number of researchers have uploaded their results to our website and published papers using the data. A significant improvement in performance has already been achieved. In this paper we analyze the results obtained to date and draw a large number of conclusions from them.

ps

pdf pdf from publisher Middlebury Flow Evaluation Website [BibTex]



Role of expertise and contralateral symmetry in the diagnosis of pneumoconiosis: an experimental study

Jampani, V., Vaidya, V., Sivaswamy, J., Tourani, K. L.

In Proc. SPIE 7966, Medical Imaging 2011: Image Perception, Observer Performance, and Technology Assessment, Florida, March 2011 (inproceedings)

Abstract
Pneumoconiosis, a lung disease caused by the inhalation of dust, is mainly diagnosed using chest radiographs. The effects of using contralateral symmetric (CS) information present in chest radiographs in the diagnosis of pneumoconiosis are studied using an eye tracking experimental study. The role of expertise and the influence of CS information on the performance of readers with different expertise levels are also of interest. Experimental subjects ranging from novices and medical students to staff radiologists were presented with 17 double and 16 single lung images, and were asked to give profusion ratings for each lung zone. Eye movements and the time for their diagnosis were also recorded. A Kruskal-Wallis test (χ2(6) = 13.38, p = .038) showed that the observer error (average sum of absolute differences) in double lung images differed significantly across the different expertise categories when considering all the participants. A Wilcoxon signed-rank test indicated that the observer error was significantly higher for single-lung images (Z = 3.13, p < .001) than for the double-lung images for all the participants. A Mann-Whitney test (U = 28, p = .038) showed that the differential error between single and double lung images is significantly higher in doctors [staff & residents] than in non-doctors [others]. Thus, expertise and CS information play a significant role in the diagnosis of pneumoconiosis. CS information helps in diagnosing pneumoconiosis by reducing the general tendency to give lower profusion ratings. Training and experience appear to play important roles in learning to use the CS information present in the chest radiographs.

ps

url link (url) [BibTex]



Client–Server Multitask Learning From Distributed Datasets

Dinuzzo, F., Pillonetto, G., De Nicolao, G.

IEEE Transactions on Neural Networks, 22(2):290-303, February 2011 (article)

Abstract
A client-server architecture to simultaneously solve multiple learning tasks from distributed datasets is described. In this architecture, each client corresponds to an individual learning task and the associated dataset of examples. The goal of the architecture is to perform information fusion from multiple datasets while preserving the privacy of individual data. The role of the server is to collect data in real time from the clients and codify the information in a common database. This information can be used by all the clients to solve their individual learning tasks, so that each client can exploit the information content of all the datasets without actually having access to the private data of others. The proposed algorithmic framework, based on regularization and kernel methods, uses a suitable class of “mixed effect” kernels. The methodology is illustrated through a simulated recommendation system, as well as an experiment involving pharmacological data coming from a multicentric clinical trial.
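The "mixed effect" kernel family mentioned above can be illustrated with a small sketch: a convex blend of a kernel shared across all tasks and a task-specific term active only when two examples belong to the same client. The RBF base kernel and the blend weight are assumptions for illustration, not the paper's exact construction:

```python
import numpy as np

# Sketch of a "mixed effect" kernel for multitask learning: a convex blend
# of a shared kernel and a same-task-only kernel. lam=1 pools all clients;
# lam=0 makes every client learn in isolation.
def rbf(x, y, gamma=1.0):
    return float(np.exp(-gamma * (x - y) ** 2))

def mixed_effect_kernel(x, task_x, y, task_y, lam=0.5, gamma=1.0):
    shared = rbf(x, y, gamma)                                   # common effect
    individual = rbf(x, y, gamma) if task_x == task_y else 0.0  # task-specific
    return lam * shared + (1.0 - lam) * individual
```

Because the kernel couples examples only through the shared term when tasks differ, a server holding the aggregated kernel information can serve all clients without exposing individual raw data.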

ei

DOI [BibTex]



Extraction of functional information from ongoing brain electrical activity: Extraction en temps-réel d’informations fonctionnelles à partir de l’activité électrique cérébrale

Besserve, M., Martinerie, J.

IRBM, 32(1):27-34, February 2011 (article)

Abstract
The modern analysis of multivariate electrical brain signals requires advanced statistical tools to automatically extract and quantify their information content. These tools include machine learning techniques and information theory. They are currently used both in basic neuroscience and challenging applications such as brain computer interfaces. We review here how these methods have been used at the Laboratoire d’Électroencéphalographie et de Neurophysiologie Appliquée (LENA) to develop a general tool for the real time analysis of functional brain signals. We then give some perspectives on how these tools can help understanding the biological mechanisms of information processing.

ei

PDF DOI [BibTex]


Learning Visual Representations for Perception-Action Systems

Piater, J., Jodogne, S., Detry, R., Kraft, D., Krüger, N., Kroemer, O., Peters, J.

International Journal of Robotics Research, 30(3):294-307, February 2011 (article)

Abstract
We discuss vision as a sensory modality for systems that interact flexibly with uncontrolled environments. Instead of trying to build a generic vision system that produces task-independent representations, we argue in favor of task-specific, learnable representations. This concept is illustrated by two examples of our own work. First, our RLVC algorithm performs reinforcement learning directly on the visual input space. To make this very large space manageable, RLVC interleaves the reinforcement learner with a supervised classification algorithm that seeks to split perceptual states so as to reduce perceptual aliasing. This results in an adaptive discretization of the perceptual space based on the presence or absence of visual features. Its extension, RLJC, additionally handles continuous action spaces. In contrast to the minimalistic visual representations produced by RLVC and RLJC, our second method learns structural object models for robust object detection and pose estimation by probabilistic inference. To these models, the method associates grasp experiences autonomously learned by trial and error. These experiences form a non-parametric representation of grasp success likelihoods over gripper poses, which we call a grasp density. Thus, object detection in a novel scene simultaneously produces suitable grasping options.

ei

PDF Web DOI [BibTex]



Multi-way set enumeration in weight tensors

Georgii, E., Tsuda, K., Schölkopf, B.

Machine Learning, 82(2):123-155, February 2011 (article)

Abstract
The analysis of n-ary relations receives attention in many different fields, for instance biology, web mining, and social studies. In the basic setting, there are n sets of instances, and each observation associates n instances, one from each set. A common approach to explore these n-way data is the search for n-set patterns, the n-way equivalent of itemsets. More precisely, an n-set pattern consists of specific subsets of the n instance sets such that all possible associations between the corresponding instances are observed in the data. In contrast, traditional itemset mining approaches consider only two-way data, namely items versus transactions. The n-set patterns provide a higher-level view of the data, revealing associative relationships between groups of instances. Here, we generalize this approach in two respects. First, we tolerate missing observations to a certain degree, that means we are also interested in n-sets where most (although not all) of the possible associations have been recorded in the data. Second, we take association weights into account. In fact, we propose a method to enumerate all n-sets that satisfy a minimum threshold with respect to the average association weight. Technically, we solve the enumeration task using a reverse search strategy, which allows for effective pruning of the search space. In addition, our algorithm provides a ranking of the solutions and can consider further constraints. We show experimental results on artificial and real-world datasets from different domains.

ei

PDF DOI [BibTex]



Learning, planning, and control for quadruped locomotion over challenging terrain

Kalakrishnan, M., Buchli, J., Pastor, P., Mistry, M., Schaal, S.

International Journal of Robotics Research, 30(2):236-258, February 2011 (article)

am

[BibTex]



A graphical model framework for decoding in the visual ERP-based BCI speller

Martens, S., Mooij, J., Hill, N., Farquhar, J., Schölkopf, B.

Neural Computation, 23(1):160-182, January 2011 (article)

Abstract
We present a graphical model framework for decoding in the visual ERP-based speller system. The proposed framework allows researchers to build generative models from which the decoding rules are obtained in a straightforward manner. We suggest two models for generating brain signals conditioned on the stimulus events. Both models incorporate letter frequency information but assume different dependencies between brain signals and stimulus events. For both models, we derive decoding rules and perform a discriminative training. We show on real visual speller data how decoding performance improves by incorporating letter frequency information and using a more realistic graphical model for the dependencies between the brain signals and the stimulus events. Furthermore, we discuss how the standard approach to decoding can be seen as a special case of the graphical model framework. The letter also gives more insight into the discriminative approach for decoding in the visual speller system.
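The role of letter-frequency information in decoding can be sketched with a toy Bayes-rule computation. This is not the paper's actual generative models, and all numbers are illustrative:

```python
import numpy as np

# Toy sketch: combine a letter-frequency prior with per-letter likelihoods
# of the observed ERP responses via Bayes' rule, then decode the MAP letter.
letters = ["e", "t", "q"]
prior = np.array([0.5, 0.4, 0.1])        # hypothetical letter frequencies
likelihood = np.array([0.2, 0.3, 0.9])   # hypothetical p(ERP scores | letter)
posterior = prior * likelihood
posterior /= posterior.sum()             # normalize
decoded = letters[int(np.argmax(posterior))]
```

Even though "q" fits the signal best here, the frequency prior shifts the decision to "t", which is the kind of effect incorporating letter frequency has on decoding performance.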

ei

Web DOI [BibTex]



Robust Control of Teleoperation Systems Interacting with Viscoelastic Soft Tissues

Cho, JH., Son, HI., Bhattacharjee, T., Lee, DG., Lee, DY.

IEEE Transactions on Control Systems Technology, January 2011 (article) In revision

ei

[BibTex]



Towards Motor Skill Learning for Robotics

Peters, J., Mülling, K., Kober, J., Nguyen-Tuong, D., Kroemer, O.

In Robotics Research, pages: 469-482, (Editors: Pradalier, C. , R. Siegwart, G. Hirzinger), Springer, Berlin, Germany, 14th International Symposium on Robotics Research (ISRR), January 2011 (inproceedings)

Abstract
Learning robots that can acquire new motor skills and refine existing ones have been a long-standing vision of robotics, artificial intelligence, and the cognitive sciences. Early steps towards this goal in the 1980s made clear that reasoning and human insights alone will not suffice. Instead, new hope has been offered by the rise of modern machine learning approaches. However, it has become increasingly clear that off-the-shelf machine learning approaches will not suffice for motor skill learning, as these methods often do not scale into the high-dimensional domains of manipulator and humanoid robotics, nor do they fulfill the real-time requirements of our domain. As an alternative, we propose to break the generic skill learning problem into parts that we can understand well from a robotics point of view. After designing appropriate learning approaches for these basic components, they will serve as the ingredients of a general approach to motor skill learning. In this paper, we discuss our recent and current progress in this direction. To do so, we present our work on learning to control, on learning elementary movements, as well as our steps towards learning complex tasks. We show several evaluations, both using real robots and physically realistic simulations.

ei

PDF Web DOI [BibTex]



Effect of Control Parameters and Haptic Cues on Human Perception for Remote Operations

Son, HI., Bhattacharjee, T., Jung, H., Lee, DY.

Experimental Brain Research, January 2011 (article) Submitted

ei

[BibTex]



Learning Visual Representations for Interactive Systems

Piater, J., Jodogne, S., Detry, R., Kraft, D., Krüger, N., Kroemer, O., Peters, J.

In Robotics Research, pages: 399-416, (Editors: Pradalier, C. , R. Siegwart, G. Hirzinger), Springer, Berlin, Germany, 14th International Symposium on Robotics Research (ISRR), January 2011 (inproceedings)

Abstract
We describe two quite different methods for associating action parameters to visual percepts. Our RLVC algorithm performs reinforcement learning directly on the visual input space. To make this very large space manageable, RLVC interleaves the reinforcement learner with a supervised classification algorithm that seeks to split perceptual states so as to reduce perceptual aliasing. This results in an adaptive discretization of the perceptual space based on the presence or absence of visual features. Its extension RLJC also handles continuous action spaces. In contrast to the minimalistic visual representations produced by RLVC and RLJC, our second method learns structural object models for robust object detection and pose estimation by probabilistic inference. To these models, the method associates grasp experiences autonomously learned by trial and error. These experiences form a non-parametric representation of grasp success likelihoods over gripper poses, which we call a grasp density. Thus, object detection in a novel scene simultaneously produces suitable grasping options.

ei

PDF Web DOI [BibTex]



Joint Genetic Analysis of Gene Expression Data with Inferred Cellular Phenotypes

Parts, L., Stegle, O., Winn, J., Durbin, R.

PLoS Genetics, 7(1):1-10, January 2011 (article)

Abstract
Even within a defined cell type, the expression level of a gene differs in individual samples. The effects of genotype, measured factors such as environmental conditions, and their interactions have been explored in recent studies. Methods have also been developed to identify unmeasured intermediate factors that coherently influence transcript levels of multiple genes. Here, we show how to bring these two approaches together and analyse genetic effects in the context of inferred determinants of gene expression. We use a sparse factor analysis model to infer hidden factors, which we treat as intermediate cellular phenotypes that in turn affect gene expression in a yeast dataset. We find that the inferred phenotypes are associated with locus genotypes and environmental conditions and can explain genetic associations to genes in trans. For the first time, we consider and find interactions between genotype and intermediate phenotypes inferred from gene expression levels, complementing and extending established results.

ei

Web DOI [BibTex]



A Non-Parametric Approach to Dynamic Programming

Kroemer, O., Peters, J.

In Advances in Neural Information Processing Systems 24, pages: 1719-1727, (Editors: J Shawe-Taylor and RS Zemel and P Bartlett and F Pereira and KQ Weinberger), Twenty-Fifth Annual Conference on Neural Information Processing Systems (NIPS), 2011 (inproceedings)

Abstract
In this paper, we consider the problem of policy evaluation for continuous-state systems. We present a non-parametric approach to policy evaluation, which uses kernel density estimation to represent the system. The true form of the value function for this model can be determined, and can be computed using Galerkin's method. Furthermore, we also present a unified view of several well-known policy evaluation methods. In particular, we show that the same Galerkin method can be used to derive Least-Squares Temporal Difference learning, Kernelized Temporal Difference learning, and a discrete-state Dynamic Programming solution, as well as our proposed method. In a numerical evaluation of these algorithms, the proposed approach performed better than the other methods.

ei

PDF Web [BibTex]



Reinforcement Learning with Bounded Information Loss

Peters, J., Mülling, K., Altun, Y.

AIP Conference Proceedings, 1305(1):365-372, 2011 (article)

Abstract
Policy search is a successful approach to reinforcement learning. However, policy improvements often result in the loss of information. Hence, it has been marred by premature convergence and implausible solutions. As first suggested in the context of covariant or natural policy gradients, many of these problems may be addressed by constraining the information loss. In this paper, we continue this path of reasoning and suggest two reinforcement learning methods, i.e., a model-based and a model-free algorithm, that bound the loss in relative entropy while maximizing their return. The resulting methods differ significantly from previous policy gradient approaches and yield an exact update step. They work well on typical reinforcement learning benchmark problems as well as on novel evaluations in robotics. We also show a Bayesian bound motivation of this new approach [8].
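The core of such a relative-entropy-bounded update can be sketched as exponentiated reweighting of sampled returns, which is the closed-form solution of maximizing expected return subject to a KL bound on the change of the sample distribution. The temperature eta and the returns below are illustrative; this is not the paper's full algorithm:

```python
import numpy as np

# Sketch: reweight sampled returns by exp(R / eta). This solves the problem
# of maximizing expected return subject to a bound on the relative entropy
# (KL divergence) between successive sample distributions.
returns = np.array([1.0, 2.0, 5.0, 0.5])  # returns of sampled trajectories
eta = 1.0                                  # temperature set by the KL bound
weights = np.exp(returns / eta)
weights /= weights.sum()                   # normalized update weights
```

Smaller eta yields greedier updates concentrated on the best trajectories; larger eta keeps the new distribution close to the old one, which is how the information loss stays bounded.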

ei

Web DOI [BibTex]



Transfer Learning with Copulas

Lopez-Paz, D., Hernandez-Lobato, J.

In NIPS Workshop on Copulas in Machine Learning, pages: 2, 2011 (inproceedings)

ei

PDF [BibTex]



Denoising sparse noise via online dictionary learning

Cherian, A., Sra, S., Papanikolopoulos, N.

In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2011, pages: 2060 -2063, IEEE, IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2011 (inproceedings)

ei

DOI [BibTex]



PILCO: A Model-Based and Data-Efficient Approach to Policy Search

Deisenroth, MP., Rasmussen, CE.

In Proceedings of the 28th International Conference on Machine Learning, ICML 2011, pages: 465-472, (Editors: L Getoor and T Scheffer), Omnipress, 2011 (inproceedings)

Abstract
In this paper, we introduce PILCO, a practical, data-efficient model-based policy search method. PILCO reduces model bias, one of the key problems of model-based reinforcement learning, in a principled way. By learning a probabilistic dynamics model and explicitly incorporating model uncertainty into long-term planning, PILCO can cope with very little data and facilitates learning from scratch in only a few trials. Policy evaluation is performed in closed form using state-of-the-art approximate inference. Furthermore, policy gradients are computed analytically for policy improvement. We report unprecedented learning efficiency on challenging and high-dimensional control tasks.

ei

Web [BibTex]



Kernel Bayes’ Rule

Fukumizu, K., Song, L., Gretton, A.

In Advances in Neural Information Processing Systems 24, pages: 1737-1745, (Editors: J Shawe-Taylor and RS Zemel and P Bartlett and F Pereira and KQ Weinberger), Curran Associates, Inc., Red Hook, NY, USA, Twenty-Fifth Annual Conference on Neural Information Processing Systems (NIPS), 2011 (inproceedings)

ei

PDF [BibTex]



Optimal Reinforcement Learning for Gaussian Systems

Hennig, P.

In Advances in Neural Information Processing Systems 24, pages: 325-333, (Editors: J Shawe-Taylor and RS Zemel and P Bartlett and F Pereira and KQ Weinberger), Twenty-Fifth Annual Conference on Neural Information Processing Systems (NIPS), 2011 (inproceedings)

Abstract
The exploration-exploitation trade-off is among the central challenges of reinforcement learning. The optimal Bayesian solution is intractable in general. This paper studies to what extent analytic statements about optimal learning are possible if all beliefs are Gaussian processes. A first order approximation of learning of both loss and dynamics, for nonlinear, time-varying systems in continuous time and space, subject to a relatively weak restriction on the dynamics, is described by an infinite-dimensional partial differential equation. An approximate finite-dimensional projection gives an impression of how this result may be helpful.

ei pn

PDF Web [BibTex]



Efficient inference in matrix-variate Gaussian models with iid observation noise

Stegle, O., Lippert, C., Mooij, J., Lawrence, N., Borgwardt, K.

In Advances in Neural Information Processing Systems 24, pages: 630-638, (Editors: J Shawe-Taylor and RS Zemel and P Bartlett and F Pereira and KQ Weinberger), Twenty-Fifth Annual Conference on Neural Information Processing Systems (NIPS), 2011 (inproceedings)

Abstract
Inference in matrix-variate Gaussian models has major applications for multi-output prediction and joint learning of row and column covariances from matrix-variate data. Here, we discuss an approach for efficient inference in such models that explicitly accounts for iid observation noise. Computational tractability can be retained by exploiting the Kronecker product between row and column covariance matrices. Using this framework, we show how to generalize the Graphical Lasso in order to learn a sparse inverse covariance between features while accounting for a low-rank confounding covariance between samples. We show practical utility on applications to biology, where we model covariances with more than 100,000 dimensions. We find greater accuracy in recovering biological network structures and are able to better reconstruct the confounders.
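The Kronecker-product trick underlying the tractability claim can be checked in a few lines. A sketch of the standard identity (A ⊗ B) vec(X) = vec(B X Aᵀ), which lets one work with the small row and column factors instead of the full np × np covariance (dimensions here are illustrative):

```python
import numpy as np

# Verify the Kronecker identity (A kron B) vec(X) = vec(B X A^T), the
# workhorse for efficient matrix-variate Gaussian computations.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))  # column (feature) covariance factor
B = rng.standard_normal((4, 4))  # row (sample) covariance factor
X = rng.standard_normal((4, 3))

vec = lambda M: M.reshape(-1, order="F")  # column-stacking vectorization
lhs = np.kron(A, B) @ vec(X)              # naive route: forms the big matrix
rhs = vec(B @ X @ A.T)                    # small matrix products only
```

The right-hand side never materializes the 12 × 12 Kronecker product, which is what makes covariances with more than 100,000 dimensions feasible in the paper's setting.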

ei

PDF Web [BibTex]



Expectation Propagation for the Estimation of Conditional Bivariate Copulas

Hernandez-Lobato, J., Lopez-Paz, D., Ghahramani, Z.

In NIPS Workshop on Copulas in Machine Learning, pages: 2, 2011 (inproceedings)

ei

PDF [BibTex]



Efficient Similarity Search for Covariance Matrices via the Jensen-Bregman LogDet Divergence

Cherian, A., Sra, S., Banerjee, A., Papanikolopoulos, N.

In IEEE International Conference on Computer Vision, ICCV 2011, pages: 2399-2406, (Editors: DN Metaxas and L Quan and A Sanfeliu and LJ Van Gool), IEEE, 13th International Conference on Computer Vision (ICCV), 2011 (inproceedings)

ei

DOI [BibTex]



Introducing the detection of auditory error responses based on BCI technology for passive interaction

Zander, TO., Klippel, DM., Scherer, R.

In Proceedings of the 5th International Brain–Computer Interface Conference, pages: 252-255, (Editors: GR Müller-Putz and R Scherer and M Billinger and A Kreilinger and V Kaiser and C Neuper), Graz: Verlag der Technischen Universität, 2011 (inproceedings)

ei

[BibTex]



Generalized Dictionary Learning for Symmetric Positive Definite Matrices with Application to Nearest Neighbor Retrieval

Sra, S., Cherian, A.

In Machine Learning and Knowledge Discovery in Databases - European Conference, ECML PKDD 2011, LNCS vol 6913, Part III, pages: 318-332, (Editors: D Gunopulos and T Hofmann and D Malerba and M Vazirgiannis), Springer, 22th European Conference on Machine Learning (ECML), 2011 (inproceedings)

ei

DOI [BibTex]



Restricted Boltzmann machines as useful tool for detecting oscillatory EEG components

Balderas, D., Zander, TO., Bachl, F., Neuper, C., Scherer, R.

In Proceedings of the 5th International Brain–Computer Interface Conference, pages: 68-71, (Editors: GR Müller-Putz and R Scherer and M Billinger and A Kreilinger and V Kaiser and C Neuper), Graz: Verlag der Technischen Universität, 2011 (inproceedings)

ei

[BibTex]



Hierarchical Multitask Structured Output Learning for Large-scale Sequence Segmentation

Görnitz, N., Widmer, C., Zeller, G., Kahles, A., Sonnenburg, S., Rätsch, G.

In Advances in Neural Information Processing Systems 24, pages: 2690-2698, (Editors: J Shawe-Taylor and RS Zemel and P Bartlett and FCN Pereira and KQ Weinberger), Curran Associates, Inc., Red Hook, NY, USA, Twenty-Fifth Annual Conference on Neural Information Processing Systems (NIPS), 2011 (inproceedings)

ei

PDF [BibTex]



Phase transition in the family of p-resistances

Alamgir, M., von Luxburg, U.

In Advances in Neural Information Processing Systems 24, pages: 379-387, (Editors: J Shawe-Taylor and RS Zemel and P Bartlett and F Pereira and KQ Weinberger), Twenty-Fifth Annual Conference on Neural Information Processing Systems (NIPS), 2011 (inproceedings)

Abstract
We study the family of p-resistances on graphs for p ≥ 1. This family generalizes the standard resistance distance. We prove that for any fixed graph, for p=1, the p-resistance coincides with the shortest path distance, for p=2 it coincides with the standard resistance distance, and for p → ∞ it converges to the inverse of the minimal s-t-cut in the graph. Secondly, we consider the special case of random geometric graphs (such as k-nearest neighbor graphs) when the number n of vertices in the graph tends to infinity. We prove that an interesting phase-transition takes place. There exist two critical thresholds p^* and p^** such that if p < p^*, then the p-resistance depends on meaningful global properties of the graph, whereas if p > p^**, it only depends on trivial local quantities and does not convey any useful information. We can explicitly compute the critical values: p^* = 1 + 1/(d-1) and p^** = 1 + 1/(d-2), where d is the dimension of the underlying space (we believe that the fact that there is a small gap between p^* and p^** is an artifact of our proofs). We also relate our findings to Laplacian regularization and suggest using q-Laplacians as regularizers, where q satisfies 1/p^* + 1/q = 1.
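The p = 2 case above (standard effective resistance) is straightforward to compute from the Laplacian pseudoinverse. A small sketch on a 3-node path graph, where the resistance between the endpoints is two unit resistors in series:

```python
import numpy as np

# Effective resistance (the p = 2 case of the p-resistance family) from the
# Moore-Penrose pseudoinverse of the graph Laplacian. Path graph 0-1-2.
A = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])
L = np.diag(A.sum(axis=1)) - A   # graph Laplacian
Lp = np.linalg.pinv(L)           # pseudoinverse (L is singular)

def effective_resistance(s, t):
    return Lp[s, s] - 2.0 * Lp[s, t] + Lp[t, t]

r02 = effective_resistance(0, 2)  # two unit edges in series -> resistance 2
```

For p = 1 the same pair distance would instead be the shortest path length (also 2 here); the phase-transition result concerns how these notions behave on large random geometric graphs.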

ei

PDF Web [BibTex]



On Fast Approximate Submodular Minimization

Jegelka, S., Lin, H., Bilmes, J.

In Advances in Neural Information Processing Systems 24, pages: 460-468, (Editors: J Shawe-Taylor and RS Zemel and P Bartlett and F Pereira and KQ Weinberger), Twenty-Fifth Annual Conference on Neural Information Processing Systems (NIPS), 2011 (inproceedings)

Abstract
We are motivated by an application to extract a representative subset of machine learning training data and by the poor empirical performance we observe of the popular minimum norm algorithm. In fact, for our application, minimum norm can have a running time of about O(n^7) (O(n^5) oracle calls). We therefore propose a fast approximate method to minimize arbitrary submodular functions. For a large sub-class of submodular functions, the algorithm is exact. Other submodular functions are iteratively approximated by tight submodular upper bounds, and then repeatedly optimized. We show theoretical properties, and empirical results suggest significant speedups over minimum norm while retaining higher accuracies.

ei

PDF Web [BibTex]
