

2010


Probabilistic Assignment of Chemical Shift Data for Semi-Automatic Amino Acid Recognition

Hooge, J.

11(10):30, 11th Conference of Junior Neuroscientists of Tübingen (NeNa), October 2010 (poster)

Abstract
manner. First, the backbone resonances are assigned. This is usually achieved from sequential information provided by three chemical shifts: CA, CB and C'. Once the sequence is solved, the second assignment step takes place. For this purpose, the CA-CB and HA chemical shifts are used as a starting point for assignment of the side-chain resonances, thus connecting the backbone resonances to their respective side chains. This strategy is unfortunately limited by the size of the protein due to increasing signal overlap and missing signals. Therefore, amino acid recognition is in many cases not possible, as the CA-CB chemical shift pattern is not sufficient to discriminate between the 20 amino acids. As a result, the first step of the strategy described above remains tedious and time-consuming. The combination of modern NMR techniques with new spectrometers now provides information that was not always accessible in the past due to sensitivity problems. These experiments can be applied efficiently to proteins of up to 45 kDa and furthermore provide a unique combination of sequential carbon spin system information. The assignment process can thus benefit from a maximum knowledge input, containing "all" backbone and side-chain chemical shifts as well as an immediate amino acid recognition from the side-chain spin system. We propose to extend the software PASTA (Protein ASsignment by Threshold Accepting) to achieve a general sequential assignment of backbone and side-chain resonances in a semi- to full-automatic per-residue approach. PASTA will offer the possibility to achieve the sequential assignment using any kind of chemical shifts (carbons and/or protons) that can provide sequential information, combined with an amino acid recognition feature based on carbon spin system analysis.

ei

PDF [BibTex]

Generalizing Demonstrated Actions in Manipulation Tasks

Kroemer, O., Detry, R., Piater, J., Peters, J.

IROS 2010 Workshop on Grasp Planning and Task Learning by Imitation, 2010, pages: 1, October 2010 (poster)

Abstract
Programming-by-demonstration promises to significantly reduce the burden of coding robots to perform new tasks. However, a service robot will be presented with a variety of situations that were not specifically demonstrated to it. In such cases, the robot must autonomously generalize its learned motions to these new situations. We propose a system that can generalize movements to new target locations and even new objects. The former is achieved by using a task-specific coordinate system together with dynamical systems motor primitives. Generalizing actions to new objects is a more complex problem, which we solve by treating it as a continuum-armed bandits problem. Using the bandits framework, we can efficiently optimize the learned action for a specific object. The proposed method was implemented on a real robot and successfully adapted the grasping action to three different objects. Although we focus on grasping as an example of a task, the proposed methods are much more widely applicable to robot manipulation tasks.

ei

PDF Web [BibTex]

Discriminative frequent subgraph mining with optimality guarantees

Thoma, M., Cheng, H., Gretton, A., Han, J., Kriegel, H., Smola, A., Song, L., Yu, P., Yan, X., Borgwardt, K.

Journal of Statistical Analysis and Data Mining, 3(5):302–318, October 2010 (article)

Abstract
The goal of frequent subgraph mining is to detect subgraphs that frequently occur in a dataset of graphs. In classification settings, one is often interested in discovering discriminative frequent subgraphs, whose presence or absence is indicative of the class membership of a graph. In this article, we propose an approach to feature selection on frequent subgraphs, called CORK, that combines two central advantages. First, it optimizes a submodular quality criterion, which means that we can yield a near-optimal solution using greedy feature selection. Second, our submodular quality criterion can be integrated into gSpan, the state-of-the-art tool for frequent subgraph mining, and help to prune the search space for discriminative frequent subgraphs even during frequent subgraph mining.
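
As a rough illustration of the greedy selection that the submodularity argument licenses, here is a toy sketch (my own names and data layout, not the authors' gSpan-integrated implementation): forward-select binary subgraph-indicator features by maximizing a CORK-style correspondence count.

    import numpy as np

    def cork_quality(Xs, y):
        # Negative number of "correspondences": pairs of graphs from
        # different classes whose selected feature columns coincide.
        n = len(y)
        score = 0
        for i in range(n):
            for j in range(i + 1, n):
                if y[i] != y[j] and np.array_equal(Xs[i], Xs[j]):
                    score -= 1
        return score

    def greedy_select(X, y, k):
        # Greedy maximization; for monotone submodular objectives this is
        # (1 - 1/e)-optimal (Nemhauser et al., 1978).
        selected, remaining = [], list(range(X.shape[1]))
        for _ in range(k):
            gains = [cork_quality(X[:, selected + [j]], y) for j in remaining]
            best = remaining[int(np.argmax(gains))]
            selected.append(best)
            remaining.remove(best)
        return selected

    # X: binary graph-by-subgraph indicator matrix, y: class labels
    X = np.random.default_rng(0).integers(0, 2, size=(30, 12))
    y = np.array([0] * 15 + [1] * 15)
    print(greedy_select(X, y, k=3))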

ei

Web DOI [BibTex]

Inhomogeneous Positron Range Effects in High Magnetic Fields might Cause Severe Artefacts in PET/MRI

Kolb, A., Hofmann, M., Sauter, A., Liu, C., Eriksson, L., Pichler, B.

(0305B), 2010 World Molecular Imaging Congress (WMIC), September 2010 (poster)

Abstract
The combination of PET and MRI is an emerging field of current research. It is known that the positron range is shortened in high magnetic fields (MF), leading to an improved resolution in PET images. Interestingly, only the fraction of the positron range (PR) orthogonal to the MF is reduced; the fraction along the MF is not affected, which yields a non-isotropic count distribution. We measured the PR effect with PET isotopes such as F-18, Cu-64, C-11, N-13 and Ga-68. A piece of paper (1 cm^2) was soaked with each isotope and placed in the cFOV of a clinical 3T BrainPET/MR scanner. A polyethylene (PE) board was placed as a positron (β+) stopper with an axial distance of 3 cm from the soaked paper. The area under the peaks of one-pixel-wide profiles along the z-axis in coronal images was compared. Based on these measurements we confirmed our data in organic tissue. A larynx/trachea and lung of a butchered swine were injected with a mixture of NiSO4 for T1 MRI signals and Ga-68, simulating tumor lesions in the respiratory tract. The trachea/larynx were aligned at 35° to the MF lines and a small mass lesion was inserted to imitate a primary tracheal tumor, whereas the larynx was injected submucosally in the lower medial part of the epiglottis. Reconstructed PET data show that the ratio of β+ annihilated at the origin position and in the PE depends on the isotope energy and the direction of the MF. The annihilation ratios of the source and PE are 52.4/47.6 (F-18), 57.5/42.5 (Cu-64), 43.7/56.7 (C-11), 31.1/68.9 (N-13) and 14.9/85.1 (Ga-68). In the swine larynx measurement, an artefact with approximately 39% of the lesion activity formed along the MF lines 3 cm away from the original injected position (Fig. 1). The data of the trachea showed two shine artefacts with a symmetric alignment along the MF lines. About 58% of the positrons annihilated at the lesion and 21% formed each artefact. The PR effects are minor in tissue of density higher than or equal to that of water (0.096 cm^-1). However, the effect is severe in low-density tissue or air and might lead to misinterpretation of clinical data.

ei

Web [BibTex]

Combining active learning and reactive control for robot grasping

Kroemer, O., Detry, R., Piater, J., Peters, J.

Robotics and Autonomous Systems, 58(9):1105-1116, September 2010 (article)

Abstract
Grasping an object is a task that inherently needs to be treated in a hybrid fashion. The system must decide both where and how to grasp the object. While selecting where to grasp requires learning about the object as a whole, the execution only needs to reactively adapt to the context close to the grasp’s location. We propose a hierarchical controller that reflects the structure of these two sub-problems, and attempts to learn solutions that work for both. A hybrid architecture is employed by the controller to make use of various machine learning methods that can cope with the large amount of uncertainty inherent to the task. The controller’s upper level selects where to grasp the object using a reinforcement learner, while the lower level comprises an imitation learner and a vision-based reactive controller to determine appropriate grasping motions. The resulting system is able to quickly learn good grasps of a novel object in an unstructured environment, by executing smooth reaching motions and preshaping the hand depending on the object’s geometry. The system was evaluated both in simulation and on a real robot.

ei

PDF Web DOI [BibTex]

Nonparametric Regression between General Riemannian Manifolds

Steinke, F., Hein, M., Schölkopf, B.

SIAM Journal on Imaging Sciences, 3(3):527-563, September 2010 (article)

Abstract
We study nonparametric regression between Riemannian manifolds based on regularized empirical risk minimization. Regularization functionals for mappings between manifolds should respect the geometry of the input and output manifolds and be independent of the chosen parametrization of the manifolds. We define and analyze the three simplest regularization functionals with these properties and present a rather general scheme for solving the resulting optimization problem. As application examples we discuss interpolation on the sphere, fingerprint processing, and correspondence computations between three-dimensional surfaces. We conclude with characterizing interesting and sometimes counterintuitive implications and new open problems that are specific to learning between Riemannian manifolds and are not encountered in multivariate regression in Euclidean space.

ei

Web DOI [BibTex]

Method and device for recovering a digital image from a sequence of observed digital images

Harmeling, S., Hirsch, M., Sra, S., Schölkopf, B.

United States Provisional Patent Application, No 61387025, September 2010 (patent)

ei

[BibTex]

Hybrid PET/MRI of Intracranial Masses: Initial Experiences and Comparison to PET/CT

Boss, A., Bisdas, S., Kolb, A., Hofmann, M., Ernemann, U., Claussen, C., Pfannenberg, C., Pichler, B., Reimold, M., Stegger, L.

Journal of Nuclear Medicine, 51(8):1198-1205, August 2010 (article)

ei

Web DOI [BibTex]

libDAI: A Free and Open Source C++ Library for Discrete Approximate Inference in Graphical Models

Mooij, JM.

Journal of Machine Learning Research, 11, pages: 2169-2173, August 2010 (article)

Abstract
This paper describes the software package libDAI, a free & open source C++ library that provides implementations of various exact and approximate inference methods for graphical models with discrete-valued variables. libDAI supports directed graphical models (Bayesian networks) as well as undirected ones (Markov random fields and factor graphs). It offers various approximations of the partition sum, marginal probability distributions and maximum probability states. Parameter learning is also supported. A feature comparison with other open source software packages for approximate inference is given. libDAI is licensed under the GPL v2+ license and is available at http://www.libdai.org.

ei

PDF PDF [BibTex]

Convolutive blind source separation by efficient blind deconvolution and minimal filter distortion

Zhang, K., Chan, L.

Neurocomputing, 73(13-15):2580-2588, August 2010 (article)

Abstract
Convolutive blind source separation (BSS) usually encounters two difficulties: the filter indeterminacy in the recovered sources and the relatively high computational load. In this paper we propose an efficient method for convolutive BSS that addresses these two issues. It consists of two stages, namely, multichannel blind deconvolution (MBD) and learning the post-filters with the minimum filter distortion (MFD) principle. We present a computationally efficient approach to MBD in the first stage: a vector autoregression (VAR) model is first fitted to the data, admitting a closed-form solution and giving temporally independent errors; traditional independent component analysis (ICA) is then applied to these errors to produce the MBD results. In the second stage, the least linear reconstruction error (LLRE) constraint of the separation system, which was previously used to regularize the solutions to nonlinear ICA, enforces an MFD principle on the estimated mixing system for convolutive BSS. One can then easily learn the post-filters to preserve the temporal structure of the sources. We show that with this principle, each recovered source is approximately the principal component of the contributions of this source to all observations. Experimental results on both synthetic data and real room recordings show the good performance of this method.
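
A minimal sketch of the two-stage idea (not the authors' code; the MFD post-filter stage is omitted and the data are placeholders): fit a vector autoregression to the mixtures, then run instantaneous ICA on its temporally independent residuals.

    import numpy as np
    from sklearn.decomposition import FastICA
    from statsmodels.tsa.api import VAR

    rng = np.random.default_rng(0)
    X = rng.standard_normal((2000, 3))       # stand-in for observed mixtures

    var_fit = VAR(X).fit(maxlags=10)         # stage 1: MBD via a closed-form VAR fit
    resid = var_fit.resid                    # temporally independent errors
    ica = FastICA(n_components=3, random_state=0, max_iter=1000)
    sources = ica.fit_transform(resid)       # stage 2: instantaneous ICA on the errors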

ei

PDF PDF DOI [BibTex]


Magnetic Nanostructured Propellers

Fischer, P., Ghosh, A.

July 2010 (patent)

pf

[BibTex]

Biased Feedback in Brain-Computer Interfaces

Barbero, A., Grosse-Wentrup, M.

Journal of NeuroEngineering and Rehabilitation, 7(34):1-4, July 2010 (article)

Abstract
Even though feedback is considered to play an important role in learning how to operate a brain-computer interface (BCI), to date no significant influence of feedback design on BCI performance has been reported in the literature. In this work, we adapt a standard motor-imagery BCI paradigm to study how BCI performance is affected by biasing the belief subjects have on their level of control over the BCI system. Our findings indicate that subjects already capable of operating a BCI are impeded by inaccurate feedback, while subjects normally performing on or close to chance level may actually benefit from an incorrect belief on their performance level. Our results imply that optimal feedback design in BCIs should take into account a subject's current skill level.

ei

PDF DOI [BibTex]

Varieties of Justification in Machine Learning

Corfield, D.

Minds and Machines, 20(2):291-301, July 2010 (article)

Abstract
Forms of justification for inductive machine learning techniques are discussed and classified into four types. This is done with a view to introducing some of these techniques and their justificatory guarantees to philosophers, and to initiating a discussion as to whether they must be treated separately or can rather be viewed consistently from within a single framework.

ei

PDF DOI [BibTex]

Dirichlet Process Gaussian Mixture Models: Choice of the Base Distribution

Görür, D., Rasmussen, C.

Journal of Computer Science and Technology, 25(4):653-664, July 2010 (article)

Abstract
In the Bayesian mixture modeling framework it is possible to infer the necessary number of components to model the data and therefore it is unnecessary to explicitly restrict the number of components. Nonparametric mixture models sidestep the problem of finding the “correct” number of mixture components by assuming infinitely many components. In this paper Dirichlet process mixture (DPM) models are cast as infinite mixture models and inference using Markov chain Monte Carlo is described. The specification of the priors on the model parameters is often guided by mathematical and practical convenience. The primary goal of this paper is to compare the choice of conjugate and non-conjugate base distributions on a particular class of DPM models which is widely used in applications, the Dirichlet process Gaussian mixture model (DPGMM). We compare computational efficiency and modeling performance of DPGMM defined using a conjugate and a conditionally conjugate base distribution. We show that better density models can result from using a wider class of priors with no or only a modest increase in computational effort.
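
For readers who want to experiment with this model class, scikit-learn ships a truncated variational DPGMM; note it uses variational inference rather than the MCMC sampling studied in the paper. A minimal sketch:

    import numpy as np
    from sklearn.mixture import BayesianGaussianMixture

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(-2.0, 0.5, (200, 2)),
                   rng.normal(3.0, 1.0, (300, 2))])

    # n_components is only a truncation level: the Dirichlet-process prior
    # drives surplus components to negligible weight.
    dpgmm = BayesianGaussianMixture(
        n_components=10,
        weight_concentration_prior_type="dirichlet_process",
        covariance_type="full",
        random_state=0,
    ).fit(X)
    print(np.round(dpgmm.weights_, 3))       # most mass on ~2 components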

ei

PDF PDF DOI [BibTex]

Robust probabilistic superposition and comparison of protein structures

Mechelke, M., Habeck, M.

BMC Bioinformatics, 11(363):1-13, July 2010 (article)

ei

PDF DOI [BibTex]

Reinforcement Learning by Relative Entropy Policy Search

Peters, J., Mülling, K., Altun, Y.

30th International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering (MaxEnt 2010), 30, pages: 69, July 2010 (poster)

Abstract
Policy search is a successful approach to reinforcement learning. However, policy improvements often result in the loss of information. Hence, it has been marred by premature convergence and implausible solutions. As first suggested in the context of covariant policy gradients, many of these problems may be addressed by constraining the information loss. In this work, we continue this path of reasoning and suggest the Relative Entropy Policy Search (REPS) method. The resulting method differs significantly from previous policy gradient approaches and yields an exact update step. It works well on typical reinforcement learning benchmark problems. We also present a real-world application where a robot employs REPS to learn how to return balls in a game of table tennis.
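
The reweighting step induced by the KL bound can be sketched compactly (my simplification of episodic REPS, not the full algorithm): solve the one-dimensional dual for the temperature eta, then weight samples by their exponentiated returns.

    import numpy as np
    from scipy.optimize import minimize

    def reps_weights(returns, epsilon=0.5):
        """Sample weights for a REPS-style update with KL bound epsilon."""
        R = returns - returns.max()              # shift for numerical stability
        def dual(v):                             # dual of the constrained problem
            eta = np.exp(v[0])                   # parametrize eta > 0 via log
            return eta * epsilon + eta * np.log(np.mean(np.exp(R / eta)))
        eta = float(np.exp(minimize(dual, [0.0], method="Nelder-Mead").x[0]))
        w = np.exp(R / eta)
        return w / w.sum()

    print(reps_weights(np.array([1.0, 2.0, 3.0, 10.0])))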

ei

PDF [BibTex]

Results of the GREAT08 Challenge: An image analysis competition for cosmological lensing

Bridle, S., Balan, S., Bethge, M., Gentile, M., Harmeling, S., Heymans, C., Hirsch, M., Hosseini, R., Jarvis, M., Kirk, D., Kitching, T., Kuijken, K., Lewis, A., Paulin-Henriksson, S., Schölkopf, B., Velander, M., Voigt, L., Witherick, D., Amara, A., Bernstein, G., Courbin, F., Gill, M., Heavens, A., Mandelbaum, R., Massey, R., Moghaddam, B., Rassat, A., Refregier, A., Rhodes, J., Schrabback, T., Shawe-Taylor, J., Shmakova, M., van Waerbeke, L., Wittman, D.

Monthly Notices of the Royal Astronomical Society, 405(3):2044-2061, July 2010 (article)

Abstract
We present the results of the GREAT08 Challenge, a blind analysis challenge to infer weak gravitational lensing shear distortions from images. The primary goal was to stimulate new ideas by presenting the problem to researchers outside the shear measurement community. Six GREAT08 Team methods were presented at the launch of the Challenge and five additional groups submitted results during the 6-month competition. Participants analyzed 30 million simulated galaxies with a range in signal-to-noise ratio, point-spread function ellipticity, galaxy size, and galaxy type. The large quantity of simulations allowed shear measurement methods to be assessed for the first time at a level of accuracy suitable for currently planned cosmic shear observations. Different methods perform well in different parts of simulation parameter space and come close to the target level of accuracy in several of these. A number of fresh ideas have emerged as a result of the Challenge, including a re-examination of the process of combining information from different galaxies, which reduces the dependence on realistic galaxy modelling. The image simulations will become increasingly sophisticated in future GREAT challenges; meanwhile, the GREAT08 simulations remain as a benchmark for additional developments in shear measurement algorithms.

ei

Web DOI [BibTex]

Remote Sensing Feature Selection by Kernel Dependence Estimation

Camps-Valls, G., Mooij, J., Schölkopf, B.

IEEE Geoscience and Remote Sensing Letters, 7(3):587-591, July 2010 (article)

Abstract
This letter introduces a nonlinear measure of independence between random variables for remote sensing supervised feature selection. The so-called Hilbert–Schmidt independence criterion (HSIC) is a kernel method for evaluating statistical dependence and it is based on computing the Hilbert–Schmidt norm of the cross-covariance operator of mapped samples in the corresponding Hilbert spaces. The HSIC empirical estimator is easy to compute and has good theoretical and practical properties. Rather than using this estimate for maximizing the dependence between the selected features and the class labels, we propose the more sensitive criterion of minimizing the associated HSIC p-value. Results in multispectral, hyperspectral, and SAR data feature selection for classification show the good performance of the proposed approach.
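
The biased empirical HSIC statistic at the core of the letter fits in a few lines (the selection criterion proper uses the associated p-value, typically from a permutation test; variable names here are mine):

    import numpy as np

    def rbf_gram(Z, sigma):
        sq = np.sum(Z ** 2, axis=1)
        d2 = sq[:, None] + sq[None, :] - 2.0 * Z @ Z.T
        return np.exp(-d2 / (2.0 * sigma ** 2))

    def hsic(X, Y, sigma_x=1.0, sigma_y=1.0):
        # biased estimator: trace(K H L H) / (n - 1)^2, H = centering matrix
        n = X.shape[0]
        K, L = rbf_gram(X, sigma_x), rbf_gram(Y, sigma_y)
        H = np.eye(n) - np.ones((n, n)) / n
        return float(np.trace(K @ H @ L @ H)) / (n - 1) ** 2

    rng = np.random.default_rng(0)
    x = rng.normal(size=(200, 1))
    print(hsic(x, x ** 2), hsic(x, rng.normal(size=(200, 1))))  # dependent >> independent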

ei

PDF Web DOI [BibTex]

Clustering stability: an overview

von Luxburg, U.

Foundations and Trends in Machine Learning, 2(3):235-274, July 2010 (article)

Abstract
A popular method for selecting the number of clusters is based on stability arguments: one chooses the number of clusters such that the corresponding clustering results are "most stable". In recent years, a series of papers has analyzed the behavior of this method from a theoretical point of view. However, the results are very technical and difficult to interpret for non-experts. In this paper we give a high-level overview about the existing literature on clustering stability. In addition to presenting the results in a slightly informal but accessible way, we relate them to each other and discuss their different implications.
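
The basic protocol is easy to state in code. The sketch below scores a candidate k by the average agreement of k-means labelings across repeated runs on random subsamples; this is one of several stability variants discussed in the survey, and details such as the agreement measure are my choices.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.metrics import adjusted_rand_score

    def stability_score(X, k, n_rounds=20, frac=0.8, seed=0):
        rng = np.random.default_rng(seed)
        scores = []
        for _ in range(n_rounds):
            idx = rng.choice(len(X), size=int(frac * len(X)), replace=False)
            a, b = (KMeans(n_clusters=k, n_init=5,
                           random_state=int(rng.integers(1 << 31))).fit_predict(X[idx])
                    for _ in range(2))
            scores.append(adjusted_rand_score(a, b))
        return float(np.mean(scores))

    # choose the k whose clusterings are "most stable":
    # best_k = max(range(2, 10), key=lambda k: stability_score(X, k))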

ei

PDF DOI [BibTex]

A Maximum Entropy Approach to Semi-supervised Learning

Erkan, A., Altun, Y.

30th International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering (MaxEnt 2010), 30, pages: 80, July 2010 (poster)

Abstract
The maximum entropy (MaxEnt) framework has been studied extensively in supervised learning. Here, the goal is to find a distribution p that maximizes an entropy function while enforcing data constraints so that the expected values of some (pre-defined) features with respect to p match their empirical counterparts approximately. Using different entropy measures, different model spaces for p and different approximation criteria for the data constraints yields a family of discriminative supervised learning methods (e.g., logistic regression, conditional random fields, least squares and boosting). This framework is known as the generalized maximum entropy framework. Semi-supervised learning (SSL) has emerged in the last decade as a promising field that combines unlabeled data along with labeled data so as to increase the accuracy and robustness of inference algorithms. However, most SSL algorithms to date have had trade-offs, e.g., in terms of scalability or applicability to multi-categorical data. We extend the generalized MaxEnt framework to develop a family of novel SSL algorithms. Extensive empirical evaluation on benchmark data sets that are widely used in the literature demonstrates the validity and competitiveness of the proposed algorithms.

ei

PDF PDF [BibTex]

The effect of positioning aids on PET quantification following MR-based attenuation correction (AC) in PET/MR imaging

Mantlik, F., Hofmann, M., Kupferschläger, J., Werner, M., Pichler, B., Beyer, T.

Journal of Nuclear Medicine, 51(Supplement 2):1418, June 2010 (poster)

Abstract
Objectives: We study the quantitative effect of not accounting for the attenuation of patient positioning aids in combined PET/MR imaging. Methods: Positioning aids cannot be detected with conventional MR sequences. We mimic this effect using PET/CT data (Biograph HiRez16) with the foams removed from CT images prior to using them for CT-AC. PET/CT data were acquired using standard parameters (phantoms/patients): 120/140 kVp, 30/250 mAs, 5 mm slices, OSEM (4i, 8s, 5 mm filter) following CT-AC. First, a uniform 68Ge-cylinder was positioned centrally in the PET/CT and fixed with a vacuum mattress (10 cm thick). Second, the same cylinder was placed in 3 positioning aids from the PET/MR (BrainPET-3T). Third, 5 head/neck patients who were fixed in a vacuum mattress were selected. In all 3 studies the PET reconstruction following CT-AC based on the measured CT images was used as the reference (mCT-AC). The PET/MR set-up was mimicked by segmenting the foam inserts from the measured CT images and setting their voxel values to -1000 HU (air). PET images were reconstructed using CT-AC with the segmented CT images (sCT-AC). PET images with mCT- and sCT-AC were compared. Results: sCT-AC underestimated PET voxel values in the phantom by 6.7% on average compared to mCT-AC with the vacuum mattress in place. 5% of the PET voxels were underestimated by >=10%. Not accounting for MR positioning aids during AC led to an underestimation of 2.8% following sCT-AC, with 5% of the PET voxels being underestimated by >=7% w.r.t. mCT-AC. Preliminary evaluation of the patient data indicates a slightly higher bias from not accounting for patient positioning aids (mean: -9.1%, 5% percentile: -11.2%). Conclusions: A considerable and regionally variable underestimation of the PET activity following AC is observed when positioning aids are not accounted for. This bias may become relevant in neurological activation or dementia studies with PET/MR.

ei

Web [BibTex]

Justifying Additive Noise Model-Based Causal Discovery via Algorithmic Information Theory

Janzing, D., Steudel, B.

Open Systems and Information Dynamics, 17(2):189-212, June 2010 (article)

Abstract
A recent method for causal discovery is in many cases able to infer whether X causes Y or Y causes X for just two observed variables X and Y. It is based on the observation that there exist (non-Gaussian) joint distributions P(X,Y) for which Y may be written as a function of X up to an additive noise term that is independent of X and no such model exists from Y to X. Whenever this is the case, one prefers the causal model X → Y. Here we justify this method by showing that the causal hypothesis Y → X is unlikely because it requires a specific tuning between P(Y) and P(X|Y) to generate a distribution that admits an additive noise model from X to Y. To quantify the amount of tuning needed, we derive lower bounds on the algorithmic information shared by P(Y) and P(X|Y). This way, our justification is consistent with recent approaches for using algorithmic information theory for causal reasoning. We extend this principle to the case where P(X,Y) almost admits an additive noise model. Our results suggest that the above conclusion is more reliable if the complexity of P(Y) is high.
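
The additive-noise inference procedure that the paper justifies can be sketched in a few lines; as a stand-in for a proper HSIC independence test I use a k-NN mutual-information estimate, so this is an illustration rather than the method of the cited work.

    import numpy as np
    from sklearn.feature_selection import mutual_info_regression
    from sklearn.gaussian_process import GaussianProcessRegressor

    def anm_score(x, y, seed=0):
        # Fit y = f(x) + e nonparametrically, then measure how dependent the
        # residual still is on the putative cause (lower = more plausible ANM).
        gp = GaussianProcessRegressor(random_state=seed).fit(x[:, None], y)
        resid = y - gp.predict(x[:, None])
        return mutual_info_regression(x[:, None], resid, random_state=seed)[0]

    rng = np.random.default_rng(0)
    x = rng.uniform(-2, 2, 300)
    y = x ** 3 + rng.normal(0, 1, 300)        # toy data where X causes Y
    print(anm_score(x, y) < anm_score(y, x))  # ideally True: prefer X -> Y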

ei

PDF Web DOI [BibTex]


Multi-task Learning for Zero Training Brain-Computer Interfaces

Alamgir, M., Grosse-Wentrup, M., Altun, Y.

4th International BCI Meeting, June 2010 (poster)

Abstract
Brain-computer interfaces (BCIs) are limited in their applicability in everyday settings by the current necessity to record subject-specific calibration data prior to actual use of the BCI for communication. In this work, we utilize the framework of multitask learning to construct a BCI that can be used without any subject-specific calibration process, i.e., with zero training data. In BCIs based on EEG or MEG, the predictive function of a subject's intention is commonly modeled as a linear combination of some features derived from spatial and spectral recordings. The coefficients of this combination correspond to the importance of the features for predicting the intention of the subject. These coefficients are usually learned separately for each subject due to inter-subject variability. Principal feature characteristics, however, are known to remain invariant across subjects. For example, it is well known that in motor imagery paradigms spectral power in the mu- and beta-frequency ranges (roughly 8-14 Hz and 20-30 Hz, respectively) over sensorimotor areas provides most information on a subject's intention. Based on this assumption, we define the intention prediction function as a combination of subject-invariant and subject-specific models, and propose a machine learning method that infers these models jointly using data from multiple subjects. This framework leads to an out-of-the-box intention predictor, where the subject-invariant model can be employed immediately for a subject with no prior data. We present a computationally efficient method to further improve this BCI to incorporate subject-specific variations as such data becomes available. To overcome the problem of high-dimensional feature spaces in this context, we further present a new method for finding the relevance of different recording channels according to actions performed by subjects. Usually, the BCI feature representation is a concatenation of spectral features extracted from different channels. This representation, however, is redundant, as recording channels at different spatial locations typically measure overlapping sources within the brain due to volume conduction. We address this problem by assuming that the relevance of different spectral bands is invariant across channels, while learning different weights for each recording electrode. This framework allows us to significantly reduce the feature space dimensionality without discarding potentially useful information. Furthermore, the resulting out-of-the-box BCI can be adapted to different experimental setups, for example EEG caps with different numbers of channels, as long as there exists a mapping across channels in different setups. We demonstrate the feasibility of our approach on a set of experimental EEG data recorded during a standard two-class motor imagery paradigm from a total of ten healthy subjects. Specifically, we show that satisfactory classification results can be achieved with zero training data, and that combining prior recordings with subject-specific calibration data substantially outperforms using subject-specific data only.

ei

Web [BibTex]


Causal Influence of Gamma Oscillations on Performance in Brain-Computer Interfaces

Grosse-Wentrup, M., Hill, J., Schölkopf, B.

4th International BCI Meeting, June 2010 (poster)

Abstract
Background and Objective: While machine learning approaches have led to tremendous advances in brain-computer interfaces (BCIs) in recent years (cf. [1]), there still exists a large variation in performance across subjects. Furthermore, a significant proportion of subjects appears incapable of achieving above chance-level classification accuracy [2], which to date includes all subjects in a completely locked-in state that have been trained in BCI control. Understanding the reasons for this variation in performance arguably constitutes one of the most fundamental open questions in research on BCIs. Methods & Results: Using a machine learning approach, we derive a trial-wise measure of how well EEG recordings can be classified as either left- or right-hand motor imagery. Specifically, we train a support vector machine (SVM) on log-bandpower features (7-40 Hz) derived from EEG channels after spatial filtering with a surface Laplacian, and then compute the trial-wise distance of the output of the SVM from the separating hyperplane using a cross-validation procedure. We then correlate this trial-wise performance measure, computed on EEG recordings of ten healthy subjects, with log-bandpower in the gamma frequency range (55-85 Hz), and demonstrate that it is positively correlated with frontal and occipital gamma-power and negatively correlated with centro-parietal gamma-power. This correlation is shown to be highly significant on the group level as well as in six out of ten subjects on the single-subject level. We then utilize the framework for causal inference developed by Pearl, Spirtes and others [3,4] to present evidence that gamma-power is not only correlated with BCI performance but does indeed exert a causal influence on it. Discussion and Conclusions: Our results indicate that successful execution of motor imagery, and hence reliable communication by means of a BCI based on motor imagery, requires a volitional shift of gamma-power from centro-parietal to frontal and occipital regions. As such, our results provide the first non-trivial explanation for the variation in BCI performance across and within subjects. As this topographical alteration in gamma-power is likely to correspond to a specific attentional shift, we propose to provide subjects with feedback on their topographical distribution of gamma-power in order to establish the attentional state required for successful execution of motor imagery.

ei

Web [BibTex]


Solving large-scale nonnegative least-squares

Sra, S.

16th Conference of the International Linear Algebra Society (ILAS 2010), 16, pages: 19, June 2010, based on joint work with Dongmin Kim and Inderjit Dhillon (poster)

Abstract
We study the fundamental problem of nonnegative least squares. This problem was apparently introduced by Lawson and Hanson [1] under the name NNLS. As is evident from its name, NNLS seeks least-squares solutions that are also nonnegative. Owing to its wide-applicability numerous algorithms have been derived for NNLS, beginning from the active-set approach of Lawson and Hanson [1] leading up to the sophisticated interior-point method of Bellavia et al. [2]. We present a new algorithm for NNLS that combines projected subgradients with the non-monotonic gradient descent idea of Barzilai and Borwein [3]. Our resulting algorithm is called BBSG, and we guarantee its convergence by exploiting properties of NNLS in conjunction with projected subgradients. BBSG is surprisingly simple and scales well to large problems. We substantiate our claims by empirically evaluating BBSG and comparing it with established convex solvers and specialized NNLS algorithms. The numerical results suggest that BBSG is a practical method for solving large-scale NNLS problems.
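
A bare-bones sketch of the core idea, projected gradient steps with Barzilai-Borwein step sizes; the safeguards that make BBSG provably convergent are omitted, and all names are mine.

    import numpy as np

    def nnls_pbb(A, b, iters=500):
        """min_{x >= 0} 0.5 * ||Ax - b||^2 via projected BB steps."""
        x = np.zeros(A.shape[1])
        g = A.T @ (A @ x - b)
        alpha = 1.0 / np.linalg.norm(A, 2) ** 2     # conservative first step
        for _ in range(iters):
            x_new = np.maximum(0.0, x - alpha * g)  # gradient step, then project
            g_new = A.T @ (A @ x_new - b)
            s, t = x_new - x, g_new - g
            if s @ t > 1e-12:
                alpha = (s @ s) / (s @ t)           # BB1 step size
            x, g = x_new, g_new
        return x

    rng = np.random.default_rng(0)
    x = nnls_pbb(rng.random((100, 30)), rng.random(100))
    print(x.min() >= 0.0)                            # feasibility by construction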

ei

PDF Web [BibTex]

Dynamic Dissimilarity Measure for Support-Based Clustering

Lee, D., Lee, J.

IEEE Transactions on Knowledge and Data Engineering, 22(6):900-905, June 2010 (article)

Abstract
Clustering methods utilizing support estimates of a data distribution have recently attracted much attention because of their ability to generate cluster boundaries of arbitrary shape and to deal with outliers efficiently. In this paper, we propose a novel dissimilarity measure based on a dynamical system associated with support estimating functions. Theoretical foundations of the proposed measure are developed and applied to construct a clustering method that can effectively partition the whole data space. Simulation results demonstrate that clustering based on the proposed dissimilarity measure is robust to the choice of kernel parameters and able to control the number of clusters efficiently.

ei

Web DOI [BibTex]

Sparse Spectrum Gaussian Process Regression

Lázaro-Gredilla, M., Quiñonero-Candela, J., Rasmussen, CE., Figueiras-Vidal, AR.

Journal of Machine Learning Research, 11, pages: 1865-1881, June 2010 (article)

Abstract
We present a new sparse Gaussian Process (GP) model for regression. The key novel idea is to sparsify the spectral representation of the GP. This leads to a simple, practical algorithm for regression tasks. We compare the achievable trade-offs between predictive accuracy and computational requirements, and show that these are typically superior to existing state-of-the-art sparse approximations. We discuss both the weight space and function space representations, and note that the new construction implies priors over functions which are always stationary, and can approximate any covariance function in this class.
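
The construction reduces GP regression to Bayesian linear regression on trigonometric features. Below is a minimal sketch with spectral points held fixed for an RBF kernel; the paper additionally optimizes these frequencies, and the parameter names are mine.

    import numpy as np

    def ssgp_predict(X, y, Xstar, m=50, ell=1.0, sf2=1.0, sn2=0.1, seed=0):
        rng = np.random.default_rng(seed)
        W = rng.standard_normal((m, X.shape[1])) / ell    # spectral samples of the RBF
        def phi(Z):                                       # trigonometric feature map
            P = Z @ W.T
            return np.hstack([np.cos(P), np.sin(P)]) * np.sqrt(sf2 / m)
        Phi = phi(X)
        A = Phi.T @ Phi + sn2 * np.eye(2 * m)             # posterior precision (scaled)
        return phi(Xstar) @ np.linalg.solve(A, Phi.T @ y) # predictive mean

    X = np.linspace(0.0, 6.0, 80)[:, None]
    y = np.sin(X[:, 0]) + 0.1 * np.random.default_rng(1).standard_normal(80)
    print(ssgp_predict(X, y, np.array([[1.5]])))          # close to sin(1.5)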

ei

PDF [BibTex]

Simultaneous PET/MRI for the evaluation of hemato-oncological diseases with lower extremity manifestations

Sauter, A., Horger, M., Boss, A., Kolb, A., Mantlik, F., Kanz, L., Pfannenberg, C., Stegger, L., Claussen, C., Pichler, B.

Journal of Nuclear Medicine, 51(Supplement 2):1001, June 2010 (poster)

Abstract
Objectives: The purpose of this study is the evaluation of patients suffering from hemato-oncological diseases with complications at the lower extremities, using simultaneous PET/MRI. Methods: Until now two patients (chronic active graft-versus-host disease [GvHD], B-non-Hodgkin lymphoma [B-NHL]) were examined before and after therapy in a 3-Tesla BrainPET/MRI hybrid system following F-18-FDG-PET/CT. Simultaneous static PET (1200 sec.) and MRI scans (T1WI, T2WI, post-CA) were acquired. Results: Initial results show the feasibility of using hybrid PET/MRI technology for musculoskeletal imaging of the lower extremities. Simultaneous PET and MRI could be acquired in diagnostic quality. Before treatment our patient with GvHD had a high fascia and muscle FDG uptake, possibly due to muscle encasement. T2WI and post-gadolinium T1WI revealed a fascial thickening and signs of inflammation. After therapy with steroids followed by imatinib the patient’s symptoms improved and the muscular FDG uptake dropped, whereas the MRI signal remained unchanged. We assume that fascial elasticity improved during therapy despite persistence of fascial thickening. The examination of the second patient with B-NHL manifestation in the tibia showed a significant signal and uptake decrease in the bone marrow and surrounding lesions in both MRI and PET after therapy with rituximab. The lack of residual FDG uptake proved superior to MRI information alone, helping to exclude vital tumor. Conclusions: Combined PET/MRI is a powerful tool to monitor diseases requiring high soft-tissue contrast along with molecular information from FDG uptake.

ei

Web [BibTex]

Unsupervised Object Discovery: A Comparison

Tuytelaars, T., Lampert, CH., Blaschko, MB., Buntine, W.

International Journal of Computer Vision, 88(2):284-302, June 2010 (article)

Abstract
The goal of this paper is to evaluate and compare models and methods for learning to recognize basic entities in images in an unsupervised setting. In other words, we want to discover the objects present in the images by analyzing unlabeled data and searching for re-occurring patterns. We experiment with various baseline methods, methods based on latent variable models, as well as spectral clustering methods. The results are presented and compared both on subsets of Caltech256 and MSRC2, data sets that are larger and more challenging and that include more object classes than what has previously been reported in the literature. A rigorous framework for evaluating unsupervised object discovery methods is proposed.

ei

PDF DOI [BibTex]

How to Explain Individual Classification Decisions

Baehrens, D., Schroeter, T., Harmeling, S., Kawanabe, M., Hansen, K., Müller, K.

Journal of Machine Learning Research, 11, pages: 1803-1831, June 2010 (article)

Abstract
After building a classifier with modern tools of machine learning we typically have a black box at hand that is able to predict well for unseen data. Thus, we get an answer to the question of what the most likely label of a given unseen data point is. However, most methods will provide no answer as to why the model predicted a particular label for a single instance and which features were most influential for that particular instance. The only method that is currently able to provide such explanations is the decision tree. This paper proposes a procedure which (based on a set of assumptions) allows us to explain the decisions of any classification method.
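
The local-gradient idea generalizes to any probabilistic black box. In the toy sketch below (not the authors' code), a finite-difference gradient of the predicted class probability serves as the explanation vector for one instance.

    import numpy as np
    from sklearn.datasets import make_moons
    from sklearn.svm import SVC

    X, y = make_moons(500, noise=0.2, random_state=0)
    clf = SVC(probability=True, random_state=0).fit(X, y)

    def explanation_vector(model, x, h=1e-3):
        # finite-difference gradient of P(class 1 | x): which features,
        # perturbed locally, change the prediction the most
        g = np.zeros_like(x, dtype=float)
        for j in range(x.size):
            e = np.zeros_like(g)
            e[j] = h
            g[j] = (model.predict_proba((x + e)[None])[0, 1]
                    - model.predict_proba((x - e)[None])[0, 1]) / (2.0 * h)
        return g

    print(explanation_vector(clf, X[0]))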

ei

PDF PDF [BibTex]

Single-Image Super-Resolution Using Sparse Regression and Natural Image Prior

Kim, K., Kwon, Y.

IEEE Transactions on Pattern Analysis and Machine Intelligence, 32(6):1127-1133, June 2010 (article)

Abstract
This paper proposes a framework for single-image super-resolution. The underlying idea is to learn a map from input low-resolution images to target high-resolution images based on example pairs of input and output images. Kernel ridge regression (KRR) is adopted for this purpose. To reduce the time complexity of training and testing for KRR, a sparse solution is found by combining the ideas of kernel matching pursuit and gradient descent. As a regularized solution, KRR leads to a better generalization than simply storing the examples as has been done in existing example-based algorithms and results in much less noisy images. However, this may introduce blurring and ringing artifacts around major edges as sharp changes are penalized severely. A prior model of a generic image class which takes into account the discontinuity property of images is adopted to resolve this problem. Comparison with existing algorithms shows the effectiveness of the proposed method.

ei

Web DOI [BibTex]

Imitation and Reinforcement Learning

Kober, J., Peters, J.

IEEE Robotics and Automation Magazine, 17(2):55-62, June 2010 (article)

Abstract
In this article, we present both novel learning algorithms and experiments using the dynamical system MPs. As such, we describe this MP representation in a way that it is straightforward to reproduce. We review an appropriate imitation learning method, i.e., locally weighted regression, and show how this method can be used both for initializing RL tasks as well as for modifying the start-up phase in a rhythmic task. We also show our current best-suited RL algorithm for this framework, i.e., PoWER. We present two complex motor tasks, i.e., ball-in-a-cup and ball paddling, learned on a real, physical Barrett WAM, using the methods presented in this article. Of particular interest is the ball-paddling application, as it requires a combination of both rhythmic and discrete dynamical systems MPs during the start-up phase to achieve a particular task.

ei

PDF Web DOI [BibTex]

Diffusion Tensor Imaging in a Human PET/MR Hybrid System

Boss, A., Kolb, A., Hofmann, M., Bisdas, S., Nägele, T., Ernemann, U., Stegger, L., Rossi, C., Schlemmer, H., Pfannenberg, C., Reimold, M., Claussen, C., Pichler, B., Klose, U.

Investigative Radiology, 45(5):270-274, May 2010 (article)

ei

Web DOI [BibTex]

A Bayesian Framework to Account for Complex Non-Genetic Factors in Gene Expression Levels Greatly Increases Power in eQTL Studies

Stegle, O., Parts, L., Durbin, R., Winn, JM.

PLoS Computational Biology, 6(5):1-11, May 2010 (article)

ei

PDF Web DOI [BibTex]

Estimation of a Structural Vector Autoregression Model Using Non-Gaussianity

Hyvärinen, A., Zhang, K., Shimizu, S., Hoyer, P.

Journal of Machine Learning Research, 11, pages: 1709-1731, May 2010 (article)

Abstract
Analysis of causal effects between continuous-valued variables typically uses either autoregressive models or structural equation models with instantaneous effects. Estimation of Gaussian, linear structural equation models poses serious identifiability problems, which is why it was recently proposed to use non-Gaussian models. Here, we show how to combine the non-Gaussian instantaneous model with autoregressive models. This is effectively what is called a structural vector autoregression (SVAR) model, and thus our work contributes to the long-standing problem of how to estimate SVARs. We show that such a non-Gaussian model is identifiable without prior knowledge of network structure. We propose computationally efficient methods for estimating the model, as well as methods to assess the significance of the causal influences. The model is successfully applied on financial and brain imaging data.

ei

PDF Web [BibTex]

A Robust Bayesian Two-Sample Test for Detecting Intervals of Differential Gene Expression in Microarray Time Series

Stegle, O., Denby, KJ., Cooke, EJ., Wild, DL., Ghahramani, Z., Borgwardt, KM.

Journal of Computational Biology, 17(3):355-367, May 2010 (article)

Abstract
Understanding the regulatory mechanisms that are responsible for an organism's response to environmental change is an important issue in molecular biology. A first and important step towards this goal is to detect genes whose expression levels are affected by altered external conditions. A range of methods to test for differential gene expression, both in static as well as in time-course experiments, have been proposed. While these tests answer the question whether a gene is differentially expressed, they do not explicitly address the question when a gene is differentially expressed, although this information may provide insights into the course and causal structure of regulatory programs. In this article, we propose a two-sample test for identifying intervals of differential gene expression in microarray time series. Our approach is based on Gaussian process regression, can deal with arbitrary numbers of replicates, and is robust with respect to outliers. We apply our algorithm to study the response of Arabidopsis thaliana genes to an infection by a fungal pathogen using a microarray time series dataset covering 30,336 gene probes at 24 observed time points. In classification experiments, our test compares favorably with existing methods and provides additional insights into time-dependent differential expression.

ei

Web DOI [BibTex]

Statistical Tests for Detecting Differential RNA-Transcript Expression from Read Counts

Stegle, O., Drewe, P., Bohnert, R., Borgwardt, K., Rätsch, G.

Nature Precedings, 2010, pages: 1-11, May 2010 (article)

Abstract
As a fruit of the current revolution in sequencing technology, transcriptomes can now be analyzed at an unprecedented level of detail. These advances have been exploited for detecting differentially expressed genes across biological samples and for quantifying the abundances of various RNA transcripts within one gene. However, explicit strategies for detecting the hidden differential abundances of RNA transcripts in biological samples have not been defined. In this work, we present two novel statistical tests to address this issue: a "gene structure sensitive" Poisson test for detecting differential expression when the transcript structure of the gene is known, and a kernel-based test called Maximum Mean Discrepancy when it is unknown. We analyzed the proposed approaches on simulated read data for two artificial samples as well as on real reads generated by the Illumina Genome Analyzer for two C. elegans samples. Our analysis shows that the Poisson test identifies genes with differential transcript expression considerably better than previously proposed RNA transcript quantification approaches for this task. The MMD test is able to detect a large fraction (75%) of such differential cases without knowledge of the annotated transcripts. It is therefore well-suited to analyzing RNA-Seq experiments when the genome annotations are incomplete or not available, where other approaches fail.
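
For the annotation-free case, the kernel two-sample statistic is simple to compute; here is a biased squared-MMD sketch with an RBF kernel (significance would come from a permutation test, omitted here, and the application to read counts involves further preprocessing).

    import numpy as np

    def mmd2_rbf(X, Y, sigma=1.0):
        def gram(A, B):
            d2 = (np.sum(A ** 2, 1)[:, None] + np.sum(B ** 2, 1)[None, :]
                  - 2.0 * A @ B.T)
            return np.exp(-d2 / (2.0 * sigma ** 2))
        # mean within-sample similarity minus twice the cross term
        return gram(X, X).mean() + gram(Y, Y).mean() - 2.0 * gram(X, Y).mean()

    rng = np.random.default_rng(0)
    print(mmd2_rbf(rng.normal(0.0, 1.0, (100, 5)),
                   rng.normal(0.5, 1.0, (100, 5))))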

ei

PDF DOI [BibTex]

Parameter-exploring policy gradients

Sehnke, F., Osendorfer, C., Rückstiess, T., Graves, A., Peters, J., Schmidhuber, J.

Neural Networks, 23(4):551-559, May 2010 (article)

Abstract
We present a model-free reinforcement learning method for partially observable Markov decision problems. Our method estimates a likelihood gradient by sampling directly in parameter space, which leads to lower variance gradient estimates than obtained by regular policy gradient methods. We show that for several complex control tasks, including robust standing with a humanoid robot, this method outperforms well-known algorithms from the fields of standard policy gradients, finite difference methods and population based heuristics. We also show that the improvement is largest when the parameter samples are drawn symmetrically. Lastly we analyse the importance of the individual components of our method by incrementally incorporating them into the other algorithms, and measuring the gain in performance after each step.
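
A condensed sketch of one update with symmetric samples (the moving-average reward baseline and other refinements of the full method are left out; episode_return is a placeholder for a rollout of the policy):

    import numpy as np

    def pgpe_step(mu, sigma, episode_return, n_pairs=10,
                  lr_mu=0.1, lr_sigma=0.05, rng=None):
        rng = rng if rng is not None else np.random.default_rng()
        g_mu, g_sigma = np.zeros_like(mu), np.zeros_like(sigma)
        for _ in range(n_pairs):
            eps = rng.normal(0.0, sigma)                 # symmetric perturbation pair
            r_plus, r_minus = episode_return(mu + eps), episode_return(mu - eps)
            g_mu += 0.5 * (r_plus - r_minus) * eps / sigma ** 2
            g_sigma += 0.5 * (r_plus + r_minus) * (eps ** 2 - sigma ** 2) / sigma ** 3
        mu = mu + lr_mu * g_mu / n_pairs
        sigma = np.maximum(sigma + lr_sigma * g_sigma / n_pairs, 1e-3)
        return mu, sigma

    f = lambda th: -np.sum((th - 1.0) ** 2)              # toy episodic return
    mu, sigma, rng = np.zeros(3), np.ones(3), np.random.default_rng(0)
    for _ in range(300):
        mu, sigma = pgpe_step(mu, sigma, f, rng=rng)
    print(np.round(mu, 2))                               # approaches [1, 1, 1]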

ei

PDF PDF DOI [BibTex]

Temporal Kernel CCA and its Application in Multimodal Neuronal Data Analysis

Biessmann, F., Meinecke, F., Gretton, A., Rauch, A., Rainer, G., Logothetis, N., Müller, K.

Machine Learning, 79(1-2):5-27, May 2010 (article)

Abstract
Data recorded from multiple sources sometimes exhibit non-instantaneous couplings. For simple data sets, cross-correlograms may reveal the coupling dynamics. But when dealing with high-dimensional multivariate data there is no such measure as the cross-correlogram. We propose a simple algorithm based on Kernel Canonical Correlation Analysis (kCCA) that computes a multivariate temporal filter which links one data modality to another one. The filters can be used to compute a multivariate extension of the cross-correlogram, the canonical correlogram, between data sources that have different dimensionalities and temporal resolutions. The canonical correlogram reflects the coupling dynamics between the two sources. The temporal filter reveals which features in the data give rise to these couplings and when they do so. We present results from simulations and neuroscientific experiments showing that tkCCA yields easily interpretable temporal filters and correlograms. In the experiments, we simultaneously performed electrode recordings and functional magnetic resonance imaging (fMRI) in primary visual cortex of the non-human primate. While electrode recordings reflect brain activity directly, fMRI provides only an indirect view of neural activity via the Blood Oxygen Level Dependent (BOLD) response. Thus it is crucial for our understanding and the interpretation of fMRI signals in general to relate them to direct measures of neural activity acquired with electrodes. The results computed by tkCCA confirm recent models of the hemodynamic response to neural activity and allow for a more detailed analysis of neurovascular coupling dynamics.

ei

PDF PDF DOI [BibTex]

Estimating predictive stimulus features from psychophysical data: The decision image technique applied to human faces

Macke, J., Wichmann, F.

Journal of Vision, 10(5:22):1-24, May 2010 (article)

Abstract
One major challenge in the sensory sciences is to identify the stimulus features on which sensory systems base their computations, and which are predictive of a behavioral decision: they are a prerequisite for computational models of perception. We describe a technique (decision images) for extracting predictive stimulus features using logistic regression. A decision image not only defines a region of interest within a stimulus but is a quantitative template which defines a direction in stimulus space. Decision images thus enable the development of predictive models, as well as the generation of optimized stimuli for subsequent psychophysical investigations. Here we describe our method and apply it to data from a human face classification experiment. We show that decision images are able to predict human responses not only in terms of overall percent correct but also in terms of the probabilities with which individual faces are (mis-)classified by individual observers. We show that the most predictive dimension for gender categorization is neither aligned with the axis defined by the two class means, nor with the first principal component of all faces, two hypotheses frequently entertained in the literature. Our method can be applied to a wide range of binary classification tasks in vision or other psychophysical contexts.
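
At its core the technique is regularized logistic regression on vectorized stimuli: the fitted weight vector, reshaped to the image dimensions, is the decision image. A toy sketch with synthetic data (everything here is a placeholder):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    stimuli = rng.standard_normal((400, 32 * 32))        # vectorized images
    template = rng.standard_normal(32 * 32)              # hidden "true" direction
    responses = (stimuli @ template + rng.normal(0, 5, 400) > 0).astype(int)

    model = LogisticRegression(C=0.1, max_iter=2000).fit(stimuli, responses)
    decision_image = model.coef_[0].reshape(32, 32)      # a direction in stimulus space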

ei

Web DOI [BibTex]


Animal detection in natural scenes: Critical features revisited

Wichmann, F., Drewes, J., Rosas, P., Gegenfurtner, K.

Journal of Vision, 10(4):1-27, April 2010 (article)

Abstract
S. J. Thorpe, D. Fize, and C. Marlot (1996) showed how rapidly observers can detect animals in images of natural scenes, but it is still unclear which image features support this rapid detection. A. B. Torralba and A. Oliva (2003) suggested that a simple image statistic based on the power spectrum allows the absence or presence of objects in natural scenes to be predicted. We tested whether human observers make use of power spectral differences between image categories when detecting animals in natural scenes. In Experiments 1 and 2 we found performance to be essentially independent of the power spectrum. Computational analysis revealed that the ease of classification correlates with the proposed spectral cue without being caused by it. This result is consistent with the hypothesis that in commercial stock photo databases a majority of animal images are pre-segmented from the background by the photographers and this pre-segmentation causes the power spectral differences between image categories and may, furthermore, help rapid animal detection. Data from a third experiment are consistent with this hypothesis. Together, our results make it exceedingly unlikely that human observers make use of power spectral differences between animal- and no-animal images during rapid animal detection. In addition, our results point to potential confounds in the commercially available “natural image” databases whose statistics may be less natural than commonly presumed.

ei

Web DOI [BibTex]

A generative model approach for decoding in the visual event-related potential-based brain-computer interface speller

Martens, SMM., Leiva, JM.

Journal of Neural Engineering, 7(2):1-10, April 2010 (article)

Abstract
There is a strong tendency towards discriminative approaches in brain-computer interface (BCI) research. We argue that generative model-based approaches are worth pursuing and propose a simple generative model for the visual ERP-based BCI speller which incorporates prior knowledge about the brain signals. We show that the proposed generative method needs less training data to reach a given letter prediction performance than the state of the art discriminative approaches.

ei

PDF PDF DOI [BibTex]

Hilbert Space Embeddings and Metrics on Probability Measures

Sriperumbudur, B., Gretton, A., Fukumizu, K., Schölkopf, B., Lanckriet, G.

Journal of Machine Learning Research, 11, pages: 1517-1561, April 2010 (article)

ei

PDF [BibTex]

Graph Kernels

Vishwanathan, SVN., Schraudolph, NN., Kondor, R., Borgwardt, KM.

Journal of Machine Learning Research, 11, pages: 1201-1242, April 2010 (article)

Abstract
We present a unified framework to study graph kernels, special cases of which include the random walk (Gärtner et al., 2003; Borgwardt et al., 2005) and marginalized (Kashima et al., 2003, 2004; Mahé et al., 2004) graph kernels. Through reduction to a Sylvester equation we improve the time complexity of kernel computation between unlabeled graphs with n vertices from O(n^6) to O(n^3). We find a spectral decomposition approach even more efficient when computing entire kernel matrices. For labeled graphs we develop conjugate gradient and fixed-point methods that take O(dn^3) time per iteration, where d is the size of the label set. By extending the necessary linear algebra to Reproducing Kernel Hilbert Spaces (RKHS) we obtain the same result for d-dimensional edge kernels, and O(n^4) in the infinite-dimensional case; on sparse graphs these algorithms only take O(n^2) time per iteration in all cases. Experiments on graphs from bioinformatics and other application domains show that these techniques can speed up computation of the kernel by an order of magnitude or more. We also show that certain rational kernels (Cortes et al., 2002, 2003, 2004) when specialized to graphs reduce to our random walk graph kernel. Finally, we relate our framework to R-convolution kernels (Haussler, 1999) and provide a kernel that is close to the optimal assignment kernel of Fröhlich et al. (2006) yet provably positive semi-definite.
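
For reference, here is the naive computation that the paper's Sylvester-equation and spectral reductions speed up: the geometric random-walk kernel evaluated on the explicit direct-product graph. The decay lam must stay below the reciprocal spectral radius of the product graph for the series to converge; graph and parameter choices below are illustrative.

    import numpy as np

    def random_walk_kernel(A1, A2, lam=0.05):
        Wx = np.kron(A1, A2)                  # direct-product graph adjacency
        n = Wx.shape[0]
        p = np.ones(n) / n                    # uniform start/stop distributions
        # sum_k lam^k q^T Wx^k p = q^T (I - lam * Wx)^{-1} p
        return float(np.ones(n) @ np.linalg.solve(np.eye(n) - lam * Wx, p))

    path = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])
    triangle = np.ones((3, 3)) - np.eye(3)
    print(random_walk_kernel(path, triangle))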

ei

PDF Web [BibTex]

Solving large-scale nonnegative least squares using an adaptive non-monotonic method

Sra, S., Kim, D., Dhillon, I.

24th European Conference on Operational Research (EURO 2010), 24, pages: 223, April 2010 (poster)

Abstract
We present an efficient algorithm for large-scale non-negative least-squares (NNLS). We solve NNLS by extending the unconstrained quadratic optimization method of Barzilai and Borwein (BB) to handle nonnegativity constraints. Our approach is simple yet efficient. It differs from other constrained BB variants as: (i) it uses a specific subset of variables for computing BB steps; and (ii) it scales these steps adaptively to ensure convergence. We compare our method with both established convex solvers and specialized NNLS methods, and observe highly competitive empirical performance.

ei

PDF [BibTex]

Gene function prediction from synthetic lethality networks via ranking on demand

Lippert, C., Ghahramani, Z., Borgwardt, KM.

Bioinformatics, 26(7):912-918, April 2010 (article)

Abstract
Motivation: Synthetic lethal interactions represent pairs of genes whose individual mutations are not lethal, while the double mutation of both genes does incur lethality. Several studies have shown a correlation between functional similarity of genes and their distances in networks based on synthetic lethal interactions. However, there is a lack of algorithms for predicting gene function from synthetic lethality interaction networks. Results: In this article, we present a novel technique called kernelROD for gene function prediction from synthetic lethal interaction networks based on kernel machines. We apply our novel algorithm to Gene Ontology functional annotation prediction in yeast. Our experiments show that our method leads to improved gene function prediction compared with state-of-the-art competitors and that combining genetic and congruence networks leads to a further improvement in prediction accuracy.

ei

Web DOI [BibTex]

Sparse regression via a trust-region proximal method

Kim, D., Sra, S., Dhillon, I.

24th European Conference on Operational Research (EURO 2010), 24, pages: 278, April 2010 (poster)

Abstract
We present a method for sparse regression problems. Our method is based on the nonsmooth trust-region framework that minimizes a sum of smooth convex functions and a nonsmooth convex regularizer. By employing a separable quadratic approximation to the smooth part, the method enables the use of proximity operators, which in turn allow tackling the nonsmooth part efficiently. We illustrate our method by implementing it for three important sparse regression problems. In experiments with synthetic and real-world large-scale data, our method is seen to be competitive, robust, and scalable.

ei

PDF [BibTex]

A toolbox for predicting G-quadruplex formation and stability

Wong, HM., Stegle, O., Rodgers, S., Huppert, J.

Journal of Nucleic Acids, 2010(564946):1-6, March 2010 (article)

Abstract
G-quadruplexes are four-stranded nucleic acid structures formed around a core of guanines, arranged in squares with mutual hydrogen bonding. Many of these structures are highly thermally stable, especially in the presence of monovalent cations such as those found under physiological conditions. Understanding of their physiological roles is expanding rapidly, and they have been implicated in regulating gene transcription and translation, among other functions. We have built a community-focused website to act as a repository for the information that is now being developed. At its core, this site has a detailed database (QuadDB) of predicted G-quadruplexes in the human and other genomes, together with the predictive algorithm used to identify them. We also provide a QuadPredict server, which predicts thermal stability and acts as a repository for experimental data from all researchers. There are also a number of other data sources with computational predictions. We anticipate that the wide availability of this information will be of use both to researchers already active in this exciting field and to those who wish to investigate a particular gene hypothesis.
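
The folding rule behind such predictions, four runs of three or more guanines separated by short loops, fits in a regular expression; a minimal single-strand scanner (loop-length conventions vary, so treat this as a sketch):

    import re

    # four runs of >= 3 guanines separated by loops of 1-7 bases
    QUAD_RE = re.compile(r"G{3,}[ACGT]{1,7}G{3,}[ACGT]{1,7}G{3,}[ACGT]{1,7}G{3,}")

    def find_quadruplexes(seq):
        return [(m.start(), m.group()) for m in QUAD_RE.finditer(seq.upper())]

    print(find_quadruplexes("ttGGGaGGGtaGGGaGGGtt"))     # one hit at position 2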

ei

PDF DOI [BibTex]

A Novel Protocol for Accuracy Assessment in Classification of Very High Resolution Images

Persello, C., Bruzzone, L.

IEEE Transactions on Geoscience and Remote Sensing, 48(3):1232-1244, March 2010 (article)

Abstract
This paper presents a novel protocol for the accuracy assessment of the thematic maps obtained by the classification of very high resolution images. As the thematic accuracy alone is not sufficient to adequately characterize the geometrical properties of high-resolution classification maps, we propose a protocol that is based on the analysis of two families of indices: 1) the traditional thematic accuracy indices and 2) a set of novel geometric indices that model different geometric properties of the objects recognized in the map. In this context, we present a set of indices that characterize five different types of geometric errors in the classification map: 1) oversegmentation; 2) undersegmentation; 3) edge location; 4) shape distortion; and 5) fragmentation. Moreover, we propose a new approach for tuning the free parameters of supervised classifiers on the basis of a multiobjective criterion function that aims at selecting the parameter values that result in the classification map that jointly optimize thematic and geometric error indices. Experimental results obtained on QuickBird images show the effectiveness of the proposed protocol in selecting classification maps characterized by a better tradeoff between thematic and geometric accuracies than standard procedures based only on thematic accuracy measures. In addition, results obtained with support vector machine classifiers confirm the effectiveness of the proposed multiobjective technique for the selection of free-parameter values for the classification algorithm.

ei

Web DOI [BibTex]
