2007

HPLC analysis and pharmacokinetic study of quercitrin and isoquercitrin in rat plasma after administration of Hypericum japonicum Thunb. extract.

Li, J., Wang, W., Zhang, L., Chen, H., Bi, S.

Biomedical Chromatography, 22(4):374-378, December 2007 (article)

Abstract
A simple HPLC method was developed for determination of quercitrin and isoquercitrin in rat plasma. Reversed-phase HPLC was employed for the quantitative analysis using kaempferol-3-O-β-D-glucopyranoside-7-O-α-L-rhamnoside as an internal standard. Following extraction from the plasma samples with ethyl acetate-isopropanol (95:5, v/v), these two compounds were successfully separated on a Luna C18 column (250 × 4.6 mm, 5 µm) with isocratic elution of acetonitrile-0.5% aqueous acetic acid (17:83, v/v) as the mobile phase. The flow-rate was set at 1 mL/min and the eluent was detected at 350 nm for both quercitrin and isoquercitrin. The method was linear over the studied ranges of 50-6000 and 50-5000 ng/mL for quercitrin and isoquercitrin, respectively. The intra- and inter-day precisions of the analysis were better than 13.1 and 13.2%, respectively. The lower limits of quantitation for quercitrin and isoquercitrin in plasma were both 50 ng/mL. The mean extraction recoveries were 73 and 61% for quercitrin and isoquercitrin, respectively. The validated method was successfully applied to pharmacokinetic studies of the two analytes in rat plasma after the oral administration of Hypericum japonicum Thunb. ethanol extract.

ei

Web DOI [BibTex]

Reaction graph kernels for discovering missing enzymes in the plant secondary metabolism

Saigo, H., Hattori, M., Tsuda, K.

NIPS Workshop on Machine Learning in Computational Biology, December 2007 (talk)

Abstract
The secondary metabolic pathway in plants is important for finding druggable candidate enzymes. However, there are many enzymes whose functions are still undiscovered, especially in organism-specific metabolic pathways. We propose reaction graph kernels for automatically assigning EC numbers to unknown enzymatic reactions in a metabolic network. Experiments are carried out on the KEGG/REACTION database, and our method successfully predicted the first three digits of the EC number with 83% accuracy. We also exhaustively predicted missing enzymatic functions in the plant secondary metabolism pathways and evaluated our results in terms of biochemical validity.

ei

Web [BibTex]

Positional Oligomer Importance Matrices

Sonnenburg, S., Zien, A., Philips, P., Rätsch, G.

NIPS Workshop on Machine Learning in Computational Biology, December 2007 (talk)

Abstract
At the heart of many important bioinformatics problems, such as gene finding and function prediction, is the classification of biological sequences, above all of DNA and proteins. In many cases, the most accurate classifiers are obtained by training SVMs with complex sequence kernels, for instance for transcription starts or splice sites. However, an often criticized downside of SVMs with complex kernels is that it is very hard for humans to understand the learned decision rules and to derive biological insights from them. To close this gap, we introduce the concept of positional oligomer importance matrices (POIMs) and develop an efficient algorithm for their computation. We demonstrate how they overcome the limitations of sequence logos, and how they can be used to find relevant motifs for different biological phenomena in a straightforward way. Note that the concept of POIMs is not limited to interpreting SVMs, but is applicable to general k-mer based scoring systems.

ei

Web [BibTex]

Machine Learning Algorithms for Polymorphism Detection

Schweikert, G., Zeller, G., Weigel, D., Schölkopf, B., Rätsch, G.

NIPS Workshop on Machine Learning in Computational Biology, December 2007 (talk)

ei

Web [BibTex]

Graph sharpening plus graph integration: a synergy that improves protein functional classification

Shin, HH., Lisewski, AM., Lichtarge, O.

Bioinformatics, 23(23):3217-3224, December 2007 (article)

ei

Web DOI [BibTex]

A Tutorial on Spectral Clustering

von Luxburg, U.

Statistics and Computing, 17(4):395-416, December 2007 (article)

Abstract
In recent years, spectral clustering has become one of the most popular modern clustering algorithms. It is simple to implement, can be solved efficiently by standard linear algebra software, and very often outperforms traditional clustering algorithms such as the k-means algorithm. At first glance, spectral clustering appears slightly mysterious, and it is not obvious why it works at all and what it really does. The goal of this tutorial is to give some intuition on those questions. We describe different graph Laplacians and their basic properties, present the most common spectral clustering algorithms, and derive those algorithms from scratch by several different approaches. Advantages and disadvantages of the different spectral clustering algorithms are discussed.
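
As an illustration of the algorithm family the tutorial derives, here is a minimal Python sketch of unnormalized spectral clustering, assuming a fully connected Gaussian similarity graph; function and variable names are illustrative and not taken from the paper.

import numpy as np
from scipy.cluster.vq import kmeans2

def spectral_clustering(X, k, sigma=1.0):
    # Fully connected similarity graph with Gaussian weights.
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    W = np.exp(-sq_dists / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    # Unnormalized graph Laplacian L = D - W.
    D = np.diag(W.sum(axis=1))
    L = D - W
    # The eigenvectors belonging to the k smallest eigenvalues give the embedding.
    _, eigvecs = np.linalg.eigh(L)
    U = eigvecs[:, :k]
    # Cluster the embedded points with k-means.
    _, labels = kmeans2(U, k, minit="points")
    return labels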

ei

PDF PDF DOI [BibTex]

An Automated Combination of Kernels for Predicting Protein Subcellular Localization

Zien, A., Ong, C.

NIPS Workshop on Machine Learning in Computational Biology, December 2007 (talk)

Abstract
Protein subcellular localization is a crucial ingredient to many important inferences about cellular processes, including prediction of protein function and protein interactions. We propose a new class of protein sequence kernels which considers all motifs including motifs with gaps. This class of kernels allows the inclusion of pairwise amino acid distances into their computation. We utilize an extension of the multiclass support vector machine (SVM) method which directly solves protein subcellular localization without resorting to the common approach of splitting the problem into several binary classification problems. To automatically search over families of possible amino acid motifs, we optimize over multiple kernels at the same time. We compare our automated approach to four other predictors on three different datasets, and show that we perform better than the current state of the art. Furthermore, our method provides some insights as to which features are most useful for determining subcellular localization, which are in agreement with biological reasoning.
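
In standard multiple kernel learning notation (which may differ in detail from the paper's formulation), the automated kernel combination referred to above is a convex combination of base kernels,

k(x, x') = \sum_{j=1}^{p} \beta_j \, k_j(x, x'), \qquad \beta_j \ge 0, \quad \sum_{j=1}^{p} \beta_j = 1,

where the weights β_j are optimized jointly with the multiclass SVM.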

ei

Web [BibTex]

A Tutorial on Kernel Methods for Categorization

Jäkel, F., Schölkopf, B., Wichmann, F.

Journal of Mathematical Psychology, 51(6):343-358, December 2007 (article)

Abstract
The abilities to learn and to categorize are fundamental for cognitive systems, be it animals or machines, and therefore have attracted attention from engineers and psychologists alike. Modern machine learning methods and psychological models of categorization are remarkably similar, partly because these two fields share a common history in artificial neural networks and reinforcement learning. However, machine learning is now an independent and mature field that has moved beyond psychologically or neurally inspired algorithms towards providing foundations for a theory of learning that is rooted in statistics and functional analysis. Much of this research is potentially interesting for psychological theories of learning and categorization but also hardly accessible for psychologists. Here, we provide a tutorial introduction to a popular class of machine learning tools, called kernel methods. These methods are closely related to perceptrons, radial-basis-function neural networks and exemplar theories of categorization. Recent theoretical advances in machine learning are closely tied to the idea that the similarity of patterns can be encapsulated in a positive definite kernel. Such a positive definite kernel can define a reproducing kernel Hilbert space which allows one to use powerful tools from functional analysis for the analysis of learning algorithms. We give basic explanations of some key concepts—the so-called kernel trick, the representer theorem and regularization—which may open up the possibility that insights from machine learning can feed back into psychology.

ei

PDF Web DOI [BibTex]

A semigroup approach to queueing systems

Haji, A., Radl, A.

Semigroup Forum, 75(3):610-624, December 2007 (article)

Abstract
We prove asymptotic stability of the solutions of equations describing a simple queueing system consisting of two machines separated by a finite storage buffer. Following an approach by G. Gupur, we apply the theory of C0-semigroups and spectral theory of positive operators.

ei

PDF DOI [BibTex]

Point-spread functions for backscattered imaging in the scanning electron microscope

Hennig, P., Denk, W.

Journal of Applied Physics, 102(12):1-8, December 2007 (article)

Abstract
Knowledge of the imaging system's properties is central to the correct interpretation of any image. In a scanning electron microscope, regions of different composition generally interact in a highly nonlinear way during signal generation. Using Monte Carlo simulations we found that in resin-embedded, heavy metal-stained biological specimens staining is sufficiently dilute to allow an approximately linear treatment. We then mapped point-spread functions for backscattered-electron contrast, for primary energies of 3 and 7 keV and for different detector specifications. The point-spread functions are surprisingly well confined (both laterally and in depth) compared even to the distribution of only those scattered electrons that leave the sample again.

ei pn

Web DOI [BibTex]

Accurate Splice Site Prediction Using Support Vector Machines

Sonnenburg, S., Schweikert, G., Philips, P., Behr, J., Rätsch, G.

BMC Bioinformatics, 8(Supplement 10):1-16, December 2007 (article)

Abstract
Background: For splice site recognition, one has to solve two classification problems: discriminating true from decoy splice sites for both acceptor and donor sites. Gene finding systems typically rely on Markov Chains to solve these tasks. Results: In this work we consider Support Vector Machines for splice site recognition. We employ the so-called weighted degree kernel, which turns out to be well suited for this task, as we will illustrate in several experiments where we compare its prediction accuracy with that of recently proposed systems. We apply our method to the genome-wide recognition of splice sites in Caenorhabditis elegans, Drosophila melanogaster, Arabidopsis thaliana, Danio rerio, and Homo sapiens. Our performance estimates indicate that splice sites can be recognized very accurately in these genomes and that our method outperforms many other methods including Markov Chains, GeneSplicer and SpliceMachine. We provide genome-wide predictions of splice sites and a stand-alone prediction tool ready to be used for incorporation in a gene finder. Availability: Data, splits, additional information on the model selection, the whole genome predictions, as well as the stand-alone prediction tool are available for download at http://www.fml.mpg.de/raetsch/projects/splice.
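
For illustration, a direct (unoptimized) Python sketch of the weighted degree kernel mentioned above, under its standard definition with decaying oligomer weights; the genome-wide experiments in the paper rely on far faster implementations.

def weighted_degree_kernel(x, y, D=20):
    # Weighted degree kernel between two equal-length sequences:
    # counts matching substrings (oligomers) of length d = 1..D at each position,
    # weighted by beta_d = 2 (D - d + 1) / (D (D + 1)).
    assert len(x) == len(y)
    L = len(x)
    value = 0.0
    for d in range(1, D + 1):
        beta_d = 2.0 * (D - d + 1) / (D * (D + 1))
        matches = sum(1 for i in range(L - d + 1) if x[i:i + d] == y[i:i + d])
        value += beta_d * matches
    return value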

ei

PDF DOI [BibTex]

Challenges in Brain-Computer Interface Development: Induction, Measurement, Decoding, Integration

Hill, NJ.

Invited keynote talk at the launch of BrainGain, the Dutch BCI research consortium, November 2007 (talk)

Abstract
I'll present a perspective on Brain-Computer Interface development from Tübingen. Some of the benefits promised by BCI technology lie in the near foreseeable future, and some further away. Our motivation is to make BCI technology feasible for the people who could benefit from what it has to offer soon: namely, people in the "completely locked-in" state. I'll mention some of the challenges of working with this user group, and explain the specific directions they have motivated us to take in developing experimental methods, algorithms, and software.

ei

[BibTex]

Some Theoretical Aspects of Human Categorization Behavior: Similarity and Generalization

Jäkel, F.

Biologische Kybernetik, Eberhard-Karls-Universität Tübingen, Tübingen, Germany, November 2007, passed with "ausgezeichnet", summa cum laude, published online (phdthesis)

ei

PDF [BibTex]

Statistical Learning Theory Approaches to Clustering

Jegelka, S.

Biologische Kybernetik, Eberhard-Karls-Universität Tübingen, Tübingen, Germany, November 2007 (diplomathesis)

ei

PDF [BibTex]

Policy Learning for Robotics

Peters, J.

14th International Conference on Neural Information Processing (ICONIP), November 2007 (talk)

ei

Web [BibTex]

A unifying framework for robot control with redundant DOFs

Peters, J., Mistry, M., Udwadia, F., Nakanishi, J., Schaal, S.

Autonomous Robots, 24(1):1-12, October 2007 (article)

Abstract
Recently, Udwadia (Proc. R. Soc. Lond. A 2003:1783–1800, 2003) suggested to derive tracking controllers for mechanical systems with redundant degrees-of-freedom (DOFs) using a generalization of Gauss’ principle of least constraint. This method allows reformulating control problems as a special class of optimal controllers. In this paper, we take this line of reasoning one step further and demonstrate that several well-known and also novel nonlinear robot control laws can be derived from this generic methodology. We show experimental verifications on a Sarcos Master Arm robot for some of the derived controllers. The suggested approach offers a promising unification and simplification of nonlinear control law design for robots obeying rigid body dynamics equations, both with or without external constraints, with over-actuation or underactuation, as well as open-chain and closed-chain kinematics.
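
For reference, the classical result from Gauss's principle of least constraint that the abstract builds on is the Udwadia–Kalaba constraint force, given here in standard notation (the paper's generalization and notation may differ): for a system M(q) \ddot{q} = Q subject to constraints A(q, \dot{q}) \ddot{q} = b(q, \dot{q}),

Q_c = M^{1/2} \left( A M^{-1/2} \right)^{+} \left( b - A M^{-1} Q \right), \qquad \ddot{q} = M^{-1} \left( Q + Q_c \right),

where (\cdot)^{+} denotes the Moore–Penrose pseudoinverse.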

ei

PDF PDF DOI [BibTex]

The Need for Open Source Software in Machine Learning

Sonnenburg, S., Braun, M., Ong, C., Bengio, S., Bottou, L., Holmes, G., LeCun, Y., Müller, K., Pereira, F., Rasmussen, C., Rätsch, G., Schölkopf, B., Smola, A., Vincent, P., Weston, J., Williamson, R.

Journal of Machine Learning Research, 8, pages: 2443-2466, October 2007 (article)

Abstract
Open source tools have recently reached a level of maturity which makes them suitable for building large-scale real-world systems. At the same time, the field of machine learning has developed a large body of powerful learning algorithms for diverse applications. However, the true potential of these methods is not realized, since existing implementations are not openly shared, resulting in software with low usability, and weak interoperability. We argue that this situation can be significantly improved by increasing incentives for researchers to publish their software under an open source model. Additionally, we outline the problems authors are faced with when trying to publish algorithmic implementations of machine learning methods. We believe that a resource of peer reviewed software accompanied by short articles would be highly valuable to both the machine learning and the general scientific community.

ei

PDF Web [BibTex]

Hilbert Space Representations of Probability Distributions

Gretton, A.

2nd Workshop on Machine Learning and Optimization at the ISM, October 2007 (talk)

Abstract
Many problems in unsupervised learning require the analysis of features of probability distributions. At the most fundamental level, we might wish to determine whether two distributions are the same, based on samples from each - this is known as the two-sample or homogeneity problem. We use kernel methods to address this problem, by mapping probability distributions to elements in a reproducing kernel Hilbert space (RKHS). Given a sufficiently rich RKHS, these representations are unique: thus comparing feature space representations allows us to compare distributions without ambiguity. Applications include testing whether cancer subtypes are distinguishable on the basis of DNA microarray data, and whether low frequency oscillations measured at an electrode in the cortex have a different distribution during a neural spike. A more difficult problem is to discover whether two random variables drawn from a joint distribution are independent. It turns out that any dependence between pairs of random variables can be encoded in a cross-covariance operator between appropriate RKHS representations of the variables, and we may test independence by looking at a norm of the operator. We demonstrate this independence test by establishing dependence between an English text and its French translation, as opposed to French text on the same topic but otherwise unrelated. Finally, we show that this operator norm is itself a difference in feature means.
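
A minimal Python sketch of the feature-mean comparison described above, namely the biased squared maximum mean discrepancy with a Gaussian RBF kernel; names and the choice of kernel bandwidth are illustrative.

import numpy as np

def mmd2_biased(X, Y, sigma=1.0):
    # Biased estimate of the squared RKHS distance between the mean
    # embeddings of two samples X and Y (Gaussian RBF kernel).
    def gram(A, B):
        sq = np.sum(A ** 2, 1)[:, None] + np.sum(B ** 2, 1)[None, :] - 2 * A @ B.T
        return np.exp(-sq / (2 * sigma ** 2))
    Kxx, Kyy, Kxy = gram(X, X), gram(Y, Y), gram(X, Y)
    return Kxx.mean() + Kyy.mean() - 2 * Kxy.mean()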

ei

PDF Web [BibTex]

Regression with Intervals

Kashima, H., Yamazaki, K., Saigo, H., Inokuchi, A.

International Workshop on Data-Mining and Statistical Science (DMSS2007), October 2007, JSAI Incentive Award. Talk was given by Hisashi Kashima. (talk)

ei

Web [BibTex]

On the Representer Theorem and Equivalent Degrees of Freedom of SVR

Dinuzzo, F., Neve, M., De Nicolao, G., Gianazza, U.

Journal of Machine Learning Research, 8, pages: 2467-2495, October 2007 (article)

Abstract
Support Vector Regression (SVR) for discrete data is considered. An alternative formulation of the representer theorem is derived. This result is based on the newly introduced notion of pseudoresidual and the use of subdifferential calculus. The representer theorem is exploited to analyze the sensitivity properties of ε-insensitive SVR and introduce the notion of approximate degrees of freedom. The degrees of freedom are shown to play a key role in the evaluation of the optimism, that is the difference between the expected in-sample error and the expected empirical risk. In this way, it is possible to define a Cp-like statistic that can be used for tuning the parameters of SVR. The proposed tuning procedure is tested on a simulated benchmark problem and on a real world problem (Boston Housing data set).
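
In standard notation (not necessarily the alternative formulation derived in the paper), the representer theorem for ε-insensitive SVR states that the minimizer of the regularized empirical risk

\min_{f \in \mathcal{H}} \; \sum_{i=1}^{n} V_\varepsilon\bigl(y_i - f(x_i)\bigr) + \lambda \lVert f \rVert_{\mathcal{H}}^{2}, \qquad V_\varepsilon(r) = \max(0, \lvert r \rvert - \varepsilon),

admits a finite expansion over the training points, f(x) = \sum_{i=1}^{n} c_i K(x, x_i).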

ei

Web [BibTex]

Some observations on the masking effects of Mach bands

Curnow, T., Cowie, DA., Henning, GB., Hill, NJ.

Journal of the Optical Society of America A, 24(10):3233-3241, October 2007 (article)

Abstract
There are 8 cycle/deg ripples or oscillations in performance as a function of location near Mach bands in experiments measuring Mach bands’ masking effects on random polarity signal bars. The oscillations with increments are 180 degrees out of phase with those for decrements. The oscillations, much larger than the measurement error, appear to relate to the weighting function of the spatial-frequency-tuned channel detecting the broadband signals. The ripples disappear with step maskers and become much smaller at durations below 25 ms, implying either that the site of masking has changed or that the weighting function and hence spatial-frequency tuning is slow to develop.

ei

PDF Web DOI [BibTex]

Bayesian Estimators for Robins-Ritov’s Problem

Harmeling, S., Toussaint, M.

(EDI-INF-RR-1189), School of Informatics, University of Edinburgh, October 2007 (techreport)

Abstract
Bayesian or likelihood-based approaches to data analysis have become very popular in the field of Machine Learning. However, there exist theoretical results which question the general applicability of such approaches; among them a result by Robins and Ritov, who introduce a specific example for which they prove that a likelihood-based estimator will fail (i.e., in certain cases it does not converge to the true parameter estimate, even given infinite data). In this paper we consider various approaches to formulate likelihood-based estimators in this example, basically by considering various extensions of the presumed generative model of the data. We can derive estimators which are very similar to the classical Horvitz-Thompson estimator and which also account for a priori knowledge of an observation probability function.
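
For context, the classical Horvitz–Thompson estimator referred to above is, in standard notation, the inverse-probability-weighted mean

\hat{\mu}_{\mathrm{HT}} = \frac{1}{n} \sum_{i=1}^{n} \frac{R_i \, Y_i}{\pi(X_i)},

where R_i indicates whether Y_i was observed and \pi(X_i) is the known observation probability.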

ei

PDF [BibTex]

Support Vector Machine Learning for Interdependent and Structured Output Spaces

Altun, Y., Hofmann, T., Tsochantaridis, I.

In Predicting Structured Data, pages: 85-104, Advances in neural information processing systems, (Editors: Bakir, G. H., T. Hofmann, B. Schölkopf, A. J. Smola, B. Taskar, S. V. N. Vishwanathan), MIT Press, Cambridge, MA, USA, September 2007 (inbook)

ei

Web [BibTex]

Brisk Kernel ICA

Jegelka, S., Gretton, A.

In Large Scale Kernel Machines, pages: 225-250, Neural Information Processing, (Editors: Bottou, L., O. Chapelle, D. DeCoste, J. Weston), MIT Press, Cambridge, MA, USA, September 2007 (inbook)

Abstract
Recent approaches to independent component analysis have used kernel independence measures to obtain very good performance in ICA, particularly in areas where classical methods experience difficulty (for instance, sources with near-zero kurtosis). In this chapter, we compare two efficient extensions of these methods for large-scale problems: random subsampling of entries in the Gram matrices used in defining the independence measures, and incomplete Cholesky decomposition of these matrices. We derive closed-form, efficiently computable approximations for the gradients of these measures, and compare their performance on ICA using both artificial and music data. We show that kernel ICA can scale up to much larger problems than yet attempted, and that incomplete Cholesky decomposition performs better than random sampling.

ei

PDF Web [BibTex]

MR-Based PET Attenuation Correction: Method and Validation

Hofmann, M., Steinke, F., Scheel, V., Brady, M., Schölkopf, B., Pichler, B.

Joint Molecular Imaging Conference, September 2007 (talk)

Abstract
PET/MR combines the high soft tissue contrast of Magnetic Resonance Imaging (MRI) and the functional information of Positron Emission Tomography (PET). For quantitative PET information, correction of tissue photon attenuation is mandatory. Usually in conventional PET, the attenuation map is obtained from a transmission scan, which uses a rotating source, or from the CT scan in case of combined PET/CT. In the case of a PET/MR scanner, there is insufficient space for the rotating source and ideally one would want to calculate the attenuation map from the MR image instead. Since MR images provide information about proton density of the different tissue types, it is not trivial to use this data for PET attenuation correction. We present a method for predicting the PET attenuation map from a given MR image, using a combination of atlas-registration and recognition of local patterns. Using leave-one-out cross-validation, we show on a database of 16 MR-CT image pairs that our method reliably allows estimating the CT image from the MR image. Subsequently, as in PET/CT, the PET attenuation map can be predicted from the CT image. On an additional dataset of MR/CT/PET triplets we quantitatively validate that our approach allows PET quantification with an error that is smaller than what would be clinically significant. We demonstrate our approach on T1-weighted human brain scans. However, the presented methods are more general and current research focuses on applying the established methods to human whole body PET/MRI applications.

ei

PDF Web [BibTex]

Mining complex genotypic features for predicting HIV-1 drug resistance

Saigo, H., Uno, T., Tsuda, K.

Bioinformatics, 23(18):2455-2462, September 2007 (article)

Abstract
Human immunodeficiency virus type 1 (HIV-1) evolves in the human body, and its exposure to a drug often causes mutations that enhance the resistance against the drug. To design an effective pharmacotherapy for an individual patient, it is important to accurately predict the drug resistance based on genotype data. Notably, the resistance is not just the simple sum of the effects of all mutations. Structural biological studies suggest that the association of mutations is crucial: Even if mutations A or B alone do not affect the resistance, a significant change might happen when the two mutations occur together. Linear regression methods cannot take the associations into account, while decision tree methods can reveal only limited associations. Kernel methods and neural networks implicitly use all possible associations for prediction, but cannot select salient associations explicitly. Our method, itemset boosting, performs linear regression in the complete space of power sets of mutations. It implements a forward feature selection procedure where, in each iteration, one mutation combination is found by an efficient branch-and-bound search. This method uses all possible combinations, and salient associations are explicitly shown. In experiments, our method worked particularly well for predicting the resistance of nucleotide reverse transcriptase inhibitors (NRTIs). Furthermore, it successfully recovered many mutation associations known in biological literature.

ei

Web DOI [BibTex]

Training a Support Vector Machine in the Primal

Chapelle, O.

In Large Scale Kernel Machines, pages: 29-50, Neural Information Processing, (Editors: Bottou, L., O. Chapelle, D. DeCoste, J. Weston), MIT Press, Cambridge, MA, USA, September 2007, This is a slightly updated version of the Neural Computation paper (inbook)

Abstract
Most literature on Support Vector Machines (SVMs) concentrates on the dual optimization problem. In this paper, we would like to point out that the primal problem can also be solved efficiently, both for linear and non-linear SVMs, and that there is no reason to ignore this possibility. On the contrary, from the primal point of view, new families of algorithms for large scale SVM training can be investigated.
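
A minimal sketch of primal training for a linear SVM with the squared hinge loss, using plain gradient descent; the chapter itself discusses more refined primal solvers (e.g., Newton-type methods), so this is only illustrative and the names are not taken from the text.

import numpy as np

def primal_svm(X, y, C=1.0, lr=1e-3, iters=2000):
    # Gradient descent on the primal objective
    #   J(w, b) = 0.5 ||w||^2 + C * sum_i max(0, 1 - y_i (w . x_i + b))^2
    # with labels y_i in {-1, +1}.
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(iters):
        margins = 1.0 - y * (X @ w + b)
        active = margins > 0                      # only violated margins contribute
        grad_w = w - 2 * C * (X[active].T @ (y[active] * margins[active]))
        grad_b = -2 * C * np.sum(y[active] * margins[active])
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b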

ei

PDF Web [BibTex]

Approximation Methods for Gaussian Process Regression

Quiñonero-Candela, J., Rasmussen, CE., Williams, CKI.

In Large-Scale Kernel Machines, pages: 203-223, Neural Information Processing, (Editors: Bottou, L., O. Chapelle, D. DeCoste, J. Weston), MIT Press, Cambridge, MA, USA, September 2007 (inbook)

Abstract
A wealth of computationally efficient approximation methods for Gaussian process regression have been recently proposed. We give a unifying overview of sparse approximations, following Quiñonero-Candela and Rasmussen (2005), and a brief review of approximate matrix-vector multiplication methods.
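
A representative example of the sparse approximations surveyed in the chapter is the low-rank (Nyström / subset-of-regressors) replacement of the full covariance matrix, written here in standard notation:

K_{nn} \;\approx\; Q_{nn} \;=\; K_{nm} \, K_{mm}^{-1} \, K_{mn},

where K_{nm} contains covariances between the n training inputs and m inducing inputs; different members of the family differ in how Q_{nn} is corrected on its diagonal or blocks.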

ei

PDF Web [BibTex]

Learning with Transformation Invariant Kernels

Walder, C., Chapelle, O.

(165), Max Planck Institute for Biological Cybernetics, Tübingen, Germany, September 2007 (techreport)

Abstract
This paper considers kernels invariant to translation, rotation and dilation. We show that no non-trivial positive definite (p.d.) kernels exist which are radial and dilation invariant, only conditionally positive definite (c.p.d.) ones. Accordingly, we discuss the c.p.d. case and provide some novel analysis, including an elementary derivation of a c.p.d. representer theorem. On the practical side, we give a support vector machine (s.v.m.) algorithm for arbitrary c.p.d. kernels. For the thin-plate kernel this leads to a classifier with only one parameter (the amount of regularisation), which we demonstrate to be as effective as an s.v.m. with the Gaussian kernel, even though the Gaussian involves a second parameter (the length scale).

ei

PDF [BibTex]

Density Estimation of Structured Outputs in Reproducing Kernel Hilbert Spaces

Altun, Y., Smola, A.

In Predicting Structured Data, pages: 283-300, Advances in neural information processing systems, (Editors: Bakir, G. H., T. Hofmann, B. Schölkopf, A. J. Smola, B. Taskar, S. V. N. Vishwanathan), MIT Press, Cambridge, MA, USA, September 2007 (inbook)

Abstract
In this paper we study the problem of estimating conditional probability distributions for structured output prediction tasks in Reproducing Kernel Hilbert Spaces. More specifically, we prove decomposition results for undirected graphical models, give constructions for kernels, and show connections to Gaussian Process classification. Finally we present efficient means of solving the optimization problem and apply this to label sequence learning. Experiments on named entity recognition and pitch accent prediction tasks demonstrate the competitiveness of our approach.

ei

Web [BibTex]

Trading Convexity for Scalability

Collobert, R., Sinz, F., Weston, J., Bottou, L.

In Large Scale Kernel Machines, pages: 275-300, Neural Information Processing, (Editors: Bottou, L., O. Chapelle, D. DeCoste, J. Weston), MIT Press, Cambridge, MA, USA, September 2007 (inbook)

Abstract
Convex learning algorithms, such as Support Vector Machines (SVMs), are often seen as highly desirable because they offer strong practical properties and are amenable to theoretical analysis. However, in this work we show how nonconvexity can provide scalability advantages over convexity. We show how concave-convex programming can be applied to produce (i) faster SVMs where training errors are no longer support vectors, and (ii) much faster Transductive SVMs.

ei

PDF Web [BibTex]

Scalable Semidefinite Programming using Convex Perturbations

Kulis, B., Sra, S., Jegelka, S.

(TR-07-47), University of Texas, Austin, TX, USA, September 2007 (techreport)

Abstract
Several important machine learning problems can be modeled and solved via semidefinite programs. Often, researchers invoke off-the-shelf software for the associated optimization, which can be inappropriate for many applications due to computational and storage requirements. In this paper, we introduce the use of convex perturbations for semidefinite programs (SDPs). Using a particular perturbation function, we arrive at an algorithm for SDPs that has several advantages over existing techniques: a) it is simple, requiring only a few lines of MATLAB, b) it is a first-order method which makes it scalable, c) it can easily exploit the structure of a particular SDP to gain efficiency (e.g., when the constraint matrices are low-rank). We demonstrate on several machine learning applications that the proposed algorithm is effective in finding fast approximations to large-scale SDPs.

ei

PDF [BibTex]

Bayesian methods for NMR structure determination

Habeck, M.

29th Annual Discussion Meeting: Magnetic Resonance in Biophysical Chemistry, September 2007 (talk)

ei

Web [BibTex]

Real-Time Fetal Heart Monitoring in Biomagnetic Measurements Using Adaptive Real-Time ICA

Waldert, S., Bensch, M., Bogdan, M., Rosenstiel, W., Schölkopf, B., Lowery, C., Eswaran, H., Preissl, H.

IEEE Transactions on Biomedical Engineering, 54(10):1867-1874, September 2007 (article)

Abstract
Electrophysiological signals of the developing fetal brain and heart can be investigated by fetal magnetoencephalography (fMEG). During such investigations, the fetal heart activity and that of the mother should be monitored continuously to provide an important indication of current well-being. Due to physical constraints of an fMEG system, it is not possible to use clinically established heart monitors for this purpose. Considering this constraint, we developed a real-time heart monitoring system for biomagnetic measurements and showed its reliability and applicability in research and for clinical examinations. The developed system consists of real-time access to fMEG data, an algorithm based on Independent Component Analysis (ICA), and a graphical user interface (GUI). The algorithm extracts the current fetal and maternal heart signal from a noisy and artifact-contaminated data stream in real-time and is able to adapt automatically to continuously varying environmental parameters. This algorithm has been named Adaptive Real-time ICA (ARICA) and is applicable to real-time artifact removal as well as to related blind signal separation problems.

ei

PDF Web DOI [BibTex]

Classifying Event-Related Desynchronization in EEG, ECoG and MEG signals

Hill, N., Lal, T., Tangermann, M., Hinterberger, T., Widman, G., Elger, C., Schölkopf, B., Birbaumer, N.

In Toward Brain-Computer Interfacing, pages: 235-260, Neural Information Processing, (Editors: G Dornhege and J del R Millán and T Hinterberger and DJ McFarland and K-R Müller), MIT Press, Cambridge, MA, USA, September 2007 (inbook)

ei

PDF Web [BibTex]

Joint Kernel Maps

Weston, J., Bakir, G., Bousquet, O., Mann, T., Noble, W., Schölkopf, B.

In Predicting Structured Data, pages: 67-84, Advances in neural information processing systems, (Editors: GH Bakir and T Hofmann and B Schölkopf and AJ Smola and B Taskar and SVN Vishwanathan), MIT Press, Cambridge, MA, USA, September 2007 (inbook)

ei

Web [BibTex]

Brain-Computer Interfaces for Communication in Paralysis: A Clinical Experimental Approach

Hinterberger, T., Nijboer, F., Kübler, A., Matuz, T., Furdea, A., Mochty, U., Jordan, M., Lal, T., Hill, J., Mellinger, J., Bensch, M., Tangermann, M., Widman, G., Elger, C., Rosenstiel, W., Schölkopf, B., Birbaumer, N.

In Toward Brain-Computer Interfacing, pages: 43-64, Neural Information Processing, (Editors: G. Dornhege and J del R Millán and T Hinterberger and DJ McFarland and K-R Müller), MIT Press, Cambridge, MA, USA, September 2007 (inbook)

ei

PDF Web [BibTex]

Sparse Multiscale Gaussian Process Regression

Walder, C., Kim, K., Schölkopf, B.

(162), Max Planck Institute for Biological Cybernetics, Tübingen, Germany, August 2007 (techreport)

Abstract
Most existing sparse Gaussian process (g.p.) models seek computational advantages by basing their computations on a set of m basis functions that are the covariance function of the g.p. with one of its two inputs fixed. We generalise this for the case of the Gaussian covariance function, by basing our computations on m Gaussian basis functions with arbitrary diagonal covariance matrices (or length scales). For a fixed number of basis functions and any given criteria, this additional flexibility permits approximations no worse and typically better than was previously possible. Although we focus on g.p. regression, the central idea is applicable to all kernel based algorithms, such as the support vector machine. We perform gradient based optimisation of the marginal likelihood, which costs O(m²n) time where n is the number of data points, and compare the method to various other sparse g.p. methods. Our approach outperforms the other methods, particularly for the case of very few basis functions, i.e. a very high sparsity ratio.
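
The basis functions described above are, in standard notation (illustrative, not a quotation of the report), Gaussians with individual centres and diagonal covariances:

\phi_i(x) = \exp\!\left( -\tfrac{1}{2} \, (x - c_i)^\top \Sigma_i^{-1} (x - c_i) \right), \qquad \Sigma_i = \mathrm{diag}(\ell_{i1}^2, \ldots, \ell_{id}^2),

so each basis function carries its own length scales \ell_{ij}; the usual sparse construction is recovered when all \Sigma_i equal the covariance function's length scales.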

ei

PDF [BibTex]

Overcomplete Independent Component Analysis via Linearly Constrained Minimum Variance Spatial Filtering

Grosse-Wentrup, M., Buss, M.

Journal of VLSI Signal Processing, 48(1-2):161-171, August 2007 (article)

Abstract
Independent Component Analysis (ICA) designed for complete bases is used in a variety of applications with great success, despite the often questionable assumption of having N sensors and M sources with N ≥ M. In this article, we assume a source model with more sources than sensors (M > N), only L < N of which are assumed to have a non-Gaussian distribution. We argue that this is a realistic source model for a variety of applications, and prove that for ICA algorithms designed for complete bases (i.e., algorithms assuming N = M) based on mutual information the mixture coefficients of the L non-Gaussian sources can be reconstructed in spite of the overcomplete mixture model. Further, it is shown that the reconstructed temporal activity of non-Gaussian sources is arbitrarily mixed with Gaussian sources. To obtain estimates of the temporal activity of the non-Gaussian sources, we use the correctly reconstructed mixture coefficients in conjunction with linearly constrained minimum variance spatial filtering. This results in estimates of the non-Gaussian sources minimizing the variance of the interference of other sources. The approach is applied to the denoising of Event Related Fields recorded by MEG, and it is shown that it performs superiorly to ordinary ICA.
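
The spatial filtering step described above corresponds, in the standard single-constraint LCMV (beamformer) form, to

\hat w = \frac{C^{-1} a}{a^\top C^{-1} a}, \qquad \hat s(t) = \hat w^\top x(t),

where a is the reconstructed mixing (forward) vector of the non-Gaussian source of interest and C is the sensor covariance matrix; \hat w minimizes the output variance subject to unit gain on a. (Standard notation; the paper's details may differ.)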

ei

PDF PDF DOI [BibTex]


Efficient Subwindow Search for Object Localization

Blaschko, M., Hofmann, T., Lampert, C.

(164), Max Planck Institute for Biological Cybernetics, Tübingen, Germany, August 2007 (techreport)

Abstract
Recent years have seen huge advances in object recognition from images. Recognition rates beyond 95% are the rule rather than the exception on many datasets. However, most state-of-the-art methods can only decide if an object is present or not. They are not able to provide information on the object location or extent within the image. We report on a simple yet powerful scheme that extends many existing recognition methods to also perform localization of object bounding boxes. This is achieved by maximizing the classification score over all possible subrectangles in the image. Despite the impression that this would be computationally intractable, we show that in many situations efficient algorithms exist which solve a generalized maximum subrectangle problem. We show how our method is applicable to a variety of object detection frameworks and demonstrate its performance by applying it to the popular bag of visual words model, achieving competitive results on the PASCAL VOC 2006 dataset.
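
To make the search space concrete, here is a brute-force Python sketch of maximizing a per-pixel score map over all axis-aligned subrectangles using an integral image; the report's contribution is to replace this exhaustive loop by an efficient search (branch-and-bound style), which is not reproduced here.

import numpy as np

def best_subwindow(score_map):
    # Exhaustively find the rectangle maximizing the summed per-pixel scores.
    # An integral image gives each box sum in O(1); the search itself is O(H^2 W^2).
    H, W = score_map.shape
    ii = np.zeros((H + 1, W + 1))
    ii[1:, 1:] = np.cumsum(np.cumsum(score_map, axis=0), axis=1)
    best, best_box = -np.inf, None
    for top in range(H):
        for bottom in range(top, H):
            for left in range(W):
                for right in range(left, W):
                    s = (ii[bottom + 1, right + 1] - ii[top, right + 1]
                         - ii[bottom + 1, left] + ii[top, left])
                    if s > best:
                        best, best_box = s, (top, left, bottom, right)
    return best_box, best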

ei

PDF [BibTex]

Thinking Out Loud: Research and Development of Brain Computer Interfaces

Hill, NJ.

Invited keynote talk at the Max Planck Society's PhDNet Workshop, July 2007 (talk)

Abstract
My principal interest is in applying machine-learning methods to the development of Brain-Computer Interfaces (BCI). This involves the classification of a user's intentions or mental states, or regression against some continuous intentional control signal, using brain signals obtained for example by EEG, ECoG or MEG. The long-term aim is to develop systems that a completely paralysed person (such as someone suffering from advanced Amyotrophic Lateral Sclerosis) could use to communicate. Such systems have the potential to improve the lives of many people who would be otherwise completely unable to communicate, but they are still very much in the research and development stages.

ei

PDF [BibTex]

Error Correcting Codes for the P300 Visual Speller

Biessmann, F.

Biologische Kybernetik, Eberhard-Karls-Universität Tübingen, Tübingen, Germany, July 2007 (diplomathesis)

Abstract
The aim of brain-computer interface (BCI) research is to establish a communication system based on intentional modulation of brain activity. This is accomplished by classifying patterns of brain activity, volitionally induced by the user. The BCI presented in this study is based on a classical paradigm as proposed by Farwell and Donchin (1988), the P300 visual speller. Recording electroencephalograms (EEG) from the scalp while presenting letters successively to the user, the speller can infer from the brain signal which letter the user was focussing on. Since EEG recordings are noisy, usually many repetitions are needed to detect the correct letter. The focus of this study was to improve the accuracy of the visual speller applying some basic principles from information theory: Stimulus sequences of the speller have been modified into error-correcting codes. Additionally, a language model was incorporated into the probabilistic letter decoder. Classification of single EEG epochs was less accurate using error-correcting codes. However, the novel code could compensate for that, such that overall letter accuracies were as high as or even higher than for classical stimulus codes. In particular at high noise levels, error-correcting decoding achieved higher letter accuracies.

ei

PDF [BibTex]

Feature Selection for Trouble Shooting in Complex Assembly Lines

Pfingsten, T., Herrmann, D., Schnitzler, T., Feustel, A., Schölkopf, B.

IEEE Transactions on Automation Science and Engineering, 4(3):465-469, July 2007 (article)

Abstract
The final properties of sophisticated products can be affected by many unapparent dependencies within the manufacturing process, and the products’ integrity can often only be checked in a final measurement. Troubleshooting can therefore be very tedious if not impossible in large assembly lines. In this paper we show that Feature Selection is an efficient tool for serial-grouped lines to reveal causes for irregularities in product attributes. We compare the performance of several methods for Feature Selection on real-world problems in mass-production of semiconductor devices. Note to Practitioners— We present a data based procedure to localize flaws in large production lines: using the results of final quality inspections and information about which machines processed which batches, we are able to identify machines which cause low yield.

ei

PDF Web DOI [BibTex]

Gene selection via the BAHSIC family of algorithms

Song, L., Bedo, J., Borgwardt, K., Gretton, A., Smola, A.

Bioinformatics, 23(13: ISMB/ECCB 2007 Conference Proceedings):i490-i498, July 2007 (article)

Abstract
Motivation: Identifying significant genes among thousands of sequences on a microarray is a central challenge for cancer research in bioinformatics. The ultimate goal is to detect the genes that are involved in disease outbreak and progression. A multitude of methods have been proposed for this task of feature selection, yet the selected gene lists differ greatly between different methods. To accomplish biologically meaningful gene selection from microarray data, we have to understand the theoretical connections and the differences between these methods. In this article, we define a kernel-based framework for feature selection based on the Hilbert–Schmidt independence criterion and backward elimination, called BAHSIC. We show that several well-known feature selectors are instances of BAHSIC, thereby clarifying their relationship. Furthermore, by choosing a different kernel, BAHSIC allows us to easily define novel feature selection algorithms. As a further advantage, feature selection via BAHSIC works directly on multiclass problems. Results: In a broad experimental evaluation, the members of the BAHSIC family reach high levels of accuracy and robustness when compared to other feature selection techniques. Experiments show that features selected with a linear kernel provide the best classification performance in general, but if strong non-linearities are present in the data then non-linear kernels can be more suitable.
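
A much-simplified Python sketch of the backward-elimination idea behind BAHSIC, assuming linear kernels on features and labels and removing one feature per iteration; kernel choices, normalization, and the grouped elimination used in the paper are omitted.

import numpy as np

def hsic(K, L):
    # Biased empirical Hilbert-Schmidt Independence Criterion.
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

def bahsic_linear(X, y, n_keep):
    # Drop, at each step, the feature whose removal leaves the largest
    # dependence (HSIC) between the remaining features and the labels.
    feats = list(range(X.shape[1]))
    L = np.outer(y, y).astype(float)
    while len(feats) > n_keep:
        scores = []
        for f in feats:
            rest = [g for g in feats if g != f]
            K = X[:, rest] @ X[:, rest].T
            scores.append(hsic(K, L))
        feats.pop(int(np.argmax(scores)))
    return feats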

ei

Web DOI [BibTex]

Phenotyping of Chondrocytes In Vivo and In Vitro Using cDNA Array Technology

Zien, A., Gebhard, P., Fundel, K., Aigner, T.

Clinical Orthopaedics and Related Research, 460, pages: 226-233, July 2007 (article)

Abstract
The cDNA array technology is a powerful tool to analyze a high number of genes in parallel. We investigated whether large-scale gene expression analysis allows clustering and identification of cellular phenotypes of chondrocytes in different in vivo and in vitro conditions. In 100% of cases, clustering analysis distinguished between in vivo and in vitro samples, suggesting fundamental differences in chondrocytes in situ and in vitro regardless of the culture conditions or disease status. It also allowed us to differentiate between healthy and osteoarthritic cartilage. The clustering also revealed the relative importance of the investigated culturing conditions (stimulation agent, stimulation time, bead/monolayer). We augmented the cluster analysis with a statistical search for genes showing differential expression. The identified genes provided hints to the molecular basis of the differences between the sample classes. Our approach shows the power of modern bioinformatic algorithms for understanding and classifying chondrocytic phenotypes in vivo and in vitro. Although it does not generate new experimental data per se, it provides valuable information regarding the biology of chondrocytes and may provide tools for diagnosing and staging the osteoarthritic disease process.

ei

DOI [BibTex]

Data-driven goodness-of-fit tests

Langovoy, MA.

Biologische Kybernetik, Georg-August-Universität Göttingen, Göttingen, Germany, July 2007 (phdthesis)

ei

Web [BibTex]
