

2015


Distributed Event-based State Estimation

Trimpe, S.

Max Planck Institute for Intelligent Systems, November 2015 (techreport)

Abstract
An event-based state estimation approach for reducing communication in a networked control system is proposed. Multiple distributed sensor-actuator-agents observe a dynamic process and sporadically exchange their measurements and inputs over a bus network. Based on these data, each agent estimates the full state of the dynamic system, which may exhibit arbitrary inter-agent couplings. Local event-based protocols ensure that data is transmitted only when necessary to meet a desired estimation accuracy. This event-based scheme is shown to mimic a centralized Luenberger observer design up to guaranteed bounds, and stability is proven in the sense of bounded estimation errors for bounded disturbances. The stability result extends to the distributed control system that results when the local state estimates are used for distributed feedback control. Simulation results highlight the benefit of the event-based approach over classical periodic ones in reducing communication requirements.
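As a rough sketch of the event-based idea (not the paper's exact protocol), the loop below implements a send-on-delta trigger for a single scalar agent: the measurement is transmitted only when the observer's prediction misses it by more than a threshold. The dynamics, gains and threshold are illustrative assumptions.

```python
# Minimal sketch of an event-triggered observer; all constants are assumed.
import numpy as np

rng = np.random.default_rng(0)
a, c = 0.95, 1.0          # scalar dynamics x+ = a x + w, measurement y = c x + v
L, delta = 0.5, 0.3       # observer gain and event threshold

x, xhat, sent = 1.0, 0.0, 0
for t in range(200):
    x = a * x + 0.05 * rng.standard_normal()        # true process
    y = c * x + 0.02 * rng.standard_normal()        # noisy measurement
    xhat_pred = a * xhat                            # prediction every agent can run
    if abs(y - c * xhat_pred) > delta:              # event trigger: transmit only if
        xhat = xhat_pred + L * (y - c * xhat_pred)  # the prediction is too far off
        sent += 1
    else:
        xhat = xhat_pred                            # otherwise rely on the prediction
print(f"transmitted {sent}/200 measurements, final error {abs(x - xhat):.3f}")
```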

am ics

arXiv [BibTex]



Causal Inference for Empirical Time Series Based on the Postulate of Independence of Cause and Mechanism

Besserve, M.

53rd Annual Allerton Conference on Communication, Control, and Computing, September 2015 (talk)

ei

[BibTex]



Independence of cause and mechanism in brain networks

Besserve, M.

DALI workshop on Networks: Processes and Causality, April 2015 (talk)

ei

[BibTex]



Information-Theoretic Implications of Classical and Quantum Causal Structures

Chaves, R., Majenz, C., Luft, L., Maciel, T., Janzing, D., Schölkopf, B., Gross, D.

18th Conference on Quantum Information Processing (QIP), 2015 (talk)

ei

Web link (url) [BibTex]



Cosmology from Cosmic Shear with DES Science Verification Data

Abbott, T., Abdalla, F. B., Allam, S., Amara, A., Annis, J., Armstrong, R., Bacon, D., Banerji, M., Bauer, A. H., Baxter, E., et al.

arXiv preprint arXiv:1507.05552, 2015 (techreport)

ei

link (url) [BibTex]



The DES Science Verification Weak Lensing Shear Catalogs

Jarvis, M., Sheldon, E., Zuntz, J., Kacprzak, T., Bridle, S. L., Amara, A., Armstrong, R., Becker, M. R., Bernstein, G. M., Bonnett, C., et al.

arXiv preprint arXiv:1507.05603, 2015 (techreport)

ei

link (url) [BibTex]



The search for single exoplanet transits in the Kepler light curves

Foreman-Mackey, D., Hogg, D. W., Schölkopf, B.

IAU General Assembly, 22, pages: 2258352, 2015 (talk)

ei

link (url) [BibTex]



Derivation of phenomenological expressions for transition matrix elements for electron-phonon scattering

Illg, C., Haag, M., Müller, B. Y., Czycholl, G., Fähnle, M.

2015 (misc)

mms

link (url) [BibTex]

2013


Puppet Flow

Zuffi, S., Black, M. J.

(7), Max Planck Institute for Intelligent Systems, October 2013 (techreport)

Abstract
We introduce Puppet Flow (PF), a layered model describing the optical flow of a person in a video sequence. We consider video frames composed of two layers: a foreground layer corresponding to a person, and a background layer. We model the background as an affine flow field. The foreground layer, being a moving person, requires reasoning about the articulated nature of the human body. We thus represent the foreground layer with the Deformable Structures model (DS), a parametrized 2D part-based human body representation. We call the motion field defined through articulated motion and deformation of the DS model a Puppet Flow. By exploiting the DS representation, Puppet Flow is a parametrized optical flow field, whose parameters are the person's pose, gender and body shape.
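The background layer is the simplest part of the model and easy to make concrete. The snippet below evaluates an affine flow field u(x) = Ax + b on a pixel grid; the parameter values are made up for illustration, and the articulated foreground (DS) layer is omitted.

```python
# Sketch of the affine background flow only; A and b are illustrative.
import numpy as np

H, W = 4, 6
A = np.array([[0.01, -0.02], [0.03, 0.00]])   # affine part
b = np.array([0.5, -0.25])                    # translation

ys, xs = np.mgrid[0:H, 0:W]
coords = np.stack([xs.ravel(), ys.ravel()], axis=1)  # (N, 2) pixel coordinates
flow = coords @ A.T + b                              # u(x) = A x + b, shape (N, 2)
print(flow.reshape(H, W, 2)[0, 0])                   # flow at pixel (0, 0)
```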

ps

pdf Project Page [BibTex]



Dry adhesives and methods for making dry adhesives

Sitti, M., Kim, S.

September 2013, US Patent App. 14/016,651 (misc)

pi

[BibTex]



Dry adhesives and methods for making dry adhesives

Sitti, M., Kim, S.

September 2013, US Patent App. 14/016,683 (misc)

pi

[BibTex]



Dry adhesives and methods for making dry adhesives

Sitti, M., Kim, S.

September 2013, US Patent 8,524,092 (misc)

pi

[BibTex]



Studying large-scale brain networks: electrical stimulation and neural-event-triggered fMRI

Logothetis, N., Eschenko, O., Murayama, Y., Augath, M., Steudel, T., Evrard, H., Besserve, M., Oeltermann, A.

Twenty-Second Annual Computational Neuroscience Meeting (CNS*2013), BMC Neuroscience, 14(Supplement 1):A1, July 2013 (talk)

ei

Web [BibTex]



Learning and Optimization with Submodular Functions

Sankaran, B., Ghazvininejad, M., He, X., Kale, D., Cohen, L.

ArXiv, May 2013 (techreport)

Abstract
In many naturally occurring optimization problems one needs to ensure that the definition of the optimization problem lends itself to solutions that are tractable to compute. In cases where exact solutions cannot be computed tractably, it is beneficial to have strong guarantees on the tractable approximate solutions. To operate under these criteria, most optimization problems are cast under the umbrella of convexity or submodularity. In this report we will study design and optimization over a common class of functions called submodular functions. Set functions, and specifically submodular set functions, characterize a wide variety of naturally occurring optimization problems, and the property of submodularity of set functions has deep theoretical consequences with wide-ranging applications. Informally, submodularity of set functions captures the intuitive principle of diminishing returns: adding an element to a smaller set has more value than adding it to a larger set. Common examples of submodular monotone functions are entropies, concave functions of cardinality, and matroid rank functions; non-monotone examples include graph cuts, network flows, and mutual information. In this report we will review the formal definition of submodularity; the optimization of submodular functions, both maximization and minimization; and finally discuss some applications in relation to learning and reasoning using submodular functions.
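To make the diminishing-returns principle concrete, the sketch below runs the classic greedy algorithm for monotone submodular maximization under a cardinality constraint (the setting with the well-known (1 - 1/e) guarantee), using a toy coverage function; the ground set and the sets themselves are assumptions.

```python
# Greedy maximization of a monotone submodular coverage function.
def coverage(S, sets):
    """Submodular: number of elements covered by the chosen sets."""
    return len(set().union(*(sets[i] for i in S))) if S else 0

def greedy(sets, k):
    chosen = []
    for _ in range(k):
        # pick the set with the largest marginal gain (diminishing returns)
        gains = {i: coverage(chosen + [i], sets) - coverage(chosen, sets)
                 for i in range(len(sets)) if i not in chosen}
        chosen.append(max(gains, key=gains.get))
    return chosen

sets = [{1, 2, 3}, {3, 4}, {4, 5, 6, 7}, {1, 7}]
picked = greedy(sets, 2)
print(picked, coverage(picked, sets))   # -> [2, 0] covering 7 elements
```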

am

arxiv link (url) [BibTex]



Dry adhesives and methods of making dry adhesives

Sitti, M., Murphy, M., Aksak, B.

March 2013, US Patent App. 13/845,702 (misc)

pi

[BibTex]



A Quantitative Analysis of Current Practices in Optical Flow Estimation and the Principles Behind Them

Sun, D., Roth, S., Black, M. J.

(CS-10-03), Brown University, Department of Computer Science, January 2013 (techreport)

ps

pdf [BibTex]



Animating Samples from Gaussian Distributions

Hennig, P.

(8), Max Planck Institute for Intelligent Systems, Tübingen, Germany, 2013 (techreport)

ei pn

PDF [BibTex]



Domain Generalization via Invariant Feature Representation

Muandet, K.

30th International Conference on Machine Learning (ICML2013), 2013 (talk)

ei

PDF [BibTex]



Maximizing Kepler science return per telemetered pixel: Detailed models of the focal plane in the two-wheel era

Hogg, D. W., Angus, R., Barclay, T., Dawson, R., Fergus, R., Foreman-Mackey, D., Harmeling, S., Hirsch, M., Lang, D., Montet, B. T., Schiminovich, D., Schölkopf, B.

arXiv:1309.0653, 2013 (techreport)

ei

link (url) [BibTex]



Maximizing Kepler science return per telemetered pixel: Searching the habitable zones of the brightest stars

Montet, B. T., Angus, R., Barclay, T., Dawson, R., Fergus, R., Foreman-Mackey, D., Harmeling, S., Hirsch, M., Hogg, D. W., Lang, D., Schiminovich, D., Schölkopf, B.

arXiv:1309.0654, 2013 (techreport)

ei

link (url) [BibTex]


2009


Learning an Interactive Segmentation System

Nickisch, H., Kohli, P., Rother, C.

Max Planck Institute for Biological Cybernetics, December 2009 (techreport)

Abstract
Many successful applications of computer vision to image or video manipulation are interactive by nature. However, parameters of such systems are often trained neglecting the user. Traditionally, interactive systems have been treated in the same manner as their fully automatic counterparts. Their performance is evaluated by computing the accuracy of their solutions under some fixed set of user interactions. This paper proposes a new evaluation and learning method which brings the user in the loop. It is based on the use of an active robot user, a simulated model of a human user. We show how this approach can be used to evaluate and learn parameters of state-of-the-art interactive segmentation systems. We also show how simulated user models can be integrated into the popular max-margin method for parameter learning and propose an algorithm to solve the resulting optimisation problem.
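A toy rendition of the robot-user idea (an assumption-laden sketch, not the paper's trained model): given ground truth and the current segmentation, the simulated user places the next interaction at the centroid of the largest wrongly labelled region.

```python
# Simulated "robot user" that clicks inside the biggest error region.
import numpy as np
from scipy import ndimage

def robot_user_click(ground_truth, segmentation):
    errors = ground_truth != segmentation
    labels, n = ndimage.label(errors)            # connected error components
    if n == 0:
        return None                              # segmentation is perfect
    sizes = ndimage.sum(errors, labels, range(1, n + 1))
    biggest = int(np.argmax(sizes)) + 1
    return ndimage.center_of_mass(errors, labels, biggest)

gt  = np.zeros((8, 8), int); gt[2:6, 2:6] = 1
seg = np.zeros((8, 8), int); seg[2:4, 2:6] = 1
print(robot_user_click(gt, seg))   # -> (4.5, 3.5), inside the missed region
```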

ei

Web [BibTex]



Machine Learning for Brain-Computer Interfaces

Hill, NJ.

Mini-Symposia on Assistive Machine Learning for People with Disabilities at NIPS (AMD), December 2009 (talk)

Abstract
Brain-computer interfaces (BCI) aim to be the ultimate in assistive technology: decoding a user's intentions directly from brain signals without involving any muscles or peripheral nerves. Thus, some classes of BCI potentially offer hope for users with even the most extreme cases of paralysis, such as in late-stage Amyotrophic Lateral Sclerosis, where nothing else currently allows communication of any kind. Other lines in BCI research aim to restore lost motor function in as natural a way as possible, reconnecting and in some cases re-training motor-cortical areas to control prosthetic, or previously paretic, limbs. Research and development are progressing on both invasive and non-invasive fronts, although BCI has yet to make a breakthrough to widespread clinical application. The high-noise, high-dimensional nature of brain signals, particularly in non-invasive approaches and in patient populations, makes robust decoding techniques a necessity. Generally, the approach has been to use relatively simple feature extraction techniques, such as template matching and band-power estimation, coupled to simple linear classifiers. This has led to a prevailing view among applied BCI researchers that (sophisticated) machine learning is irrelevant since "it doesn't matter what classifier you use once you've done your preprocessing right and extracted the right features." I shall show a few examples of how this runs counter to both the empirical reality and the spirit of what needs to be done to bring BCI into clinical application. Along the way I'll highlight some of the interesting problems that remain open for machine learners.

ei

PDF Web [BibTex]



PAC-Bayesian Approach to Formulation of Clustering Objectives

Seldin, Y.

NIPS Workshop on "Clustering: Science or Art? Towards Principled Approaches", December 2009 (talk)

Abstract
Clustering is a widely used tool for exploratory data analysis. However, the theoretical understanding of clustering is very limited. We still do not have a well-founded answer to the seemingly simple question of "how many clusters are present in the data?", and furthermore a formal comparison of clusterings based on different optimization objectives is far beyond our abilities. The lack of good theoretical support gives rise to multiple heuristics that confuse the practitioners and stall development of the field. We suggest that the ill-posed nature of clustering problems is caused by the fact that clustering is often taken out of its subsequent application context. We argue that one does not cluster the data just for the sake of clustering it, but rather to facilitate the solution of some higher level task. By evaluation of the clustering's contribution to the solution of the higher level task it is possible to compare different clusterings, even those obtained by different optimization objectives. In the preceding work it was shown that such an approach can be applied to evaluation and design of co-clustering solutions. Here we suggest that this approach can be extended to other settings, where clustering is applied.

ei

PDF Web [BibTex]



Semi-supervised Kernel Canonical Correlation Analysis of Human Functional Magnetic Resonance Imaging Data

Shelton, JA.

Women in Machine Learning Workshop (WiML), December 2009 (talk)

Abstract
Kernel Canonical Correlation Analysis (KCCA) is a general technique for subspace learning that incorporates principal components analysis (PCA) and Fisher linear discriminant analysis (LDA) as special cases. By finding directions that maximize correlation, KCCA learns representations tied more closely to the underlying process generating the data and can ignore high-variance noise directions. However, for data where acquisition in a given modality is expensive or otherwise limited, KCCA may suffer from small sample effects. We propose to use semi-supervised Laplacian regularization to utilize data that are present in only one modality. This manifold learning approach is able to find highly correlated directions that also lie along the data manifold, resulting in a more robust estimate of correlated subspaces. Data acquired with functional magnetic resonance imaging (fMRI) are naturally amenable to subspace techniques, as they are well aligned, and such data of the human brain are a particularly interesting candidate. In this study we implemented various supervised and semi-supervised versions of KCCA on human fMRI data, with regression to single and multivariate labels (corresponding to video content subjects viewed during the image acquisition). In each variate condition, Laplacian regularization improved performance whereas the semi-supervised variants of KCCA yielded the best performance. We additionally analyze the weights learned by the regression in order to infer brain regions that are important during different types of visual processing.

ei

PDF Web [BibTex]



An Incremental GEM Framework for Multiframe Blind Deconvolution, Super-Resolution, and Saturation Correction

Harmeling, S., Sra, S., Hirsch, M., Schölkopf, B.

(187), Max Planck Institute for Biological Cybernetics, Tübingen, Germany, November 2009 (techreport)

Abstract
We develop an incremental generalized expectation maximization (GEM) framework to model the multiframe blind deconvolution problem. A simplistic version of this problem was recently studied by Harmeling et al. (2009). We solve a more realistic version of this problem which includes the following major features: (i) super-resolution ability despite noise and unknown blurring; (ii) saturation correction, i.e., handling of overexposed pixels that can otherwise confound the image processing; and (iii) simultaneous handling of color channels. These features are seamlessly integrated into our incremental GEM framework to yield simple but efficient multiframe blind deconvolution algorithms. We present technical details concerning critical steps of our algorithms, especially to highlight how all operations can be written using matrix-vector multiplications. We apply our algorithm to real-world images from astronomy and super-resolution tasks. Our experimental results show that our methods yield improved resolution and deconvolution at the same time.
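The remark about matrix-vector multiplications rests on a standard identity: convolution is multiplication by a Toeplitz matrix. A toy 1-D illustration, with an assumed kernel and signal:

```python
# Convolution written explicitly as a matrix-vector product.
import numpy as np

x = np.array([0., 1., 2., 3., 2., 1.])
k = np.array([0.25, 0.5, 0.25])          # blur kernel

n, m = len(x), len(k)
K = np.zeros((n - m + 1, n))             # "valid" convolution matrix
for i in range(n - m + 1):
    K[i, i:i + m] = k[::-1]              # flipped kernel -> true convolution

assert np.allclose(K @ x, np.convolve(x, k, mode="valid"))
print(K @ x)
```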

ei

PDF [BibTex]



Efficient Filter Flow for Space-Variant Multiframe Blind Deconvolution

Hirsch, M., Sra, S., Schölkopf, B., Harmeling, S.

(188), Max Planck Institute for Biological Cybernetics, Tübingen, Germany, November 2009 (techreport)

Abstract
Ultimately motivated by facilitating space-variant blind deconvolution, we present a class of linear transformations that are expressive enough for space-variant filters, but at the same time especially designed for efficient matrix-vector multiplications. Successful results on astronomical imaging through atmospheric turbulence and on noisy magnetic resonance images of constantly moving objects demonstrate the practical significance of our approach.
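A rough sketch of the space-variant idea under simplifying assumptions: split the signal into windows, filter each window with its own kernel, and recombine, so the whole operator remains a sum of diagonal and convolution matrices. The windows, kernels and signal below are toy choices, not the report's exact operator.

```python
# Space-variant filtering as y = sum_r diag(w_r) K_r diag(w_r) x.
import numpy as np

x = np.linspace(0., 1., 16) ** 2
halves = [slice(0, 8), slice(8, 16)]
kernels = [np.array([1., 0., 0.]),          # left half passes through
           np.array([1/3, 1/3, 1/3])]       # right half gets a box blur

y = np.zeros_like(x)
for sl, k in zip(halves, kernels):
    w = np.zeros_like(x); w[sl] = 1.0       # indicator window w_r
    y += w * np.convolve(w * x, k, mode="same")
print(y)
```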

ei

PDF [BibTex]



Event-Related Potentials in Brain-Computer Interfacing

Hill, NJ.

Invited lecture in the bachelor and masters course "Introduction to Brain-Computer Interfacing", October 2009 (talk)

Abstract
An introduction to event-related potentials with specific reference to their use in brain-computer interfacing applications and research.

ei

PDF [BibTex]



BCI2000 and Python

Hill, NJ.

Invited lecture at the 5th International BCI2000 Workshop, October 2009 (talk)

Abstract
A tutorial, with exercises, on how to integrate your own Python code with the BCI2000 software package.

ei

PDF [BibTex]



Implementing a Signal Processing Filter in BCI2000 Using C++

Hill, NJ., Mellinger, J.

Invited lecture at the 5th International BCI2000 Workshop, October 2009 (talk)

Abstract
This tutorial shows how the functionality of the BCI2000 software package can be extended with one's own code, using BCI2000's C++ API.

ei

PDF [BibTex]



Consistent Nonparametric Tests of Independence

Gretton, A., Györfi, L.

(172), Max Planck Institute for Biological Cybernetics, Tübingen, Germany, July 2009 (techreport)

Abstract
Three simple and explicit procedures for testing the independence of two multi-dimensional random variables are described. Two of the associated test statistics (L1, log-likelihood) are defined when the empirical distribution of the variables is restricted to finite partitions. A third test statistic is defined as a kernel-based independence measure. Two kinds of tests are provided. Distribution-free strong consistent tests are derived on the basis of large deviation bounds on the test statistics: these tests make almost surely no Type I or Type II error after a random sample size. Asymptotically alpha-level tests are obtained from the limiting distribution of the test statistics. For the latter tests, the Type I error converges to a fixed non-zero value alpha, and the Type II error drops to zero, for increasing sample size. All tests reject the null hypothesis of independence if the test statistics become large. The performance of the tests is evaluated experimentally on benchmark data.
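The kernel-based statistic referred to above is the Hilbert-Schmidt Independence Criterion; a compact biased empirical estimator, HSIC = trace(KHLH)/m^2, is easy to write down. The kernel bandwidth and test data below are illustrative choices.

```python
# Biased empirical HSIC with Gaussian kernels.
import numpy as np

def rbf_gram(z, sigma):
    d2 = (z[:, None] - z[None, :]) ** 2
    return np.exp(-d2 / (2 * sigma ** 2))

def hsic(x, y, sigma=1.0):
    m = len(x)
    K, L = rbf_gram(x, sigma), rbf_gram(y, sigma)
    H = np.eye(m) - np.ones((m, m)) / m          # centering matrix
    return np.trace(K @ H @ L @ H) / m ** 2

rng = np.random.default_rng(1)
x = rng.standard_normal(200)
print(hsic(x, rng.standard_normal(200)))   # independent: close to 0
print(hsic(x, x ** 2))                     # dependent: clearly larger
```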

ei

PDF [BibTex]



Learning Motor Primitives for Robotics

Kober, J., Peters, J., Oztop, E.

Advanced Telecommunications Research Center ATR, June 2009 (talk)

Abstract
The acquisition and self-improvement of novel motor skills is among the most important problems in robotics. Motor primitives offer one of the most promising frameworks for the application of machine learning techniques in this context. Employing the Dynamic Systems Motor Primitives originally introduced by Ijspeert et al. (2003), appropriate learning algorithms for a concerted approach of both imitation and reinforcement learning are presented. Using these algorithms, new motor skills, namely Ball-in-a-Cup, Ball-Paddling and Dart-Throwing, are learned.
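For orientation, here is a bare-bones discrete dynamic movement primitive in the spirit of Ijspeert et al. (2003): a stable spring-damper system pulled toward the goal, modulated by a forcing term that would be learned from demonstrations (set to zero here). All gains are assumed values.

```python
# Minimal discrete dynamic movement primitive rollout.
import numpy as np

def dmp_rollout(x0, g, tau=1.0, K=25.0, D=10.0, dt=0.01, steps=300):
    x, v, s = x0, 0.0, 1.0
    alpha_s = 4.0
    traj = []
    for _ in range(steps):
        f = 0.0                              # forcing term, learned in practice
        dv = (K * (g - x) - D * v + (g - x0) * f) / tau
        v += dv * dt
        x += v / tau * dt
        s += (-alpha_s * s / tau) * dt       # canonical system driving f(s)
        traj.append(x)
    return np.array(traj)

print(dmp_rollout(0.0, 1.0)[-1])             # converges to the goal g = 1.0
```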

ei

[BibTex]



Learning To Detect Unseen Object Classes by Between-Class Attribute Transfer

Lampert, C.

IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), June 2009 (talk)

ei

Web [BibTex]



Semi-supervised subspace analysis of human functional magnetic resonance imaging data

Shelton, J., Blaschko, M., Bartels, A.

(185), Max Planck Institute for Biological Cybernetics, Tübingen, Germany, May 2009 (techreport)

Abstract
Kernel Canonical Correlation Analysis (KCCA) is a very general technique for subspace learning that incorporates PCA and LDA as special cases. Data acquired with functional magnetic resonance imaging (fMRI) are naturally amenable to these techniques, as they are well aligned; fMRI data of the human brain are a particularly interesting candidate. In this study we implemented various supervised and semi-supervised versions of KCCA on human fMRI data, with regression to single- and multi-variate labels (corresponding to video content subjects viewed during the image acquisition). In each variate condition, the semi-supervised variants of KCCA performed better than the supervised variants, including a supervised variant with Laplacian regularization. We additionally analyze the weights learned by the regression in order to infer brain regions that are important to different types of visual processing.
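As a reference point for the method, the snippet below computes linear CCA, the special case underlying KCCA, via whitening and an SVD; the data and the ridge term eps are illustrative, and the kernelized, Laplacian-regularized variants studied in the report build on the same correlation objective.

```python
# Linear CCA: canonical correlations via whitened cross-covariance SVD.
import numpy as np

def cca(X, Y, eps=1e-6):
    X = X - X.mean(0); Y = Y - Y.mean(0)
    Cxx = X.T @ X / len(X) + eps * np.eye(X.shape[1])
    Cyy = Y.T @ Y / len(Y) + eps * np.eye(Y.shape[1])
    Cxy = X.T @ Y / len(X)
    isqrt = lambda C: np.linalg.inv(np.linalg.cholesky(C)).T  # whitener (up to rotation)
    _, s, _ = np.linalg.svd(isqrt(Cxx).T @ Cxy @ isqrt(Cyy))
    return s                                                  # canonical correlations

rng = np.random.default_rng(2)
z = rng.standard_normal((500, 1))                             # shared latent signal
X = np.hstack([z, rng.standard_normal((500, 2))])
Y = np.hstack([z + 0.1 * rng.standard_normal((500, 1)), rng.standard_normal((500, 2))])
print(cca(X, Y))   # first canonical correlation is close to 1
```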

ei

PDF [BibTex]



The SL simulation and real-time control software package

Schaal, S.

University of Southern California, Los Angeles, CA, 2009, clmc (techreport)

Abstract
SL was originally developed as a Simulation Laboratory software package to allow creating complex rigid-body dynamics simulations with minimal development times. It was meant to complement a real-time robotics setup such that robot programs could first be debugged in simulation before trying them on the actual robot. For this purpose, the motor control setup of SL was copied from our experience with real-time robot setups with vxWorks (Windriver Systems, Inc.); indeed, more than 90% of the code is identical to the actual robot software, as will be explained later in detail. As a result, SL is divided into three software components: 1) the generic code that is shared by the actual robot and the simulation, 2) the robot specific code, and 3) the simulation specific code. The robot specific code is tailored to the robotic environments that we have experienced over the years, in particular towards VME-based multi-processor real-time operating systems. The simulation specific code has all the components for OpenGL graphics simulations and mimics the robot multi-processor environment in simple C-code. Importantly, SL can be used stand-alone for creating graphics animations; the heritage from real-time robotics does not restrict the complexity of possible simulations. This technical report describes SL in detail and can serve as a manual for new users of SL.

am

link (url) [BibTex]





Biologically Inspired Polymer Microfibrillar Arrays for Mask Sealing

Cheung, E., Aksak, B., Sitti, M.

Carnegie Mellon University, Pittsburgh, PA, 2009 (techreport)

pi

[BibTex]


2006


A Kernel Method for the Two-Sample-Problem

Gretton, A., Borgwardt, K., Rasch, M., Schölkopf, B., Smola, A.

20th Annual Conference on Neural Information Processing Systems (NIPS), December 2006 (talk)

Abstract
We propose two statistical tests to determine if two samples are from different distributions. Our test statistic is in both cases the distance between the means of the two samples mapped into a reproducing kernel Hilbert space (RKHS). The first test is based on a large deviation bound for the test statistic, while the second is based on the asymptotic distribution of this statistic. We show that the test statistic can be computed in $O(m^2)$ time. We apply our approach to a variety of problems, including attribute matching for databases using the Hungarian marriage method, where our test performs strongly. We also demonstrate excellent performance when comparing distributions over graphs, for which no alternative tests currently exist.
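The statistic is straightforward to compute in O(m^2); a biased empirical estimate of squared MMD with a Gaussian kernel looks like this (kernel width and toy samples are assumptions):

```python
# Biased quadratic-time estimate of squared MMD between two samples.
import numpy as np

def mmd2(X, Y, sigma=1.0):
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

rng = np.random.default_rng(3)
X = rng.standard_normal((100, 2))
print(mmd2(X, rng.standard_normal((100, 2))))        # same distribution: ~ 0
print(mmd2(X, rng.standard_normal((100, 2)) + 1.0))  # shifted: clearly > 0
```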

ei

PDF [BibTex]



Ab-initio gene finding using machine learning

Schweikert, G., Zeller, G., Zien, A., Ong, C., de Bona, F., Sonnenburg, S., Phillips, P., Rätsch, G.

NIPS Workshop on New Problems and Methods in Computational Biology, December 2006 (talk)

ei

Web [BibTex]



Graph boosting for molecular QSAR analysis

Saigo, H., Kadowaki, T., Kudo, T., Tsuda, K.

NIPS Workshop on New Problems and Methods in Computational Biology, December 2006 (talk)

Abstract
We propose a new boosting method that systematically combines graph mining and mathematical programming-based machine learning. Informative and interpretable subgraph features are greedily found by a series of graph mining calls. Due to our mathematical programming formulation, subgraph features and pre-calculated real-valued features are seamlessly integrated. We tested our algorithm on a quantitative structure-activity relationship (QSAR) problem, which is basically a regression problem given a set of chemical compounds. In benchmark experiments, the prediction accuracy of our method compared favorably with the best results reported on each dataset.

ei

Web [BibTex]



Inferring Causal Directions by Evaluating the Complexity of Conditional Distributions

Sun, X., Janzing, D., Schölkopf, B.

NIPS Workshop on Causality and Feature Selection, December 2006 (talk)

Abstract
We propose a new approach to infer the causal structure that has generated the observed statistical dependences among n random variables. The idea is that the factorization of the joint measure of cause and effect into P(cause)P(effect|cause) typically leads to simpler conditionals than non-causal factorizations. To evaluate the complexity of the conditionals we have tried two methods. First, we have compared them to those which maximize the conditional entropy subject to the observed first and second moments, since we consider the latter as the simplest conditionals. Second, we have fitted the data with conditional probability measures being exponents of functions in an RKHS and defined the complexity by a Hilbert-space semi-norm. Such a complexity measure has several properties that are useful for our purpose. We describe some encouraging results with both methods applied to real-world data. Moreover, we have combined constraint-based approaches to causal discovery (i.e., methods using only information on conditional statistical dependences) with our method in order to distinguish between causal hypotheses which are equivalent with respect to the imposed independences. Furthermore, we compare the performance to Bayesian approaches to causal inference.

ei

Web [BibTex]


Minimal Logical Constraint Covering Sets

Sinz, F., Schölkopf, B.

(155), Max Planck Institute for Biological Cybernetics, Tübingen, December 2006 (techreport)

Abstract
We propose a general framework for computing minimal set covers under a certain class of logical constraints. The underlying idea is to transform the problem into a mathematical program under linear constraints. In this sense it can be seen as a natural extension of the vector quantization algorithm proposed by Tipping and Schölkopf. We show which class of logical constraints can be cast and relaxed into linear constraints and give an algorithm for the transformation.
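In miniature, the transformation looks like this: plain minimal set cover cast as a linear program (the LP relaxation of "every element must be covered"). The sets are toy data, and the logical-constraint machinery of the report is omitted.

```python
# Set cover as a mathematical program with linear constraints.
import numpy as np
from scipy.optimize import linprog

universe = range(6)
sets = [{0, 1, 2}, {2, 3}, {3, 4, 5}, {0, 5}]

# constraint: every element e needs sum_{j: e in S_j} x_j >= 1
A = np.array([[1 if e in S else 0 for S in sets] for e in universe])
res = linprog(c=np.ones(len(sets)),            # minimize number of chosen sets
              A_ub=-A, b_ub=-np.ones(len(universe)),
              bounds=[(0, 1)] * len(sets))
print(res.x)   # fractional cover; here it is already integral: sets 0 and 2
```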

ei

PDF [BibTex]



Learning Optimal EEG Features Across Time, Frequency and Space

Farquhar, J., Hill, J., Schölkopf, B.

NIPS Workshop on Current Trends in Brain-Computer Interfacing, December 2006 (talk)

ei

PDF Web [BibTex]



Semi-Supervised Learning

Zien, A.

Advanced Methods in Sequence Analysis Lectures, November 2006 (talk)

ei

Web [BibTex]



New Methods for the P300 Visual Speller

Biessmann, F.

(1), (Editors: Hill, J.), Max Planck Institute for Biological Cybernetics, Tübingen, Germany, November 2006 (techreport)

ei

PDF [BibTex]



A Machine Learning Approach for Determining the PET Attenuation Map from Magnetic Resonance Images

Hofmann, M., Steinke, F., Judenhofer, M., Claussen, C., Schölkopf, B., Pichler, B.

IEEE Medical Imaging Conference, November 2006 (talk)

Abstract
A promising new combination in multimodality imaging is MR-PET, where the high soft tissue contrast of Magnetic Resonance Imaging (MRI) and the functional information of Positron Emission Tomography (PET) are combined. Although many technical problems have recently been solved, it is still an open problem to determine the attenuation map from the available MR scan, as the MR intensities are not directly related to the attenuation values. One standard approach is an atlas registration where the atlas MR image is aligned with the patient MR, thus also yielding an attenuation image for the patient. We also propose another approach, which to our knowledge has not been tried before: Using Support Vector Machines we predict the attenuation value directly from the local image information. We train this well-established machine learning algorithm using small image patches. Although both approaches sometimes yielded acceptable results, they also showed their specific shortcomings: The registration often fails with large deformations whereas the prediction approach is problematic when the local image structure is not characteristic enough. However, the failures often do not coincide and integration of both information sources is promising. We therefore developed a combination method extending Support Vector Machines to use not only local image structure but also atlas registered coordinates. We demonstrate the strength of this combination approach on a number of examples.
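Purely as an illustration of the patch-based prediction approach (with synthetic data standing in for MR intensities and attenuation values), one can regress a per-pixel target from small image patches with a support vector machine:

```python
# Patch-based regression with an SVM; all data here are synthetic stand-ins.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(4)
image = rng.random((32, 32))
target = 2.0 * image + 0.05 * rng.standard_normal((32, 32))  # fake "attenuation"

def patches(img, tgt, r=1):
    ps, ys = [], []
    for i in range(r, img.shape[0] - r):
        for j in range(r, img.shape[1] - r):
            ps.append(img[i - r:i + r + 1, j - r:j + r + 1].ravel())
            ys.append(tgt[i, j])
    return np.array(ps), np.array(ys)

X, y = patches(image, target)
model = SVR(C=10.0).fit(X[:500], y[:500])      # train on a subset of patches
print(model.score(X[500:1000], y[500:1000]))   # held-out R^2
```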

ei

[BibTex]



Geometric Analysis of Hilbert-Schmidt Independence Criterion Based ICA Contrast Function

Shen, H., Jegelka, S., Gretton, A.

(PA006080), National ICT Australia, Canberra, Australia, October 2006 (techreport)

ei

Web [BibTex]



Semi-Supervised Support Vector Machines and Application to Spam Filtering

Zien, A.

ECML Discovery Challenge Workshop, September 2006 (talk)

Abstract
After introducing the semi-supervised support vector machine (aka TSVM for "transductive SVM"), a few popular training strategies are briefly presented. Then the assumptions underlying semi-supervised learning are reviewed. Finally, two modern TSVM optimization techniques are applied to the spam filtering data sets of the workshop; it is shown that they can achieve excellent results, if the problem of the data being non-iid can be handled properly.
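The TSVM optimizers discussed in the talk are non-trivial; as a loose stand-in that conveys the same semi-supervised intuition, here is a simple self-training loop (a different technique, named as such) in which confident predictions on unlabeled points re-enter training. Data and thresholds are toy assumptions.

```python
# Self-training with an SVM: pseudo-label confident unlabeled points.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(5)
X = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
y = np.r_[np.zeros(100), np.ones(100)].astype(int)
labeled = rng.choice(200, 10, replace=False)            # only 10 labels known

clf = LinearSVC().fit(X[labeled], y[labeled])
for _ in range(5):                                      # self-training rounds
    margins = np.abs(clf.decision_function(X))
    confident = margins > 1.0                           # pseudo-label these
    Xt = np.vstack([X[labeled], X[confident]])
    yt = np.r_[y[labeled], clf.predict(X[confident])]
    clf = LinearSVC().fit(Xt, yt)
print((clf.predict(X) == y).mean())                     # accuracy on all data
```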

ei

PDF Web [BibTex]