

2011


Model Learning in Robotics: a Survey

Nguyen-Tuong, D., Peters, J.

Cognitive Processing, 12(4):319-340, November 2011 (article)

Abstract
Models are among the most essential tools in robotics, such as kinematics and dynamics models of the robot's own body and controllable external objects. It is widely believed that intelligent mammals also rely on internal models in order to generate their actions. However, while classical robotics relies on manually generated models that are based on human insights into physics, future autonomous, cognitive robots need to be able to automatically generate models that are based on information which is extracted from the data streams accessible to the robot. In this paper, we survey the progress in model learning with a strong focus on robot control on a kinematic as well as dynamical level. Here, a model describes essential information about the behavior of the environment and the influence of an agent on this environment. In the context of model-based learning control, we view the model from three different perspectives. First, we need to study the different possible model learning architectures for robotics. Second, we discuss what kind of problems these architectures and the domain of robotics imply for the applicable learning methods. From this discussion, we deduce future directions of real-time learning algorithms. Third, we show where these scenarios have been used successfully in several case studies.
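
As a concrete illustration of the kind of model learning surveyed here, the following minimal Python sketch fits an inverse dynamics model from sampled data with kernel ridge regression; the damped-pendulum system used to generate data, the RBF kernel and all constants are illustrative assumptions, not material from the survey.

import numpy as np

# Minimal sketch: learn an inverse dynamics model tau = f(q, dq, ddq) from
# sampled data with kernel ridge regression. The damped pendulum used to
# generate data and all constants are illustrative assumptions.

def pendulum_torque(q, dq, ddq, m=1.0, l=0.5, d=0.1, g=9.81):
    # ground-truth inverse dynamics, used here only to generate training data
    return m * l**2 * ddq + d * dq + m * g * l * np.sin(q)

def rbf_kernel(A, B, gamma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
X = rng.uniform(-2.0, 2.0, size=(500, 3))                 # states (q, dq, ddq)
y = pendulum_torque(*X.T) + 0.05 * rng.normal(size=500)   # noisy torques

lam = 1e-3
alpha = np.linalg.solve(rbf_kernel(X, X) + lam * np.eye(len(X)), y)

X_test = rng.uniform(-2.0, 2.0, size=(100, 3))
tau_pred = rbf_kernel(X_test, X) @ alpha
rmse = np.sqrt(np.mean((tau_pred - pendulum_torque(*X_test.T)) ** 2))
print("test RMSE of the learned model:", round(float(rmse), 4))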

ei

PDF [BibTex]

Fast removal of non-uniform camera shake

Hirsch, M., Schuler, C., Harmeling, S., Schölkopf, B.

In pages: 463-470, (Editors: DN Metaxas and L Quan and A Sanfeliu and LJ Van Gool), IEEE, Piscataway, NJ, USA, 13th IEEE International Conference on Computer Vision (ICCV), November 2011 (inproceedings)

Abstract
Camera shake leads to non-uniform image blurs. State-of-the-art methods for removing camera shake model the blur as a linear combination of homographically transformed versions of the true image. While this is conceptually interesting, the resulting algorithms are computationally demanding. In this paper we develop a forward model based on the efficient filter flow framework, incorporating the particularities of camera shake, and show how an efficient algorithm for blur removal can be obtained. Comprehensive comparisons on a number of real-world blurry images show that our approach is not only substantially faster, but it also leads to better deblurring results.
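
The forward model sketched below expresses a blurry image as a weighted combination of geometrically transformed copies of the sharp image, which is the idea the abstract builds on; small rotations and shifts stand in for the camera-shake homographies, and the weights and transforms are illustrative assumptions rather than the paper's efficient filter flow implementation.

import numpy as np
from scipy.ndimage import rotate, shift

# Sketch of the non-uniform blur forward model: the blurry image is a
# weighted combination of geometrically transformed copies of the sharp
# image. Small rotations/shifts stand in for camera-shake homographies;
# weights and transforms are illustrative assumptions.

def shake_blur(sharp, angles_deg, shifts, weights):
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()                 # blur kernel sums to one
    blurry = np.zeros_like(sharp, dtype=float)
    for a, (dy, dx), w in zip(angles_deg, shifts, weights):
        warped = rotate(sharp, a, reshape=False, order=1, mode='nearest')
        warped = shift(warped, (dy, dx), order=1, mode='nearest')
        blurry += w * warped
    return blurry

sharp = np.zeros((64, 64)); sharp[28:36, 28:36] = 1.0   # toy sharp image
blurry = shake_blur(sharp,
                    angles_deg=[-1.0, 0.0, 1.0],
                    shifts=[(-1, 0), (0, 0), (1, 1)],
                    weights=[0.3, 0.4, 0.3])
print(blurry.shape, round(float(blurry.max()), 3))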

ei

PDF Web DOI [BibTex]

Home 3D body scans from noisy image and range data

Weiss, A., Hirshberg, D., Black, M.

In Int. Conf. on Computer Vision (ICCV), pages: 1951-1958, IEEE, Barcelona, November 2011 (inproceedings)

Abstract
The 3D shape of the human body is useful for applications in fitness, games and apparel. Accurate body scanners, however, are expensive, limiting the availability of 3D body models. We present a method for human shape reconstruction from noisy monocular image and range data using a single inexpensive commodity sensor. The approach combines low-resolution image silhouettes with coarse range data to estimate a parametric model of the body. Accurate 3D shape estimates are obtained by combining multiple monocular views of a person moving in front of the sensor. To cope with varying body pose, we use a SCAPE body model which factors 3D body shape and pose variations. This enables the estimation of a single consistent shape while allowing pose to vary. Additionally, we describe a novel method to minimize the distance between the projected 3D body contour and the image silhouette that uses analytic derivatives of the objective function. We propose a simple method to estimate standard body measurements from the recovered SCAPE model and show that the accuracy of our method is competitive with commercial body scanning systems costing orders of magnitude more.

ps

pdf YouTube poster Project Page Project Page [BibTex]

Cooperative Cuts: a new use of submodularity in image segmentation

Jegelka, S.

Second I.S.T. Austria Symposium on Computer Vision and Machine Learning, October 2011 (talk)

ei

Web [BibTex]

Effect of MR Contrast Agents on Quantitative Accuracy of PET in Combined Whole-Body PET/MR Imaging

Lois, C., Bezrukov, I., Schmidt, H., Schwenzer, N., Werner, M., Pichler, B., Kupferschläger, J., Beyer, T.

2011(MIC3-3), 2011 IEEE Nuclear Science Symposium, Medical Imaging Conference (NSS-MIC), October 2011 (talk)

Abstract
Combined whole-body PET/MR systems are being tested in clinical practice today. Integrated imaging protocols entail the use of MR contrast agents (MRCA) that could bias PET attenuation correction. In this work, we assess the effect of MRCA in PET/MR imaging. We analyze the effect of oral and intravenous MRCA on PET activity after attenuation correction. We conclude that in clinical scenarios, MRCA are not expected to lead to significant attenuation of PET signals, and that attenuation maps are not biased after the ingestion of adequate oral contrasts.

ei

Web [BibTex]

First Results on Patients and Phantoms of a Fully Integrated Clinical Whole-Body PET/MRI

Schmidt, H., Schwenzer, N., Bezrukov, I., Kolb, A., Mantlik, F., Kupferschläger, J., Lois, C., Sauter, A., Brendle, C., Pfannenberg, C., Pichler, B.

2011(J2-8), 2011 IEEE Nuclear Science Symposium, Medical Imaging Conference (NSS-MIC), October 2011 (talk)

Abstract
First clinical fully integrated whole-body PET/MR scanners are just entering the field. Here, we present studies toward quantification accuracy and variation within the PET field of view of small lesions from our BrainPET/MRI, a dedicated clinical brain scanner which was installed three years ago in Tübingen. Also, we present first results for patient and phantom scans of a fully integrated whole-body PET/MRI, which was installed two months ago at our department. The quantification accuracy and homogeneity of the BrainPET-Insert (Siemens Medical Solutions, Germany) installed inside the magnet bore of a clinical 3T MRI scanner (Magnetom TIM Trio, Siemens Medical Solutions, Germany) was evaluated by using eight hollow spheres with inner diameters from 3.95 to 7.86 mm placed at different positions inside a homogeneous cylinder phantom with 9:1 and 6:1 sphere to background ratios. The quantification accuracy for small lesions at different positions in the PET FoV shows a standard deviation of up to 11% and is acceptable for quantitative brain studies where the homogeneity of quantification over the entire FoV is essential. Image quality and resolution of the new Siemens whole-body PET/MR system (Biograph mMR, Siemens Medical Solutions, Germany) was evaluated according to the NEMA NU2 2007 protocol using a body phantom containing six spheres with inner diameter from 10 to 37 mm at sphere to background ratios of 8:1 and 4:1, and F-18 point sources located at different positions inside the PET FoV, respectively. The evaluation of the whole-body PET/MR system reveals a good PET image quality and resolution comparable to state-of-the-art clinical PET/CT scanners. First images of patient studies carried out at the whole-body PET/MR are presented, highlighting the potential of combined PET/MR imaging.

ei

Web [BibTex]

FaST linear mixed models for genome-wide association studies

Lippert, C., Listgarten, J., Liu, Y., Kadie, CM., Davidson, RI., Heckerman, D.

Nature Methods, 8(10):833–835, October 2011 (article)

Abstract
We describe factored spectrally transformed linear mixed models (FaST-LMM), an algorithm for genome-wide association studies (GWAS) that scales linearly with cohort size in both run time and memory use. On Wellcome Trust data for 15,000 individuals, FaST-LMM ran an order of magnitude faster than current efficient algorithms. Our algorithm can analyze data for 120,000 individuals in just a few hours, whereas current algorithms fail on data for even 20,000 individuals (http://mscompbio.codeplex.com/).
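
The linear scaling rests on a spectral reformulation of the linear mixed model; the Python sketch below illustrates that rotation trick on toy genotype data with a simple grid over the variance ratio, and is an assumption-laden illustration rather than the released FaST-LMM code.

import numpy as np

# Sketch of the spectral trick behind FaST-LMM-style mixed models: with
# K = U diag(s) U^T, rotating y and X by U^T makes the covariance
# sigma_g^2 * (diag(s) + delta * I) diagonal, so the log-likelihood for any
# variance ratio delta costs O(n) after one eigendecomposition.
# Data sizes and the grid over delta are illustrative assumptions.

rng = np.random.default_rng(1)
n = 300
G = rng.normal(size=(n, 100))                 # standardized genotypes (toy)
K = G @ G.T / G.shape[1]                      # genetic similarity matrix
X = np.column_stack([np.ones(n), rng.normal(size=n)])   # intercept + one toy SNP
y = X @ np.array([0.5, 0.2]) + rng.multivariate_normal(np.zeros(n), 0.4 * K + 0.6 * np.eye(n))

s, U = np.linalg.eigh(K)                      # one-time spectral decomposition
Uty, UtX = U.T @ y, U.T @ X

def neg_loglik(delta):
    d = s + delta                             # diagonal covariance (up to sigma_g^2)
    XtDX = (UtX / d[:, None]).T @ UtX
    beta = np.linalg.solve(XtDX, (UtX / d[:, None]).T @ Uty)
    r = Uty - UtX @ beta
    sigma_g2 = np.mean(r**2 / d)              # ML estimate of sigma_g^2
    return 0.5 * (n * np.log(2 * np.pi * sigma_g2) + np.sum(np.log(d)) + n)

deltas = np.logspace(-2, 2, 50)
best = deltas[np.argmin([neg_loglik(d) for d in deltas])]
print("estimated residual/genetic variance ratio delta:", round(float(best), 3))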

ei

PDF DOI [BibTex]

Evaluation and Optimization of MR-Based Attenuation Correction Methods in Combined Brain PET/MR

Mantlik, F., Hofmann, M., Bezrukov, I., Schmidt, H., Kolb, A., Beyer, T., Reimold, M., Schölkopf, B., Pichler, B.

2011(MIC18.M-96), 2011 IEEE Nuclear Science Symposium, Medical Imaging Conference (NSS-MIC), October 2011 (poster)

Abstract
Combined PET/MR provides simultaneous molecular and functional information in an anatomical context with unique soft tissue contrast. However, PET/MR does not support direct derivation of attenuation maps of objects and tissues within the measured PET field-of-view. Valid attenuation maps are required for quantitative PET imaging, specifically for scientific brain studies. Therefore, several methods have been proposed for MR-based attenuation correction (MR-AC). Last year, we performed an evaluation of different MR-AC methods, including simple MR thresholding, atlas- and machine learning-based MR-AC. CT-based AC served as gold standard reference. RoIs from 2 anatomic brain atlases with different levels of detail were used for evaluation of correction accuracy. We now extend our evaluation of different MR-AC methods by using an enlarged dataset of 23 patients from the integrated BrainPET/MR (Siemens Healthcare). Further, we analyze options for improving the MR-AC performance in terms of speed and accuracy. Finally, we assess the impact of ignoring BrainPET positioning aids during the course of MR-AC. This extended study confirms the overall prediction accuracy evaluation results of the first evaluation in a larger patient population. Removing datasets affected by metal artifacts from the Atlas-Patch database helped to improve prediction accuracy, although the size of the database was reduced by one half. Significant improvement in prediction speed can be gained at a cost of only slightly reduced accuracy, while further optimizations are still possible.

ei

Web [BibTex]

Atlas- and Pattern Recognition Based Attenuation Correction on Simultaneous Whole-Body PET/MR

Bezrukov, I., Schmidt, H., Mantlik, F., Schwenzer, N., Hofmann, M., Schölkopf, B., Pichler, B.

2011(MIC18.M-116), 2011 IEEE Nuclear Science Symposium, Medical Imaging Conference (NSS-MIC), October 2011 (poster)

Abstract
With the recent availability of clinical whole-body PET/MRI it is possible to evaluate and further develop MR-based attenuation correction methods using simultaneously acquired PET/MR data. We present first results for MRAC on patient data acquired on a fully integrated whole-body PET/MRI (Biograph mMR, Siemens) using our method that applies atlas registration and pattern recognition (ATPR) and compare them to the segmentation-based (SEG) method provided by the manufacturer. The ATPR method makes use of a database of previously aligned pairs of MR-CT volumes to predict attenuation values on a continuous scale. The robustness of the method in the presence of MR artifacts was improved by location- and size-based detection. Lesion to liver and lesion to blood ratios (LLR and LBR) were compared for both methods on 29 iso-contour ROIs in 4 patients. ATPR showed >20% higher LBR and LLR for ROIs in osseous tissue and >7% higher for ROIs near osseous tissue. For ROIs in soft tissue, both methods yielded similar ratios with maximum differences <6%. For ROIs located within metal artifacts in the MR image, ATPR showed >190% higher LLR and LBR than SEG, where ratios <0.1 occurred. For lesions in the neighborhood of artifacts, both ratios were >15% higher for ATPR. If artifacts in MR volumes caused by metal implants are not accounted for in the computation of attenuation maps, they can lead to a strong decrease of lesion to background ratios, even to the disappearance of hot spots. Metal implants are likely to occur in the patient collective receiving combined PET/MR scans: of our first 10 patients, 3 had metal implants. Our method is currently able to account for artifacts in the pelvis caused by prostheses. The ability of the ATPR method to account for bone leads to a significant increase of LLR and LBR in osseous tissue, which supports our previous evaluations with combined PET/CT and PET/MR data. For lesions within soft tissue, lesion to background ratios of ATPR and SEG were comparable.

ei

Web [BibTex]

Retrospective blind motion correction of MR images

Loktyushin, A., Nickisch, H., Pohmann, R.

Magnetic Resonance Materials in Physics, Biology and Medicine, 24(Supplement 1):498, 28th Annual Scientific Meeting ESMRMB, October 2011 (poster)

Abstract
We present a retrospective method which significantly reduces ghosting and blurring artifacts due to subject motion. No modifications to the sequence (as in [2, 3]) or the use of additional equipment (as in [1]) are required. Our method iteratively searches for the transformation that, applied to the lines in k-space, yields the sparsest Laplacian filter output in the spatial domain.
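
A minimal sketch of the stated criterion: a candidate in-plane translation is applied to the affected k-space lines as a phase ramp and scored by the L1 norm of the Laplacian-filtered reconstruction; the toy image, the single-shift motion model and the grid search are assumptions, not the iterative optimizer used in the work.

import numpy as np
from scipy.ndimage import laplace

# Sketch of the objective behind retrospective motion correction: a rigid
# in-plane translation during acquisition multiplies the affected k-space
# lines by a linear phase ramp, so candidate corrections can be scored by
# how sparse the Laplacian of the reconstructed image becomes.

def phase_ramp_shift(kspace_lines, dx, nx):
    # phase ramp corresponding to a horizontal shift of dx pixels
    kx = np.fft.fftfreq(nx)
    return kspace_lines * np.exp(2j * np.pi * kx[None, :] * dx)

def sparsity_score(kspace):
    img = np.abs(np.fft.ifft2(kspace))
    return np.abs(laplace(img)).sum()         # L1 norm of Laplacian output

ny, nx = 64, 64
img = np.zeros((ny, nx)); img[20:44, 24:40] = 1.0
kspace = np.fft.fft2(img)
corrupted = kspace.copy()
corrupted[32:, :] = phase_ramp_shift(kspace[32:, :], -3.0, nx)  # subject moved

shifts = np.linspace(-5, 5, 41)
scores = []
for dx in shifts:
    trial = corrupted.copy()
    trial[32:, :] = phase_ramp_shift(corrupted[32:, :], dx, nx)
    scores.append(sparsity_score(trial))
print("estimated correction shift:", shifts[int(np.argmin(scores))])  # ~ +3.0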

ei

PDF Web DOI [BibTex]

Model based reconstruction for GRE EPI

Blecher, W., Pohmann, R., Schölkopf, B., Seeger, M.

Magnetic Resonance Materials in Physics, Biology and Medicine, 24(Supplement 1):493-494, 28th Annual Scientific Meeting ESMRMB, October 2011 (poster)

Abstract
Model-based nonlinear image reconstruction methods for MRI [3] are at the heart of modern reconstruction techniques (e.g., compressed sensing [6]). In general, models are expressed as a matrix equation y = Xu + e, where y and u are column vectors of k-space and image data, X is the model matrix, and e is independent noise. However, solving the corresponding linear system directly is not tractable. Therefore, fast nonlinear algorithms that minimize a cost function with respect to the unknown image are the method of choice. In this work, a model for gradient echo EPI is proposed that incorporates N/2 ghost correction and correction for field inhomogeneities. In addition to reconstruction from full data, the model allows for sparse reconstruction, joint estimation of image, field- and relaxation-map (like [5,8] for spiral imaging), and improved N/2 ghost correction.
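
As a minimal illustration of reconstruction under the model y = Xu + e, the sketch below solves the regularized normal equations with conjugate gradients for an undersampled Fourier forward model; the sampling mask, the regularizer and the omission of the EPI-specific ghost and field-inhomogeneity terms are simplifying assumptions.

import numpy as np

# Minimal sketch of model-based reconstruction y = X u + e: solve the
# regularized normal equations (X^H X + lam I) u = X^H y with conjugate
# gradients, using an undersampled Fourier transform as the forward model X.

n = 64
u_true = np.zeros((n, n)); u_true[16:48, 24:40] = 1.0
rng = np.random.default_rng(0)
mask = rng.random((n, n)) < 0.4                     # keep 40% of k-space

def X(u):  return mask * np.fft.fft2(u) / n         # forward model
def Xh(y): return np.fft.ifft2(mask * y) * n        # its adjoint

y = X(u_true) + 0.01 * (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))

def cg(apply_A, b, iters=50):
    # conjugate gradients for a Hermitian positive definite operator
    x = np.zeros_like(b); r = b - apply_A(x); p = r.copy()
    rs = np.vdot(r, r)
    for _ in range(iters):
        Ap = apply_A(p)
        a = rs / np.vdot(p, Ap)
        x += a * p; r -= a * Ap
        rs_new = np.vdot(r, r)
        p = r + (rs_new / rs) * p; rs = rs_new
    return x

lam = 1e-3
u_rec = cg(lambda u: Xh(X(u)) + lam * u, Xh(y))
err = np.linalg.norm(np.abs(u_rec) - u_true) / np.linalg.norm(u_true)
print("relative reconstruction error:", round(float(err), 3))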

ei

PDF Web DOI [BibTex]

Effect of MR contrast agents on quantitative accuracy of PET in combined whole-body PET/MR imaging

Lois, C., Kupferschläger, J., Bezrukov, I., Schmidt, H., Werner, M., Mannheim, J., Pichler, B., Schwenzer, N., Beyer, T.

(OP314), Annual Congress of the European Association of Nuclear Medicine (EANM), October 2011 (talk)

Abstract
PURPOSE: Combined PET/MR imaging entails the use of MR contrast agents (MRCA) as part of integrated protocols. MRCA are made up of iron oxide and Gd-chelates for oral and intravenous (iv) application, respectively. We assess additional attenuation of the PET emission signals in the presence of oral and iv MRCA. MATERIALS AND METHODS: Phantom scans were performed on a clinical PET/CT (Biograph HiRez16, Siemens) and an integrated whole-body PET/MR (Biograph mMR, Siemens). Two common MRCA were evaluated: Lumirem (oral) and Gadovist (iv). Reference PET attenuation values were determined on a dedicated small-animal PET (Inveon, Siemens) using equivalent standard PET transmission source imaging (TX). Seven syringes of 5mL were filled with (a) Water, (b) Lumirem_100 (100% concentration), (c) Gadovist_100 (100%), (d) Gadovist_18 (18%), (e) Gadovist_02 (0.2%), (f) Imeron-400 CT iv-contrast (100%) and (g) Imeron-400 (2.4%). The same set of syringes was scanned on CT (Sensation16, Siemens) at 120kVp and 160mAs. The effect of MRCA on the attenuation of PET emission data was evaluated using a 20cm cylinder filled uniformly with [18F]-FDG (FDG) in water (BGD). Three 4.5cm diameter cylinders were inserted into the phantom: (C1) Teflon, (C2) Water+FDG (2:1) and (C3) Lumirem_100+FDG (2:1). Two 50mL syringes filled with Gadovist_02+FDG (Sy1) and water+FDG (Sy2) were attached to the sides of (C1) to mimic the effects of iv-contrast in vessels near bone. Syringe-to-background activity ratio was 4-to-1. PET emission data were acquired for 10min each using the PET/CT and the PET/MR. Images were reconstructed using CT- and MR-based attenuation correction (AC). Since Teflon is not correctly identified on MR, PET(/MR) data were reconstructed using MR-AC and CT-AC. RESULTS: Mean linear PET attenuation (cm-1) on TX was (a) 0.098, (b) 0.098, (c) 0.300, (d) 0.134, (e) 0.095, (f) 0.397 and (g) 0.105. Corresponding CT attenuation (HU) was: (a) 5, (b) 14, (c) 3070, (d) 1040, (e) 13, (f) 3070 and (g) 347. Lumirem had little effect on PET attenuation with (C3) being 13%, 10% and 11% higher than (C2) on PET/CT, PET/MR with MR-AC, and PET/MR with CT-AC, respectively. Gadovist_02 had even smaller effects with (Sy1) being 2.5% lower, 1.2% higher, and 3.5% lower than (Sy2) on PET/CT, PET/MR with MR-AC and PET/MR with CT-AC, respectively. CONCLUSION: MRCA in high and clinically relevant concentrations have attenuation values similar to that of CT contrast and water, respectively. In clinical PET/MR scenarios, MRCA are not expected to lead to significant attenuation of the PET emission signals.

ei

Web [BibTex]

The effect of noise correlations in populations of diversely tuned neurons

Ecker, A., Berens, P., Tolias, A., Bethge, M.

Journal of Neuroscience, 31(40):14272-14283, October 2011 (article)

Abstract
The amount of information encoded by networks of neurons critically depends on the correlation structure of their activity. Neurons with similar stimulus preferences tend to have higher noise correlations than others. In homogeneous populations of neurons, this limited range correlation structure is highly detrimental to the accuracy of a population code. Therefore, reduced spike count correlations under attention, after adaptation, or after learning have been interpreted as evidence for a more efficient population code. Here, we analyze the role of limited range correlations in more realistic, heterogeneous population models. We use Fisher information and maximum-likelihood decoding to show that reduced correlations do not necessarily improve encoding accuracy. In fact, in populations with more than a few hundred neurons, increasing the level of limited range correlations can substantially improve encoding accuracy. We found that this improvement results from a decrease in noise entropy that is associated with increasing correlations if the marginal distributions are unchanged. Surprisingly, for constant noise entropy and in the limit of large populations, the encoding accuracy is independent of both structure and magnitude of noise correlations.
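
The central quantity of this analysis, the linear Fisher information of a correlated population, can be computed directly as in the sketch below; the von Mises tuning curves, heterogeneous gains and the exponentially decaying (limited range) correlation structure are illustrative assumptions.

import numpy as np

# Sketch of the quantity studied here: linear Fisher information
# J(theta) = f'(theta)^T C^{-1} f'(theta) for a population with limited
# range noise correlations (correlation decaying with the difference in
# preferred orientation). Tuning shapes, gains and correlation parameters
# are illustrative assumptions.

def fisher_information(n_neurons=200, theta=0.0, c_max=0.3, tau=0.5, seed=0):
    rng = np.random.default_rng(seed)
    prefs = np.linspace(-np.pi, np.pi, n_neurons, endpoint=False)
    gains = rng.uniform(5.0, 15.0, n_neurons)            # heterogeneous gains
    kappa = 2.0
    f = gains * np.exp(kappa * (np.cos(theta - prefs) - 1))   # von Mises tuning
    fprime = f * kappa * np.sin(prefs - theta)                # df/dtheta
    d = np.abs(prefs[:, None] - prefs[None, :])
    d = np.minimum(d, 2 * np.pi - d)                          # circular distance
    R = c_max * np.exp(-d / tau); np.fill_diagonal(R, 1.0)    # limited-range corr.
    C = np.sqrt(f[:, None] * f[None, :]) * R                  # Poisson-like covariance
    return fprime @ np.linalg.solve(C, fprime)

for c in (0.0, 0.1, 0.3):
    print(f"c_max={c:.1f}  J={fisher_information(c_max=c):.1f}")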

ei

Web DOI [BibTex]

Attenuation correction in MR-BrainPET with segmented T1-weighted MR images of the patient’s head: A comparative study with CT

Wagenknecht, G., Rota Kops, E., Mantlik, F., Fried, E., Pilz, T., Hautzel, H., Tellmann, L., Pichler, B., Herzog, H.

In pages: 2261-2266, IEEE, Piscataway, NJ, USA, IEEE Nuclear Science Symposium and Medical Imaging Conference (NSS/MIC), October 2011 (inproceedings)

Abstract
Our method for attenuation correction (AC) in MR-BrainPET with segmented T1-weighted MR images of the patient's head was applied to data from different MR-BrainPET scanners (Jülich, Tübingen) and compared to CT-based results. The study objectives presented in this paper are twofold. The first objective is to examine if the segmentation method developed for and successfully applied to 3D MP-RAGE data can also be used to segment other T1-weighted MR data such as 3D FLASH data. The second aim is to show if the similarity of segmented MR-based (SBA) and CT-based AC (CBA) obtained at HR+ PET can also be confirmed for BrainPET, for which the new AC method is intended. In order to reach the first objective, 14 segmented MR data sets (three 3D MP-RAGE data sets from Jülich and eleven 3D FLASH data sets from Tübingen) were compared to the respective CT data based on the Dice coefficient and scatter plots. For bone, a CT threshold of HU > 500 was applied. Dice coefficients (mean±std) for the upper cranial part of the skull, the skull above cavities, and in the caudal part including the cerebellum are 0.73±0.1, 0.79±0.04, and 0.49±0.02 for the Jülich data and 0.7±0.1, 0.72±0.1, and 0.60±0.05 for the Tübingen data. To reach the second aim, SBA and CBA were compared for six subjects based on VOI (AAL atlas) analysis. Mean absolute relative difference (maRD) values are maRD(JUFVBW1-FDG): 0.99%±0.83%, maRD(JüFVBW2-FDG): 0.90%±0.89%, and maRD(JUEP-Flumazenil): 1.85%±1.25% for the Jülich data and maRD(TuTP02-FDG): 2.99%±1.65%, maRD(TuNP01-FDG): 5.37%±2.29%, and maRD(TuNP02-FDG): 6.52%±1.69% for the three best-segmented Tübingen data sets. The results show similar segmentation quality for both T1-weighted MR sequence types. The application to AC in BrainPET shows a high similarity to CT-based AC if the standardized ACF value for bone used in SBA is in good accordance with the bone density of the patient in question.

ei

Web DOI [BibTex]

Simultaneous multimodal imaging of patients with bronchial carcinoma in a whole body MR/PET system

Brendle, C., Sauter, A., Schmidt, H., Schraml, C., Bezrukov, I., Martirosian, P., Hetzel, J., Müller, M., Claussen, C., Schwenzer, N., Pfannenberg, C.

Magnetic Resonance Materials in Physics, Biology and Medicine, 24(Supplement 1):141, 28th Annual Scientific Meeting of the European Society for Magnetic Resonance in Medicine and Biology (ESMRMB), October 2011 (poster)

Abstract
Purpose/Introduction: Lung cancer is among the most frequent cancers (1). Exact determination of tumour extent and viability is crucial for adequate therapy guidance. [18F]-FDG-PET allows accurate staging and the evaluation of therapy response based on glucose metabolism. Diffusion weighted MRI (DWI) is another promising tool for the evaluation of tumour viability (2,3). The aim of the study was the simultaneous PET-MR acquisition in lung cancer patients and correlation of PET and MR data. Subjects and Methods: Seven patients (age 38-73 years, mean 61 years) with highly suspected or known bronchial carcinoma were examined. First, a [18F]-FDG-PET/CT was performed (injected dose: 332-380 MBq). Subsequently, patients were examined at the whole-body MR/PET (Siemens Biograph mMR). The MRI is a modified 3T Verio whole body system with a magnet bore of 60 cm (max. amplitude gradients 45 mT/m, max. slew rate 200 T/m/s). Concerning the PET, the whole-body MR/PET system comprises 56 detector cassettes with a 59.4 cm transaxial and 25.8 cm axial FoV. The following parameters for PET acquisition were applied: 2 bed positions, 6 min/bed with an average uptake time of 124 min after injection (range: 110-143 min). The attenuation correction of PET data was conducted with a segmentation-based method provided by the manufacturer. Acquired PET data were reconstructed with an iterative 3D OSEM algorithm using 3 iterations and 21 subsets, Gaussian filter of 3 mm. DWI MR images were recorded simultaneously for each bed using two b-values (0/800 s/mm2). SUVmax and ADCmin were assessed in a ROI analysis. The following ratios were calculated: SUVmax(tumor)/SUVmean(liver) and ADCmin(tumor)/ADCmean(muscle). Correlation between SUV and ADC was analyzed (Pearson’s correlation). Results: Diagnostic scans could be obtained in all patients with good tumour delineation. The spatial matching of PET and DWI data was very exact. Most tumours showed a pronounced FDG-uptake in combination with decreased ADC values. Significant correlation was found between SUV and ADC ratios (r = -0.87, p = 0.0118). Discussion/Conclusion: Simultaneous MR/PET imaging of lung cancer is feasible. The whole-body MR/PET system can provide complementary information regarding tumour viability and cellularity which could facilitate a more profound tumour characterization. Further studies have to be done to evaluate the importance of these parameters for therapy decisions and monitoring

ei

Web DOI [BibTex]

Analysis of Fixed-Point and Coordinate Descent Algorithms for Regularized Kernel Methods

Dinuzzo, F.

IEEE Transactions on Neural Networks, 22(10):1576-1587, October 2011 (article)

Abstract
In this paper, we analyze the convergence of two general classes of optimization algorithms for regularized kernel methods with convex loss function and quadratic norm regularization. The first methodology is a new class of algorithms based on fixed-point iterations that are well-suited for a parallel implementation and can be used with any convex loss function. The second methodology is based on coordinate descent, and generalizes some techniques previously proposed for linear support vector machines. It exploits the structure of additively separable loss functions to compute solutions of line searches in closed form. The two methodologies are both very easy to implement. In this paper, we also show how to remove non-differentiability of the objective functional by exactly reformulating a convex regularization problem as an unconstrained differentiable stabilization problem.
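
For the special case of the squared loss, closed-form coordinate updates reduce to a Gauss-Seidel-style sweep over the dual variables of kernel ridge regression, sketched below; this special case and the toy data are assumptions, and the paper's fixed-point scheme for general convex losses is not reproduced.

import numpy as np

# Sketch of coordinate descent with closed-form updates for a regularized
# kernel method, specialized to the squared loss (kernel ridge regression in
# dual form): minimize 0.5*a^T(K + lam*I)a - a^T y.

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

K = np.exp(-0.5 * (X[:, None, 0] - X[None, :, 0]) ** 2)   # RBF Gram matrix
lam = 0.1
A = K + lam * np.eye(len(X))

alpha = np.zeros(len(X))
for sweep in range(30):                      # cyclic coordinate descent
    for j in range(len(X)):
        # closed-form minimizer over alpha_j with the others held fixed
        residual_j = y[j] - A[j] @ alpha + A[j, j] * alpha[j]
        alpha[j] = residual_j / A[j, j]

alpha_direct = np.linalg.solve(A, y)
print("max deviation from direct solve:", float(np.abs(alpha - alpha_direct).max()))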

ei

Web DOI [BibTex]

A biomimetic approach to robot table tennis

Mülling, K., Kober, J., Peters, J.

Adaptive Behavior, 19(5):359-376, October 2011 (article)

Abstract
Playing table tennis is a difficult motor task that requires fast movements, accurate control and adaptation to task parameters. Although human beings see and move slower than most robot systems, they significantly outperform all table tennis robots. One important reason for this higher performance is the human movement generation. In this paper, we study human movements during table tennis and present a robot system that mimics human striking behavior. Our focus lies on generating hitting motions capable of adapting to variations in environmental conditions, such as changes in ball speed and position. Therefore, we model the human movements involved in hitting a table tennis ball using discrete movement stages and the virtual hitting point hypothesis. The resulting model was evaluated both in a physically realistic simulation and on a real anthropomorphic seven degrees of freedom Barrett WAM™ robot arm.

ei

PDF Web DOI [BibTex]

Whole-genome sequencing of multiple Arabidopsis thaliana populations

Cao, J., Schneeberger, K., Ossowski, S., Günther, T., Bender, S., Fitz, J., Koenig, D., Lanz, C., Stegle, O., Lippert, C., Wang, X., Ott, F., Müller, J., Alonso-Blanco, C., Borgwardt, K., Schmid, K., Weigel, D.

Nature Genetics, 43(10):956–963, October 2011 (article)

Abstract
The plant Arabidopsis thaliana occurs naturally in many different habitats throughout Eurasia. As a foundation for identifying genetic variation contributing to adaptation to diverse environments, a 1001 Genomes Project to sequence geographically diverse A. thaliana strains has been initiated. Here we present the first phase of this project, based on population-scale sequencing of 80 strains drawn from eight regions throughout the species' native range. We describe the majority of common small-scale polymorphisms as well as many larger insertions and deletions in the A. thaliana pan-genome, their effects on gene function, and the patterns of local and global linkage among these variants. The action of processes other than spontaneous mutation is identified by comparing the spectrum of mutations that have accumulated since A. thaliana diverged from its closest relative 10 million years ago with the spectrum observed in the laboratory. Recent species-wide selective sweeps are rare, and potentially deleterious mutations are more common in marginal populations.

ei

Web DOI [BibTex]

Evaluating the Automated Alignment of 3D Human Body Scans

Hirshberg, D. A., Loper, M., Rachlin, E., Tsoli, A., Weiss, A., Corner, B., Black, M. J.

In 2nd International Conference on 3D Body Scanning Technologies, pages: 76-86, (Editors: D’Apuzzo, Nicola), Hometrica Consulting, Lugano, Switzerland, October 2011 (inproceedings)

Abstract
The statistical analysis of large corpora of human body scans requires that these scans be in alignment, either for a small set of key landmarks or densely for all the vertices in the scan. Existing techniques tend to rely on hand-placed landmarks or algorithms that extract landmarks from scans. The former is time consuming and subjective while the latter is error prone. Here we show that a model-based approach can align meshes automatically, producing alignment accuracy similar to that of previous methods that rely on many landmarks. Specifically, we align a low-resolution, artist-created template body mesh to many high-resolution laser scans. Our alignment procedure employs a robust iterative closest point method with a regularization that promotes smooth and locally rigid deformation of the template mesh. We evaluate our approach on 50 female body models from the CAESAR dataset that vary significantly in body shape. To make the method fully automatic, we define simple feature detectors for the head and ankles, which provide initial landmark locations. We find that, if body poses are fairly similar, as in CAESAR, the fully automated method provides dense alignments that enable statistical analysis and anthropometric measurement.

ps

pdf slides DOI Project Page [BibTex]

Learning anticipation policies for robot table tennis

Wang, Z., Lampert, C., Mülling, K., Schölkopf, B., Peters, J.

In pages: 332-337, (Editors: NM Amato), IEEE, Piscataway, NJ, USA, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), September 2011 (inproceedings)

Abstract
Playing table tennis is a difficult task for robots, especially due to their limitations of acceleration. A key bottleneck is the amount of time needed to reach the desired hitting position and velocity of the racket for returning the incoming ball. Here, it often does not suffice to simply extrapolate the ball's trajectory after the opponent returns it but more information is needed. Humans are able to predict the ball's trajectory based on the opponent's moves and, thus, have a considerable advantage. Hence, we propose to incorporate an anticipation system into robot table tennis players, which enables the robot to react earlier while the opponent is performing the striking movement. Based on visual observation of the opponent's racket movement, the robot can predict the aim of the opponent and adjust its movement generation accordingly. The policies for deciding how and when to react are obtained by reinforcement learning. We conduct experiments with an existing robot player to show that the learned reaction policy can significantly improve the performance of the overall system.

ei

PDF Web DOI [BibTex]

Estimating integrated information with TMS pulses during wakefulness, sleep and under anesthesia

Balduzzi, D.

In pages: 4717-4720, IEEE, Piscataway, NJ, USA, 33rd Annual International Conference of the IEEE Engineering in Medicine and Biology Society (IEEE EMBC), September 2011 (inproceedings)

Abstract
This paper relates a recently proposed measure of information integration to experiments investigating the evoked high-density electroencephalography (EEG) response to transcranial magnetic stimulation (TMS) during wakefulness, early non-rapid eye movement (NREM) sleep and under anesthesia. We show that bistability, arising at the cellular and population level during NREM sleep and under anesthesia, dramatically reduces the brain’s ability to integrate information.

ei

PDF Web DOI [BibTex]

Improving Denoising Algorithms via a Multi-scale Meta-procedure

Burger, H., Harmeling, S.

In Pattern Recognition, pages: 206-215, (Editors: Mester, R., M. Felsberg), Springer, Berlin, Germany, 33rd DAGM Symposium, September 2011 (inproceedings)

Abstract
Many state-of-the-art denoising algorithms focus on recovering high-frequency details in noisy images. However, images corrupted by large amounts of noise are also degraded in the lower frequencies. Thus properly handling all frequency bands allows us to better denoise in such regimes. To improve existing denoising algorithms we propose a meta-procedure that applies existing denoising algorithms across different scales and combines the resulting images into a single denoised image. With a comprehensive evaluation we show that the performance of many state-of-the-art denoising algorithms can be improved.
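
A sketch of the meta-procedure's idea: run a given denoiser at the original and at a downsampled scale, then let the coarse result supply the low-frequency band; the Gaussian-filter stand-in for a real denoiser, the single extra scale and the band-split cutoff are illustrative assumptions, not the combination scheme of the paper.

import numpy as np
from scipy.ndimage import gaussian_filter, zoom

# Sketch of a multi-scale meta-procedure: denoise the image and a
# downsampled copy, then take the low-frequency band from the coarse result
# and the high-frequency band from the fine result.

def base_denoiser(img):
    return gaussian_filter(img, sigma=1.0)          # stand-in for BM3D etc.

def multiscale_denoise(noisy, cutoff_sigma=3.0):
    fine = base_denoiser(noisy)
    coarse = zoom(base_denoiser(zoom(noisy, 0.5, order=1)), 2.0, order=1)
    coarse = coarse[:noisy.shape[0], :noisy.shape[1]]         # fix rounding
    low = gaussian_filter(coarse, cutoff_sigma)               # low band from coarse scale
    high = fine - gaussian_filter(fine, cutoff_sigma)         # high band from fine scale
    return low + high

rng = np.random.default_rng(0)
clean = np.zeros((128, 128)); clean[32:96, 32:96] = 1.0
noisy = clean + 0.5 * rng.normal(size=clean.shape)
for name, out in [("single-scale", base_denoiser(noisy)), ("multi-scale", multiscale_denoise(noisy))]:
    print(name, "RMSE:", round(float(np.sqrt(np.mean((out - clean) ** 2))), 4))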

ei

PDF DOI [BibTex]

Multiple reference genomes and transcriptomes for Arabidopsis thaliana

Gan, X., Stegle, O., Behr, J., Steffen, J., Drewe, P., Hildebrand, K., Lyngsoe, R., Schultheiss, S., Osborne, E., Sreedharan, V., Kahles, A., Bohnert, R., Jean, G., Derwent, P., Kersey, P., Belfield, E., Harberd, N., Kemen, E., Toomajian, C., Kover, P., Clark, R., Rätsch, G., Mott, R.

Nature, 477(7365):419–423, September 2011 (article)

Abstract
Genetic differences between Arabidopsis thaliana accessions underlie the plant’s extensive phenotypic variation, and until now these have been interpreted largely in the context of the annotated reference accession Col-0. Here we report the sequencing, assembly and annotation of the genomes of 18 natural A. thaliana accessions, and their transcriptomes. When assessed on the basis of the reference annotation, one-third of protein-coding genes are predicted to be disrupted in at least one accession. However, re-annotation of each genome revealed that alternative gene models often restore coding potential. Gene expression in seedlings differed for nearly half of expressed genes and was frequently associated with cis variants within 5 kilobases, as were intron retention alternative splicing events. Sequence and expression variation is most pronounced in genes that respond to the biotic environment. Our data further promote evolutionary and functional studies in A. thaliana, especially the MAGIC genetic reference population descended from these accessions.

ei

Web DOI [BibTex]

Weisfeiler-Lehman Graph Kernels

Shervashidze, N., Schweitzer, P., van Leeuwen, E., Mehlhorn, K., Borgwardt, M.

Journal of Machine Learning Research, 12, pages: 2539-2561, September 2011 (article)

Abstract
In this article, we propose a family of efficient kernels for large graphs with discrete node labels. Key to our method is a rapid feature extraction scheme based on the Weisfeiler-Lehman test of isomorphism on graphs. It maps the original graph to a sequence of graphs, whose node attributes capture topological and label information. A family of kernels can be defined based on this Weisfeiler-Lehman sequence of graphs, including a highly efficient kernel comparing subtree-like patterns. Its runtime scales only linearly in the number of edges of the graphs and the length of the Weisfeiler-Lehman graph sequence. In our experimental evaluation, our kernels outperform state-of-the-art graph kernels on several graph classification benchmark data sets in terms of accuracy and runtime. Our kernels open the door to large-scale applications of graph kernels in various disciplines such as computational biology and social network analysis.
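
The relabeling-and-counting loop at the core of the Weisfeiler-Lehman subtree kernel can be sketched compactly; the adjacency-list graph representation and the shared label dictionary below are one simple implementation choice, not the paper's optimized code.

from collections import Counter

# Sketch of Weisfeiler-Lehman feature extraction: repeatedly relabel each
# node with a compressed version of its own label plus the sorted multiset
# of neighbor labels, and count label occurrences. The kernel value of two
# graphs is the dot product of their count vectors.

def wl_features(adj, labels, iterations=3, label_dict=None):
    if label_dict is None:
        label_dict = {}
    counts = Counter(labels)
    for _ in range(iterations):
        new_labels = []
        for v in range(len(adj)):
            signature = (labels[v], tuple(sorted(labels[u] for u in adj[v])))
            new_labels.append(label_dict.setdefault(signature, len(label_dict)))
        labels = new_labels
        counts.update(labels)
    return counts, label_dict

def wl_kernel(counts_a, counts_b):
    return sum(c * counts_b.get(lbl, 0) for lbl, c in counts_a.items())

# Two toy labeled graphs (adjacency lists + initial node labels).
g1_adj, g1_lab = [[1, 2], [0, 2], [0, 1, 3], [2]], ['C', 'C', 'O', 'H']
g2_adj, g2_lab = [[1], [0, 2], [1, 3], [2]],       ['C', 'O', 'C', 'H']

shared = {}
c1, shared = wl_features(g1_adj, g1_lab, label_dict=shared)
c2, shared = wl_features(g2_adj, g2_lab, label_dict=shared)
print("WL subtree kernel value:", wl_kernel(c1, c2))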

ei

PDF Web [BibTex]

What are the Causes of Performance Variation in Brain-Computer Interfacing?

Grosse-Wentrup, M.

International Journal of Bioelectromagnetism, 13(3):115-116, September 2011 (article)

Abstract
While research on brain-computer interfacing (BCI) has seen tremendous progress in recent years, performance still varies substantially between as well as within subjects, with roughly 10 - 20% of subjects being incapable of successfully operating a BCI system. In this short report, I argue that this variation in performance constitutes one of the major obstacles that impedes a successful commercialization of BCI systems. I review the current state of research on the neuro-physiological causes of performance variation in BCI, discuss recent progress and open problems, and delineate potential research programs for addressing this issue.

ei

PDF Web [BibTex]

Learning robot grasping from 3-D images with Markov Random Fields

Boularias, A., Kroemer, O., Peters, J.

In pages: 1548-1553, (Editors: Amato, N.M.), IEEE, Piscataway, NJ, USA, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), September 2011 (inproceedings)

Abstract
Learning to grasp novel objects is an essential skill for robots operating in unstructured environments. We therefore propose a probabilistic approach for learning to grasp. In particular, we learn a function that predicts the success probability of grasps performed on surface points of a given object. Our approach is based on Markov Random Fields (MRF), and motivated by the fact that points that are geometrically close to each other tend to have similar grasp success probabilities. The MRF approach is successfully tested in simulation, and on a real robot using 3-D scans of various types of objects. The empirical results show a significant improvement over methods that do not utilize the smoothness assumption and classify each point separately from the others.

ei

PDF Web DOI [BibTex]

Neurofeedback of Fronto-Parietal Gamma-Oscillations

Grosse-Wentrup, M.

In pages: 172-175, (Editors: Müller-Putz, G.R., R. Scherer, M. Billinger, A. Kreilinger, V. Kaiser, C. Neuper), Verlag der Technischen Universität Graz, Graz, Austria, 5th International Brain-Computer Interface Conference (BCI), September 2011 (inproceedings)

Abstract
In recent work, we have provided evidence that fronto-parietal γ-range oscillations are a cause of within-subject performance variations in brain-computer interfaces (BCIs) based on motor-imagery. Here, we explore the feasibility of using neurofeedback of fronto-parietal γ-power to induce a mental state that is beneficial for BCI-performance. We provide empirical evidence based on two healthy subjects that intentional attenuation of fronto-parietal γ-power results in an enhanced resting-state sensorimotor-rhythm (SMR). As a large resting-state amplitude of the SMR has been shown to correlate with good BCI-performance, our approach may provide a means to reduce performance variations in BCIs.

ei

PDF Web [BibTex]

Learning inverse kinematics with structured prediction

Bocsi, B., Nguyen-Tuong, D., Csato, L., Schölkopf, B., Peters, J.

In pages: 698-703, (Editors: NM Amato), IEEE, Piscataway, NJ, USA, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), September 2011 (inproceedings)

Abstract
Learning inverse kinematics of robots with redundant degrees of freedom (DoF) is a difficult problem in robot learning. The difficulty lies in the non-uniqueness of the inverse kinematics function. Existing methods tackle non-uniqueness by segmenting the configuration space and building a global solution from local experts. The usage of local experts implies the definition of an oracle, which governs the global consistency of the local models; the definition of this oracle is difficult. We propose an algorithm suitable to learn the inverse kinematics function in a single global model despite its multivalued nature. Inverse kinematics is approximated from examples using structured output learning methods. Unlike most of the existing methods, which estimate inverse kinematics on the velocity level, we address learning of the direct function on the position level. This problem is significantly harder. To support the proposed method, we conducted real-world experiments on a tracking control task and tested our algorithms on these models.

ei

PDF Web DOI [BibTex]

Automatic foreground-background refocusing

Loktyushin, A., Harmeling, S.

In pages: 3445-3448, (Editors: Macq, B., P. Schelkens), IEEE, Piscataway, NJ, USA, 18th IEEE International Conference on Image Processing (ICIP), September 2011 (inproceedings)

Abstract
A challenging problem in image restoration is to recover an image with a blurry foreground. Such images can easily occur with modern cameras, when the auto-focus aims mistakenly at the background (which will appear sharp) instead of the foreground, where usually the object of interest is. In this paper we propose an automatic procedure that (i) estimates the amount of out-of-focus blur, (ii) segments the image into foreground and background incorporating clues from the blurriness, (iii) recovers the sharp foreground, and finally (iv) blurs the background to refocus the scene. On several real photographs with blurry foreground and sharp background, we demonstrate the effectiveness and limitations of our method.

ei

Web DOI [BibTex]

Gravitational Lensing Accuracy Testing 2010 (GREAT10) Challenge Handbook

Kitching, T., Amara, A., Gill, M., Harmeling, S., Heymans, C., Massey, R., Rowe, B., Schrabback, T., Voigt, L., Balan, S., Bernstein, G., Bethge, M., Bridle, S., Courbin, F., Gentile, M., Heavens, A., Hirsch, M., Hosseini, R., Kiessling, A., Kirk, D., Kuijken, K., Mandelbaum, R., Moghaddam, B., Nurbaeva, G., Paulin-Henriksson, S., Rassat, A., Rhodes, J., Schölkopf, B., Shawe-Taylor, J., Shmakova, M., Taylor, A., Velander, M., van Waerbeke, L., Witherick, D., Wittman, D.

Annals of Applied Statistics, 5(3):2231-2263, September 2011 (article)

Abstract
GRavitational lEnsing Accuracy Testing 2010 (GREAT10) is a public image analysis challenge aimed at the development of algorithms to analyze astronomical images. Specifically, the challenge is to measure varying image distortions in the presence of a variable convolution kernel, pixelization and noise. This is the second in a series of challenges set to the astronomy, computer science and statistics communities, providing a structured environment in which methods can be improved and tested in preparation for planned astronomical surveys. GREAT10 extends upon previous work by introducing variable fields into the challenge. The “Galaxy Challenge” involves the precise measurement of galaxy shape distortions, quantified locally by two parameters called shear, in the presence of a known convolution kernel. Crucially, the convolution kernel and the simulated gravitational lensing shape distortion both now vary as a function of position within the images, as is the case for real data. In addition, we introduce the “Star Challenge” that concerns the reconstruction of a variable convolution kernel, similar to that in a typical astronomical observation. This document details the GREAT10 Challenge for potential participants. Continually updated information is also available from www.greatchallenges.info.

ei

PDF Web DOI [BibTex]

Reinforcement Learning to adjust Robot Movements to New Situations

Kober, J., Oztop, E., Peters, J.

In Robotics: Science and Systems VI, pages: 33-40, (Editors: Matsuoka, Y., H. F. Durrant-Whyte, J. Neira), MIT Press, Cambridge, MA, USA, 2010 Robotics: Science and Systems Conference (RSS), September 2011 (inproceedings)

Abstract
Many complex robot motor skills can be represented using elementary movements, and there exist efficient techniques for learning parametrized motor plans using demonstrations and self-improvement. However, in many cases, the robot currently needs to learn a new elementary movement even if a parametrized motor plan exists that covers a similar, related situation. Clearly, a method is needed that modulates the elementary movement through the meta-parameters of its representation. In this paper, we show how to learn such mappings from circumstances to meta-parameters using reinforcement learning. We introduce an appropriate reinforcement learning algorithm based on a kernelized version of the reward-weighted regression. We compare this algorithm to several previous methods on a toy example and show that it performs well in comparison to standard algorithms. Subsequently, we show two robot applications of the presented setup; i.e., the generalization of throwing movements in darts, and of hitting movements in table tennis. We show that both tasks can be learned successfully using simulated and real robots.
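
A stripped-down illustration of reward-weighted regression for mapping situations to movement meta-parameters: explored meta-parameters are averaged with weights that combine situation similarity and exponentiated reward. The one-dimensional toy task, the kernel bandwidth and the temperature are assumptions, and the kernelized, cost-regularized variant used in the paper is not reproduced.

import numpy as np

# Sketch of reward-weighted regression for situation-dependent
# meta-parameters: average explored meta-parameters, weighted by a
# similarity kernel over situations times exp(reward / temperature).

rng = np.random.default_rng(0)

def reward(situation, meta):                 # toy task: best meta is 2 * situation
    return -(meta - 2.0 * situation) ** 2

situations = rng.uniform(0.0, 1.0, 200)                    # observed circumstances
metas = rng.uniform(0.0, 2.0, 200)                         # explored meta-parameters
rewards = reward(situations, metas)

def predict_meta(s, temperature=0.2, bandwidth=0.1):
    k = np.exp(-0.5 * ((s - situations) / bandwidth) ** 2)  # situation similarity
    w = k * np.exp(rewards / temperature)                    # reward weighting
    return np.sum(w * metas) / np.sum(w)

for s in (0.25, 0.5, 0.75):
    print(f"situation {s:.2f}: predicted meta-parameter {predict_meta(s):.2f} (optimum {2*s:.2f})")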

ei

PDF Web [BibTex]

Simultaneous EEG Recordings with Dry and Wet Electrodes in Motor-Imagery

Saab, J., Battes, B., Grosse-Wentrup, M.

In pages: 312-315, (Editors: Müller-Putz, G.R., R. Scherer, M. Billinger, A. Kreilinger, V. Kaiser, C. Neuper), Verlag der Technischen Universität Graz, Graz, Austria, 5th International Brain-Computer Interface Conference (BCI), September 2011 (inproceedings)

Abstract
Robust dry EEG electrodes are arguably the key to making EEG Brain-Computer Interfaces (BCIs) a practical technology. Existing studies on dry EEG electrodes can be characterized by the recording method (stand-alone dry electrodes or simultaneous recording with wet electrodes), the dry electrode technology (e.g. active or passive), the paradigm used for testing (e.g. event-related potentials), and the measure of performance (e.g. comparing dry and wet electrode frequency spectra). In this study, an active-dry electrode prototype is tested, during a motor-imagery task, with EEG-BCI in mind. It is used simultaneously with wet electrodes and assessed using classification accuracy. Our results indicate that the two types of electrodes are comparable in their performance but there are improvements to be made, particularly in finding ways to reduce motion-related artifacts.

ei

PDF Web [BibTex]

Learning task-space tracking control with kernels

Nguyen-Tuong, D., Peters, J.

In pages: 704-709, (Editors: Amato, N.M.), IEEE, Piscataway, NJ, USA, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), September 2011 (inproceedings)

Abstract
Task-space tracking control is essential for robot manipulation. In practice, task-space control of redundant robot systems is known to be susceptible to modeling errors. Here, data-driven learning methods may present an interesting alternative approach. However, learning models for task-space tracking control from sampled data is an ill-posed problem. In particular, the same input data point can yield many different output values, which can form a non-convex solution space. Because the problem is ill-posed, models cannot be learned from such data using common regression methods. While learning of task-space control mappings is globally ill-posed, it has been shown in recent work that it is locally a well-defined problem. In this paper, we use this insight to formulate a local kernel-based learning approach for online model learning for task-space tracking control. For evaluations, we show in simulation the ability of the method for online model learning for task-space tracking control of redundant robots.

ei

PDF Web DOI [BibTex]

Automatic particle picking using diffusion filtering and random forest classification

Joubert, P., Nickell, S., Beck, F., Habeck, M., Hirsch, M., Schölkopf, B.

In pages: 6, International Workshop on Microscopic Image Analysis with Application in Biology (MIAAB), September 2011 (inproceedings)

Abstract
An automatic particle picking algorithm for processing electron micrographs of a large molecular complex, the 26S proteasome, is described. The algorithm makes use of a coherence enhancing diffusion filter to denoise the data, and a random forest classifier for removing false positives. It does not make use of a 3D reference model, but uses a training set of manually picked particles instead. False positive and false negative rates of around 25% to 30% are achieved on a testing set. The algorithm was developed for a specific particle, but contains steps that should be useful for developing automatic picking algorithms for other particles.
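
A pipeline in the spirit of the abstract (denoise, propose candidate positions, filter false positives with a random forest trained on picked examples) can be sketched with scipy and scikit-learn; Gaussian filtering stands in for coherence-enhancing diffusion, and the synthetic micrograph and patch-statistics features are assumptions.

import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter
from sklearn.ensemble import RandomForestClassifier

# Sketch: denoise the micrograph, detect candidate positions as local
# maxima, and keep only candidates that a random forest (trained on
# manually picked examples) accepts.

def candidate_positions(micrograph, sigma=2.0, threshold=0.12):
    smoothed = gaussian_filter(micrograph, sigma)
    peaks = (smoothed == maximum_filter(smoothed, size=9)) & (smoothed > threshold)
    return np.argwhere(peaks)

def patch_features(micrograph, yx, half=8):
    y, x = yx
    patch = micrograph[max(0, y - half):y + half, max(0, x - half):x + half]
    return [patch.mean(), patch.std(), patch.max(), patch.min()]

rng = np.random.default_rng(0)
micrograph = rng.normal(0.0, 0.3, size=(256, 256))
true_particles = rng.integers(20, 236, size=(30, 2))
for y, x in true_particles:                       # synthetic bright particles
    micrograph[y - 4:y + 4, x - 4:x + 4] += 1.0

candidates = candidate_positions(micrograph)
features = np.array([patch_features(micrograph, c) for c in candidates])
# label a candidate positive if it lies near a manually picked particle
dists = np.abs(candidates[:, None, :] - true_particles[None, :, :]).sum(-1).min(1)
labels = (dists < 6).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(features, labels)
kept = candidates[clf.predict(features) == 1]
print(f"{len(candidates)} candidates, {len(kept)} kept after random forest filtering")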

ei

PDF Web [BibTex]

Active Versus Semi-supervised Learning Paradigm for the Classification of Remote Sensing Images

Persello, C., Bruzzone, L.

In pages: 1-15, (Editors: Bruzzone, L.), SPIE, Bellingham, WA, USA, Image and Signal Processing for Remote Sensing XVII, September 2011 (inproceedings)

Abstract
This paper presents a comparative study in order to analyze active learning (AL) and semi-supervised learning (SSL) for the classification of remote sensing (RS) images. The two learning paradigms are analyzed from both the theoretical and the experimental point of view. The aim of this work is to identify the advantages and disadvantages of AL and SSL methods, and to point out the boundary conditions on the applicability of these methods with respect to both the number of available labeled samples and the reliability of classification results. In our experimental analysis, AL and SSL techniques have been applied to the classification of both synthetic and real RS data, defining different classification problems starting from different initial training sets and considering different distributions of the classes. This analysis allowed us to derive important conclusions about the use of these classification approaches and to obtain insight into which one of the two approaches is more appropriate according to the specific classification problem, the available initial training set and the available budget for the acquisition of new labeled samples.

ei

Web DOI [BibTex]

MRI-Based Attenuation Correction for Whole-Body PET/MRI: Quantitative Evaluation of Segmentation- and Atlas-Based Methods

Hofmann, M., Bezrukov, I., Mantlik, F., Aschoff, P., Steinke, F., Beyer, T., Pichler, B., Schölkopf, B.

Journal of Nuclear Medicine, 52(9):1392-1399, September 2011 (article)

Abstract
PET/MRI is an emerging dual-modality imaging technology that requires new approaches to PET attenuation correction (AC). We assessed 2 algorithms for whole-body MRI-based AC (MRAC): a basic MR image segmentation algorithm and a method based on atlas registration and pattern recognition (AT&PR). METHODS: Eleven patients each underwent a whole-body PET/CT study and a separate multibed whole-body MRI study. The MR image segmentation algorithm uses a combination of image thresholds, Dixon fat-water segmentation, and component analysis to detect the lungs. MR images are segmented into 5 tissue classes (not including bone), and each class is assigned a default linear attenuation value. The AT&PR algorithm uses a database of previously aligned pairs of MRI/CT image volumes. For each patient, these pairs are registered to the patient MRI volume, and machine-learning techniques are used to predict attenuation values on a continuous scale. MRAC methods are compared via the quantitative analysis of AC PET images using volumes of interest in normal organs and on lesions. We assume the PET/CT values after CT-based AC to be the reference standard. RESULTS: In regions of normal physiologic uptake, the average error of the mean standardized uptake value was 14.1% ± 10.2% and 7.7% ± 8.4% for the segmentation and the AT&PR methods, respectively. Lesion-based errors were 7.5% ± 7.9% for the segmentation method and 5.7% ± 4.7% for the AT&PR method. CONCLUSION: The MRAC method using AT&PR provided better overall PET quantification accuracy than the basic MR image segmentation approach. This better quantification was due to the significantly reduced volume of errors made regarding volumes of interest within or near bones and the slightly reduced volume of errors made regarding areas outside the lungs.

ei

Web DOI [BibTex]


Learning elementary movements jointly with a higher level task

Kober, J., Peters, J.

In pages: 338-343, (Editors: Amato, N.M.), IEEE, Piscataway, NJ, USA, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), September 2011 (inproceedings)

Abstract
Many motor skills consist of many lower level elementary movements that need to be sequenced in order to achieve a task. In order to learn such a task, both the primitive movements as well as the higher-level strategy need to be acquired at the same time. In contrast, most learning approaches focus either on learning to combine a fixed set of options or on learning just single options. In this paper, we discuss a new approach that allows improving the performance of lower level actions while pursuing a higher level task. The presented approach is applicable to learning a wider range of motor skills, but in this paper, we employ it for learning games where the player wants to improve his performance at the individual actions of the game while still performing well at the strategy level of the game. We propose to learn the lower level actions using Cost-regularized Kernel Regression and the higher level actions using a form of Policy Iteration. The two approaches are coupled by their transition probabilities. We evaluate the approach on a side-stall-style throwing game both in simulation and with a real BioRob.

ei

PDF Web DOI [BibTex]

Multi-parametric Tumor Characterization and Therapy Monitoring using Simultaneous PET/MRI: initial results for Lung Cancer and GvHD

Sauter, A., Schmidt, H., Gueckel, B., Brendle, C., Bezrukov, I., Mantlik, F., Kolb, A., Mueller, M., Reimold, M., Federmann, B., Hetzel, J., Claussen, C., Pfannenberg, C., Horger, M., Pichler, B., Schwenzer, N.

(T110), 2011 World Molecular Imaging Congress (WMIC), September 2011 (talk)

Abstract
Hybrid imaging modalities such as [18F]FDG-PET/CT are superior in staging of, e.g., lung cancer compared with stand-alone modalities. Clinical PET/MRI systems are about to enter the field of hybrid imaging and offer potential advantages. One added value could be a deeper insight into the tumor metabolism and tumorigenesis due to the combination of PET and dedicated MR methods such as MRS and DWI. Additionally, therapy monitoring of difficult-to-diagnose diseases such as chronic sclerodermic GvHD (csGvHD) can potentially be improved by this combination. We have applied PET/MRI in 3 patients with lung cancer and 4 patients with csGvHD before and during therapy. All 3 patients had lung cancer confirmed by histology (2 adenocarcinoma, 1 carcinoid). First, a [18F]FDG-PET/CT was performed with the following parameters: injected dose 351.7±25.1 MBq, uptake time 59.0±2.6 min, 3 min/bed. Subsequently, patients were brought to the PET/MRI imaging facility. The whole-body PET/MRI Biograph mMR system comprises 56 detector cassettes with a 59.4 cm transaxial and 25.8 cm axial FoV. The MRI is a modified Verio system with a magnet bore of 60 cm. The following parameters for PET acquisition were applied: uptake time 121.3±2.3 min, 3 bed positions, 6 min/bed. T1w, T2w, and DWI MR images were recorded simultaneously for each bed. Acquired PET data were reconstructed with an iterative 3D OSEM algorithm using 3 iterations and 21 subsets, Gaussian filter of 3 mm. The 4 patients with GvHD were brought to the brainPET/MRI imaging facility 2:10h-2:28h after tracer injection. A 9 min brainPET-acquisition with simultaneous MRI of the lower extremities was accomplished. MRI examination included T1-weighted (pre and post gadolinium) and T2-weighted sequences. Attenuation correction was calculated based on manual bone segmentation and thresholds for soft tissue, fat and air. Soleus muscle (m), crural fascia (f1) and posterior crural intermuscular septum fascia (f2) were surrounded with ROIs based on the pre-treatment T1-weighted images and coregistered using IRW (Siemens). Fascia-to-muscle ratios for PET (f/m), T1 contrast uptake (T1_post-contrast_f-pre-contrast_f/post-contrast_m-pre-contrast_m) and T2 (T2_f/m) were calculated. Both patients with adenocarcinoma show a lower ADC value compared with the carcinoid patient, suggesting a higher cellularity. This is also reflected in FDG-PET with higher SUV values. Our initial results reveal that PET/MRI can provide complementary information for a profound tumor characterization and therapy monitoring. The high soft tissue contrast provided by MRI is valuable for the assessment of the fascial inflammation. While in the first patient FDG and contrast uptake as well as edema, represented by T2 signals, decreased with ongoing therapy, all parameters remained comparatively stable in the second patient. Contrary to expectations, an increase in FDG uptake of patients 3 and 4 was accompanied by an increase of the T2 signals, but a decrease in contrast uptake. These initial results suggest that PET/MRI provides complementary information on the complex disease mechanisms in fibrosing disorders.

ei

Web [BibTex]

Adaptive nonparametric detection in cryo-electron microscopy

Langovoy, M., Habeck, M., Schölkopf, B.

In Proceedings of the 58th World Statistics Congress, pages: 4456-4461, ISI, August 2011 (inproceedings)

Abstract
We develop a novel method for detection of signals and reconstruction of images in the presence of random noise. The method uses results from percolation theory. We specifically address the problem of detection of multiple objects of unknown shapes in the case of nonparametric noise. The noise density is unknown and can be heavy-tailed. The objects of interest have unknown varying intensities. No boundary shape constraints are imposed on the objects; only a set of weak bulk conditions is required. We view the object detection problem as hypothesis testing for discrete statistical inverse problems. We present an algorithm that can detect greyscale objects of various shapes in noisy images. We prove results on the consistency and algorithmic complexity of our procedures. Applications to cryo-electron microscopy are presented.
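
For readers unfamiliar with percolation-based detection, the sketch below illustrates the basic idea on synthetic data: threshold the image and keep only connected clusters of above-threshold pixels that are too large to be produced by noise alone. The threshold and minimum cluster size are illustrative tuning choices, not the significance levels derived in the paper.

# Illustrative sketch of percolation-style detection: threshold the image and
# keep connected clusters of exceedance pixels that are "too large" for noise.
# The threshold and cluster-size cutoff are hypothetical tuning parameters.
import numpy as np
from scipy import ndimage

def detect_clusters(image, threshold, min_cluster_size):
    mask = image > threshold                         # exceedance set
    labels, n = ndimage.label(mask)                  # 4-connected components
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    keep = [i + 1 for i, s in enumerate(sizes) if s >= min_cluster_size]
    return np.isin(labels, keep), len(keep)          # detection mask, object count

rng = np.random.default_rng(0)
img = rng.standard_normal((128, 128))
img[40:60, 40:60] += 2.0                             # one bright object
mask, n_objects = detect_clusters(img, threshold=1.0, min_cluster_size=50)
print(n_objects)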

ei

PDF link (url) [BibTex]

Semi-supervised kernel canonical correlation analysis with application to human fMRI

Blaschko, M., Shelton, J., Bartels, A., Lampert, C., Gretton, A.

Pattern Recognition Letters, 32(11):1572-1583, August 2011 (article)

Abstract
Kernel canonical correlation analysis (KCCA) is a general technique for subspace learning that incorporates principal components analysis (PCA) and Fisher linear discriminant analysis (LDA) as special cases. By finding directions that maximize correlation, KCCA learns representations that are more closely tied to the underlying process that generates the data and can ignore high-variance noise directions. However, for data where acquisition in one or more modalities is expensive or otherwise limited, KCCA may suffer from small-sample effects. We propose to use semi-supervised Laplacian regularization to utilize data that are present in only one modality. This approach is able to find highly correlated directions that also lie along the data manifold, resulting in a more robust estimate of correlated subspaces. Data acquired with functional magnetic resonance imaging (fMRI) are naturally amenable to subspace techniques, as the data are well aligned. fMRI data of the human brain are a particularly interesting candidate. In this study we implemented various supervised and semi-supervised versions of KCCA on human fMRI data, with regression to single and multivariate labels (corresponding to the video content subjects viewed during image acquisition). In both the single- and multivariate label conditions, the semi-supervised variants of KCCA performed better than the supervised variants, including a supervised variant with Laplacian regularization. We additionally analyze the weights learned by the regression in order to infer brain regions that are important to different types of visual processing.
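
Regularized KCCA can be written as a generalized eigenvalue problem on the two kernel matrices, and a graph-Laplacian term can be added to the constraint on one view to pull the solution towards the data manifold. The sketch below works with paired data only and uses an RBF kernel, a k-NN Laplacian, and fixed regularization weights as illustrative assumptions; it is a simplified stand-in for the semi-supervised construction in the paper, which also exploits unpaired data.

# Minimal sketch of Laplacian-regularized KCCA for two paired views.
# Kernel widths, ridge term, Laplacian weight and the k-NN graph are
# illustrative choices, not the paper's exact construction.
import numpy as np
from scipy.linalg import eigh
from scipy.spatial.distance import cdist

def rbf_kernel(A, B, sigma):
    return np.exp(-cdist(A, B, "sqeuclidean") / (2 * sigma**2))

def knn_laplacian(X, k=5):
    D = cdist(X, X)
    W = np.zeros_like(D)
    for i in range(len(X)):
        idx = np.argsort(D[i])[1:k + 1]              # k nearest neighbours
        W[i, idx] = W[idx, i] = 1.0
    return np.diag(W.sum(1)) - W                     # unnormalized graph Laplacian

def kcca_first_pair(Kx, Ky, Lx, kappa=0.1, lam=0.1):
    n = Kx.shape[0]
    A = np.block([[np.zeros((n, n)), Kx @ Ky],       # cross-covariance blocks
                  [Ky @ Kx, np.zeros((n, n))]])
    Bx = Kx @ Kx + kappa * Kx + lam * Kx @ Lx @ Kx   # Laplacian-regularized constraint
    By = Ky @ Ky + kappa * Ky
    B = np.block([[Bx, np.zeros((n, n))],
                  [np.zeros((n, n)), By]]) + 1e-8 * np.eye(2 * n)
    vals, vecs = eigh(A, B)                          # generalized symmetric eigenproblem
    top = vecs[:, -1]                                # direction of largest correlation
    return vals[-1], top[:n], top[n:]                # rho, alpha, beta

rng = np.random.default_rng(0)
Z = rng.standard_normal((60, 1))                     # shared latent signal
X = np.hstack([Z, rng.standard_normal((60, 2))])
Y = np.hstack([Z + 0.1 * rng.standard_normal((60, 1)), rng.standard_normal((60, 2))])
Kx, Ky = rbf_kernel(X, X, 1.0), rbf_kernel(Y, Y, 1.0)
rho, alpha, beta = kcca_first_pair(Kx, Ky, knn_laplacian(X))
print(round(rho, 3))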

ei

PDF PDF DOI [BibTex]

Balancing Safety and Exploitability in Opponent Modeling

Wang, Z., Boularias, A., Mülling, K., Peters, J.

In Proceedings of the Twenty-Fifth AAAI Conference on Artificial Intelligence (AAAI 2011), pages: 1515-1520, (Editors: Burgard, W. and Roth, D.), AAAI Press, Menlo Park, CA, USA, August 2011 (inproceedings)

Abstract
Opponent modeling is a critical mechanism in repeated games. It allows a player to adapt its strategy in order to better respond to the presumed preferences of its opponents. We introduce a new modeling technique that adaptively balances exploitability and risk reduction. An opponent's strategy is modeled with a set of possible strategies that contains the actual strategy with high probability. The algorithm is safe, as the expected payoff is above the minimax payoff with high probability, and it can exploit the opponents' preferences when sufficient observations have been obtained. We apply the method to normal-form games and stochastic games with a finite number of stages. The performance of the proposed approach is first demonstrated on repeated rock-paper-scissors games. Subsequently, the approach is evaluated in a human-robot table-tennis setting where the robot player learns to prepare to return a served ball. By modeling the human players, the robot chooses a forehand, backhand or middle preparation pose before they serve. The learned strategies can exploit the opponent's preferences, leading to a higher rate of successful returns.
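
A toy version of the safety/exploitability trade-off can be written down for rock-paper-scissors: play the best response to the empirical opponent model only when a crude confidence bound keeps the expected payoff above the minimax value, and fall back to the safe uniform strategy otherwise. The Hoeffding-style confidence radius and the worst-case correction below are simple illustrative choices, not the bounds used in the paper.

# Toy rock-paper-scissors sketch of safe opponent exploitation.
import numpy as np

PAYOFF = np.array([[ 0, -1,  1],      # rows: our action (rock, paper, scissors)
                   [ 1,  0, -1],      # cols: opponent action
                   [-1,  1,  0]])
MINIMAX_VALUE = 0.0                   # value of the game for rock-paper-scissors

def choose_action(opponent_counts, delta=0.05):
    n = opponent_counts.sum()
    if n == 0:
        return np.ones(3) / 3                        # no data: play safe
    p_hat = opponent_counts / n
    radius = np.sqrt(np.log(2 / delta) / (2 * n))    # crude Hoeffding-style radius
    best = int(np.argmax(PAYOFF @ p_hat))            # best response to the model
    # Payoffs lie in [-1, 1], so shifting probability mass by `radius`
    # changes the expected payoff by at most 2 * radius.
    worst_case = PAYOFF[best] @ p_hat - 2 * radius
    if worst_case >= MINIMAX_VALUE:
        strategy = np.zeros(3)
        strategy[best] = 1.0
        return strategy                              # exploit the modeled preference
    return np.ones(3) / 3                            # not confident enough: stay safe

print(choose_action(np.array([40, 8, 6])))           # opponent clearly overplays rock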

ei

PDF Web [BibTex]

Detecting emergent processes in cellular automata with excess information

Balduzzi, D.

In Advances in Artificial Life: ECAL 2011, pages: 55-62, (Editors: Lenaerts, T. , M. Giacobini, H. Bersini, P. Bourgine, M. Dorigo, R. Doursat), MIT Press, Cambridge, MA, USA, Eleventh European Conference on the Synthesis and Simulation of Living Systems, August 2011 (inproceedings)

Abstract
Many natural processes occur over characteristic spatial and temporal scales. This paper presents tools for (i) flexibly and scalably coarse-graining cellular automata and (ii) identifying which coarse-grainings express an automaton’s dynamics well, and which express its dynamics badly. We apply the tools to investigate a range of examples in Conway’s Game of Life and Hopfield networks and demonstrate that they capture some basic intuitions about emergent processes. Finally, we formalize the notion that a process is emergent if it is better expressed at a coarser granularity.
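
One intuition behind the paper can be illustrated with a crude check: a coarse-graining expresses the dynamics well if coarse-graining and time evolution approximately commute. The agreement score below is only an illustrative stand-in for the paper's information-theoretic (excess-information) criterion, and the 2x2 block size and majority threshold are assumptions.

# Sketch: compare "evolve, then coarse-grain" with "coarse-grain, then evolve"
# for Conway's Game of Life; high agreement suggests the coarse-graining
# expresses the dynamics well. This is a crude proxy, not the paper's measure.
import numpy as np

def life_step(grid):
    # One Game of Life update with periodic boundary conditions.
    nbrs = sum(np.roll(np.roll(grid, i, 0), j, 1)
               for i in (-1, 0, 1) for j in (-1, 0, 1) if (i, j) != (0, 0))
    return ((nbrs == 3) | ((grid == 1) & (nbrs == 2))).astype(np.uint8)

def coarse_grain(grid, block=2):
    h, w = grid.shape
    sums = grid.reshape(h // block, block, w // block, block).sum(axis=(1, 3))
    return (sums >= (block * block) // 2).astype(np.uint8)   # block-majority rule

def commutativity_score(n_steps=100, size=64, seed=0):
    rng = np.random.default_rng(seed)
    grid = (rng.random((size, size)) < 0.35).astype(np.uint8)
    agree = total = 0
    for _ in range(n_steps):
        nxt = life_step(grid)
        fine_then_coarse = coarse_grain(nxt)                  # evolve, then coarse-grain
        coarse_then_fine = life_step(coarse_grain(grid))      # coarse-grain, then evolve
        agree += int((fine_then_coarse == coarse_then_fine).sum())
        total += fine_then_coarse.size
        grid = nxt
    return agree / total

print(round(commutativity_score(), 3))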

ei

PDF Web [BibTex]

Statistical Image Analysis and Percolation Theory

Langovoy, M., Habeck, M., Schölkopf, B.

2011 Joint Statistical Meetings (JSM), August 2011 (talk)

Abstract
We develop a novel method for detection of signals and reconstruction of images in the presence of random noise. The method uses results from percolation theory. We specifically address the problem of detection of multiple objects of unknown shapes in the case of nonparametric noise. The noise density is unknown and can be heavy-tailed. The objects of interest have unknown varying intensities. No boundary shape constraints are imposed on the objects; only a set of weak bulk conditions is required. We view the object detection problem as hypothesis testing for discrete statistical inverse problems. We present an algorithm that can detect greyscale objects of various shapes in noisy images. We prove results on the consistency and algorithmic complexity of our procedures. Applications to cryo-electron microscopy are presented.

ei

Web [BibTex]

Spatial statistics, image analysis and percolation theory

Langovoy, M., Habeck, M., Schölkopf, B.

In pages: 11, American Statistical Association, Alexandria, VA, USA, 2011 Joint Statistical Meetings (JSM), August 2011 (inproceedings)

Abstract
We develop a novel method for detection of signals and reconstruction of images in the presence of random noise. The method uses results from percolation theory. We specifically address the problem of detection of multiple objects of unknown shapes in the case of nonparametric noise. The noise density is unknown. The objects of interest have unknown varying intensities. No boundary shape constraints are imposed on the objects; only a set of weak bulk conditions is required. We view the object detection problem as a multiple-hypothesis-testing problem for discrete statistical inverse problems. We present an algorithm that can detect greyscale objects of various shapes in noisy images. We prove results on the consistency and algorithmic complexity of our procedures. Applications to cryo-electron microscopy are presented.

ei

PDF [BibTex]

Two-locus association mapping in subquadratic time

Achlioptas, P., Schölkopf, B., Borgwardt, K.

In pages: 726-734, (Editors: C Apté and J Ghosh and P Smyth), ACM Press, New York, NY, USA, 17th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD), August 2011 (inproceedings)

Abstract
Genome-wide association studies (GWAS) have not been able to discover strong associations between many complex human diseases and single genetic loci. Mapping these phenotypes to pairs of genetic loci is hindered by the huge number of candidates, leading to enormous computational and statistical problems. In GWAS on single nucleotide polymorphisms (SNPs), one has to consider on the order of 10^10 to 10^14 pairs, which is infeasible in practice. In this article, we give the first algorithm for 2-locus genome-wide association studies that is subquadratic in the number, n, of SNPs. The running time of our algorithm is data-dependent, but large experiments over real genomic data suggest that it scales empirically as n^(3/2). As a result, our algorithm can easily cope with n ~ 10^7, i.e., it can efficiently search all pairs of SNPs in the human genome.
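
For scale, the baseline this paper improves on is the exhaustive scan over all SNP pairs, which already becomes expensive for a few hundred SNPs. The brute-force sketch below uses a simplified interaction score (correlation of the SNP product with a binary phenotype) on synthetic data; the paper's contribution is precisely to avoid this O(n^2) loop, and its actual test statistic differs.

# Brute-force two-locus scan (the quadratic baseline, not the paper's algorithm).
import numpy as np
from itertools import combinations

def brute_force_two_locus(genotypes, phenotype):
    """genotypes: (n_snps, n_samples) in {0,1,2}; phenotype: (n_samples,) in {0,1}."""
    y = phenotype - phenotype.mean()
    best_score, best_pair = -np.inf, None
    for i, j in combinations(range(genotypes.shape[0]), 2):   # O(n^2) pairs
        interaction = genotypes[i] * genotypes[j]
        x = interaction - interaction.mean()
        denom = np.linalg.norm(x) * np.linalg.norm(y)
        score = abs(x @ y) / denom if denom > 0 else 0.0
        if score > best_score:
            best_score, best_pair = score, (i, j)
    return best_pair, best_score

rng = np.random.default_rng(0)
n_snps, n_samples = 200, 300
G = rng.integers(0, 3, size=(n_snps, n_samples))
y = ((G[10] * G[42]) > 1).astype(float)                 # phenotype driven by the pair (10, 42)
y = np.where(rng.random(n_samples) < 0.1, 1 - y, y)     # 10% label noise
print(brute_force_two_locus(G, y))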

ei

Web DOI [BibTex]

Multi-subject learning for common spatial patterns in motor-imagery BCI

Devlaminck, D., Wyns, B., Grosse-Wentrup, M., Otte, G., Santens, P.

Computational Intelligence and Neuroscience, 2011(217987):1-9, August 2011 (article)

Abstract
Motor-imagery-based brain-computer interfaces (BCIs) commonly use the common spatial pattern (CSP) filter as a preprocessing step before feature extraction and classification. The CSP method is a supervised algorithm and therefore needs subject-specific training data for calibration, which is very time consuming to collect. In order to reduce the amount of calibration data that is needed for a new subject, one can apply multitask (here called multisubject) machine learning techniques to the preprocessing phase. The goal of multisubject learning is to learn a spatial filter for a new subject based on that subject's own data and on data from other subjects. This paper outlines the details of the multitask CSP algorithm and shows results on two data sets. In certain subjects a clear improvement can be seen, especially when the number of training trials is relatively low.
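
CSP filters are generalized eigenvectors of the two class covariance matrices, so one simple way to borrow strength across subjects is to shrink a new subject's class covariances towards covariances pooled over other subjects before the eigendecomposition. This shrinkage form and the synthetic data below are an illustrative simplification, not the multitask formulation used in the paper.

# Sketch of CSP with cross-subject covariance shrinkage (synthetic data).
import numpy as np
from scipy.linalg import eigh

def class_covariance(trials):
    # trials: (n_trials, n_channels, n_samples); average trace-normalized covariances.
    covs = [t @ t.T / np.trace(t @ t.T) for t in trials]
    return np.mean(covs, axis=0)

def multisubject_csp(own_c1, own_c2, pooled_c1, pooled_c2, alpha=0.5, n_filters=2):
    c1 = (1 - alpha) * class_covariance(own_c1) + alpha * pooled_c1
    c2 = (1 - alpha) * class_covariance(own_c2) + alpha * pooled_c2
    vals, vecs = eigh(c1, c1 + c2)                 # generalized eigenproblem
    order = np.argsort(vals)
    pick = np.r_[order[:n_filters // 2], order[-(n_filters - n_filters // 2):]]
    return vecs[:, pick]                           # spatial filters (channels x filters)

rng = np.random.default_rng(0)
def fake_trials(scale, n, ch=8, T=100):
    return scale * rng.standard_normal((n, ch, T))

own_c1, own_c2 = fake_trials(1.0, n=5), fake_trials(1.5, n=5)   # few own trials
pooled_c1 = class_covariance(fake_trials(1.0, n=60))            # "other subjects"
pooled_c2 = class_covariance(fake_trials(1.5, n=60))
W = multisubject_csp(own_c1, own_c2, pooled_c1, pooled_c2)
print(W.shape)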

ei

PDF DOI [BibTex]

Bayesian Time Series Models

Barber, D., Cemgil, A., Chiappa, S.

pages: 432, Cambridge University Press, Cambridge, UK, August 2011 (book)

ei

[BibTex]

A Novel Active Learning Strategy for Domain Adaptation in the Classification of Remote Sensing Images

Persello, C., Bruzzone, L.

In pages: 3720-3723, IEEE, Piscataway, NJ, USA, IEEE International Geoscience and Remote Sensing Symposium (IGARSS), July 2011 (inproceedings)

Abstract
We present a novel technique for addressing domain adaptation problems in the classification of remote sensing images with active learning. Domain adaptation is the important problem of adapting a supervised classifier trained on a given image (source domain) to the classification of another similar (but not identical) image (target domain) acquired on a different area, or on the same area at a different time. The main idea of the proposed approach is to iteratively label and add to the training set the minimum number of the most informative samples from the target domain, while removing the source-domain samples that do not fit the class distributions of the target domain. In this way, the classification system exploits already available information, i.e., the labeled samples of the source domain, in order to minimize the number of target-domain samples to be labeled, thus reducing the cost associated with defining the training set for the classification of the target domain. Experimental results obtained in the classification of a hyperspectral image confirm the effectiveness of the proposed technique.
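
The loop described above can be sketched in a few lines: train on the labeled source samples, query the target samples the current model is least certain about, add them with their (user-provided) labels, and drop source samples that no longer fit. The classifier, uncertainty score, and removal rule below are placeholder choices on synthetic data, not the paper's exact procedure.

# Sketch of an active-learning domain-adaptation loop (placeholder components).
import numpy as np
from sklearn.linear_model import LogisticRegression

def adapt(X_src, y_src, X_tgt, y_tgt_oracle, n_iters=5, queries_per_iter=5):
    X_train, y_train = X_src.copy(), y_src.copy()
    is_source = np.ones(len(y_train), dtype=bool)
    unlabeled = np.ones(len(X_tgt), dtype=bool)
    for _ in range(n_iters):
        clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
        # Query the target samples the model is least sure about.
        proba = clf.predict_proba(X_tgt[unlabeled])
        uncertainty = 1 - proba.max(axis=1)
        idx = np.flatnonzero(unlabeled)[np.argsort(uncertainty)[-queries_per_iter:]]
        X_train = np.vstack([X_train, X_tgt[idx]])
        y_train = np.concatenate([y_train, y_tgt_oracle[idx]])   # "ask the user"
        is_source = np.concatenate([is_source, np.zeros(len(idx), dtype=bool)])
        unlabeled[idx] = False
        # Drop source samples the current model misclassifies (crude removal rule).
        keep = ~(is_source & (clf.predict(X_train) != y_train))
        X_train, y_train, is_source = X_train[keep], y_train[keep], is_source[keep]
    return LogisticRegression(max_iter=1000).fit(X_train, y_train)

rng = np.random.default_rng(0)
X_src = rng.standard_normal((100, 2))
y_src = (X_src[:, 0] > 0).astype(int)
X_tgt = rng.standard_normal((100, 2)) + 0.6                      # shifted domain
y_tgt = (X_tgt[:, 0] > 0.6).astype(int)
model = adapt(X_src, y_src, X_tgt, y_tgt)
print(round(model.score(X_tgt, y_tgt), 2))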

ei

Web DOI [BibTex]

Reinforcement Learning to adjust Robot Movements to New Situations

Kober, J., Oztop, E., Peters, J.

In pages: 2650-2655, (Editors: Walsh, T.), AAAI Press, Menlo Park, CA, USA, Twenty-Second International Joint Conference on Artificial Intelligence (IJCAI), July 2011 (inproceedings)

Abstract
Many complex robot motor skills can be represented using elementary movements, and there exist efficient techniques for learning parametrized motor plans using demonstrations and self-improvement. However, with current techniques the robot often needs to learn a new elementary movement even if a parametrized motor plan already exists that covers a related situation. A method is needed that modulates the elementary movement through the meta-parameters of its representation. In this paper, we describe how to learn such mappings from circumstances to meta-parameters using reinforcement learning. In particular, we use a kernelized version of reward-weighted regression. We show two applications of the presented setup in robotic domains: the generalization of throwing movements in darts, and of hitting movements in table tennis. We demonstrate that both tasks can be learned successfully using simulated and real robots.
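
The mapping from situations to meta-parameters can be illustrated with a reward-weighted kernel regression in closed form, where high-reward samples are regularized less and therefore trusted more. The weighting scheme, kernel, and dart-like toy data below follow the spirit of the paper's kernelized reward-weighted regression but are illustrative assumptions, not a reproduction of it.

# Sketch: reward-weighted kernel regression from situations to meta-parameters.
import numpy as np

def rbf(A, B, sigma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def fit_meta_parameter_policy(situations, meta_params, rewards, lam=0.1, sigma=0.5):
    K = rbf(situations, situations, sigma)
    # Reward-weighted regularization: low-reward samples are penalized more strongly.
    R_inv = np.diag(1.0 / np.clip(rewards, 1e-3, None))
    coeffs = np.linalg.solve(K + lam * R_inv, meta_params)
    return lambda s: rbf(np.atleast_2d(s), situations, sigma) @ coeffs

rng = np.random.default_rng(0)
situations = rng.uniform(-1, 1, size=(30, 1))          # e.g. target position
ideal = 2.0 * situations[:, 0] + 0.5                   # "ideal" release meta-parameter
meta_params = ideal + 0.3 * rng.standard_normal(30)    # noisy past attempts
rewards = np.exp(-np.abs(meta_params - ideal))         # better attempts earn higher reward
policy = fit_meta_parameter_policy(situations, meta_params, rewards)
print(policy([0.2]))                                   # predicted meta-parameter at a new situation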

ei

PDF Web [BibTex]

Online submodular minimization for combinatorial structures

Jegelka, S., Bilmes, J.

In pages: 345-352, (Editors: Getoor, L. , T. Scheffer), International Machine Learning Society, Madison, WI, USA, 28th International Conference on Machine Learning (ICML), July 2011 (inproceedings)

Abstract
Most results for online decision problems with structured concepts, such as trees or cuts, assume linear costs. In many settings, however, nonlinear costs are more realistic. Owing to their non-separability, these lead to much harder optimization problems. Going beyond linearity, we address online approximation algorithms for structured concepts that allow the cost to be submodular, i.e., nonseparable. In particular, we show regret bounds for three Hannan-consistent strategies that capture different settings. Our results also tighten a regret bound for unconstrained online submodular minimization.
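
For intuition, a follow-the-perturbed-leader style strategy can be run over a small, explicitly enumerated set of structured concepts with nonlinear (here submodular, coverage-style) per-round costs. Enumerating concepts is only feasible at toy scale, which is exactly what the paper avoids; the concept set, cost function, and perturbation scale below are illustrative assumptions, not the paper's construction.

# Follow-the-perturbed-leader over a toy set of "structured concepts" (edge subsets)
# with a submodular per-round cost sqrt(|S ∩ E_t|) under full-information feedback.
import numpy as np

def fpl_play(past_costs, rng, eta=1.0):
    """Pick the concept minimizing accumulated cost plus a random perturbation."""
    perturbed = past_costs + eta * rng.standard_normal(len(past_costs))
    return int(np.argmin(perturbed))

concepts = [frozenset({0, 1}), frozenset({1, 2, 3}), frozenset({3, 4})]
rng = np.random.default_rng(0)
past_costs = np.zeros(len(concepts))
total = 0.0
for t in range(200):
    choice = fpl_play(past_costs, rng)
    E_t = frozenset(rng.choice(5, size=2, replace=False).tolist())   # this round's "active" edges
    costs = np.array([np.sqrt(len(S & E_t)) for S in concepts])      # submodular in S
    total += costs[choice]
    past_costs += costs                                              # full-information feedback
print(round(total / 200, 3), sorted(concepts[int(np.argmin(past_costs))]))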

ei

PDF PDF Web [BibTex]
