

2019


Towards Geometric Understanding of Motion

Ranjan, A.

University of Tübingen, December 2019 (phdthesis)

Abstract

The motion of the world is inherently dependent on the spatial structure of the world and its geometry. Therefore, classical optical flow methods try to model this geometry to solve for the motion. However, recent deep learning methods take a completely different approach. They try to predict optical flow by learning from labelled data. Although deep networks have shown state-of-the-art performance on classification problems in computer vision, they have not been as effective in solving optical flow. The key reason is that deep learning methods do not explicitly model the structure of the world in a neural network, and instead expect the network to learn about the structure from data. We hypothesize that it is difficult for a network to learn about motion without any constraint on the structure of the world. Therefore, we explore several approaches to explicitly model the geometry of the world and its spatial structure in deep neural networks.

The spatial structure in images can be captured by representing it at multiple scales. To represent multiple scales of images in deep neural nets, we introduce a Spatial Pyramid Network (SPyNet). Such a network can leverage global information for estimating large motions and local information for estimating small motions. We show that SPyNet significantly improves over previous optical flow networks while also being the smallest and fastest neural network for motion estimation. SPyNet achieves a 97% reduction in model parameters over previous methods and is more accurate.

The spatial structure of the world extends to people and their motion. Humans have a very well-defined structure, and this information is useful in estimating optical flow for humans. To leverage this information, we create a synthetic dataset for human optical flow using a statistical human body model and motion capture sequences. We use this dataset to train deep networks and see significant improvement in the ability of the networks to estimate human optical flow.

The structure and geometry of the world affect the motion. Therefore, learning about the structure of the scene together with the motion can benefit both problems. To facilitate this, we introduce Competitive Collaboration, where several neural networks are constrained by geometry and can jointly learn about structure and motion in the scene without any labels. Using this framework, we show that jointly learning single view depth prediction, camera motion, optical flow and motion segmentation achieves state-of-the-art results among unsupervised approaches.

Our findings provide support for our hypothesis that explicit constraints on structure and geometry of the world lead to better methods for motion estimation.
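
To make the coarse-to-fine idea behind SPyNet described in the abstract above concrete, here is a minimal sketch (not the thesis code) of spatial-pyramid flow estimation: flow is computed at the coarsest scale first, then upsampled and refined by a residual estimate at each finer scale. The estimate_residual_flow function is a hypothetical placeholder for the learned per-level network, and OpenCV is assumed only for resizing and warping.

```python
# Minimal sketch of coarse-to-fine flow estimation in a spatial pyramid.
# The per-level estimator below is a placeholder for a learned network.
import numpy as np
import cv2  # OpenCV, used only for resizing and warping

def estimate_residual_flow(img1, img2_warped):
    # Hypothetical stand-in for the learned network at one pyramid level.
    # A real SPyNet level is a small CNN; here we return zero residual flow.
    h, w = img1.shape[:2]
    return np.zeros((h, w, 2), dtype=np.float32)

def warp(img, flow):
    h, w = flow.shape[:2]
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    return cv2.remap(img, map_x, map_y, interpolation=cv2.INTER_LINEAR)

def pyramid_flow(img1, img2, levels=3):
    # Build image pyramids, coarsest level first.
    pyr1, pyr2 = [img1], [img2]
    for _ in range(levels - 1):
        pyr1.insert(0, cv2.pyrDown(pyr1[0]))
        pyr2.insert(0, cv2.pyrDown(pyr2[0]))
    flow = np.zeros((*pyr1[0].shape[:2], 2), dtype=np.float32)
    for i1, i2 in zip(pyr1, pyr2):
        # Upsample the flow from the coarser level and double its magnitude.
        if flow.shape[:2] != i1.shape[:2]:
            flow = 2.0 * cv2.resize(flow, (i1.shape[1], i1.shape[0]))
        # Warp the second image towards the first with the current flow,
        # then let the per-level estimator predict only the residual motion.
        flow = flow + estimate_residual_flow(i1, warp(i2, flow))
    return flow
```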

ps

PhD Thesis [BibTex]

Robot Learning for Muscular Robots

Büchler, D.

Technical University Darmstadt, Germany, December 2019 (phdthesis)

ei

[BibTex]

Real Time Probabilistic Models for Robot Trajectories

Gomez-Gonzalez, S.

Technical University Darmstadt, Germany, December 2019 (phdthesis)

ei

[BibTex]

Fast and Resource-Efficient Control of Wireless Cyber-Physical Systems

Baumann, D.

KTH Royal Institute of Technology, Stockholm, February 2019 (phdthesis)

ics

PDF [BibTex]

Learning Transferable Representations

Rojas-Carulla, M.

University of Cambridge, UK, 2019 (phdthesis)

ei

[BibTex]

Sample-efficient deep reinforcement learning for continuous control

Gu, S.

University of Cambridge, UK, 2019 (phdthesis)

ei

[BibTex]


X-ray microscopic characterization of high-Tc superconductors using image processing

Bihler, M.

Universität Stuttgart, Stuttgart, 2019 (mastersthesis)

mms

[BibTex]


Spatial Filtering based on Riemannian Manifold for Brain-Computer Interfacing

Xu, J.

Technical University of Munich, Germany, 2019 (mastersthesis)

ei

[BibTex]

Novel X-ray lenses for direct and coherent imaging

Sanli, U. T.

Universität Stuttgart, Stuttgart, 2019 (phdthesis)

mms

link (url) DOI [BibTex]

Quantification of tumor heterogeneity using PET/MRI and machine learning

Katiyar, P.

Eberhard Karls Universität Tübingen, Germany, 2019 (phdthesis)

ei

[BibTex]

Actively Learning Dynamical Systems with Gaussian Processes

Buisson-Fenet, M.

Mines ParisTech, PSL Research University, 2019 (mastersthesis)

Abstract
Predicting the behavior of complex systems is of great importance in many fields such as engineering, economics or meteorology. The evolution of such systems often follows a certain structure, which can be induced, for example from the laws of physics or of market forces. Mathematically, this structure is often captured by differential equations. The internal functional dependencies, however, are usually unknown. Hence, using machine learning approaches that recreate this structure directly from data is a promising alternative to designing physics-based models. In particular, for high dimensional systems with nonlinear effects, this can be a challenging task. Learning dynamical systems is different from the classical machine learning tasks, such as image processing, and necessitates different tools. Indeed, dynamical systems can be actuated, often by applying torques or voltages. Hence, the user has a power of decision over the system, and can drive it to certain states by going through the dynamics. Actuating this system generates data, from which a machine learning model of the dynamics can be trained. However, gathering informative data that is representative of the whole state space remains a challenging task. The question of active learning then becomes important: which control inputs should be chosen by the user so that the data generated during an experiment is informative, and enables efficient training of the dynamics model? In this context, Gaussian processes can be a useful framework for approximating system dynamics. Indeed, they perform well on small and medium sized data sets, as opposed to most other machine learning frameworks. This is particularly important considering data is often costly to generate and process, most of all when producing it involves actuating a complex physical system. Gaussian processes also yield a notion of uncertainty, which indicates how sure the model is about its predictions. In this work, we investigate in a principled way how to actively learn dynamical systems, by selecting control inputs that generate informative data. We model the system dynamics by a Gaussian process, and use information-theoretic criteria to identify control trajectories that maximize the information gain. Thus, the input space can be explored efficiently, leading to a data-efficient training of the model. We propose several methods, investigate their theoretical properties and compare them extensively in a numerical benchmark. The final method proves to be efficient at generating informative data. Thus, it yields the lowest prediction error with the same amount of samples on most benchmark systems. We propose several variants of this method, allowing the user to trade off computations with prediction accuracy, and show it is versatile enough to take additional objectives into account.
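
As a rough illustration of the active-learning loop described in the abstract, the sketch below greedily picks the control input at which a Gaussian process dynamics model is currently most uncertain, applies it, and retrains. The 1-D true_dynamics system and the greedy-variance criterion are illustrative assumptions, not the thesis' exact information-theoretic method.

```python
# Sketch: greedily pick the control input where the GP dynamics model is most
# uncertain, apply it, and retrain -- a simple active-learning loop.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def true_dynamics(x, u):
    # Hypothetical 1-D system used only to generate data for this sketch.
    return 0.9 * x + np.sin(u)

rng = np.random.default_rng(0)
x = 0.0
X_train, y_train = [], []
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-4)

candidate_inputs = np.linspace(-2.0, 2.0, 41)
for step in range(20):
    if X_train:
        gp.fit(np.array(X_train), np.array(y_train))
        # Predictive standard deviation at candidate (state, input) pairs.
        cand = np.column_stack([np.full_like(candidate_inputs, x), candidate_inputs])
        _, std = gp.predict(cand, return_std=True)
        u = candidate_inputs[np.argmax(std)]   # most informative input
    else:
        u = rng.uniform(-2.0, 2.0)             # no data yet: explore randomly
    x_next = true_dynamics(x, u)
    X_train.append([x, u])
    y_train.append(x_next)
    x = x_next
```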

ics

[BibTex]


2017


Human Shape Estimation using Statistical Body Models

Loper, M. M.

University of Tübingen, May 2017 (thesis)

Abstract
Human body estimation methods transform real-world observations into predictions about human body state. These estimation methods benefit a variety of health, entertainment, clothing, and ergonomics applications. State may include pose, overall body shape, and appearance. Body state estimation is underconstrained by observations; ambiguity presents itself both in the form of missing data within observations, and also in the form of unknown correspondences between observations. We address this challenge with the use of a statistical body model: a data-driven virtual human. This helps resolve ambiguity in two ways. First, it fills in missing data, meaning that incomplete observations still result in complete shape estimates. Second, the model provides a statistically-motivated penalty for unlikely states, which enables more plausible body shape estimates. Body state inference requires more than a body model; we therefore build observation models whose output is compared with real observations. In this thesis, body state is estimated from three types of observations: 3D motion capture markers, depth and color images, and high-resolution 3D scans. In each case, a forward process is proposed which simulates observations. By comparing observations to the results of the forward process, state can be adjusted to minimize the difference between simulated and observed data. We use gradient-based methods because they are critical to the precise estimation of state with a large number of parameters. The contributions of this work include three parts. First, we propose a method for the estimation of body shape, nonrigid deformation, and pose from 3D markers. Second, we present a concise approach to differentiating through the rendering process, with application to body shape estimation. And finally, we present a statistical body model trained from human body scans, with state-of-the-art fidelity, good runtime performance, and compatibility with existing animation packages.
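
The analysis-by-synthesis loop described above (simulate observations from state, compare with real observations, adjust state by gradient-based optimization) can be sketched on a toy forward model as follows; the forward process, data and prior weight are hypothetical stand-ins, not the thesis' body or sensor models.

```python
# Sketch of the analysis-by-synthesis loop: a forward process simulates
# observations from state, and a data term plus a prior penalty are minimized.
import numpy as np
from scipy.optimize import minimize

observed_markers = np.array([1.0, 2.1, 2.9])      # hypothetical marker data

def forward(state):
    # Toy forward process: predicted marker positions as a function of state.
    a, b = state
    return a * np.array([1.0, 2.0, 3.0]) + b

def objective(state):
    residual = forward(state) - observed_markers
    data_term = np.sum(residual ** 2)
    prior_term = 0.01 * np.sum(state ** 2)         # penalty on unlikely states
    return data_term + prior_term

result = minimize(objective, x0=np.zeros(2), method="L-BFGS-B")
print(result.x)   # state minimizing the simulated-vs-observed difference
```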

ps

Official Version [BibTex]


Chapter 8 - Micro- and nanorobots in Newtonian and biological viscoelastic fluids

Palagi, S., (Walker) Schamel, D., Qiu, T., Fischer, P.

In Microbiorobotics, pages: 133 - 162, 8, Micro and Nano Technologies, Second edition, Elsevier, Boston, March 2017 (incollection)

Abstract
Swimming microorganisms are a source of inspiration for small scale robots that are intended to operate in fluidic environments including complex biomedical fluids. Nature has devised swimming strategies that are effective at small scales and at low Reynolds number. These include the rotary corkscrew motion that, for instance, propels a flagellated bacterial cell, as well as the asymmetric beat of appendages that sperm cells or ciliated protozoa use to move through fluids. These mechanisms can overcome the reciprocity that governs the hydrodynamics at small scale. The complex molecular structure of biologically important fluids presents an additional challenge for the effective propulsion of microrobots. In this chapter it is shown how physical and chemical approaches are essential in realizing engineered abiotic micro- and nanorobots that can move in biomedically important environments. Interestingly, we also describe a microswimmer that has no natural analogue yet is effective in biological viscoelastic fluids.
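
A quick back-of-the-envelope calculation illustrates the low-Reynolds-number regime discussed in the chapter; the swimmer size and speed below are illustrative assumptions for a micron-scale device in water.

```python
# Back-of-the-envelope Reynolds number for a micron-scale swimmer in water.
# Re = rho * v * L / mu ; the numbers below are illustrative assumptions.
rho = 1000.0      # water density, kg/m^3
mu = 1.0e-3       # water dynamic viscosity, Pa*s
L = 10e-6         # characteristic swimmer length, 10 micrometres
v = 30e-6         # swimming speed, 30 micrometres per second

Re = rho * v * L / mu
print(f"Re = {Re:.1e}")   # ~3e-4, so viscous forces dominate inertia
```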

pf

link (url) DOI [BibTex]

Appealing Avatars from 3D Body Scans: Perceptual Effects of Stylization

Fleming, R., Mohler, B. J., Romero, J., Black, M. J., Breidt, M.

In Computer Vision, Imaging and Computer Graphics Theory and Applications: 11th International Joint Conference, VISIGRAPP 2016, Rome, Italy, February 27 – 29, 2016, Revised Selected Papers, pages: 175-196, Springer International Publishing, 2017 (inbook)

Abstract
Using styles derived from existing popular character designs, we present a novel automatic stylization technique for body shape and colour information based on a statistical 3D model of human bodies. We investigate whether such stylized body shapes result in increased perceived appeal with two different experiments: One focuses on body shape alone, the other investigates the additional role of surface colour and lighting. Our results consistently show that the most appealing avatar is a partially stylized one. Importantly, avatars with high stylization or no stylization at all were rated to have the least appeal. The inclusion of colour information and improvements to render quality had no significant effect on the overall perceived appeal of the avatars, and we observe that the body shape primarily drives the change in appeal ratings. For body scans with colour information, we found that a partially stylized avatar was perceived as most appealing.

ps

publisher site pdf DOI [BibTex]

publisher site pdf DOI [BibTex]


no image
Robot Learning

Peters, J., Lee, D., Kober, J., Nguyen-Tuong, D., Bagnell, J., Schaal, S.

In Springer Handbook of Robotics, pages: 357-394, 15, 2nd, (Editors: Siciliano, Bruno and Khatib, Oussama), Springer International Publishing, 2017 (inbook)

am ei

Project Page [BibTex]

Learning to Filter Object Detections

Prokudin, S., Kappler, D., Nowozin, S., Gehler, P.

In Pattern Recognition: 39th German Conference, GCPR 2017, Basel, Switzerland, September 12–15, 2017, Proceedings, pages: 52-62, Springer International Publishing, Cham, 2017 (inbook)

Abstract
Most object detection systems consist of three stages. First, a set of individual hypotheses for object locations is generated using a proposal generating algorithm. Second, a classifier scores every generated hypothesis independently to obtain a multi-class prediction. Finally, all scored hypotheses are filtered via a non-differentiable and decoupled non-maximum suppression (NMS) post-processing step. In this paper, we propose a filtering network (FNet), a method which replaces NMS with a differentiable neural network that allows joint reasoning and re-scoring of the generated set of hypotheses per image. This formulation enables end-to-end training of the full object detection pipeline. First, we demonstrate that FNet, a feed-forward network architecture, is able to mimic NMS decisions, despite the sequential nature of NMS. We further analyze NMS failures and propose a loss formulation that is better aligned with the mean average precision (mAP) evaluation metric. We evaluate FNet on several standard detection datasets. Results surpass standard NMS on highly occluded settings of a synthetic overlapping MNIST dataset and show competitive behavior on PascalVOC2007 and KITTI detection benchmarks.
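
For reference, the sketch below shows the classical greedy non-maximum suppression step that FNet is designed to replace (not the FNet network itself); the IoU threshold is an assumed typical value.

```python
# Reference sketch of greedy non-maximum suppression (NMS), the decoupled
# post-processing step that the paper replaces with a learned filtering network.
import numpy as np

def iou(box, boxes):
    # Boxes as [x1, y1, x2, y2]; returns IoU of `box` with each row of `boxes`.
    x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = lambda b: (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
    return inter / (area(box) + area(boxes) - inter)

def nms(boxes, scores, iou_threshold=0.5):
    # boxes: (N, 4) array, scores: (N,) array; returns indices of kept boxes.
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        best = order[0]
        keep.append(best)
        rest = order[1:]
        # Drop hypotheses that overlap the best-scoring one too strongly.
        order = rest[iou(boxes[best], boxes[rest]) <= iou_threshold]
    return keep
```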

ps

Paper link (url) DOI Project Page [BibTex]

Policy Gradient Methods

Peters, J., Bagnell, J.

In Encyclopedia of Machine Learning and Data Mining, pages: 982-985, 2nd, (Editors: Sammut, Claude and Webb, Geoffrey I.), Springer US, 2017 (inbook)

ei

link (url) Project Page [BibTex]

Unsupervised clustering of EOG as a viable substitute for optical eye-tracking

Flad, N., Fomina, T., Bülthoff, H. H., Chuang, L. L.

In First Workshop on Eye Tracking and Visualization (ETVIS 2015), pages: 151-167, Mathematics and Visualization, (Editors: Burch, M., Chuang, L., Fisher, B., Schmidt, A., and Weiskopf, D.), Springer, 2017 (inbook)

ei

DOI [BibTex]

Learning Inference Models for Computer Vision

Jampani, V.

MPI for Intelligent Systems and University of Tübingen, 2017 (phdthesis)

Abstract
Computer vision can be understood as the ability to perform 'inference' on image data. Breakthroughs in computer vision technology are often marked by advances in inference techniques, as even the model design is often dictated by the complexity of inference in them. This thesis proposes learning-based inference schemes and demonstrates applications in computer vision. We propose techniques for inference in both generative and discriminative computer vision models. Despite their intuitive appeal, the use of generative models in vision is hampered by the difficulty of posterior inference, which is often too complex or too slow to be practical. We propose techniques for improving inference in two widely used methods: Markov Chain Monte Carlo (MCMC) sampling and message-passing inference. Our inference strategy is to learn separate discriminative models that assist Bayesian inference in a generative model. Experiments on a range of generative vision models show that the proposed techniques accelerate the inference process and/or converge to better solutions. A main complication in the design of discriminative models is the inclusion of prior knowledge in a principled way. For better inference in discriminative models, we propose techniques that modify the original model itself, as inference is simple evaluation of the model. We concentrate on convolutional neural network (CNN) models and propose a generalization of standard spatial convolutions, which are the basic building blocks of CNN architectures, to bilateral convolutions. First, we generalize the existing use of bilateral filters and then propose new neural network architectures with learnable bilateral filters, which we call 'Bilateral Neural Networks'. We show how the bilateral filtering modules can be used for modifying existing CNN architectures for better image segmentation and propose a neural network approach for temporal information propagation in videos. Experiments demonstrate the potential of the proposed bilateral networks on a wide range of vision tasks and datasets. In summary, we propose learning-based techniques for better inference in several computer vision models ranging from inverse graphics to freely parameterized neural networks. In generative vision models, our inference techniques alleviate some of the crucial hurdles in Bayesian posterior inference, paving new ways for the use of model-based machine learning in vision. In discriminative CNN models, the proposed filter generalizations aid in the design of new neural network architectures that can handle sparse high-dimensional data as well as provide a way for incorporating prior knowledge into CNNs.
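
As background for the bilateral convolutions discussed above, the following is a minimal sketch of a classical (non-learned) bilateral filter on a 1-D signal, where weights depend on both spatial distance and value difference; it only illustrates the underlying filtering idea, not the thesis' learnable, high-dimensional implementation.

```python
# Sketch of a classical bilateral filter on a 1-D signal: weights depend on
# spatial distance *and* value (range) distance.  The thesis generalizes this
# kind of filtering to learnable bilateral convolutions; this is the plain case.
import numpy as np

def bilateral_filter_1d(signal, sigma_spatial=2.0, sigma_range=0.1, radius=5):
    out = np.empty_like(signal, dtype=float)
    idx = np.arange(len(signal))
    for i in range(len(signal)):
        lo, hi = max(0, i - radius), min(len(signal), i + radius + 1)
        window = signal[lo:hi]
        spatial_w = np.exp(-((idx[lo:hi] - i) ** 2) / (2 * sigma_spatial ** 2))
        range_w = np.exp(-((window - signal[i]) ** 2) / (2 * sigma_range ** 2))
        w = spatial_w * range_w
        out[i] = np.sum(w * window) / np.sum(w)
    return out

# Usage: smooth a noisy step while keeping the edge sharp.
x = np.concatenate([np.zeros(50), np.ones(50)]) + 0.05 * np.random.randn(100)
y = bilateral_filter_1d(x)
```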

ps

pdf [BibTex]

Statistical Asymmetries Between Cause and Effect

Janzing, D.

In Time in Physics, pages: 129-139, Tutorials, Schools, and Workshops in the Mathematical Sciences, (Editors: Renner, Renato and Stupar, Sandra), Springer International Publishing, Cham, 2017 (inbook)

ei

link (url) DOI [BibTex]

Robot Learning

Peters, J., Tedrake, R., Roy, N., Morimoto, J.

In Encyclopedia of Machine Learning and Data Mining, pages: 1106-1109, 2nd, (Editors: Sammut, Claude and Webb, Geoffrey I.), Springer US, 2017 (inbook)

ei

DOI Project Page [BibTex]

Development and Evaluation of a Portable BCI System for Remote Data Acquisition

Emde, T.

Graduate School of Neural Information Processing, Eberhard Karls Universität Tübingen, Germany, 2017 (mastersthesis)

ei

[BibTex]

Brain-Computer Interfaces for patients with Amyotrophic Lateral Sclerosis

Fomina, T.

Eberhard Karls Universität Tübingen, Germany, 2017 (phdthesis)

ei

[BibTex]

Decentralized Simultaneous Multi-target Exploration using a Connected Network of Multiple Robots

Nestmeyer, T., Robuffo Giordano, P., Bülthoff, H. H., Franchi, A.

In Autonomous Robots, pages: 989-1011, 2017 (incollection)

ps

[BibTex]

Causal models for decision making via integrative inference

Geiger, P.

University of Stuttgart, Germany, 2017 (phdthesis)

ei

[BibTex]

Capturing Hand-Object Interaction and Reconstruction of Manipulated Objects

Tzionas, D.

University of Bonn, 2017 (phdthesis)

Abstract
Hand motion capture with an RGB-D sensor has recently gained a lot of research attention; however, even the most recent approaches focus on the case of a single isolated hand. We focus instead on hands that interact with other hands or with a rigid or articulated object. Our framework successfully captures motion in such scenarios by combining a generative model with discriminatively trained salient points, collision detection and physics simulation to achieve a low tracking error with physically plausible poses. All components are unified in a single objective function that can be optimized with standard optimization techniques. We initially assume a priori knowledge of the object's shape and skeleton. In case of unknown object shape, there are existing 3D reconstruction methods that capitalize on distinctive geometric or texture features. These methods, however, fail for textureless and highly symmetric objects like household articles, mechanical parts or toys. We show that extracting 3D hand motion for in-hand scanning effectively facilitates the reconstruction of such objects and we fuse the rich additional information of hands into a 3D reconstruction pipeline. Finally, although shape reconstruction is enough for rigid objects, there is a lack of tools that build rigged models of articulated objects that deform realistically using RGB-D data. We propose a method that creates a fully rigged model consisting of a watertight mesh, embedded skeleton and skinning weights by employing a combination of deformable mesh tracking, motion segmentation based on spectral clustering and skeletonization based on mean curvature flow.

ps

Thesis link (url) Project Page [BibTex]


Evaluation of the passive dynamics of compliant legs with inertia

Györfi, B.

University of Applied Science Pforzheim, Germany, 2017 (mastersthesis)

dlg

[BibTex]

Learning Optimal Configurations for Modeling Frowning by Transcranial Electrical Stimulation

Sücker, K.

Graduate School of Neural Information Processing, Eberhard Karls Universität Tübingen, Germany, 2017 (mastersthesis)

ei

[BibTex]

Momentum-Centered Control of Contact Interactions

Righetti, L., Herzog, A.

In Geometric and Numerical Foundations of Movements, 117, pages: 339-359, Springer Tracts in Advanced Robotics, Springer, Cham, 2017 (incollection)

mg

link (url) [BibTex]

Understanding FORC using synthetic micro-structured systems with variable coupling- and coercive-field distributions

Groß, Felix

Universität Stuttgart, Stuttgart, 2017 (mastersthesis)

mms

[BibTex]


Adsorption von Wasserstoffmolekülen in nanoporösen Gerüststrukturen (Adsorption of Hydrogen Molecules in Nanoporous Framework Structures)

Kotzur, Nadine

Universität Stuttgart, Stuttgart, 2017 (mastersthesis)

mms

[BibTex]


2015


Untethered Magnetic Micromanipulation

Diller, E., Sitti, M.

In Micro- and Nanomanipulation Tools, 13, 10, Wiley-VCH Verlag GmbH & Co. KGaA, November 2015 (inbook)

Abstract
This chapter discusses the methods and state of the art in microscale manipulation in remote environments using untethered microrobotic devices. It focuses on manipulation at the size scale of tens to hundreds of microns, where small size leads to a dominance of microscale physical effects and challenges in fabrication and actuation. To motivate the challenges of operating at this size scale, the chapter includes coverage of the physical forces relevant to microrobot motion and manipulation below the millimeter-size scale. It then introduces the actuation methods commonly used in untethered manipulation schemes, with particular focus on magnetic actuation due to its wide use in the field. The chapter divides these manipulation techniques into two types: contact manipulation, which relies on direct pushing or grasping of objects for motion, and noncontact manipulation, which relies indirectly on induced fluid flow from the microrobot motion to move objects without any direct contact.

pi

DOI Project Page [BibTex]

2015


DOI Project Page [BibTex]


no image
easyGWAS: An Integrated Computational Framework for Advanced Genome-Wide Association Studies

Grimm, Dominik

Eberhard Karls Universität Tübingen, November 2015 (phdthesis)

ei

[BibTex]

Causal Discovery Beyond Conditional Independences

Sgouritsa, E.

Eberhard Karls Universität Tübingen, Germany, October 2015 (phdthesis)

ei

link (url) [BibTex]

Gaussian Process Optimization for Self-Tuning Control

Marco, A.

Polytechnic University of Catalonia (BarcelonaTech), October 2015 (mastersthesis)

am ics

PDF Project Page [BibTex]

From Points to Probability Measures: A Statistical Learning on Distributions with Kernel Mean Embedding

Muandet, K.

University of Tübingen, Germany, September 2015 (phdthesis)

ei

[BibTex]

Machine Learning Approaches to Image Deconvolution

Schuler, C.

University of Tübingen, Germany, September 2015 (phdthesis)

ei

[BibTex]

Adaptive and Learning Concepts in Hydraulic Force Control

Doerr, A.

University of Stuttgart, September 2015 (mastersthesis)

am ics

[BibTex]

Kernel methods in medical imaging

Charpiat, G., Hofmann, M., Schölkopf, B.

In Handbook of Biomedical Imaging, pages: 63-81, 4, (Editors: Paragios, N., Duncan, J. and Ayache, N.), Springer, Berlin, Germany, June 2015 (inbook)

ei

Web link (url) [BibTex]

Object Detection Using Deep Learning - Learning where to search using visual attention

Kloss, A.

Eberhard Karls Universität Tübingen, May 2015 (mastersthesis)

Abstract
Detecting and identifying the different objects in an image fast and reliably is an important skill for interacting with one’s environment. The main problem is that in theory, all parts of an image have to be searched for objects on many different scales to make sure that no object instance is missed. It however takes considerable time and effort to actually classify the content of a given image region and both time and computational capacities that an agent can spend on classification are limited. Humans use a process called visual attention to quickly decide which locations of an image need to be processed in detail and which can be ignored. This allows us to deal with the huge amount of visual information and to employ the capacities of our visual system efficiently. For computer vision, researchers have to deal with exactly the same problems, so learning from the behaviour of humans provides a promising way to improve existing algorithms. In the presented master’s thesis, a model is trained with eye tracking data recorded from 15 participants that were asked to search images for objects from three different categories. It uses a deep convolutional neural network to extract features from the input image that are then combined to form a saliency map. This map provides information about which image regions are interesting when searching for the given target object and can thus be used to reduce the parts of the image that have to be processed in detail. The method is based on a recent publication of Kümmerer et al., but in contrast to the original method that computes general, task independent saliency, the presented model is supposed to respond differently when searching for different target categories.
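
A much-simplified sketch of the idea described above, turning CNN feature maps into a search-target-dependent saliency map via a per-category linear readout; the feature maps, weights and normalization here are hypothetical and do not reproduce the thesis' model.

```python
# Sketch: combine CNN feature maps into a target-dependent saliency map using
# a per-category linear readout.  All inputs below are hypothetical.
import numpy as np

def saliency_map(feature_maps, weights):
    # feature_maps: array of shape (C, H, W) from a convolutional network.
    # weights:      per-channel weights (C,) learned for one target category.
    s = np.tensordot(weights, feature_maps, axes=1)   # weighted sum -> (H, W)
    s = s - s.max()
    p = np.exp(s)
    return p / p.sum()          # normalized map: where to look first

# Hypothetical usage with random features and weights for one target category.
features = np.random.rand(64, 32, 32)
w_category = np.random.randn(64)
fixation_prior = saliency_map(features, w_category)
```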

am

PDF Project Page [BibTex]


Blind Retrospective Motion Correction of MR Images

Loktyushin, A.

University of Tübingen, Germany, May 2015 (phdthesis)

ei

[BibTex]

Robot Arm Tracking with Random Decision Forests

Widmaier, F.

Eberhard-Karls-Universität Tübingen, May 2015 (mastersthesis)

Abstract
For grasping and manipulation with robot arms, knowing the current pose of the arm is crucial for successfully controlling its motion. Often, pose estimates can be acquired from encoders inside the arm, but they can have significant inaccuracy, which makes the use of additional techniques necessary. In this master's thesis, a novel approach to robot arm pose estimation is presented that works on single depth images without the need of prior foreground segmentation or other preprocessing steps. A random regression forest is used, which is trained only on synthetically generated data. The approach improves on former work by Bohg et al. by considerably reducing the computational effort both at training and test time. The forest in the new method directly estimates the desired joint angles, while in the former approach, the forest casts 3D position votes for the joints, which then have to be clustered and fed into an iterative inverse kinematics process to finally obtain the joint angles. To improve the estimation accuracy, the standard training objective of the forest is replaced by a specialized function that makes use of a model-dependent distance metric, called DISP. Experimental results show that the specialized objective indeed improves pose estimation, and that the method, despite being trained on synthetic data only, is able to provide reasonable estimates for real data at test time.
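
A toy analogue of the direct-regression idea described above: a random regression forest trained purely on synthetic data that maps (here fabricated) depth-image features directly to joint angles. The synthetic feature generator and the forest hyperparameters are illustrative assumptions, not the thesis' setup.

```python
# Sketch: a random regression forest mapping synthetic depth-image features
# directly to joint angles -- a toy analogue of the direct-regression idea.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

def synthetic_sample(n_joints=4, n_features=32):
    # Hypothetical "renderer": random joint angles plus noisy depth features.
    angles = rng.uniform(-np.pi, np.pi, size=n_joints)
    features = np.concatenate([np.sin(angles), np.cos(angles),
                               rng.normal(0, 0.1, size=n_features - 2 * n_joints)])
    return features, angles

X, Y = zip(*(synthetic_sample() for _ in range(2000)))
forest = RandomForestRegressor(n_estimators=50, max_depth=12, random_state=0)
forest.fit(np.array(X), np.array(Y))          # multi-output regression

test_features, true_angles = synthetic_sample()
predicted_angles = forest.predict([test_features])[0]
```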

am

PDF Project Page [BibTex]

Lernende Roboter (Learning Robots)

Trimpe, S.

In Jahrbuch der Max-Planck-Gesellschaft, Max Planck Society, May 2015, (popular science article in German) (inbook)

am ics

link (url) [BibTex]
