

2020


Learning Unsupervised Hierarchical Part Decomposition of 3D Objects from a Single RGB Image

Paschalidou, D., Gool, L., Geiger, A.

In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020 (inproceedings)

Abstract
Humans perceive the 3D world as a set of distinct objects that are characterized by various low-level (geometry, reflectance) and high-level (connectivity, adjacency, symmetry) properties. Recent methods based on convolutional neural networks (CNNs) have demonstrated impressive progress in 3D reconstruction, even when using a single 2D image as input. However, the majority of these methods focus on recovering the local 3D geometry of an object without considering its part-based decomposition or relations between parts. We address this challenging problem by proposing a novel formulation that allows us to jointly recover the geometry of a 3D object as a set of primitives as well as their latent hierarchical structure without part-level supervision. Our model recovers the higher-level structural decomposition of various objects in the form of a binary tree of primitives, where simple parts are represented with fewer primitives and more complex parts are modeled with more components. Our experiments on the ShapeNet and D-FAUST datasets demonstrate that considering the organization of parts indeed facilitates reasoning about 3D geometry.

avg

pdf suppmat Video Project Page [BibTex]



Towards Unsupervised Learning of Generative Models for 3D Controllable Image Synthesis

Liao, Y., Schwarz, K., Mescheder, L., Geiger, A.

In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020 (inproceedings)

Abstract
In recent years, Generative Adversarial Networks have achieved impressive results in photorealistic image synthesis. This progress nurtures hopes that one day the classical rendering pipeline can be replaced by efficient models that are learned directly from images. However, current image synthesis models operate in the 2D domain where disentangling 3D properties such as camera viewpoint or object pose is challenging. Furthermore, they lack an interpretable and controllable representation. Our key hypothesis is that the image generation process should be modeled in 3D space as the physical world surrounding us is intrinsically three-dimensional. We define the new task of 3D controllable image synthesis and propose an approach for solving it by reasoning both in 3D space and in the 2D image domain. We demonstrate that our model is able to disentangle latent 3D factors of simple multi-object scenes in an unsupervised fashion from raw images. Compared to pure 2D baselines, it allows for synthesizing scenes that are consistent with respect to changes in viewpoint or object pose. We further evaluate various 3D representations in terms of their usefulness for this challenging task.

avg

pdf suppmat Video Project Page [BibTex]



Exploring Data Aggregation in Policy Learning for Vision-based Urban Autonomous Driving

Prakash, A., Behl, A., Ohn-Bar, E., Chitta, K., Geiger, A.

In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020 (inproceedings)

Abstract
Data aggregation techniques can significantly improve vision-based policy learning within a training environment, e.g., learning to drive in a specific simulation condition. However, as on-policy data is sequentially sampled and added in an iterative manner, the policy can specialize and overfit to the training conditions. For real-world applications, it is useful for the learned policy to generalize to novel scenarios that differ from the training conditions. To improve policy learning while maintaining robustness when training end-to-end driving policies, we perform an extensive analysis of data aggregation techniques in the CARLA environment. We demonstrate that the majority of them generalize poorly, and develop a novel approach with empirically better generalization performance than existing techniques. Our two key ideas are (1) to sample critical states from the collected on-policy data based on the utility they provide to the learned policy in terms of driving behavior, and (2) to incorporate a replay buffer which progressively focuses on the high uncertainty regions of the policy's state distribution. We evaluate the proposed approach on the CARLA NoCrash benchmark, focusing on the most challenging driving scenarios with dense pedestrian and vehicle traffic. Our approach improves driving success rate by 16% over the state of the art, achieving 87% of the expert performance while also reducing the collision rate by an order of magnitude without the use of any additional modality, auxiliary tasks, architectural modifications or reward from the environment.
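As a rough illustration of the second idea, the sketch below keeps only the most uncertain on-policy samples in a replay buffer. The buffer capacity, the uncertainty measure, and all names are illustrative assumptions, not the paper's exact procedure.

```python
import heapq
import itertools
import random

class UncertaintyReplayBuffer:
    """Keeps the on-policy samples with the highest policy uncertainty (sketch)."""

    def __init__(self, capacity=50_000):
        self.capacity = capacity
        self._counter = itertools.count()   # tie-breaker for equal uncertainties
        self._heap = []                     # min-heap of (uncertainty, count, sample)

    def add(self, sample, uncertainty):
        item = (uncertainty, next(self._counter), sample)
        if len(self._heap) < self.capacity:
            heapq.heappush(self._heap, item)
        elif uncertainty > self._heap[0][0]:
            # Evict the least uncertain stored sample in favour of the new one.
            heapq.heapreplace(self._heap, item)

    def sample_batch(self, batch_size):
        picks = random.sample(self._heap, min(batch_size, len(self._heap)))
        return [sample for _, _, sample in picks]
```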

avg

pdf suppmat Video Project Page [BibTex]



On Joint Estimation of Pose, Geometry and svBRDF from a Handheld Scanner

Schmitt, C., Donne, S., Riegler, G., Koltun, V., Geiger, A.

In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020 (inproceedings)

Abstract
We propose a novel formulation for joint recovery of camera pose, object geometry and spatially-varying BRDF. The input to our approach is a sequence of RGB-D images captured by a mobile, hand-held scanner that actively illuminates the scene with point light sources. Compared to previous works that jointly estimate geometry and materials from a hand-held scanner, we formulate this problem using a single objective function that can be minimized using off-the-shelf gradient-based solvers. By integrating material clustering as a differentiable operation into the optimization process, we avoid pre-processing heuristics and demonstrate that our model is able to determine the correct number of specular materials independently. We provide a study on the importance of each component in our formulation and on the requirements of the initial geometry. We show that optimizing over the poses is crucial for accurately recovering fine details and that our approach naturally results in a semantically meaningful material segmentation.
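The "material clustering as a differentiable operation" can be pictured as a soft assignment of per-point material parameters to a small set of clusters. The sketch below uses a softmax relaxation with hypothetical names and temperature; it is not the paper's exact formulation.

```python
import torch

def soft_material_clustering(point_brdf, cluster_brdf, temperature=0.05):
    """Differentiably assign per-point BRDF parameters to material clusters.

    point_brdf:   (N, D) per-surface-point material parameters
    cluster_brdf: (K, D) cluster (material) parameters, optimized jointly
    """
    # Squared distances between points and cluster centers.
    dists = torch.cdist(point_brdf, cluster_brdf).pow(2)       # (N, K)
    weights = torch.softmax(-dists / temperature, dim=1)       # soft assignment
    # Effective per-point BRDF: convex combination of the cluster materials.
    return weights @ cluster_brdf
```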

avg

pdf Project Page [BibTex]



Learning Situational Driving

Ohn-Bar, E., Prakash, A., Behl, A., Chitta, K., Geiger, A.

In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020 (inproceedings)

Abstract
Human drivers have a remarkable ability to drive in diverse visual conditions and situations, e.g., from maneuvering in rainy, limited visibility conditions with no lane markings to turning in a busy intersection while yielding to pedestrians. In contrast, we find that state-of-the-art sensorimotor driving models struggle when encountering diverse settings with varying relationships between observation and action. To generalize when making decisions across diverse conditions, humans leverage multiple types of situation-specific reasoning and learning strategies. Motivated by this observation, we develop a framework for learning a situational driving policy that effectively captures reasoning under varying types of scenarios. Our key idea is to learn a mixture model with a set of policies that can capture multiple driving modes. We first optimize the mixture model through behavior cloning, and show it to result in significant gains in terms of driving performance in diverse conditions. We then refine the model by directly optimizing for the driving task itself, i.e., supervised with the navigation task reward. Our method is more scalable than methods assuming access to privileged information, e.g., perception labels, as it only assumes demonstration and reward-based supervision. We achieve over 98% success rate on the CARLA driving benchmark as well as state-of-the-art performance on a newly introduced generalization benchmark.
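A minimal sketch of the mixture-of-policies idea: a gating network weights several expert policies and the final command is their weighted combination. Dimensions, layer choices, and names are illustrative; the paper's architecture and its reward-based refinement stage are not reproduced here.

```python
import torch
import torch.nn as nn

class MixtureOfPolicies(nn.Module):
    """Mixture model over several driving policies (illustrative sketch)."""

    def __init__(self, feature_dim=512, action_dim=3, num_experts=3):
        super().__init__()
        self.experts = nn.ModuleList(
            [nn.Linear(feature_dim, action_dim) for _ in range(num_experts)])
        self.gate = nn.Linear(feature_dim, num_experts)

    def forward(self, features):
        weights = torch.softmax(self.gate(features), dim=-1)               # (B, E)
        actions = torch.stack([e(features) for e in self.experts], dim=1)  # (B, E, A)
        # Final control command: mixture-weighted combination of the experts.
        return (weights.unsqueeze(-1) * actions).sum(dim=1)                # (B, A)
```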

avg

pdf suppmat Video Project Page [BibTex]



Differentiable Volumetric Rendering: Learning Implicit 3D Representations without 3D Supervision

Niemeyer, M., Mescheder, L., Oechsle, M., Geiger, A.

In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020 (inproceedings)

Abstract
Learning-based 3D reconstruction methods have shown impressive results. However, most methods require 3D supervision which is often hard to obtain for real-world datasets. Recently, several works have proposed differentiable rendering techniques to train reconstruction models from RGB images. Unfortunately, these approaches are currently restricted to voxel- and mesh-based representations, suffering from discretization or low resolution. In this work, we propose a differentiable rendering formulation for implicit shape and texture representations. Implicit representations have recently gained popularity as they represent shape and texture continuously. Our key insight is that depth gradients can be derived analytically using the concept of implicit differentiation. This allows us to learn implicit shape and texture representations directly from RGB images. We experimentally show that our single-view reconstructions rival those learned with full 3D supervision. Moreover, we find that our method can be used for multi-view 3D reconstruction, directly resulting in watertight meshes.
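The "depth gradients derived analytically using the concept of implicit differentiation" can be sketched as follows (our notation, not the paper's): with an implicit field f_theta, a ray p_hat = r_0 + d_hat * w, and a surface condition f_theta(p_hat) = tau, differentiating the condition with respect to the network parameters gives

```latex
% Sketch of the implicit-differentiation step (our notation):
% surface condition  f_\theta(r_0 + \hat{d}\, w) = \tau
\frac{\partial f_\theta}{\partial \theta}(\hat{p})
  + \nabla_p f_\theta(\hat{p}) \cdot w \, \frac{\partial \hat{d}}{\partial \theta} = 0
\quad\Longrightarrow\quad
\frac{\partial \hat{d}}{\partial \theta}
  = -\left(\nabla_p f_\theta(\hat{p}) \cdot w\right)^{-1}
    \frac{\partial f_\theta}{\partial \theta}(\hat{p}),
```

so the depth of the predicted surface point can be backpropagated through without storing intermediate volumetric samples.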

avg

pdf suppmat Video Project Page [BibTex]


2018


On the Integration of Optical Flow and Action Recognition

Sevilla-Lara, L., Liao, Y., Güney, F., Jampani, V., Geiger, A., Black, M. J.

In German Conference on Pattern Recognition (GCPR), LNCS 11269, pages: 281-297, Springer, Cham, October 2018 (inproceedings)

Abstract
Most of the top-performing action recognition methods use optical flow as a "black box" input. Here we take a deeper look at the combination of flow and action recognition, and investigate why optical flow is helpful, what makes a flow method good for action recognition, and how we can make it better. In particular, we investigate the impact of different flow algorithms and input transformations to better understand how these affect a state-of-the-art action recognition method. Furthermore, we fine-tune two neural-network flow methods end-to-end on the most widely used action recognition dataset (UCF101). Based on these experiments, we make the following five observations: 1) optical flow is useful for action recognition because it is invariant to appearance, 2) optical flow methods are optimized to minimize end-point-error (EPE), but the EPE of current methods is not well correlated with action recognition performance, 3) for the flow methods tested, accuracy at boundaries and at small displacements is most correlated with action recognition performance, 4) training optical flow to minimize classification error instead of minimizing EPE improves recognition performance, and 5) optical flow learned for the task of action recognition differs from traditional optical flow especially inside the human body and at the boundary of the body. These observations may encourage optical flow researchers to look beyond EPE as a goal and guide action recognition researchers to seek better motion cues, leading to a tighter integration of the optical flow and action recognition communities.

avg ps

arXiv DOI [BibTex]



Towards Robust Visual Odometry with a Multi-Camera System

Liu, P., Geppert, M., Heng, L., Sattler, T., Geiger, A., Pollefeys, M.

In International Conference on Intelligent Robots and Systems (IROS), October 2018 (inproceedings)

Abstract
We present a visual odometry (VO) algorithm for a multi-camera system that operates robustly in challenging environments. Our algorithm consists of a pose tracker and a local mapper. The tracker estimates the current pose by minimizing photometric errors between the most recent keyframe and the current frame. The mapper initializes the depths of all sampled feature points using plane-sweeping stereo. To reduce pose drift, a sliding window optimizer is used to refine poses and structure jointly. Our formulation is flexible enough to support an arbitrary number of stereo cameras. We evaluate our algorithm thoroughly on five datasets. The datasets were captured in different conditions: daytime, night-time with near-infrared (NIR) illumination and night-time without NIR illumination. Experimental results show that a multi-camera setup makes the VO more robust to challenging environments, especially night-time conditions, in which a single stereo configuration fails easily due to the lack of features.
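As a simplified illustration of the tracker's objective, the sketch below computes photometric residuals by warping keyframe pixels with known depth into the current frame. All names are assumptions, and the multi-camera and robust-weighting aspects of the paper are omitted.

```python
import numpy as np

def photometric_residuals(ref_image, cur_image, ref_pixels, depths, K, T_cur_ref):
    """Intensity differences between a keyframe and the current frame (sketch).

    ref_pixels: list of (u, v) keyframe pixel coordinates
    depths:     corresponding depths along the optical axis
    K:          3x3 camera intrinsics, T_cur_ref: 4x4 relative pose
    """
    K_inv = np.linalg.inv(K)
    residuals = []
    for (u, v), d in zip(ref_pixels, depths):
        p_ref = d * (K_inv @ np.array([u, v, 1.0]))              # back-project
        p_cur = T_cur_ref[:3, :3] @ p_ref + T_cur_ref[:3, 3]     # transform
        uvw = K @ p_cur                                          # project
        u2, v2 = int(uvw[0] / uvw[2]), int(uvw[1] / uvw[2])
        if 0 <= v2 < cur_image.shape[0] and 0 <= u2 < cur_image.shape[1]:
            residuals.append(float(ref_image[int(v), int(u)])
                             - float(cur_image[v2, u2]))
    return np.array(residuals)
```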

avg

pdf Project Page [BibTex]



Learning Priors for Semantic 3D Reconstruction

Cherabier, I., Schönberger, J., Oswald, M., Pollefeys, M., Geiger, A.

In Computer Vision – ECCV 2018, Springer International Publishing, Cham, September 2018 (inproceedings)

Abstract
We present a novel semantic 3D reconstruction framework which embeds variational regularization into a neural network. Our network performs a fixed number of unrolled multi-scale optimization iterations with shared interaction weights. In contrast to existing variational methods for semantic 3D reconstruction, our model is end-to-end trainable and captures more complex dependencies between the semantic labels and the 3D geometry. Compared to previous learning-based approaches to 3D reconstruction, we integrate powerful long-range dependencies using variational coarse-to-fine optimization. As a result, our network architecture requires only a moderate number of parameters while keeping a high level of expressiveness which enables learning from very little data. Experiments on real and synthetic datasets demonstrate that our network achieves higher accuracy compared to a purely variational approach while at the same time requiring two orders of magnitude fewer iterations to converge. Moreover, our approach handles ten times more semantic class labels using the same computational resources.
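The core structural idea, a fixed number of unrolled update steps that all share the same weights, can be sketched as follows. The 3D convolutional update block, channel counts, and iteration count are illustrative assumptions; the paper's multi-scale scheme and variational terms are not reproduced.

```python
import torch
import torch.nn as nn

class UnrolledRefiner(nn.Module):
    """Fixed number of unrolled, shared-weight update iterations (sketch)."""

    def __init__(self, channels=32, num_iters=10):
        super().__init__()
        self.num_iters = num_iters
        # One update block whose weights are shared across all iterations.
        self.update = nn.Sequential(
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, kernel_size=3, padding=1))

    def forward(self, x):
        for _ in range(self.num_iters):
            x = x + self.update(x)   # residual-style refinement step
        return x
```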

avg

pdf suppmat Project Page Video DOI Project Page [BibTex]



Unsupervised Learning of Multi-Frame Optical Flow with Occlusions

Janai, J., Güney, F., Ranjan, A., Black, M. J., Geiger, A.

In European Conference on Computer Vision (ECCV), Lecture Notes in Computer Science, vol 11220, pages: 713-731, Springer, Cham, September 2018 (inproceedings)

avg ps

pdf suppmat Video Project Page DOI Project Page [BibTex]



SphereNet: Learning Spherical Representations for Detection and Classification in Omnidirectional Images

Coors, B., Condurache, A. P., Geiger, A.

European Conference on Computer Vision (ECCV), September 2018 (conference)

Abstract
Omnidirectional cameras offer great benefits over classical cameras wherever a wide field of view is essential, such as in virtual reality applications or in autonomous robots. Unfortunately, standard convolutional neural networks are not well suited for this scenario as the natural projection surface is a sphere which cannot be unwrapped to a plane without introducing significant distortions, particularly in the polar regions. In this work, we present SphereNet, a novel deep learning framework which encodes invariance against such distortions explicitly into convolutional neural networks. Towards this goal, SphereNet adapts the sampling locations of the convolutional filters, effectively reversing distortions, and wraps the filters around the sphere. By building on regular convolutions, SphereNet enables the transfer of existing perspective convolutional neural network models to the omnidirectional case. We demonstrate the effectiveness of our method on the tasks of image classification and object detection, exploiting two newly created semi-synthetic and real-world omnidirectional datasets.
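The adapted sampling locations mentioned in the abstract can be pictured as a regular kernel grid placed on the tangent plane of the sphere and mapped back via the inverse gnomonic projection. This is a sketch under our own assumptions (tangent-plane spacing `delta`, angle conventions), not the paper's exact implementation.

```python
import numpy as np

def sphere_kernel_locations(lat, lon, kernel_size=3, delta=0.01):
    """Sampling locations of a convolution kernel on the sphere (sketch).

    Places a regular kernel_size x kernel_size grid on the tangent plane at
    (lat, lon) and maps it back to the sphere with the inverse gnomonic
    projection, so the kernel follows the local, distortion-free geometry.
    """
    r = (kernel_size - 1) // 2
    x, y = np.meshgrid(np.arange(-r, r + 1) * delta,
                       np.arange(-r, r + 1) * delta)
    rho = np.maximum(np.sqrt(x ** 2 + y ** 2), 1e-12)
    c = np.arctan(rho)                                    # angular distance
    lat_s = np.arcsin(np.cos(c) * np.sin(lat)
                      + y * np.sin(c) * np.cos(lat) / rho)
    lon_s = lon + np.arctan2(x * np.sin(c),
                             rho * np.cos(lat) * np.cos(c)
                             - y * np.sin(lat) * np.sin(c))
    return lat_s, lon_s
```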

avg

pdf suppmat Project Page [BibTex]


Robust Dense Mapping for Large-Scale Dynamic Environments

Barsan, I. A., Liu, P., Pollefeys, M., Geiger, A.

In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), IEEE, May 2018 (inproceedings)

Abstract
We present a stereo-based dense mapping algorithm for large-scale dynamic urban environments. In contrast to other existing methods, we simultaneously reconstruct the static background, the moving objects, and the potentially moving but currently stationary objects separately, which is desirable for high-level mobile robotic tasks such as path planning in crowded environments. We use both instance-aware semantic segmentation and sparse scene flow to classify objects as either background, moving, or potentially moving, thereby ensuring that the system is able to model objects with the potential to transition from static to dynamic, such as parked cars. Given camera poses estimated from visual odometry, both the background and the (potentially) moving objects are reconstructed separately by fusing the depth maps computed from the stereo input. In addition to visual odometry, sparse scene flow is also used to estimate the 3D motions of the detected moving objects, in order to reconstruct them accurately. A map pruning technique is further developed to improve reconstruction accuracy and reduce memory consumption, leading to increased scalability. We evaluate our system thoroughly on the well-known KITTI dataset. Our system is capable of running on a PC at approximately 2.5Hz, with the primary bottleneck being the instance-aware semantic segmentation, which is a limitation we hope to address in future work.
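A coarse sketch of the object classification described above, combining the semantic class with the observed scene-flow magnitude. The thresholds and class names are illustrative assumptions, not the paper's exact rules.

```python
def classify_object(semantic_class, mean_scene_flow, flow_threshold=0.1,
                    movable_classes=("car", "pedestrian", "cyclist")):
    """Classify a detected object for separate reconstruction (sketch)."""
    if semantic_class not in movable_classes:
        return "background"
    # Movable objects are "moving" if they exhibit significant scene flow,
    # otherwise "potentially moving" (e.g., a parked car).
    return "moving" if mean_scene_flow > flow_threshold else "potentially_moving"
```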

avg

pdf Video Project Page Project Page [BibTex]



RayNet: Learning Volumetric 3D Reconstruction with Ray Potentials

Paschalidou, D., Ulusoy, A. O., Schmitt, C., Gool, L., Geiger, A.

In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE Computer Society, 2018 (inproceedings)

Abstract
In this paper, we consider the problem of reconstructing a dense 3D model using images captured from different views. Recent methods based on convolutional neural networks (CNN) allow learning the entire task from data. However, they do not incorporate the physics of image formation such as perspective geometry and occlusion. Instead, classical approaches based on Markov Random Fields (MRF) with ray-potentials explicitly model these physical processes, but they cannot cope with large surface appearance variations across different viewpoints. In this paper, we propose RayNet, which combines the strengths of both frameworks. RayNet integrates a CNN that learns view-invariant feature representations with an MRF that explicitly encodes the physics of perspective projection and occlusion. We train RayNet end-to-end using empirical risk minimization. We thoroughly evaluate our approach on challenging real-world datasets and demonstrate its benefits over a piece-wise trained baseline, hand-crafted models as well as other learning-based approaches.

avg

pdf suppmat Video Project Page code Poster Project Page [BibTex]



Enhanced Non-Steady Gliding Performance of the MultiMo-Bat through Optimal Airfoil Configuration and Control Strategy

Kim, H., Woodward, M. A., Sitti, M.

In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages: 1382-1388, 2018 (inproceedings)

pi

[BibTex]



Deep Marching Cubes: Learning Explicit Surface Representations

Liao, Y., Donne, S., Geiger, A.

In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE Computer Society, 2018 (inproceedings)

Abstract
Existing learning based solutions to 3D surface prediction cannot be trained end-to-end as they operate on intermediate representations (e.g., TSDF) from which 3D surface meshes must be extracted in a post-processing step (e.g., via the marching cubes algorithm). In this paper, we investigate the problem of end-to-end 3D surface prediction. We first demonstrate that the marching cubes algorithm is not differentiable and propose an alternative differentiable formulation which we insert as a final layer into a 3D convolutional neural network. We further propose a set of loss functions which allow for training our model with sparse point supervision. Our experiments demonstrate that the model allows for predicting sub-voxel accurate 3D shapes of arbitrary topology. Additionally, it learns to complete shapes and to separate an object's inside from its outside even in the presence of sparse and incomplete ground truth. We investigate the benefits of our approach on the task of inferring shapes from 3D point clouds. Our model is flexible and can be combined with a variety of shape encoder and shape inference techniques.

avg

pdf suppmat Video Project Page Poster Project Page [BibTex]



Semantic Visual Localization

Schönberger, J., Pollefeys, M., Geiger, A., Sattler, T.

In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE Computer Society, 2018 (inproceedings)

Abstract
Robust visual localization under a wide range of viewing conditions is a fundamental problem in computer vision. Handling the difficult cases of this problem is not only very challenging but also of high practical relevance, e.g., in the context of life-long localization for augmented reality or autonomous robots. In this paper, we propose a novel approach based on a joint 3D geometric and semantic understanding of the world, enabling it to succeed under conditions where previous approaches failed. Our method leverages a novel generative model for descriptor learning, trained on semantic scene completion as an auxiliary task. The resulting 3D descriptors are robust to missing observations by encoding high-level 3D geometric and semantic information. Experiments on several challenging large-scale localization datasets demonstrate reliable localization under extreme viewpoint, illumination, and geometry changes.

avg

pdf suppmat Poster Project Page [BibTex]



Which Training Methods for GANs do actually Converge?

Mescheder, L., Geiger, A., Nowozin, S.

International Conference on Machine Learning (ICML), 2018 (conference)

Abstract
Recent work has shown local convergence of GAN training for absolutely continuous data and generator distributions. In this paper, we show that the requirement of absolute continuity is necessary: we describe a simple yet prototypical counterexample showing that in the more realistic case of distributions that are not absolutely continuous, unregularized GAN training is not always convergent. Furthermore, we discuss regularization strategies that were recently proposed to stabilize GAN training. Our analysis shows that GAN training with instance noise or zero-centered gradient penalties converges. On the other hand, we show that Wasserstein-GANs and WGAN-GP with a finite number of discriminator updates per generator update do not always converge to the equilibrium point. We discuss these results, leading us to a new explanation for the stability problems of GAN training. Based on our analysis, we extend our convergence results to more general GANs and prove local convergence for simplified gradient penalties even if the generator and data distributions lie on lower dimensional manifolds. We find these penalties to work well in practice and use them to learn high-resolution generative image models for a variety of datasets with little hyperparameter tuning.
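For concreteness, a zero-centered gradient penalty on real data of the kind discussed in the abstract (often referred to as R1) can be sketched in PyTorch as follows. The function and variable names and the weight `gamma` are illustrative assumptions, not the paper's reference implementation.

```python
import torch

def r1_penalty(discriminator, real_images, gamma=10.0):
    """Zero-centered gradient penalty on real data (minimal sketch)."""
    real_images = real_images.detach().requires_grad_(True)
    scores = discriminator(real_images)
    # Gradient of the discriminator output with respect to the real images.
    grads, = torch.autograd.grad(
        outputs=scores.sum(), inputs=real_images, create_graph=True)
    penalty = grads.pow(2).reshape(grads.shape[0], -1).sum(1).mean()
    return 0.5 * gamma * penalty
```

The penalty is added to the discriminator loss on real batches; keeping it zero-centered is what the analysis in the paper identifies as important for local convergence.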

avg

code video paper supplement slides poster Project Page [BibTex]


Collectives of Spinning Mobile Microrobots for Navigation and Object Manipulation at the Air-Water Interface

Wang, W., Kishore, V., Koens, L., Lauga, E., Sitti, M.

In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages: 1-9, 2018 (inproceedings)

pi

[BibTex]



Learning 3D Shape Completion from Laser Scan Data with Weak Supervision

Stutz, D., Geiger, A.

In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE Computer Society, 2018 (inproceedings)

Abstract
3D shape completion from partial point clouds is a fundamental problem in computer vision and computer graphics. Recent approaches can be characterized as either data-driven or learning-based. Data-driven approaches rely on a shape model whose parameters are optimized to fit the observations. Learning-based approaches, in contrast, avoid the expensive optimization step and instead directly predict the complete shape from the incomplete observations using deep neural networks. However, full supervision is required which is often not available in practice. In this work, we propose a weakly-supervised learning-based approach to 3D shape completion which neither requires slow optimization nor direct supervision. While we also learn a shape prior on synthetic data, we amortize, i.e., learn, maximum likelihood fitting using deep neural networks resulting in efficient shape completion without sacrificing accuracy. Tackling 3D shape completion of cars on ShapeNet and KITTI, we demonstrate that the proposed amortized maximum likelihood approach is able to compete with a fully supervised baseline and a state-of-the-art data-driven approach while being significantly faster. On ModelNet, we additionally show that the approach is able to generalize to other object categories as well.

avg

pdf suppmat Project Page Poster Project Page [BibTex]



Endo-VMFuseNet: A Deep Visual-Magnetic Sensor Fusion Approach for Endoscopic Capsule Robots

Turan, M., Almalioglu, Y., Gilbert, H. B., Sari, A. E., Soylu, U., Sitti, M.

In 2018 IEEE International Conference on Robotics and Automation (ICRA), pages: 1-7, 2018 (inproceedings)

pi

[BibTex]



Endosensorfusion: Particle filtering-based multi-sensory data fusion with switching state-space model for endoscopic capsule robots

Turan, M., Almalioglu, Y., Gilbert, H., Araujo, H., Cemgil, T., Sitti, M.

In 2018 IEEE International Conference on Robotics and Automation (ICRA), pages: 1-8, 2018 (inproceedings)

pi

[BibTex]



Learning Transformation Invariant Representations with Weak Supervision

Coors, B., Condurache, A., Mertins, A., Geiger, A.

In International Conference on Computer Vision Theory and Applications, 2018 (inproceedings)

Abstract
Deep convolutional neural networks are the current state-of-the-art solution to many computer vision tasks. However, their ability to handle large global and local image transformations is limited. Consequently, extensive data augmentation is often utilized to incorporate prior knowledge about desired invariances to geometric transformations such as rotations or scale changes. In this work, we combine data augmentation with an unsupervised loss which enforces similarity between the predictions of augmented copies of an input sample. Our loss acts as an effective regularizer which facilitates the learning of transformation invariant representations. We investigate the effectiveness of the proposed similarity loss on rotated MNIST and the German Traffic Sign Recognition Benchmark (GTSRB) in the context of different classification models including ladder networks. Our experiments demonstrate improvements with respect to the standard data augmentation approach for supervised and semi-supervised learning tasks, in particular in the presence of little annotated data. In addition, we analyze the performance of the proposed approach with respect to its hyperparameters, including the strength of the regularization as well as the layer where representation similarity is enforced.
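The unsupervised similarity loss described above can be sketched as a consistency term between the predictions for two independently augmented copies of the same input. The names, the KL-based distance, and the weighting are illustrative assumptions; an L2 distance on softmax outputs or feature representations would be an equally plausible variant.

```python
import torch
import torch.nn.functional as F

def transformation_consistency_loss(model, x, augment, weight=1.0):
    """Encourage similar predictions for two augmented copies of x (sketch).

    `model` is assumed to return class logits and `augment` to apply a random
    geometric transformation (e.g., a rotation) to a batch of inputs.
    """
    logits_a = model(augment(x))
    logits_b = model(augment(x))
    log_p_a = F.log_softmax(logits_a, dim=1)
    p_b = F.softmax(logits_b, dim=1)
    # Penalize divergence between the two predictions; acts as a regularizer
    # on top of the usual supervised loss for the labeled samples.
    return weight * F.kl_div(log_p_a, p_b, reduction='batchmean')
```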

avg

pdf [BibTex]



Direct observations of sub-100 nm spin wave propagation in magnonic waveguides

Träger, N., Gruszecki, P., Lisiecki, F., Förster, J., Weigand, M., Kuswik, P., Dubowik, J., Schütz, G., Krawczyk, M., Gräfe, J.

In 2018 IEEE International Magnetics Conference (INTERMAG 2018), IEEE, Singapore, 2018 (inproceedings)

mms

DOI [BibTex]



Interpreting FORC diagrams beyond the Preisach model: an experimental permalloy micro array investigation

Gross, F., Ilse, S., Schütz, G., Gräfe, J., Goering, E.

In 2018 IEEE International Magnetics Conference (INTERMAG 2018), IEEE, Singapore, 2018 (inproceedings)

mms

DOI [BibTex]


2012


Topological optimization for continuum compliant mechanisms via morphological evolution of traditional mechanisms

Lum, G. Z., Yeo, S. H., Yang, G. L., Teo, T. J., Sitti, M.

In 4th International Conference on Computational Methods, pages: 8, 2012 (inproceedings)

pi

[BibTex]



Spin wave mediated magnetic vortex core reversal

Stoll, H.

In 8461, San Diego, California, USA, 2012 (inproceedings)

mms

DOI [BibTex]



Flapping Wings with DC-Motors via Direct, Elastic Transmissions

Azhar, M., Campolo, D., Lau, G., Sitti, M.

In Proceedings of International Conference on Intelligent Unmanned Systems, 8, 2012 (inproceedings)

pi

[BibTex]



Investigation of bioinspired gecko fibers to improve adhesion of HeartLander surgical robot

Tortora, G., Glass, P., Wood, N., Aksak, B., Menciassi, A., Sitti, M., Riviere, C.

In Engineering in Medicine and Biology Society (EMBC), 2012 Annual International Conference of the IEEE, pages: 908-911, 2012 (inproceedings)

pi

[BibTex]



Magnetic hysteresis for multi-state addressable magnetic microrobotic control

Diller, E., Miyashita, S., Sitti, M.

In Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on, pages: 2325-2331, 2012 (inproceedings)

pi

[BibTex]


2001


Survey of nanomanipulation systems

Sitti, M.

In Nanotechnology, 2001. IEEE-NANO 2001. Proceedings of the 2001 1st IEEE Conference on, pages: 75-80, 2001 (inproceedings)

pi

[BibTex]



Nanotribological characterization system by AFM based controlled pushing

Sitti, M.

In Nanotechnology, 2001. IEEE-NANO 2001. Proceedings of the 2001 1st IEEE Conference on, pages: 99-104, 2001 (inproceedings)

pi

[BibTex]



Computational micromagnetism of magnetic structures and magnetization processes in thin platelets and small particles

Kronmüller, H., Hertel, R.

In Magnetic Storage Systems Beyond 2000, 41, pages: 345-362, Nato Science Series II: Mathematics, Physics and Chemistry, Kluwer Academic Publishers, Rhodos, Greece, 2001 (inproceedings)

mms

[BibTex]



Hydrogen storage in mechanically treated single wall carbon nanotubes

Haluska, M., Hulman, M., Hirscher, M., Becher, M., Roth, S., Stepanek, I., Bernier, P.

In Electronic Properties of Molecular Nanostructures: XV International Winterschool/Euroconference, 591, pages: 603-608, American Institute of Physics Conference Proceedings, AIP, Kirchberg [Austria], 2001 (inproceedings)

mms

[BibTex]



Isotopic mass and lattice constant of Si and Ge: X-Ray standing wave measurements

Zegenhagen, J., Kazimirov, A., Cao, L. X., Konuma, M., Sozontov, E., Plachke, D., Carstanjen, H. D., Bilger, G., Haller, E., Kohn, V., Cardona, M.

In Proceedings of the 25th Conference on the Physics of Semiconductors, 87, pages: 125-127, Springer proceedings in physics, Springer, Osaka, Japan, 2001 (inproceedings)

mms

[BibTex]



Positron Annihilation Studies on Stable and Undercooled Metal Melts at the Stuttgart Pelletron

Stoll, H., Siegle, A., Major, J.

In Application of Accelerators in Research and Industry, 576, pages: 749-752, AIP Conference Proceedings, Denton, Texas, USA, 2001 (inproceedings)

mms

[BibTex]



Towards flapping wing control for a micromechanical flying insect

Yan, J., Wood, R. J., Avadhanula, S., Sitti, M., Fearing, R. S.

In Robotics and Automation, 2001. Proceedings 2001 ICRA. IEEE International Conference on, 4, pages: 3901-3908, 2001 (inproceedings)

pi

[BibTex]



Man-machine interface for micro/nano manipulation with an AFM probe

Aruk, B., Hashimoto, H., Sitti, M.

In Nanotechnology, 2001. IEEE-NANO 2001. Proceedings of the 2001 1st IEEE Conference on, pages: 151-156, 2001 (inproceedings)

pi

[BibTex]



Submicrometer spatially resolved measurements of mechanical properties and correlation to microstructure and composition

Kunert, M., Baretzky, B., Baker, S. P., Mittemeijer, E. J.

In Fundamentals of Nanoindentation and Nanotribology II, 649, pages: Q3.2.1-Q3.2.6, Materials Research Society Symposium Proceedings, MRS, Boston, MA, USA, 2001 (inproceedings)

mms

[BibTex]



The six-jump diffusion cycles in B2-compounds

Drautz, R., Meyer, B., Fähnle, M.

In Proceedings of DIMAT 2000, the Fifth International Conference on Diffusion in Materials, pages: 417-422, Defect and Diffusion Forum, Scitec Publications Ltd., Paris, France, 2001 (inproceedings)

mms

[BibTex]



Ionic nitriding of austenitic and ferritic steel with the aid of a high aperture Hall current accelerator

Straumal, B. B., Vershinin, N. F., Friesel, M., Ishenko, S. A., Gust, W.

In Diffusion in Materials DIMAT2000, 194, pages: 1457-1462, Defect and Diffusion Forum, Trans Tech, Paris, France, 2001 (inproceedings)

mms

[BibTex]



Development of PZT and PZN-PT based unimorph actuators for micromechanical flapping mechanisms

Sitti, M., Campolo, D., Yan, J., Fearing, R. S.

In Robotics and Automation, 2001. Proceedings 2001 ICRA. IEEE International Conference on, 4, pages: 3839-3846, 2001 (inproceedings)

pi

[BibTex]



Thorax Design and Wing Control for a Micromechanical Flying Insect

Yan, J., Avadhanula, S., Sitti, M., Wood, R. J., Fearing, R. S.

In Proceedings of the Annual Allerton Conference on Communication, Control and Computing, 39(2): 952-961, 2001 (inproceedings)

pi

[BibTex]



First proof of slow trapping of positronium in polymers by an Age-Momentum-Correlation (AMOC) experiment

Dauwe, C., Balcaen, N., van Waeyenberge, B., van Petegem, S., Stoll, H.

In Positron Annihilation. Proceedings of the 12th International Conference on Positron Annihilation, 363/365, pages: 254-256, Materials Science Forum, Trans Tech Publications Ltd., München, 2001 (inproceedings)

mms

[BibTex]



Positron-age-momentum correlation

Stoll, H., Bandzuch, P., Siegle, A.

In Positron Annihilation: Proceedings of the 12th International Conference on Positron Annihilation, 363-365, pages: 547-551, Materials Science Forum, Trans Tech Publications Ltd., München, 2001 (inproceedings)

mms

[BibTex]



Nanocrystalline and nanostructured high-performance permanent magnets

Goll, D., Hadjipanayis, G. C., Kronmüller, H.

In Applications of Ferromagnetic and Optical Materials, Storage and Magnetoelectronics, 674, pages: U2.4.1-U2.4.12, Materials Research Society Symposium Proceedings, MRS, San Francisco, Calif., 2001 (inproceedings)

mms

[BibTex]



Ion beam analysis with monolayer depth resolution using the electrostatic spectrometer at the MPI Stuttgart

Plachke, D., Blohm, G., Fischer, T., Khellaf, A., Kruse, O., Stoll, H., Carstanjen, H. D.

In Proceedings of the 16th International Conference on Applications of Accelerators in Research and Industry, 576, pages: 458-462, American Institute of Physics Conference Proceedings, AIP, Denton, Texas, 2001 (inproceedings)

mms

[BibTex]



From the electronic structure to the macroscopic behavior: A multi-scale analysis of plasticity in intermetallic compounds

Fähnle, M., Kohlhammer, S., Bester, G.

In Influences of Interface and Dislocation Behavior on Microstructure Evolution, 652, pages: Y4.5.1.-Y4.5.12, Materials Research Society Symposium Proceedings, MRS, Boston, Mass., USA, 2001 (inproceedings)

mms

[BibTex]



Influence of the microstructure on the magnetic properties of giant-magnetostrictive TbDyFe films

Hirscher, M., Winzek, B., Fischer, S. F., Kronmüller, H.

In Smart Materials. Proceedings of the 1st Caesarium, pages: 23-37, Springer, Bonn, 2001 (inproceedings)

mms

[BibTex]



PZT actuated four-bar mechanism with two flexible links for micromechanical flying insect thorax

Sitti, M.

In Robotics and Automation, 2001. Proceedings 2001 ICRA. IEEE International Conference on, 4, pages: 3893-3900, 2001 (inproceedings)

pi

[BibTex]
