

2018


Deep Reinforcement Learning for Event-Triggered Control

Baumann, D., Zhu, J., Martius, G., Trimpe, S.

In Proceedings of the 57th IEEE International Conference on Decision and Control (CDC), pages: 943-950, 57th IEEE International Conference on Decision and Control (CDC), December 2018 (inproceedings)

arXiv PDF DOI Project Page [BibTex]


Efficient Encoding of Dynamical Systems through Local Approximations

Solowjow, F., Mehrjou, A., Schölkopf, B., Trimpe, S.

In Proceedings of the 57th IEEE International Conference on Decision and Control (CDC), pages: 6073-6079, Miami, FL, USA, December 2018 (inproceedings)

arXiv PDF DOI Project Page [BibTex]


Depth Control of Underwater Robots using Sliding Modes and Gaussian Process Regression

Lima, G. S., Bessa, W. M., Trimpe, S.

In Proceedings of the 15th Latin American Robotics Symposium, João Pessoa, Brazil, 15th Latin American Robotics Symposium, November 2018 (inproceedings)

Abstract
The development of accurate control systems for underwater robotic vehicles relies on adequate compensation for hydrodynamic effects. In this work, a new robust control scheme is presented for remotely operated underwater vehicles. In order to meet both robustness and tracking requirements, sliding mode control is combined with Gaussian process regression. The convergence properties of the closed-loop signals are analytically proven. Numerical results confirm the improved performance of the proposed control scheme.

[BibTex]


Gait learning for soft microrobots controlled by light fields

Rohr, A. V., Trimpe, S., Marco, A., Fischer, P., Palagi, S.

In International Conference on Intelligent Robots and Systems (IROS) 2018, pages: 6199-6206, International Conference on Intelligent Robots and Systems 2018, October 2018 (inproceedings)

Abstract
Soft microrobots based on photoresponsive materials and controlled by light fields can generate a variety of different gaits. This inherent flexibility can be exploited to maximize their locomotion performance in a given environment and used to adapt them to changing environments. However, because of the lack of accurate locomotion models, and given the intrinsic variability among microrobots, analytical control design is not possible. Common data-driven approaches, on the other hand, require running prohibitive numbers of experiments and lead to very sample-specific results. Here we propose a probabilistic learning approach for light-controlled soft microrobots based on Bayesian Optimization (BO) and Gaussian Processes (GPs). The proposed approach results in a learning scheme that is highly data-efficient, enabling gait optimization with a limited experimental budget, and robust against differences among microrobot samples. These features are obtained by designing the learning scheme through the comparison of different GP priors and BO settings on a semi-synthetic data set. The developed learning scheme is validated in microrobot experiments, resulting in a 115% improvement in a microrobot's locomotion performance with an experimental budget of only 20 tests. These encouraging results lead the way toward self-adaptive microrobotic systems based on light-controlled soft microrobots and probabilistic learning control.
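At its core, the learning scheme is a Bayesian-optimization loop with a GP surrogate. The minimal sketch below illustrates that loop; the RBF kernel, the UCB acquisition, and the evaluate_gait stand-in are illustrative assumptions, not the priors and settings the authors selected on their semi-synthetic data.

```python
# Minimal sketch of GP-based Bayesian optimization over gait parameters.
# `evaluate_gait` is a hypothetical stand-in for one hardware test.
import numpy as np

def rbf_kernel(A, B, ls=0.2, var=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return var * np.exp(-0.5 * d2 / ls**2)

def gp_posterior(X, y, Xs, noise=1e-4):
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(X, Xs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.diag(rbf_kernel(Xs, Xs)) - (v**2).sum(0)
    return mu, np.sqrt(np.maximum(var, 1e-12))

def evaluate_gait(theta):                      # hypothetical experiment:
    return float(-np.sum((theta - 0.6) ** 2))  # returns a speed to maximize

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (3, 2))                  # initial gait parameters
y = np.array([evaluate_gait(x) for x in X])

for _ in range(17):                            # total budget of 20 experiments
    cand = rng.uniform(0, 1, (500, 2))
    mu, sd = gp_posterior(X, y, cand)
    x_next = cand[np.argmax(mu + 2.0 * sd)]    # upper-confidence-bound choice
    X = np.vstack([X, x_next])
    y = np.append(y, evaluate_gait(x_next))

print("best gait parameters:", X[np.argmax(y)], "speed:", y.max())
```

Each iteration refits the GP to all measurements so far and runs the next experiment at the most promising parameters, which is what keeps the experimental budget small.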

arXiv IEEE Xplore DOI Project Page [BibTex]


On the Integration of Optical Flow and Action Recognition

Sevilla-Lara, L., Liao, Y., Güney, F., Jampani, V., Geiger, A., Black, M. J.

In German Conference on Pattern Recognition (GCPR), LNCS 11269, pages: 281-297, Springer, Cham, October 2018 (inproceedings)

Abstract
Most of the top performing action recognition methods use optical flow as a "black box" input. Here we take a deeper look at the combination of flow and action recognition, and investigate why optical flow is helpful, what makes a flow method good for action recognition, and how we can make it better. In particular, we investigate the impact of different flow algorithms and input transformations to better understand how these affect a state-of-the-art action recognition method. Furthermore, we fine-tune two neural-network flow methods end-to-end on the most widely used action recognition dataset (UCF101). Based on these experiments, we make the following five observations: 1) optical flow is useful for action recognition because it is invariant to appearance, 2) optical flow methods are optimized to minimize end-point-error (EPE), but the EPE of current methods is not well correlated with action recognition performance, 3) for the flow methods tested, accuracy at boundaries and at small displacements is most correlated with action recognition performance, 4) training optical flow to minimize classification error instead of minimizing EPE improves recognition performance, and 5) optical flow learned for the task of action recognition differs from traditional optical flow especially inside the human body and at the boundary of the body. These observations may encourage optical flow researchers to look beyond EPE as a goal and guide action recognition researchers to seek better motion cues, leading to a tighter integration of the optical flow and action recognition communities.
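Observation (4), training optical flow to minimize classification error rather than EPE, amounts to back-propagating the recognition loss through the flow network. A minimal PyTorch sketch follows; flow_net and action_net are placeholders for any differentiable flow and recognition models, not the paper's specific architectures.

```python
import torch
import torch.nn.functional as F

def finetune_step(flow_net, action_net, frames, labels, optimizer):
    """One end-to-end step: the classification loss trains the flow network."""
    flow = flow_net(frames)                 # predicted optical flow (differentiable)
    logits = action_net(flow)               # action scores computed from flow
    loss = F.cross_entropy(logits, labels)  # recognition loss, not end-point-error
    optimizer.zero_grad()
    loss.backward()                         # gradients reach flow_net's weights
    optimizer.step()
    return loss.item()
```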

arXiv DOI [BibTex]


Towards Robust Visual Odometry with a Multi-Camera System

Liu, P., Geppert, M., Heng, L., Sattler, T., Geiger, A., Pollefeys, M.

In International Conference on Intelligent Robots and Systems (IROS) 2018, International Conference on Intelligent Robots and Systems, October 2018 (inproceedings)

Abstract
We present a visual odometry (VO) algorithm for a multi-camera system that is designed for robust operation in challenging environments. Our algorithm consists of a pose tracker and a local mapper. The tracker estimates the current pose by minimizing photometric errors between the most recent keyframe and the current frame. The mapper initializes the depths of all sampled feature points using plane-sweeping stereo. To reduce pose drift, a sliding window optimizer is used to refine poses and structure jointly. Our formulation is flexible enough to support an arbitrary number of stereo cameras. We evaluate our algorithm thoroughly on five datasets. The datasets were captured in different conditions: daytime, night-time with near-infrared (NIR) illumination and night-time without NIR illumination. Experimental results show that a multi-camera setup makes the VO more robust to challenging environments, especially night-time conditions, in which a single stereo configuration fails easily due to the lack of features.
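The tracker's objective is a sum of photometric residuals between the keyframe and the current frame. Below is a minimal single-camera sketch of that residual; pinhole intrinsics K and nearest-neighbor intensity lookup are simplifying assumptions, and the full system extends this to multiple stereo cameras plus a sliding-window optimizer.

```python
# Minimal sketch of the photometric error a direct pose tracker minimizes.
import numpy as np

def photometric_error(I_key, D_key, I_cur, K, R, t, pts):
    """Sum of squared intensity differences for sampled keyframe pixels."""
    K_inv = np.linalg.inv(K)
    err = 0.0
    for (u, v) in pts:
        z = D_key[v, u]                                 # keyframe depth at pixel
        p_key = z * (K_inv @ np.array([u, v, 1.0]))     # back-project to 3D
        p_cur = R @ p_key + t                           # move into current frame
        uvw = K @ p_cur                                 # perspective projection
        u2 = int(round(uvw[0] / uvw[2]))
        v2 = int(round(uvw[1] / uvw[2]))
        if 0 <= v2 < I_cur.shape[0] and 0 <= u2 < I_cur.shape[1]:
            err += (float(I_cur[v2, u2]) - float(I_key[v, u])) ** 2
    return err
```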

pdf Project Page [BibTex]


Learning Priors for Semantic 3D Reconstruction

Cherabier, I., Schönberger, J., Oswald, M., Pollefeys, M., Geiger, A.

In Computer Vision – ECCV 2018, Springer International Publishing, Cham, September 2018 (inproceedings)

Abstract
We present a novel semantic 3D reconstruction framework which embeds variational regularization into a neural network. Our network performs a fixed number of unrolled multi-scale optimization iterations with shared interaction weights. In contrast to existing variational methods for semantic 3D reconstruction, our model is end-to-end trainable and captures more complex dependencies between the semantic labels and the 3D geometry. Compared to previous learning-based approaches to 3D reconstruction, we integrate powerful long-range dependencies using variational coarse-to-fine optimization. As a result, our network architecture requires only a moderate number of parameters while keeping a high level of expressiveness which enables learning from very little data. Experiments on real and synthetic datasets demonstrate that our network achieves higher accuracy compared to a purely variational approach while at the same time requiring two orders of magnitude fewer iterations to converge. Moreover, our approach handles ten times more semantic class labels using the same computational resources.

pdf suppmat Project Page Video DOI [BibTex]


Unsupervised Learning of Multi-Frame Optical Flow with Occlusions

Janai, J., Güney, F., Ranjan, A., Black, M. J., Geiger, A.

In European Conference on Computer Vision (ECCV), Lecture Notes in Computer Science, vol 11220, pages: 713-731, Springer, Cham, September 2018 (inproceedings)

pdf suppmat Video Project Page DOI [BibTex]


SphereNet: Learning Spherical Representations for Detection and Classification in Omnidirectional Images

Coors, B., Condurache, A. P., Geiger, A.

European Conference on Computer Vision (ECCV), September 2018 (conference)

Abstract
Omnidirectional cameras offer great benefits over classical cameras wherever a wide field of view is essential, such as in virtual reality applications or in autonomous robots. Unfortunately, standard convolutional neural networks are not well suited for this scenario as the natural projection surface is a sphere which cannot be unwrapped to a plane without introducing significant distortions, particularly in the polar regions. In this work, we present SphereNet, a novel deep learning framework which encodes invariance against such distortions explicitly into convolutional neural networks. Towards this goal, SphereNet adapts the sampling locations of the convolutional filters, effectively reversing distortions, and wraps the filters around the sphere. By building on regular convolutions, SphereNet enables the transfer of existing perspective convolutional neural network models to the omnidirectional case. We demonstrate the effectiveness of our method on the tasks of image classification and object detection, exploiting two newly created semi-synthetic and real-world omnidirectional datasets.

pdf suppmat Project Page [BibTex]


Learning-Based Robust Model Predictive Control with State-Dependent Uncertainty

Soloperto, R., Müller, M. A., Trimpe, S., Allgöwer, F.

In Proceedings of the IFAC Conference on Nonlinear Model Predictive Control (NMPC), Madison, Wisconsin, USA, 6th IFAC Conference on Nonlinear Model Predictive Control, August 2018 (inproceedings)

PDF [BibTex]


Probabilistic Recurrent State-Space Models

Doerr, A., Daniel, C., Schiegg, M., Nguyen-Tuong, D., Schaal, S., Toussaint, M., Trimpe, S.

In Proceedings of the International Conference on Machine Learning (ICML), International Conference on Machine Learning (ICML), July 2018 (inproceedings)

Abstract
State-space models (SSMs) are a highly expressive model class for learning patterns in time series data and for system identification. Deterministic versions of SSMs (e.g., LSTMs) have proved extremely successful in modeling complex time-series data. Fully probabilistic SSMs, however, often prove hard to train, even for smaller problems. To overcome this limitation, we propose a scalable initialization and training algorithm based on doubly stochastic variational inference and Gaussian processes. In contrast to related approaches, the variational approximation we propose fully captures the temporal correlations of the latent states, allowing for robust training.

arXiv pdf Project Page [BibTex]


Event-triggered Learning for Resource-efficient Networked Control

Solowjow, F., Baumann, D., Garcke, J., Trimpe, S.

In Proceedings of the American Control Conference (ACC), pages: 6506 - 6512, American Control Conference, June 2018 (inproceedings)

arXiv PDF DOI Project Page [BibTex]


Soft Miniaturized Linear Actuators Wirelessly Powered by Rotating Permanent Magnets

Qiu, T., Palagi, S., Sachs, J., Fischer, P.

In 2018 IEEE International Conference on Robotics and Automation (ICRA), pages: 3595-3600, May 2018 (inproceedings)

Abstract
Wireless actuation by magnetic fields allows for the operation of untethered miniaturized devices, e.g. in biomedical applications. Nevertheless, generating large controlled forces over relatively large distances is challenging. Magnetic torques are easier to generate and control, but they are not always suitable for the tasks at hand. Moreover, strong magnetic fields are required to generate a sufficient torque, which are difficult to achieve with electromagnets. Here, we demonstrate a soft miniaturized actuator that transforms an externally applied magnetic torque into a controlled linear force. We report the design, fabrication and characterization of both the actuator and the magnetic field generator. We show that the magnet assembly, which is based on a set of rotating permanent magnets, can generate strong controlled oscillating fields over a relatively large workspace. The actuator, which is 3D-printed, can lift a load of more than 40 times its weight. Finally, we show that the actuator can be further miniaturized, paving the way towards strong, wirelessly powered microactuators.

link (url) DOI [BibTex]


Robust Dense Mapping for Large-Scale Dynamic Environments

Barsan, I. A., Liu, P., Pollefeys, M., Geiger, A.

In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA) 2018, IEEE, International Conference on Robotics and Automation, May 2018 (inproceedings)

Abstract
We present a stereo-based dense mapping algorithm for large-scale dynamic urban environments. In contrast to other existing methods, we simultaneously reconstruct the static background, the moving objects, and the potentially moving but currently stationary objects separately, which is desirable for high-level mobile robotic tasks such as path planning in crowded environments. We use both instance-aware semantic segmentation and sparse scene flow to classify objects as either background, moving, or potentially moving, thereby ensuring that the system is able to model objects with the potential to transition from static to dynamic, such as parked cars. Given camera poses estimated from visual odometry, both the background and the (potentially) moving objects are reconstructed separately by fusing the depth maps computed from the stereo input. In addition to visual odometry, sparse scene flow is also used to estimate the 3D motions of the detected moving objects, in order to reconstruct them accurately. A map pruning technique is further developed to improve reconstruction accuracy and reduce memory consumption, leading to increased scalability. We evaluate our system thoroughly on the well-known KITTI dataset. Our system is capable of running on a PC at approximately 2.5 Hz, with the primary bottleneck being the instance-aware semantic segmentation, which is a limitation we hope to address in future work.

pdf Video Project Page [BibTex]


Evaluating Low-Power Wireless Cyber-Physical Systems

Baumann, D., Mager, F., Singh, H., Zimmerling, M., Trimpe, S.

In Proceedings of the IEEE Workshop on Benchmarking Cyber-Physical Networks and Systems (CPSBench), pages: 13-18, IEEE Workshop on Benchmarking Cyber-Physical Networks and Systems (CPSBench), April 2018 (inproceedings)

arXiv PDF DOI Project Page [BibTex]


RayNet: Learning Volumetric 3D Reconstruction with Ray Potentials

Paschalidou, D., Ulusoy, A. O., Schmitt, C., Gool, L., Geiger, A.

In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE Computer Society, IEEE International Conference on Computer Vision and Pattern Recognition (CVPR) 2018, 2018 (inproceedings)

Abstract
In this paper, we consider the problem of reconstructing a dense 3D model using images captured from different views. Recent methods based on convolutional neural networks (CNN) allow learning the entire task from data. However, they do not incorporate the physics of image formation such as perspective geometry and occlusion. Instead, classical approaches based on Markov Random Fields (MRF) with ray-potentials explicitly model these physical processes, but they cannot cope with large surface appearance variations across different viewpoints. In this paper, we propose RayNet, which combines the strengths of both frameworks. RayNet integrates a CNN that learns view-invariant feature representations with an MRF that explicitly encodes the physics of perspective projection and occlusion. We train RayNet end-to-end using empirical risk minimization. We thoroughly evaluate our approach on challenging real-world datasets and demonstrate its benefits over a piece-wise trained baseline, hand-crafted models as well as other learning-based approaches.

pdf suppmat Video Project Page code Poster [BibTex]


Enhanced Non-Steady Gliding Performance of the MultiMo-Bat through Optimal Airfoil Configuration and Control Strategy

Kim, H., Woodward, M. A., Sitti, M.

In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages: 1382-1388, 2018 (inproceedings)

[BibTex]


Deep Marching Cubes: Learning Explicit Surface Representations

Liao, Y., Donne, S., Geiger, A.

In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE Computer Society, IEEE International Conference on Computer Vision and Pattern Recognition (CVPR) 2018, 2018 (inproceedings)

Abstract
Existing learning-based solutions to 3D surface prediction cannot be trained end-to-end as they operate on intermediate representations (e.g., TSDF) from which 3D surface meshes must be extracted in a post-processing step (e.g., via the marching cubes algorithm). In this paper, we investigate the problem of end-to-end 3D surface prediction. We first demonstrate that the marching cubes algorithm is not differentiable and propose an alternative differentiable formulation which we insert as a final layer into a 3D convolutional neural network. We further propose a set of loss functions which allow for training our model with sparse point supervision. Our experiments demonstrate that the model allows for predicting sub-voxel accurate 3D shapes of arbitrary topology. Additionally, it learns to complete shapes and to separate an object's inside from its outside even in the presence of sparse and incomplete ground truth. We investigate the benefits of our approach on the task of inferring shapes from 3D point clouds. Our model is flexible and can be combined with a variety of shape encoder and shape inference techniques.
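The starting point is easy to see in one dimension: the classic marching-cubes vertex location along an edge is a smooth function of the two signed values, but the topology decision (which sign pattern occurs) is discrete, and that is the part the proposed layer makes differentiable. A toy PyTorch illustration of the smooth part, not the paper's full layer:

```python
# The zero-crossing position between two voxels is differentiable in the
# signed values; autograd recovers its gradients directly.
import torch

s0 = torch.tensor(-0.3, requires_grad=True)  # signed value at voxel 0
s1 = torch.tensor(0.7, requires_grad=True)   # signed value at voxel 1
x = s0 / (s0 - s1)                           # crossing location in [0, 1]
x.backward()
print(float(x), float(s0.grad), float(s1.grad))
```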

pdf suppmat Video Project Page Poster [BibTex]


Semantic Visual Localization

Schönberger, J., Pollefeys, M., Geiger, A., Sattler, T.

In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE Computer Society, IEEE International Conference on Computer Vision and Pattern Recognition (CVPR) 2018, 2018 (inproceedings)

Abstract
Robust visual localization under a wide range of viewing conditions is a fundamental problem in computer vision. Handling the difficult cases of this problem is not only very challenging but also of high practical relevance, e.g., in the context of life-long localization for augmented reality or autonomous robots. In this paper, we propose a novel approach based on a joint 3D geometric and semantic understanding of the world, enabling it to succeed under conditions where previous approaches failed. Our method leverages a novel generative model for descriptor learning, trained on semantic scene completion as an auxiliary task. The resulting 3D descriptors are robust to missing observations by encoding high-level 3D geometric and semantic information. Experiments on several challenging large-scale localization datasets demonstrate reliable localization under extreme viewpoint, illumination, and geometry changes.

pdf suppmat Poster Project Page [BibTex]


Which Training Methods for GANs do actually Converge?

Mescheder, L., Geiger, A., Nowozin, S.

International Conference on Machine Learning (ICML), 2018 (conference)

Abstract
Recent work has shown local convergence of GAN training for absolutely continuous data and generator distributions. In this paper, we show that the requirement of absolute continuity is necessary: we describe a simple yet prototypical counterexample showing that in the more realistic case of distributions that are not absolutely continuous, unregularized GAN training is not always convergent. Furthermore, we discuss regularization strategies that were recently proposed to stabilize GAN training. Our analysis shows that GAN training with instance noise or zero-centered gradient penalties converges. On the other hand, we show that Wasserstein-GANs and WGAN-GP with a finite number of discriminator updates per generator update do not always converge to the equilibrium point. We discuss these results, leading us to a new explanation for the stability problems of GAN training. Based on our analysis, we extend our convergence results to more general GANs and prove local convergence for simplified gradient penalties even if the generator and data distributions lie on lower dimensional manifolds. We find these penalties to work well in practice and use them to learn high-resolution generative image models for a variety of datasets with little hyperparameter tuning.
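One of the regularizers the analysis covers, a zero-centered gradient penalty on real data (the "R1" penalty), is straightforward to add to a discriminator loss. A minimal PyTorch sketch; the weight gamma and the discriminator are placeholders:

```python
import torch

def r1_penalty(discriminator, x_real, gamma=10.0):
    """Zero-centered gradient penalty 0.5*gamma*E[||grad D(x)||^2] on real data."""
    x_real = x_real.detach().requires_grad_(True)
    d_out = discriminator(x_real).sum()
    (grad,) = torch.autograd.grad(d_out, x_real, create_graph=True)
    return 0.5 * gamma * grad.pow(2).reshape(grad.size(0), -1).sum(1).mean()
```

The term is simply added to the usual adversarial discriminator loss before the backward pass, e.g. d_loss = d_loss_adv + r1_penalty(D, x_real).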

code video paper supplement slides poster Project Page [BibTex]


Collectives of Spinning Mobile Microrobots for Navigation and Object Manipulation at the Air-Water Interface

Wang, W., Kishore, V., Koens, L., Lauga, E., Sitti, M.

In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages: 1-9, 2018 (inproceedings)

[BibTex]


Learning 3D Shape Completion from Laser Scan Data with Weak Supervision

Stutz, D., Geiger, A.

In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), IEEE Computer Society, IEEE International Conference on Computer Vision and Pattern Recognition (CVPR) 2018, 2018 (inproceedings)

Abstract
3D shape completion from partial point clouds is a fundamental problem in computer vision and computer graphics. Recent approaches can be characterized as either data-driven or learning-based. Data-driven approaches rely on a shape model whose parameters are optimized to fit the observations. Learning-based approaches, in contrast, avoid the expensive optimization step and instead directly predict the complete shape from the incomplete observations using deep neural networks. However, full supervision is required which is often not available in practice. In this work, we propose a weakly-supervised learning-based approach to 3D shape completion which neither requires slow optimization nor direct supervision. While we also learn a shape prior on synthetic data, we amortize, i.e., learn, maximum likelihood fitting using deep neural networks resulting in efficient shape completion without sacrificing accuracy. Tackling 3D shape completion of cars on ShapeNet and KITTI, we demonstrate that the proposed amortized maximum likelihood approach is able to compete with a fully supervised baseline and a state-of-the-art data-driven approach while being significantly faster. On ModelNet, we additionally show that the approach is able to generalize to other object categories as well.

pdf suppmat Project Page Poster [BibTex]


Endo-VMFuseNet: A Deep Visual-Magnetic Sensor Fusion Approach for Endoscopic Capsule Robots

Turan, M., Almalioglu, Y., Gilbert, H. B., Sari, A. E., Soylu, U., Sitti, M.

In 2018 IEEE International Conference on Robotics and Automation (ICRA), pages: 1-7, 2018 (inproceedings)

[BibTex]


Endosensorfusion: Particle filtering-based multi-sensory data fusion with switching state-space model for endoscopic capsule robots

Turan, M., Almalioglu, Y., Gilbert, H., Araujo, H., Cemgil, T., Sitti, M.

In 2018 IEEE International Conference on Robotics and Automation (ICRA), pages: 1-8, 2018 (inproceedings)

[BibTex]


Learning Transformation Invariant Representations with Weak Supervision

Coors, B., Condurache, A., Mertins, A., Geiger, A.

In International Conference on Computer Vision Theory and Applications, International Conference on Computer Vision Theory and Applications, 2018 (inproceedings)

Abstract
Deep convolutional neural networks are the current state-of-the-art solution to many computer vision tasks. However, their ability to handle large global and local image transformations is limited. Consequently, extensive data augmentation is often utilized to incorporate prior knowledge about desired invariances to geometric transformations such as rotations or scale changes. In this work, we combine data augmentation with an unsupervised loss which enforces similarity between the predictions of augmented copies of an input sample. Our loss acts as an effective regularizer which facilitates the learning of transformation invariant representations. We investigate the effectiveness of the proposed similarity loss on rotated MNIST and the German Traffic Sign Recognition Benchmark (GTSRB) in the context of different classification models including ladder networks. Our experiments demonstrate improvements with respect to the standard data augmentation approach for supervised and semi-supervised learning tasks, in particular in the presence of little annotated data. In addition, we analyze the performance of the proposed approach with respect to its hyperparameters, including the strength of the regularization as well as the layer where representation similarity is enforced.
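The proposed similarity loss fits in a few lines: generate two augmented copies of each input and penalize the divergence between the model's predictions on them. A PyTorch sketch; the KL form, the stop-gradient on one branch, and the augmentation pipeline are illustrative choices, not necessarily the exact variant used in the paper.

```python
import torch
import torch.nn.functional as F

def similarity_loss(model, x_aug1, x_aug2):
    """Unsupervised consistency term between two augmented copies of an input."""
    log_p1 = F.log_softmax(model(x_aug1), dim=1)
    p2 = F.softmax(model(x_aug2), dim=1).detach()  # treat one copy as the target
    return F.kl_div(log_p1, p2, reduction="batchmean")

# total objective: supervised cross-entropy plus the weighted similarity term,
# e.g. loss = F.cross_entropy(model(x), y) + lam * similarity_loss(model, x1, x2)
```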

pdf [BibTex]

2016


Predictive and Self Triggering for Event-based State Estimation

Trimpe, S.

In Proceedings of the 55th IEEE Conference on Decision and Control (CDC), pages: 3098-3105, Las Vegas, NV, USA, December 2016 (inproceedings)

arXiv PDF DOI Project Page [BibTex]


Steering control of a water-running robot using an active tail

Kim, H., Jeong, K., Sitti, M., Seo, T.

In Intelligent Robots and Systems (IROS), 2016 IEEE/RSJ International Conference on, pages: 4945-4950, October 2016 (inproceedings)

Abstract
Many highly dynamic mobile robots have been developed that take inspiration from animals. In this study, we draw on a basilisk lizard's ability to run and steer on the water surface to design a hexapedal robot. The robot has an active tail with a circular plate, which it rotates to steer on water. We dynamically modeled the platform and conducted simulations and experiments on steering locomotion with a bang-bang controller. The robot can steer on water by rotating its tail, and the controlled steering locomotion is stable. The dynamic model approximates the robot's steering locomotion, and the trends of the simulations and experiments are similar, although there are errors between the desired and actual angles. The robot's maneuverability on water can be improved through further research.
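The steering law is a textbook bang-bang controller on the tail rotation. A minimal sketch under an assumed sign convention; the deadband and rate are illustrative values, not the paper's gains.

```python
# Bang-bang steering: rotate the tail at a fixed rate in the direction that
# reduces the heading error; do nothing inside a small deadband.
def bang_bang_tail(heading_err_rad, deadband=0.05, rate=1.0):
    """Return the tail rotation command from the heading error (radians)."""
    if heading_err_rad > deadband:
        return +rate   # assumed convention: positive rotation steers left
    if heading_err_rad < -deadband:
        return -rate
    return 0.0
```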

DOI [BibTex]


Targeting of cell mockups using sperm-shaped microrobots in vitro

Khalil, I. S., Tabak, A. F., Hosney, A., Klingner, A., Shalaby, M., Abdel-Kader, R. M., Serry, M., Sitti, M.

In Biomedical Robotics and Biomechatronics (BioRob), 2016 6th IEEE International Conference on, pages: 495-501, July 2016 (inproceedings)

Abstract
Sperm-shaped microrobots are controlled under the influence of weak oscillating magnetic fields (milliTesla range) to selectively target cell mockups (i.e., gas bubbles with average diameter of 200 μm). The sperm-shaped microrobots are fabricated by electrospinning using a solution of polystyrene, dimethylformamide, and iron oxide nanoparticles. These nanoparticles are concentrated within the head of the microrobot, and hence enable directional control along external magnetic fields. The magnetic dipole moment of the microrobot is characterized (using the flip-time technique) to be 1.4×10⁻¹¹ A·m², at a magnetic field of 28 mT. In addition, the morphology of the microrobot is characterized using Scanning Electron Microscopy images. The characterized parameters and morphology are used in the simulation of the locomotion mechanism of the microrobot to prove that its motion depends on breaking the time-reversal symmetry, rather than pulling with the magnetic field gradient. We experimentally demonstrate that the microrobot can controllably follow S-shaped, U-shaped, and square paths, and selectively target the cell mockups using image guidance and under the influence of the oscillating magnetic fields.

DOI [BibTex]


Soft continuous microrobots with multiple intrinsic degrees of freedom

Palagi, S., Mark, A. G., Melde, K., Zeng, H., Parmeggiani, C., Martella, D., Wiersma, D. S., Fischer, P.

In 2016 International Conference on Manipulation, Automation and Robotics at Small Scales (MARSS), pages: 1-5, July 2016 (inproceedings)

Abstract
One of the main challenges in the development of microrobots, i.e. robots at the sub-millimeter scale, is the difficulty of adopting traditional solutions for power, control and, especially, actuation. As a result, most current microrobots are directly manipulated by external fields, and possess only a few passive degrees of freedom (DOFs). We have reported a strategy that enables embodiment, remote powering and control of a large number of DOFs in mobile soft microrobots. These consist of photo-responsive materials, such that the actuation of their soft continuous body can be selectively and dynamically controlled by structured light fields. Here we use finite-element modelling to evaluate the effective number of DOFs that are addressable in our microrobots. We also demonstrate that by this flexible approach different actuation patterns can be obtained, and thus different locomotion performances can be achieved within the very same microrobot. The reported results confirm the versatility of the proposed approach, which allows for easy application-specific optimization and online reconfiguration of the microrobot's behavior. Such versatility will enable advanced applications of robotics and automation at the micro scale.

DOI [BibTex]


Analysis of the magnetic torque on a tilted permanent magnet for drug delivery in capsule robots

Munoz, F., Alici, G., Zhou, H., Li, W., Sitti, M.

In Advanced Intelligent Mechatronics (AIM), 2016 IEEE International Conference on, pages: 1386-1391, July 2016 (inproceedings)

Abstract
In this paper, we present the analysis of the torque transmitted to a tilted permanent magnet that is to be embedded in a capsule robot to achieve targeted drug delivery. This analysis is carried out by using an analytical model and experimental results for a small cubic permanent magnet that is driven by an external magnetic system made of an array of arc-shaped permanent magnets (ASMs). Our experimental results, which are in agreement with the analytical results, show that the cubic permanent magnet can safely be actuated for inclinations lower than 75° without having to make positional adjustments in the external magnetic system. We have found that with further inclinations, the cubic permanent magnet to be embedded in a drug delivery mechanism may stall. When it stalls, the external magnetic system's position and orientation would have to be adjusted to actuate the cubic permanent magnet and the drug release mechanism. This analysis of the transmitted torque is helpful for the development of real-time control strategies for magnetically articulated devices.

DOI [BibTex]


Wireless actuator based on ultrasonic bubble streaming

Qiu, T., Palagi, S., Mark, A. G., Melde, K., Fischer, P.

In 2016 International Conference on Manipulation, Automation and Robotics at Small Scales (MARSS), pages: 1-5, July 2016 (inproceedings)

Abstract
Miniaturized actuators are a key element for manipulation and automation at small scales. Here, we propose a new miniaturized actuator, which consists of an array of micro gas bubbles immersed in a fluid. Under ultrasonic excitation, the oscillation of the micro gas bubbles results in acoustic streaming and provides a propulsive force that drives the actuator. The actuator was fabricated by lithography and fluidic streaming was observed under ultrasound excitation. Theoretical modelling and numerical simulations were carried out to show that lowering the surface tension results in a larger amplitude of the bubble oscillation, and thus leads to a higher propulsive force. Experimental results also demonstrate that the propulsive force increases 3.5-fold when the surface tension is lowered by adding a surfactant. An actuator with a 4×4 mm² surface area provides a driving force of about 0.46 mN, suggesting that it can be used as a wireless actuator for small-scale robots and medical instruments.

link (url) DOI [BibTex]


Robust Gaussian Filtering using a Pseudo Measurement

Wüthrich, M., Garcia Cifuentes, C., Trimpe, S., Meier, F., Bohg, J., Issac, J., Schaal, S.

In Proceedings of the American Control Conference (ACC), Boston, MA, USA, July 2016 (inproceedings)

Abstract
Most widely-used state estimation algorithms, such as the Extended Kalman Filter and the Unscented Kalman Filter, belong to the family of Gaussian Filters (GF). Unfortunately, GFs fail if the measurement process is modelled by a fat-tailed distribution. This is a severe limitation, because thin-tailed measurement models, such as the analytically-convenient and therefore widely-used Gaussian distribution, are sensitive to outliers. In this paper, we show that mapping the measurements into a specific feature space enables any existing GF algorithm to work with fat-tailed measurement models. We find a feature function which is optimal under certain conditions. Simulation results show that the proposed method allows for robust filtering in both linear and nonlinear systems with measurements contaminated by fat-tailed noise.
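The paper's contribution is the feature (pseudo) measurement itself, which is not reproduced here. As a flavor of the resulting behavior, the sketch below applies a Kalman-style update with a saturated innovation, a simple bounded-influence heuristic in the same spirit, not the optimal feature function derived in the paper.

```python
import numpy as np

def robust_update(mu, P, y, H, R, clip=3.0):
    """Kalman-style measurement update whose innovation has bounded influence."""
    nu = np.clip(y - H @ mu, -clip, clip)   # saturate outlier innovations
    S = H @ P @ H.T + R                     # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    mu_new = mu + K @ nu
    P_new = (np.eye(len(mu)) - K @ H) @ P
    return mu_new, P_new
```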

Web link (url) DOI Project Page [BibTex]


Patches, Planes and Probabilities: A Non-local Prior for Volumetric 3D Reconstruction

Ulusoy, A. O., Black, M. J., Geiger, A.

In IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), June 2016 (inproceedings)

Abstract
In this paper, we propose a non-local structured prior for volumetric multi-view 3D reconstruction. Towards this goal, we present a novel Markov random field model based on ray potentials in which assumptions about large 3D surface patches such as planarity or Manhattan world constraints can be efficiently encoded as probabilistic priors. We further derive an inference algorithm that reasons jointly about voxels, pixels and image segments, and estimates marginal distributions of appearance, occupancy, depth, normals and planarity. Key to tractable inference is a novel hybrid representation that spans both voxel and pixel space and that integrates non-local information from 2D image segmentations in a principled way. We compare our non-local prior to commonly employed local smoothness assumptions and a variety of state-of-the-art volumetric reconstruction baselines on challenging outdoor scenes with textureless and reflective surfaces. Our experiments indicate that regularizing over larger distances has the potential to resolve ambiguities where local regularizers fail.

YouTube pdf poster suppmat Project Page [BibTex]


Semantic Instance Annotation of Street Scenes by 3D to 2D Label Transfer

Xie, J., Kiefel, M., Sun, M., Geiger, A.

In IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), June 2016 (inproceedings)

Abstract
Semantic annotations are vital for training models for object recognition, semantic segmentation or scene understanding. Unfortunately, pixelwise annotation of images at very large scale is labor-intensive and little labeled data is available, particularly at instance level and for street scenes. In this paper, we propose to tackle this problem by lifting the semantic instance labeling task from 2D into 3D. Given reconstructions from stereo or laser data, we annotate static 3D scene elements with rough bounding primitives and develop a probabilistic model which transfers this information into the image domain. We leverage our method to obtain 2D labels for a novel suburban video dataset which we have collected, resulting in 400k semantic and instance image annotations. A comparison of our method to state-of-the-art label transfer baselines reveals that 3D information enables more efficient annotation while at the same time resulting in improved accuracy and time-coherent labels.

pdf suppmat Project Page [BibTex]


Automatic LQR Tuning Based on Gaussian Process Global Optimization

Marco, A., Hennig, P., Bohg, J., Schaal, S., Trimpe, S.

In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), pages: 270-277, IEEE, IEEE International Conference on Robotics and Automation, May 2016 (inproceedings)

Abstract
This paper proposes an automatic controller tuning framework based on linear optimal control combined with Bayesian optimization. With this framework, an initial set of controller gains is automatically improved according to a pre-defined performance objective evaluated from experimental data. The underlying Bayesian optimization algorithm is Entropy Search, which represents the latent objective as a Gaussian process and constructs an explicit belief over the location of the objective minimum. This is used to maximize the information gain from each experimental evaluation. Thus, this framework shall yield improved controllers with fewer evaluations compared to alternative approaches. A seven-degree-of-freedom robot arm balancing an inverted pole is used as the experimental demonstrator. Results of two- and four-dimensional tuning problems highlight the method's potential for automatic controller tuning on robotic platforms.
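The inner loop being tuned is an ordinary LQR design: each candidate parameter vector is mapped to cost matrices, the Riccati equation is solved, and the resulting gain is evaluated on the plant. A minimal sketch with SciPy; the toy dynamics and the diagonal parameterization of Q are illustrative assumptions, and the outer Entropy Search loop is omitted.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

def lqr_gain(A, B, Q, R):
    """Discrete-time LQR gain for u = -K x via the Riccati equation."""
    P = solve_discrete_are(A, B, Q, R)
    return np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # toy double-integrator dynamics
B = np.array([[0.0], [0.1]])
theta = np.array([10.0, 1.0])            # tuning parameters -> state weights
K = lqr_gain(A, B, np.diag(theta), np.eye(1))
print("LQR gain:", K)
```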

Video - Automatic LQR Tuning Based on Gaussian Process Global Optimization - ICRA 2016 Video - Automatic Controller Tuning on a Two-legged Robot PDF DOI Project Page [BibTex]


Depth-based Object Tracking Using a Robust Gaussian Filter

Issac, J., Wüthrich, M., Garcia Cifuentes, C., Bohg, J., Trimpe, S., Schaal, S.

In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA) 2016, IEEE, IEEE International Conference on Robotics and Automation, May 2016 (inproceedings)

Abstract
We consider the problem of model-based 3D tracking of objects given dense depth images as input. Two difficulties preclude the application of a standard Gaussian filter to this problem. First of all, depth sensors are characterized by fat-tailed measurement noise. To address this issue, we show how a recently published robustification method for Gaussian filters can be applied to the problem at hand. Thereby, we avoid using heuristic outlier detection methods that simply reject measurements if they do not match the model. Secondly, the computational cost of the standard Gaussian filter is prohibitive due to the high-dimensional measurement, i.e., the depth image. To address this problem, we propose an approximation to reduce the computational complexity of the filter. In quantitative experiments on real data we show how our method clearly outperforms the standard Gaussian filter. Furthermore, we compare its performance to a particle-filter-based tracking method, and observe comparable computational efficiency and improved accuracy and smoothness of the estimates.

Video Bayesian Object Tracking Library Bayesian Filtering Framework Object Tracking Dataset link (url) DOI Project Page [BibTex]


Sperm-shaped magnetic microrobots: Fabrication using electrospinning, modeling, and characterization

Khalil, I. S., Tabak, A. F., Hosney, A., Mohamed, A., Klingner, A., Ghoneima, M., Sitti, M.

In Robotics and Automation (ICRA), 2016 IEEE International Conference on, pages: 1939-1944, May 2016 (inproceedings)

Abstract
We use electrospinning to fabricate sperm-shaped magnetic microrobots with a range of diameters from 50 μm to 500 μm. The variables of the electrospinning operation (voltage, concentration of the solution, dynamic viscosity, and distance between the syringe needle and collector) to achieve the beading effect are determined. This beading effect allows us to fabricate microrobots with similar morphology to that of sperm cells. The bead and the ultra-fine fiber resemble the morphology of the head and tail of the sperm cell, respectively. We incorporate iron oxide nanoparticles into the head of the sperm-shaped microrobot to provide a magnetic dipole moment. This dipole enables directional control under the influence of external magnetic fields. We also apply weak (less than 2 mT) oscillating magnetic fields to exert a magnetic torque on the magnetic head, and generate planar flagellar waves and flagellated swimming. The average speed of the sperm-shaped microrobot is calculated to be 0.5 body lengths per second and 1 body length per second at frequencies of 5 Hz and 10 Hz, respectively. We also develop a model of the microrobot using an elastohydrodynamic approach and Timoshenko-Rayleigh beam theory, and find good agreement with the experimental results.

DOI [BibTex]


Communication Rate Analysis for Event-based State Estimation

(Best student paper finalist)

Ebner, S., Trimpe, S.

In Proceedings of the 13th International Workshop on Discrete Event Systems, May 2016 (inproceedings)

PDF DOI [BibTex]


Auxetic Metamaterial Simplifies Soft Robot Design

Mark, A. G., Palagi, S., Qiu, T., Fischer, P.

In 2016 IEEE Int. Conf. on Robotics and Automation (ICRA), pages: 4951-4956, May 2016 (inproceedings)

Abstract
Soft materials are being adopted in robotics in order to facilitate biomedical applications and in order to achieve simpler and more capable robots. One route to simplification is to design the robot's body using 'smart materials' that carry the burden of control and actuation. Metamaterials enable just such rational design of the material properties. Here we present a soft robot that exploits mechanical metamaterials for the intrinsic synchronization of two passive clutches which contact its travel surface. Doing so allows it to move through an enclosed passage with an inchworm motion propelled by a single actuator. Our soft robot consists of two 3D-printed metamaterials that implement auxetic and normal elastic properties. The design, fabrication and characterization of the metamaterials are described. In addition, a working soft robot is presented. Since the synchronization mechanism is a feature of the robot's material body, we believe that the proposed design will enable compliant and robust implementations that scale well with miniaturization.

link (url) DOI [BibTex]


Towards Photo-Induced Swimming: Actuation of Liquid Crystalline Elastomer in Water

Cerretti, G., Martella, D., Zeng, H., Parmeggiani, C., Palagi, S., Mark, A. G., Melde, K., Qiu, T., Fischer, P., Wiersma, D.

In Proc. of SPIE 9738, Laser 3D Manufacturing III, 97380T, April 2016 (inproceedings)

Abstract
Liquid Crystalline Elastomers (LCEs) are very promising smart materials that can be made sensitive to different external stimuli, such as heat, pH, humidity and light, by changing their chemical composition. In this paper we report the implementation of a nematically aligned LCE actuator able to undergo large light-induced deformations. We prove that this property is still present even when the actuator is submerged in fresh water. Thanks to the presence of azo-dye moieties, capable of going through a reversible trans-cis photo-isomerization, and by applying light with two different wavelengths, we managed to control the bending of such an actuator in the liquid environment. The reported results represent the first step towards swimming microdevices powered by light.

link (url) DOI [BibTex]


Deep Discrete Flow

Güney, F., Geiger, A.

Asian Conference on Computer Vision (ACCV), 2016 (conference), accepted

pdf suppmat Project Page [BibTex]

2014


Omnidirectional 3D Reconstruction in Augmented Manhattan Worlds

Schoenbein, M., Geiger, A.

International Conference on Intelligent Robots and Systems, pages: 716-723, IEEE, Chicago, IL, USA, IEEE/RSJ International Conference on Intelligent Robots and Systems, October 2014 (conference)

Abstract
This paper proposes a method for high-quality omnidirectional 3D reconstruction of augmented Manhattan worlds from catadioptric stereo video sequences. In contrast to existing works we do not rely on constructing virtual perspective views, but instead propose to optimize depth jointly in a unified omnidirectional space. Furthermore, we show that plane-based prior models can be applied even though planes in 3D do not project to planes in the omnidirectional domain. Towards this goal, we propose an omnidirectional slanted-plane Markov random field model which relies on plane hypotheses extracted using a novel voting scheme for 3D planes in omnidirectional space. To quantitatively evaluate our method we introduce a dataset which we have captured using our autonomous driving platform AnnieWAY which we equipped with two horizontally aligned catadioptric cameras and a Velodyne HDL-64E laser scanner for precise ground truth depth measurements. As evidenced by our experiments, the proposed method clearly benefits from the unified view and significantly outperforms existing stereo matching techniques both quantitatively and qualitatively. Furthermore, our method is able to reduce noise and the obtained depth maps can be represented very compactly by a small number of image segments and plane parameters.

pdf DOI [BibTex]


Geckogripper: A soft, inflatable robotic gripper using gecko-inspired elastomer micro-fiber adhesives

Song, S., Majidi, C., Sitti, M.

In Intelligent Robots and Systems (IROS 2014), 2014 IEEE/RSJ International Conference on, pages: 4624-4629, September 2014 (inproceedings)

Abstract
This paper proposes GeckoGripper, a novel soft, inflatable gripper based on the controllable adhesion mechanism of gecko-inspired micro-fiber adhesives, to pick-and-place complex and fragile non-planar or planar parts serially or in parallel. Unlike previous fibrillar structures that use peel angle to control the manipulation of parts, we developed an elastomer micro-fiber adhesive that is fabricated on a soft, flexible membrane, increasing the adaptability to non-planar three-dimensional (3D) geometries and controllability in adhesion. The adhesive switching ratio (the ratio between the maximum and minimum adhesive forces) of the developed gripper was measured to be around 204, which is superior to previous works based on peel angle-based release control methods. Adhesion control mechanism based on the stretch of the membrane and superior adaptability to non-planar 3D geometries enable the micro-fibers to pick-and-place various 3D parts as shown in demonstrations.

DOI [BibTex]


Simultaneous Underwater Visibility Assessment, Enhancement and Improved Stereo

Roser, M., Dunbabin, M., Geiger, A.

IEEE International Conference on Robotics and Automation, pages: 3840-3847, Hong Kong, China, IEEE International Conference on Robotics and Automation, June 2014 (conference)

Abstract
Vision-based underwater navigation and obstacle avoidance demands robust computer vision algorithms, particularly for operation in turbid water with reduced visibility. This paper describes a novel method for the simultaneous underwater image quality assessment, visibility enhancement and disparity computation to increase stereo range resolution under dynamic, natural lighting and turbid conditions. The technique estimates the visibility properties from a sparse 3D map of the original degraded image using a physical underwater light attenuation model. Firstly, an iterated distance-adaptive image contrast enhancement enables a dense disparity computation and visibility estimation. Secondly, using a light attenuation model for ocean water, a color corrected stereo underwater image is obtained along with a visibility distance estimate. Experimental results in shallow, naturally lit, high-turbidity coastal environments show the proposed technique improves range estimation over the original images as well as image quality and color for habitat classification. Furthermore, the recursiveness and robustness of the technique allow real-time implementation onboard an Autonomous Underwater Vehicle for improved navigation and obstacle avoidance performance.
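The physical model at the heart of the method is the standard attenuation model for ocean water: the observed intensity mixes scene radiance attenuated with distance and backscattered veiling light. A minimal inversion sketch in NumPy; the per-channel attenuation coefficients and veiling-light color below are placeholders, whereas the paper estimates the visibility properties from a sparse 3D map.

```python
import numpy as np

def restore(I, depth, beta, B_inf):
    """Invert I = J*exp(-beta*d) + B_inf*(1 - exp(-beta*d)), per color channel."""
    t = np.exp(-beta[None, None, :] * depth[:, :, None])     # transmission map
    J = (I - B_inf[None, None, :] * (1.0 - t)) / np.maximum(t, 1e-3)
    return np.clip(J, 0.0, 1.0)

# illustrative call on a synthetic 4x4 RGB image with a constant 5 m depth map
I = np.random.rand(4, 4, 3)
J = restore(I, np.full((4, 4), 5.0),
            np.array([0.40, 0.20, 0.10]),   # assumed attenuation per channel
            np.array([0.10, 0.30, 0.40]))   # assumed veiling-light color
```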

pdf DOI [BibTex]


Calibrating and Centering Quasi-Central Catadioptric Cameras

Schoenbein, M., Strauss, T., Geiger, A.

IEEE International Conference on Robotics and Automation, pages: 4443-4450, Hong Kong, China, IEEE International Conference on Robotics and Automation, June 2014 (conference)

Abstract
Non-central catadioptric models are able to cope with irregular camera setups and inaccuracies in the manufacturing process but are computationally demanding and thus not suitable for robotic applications. On the other hand, calibrating a quasi-central (almost central) system with a central model introduces errors due to a wrong relationship between the viewing ray orientations and the pixels on the image sensor. In this paper, we propose a central approximation to quasi-central catadioptric camera systems that is both accurate and efficient. We observe that the distance to points in 3D is typically large compared to deviations from the single viewpoint. Thus, we first calibrate the system using a state-of-the-art non-central camera model. Next, we show that by remapping the observations we are able to match the orientation of the viewing rays of a much simpler single viewpoint model with the true ray orientations. While our approximation is general and applicable to all quasi-central camera systems, we focus on one of the most common cases in practice: hypercatadioptric cameras. We compare our model to a variety of baselines in synthetic and real localization and motion estimation experiments. We show that by using the proposed model we are able to achieve near non-central accuracy while obtaining speed-ups of more than three orders of magnitude compared to state-of-the-art non-central models.

pdf DOI [BibTex]


3D nanofabrication on complex seed shapes using glancing angle deposition

Hyeon-Ho, J., Mark, A. G., Gibbs, J. G., Reindl, T., Waizmann, U., Weis, J., Fischer, P.

In 2014 IEEE 27th International Conference on Micro Electro Mechanical Systems (MEMS), pages: 437-440, January 2014 (inproceedings)

Abstract
Three-dimensional (3D) fabrication techniques promise new device architectures and enable the integration of more components, but fabricating 3D nanostructures for device applications remains challenging. Recently, we have performed glancing angle deposition (GLAD) upon a nanoscale hexagonal seed array to create a variety of 3D nanoscale objects including multicomponent rods, helices, and zigzags [1]. Here, in an effort to generalize our technique, we present a step-by-step approach to grow 3D nanostructures on more complex nanoseed shapes and configurations than before. This approach allows us to create 3D nanostructures on nanoseeds regardless of seed sizes and shapes.

DOI [BibTex]


A Self-Tuning LQR Approach Demonstrated on an Inverted Pendulum

Trimpe, S., Millane, A., Doessegger, S., D’Andrea, R.

In Proceedings of the 19th IFAC World Congress, Cape Town, South Africa, 2014 (inproceedings)

PDF Supplementary material DOI [BibTex]


Active Microrheology of the Vitreous of the Eye applied to Nanorobot Propulsion

Qiu, T., Schamel, D., Mark, A. G., Fischer, P.

In 2014 IEEE International Conference on Robotics and Automation (ICRA), pages: 3801-3806, IEEE International Conference on Robotics and Automation (ICRA), 2014 (inproceedings)

Abstract
Biomedical applications of micro or nanorobots require active movement through complex biological fluids. These are generally non-Newtonian (viscoelastic) fluids that are characterized by complicated networks of macromolecules that have size-dependent rheological properties. It has been suggested that an untethered microrobot could assist in retinal surgical procedures. To do this it must navigate the vitreous humor, a hydrated double network of collagen fibrils and high molecular-weight, polyanionic hyaluronan macromolecules. Here, we examine the characteristic size that potential robots must have to traverse vitreous relatively unhindered. We have constructed magnetic tweezers that provide a large gradient of up to 320 T/m to pull sub-micron paramagnetic beads through biological fluids. A novel two-step electrical discharge machining (EDM) approach is used to construct the tips of the magnetic tweezers with a resolution of 30 μm and a high aspect ratio of ~17:1 that restricts the magnetic field gradient to the plane of observation. We report measurements on porcine vitreous. In agreement with structural data and passive Brownian diffusion studies we find that the unhindered active propulsion through the eye calls for nanorobots with cross-sections of less than 500 nm.

Best Automation Paper Award – Finalist.

[BibTex]


Stability Analysis of Distributed Event-Based State Estimation

Trimpe, S.

In Proceedings of the 53rd IEEE Conference on Decision and Control, Los Angeles, CA, 2014 (inproceedings)

Abstract
An approach for distributed and event-based state estimation that was proposed in previous work [1] is analyzed and extended to practical networked systems in this paper. Multiple sensor-actuator-agents observe a dynamic process, sporadically exchange their measurements over a broadcast network according to an event-based protocol, and estimate the process state from the received data. The event-based approach was shown in [1] to mimic a centralized Luenberger observer up to guaranteed bounds, under the assumption of identical estimates on all agents. This assumption, however, is unrealistic (it is violated by a single packet drop or slight numerical inaccuracy) and removed herein. By means of a simulation example, it is shown that non-identical estimates can actually destabilize the overall system. To achieve stability, the event-based communication scheme is supplemented by periodic (but infrequent) exchange of the agents' estimates and reset to their joint average. When the local estimates are used for feedback control, the stability guarantee for the estimation problem extends to the event-based control system.
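The protocol described above reduces to a simple trigger plus the periodic averaging that the paper adds for stability. A schematic sketch; broadcast() is a hypothetical network primitive, and the threshold and reset period are illustrative values.

```python
import numpy as np

def agent_step(est, y, k, broadcast, peers=None, delta=0.5, reset_every=50):
    """One time step of an event-based estimation agent (schematic)."""
    if abs(y - est) > delta:          # event trigger: measurement is 'surprising'
        broadcast(y)                  # hypothetical network primitive
    if k % reset_every == 0 and peers is not None:
        est = float(np.mean(peers))   # periodic reset to the agents' joint average
    return est
```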

PDF Supplementary material DOI Project Page [BibTex]