

2020


Selectively Controlled Magnetic Microrobots with Opposing Helices

Joshua, , Wendong, , Panayiota, , Eric, , Sitti,

2020 (article) Accepted

pi

[BibTex]



Fabrication and temperature-dependent magnetic properties of large-area L10-FePt/Co exchange-spring magnet nanopatterns

Son, K., Schütz, G.

Physica E: Low-Dimensional Systems and Nanostructures, 115, North-Holland, Amsterdam, 2020 (article)

mms

DOI [BibTex]

General Movement Assessment from videos of computed 3D infant body models is equally effective compared to conventional RGB Video rating

Schroeder, S., Hesse, N., Weinberger, R., Tacke, U., Gerstl, L., Hilgendorff, A., Heinen, F., Arens, M., Bodensteiner, C., Dijkstra, L. J., Pujades, S., Black, M., Hadders-Algra, M.

Early Human Development, 2020 (article)

Abstract
Background: General Movement Assessment (GMA) is a powerful tool to predict Cerebral Palsy (CP). Yet, GMA requires substantial training hampering its implementation in clinical routine. This inspired a world-wide quest for automated GMA. Aim: To test whether a low-cost, marker-less system for three-dimensional motion capture from RGB depth sequences using a whole body infant model may serve as the basis for automated GMA. Study design: Clinical case study at an academic neurodevelopmental outpatient clinic. Subjects: Twenty-nine high-risk infants were recruited and assessed at their clinical follow-up at 2-4 month corrected age (CA). Their neurodevelopmental outcome was assessed regularly up to 12-31 months CA. Outcome measures: GMA according to Hadders-Algra by a masked GMA-expert of conventional and computed 3D body model (“SMIL motion”) videos of the same GMs. Agreement between both GMAs was assessed, and sensitivity and specificity of both methods to predict CP at ≥12 months CA. Results: The agreement of the two GMA ratings was substantial, with κ=0.66 for the classification of definitely abnormal (DA) GMs and an ICC of 0.887 (95% CI 0.762;0.947) for a more detailed GM-scoring. Five children were diagnosed with CP (four bilateral, one unilateral CP). The GMs of the child with unilateral CP were twice rated as mildly abnormal. DA-ratings of both videos predicted bilateral CP well: sensitivity 75% and 100%, specificity 88% and 92% for conventional and SMIL motion videos, respectively. Conclusions: Our computed infant 3D full body model is an attractive starting point for automated GMA in infants at risk of CP.

ps

[BibTex]

Acoustically powered surface-slipping mobile microrobots

Aghakhani, A., Yasa, O., Wrede, P., Sitti, M.

Proceedings of the National Academy of Sciences, 117, National Acad Sciences, 2020 (article)

Abstract
Untethered synthetic microrobots have significant potential to revolutionize minimally invasive medical interventions in the future. However, their relatively slow speed and low controllability near surfaces typically are some of the barriers standing in the way of their medical applications. Here, we introduce acoustically powered microrobots with a fast, unidirectional surface-slipping locomotion on both flat and curved surfaces. The proposed three-dimensionally printed, bullet-shaped microrobot contains a spherical air bubble trapped inside its internal body cavity, where the bubble is resonated using acoustic waves. The net fluidic flow due to the bubble oscillation orients the microrobot's axisymmetric axis perpendicular to the wall and then propels it laterally at very high speeds (up to 90 body lengths per second with a body length of 25 µm) while inducing an attractive force toward the wall. To achieve unidirectional locomotion, a small fin is added to the microrobot’s cylindrical body surface, which biases the propulsion direction. For motion direction control, the microrobots are coated anisotropically with a soft magnetic nanofilm layer, allowing steering under a uniform magnetic field. Finally, surface locomotion capability of the microrobots is demonstrated inside a three-dimensional circular cross-sectional microchannel under acoustic actuation. Overall, the combination of acoustic powering and magnetic steering can be effectively utilized to actuate and navigate these microrobots in confined and hard-to-reach body location areas in a minimally invasive fashion.

pi

[BibTex]

Bio-inspired Flexible Twisting Wings Increase Lift and Efficiency of a Flapping Wing Micro Air Vehicle

Colmenares, D., Kania, R., Zhang, W., Sitti, M.

arXiv preprint arXiv:2001.11586, 2020 (article)

Abstract
We investigate the effect of wing twist flexibility on lift and efficiency of a flapping-wing micro air vehicle capable of liftoff. Wings used previously were chosen to be fully rigid due to modeling and fabrication constraints. However, biological wings are highly flexible and other micro air vehicles have successfully utilized flexible wing structures for specialized tasks. The goal of our study is to determine if dynamic twisting of flexible wings can increase overall aerodynamic lift and efficiency. A flexible twisting wing design was found to increase aerodynamic efficiency by 41.3%, translational lift production by 35.3%, and the effective lift coefficient by 63.7% compared to the rigid-wing design. These results exceed the predictions of quasi-steady blade element models, indicating the need for unsteady computational fluid dynamics simulations of twisted flapping wings.

pi

[BibTex]

Cohesive self-organization of mobile microrobotic swarms

Yigit, B., Alapan, Y., Sitti, M.

arXiv preprint arXiv:1907.05856, 2020 (article)

pi

[BibTex]

Physical Variables Underlying Tactile Stickiness during Fingerpad Detachment

Nam, S., Vardar, Y., Gueorguiev, D., Kuchenbecker, K. J.

Frontiers in Neuroscience, 2020 (article) Accepted

Abstract
One may notice a relatively wide range of tactile sensations even when touching the same hard, flat surface in similar ways. Little is known about the reasons for this variability, so we decided to investigate how the perceptual intensity of light stickiness relates to the physical interaction between the skin and the surface. We conducted a psychophysical experiment in which nine participants actively pressed their finger on a flat glass plate with a normal force close to 1.5 N and detached it after a few seconds. A custom-designed apparatus recorded the contact force vector and the finger contact area during each interaction as well as pre- and post-trial finger moisture. After detaching their finger, participants judged the stickiness of the glass using a nine-point scale. We explored how sixteen physical variables derived from the recorded data correlate with each other and with the stickiness judgments of each participant. These analyses indicate that stickiness perception mainly depends on the pre-detachment pressing duration, the time taken for the finger to detach, and the impulse in the normal direction after the normal force changes sign; finger-surface adhesion seems to build with pressing time, causing a larger normal impulse during detachment and thus a more intense stickiness sensation. We additionally found a strong between-subjects correlation between maximum real contact area and peak pull-off force, as well as between finger moisture and impulse.

hi

[BibTex]


Planning from Images with Deep Latent Gaussian Process Dynamics

Bosch, N., Achterhold, J., Leal-Taixe, L., Stückler, J.

2nd Annual Conference on Learning for Dynamics and Control (L4DC), 2020 (conference) Accepted

ev

[BibTex]

Practical Accelerated Optimization on Riemannian Manifolds

Alimisis, F., Orvieto, A., Becigneul, G., Lucchi, A.

37th International Conference on Machine Learning (ICML), 2020 (conference) Submitted

ei

[BibTex]

Exploring Data Aggregation in Policy Learning for Vision-based Urban Autonomous Driving

Prakash, A., Behl, A., Ohn-Bar, E., Chitta, K., Geiger, A.

In Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), IEEE International Conference on Computer Vision and Pattern Recognition (CVPR) 2020, 2020 (inproceedings)

Abstract
Data aggregation techniques can significantly improve vision-based policy learning within a training environment, e.g., learning to drive in a specific simulation condition. However, as on-policy data is sequentially sampled and added in an iterative manner, the policy can specialize and overfit to the training conditions. For real-world applications, it is useful for the learned policy to generalize to novel scenarios that differ from the training conditions. To improve policy learning while maintaining robustness when training end-to-end driving policies, we perform an extensive analysis of data aggregation techniques in the CARLA environment. We demonstrate how the majority of them have poor generalization performance, and develop a novel approach with empirically better generalization performance compared to existing techniques. Our two key ideas are (1) to sample critical states from the collected on-policy data based on the utility they provide to the learned policy in terms of driving behavior, and (2) to incorporate a replay buffer which progressively focuses on the high uncertainty regions of the policy's state distribution. We evaluate the proposed approach on the CARLA NoCrash benchmark, focusing on the most challenging driving scenarios with dense pedestrian and vehicle traffic. Our approach improves driving success rate by 16% over state-of-the-art, achieving 87% of the expert performance while also reducing the collision rate by an order of magnitude without the use of any additional modality, auxiliary tasks, architectural modifications or reward from the environment.
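A minimal sketch of the kind of aggregation loop the abstract describes (roll out the current policy, label visited states with an expert, and bias replay toward high-uncertainty states); the toy policy, expert, and uncertainty score below are illustrative stand-ins under assumed names, not the authors' implementation:

# Hypothetical DAgger-style aggregation with an uncertainty-prioritized replay buffer.
import numpy as np

rng = np.random.default_rng(0)

def expert_action(state):
    # Stand-in for the privileged expert that labels on-policy states.
    return float(np.tanh(state.sum()))

def policy_action_and_uncertainty(state, weights):
    # Toy linear policy; "uncertainty" here is a placeholder disagreement score.
    action = float(state @ weights)
    return action, abs(expert_action(state) - action)

buffer = []            # (state, expert label, uncertainty) tuples
weights = np.zeros(4)  # toy policy parameters

for iteration in range(5):
    # 1) Collect on-policy states and label them with the expert.
    for _ in range(100):
        state = rng.normal(size=4)
        _, unc = policy_action_and_uncertainty(state, weights)
        buffer.append((state, expert_action(state), unc))

    # 2) Sample critical states: replay probability grows with uncertainty.
    unc_all = np.array([b[2] for b in buffer])
    probs = unc_all / unc_all.sum() if unc_all.sum() > 0 else np.full(len(buffer), 1.0 / len(buffer))
    idx = rng.choice(len(buffer), size=min(200, len(buffer)), p=probs)
    states = np.stack([buffer[i][0] for i in idx])
    targets = np.array([buffer[i][1] for i in idx])

    # 3) Refit the policy on the prioritized sample (least squares here).
    weights, *_ = np.linalg.lstsq(states, targets, rcond=None)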

avg

pdf suppmat Video Project Page [BibTex]

Electronics, Software and Analysis of a Bioinspired Sensorized Quadrupedal Robot

Petereit, R.

Technische Universität München, 2020 (mastersthesis)

dlg

[BibTex]


Visual-Inertial Mapping with Non-Linear Factor Recovery

Usenko, V., Demmel, N., Schubert, D., Stückler, J., Cremers, D.

IEEE Robotics and Automation Letters (RA-L), 5, 2020, accepted for presentation at IEEE International Conference on Robotics and Automation (ICRA) 2020, to appear, arXiv:1904.06504 (article)

Abstract
Cameras and inertial measurement units are complementary sensors for ego-motion estimation and environment mapping. Their combination makes visual-inertial odometry (VIO) systems more accurate and robust. For globally consistent mapping, however, combining visual and inertial information is not straightforward. To estimate the motion and geometry with a set of images large baselines are required. Because of that, most systems operate on keyframes that have large time intervals between each other. Inertial data on the other hand quickly degrades with the duration of the intervals and after several seconds of integration, it typically contains only little useful information. In this paper, we propose to extract relevant information for visual-inertial mapping from visual-inertial odometry using non-linear factor recovery. We reconstruct a set of non-linear factors that make an optimal approximation of the information on the trajectory accumulated by VIO. To obtain a globally consistent map we combine these factors with loop-closing constraints using bundle adjustment. The VIO factors make the roll and pitch angles of the global map observable, and improve the robustness and the accuracy of the mapping. In experiments on a public benchmark, we demonstrate superior performance of our method over the state-of-the-art approaches.

ev

[BibTex]

Spatial Scheduling of Informative Meetings for Multi-Agent Persistent Coverage

Haksar, R. N., Trimpe, S., Schwager, M.

IEEE Robotics and Automation Letters, 2020 (article) Accepted

ics

DOI [BibTex]

Bioinspired underwater locomotion of light-driven liquid crystal gels

Shahsavan, H., Aghakhani, A., Zeng, H., Guo, Y., Davidson, Z. S., Priimagi, A., Sitti, M.

Proceedings of the National Academy of Sciences, National Acad Sciences, 2020 (article)

Abstract
Untethered dynamic shape programming and control of soft materials have significant applications in technologies such as soft robots, medical devices, organ-on-a-chip, and optical devices. Here, we present a solution to remotely actuate and move soft materials underwater in a fast, efficient, and controlled manner using photoresponsive liquid crystal gels (LCGs). LCG constructs with engineered molecular alignment show a low and sharp phase-transition temperature and experience considerable density reduction by light exposure, thereby allowing rapid and reversible shape changes. We demonstrate different modes of underwater locomotion, such as crawling, walking, jumping, and swimming, by localized and time-varying illumination of LCGs. The diverse locomotion modes of smart LCGs can provide a new toolbox for designing efficient light-fueled soft robots in fluid-immersed media.

pi

[BibTex]

Differentiation of blackbox combinatorial solvers

Vlastelica, M., Paulus, A., Musil, V., Martius, G., Rolínek, M.

In International Conference on Learning Representations, ICLR’20, 2020 (incollection)

al

link (url) [BibTex]

How to functionalise metal-organic frameworks to enable guest nanocluster embedment

King, J., Zhang, L., Doszczeczko, S., Sambalova, O., Luo, H., Rohman, F., Phillips, O., Borgschulte, A., Hirscher, M., Addicoat, M., Szilágyi, P. A.

Journal of Materials Chemistry A, 8(9):4889-4897, Royal Society of Chemistry, Cambridge, UK, 2020 (article)

mms

DOI [BibTex]

Constant Curvature Graph Convolutional Networks

Bachmann*, G., Becigneul*, G., Ganea, O.

37th International Conference on Machine Learning (ICML), 2020, *equal contribution (conference) Submitted

ei

[BibTex]

Learning Situational Driving

Ohn-Bar, E., Prakash, A., Behl, A., Chitta, K., Geiger, A.

In Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), IEEE International Conference on Computer Vision and Pattern Recognition (CVPR) 2020, 2020 (inproceedings)

Abstract
Human drivers have a remarkable ability to drive in diverse visual conditions and situations, e.g., from maneuvering in rainy, limited visibility conditions with no lane markings to turning in a busy intersection while yielding to pedestrians. In contrast, we find that state-of-the-art sensorimotor driving models struggle when encountering diverse settings with varying relationships between observation and action. To generalize when making decisions across diverse conditions, humans leverage multiple types of situation-specific reasoning and learning strategies. Motivated by this observation, we develop a framework for learning a situational driving policy that effectively captures reasoning under varying types of scenarios. Our key idea is to learn a mixture model with a set of policies that can capture multiple driving modes. We first optimize the mixture model through behavior cloning, and show it to result in significant gains in terms of driving performance in diverse conditions. We then refine the model by directly optimizing for the driving task itself, i.e., supervised with the navigation task reward. Our method is more scalable than methods assuming access to privileged information, e.g., perception labels, as it only assumes demonstration and reward-based supervision. We achieve over 98% success rate on the CARLA driving benchmark as well as state-of-the-art performance on a newly introduced generalization benchmark.

avg

pdf suppmat Video Project Page [BibTex]

On Joint Estimation of Pose, Geometry and svBRDF from a Handheld Scanner

Schmitt, C., Donne, S., Riegler, G., Koltun, V., Geiger, A.

In Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), IEEE International Conference on Computer Vision and Pattern Recognition (CVPR) 2020, 2020 (inproceedings)

Abstract
We propose a novel formulation for joint recovery of camera pose, object geometry and spatially-varying BRDF. The input to our approach is a sequence of RGB-D images captured by a mobile, hand-held scanner that actively illuminates the scene with point light sources. Compared to previous works that jointly estimate geometry and materials from a hand-held scanner, we formulate this problem using a single objective function that can be minimized using off-the-shelf gradient-based solvers. By integrating material clustering as a differentiable operation into the optimization process, we avoid pre-processing heuristics and demonstrate that our model is able to determine the correct number of specular materials independently. We provide a study on the importance of each component in our formulation and on the requirements of the initial geometry. We show that optimizing over the poses is crucial for accurately recovering fine details and that our approach naturally results in a semantically meaningful material segmentation.

avg

pdf Project Page [BibTex]

Trunk pitch oscillations for energy trade-offs in bipedal running birds and robots

Drama, Ö., Badri-Spröwitz, A.

Bioinspiration & Biomimetics, 2020 (article)

Abstract
Bipedal animals have diverse morphologies and advanced locomotion abilities. Terrestrial birds, in particular, display agile, efficient, and robust running motion, in which they exploit the interplay between the body segment masses and moment of inertias. On the other hand, most legged robots are not able to generate such versatile and energy-efficient motion and often disregard trunk movements as a means to enhance their locomotion capabilities. Recent research investigated how trunk motions affect the gait characteristics of humans, but there is a lack of analysis across different bipedal morphologies. To address this issue, we analyze avian running based on a spring-loaded inverted pendulum model with a pronograde (horizontal) trunk. We use a virtual point based control scheme and modify the alignment of the ground reaction forces to assess how our control strategy influences the trunk pitch oscillations and energetics of the locomotion. We derive three potential key strategies to leverage trunk pitch motions that minimize either the energy fluctuations of the center of mass or the work performed by the hip and leg. We suggest how these strategies could be used in legged robotics.

dlg

link (url) DOI [BibTex]

Event-triggered Learning

Solowjow, F., Trimpe, S.

Automatica, 2020 (article) Accepted

ics

arXiv PDF Project Page [BibTex]


Thermal nucleation and high-resolution imaging of submicrometer magnetic bubbles in thin thulium iron garnet films with perpendicular anisotropy

Büttner, F., Mawass, M. A., Bauer, J., Rosenberg, E., Caretta, L., Avci, C. O., Gräfe, J., Finizio, S., Vaz, C. A. F., Novakovic, N., Weigand, M., Litzius, K., Förster, J., Träger, N., Groß, F., Suzuki, D., Huang, M., Bartell, J., Kronast, F., Raabe, J., Schütz, G., Ross, C. A., Beach, G. S. D.

Physical Review Materials, 4(1), American Physical Society, College Park, MD, 2020 (article)

mms

DOI [BibTex]

DirectShape: Photometric Alignment of Shape Priors for Visual Vehicle Pose and Shape Estimation

Wang, R., Yang, N., Stückler, J., Cremers, D.

In IEEE International Conference on Robotics and Automation (ICRA), 2020, arXiv:1904.10097 (inproceedings) Accepted

ev

[BibTex]

Safe and Fast Tracking on a Robot Manipulator: Robust MPC and Neural Network Control

Nubert, J., Koehler, J., Berenz, V., Allgower, F., Trimpe, S.

IEEE Robotics and Automation Letters, 2020 (article) Accepted

Abstract
Fast feedback control and safety guarantees are essential in modern robotics. We present an approach that achieves both by combining novel robust model predictive control (MPC) with function approximation via (deep) neural networks (NNs). The result is a new approach for complex tasks with nonlinear, uncertain, and constrained dynamics as are common in robotics. Specifically, we leverage recent results in MPC research to propose a new robust setpoint tracking MPC algorithm, which achieves reliable and safe tracking of a dynamic setpoint while guaranteeing stability and constraint satisfaction. The presented robust MPC scheme constitutes a one-layer approach that unifies the often separated planning and control layers, by directly computing the control command based on a reference and possibly obstacle positions. As a separate contribution, we show how the computation time of the MPC can be drastically reduced by approximating the MPC law with a NN controller. The NN is trained and validated from offline samples of the MPC, yielding statistical guarantees, and used in lieu thereof at run time. Our experiments on a state-of-the-art robot manipulator are the first to show that both the proposed robust and approximate MPC schemes scale to real-world robotic systems.
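A minimal sketch of the offline MPC-to-NN distillation step described in the abstract: sample the controller offline, fit a regressor, and check the approximation error on held-out states. The saturated linear feedback below merely stands in for the robust MPC solver; all names and models are illustrative assumptions, not the authors' code.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def mpc_control(state):
    # Placeholder for the (expensive) robust MPC optimization.
    return float(np.clip(-1.5 * state[0] - 0.8 * state[1], -1.0, 1.0))

# Offline sampling of the MPC law over the relevant state region.
states = rng.uniform(-2.0, 2.0, size=(5000, 2))
controls = np.array([mpc_control(s) for s in states])

# Fit the NN approximation and validate it on fresh samples.
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
net.fit(states[:4000], controls[:4000])
val_error = np.max(np.abs(net.predict(states[4000:]) - controls[4000:]))
print(f"max validation error of NN controller: {val_error:.4f}")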

am ics

arXiv PDF DOI [BibTex]

Additive manufacturing of cellulose-based materials with continuous, multidirectional stiffness gradients

Giachini, P., Gupta, S., Wang, W., Wood, D., Yunusa, M., Baharlou, E., Sitti, M., Menges, A.

Science Advances, 6, American Association for the Advancement of Science, 2020 (article)

Abstract
Functionally graded materials (FGMs) enable applications in fields such as biomedicine and architecture, but their fabrication suffers from shortcomings in gradient continuity, interfacial bonding, and directional freedom. In addition, most commercial design software fail to incorporate property gradient data, hindering explorations of the design space of FGMs. Here, we leveraged a combined approach of materials engineering and digital processing to enable extrusion-based multimaterial additive manufacturing of cellulose-based tunable viscoelastic materials with continuous, high-contrast, and multidirectional stiffness gradients. A method to engineer sets of cellulose-based materials with similar compositions, yet distinct mechanical and rheological properties, was established. In parallel, a digital workflow was developed to embed gradient information into design models with integrated fabrication path planning. The payoff of integrating these physical and digital tools is the ability to achieve the same stiffness gradient in multiple ways, opening design possibilities previously limited by the rigid coupling of material and geometry.

pi

[BibTex]

Generation and characterization of focused helical x-ray beams

Loetgering, L., Baluktsian, M., Keskinbora, K., Horstmeyer, R., Wilhein, T., Schütz, G., Eikema, K. S. E., Witte, S.

Science Advances, 6(7), American Association for the Advancement of Science, 2020 (article)

mms

link (url) DOI [BibTex]

Materials for hydrogen-based energy storage - past, recent progress and future outlook

Hirscher, M., Yartys, V. A., Baricco, M., Bellosta von Colbe, J., Blanchard, D., Bowman Jr., R. C., Broom, D. P., Buckley, C. E., Chang, F., Chen, P., Cho, Y. W., Crivello, J., Cuevas, F., David, W. I. F., de Jongh, P. E., Denys, R. V., Dornheim, M., Felderhoff, M., Filinchuk, Y., Froudakis, G. E., Grant, D. M., Gray, E. M., Hauback, B. C., He, T., Humphries, T. D., Jensen, T. R., Kim, S., Kojima, Y., Latroche, M., Li, H., Lotostskyy, M. V., Makepeace, J. W., Møller, K. T., Naheed, L., Ngene, P., Noréus, D., Nygård, M. M., Orimo, S., Paskevicius, M., Pasquini, L., Ravnsbaek, D. B., Sofianos, M. V., Udovic, T. J., Vegge, T., Walker, G. S., Webb, C. J., Weidenthaler, C., Zlotea, C.

Journal of Alloys and Compounds, 827, Elsevier B.V., Lausanne, Switzerland, 2020 (article)

mms

DOI [BibTex]

Differentiable Volumetric Rendering: Learning Implicit 3D Representations without 3D Supervision

Niemeyer, M., Mescheder, L., Oechsle, M., Geiger, A.

In Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), IEEE International Conference on Computer Vision and Pattern Recognition (CVPR) 2020, 2020 (inproceedings)

Abstract
Learning-based 3D reconstruction methods have shown impressive results. However, most methods require 3D supervision which is often hard to obtain for real-world datasets. Recently, several works have proposed differentiable rendering techniques to train reconstruction models from RGB images. Unfortunately, these approaches are currently restricted to voxel- and mesh-based representations, suffering from discretization or low resolution. In this work, we propose a differentiable rendering formulation for implicit shape and texture representations. Implicit representations have recently gained popularity as they represent shape and texture continuously. Our key insight is that depth gradients can be derived analytically using the concept of implicit differentiation. This allows us to learn implicit shape and texture representations directly from RGB images. We experimentally show that our single-view reconstructions rival those learned with full 3D supervision. Moreover, we find that our method can be used for multi-view 3D reconstruction, directly resulting in watertight meshes.
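A sketch of the implicit-differentiation step the abstract refers to, written in assumed notation (signs and normalization may differ from the paper): for a camera ray with origin r_0 and direction w, the predicted surface depth \hat{d} is defined implicitly by the network level set f_\theta(\hat{p}) = \tau with \hat{p} = r_0 + \hat{d} w, and differentiating this identity with respect to the parameters \theta gives an analytic depth gradient without storing intermediate volumetric samples:

f_\theta\big(r_0 + \hat{d}(\theta)\, w\big) = \tau
\quad\Longrightarrow\quad
\frac{\partial \hat{d}}{\partial \theta}
  = -\left( \nabla_{p} f_\theta(\hat{p}) \cdot w \right)^{-1}
    \left.\frac{\partial f_\theta}{\partial \theta}\right|_{\hat{p}} .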

avg

pdf suppmat Video Project Page [BibTex]


2018


Role of symmetry in driven propulsion at low Reynolds number

Sachs, J., Morozov, K. I., Kenneth, O., Qiu, T., Segreto, N., Fischer, P., Leshansky, A. M.

Phys. Rev. E, 98(6):063105, American Physical Society, December 2018 (article)

Abstract
We theoretically and experimentally investigate low-Reynolds-number propulsion of geometrically achiral planar objects that possess a dipole moment and that are driven by a rotating magnetic field. Symmetry considerations (involving parity, $\widehat{P}$, and charge conjugation, $\widehat{C}$) establish correspondence between propulsive states depending on orientation of the dipolar moment. Although basic symmetry arguments do not forbid individual symmetric objects to efficiently propel due to spontaneous symmetry breaking, they suggest that the average ensemble velocity vanishes. Some additional arguments show, however, that highly symmetrical ($\widehat{P}$-even) objects exhibit no net propulsion while individual less symmetrical ($\widehat{C}\widehat{P}$-even) propellers do propel. Particular magnetization orientation, rendering the shape $\widehat{C}\widehat{P}$-odd, yields unidirectional motion typically associated with chiral structures, such as helices. If instead of a structure with a permanent dipole we consider a polarizable object, some of the arguments have to be modified. For instance, we demonstrate a truly achiral ($\widehat{P}$- and $\widehat{C}\widehat{P}$-even) planar shape with an induced electric dipole that can propel by electro-rotation. We thereby show that chirality is not essential for propulsion due to rotation-translation coupling at low Reynolds number.

pf

link (url) DOI Project Page [BibTex]



Swimming Back and Forth Using Planar Flagellar Propulsion at Low Reynolds Numbers

Khalil, I. S. M., Tabak, A. F., Hamed, Y., Mitwally, M. E., Tawakol, M., Klingner, A., Sitti, M.

Advanced Science, 5(2):1700461, 2018 (article)

Abstract
Peritrichously flagellated Escherichia coli swim back and forth by wrapping their flagella together in a helical bundle. However, other monotrichous bacteria cannot swim back and forth with a single flagellum and planar wave propagation. Quantifying this observation, a magnetically driven soft two-tailed microrobot capable of reversing its swimming direction without making a U-turn trajectory or actively modifying the direction of wave propagation is designed and developed. The microrobot contains magnetic microparticles within the polymer matrix of its head and consists of two collinear, unequal, and opposite ultrathin tails. It is driven and steered using a uniform magnetic field along the direction of motion with a sinusoidally varying orthogonal component. Distinct reversal frequencies that enable selective and independent excitation of the first or the second tail of the microrobot based on their tail length ratio are found. While the first tail provides a propulsive force below one of the reversal frequencies, the second is almost passive, and the net propulsive force achieves flagellated motion along one direction. On the other hand, the second tail achieves flagellated propulsion along the opposite direction above the reversal frequency.

pi

link (url) DOI [BibTex]

Non-factorised Variational Inference in Dynamical Systems

Ialongo, A. D., Van Der Wilk, M., Hensman, J., Rasmussen, C. E.

1st Symposion on Advances in Approximate Bayesian Inference, December 2018 (conference)

ei

PDF link (url) [BibTex]

Enhancing the Accuracy and Fairness of Human Decision Making

Valera, I., Singla, A., Gomez Rodriguez, M.

Advances in Neural Information Processing Systems 31, pages: 1774-1783, (Editors: S. Bengio and H. Wallach and H. Larochelle and K. Grauman and N. Cesa-Bianchi and R. Garnett), Curran Associates, Inc., 32nd Annual Conference on Neural Information Processing Systems, December 2018 (conference)

ei

arXiv link (url) Project Page [BibTex]

Deep Reinforcement Learning for Event-Triggered Control

Baumann, D., Zhu, J., Martius, G., Trimpe, S.

In Proceedings of the 57th IEEE International Conference on Decision and Control (CDC), pages: 943-950, 57th IEEE International Conference on Decision and Control (CDC), December 2018 (inproceedings)

al ics

arXiv PDF DOI Project Page Project Page [BibTex]

When do random forests fail?

Tang, C., Garreau, D., von Luxburg, U.

In Proceedings Neural Information Processing Systems, Neural Information Processing Systems (NIPS 2018), December 2018 (inproceedings)

slt

Project Page [BibTex]

Consolidating the Meta-Learning Zoo: A Unifying Perspective as Posterior Predictive Inference

Gordon*, J., Bronskill*, J., Bauer*, M., Nowozin, S., Turner, R. E.

Workshop on Meta-Learning (MetaLearn 2018) at the 32nd Conference on Neural Information Processing Systems, December 2018, *equal contribution (conference)

ei

link (url) [BibTex]

Versa: Versatile and Efficient Few-shot Learning

Gordon*, J., Bronskill*, J., Bauer*, M., Nowozin, S., Turner, R. E.

Third Workshop on Bayesian Deep Learning at the 32nd Conference on Neural Information Processing Systems, December 2018, *equal contribution (conference)

ei

link (url) [BibTex]

DP-MAC: The Differentially Private Method of Auxiliary Coordinates for Deep Learning

Harder, F., Köhler, J., Welling, M., Park, M.

Workshop on Privacy Preserving Machine Learning at the 32nd Conference on Neural Information Processing Systems, December 2018 (conference)

ei

link (url) Project Page [BibTex]

Boosting Black Box Variational Inference

Locatello*, F., Dresdner*, G., R., K., Valera, I., Rätsch, G.

Advances in Neural Information Processing Systems 31, pages: 3405-3415, (Editors: S. Bengio and H. Wallach and H. Larochelle and K. Grauman and N. Cesa-Bianchi and R. Garnett), Curran Associates, Inc., 32nd Annual Conference on Neural Information Processing Systems, December 2018, *equal contribution (conference)

ei

arXiv link (url) Project Page [BibTex]

Deep Nonlinear Non-Gaussian Filtering for Dynamical Systems

Mehrjou, A., Schölkopf, B.

Workshop: Infer to Control: Probabilistic Reinforcement Learning and Structured Control at the 32nd Conference on Neural Information Processing Systems, December 2018 (conference)

ei

PDF link (url) [BibTex]

Resampled Priors for Variational Autoencoders

Bauer, M., Mnih, A.

Third Workshop on Bayesian Deep Learning at the 32nd Conference on Neural Information Processing Systems, December 2018 (conference)

ei

link (url) [BibTex]

Learning Invariances using the Marginal Likelihood

van der Wilk, M., Bauer, M., John, S. T., Hensman, J.

Advances in Neural Information Processing Systems 31, pages: 9960-9970, (Editors: S. Bengio and H. Wallach and H. Larochelle and K. Grauman and N. Cesa-Bianchi and R. Garnett), Curran Associates, Inc., 32nd Annual Conference on Neural Information Processing Systems, December 2018 (conference)

ei

link (url) Project Page [BibTex]

Data-Efficient Hierarchical Reinforcement Learning

Nachum, O., Gu, S., Lee, H., Levine, S.

Advances in Neural Information Processing Systems 31, pages: 3307-3317, (Editors: S. Bengio and H. Wallach and H. Larochelle and K. Grauman and N. Cesa-Bianchi and R. Garnett), Curran Associates, Inc., 32nd Annual Conference on Neural Information Processing Systems, December 2018 (conference)

ei

link (url) Project Page [BibTex]

Generalisation in humans and deep neural networks

Geirhos, R., Temme, C. R. M., Rauber, J., Schütt, H., Bethge, M., Wichmann, F. A.

Advances in Neural Information Processing Systems 31, pages: 7549-7561, (Editors: S. Bengio and H. Wallach and H. Larochelle and K. Grauman and N. Cesa-Bianchi and R. Garnett), Curran Associates, Inc., 32nd Annual Conference on Neural Information Processing Systems, December 2018 (conference)

ei

link (url) [BibTex]

Parallel and functionally segregated processing of task phase and conscious content in the prefrontal cortex

Kapoor, V., Besserve, M., Logothetis, N. K., Panagiotaropoulos, T. I.

Communications Biology, 1(215):1-12, December 2018 (article)

ei

link (url) DOI Project Page [BibTex]

A Computational Camera with Programmable Optics for Snapshot High Resolution Multispectral Imaging

Chen, J., Hirsch, M., Eberhardt, B., Lensch, H. P. A.

Computer Vision - ACCV 2018 - 14th Asian Conference on Computer Vision, December 2018 (conference) Accepted

ei

[BibTex]

Adaptive Skip Intervals: Temporal Abstraction for Recurrent Dynamical Models

Neitz, A., Parascandolo, G., Bauer, S., Schölkopf, B.

Advances in Neural Information Processing Systems 31, pages: 9838-9848, (Editors: S. Bengio and H. Wallach and H. Larochelle and K. Grauman and N. Cesa-Bianchi and R. Garnett), Curran Associates, Inc., 32nd Annual Conference on Neural Information Processing Systems, December 2018 (conference)

ei

arXiv link (url) [BibTex]
