2019


Decoding subcategories of human bodies from both body- and face-responsive cortical regions

Foster, C., Zhao, M., Romero, J., Black, M. J., Mohler, B. J., Bartels, A., Bülthoff, I.

NeuroImage, 202(15):116085, November 2019 (article)

Abstract
Our visual system can easily categorize objects (e.g. faces vs. bodies) and further differentiate them into subcategories (e.g. male vs. female). This ability is particularly important for objects of social significance, such as human faces and bodies. While many studies have demonstrated category selectivity to faces and bodies in the brain, how subcategories of faces and bodies are represented remains unclear. Here, we investigated how the brain encodes two prominent subcategories shared by both faces and bodies, sex and weight, and whether neural responses to these subcategories rely on low-level visual, high-level visual or semantic similarity. We recorded brain activity with fMRI while participants viewed faces and bodies that varied in sex, weight, and image size. The results showed that the sex of bodies can be decoded from both body- and face-responsive brain areas, with the former exhibiting more consistent size-invariant decoding than the latter. Body weight could also be decoded in face-responsive areas and in distributed body-responsive areas, and this decoding was also invariant to image size. The weight of faces could be decoded from the fusiform body area (FBA), and weight could be decoded across face and body stimuli in the extrastriate body area (EBA) and a distributed body-responsive area. The sex of well-controlled faces (e.g. excluding hairstyles) could not be decoded from face- or body-responsive regions. These results demonstrate that both face- and body-responsive brain regions encode information that can distinguish the sex and weight of bodies. Moreover, the neural patterns corresponding to sex and weight were invariant to image size and could sometimes generalize across face and body stimuli, suggesting that such subcategorical information is encoded with a high-level visual or semantic code.

ps

paper pdf DOI [BibTex]



AirCap – Aerial Outdoor Motion Capture

Ahmad, A., Price, E., Tallamraju, R., Saini, N., Lawless, G., Ludwig, R., Martinovic, I., Bülthoff, H. H., Black, M. J.

IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2019), Workshop on Aerial Swarms, November 2019 (misc)

Abstract
This paper presents an overview of the grassroots project Aerial Outdoor Motion Capture (AirCap) running at the Max Planck Institute for Intelligent Systems. AirCap's goal is to achieve markerless, unconstrained human motion capture (mocap) in unknown and unstructured outdoor environments. To that end, we have developed an autonomous flying motion capture system using a team of micro aerial vehicles (MAVs) with only on-board, monocular RGB cameras. We have conducted several real robot experiments involving up to 3 aerial vehicles autonomously tracking and following a person in several challenging scenarios, using our approach of active cooperative perception developed in AirCap. Using the images captured by these robots during the experiments, we have demonstrated successful offline body pose and shape estimation with sufficiently high accuracy. Overall, we have demonstrated the first fully autonomous flying motion capture system involving multiple robots for outdoor scenarios.

ps

Talk slides Project Page Project Page [BibTex]



Active Perception based Formation Control for Multiple Aerial Vehicles

Tallamraju, R., Price, E., Ludwig, R., Karlapalem, K., Bülthoff, H. H., Black, M. J., Ahmad, A.

IEEE Robotics and Automation Letters, 4(4):4491-4498, IEEE, October 2019 (article)

Abstract
We present a novel robotic front-end for autonomous aerial motion-capture (mocap) in outdoor environments. In previous work, we presented an approach for cooperative detection and tracking (CDT) of a subject using multiple micro-aerial vehicles (MAVs). However, it did not ensure optimal view-point configurations of the MAVs to minimize the uncertainty in the person's cooperatively tracked 3D position estimate. In this article, we introduce an active approach for CDT. In contrast to cooperatively tracking only the 3D positions of the person, the MAVs can actively compute optimal local motion plans, resulting in optimal view-point configurations, which minimize the uncertainty in the tracked estimate. We achieve this by decoupling the goal of active tracking into a quadratic objective and non-convex constraints corresponding to angular configurations of the MAVs w.r.t. the person. We derive this decoupling using Gaussian observation model assumptions within the CDT algorithm. We preserve convexity in optimization by embedding all the non-convex constraints, including those for dynamic obstacle avoidance, as external control inputs in the MPC dynamics. Multiple real robot experiments and comparisons involving 3 MAVs in several challenging scenarios are presented.
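Why viewpoint configuration affects the tracked estimate's uncertainty can be illustrated with minimum-variance fusion of independent Gaussian observations, a standard result and not the paper's MPC formulation; the `fuse_estimates` helper and the toy covariances below are illustrative assumptions:

```python
import numpy as np

def fuse_estimates(means, covs):
    """Minimum-variance fusion of independent Gaussian position estimates.

    means: list of (3,) position estimates from individual observers
    covs:  list of (3, 3) observation covariance matrices
    Returns the fused mean and fused covariance.
    """
    info = sum(np.linalg.inv(C) for C in covs)  # information matrices add
    fused_cov = np.linalg.inv(info)
    fused_mean = fused_cov @ sum(
        np.linalg.inv(C) @ m for C, m in zip(covs, means)
    )
    return fused_mean, fused_cov

# Two observers tracking the same person: each is accurate transverse to its
# viewing direction but uncertain in depth. Orthogonal viewpoints complement
# each other, so the fused uncertainty shrinks more than with identical views.
depth_x = np.diag([1.0, 0.01, 0.01])   # observer 1: uncertain along x
depth_y = np.diag([0.01, 1.0, 0.01])   # observer 2: uncertain along y
_, cov_orth = fuse_estimates([np.zeros(3), np.zeros(3)], [depth_x, depth_y])
_, cov_same = fuse_estimates([np.zeros(3), np.zeros(3)], [depth_x, depth_x])
print(np.trace(cov_orth) < np.trace(cov_same))  # True
```

This is the geometric intuition behind preferring spread-out angular configurations of the MAVs around the person.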

ps

pdf DOI Project Page [BibTex]



Method for providing a three dimensional body model

Loper, M., Mahmood, N., Black, M.

September 2019, U.S. Patent 10,417,818 (misc)

Abstract
A method for providing a three-dimensional body model which may be applied for an animation, based on a moving body, wherein the method comprises providing a parametric three-dimensional body model, which allows shape and pose variations; applying a standard set of body markers; optimizing the set of body markers by generating an additional set of body markers and applying the same for providing 3D coordinate marker signals for capturing shape and pose of the body and dynamics of soft tissue; and automatically providing an animation by processing the 3D coordinate marker signals in order to provide a personalized three-dimensional body model, based on estimated shape and an estimated pose of the body by means of predicted marker locations.

ps

MoSh Project pdf [BibTex]


3D Morphable Face Models - Past, Present and Future

Egger, B., Smith, W. A. P., Tewari, A., Wuhrer, S., Zollhoefer, M., Beeler, T., Bernard, F., Bolkart, T., Kortylewski, A., Romdhani, S., Theobalt, C., Blanz, V., Vetter, T.

arXiv preprint arXiv:1909.01815, September 2019 (article)

Abstract
In this paper, we provide a detailed survey of 3D Morphable Face Models over the 20 years since they were first proposed. The challenges in building and applying these models, namely capture, modeling, image formation, and image analysis, are still active research topics, and we review the state-of-the-art in each of these areas. We also look ahead, identifying unsolved challenges, proposing directions for future research, and highlighting the broad range of current and future applications.

ps

paper project page [BibTex]



Learning and Tracking the 3D Body Shape of Freely Moving Infants from RGB-D sequences

Hesse, N., Pujades, S., Black, M., Arens, M., Hofmann, U., Schroeder, S.

IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2019 (article)

Abstract
Statistical models of the human body surface are generally learned from thousands of high-quality 3D scans in predefined poses to cover the wide variety of human body shapes and articulations. Acquisition of such data requires expensive equipment, calibration procedures, and is limited to cooperative subjects who can understand and follow instructions, such as adults. We present a method for learning a statistical 3D Skinned Multi-Infant Linear body model (SMIL) from incomplete, low-quality RGB-D sequences of freely moving infants. Quantitative experiments show that SMIL faithfully represents the RGB-D data and properly factorizes the shape and pose of the infants. To demonstrate the applicability of SMIL, we fit the model to RGB-D sequences of freely moving infants and show, with a case study, that our method captures enough motion detail for General Movements Assessment (GMA), a method used in clinical practice for early detection of neurodevelopmental disorders in infants. SMIL provides a new tool for analyzing infant shape and movement and is a step towards an automated system for GMA.

ps

pdf Journal DOI [BibTex]



Perceptual Effects of Inconsistency in Human Animations

Kenny, S., Mahmood, N., Honda, C., Black, M. J., Troje, N. F.

ACM Trans. Appl. Percept., 16(1):2:1-2:18, February 2019 (article)

Abstract
The individual shape of the human body, including the geometry of its articulated structure and the distribution of weight over that structure, influences the kinematics of a person’s movements. How sensitive is the visual system to inconsistencies between shape and motion introduced by retargeting motion from one person onto the shape of another? We used optical motion capture to record five pairs of male performers with large differences in body weight, while they pushed, lifted, and threw objects. From these data, we estimated both the kinematics of the actions as well as the performer’s individual body shape. To obtain consistent and inconsistent stimuli, we created animated avatars by combining the shape and motion estimates from either a single performer or from different performers. Using these stimuli we conducted three experiments in an immersive virtual reality environment. First, a group of participants detected which of two stimuli was inconsistent. Performance was very low, and results were only marginally significant. Next, a second group of participants rated perceived attractiveness, eeriness, and humanness of consistent and inconsistent stimuli, but these judgements of animation characteristics were not affected by consistency of the stimuli. Finally, a third group of participants rated properties of the objects rather than of the performers. Here, we found strong influences of shape-motion inconsistency on perceived weight and thrown distance of objects. This suggests that the visual system relies on its knowledge of shape and motion and that these components are assimilated into an altered perception of the action outcome. We propose that the visual system attempts to resist inconsistent interpretations of human animations. Actions involving object manipulations present an opportunity for the visual system to reinterpret the introduced inconsistencies as a change in the dynamics of an object rather than as an unexpected combination of body shape and body motion.

ps

publisher pdf DOI [BibTex]



X-ray Optics Fabrication Using Unorthodox Approaches

Sanli, U., Baluktsian, M., Ceylan, H., Sitti, M., Weigand, M., Schütz, G., Keskinbora, K.

Bulletin of the American Physical Society, APS, 2019 (article)

mms pi

[BibTex]



Perceiving Systems (2016-2018)
Scientific Advisory Board Report, 2019 (misc)

ps

pdf [BibTex]



Microrobotics and Microorganisms: Biohybrid Autonomous Cellular Robots

Alapan, Y., Yasa, O., Yigit, B., Yasa, I. C., Erkoc, P., Sitti, M.

Annual Review of Control, Robotics, and Autonomous Systems, 2019 (article)

pi

[BibTex]



Tailored Magnetic Springs for Shape-Memory Alloy Actuated Mechanisms in Miniature Robots

Woodward, M. A., Sitti, M.

IEEE Transactions on Robotics, 35, 2019 (article)

Abstract
Animals can incorporate large numbers of actuators because of the characteristics of muscles, whereas robots cannot, as typical motors tend to be large, heavy, and inefficient. However, shape-memory alloys (SMA), materials that contract during heating because of a change in their crystal structure, provide another option. SMA, though, is unidirectional and therefore requires an additional force to reset (extend) the actuator, which is typically provided by springs or antagonistic actuation. These strategies, however, tend to limit the actuator's work output and functionality, as their force-displacement relationships typically produce increasing resistive force with limited variability. In contrast, magnetic springs (composed of permanent magnets, where the interaction force between magnets mimics a spring force) have much more variable force-displacement relationships and scale well with SMA. However, as of yet, no method for designing magnetic springs for SMA-actuators has been demonstrated. Therefore, in this paper, we present a new methodology to tailor magnetic springs to the characteristics of these actuators, with experimental results both for the device and robot-integrated SMA-actuators. We found magnetic building blocks, based on sets of permanent magnets, which are well-suited to SMAs and have the potential to incorporate features such as holding force, state transitioning, friction minimization, auto-alignment, and self-mounting. We show magnetic springs that vary by more than 3 N in 750 µm and two SMA-actuated devices that allow the MultiMo-Bat to reach heights of up to 4.5 m without, and 3.6 m with, integrated gliding airfoils. Our results demonstrate the potential of this methodology to add previously impossible functionality to smart material actuators. We anticipate this methodology will inspire broader consideration of the use of magnetic springs in miniature robots and further study of the potential of tailored magnetic springs throughout mechanical systems.

pi

DOI [BibTex]


Magnetically Actuated Soft Capsule Endoscope for Fine-Needle Biopsy

Son, D., Gilbert, H., Sitti, M.

Soft Robotics, Mary Ann Liebert, Inc., 2019 (article)

pi

[BibTex]



Thrust and Hydrodynamic Efficiency of the Bundled Flagella

Danis, U., Rasooli, R., Chen, C., Dur, O., Sitti, M., Pekkan, K.

Micromachines, 10, 2019 (article)

pi

[BibTex]



The near and far of a pair of magnetic capillary disks

Koens, L., Wang, W., Sitti, M., Lauga, E.

Soft Matter, 2019 (article)

pi

[BibTex]



Multifarious Transit Gates for Programmable Delivery of Bio‐functionalized Matters

Hu, X., Torati, S. R., Kim, H., Yoon, J., Lim, B., Kim, K., Sitti, M., Kim, C.

Small, Wiley Online Library, 2019 (article)

pi

[BibTex]



Multi-functional soft-bodied jellyfish-like swimming

Ren, Z., Hu, W., Dong, X., Sitti, M.

Nature Communications, 10, 2019 (article)

pi

[BibTex]


Welcome to Progress in Biomedical Engineering

Sitti, M.

Progress in Biomedical Engineering, 1, IOP Publishing, 2019 (article)

pi

[BibTex]



The Virtual Caliper: Rapid Creation of Metrically Accurate Avatars from 3D Measurements

Pujades, S., Mohler, B., Thaler, A., Tesch, J., Mahmood, N., Hesse, N., Bülthoff, H. H., Black, M. J.

IEEE Transactions on Visualization and Computer Graphics, 25, pages: 1887-1897, IEEE, 2019 (article)

Abstract
Creating metrically accurate avatars is important for many applications such as virtual clothing try-on, ergonomics, medicine, immersive social media, telepresence, and gaming. Creating avatars that precisely represent a particular individual is challenging however, due to the need for expensive 3D scanners, privacy issues with photographs or videos, and difficulty in making accurate tailoring measurements. We overcome these challenges by creating “The Virtual Caliper”, which uses VR game controllers to make simple measurements. First, we establish what body measurements users can reliably make on their own body. We find several distance measurements to be good candidates and then verify that these are linearly related to 3D body shape as represented by the SMPL body model. The Virtual Caliper enables novice users to accurately measure themselves and create an avatar with their own body shape. We evaluate the metric accuracy relative to ground truth 3D body scan data, compare the method quantitatively to other avatar creation tools, and perform extensive perceptual studies. We also provide a software application to the community that enables novices to rapidly create avatars in fewer than five minutes. Not only is our approach more rapid than existing methods, it exports a metrically accurate 3D avatar model that is rigged and skinned.
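The kind of linear relationship the paper verifies between distance measurements and body shape coefficients can be sketched with a least-squares fit; the data below is synthetic, standing in for the paper's measurements and the actual SMPL shape basis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins (not the paper's data): 500 "subjects" with 3
# SMPL-like shape coefficients and 6 body measurements that depend
# linearly on them, plus a little measurement noise.
n, n_betas, n_meas = 500, 3, 6
betas = rng.normal(size=(n, n_betas))
A = rng.normal(size=(n_betas, n_meas))
measurements = betas @ A + 0.01 * rng.normal(size=(n, n_meas))

# Least-squares fit of a linear map (with bias) from measurements to
# shape coefficients, trained on a split and evaluated on the rest.
X = np.hstack([measurements, np.ones((n, 1))])
W, *_ = np.linalg.lstsq(X[:400], betas[:400], rcond=None)
pred = X[400:] @ W
err = np.abs(pred - betas[400:]).mean()
print(err)  # small if the measurement-to-shape relationship is indeed linear
```

If the relationship really is linear, as the paper reports for its candidate measurements, a fit like this recovers the shape coefficients up to measurement noise.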

ps

Project Page IEEE Open Access PDF DOI [BibTex]



Mechanics of a pressure-controlled adhesive membrane for soft robotic gripping on curved surfaces

Song, S., Drotlef, D., Paik, J., Majidi, C., Sitti, M.

Extreme Mechanics Letters, Elsevier, 2019 (article)

pi

[BibTex]


Graphene oxide synergistically enhances antibiotic efficacy in vancomycin-resistant Staphylococcus aureus

Singh, V., Kumar, V., Kashyap, S., Singh, A. V., Kishore, V., Sitti, M., Saxena, P. S., Srivastava, A.

ACS Applied Bio Materials, ACS Publications, 2019 (article)

pi

[BibTex]



Review of emerging concepts in nanotoxicology: opportunities and challenges for safer nanomaterial design

Singh, A. V., Laux, P., Luch, A., Sudrik, C., Wiehr, S., Wild, A., Santamauro, G., Bill, J., Sitti, M.

Toxicology Mechanisms and Methods, 2019 (article)

pi

[BibTex]



Multifunctional and biodegradable self-propelled protein motors

Pena-Francesch, A., Giltinan, J., Sitti, M.

Nature Communications, 10, Nature Publishing Group, 2019 (article)

pi

[BibTex]



Cohesive self-organization of mobile microrobotic swarms

Yigit, B., Alapan, Y., Sitti, M.

arXiv preprint arXiv:1907.05856, 2019 (article)

pi

[BibTex]



Mobile microrobots for active therapeutic delivery

Erkoc, P., Yasa, I. C., Ceylan, H., Yasa, O., Alapan, Y., Sitti, M.

Advanced Therapeutics, Wiley Online Library, 2019 (article)

pi

[BibTex]



Shape-encoded dynamic assembly of mobile micromachines

Alapan, Y., Yigit, B., Beker, O., Demirörs, A. F., Sitti, M.

Nature Materials, 18, 2019 (article)

pi

[BibTex]



Microfluidics Integrated Lithography‐Free Nanophotonic Biosensor for the Detection of Small Molecules

Sreekanth, K. V., Sreejith, S., Alapan, Y., Sitti, M., Lim, C. T., Singh, R.

Advanced Optical Materials, 2019 (article)

pi

[BibTex]



Bio-inspired robotic collectives

Sitti, M.

Nature, 567, pages: 314-315, Macmillan Publishers Ltd., London, England, 2019 (article)

pi

[BibTex]



Peptide-Induced Biomineralization of Tin Oxide (SnO2) Nanoparticles for Antibacterial Applications

Singh, A. V., Jahnke, T., Xiao, Y., Wang, S., Yu, Y., David, H., Richter, G., Laux, P., Luch, A., Srivastava, A., Saxena, P. S., Bill, J., Sitti, M.

Journal of Nanoscience and Nanotechnology, 19, American Scientific Publishers, 2019 (article)

pi

[BibTex]



Electromechanical actuation of dielectric liquid crystal elastomers for soft robotics

Davidson, Z., Shahsavan, H., Guo, Y., Hines, L., Xia, Y., Yang, S., Sitti, M.

Bulletin of the American Physical Society, APS, 2019 (article)

pi

[BibTex]



Learning to Navigate Endoscopic Capsule Robots

Turan, M., Almalioglu, Y., Gilbert, H. B., Mahmood, F., Durr, N. J., Araujo, H., Sarı, A. E., Ajay, A., Sitti, M.

IEEE Robotics and Automation Letters, 4, 2019 (article)

pi

[BibTex]



Order and Information in the Phases of a Torque-driven Artificial Collective System

Wang, W., Gardi, G., Kishore, V., Koens, L., Son, D., Gilbert, H., Harwani, P., Lauga, E., Sitti, M.

arXiv preprint arXiv:1910.11226, 2019 (article)

pi

[BibTex]


2017


Soft Actuators for Small-Scale Robotics

Hines, L., Petersen, K., Lum, G. Z., Sitti, M.

Advanced Materials, 2017 (article)

Abstract
This review comprises a detailed survey of ongoing methodologies for soft actuators, highlighting approaches suitable for nanometer- to centimeter-scale robotic applications. Soft robots present a special design challenge in that their actuation and sensing mechanisms are often highly integrated with the robot body and overall functionality. When less than a centimeter, they belong to an even more special subcategory of robots or devices, in that they often lack on-board power, sensing, computation, and control. Soft, active materials are particularly well suited for this task, with a wide range of stimulants and a number of impressive examples, demonstrating large deformations, high motion complexities, and varied multifunctionality. Recent research includes both the development of new materials and composites, as well as novel implementations leveraging the unique properties of soft materials.

pi

DOI [BibTex]


A deep learning based fusion of RGB camera information and magnetic localization information for endoscopic capsule robots

Turan, M., Shabbir, J., Araujo, H., Konukoglu, E., Sitti, M.

International Journal of Intelligent Robotics and Applications, 1(4):442-450, December 2017 (article)

Abstract
A reliable, real-time localization functionality is crucial for actively controlled capsule endoscopy robots, which are an emerging, minimally invasive diagnostic and therapeutic technology for the gastrointestinal (GI) tract. In this study, we extend the success of deep learning approaches from various research fields to the problem of sensor fusion for endoscopic capsule robots. We propose a multi-sensor fusion based localization approach which combines endoscopic camera information and magnetic sensor based localization information. Results on a real pig stomach dataset show that our method achieves sub-millimeter precision for both translational and rotational movements.

pi

link (url) DOI [BibTex]



3D Chemical Patterning of Micromaterials for Encoded Functionality

Ceylan, H., Yasa, I. C., Sitti, M.

Advanced Materials, 2017 (article)

Abstract
Programming local chemical properties of microscale soft materials with 3D complex shapes is indispensable for creating sophisticated functionalities, which has not yet been possible with existing methods. Precise spatiotemporal control of two-photon crosslinking is employed as an enabling tool for 3D patterning of microprinted structures for encoding versatile chemical moieties.

pi

DOI Project Page [BibTex]


Biohybrid actuators for robotics: A review of devices actuated by living cells

Ricotti, L., Trimmer, B., Feinberg, A. W., Raman, R., Parker, K. K., Bashir, R., Sitti, M., Martel, S., Dario, P., Menciassi, A.

Science Robotics, 2(12), November 2017 (article)

Abstract
Actuation is essential for artificial machines to interact with their surrounding environment and to accomplish the functions for which they are designed. Over the past few decades, there has been considerable progress in developing new actuation technologies. However, controlled motion still represents a considerable bottleneck for many applications and hampers the development of advanced robots, especially at small length scales. Nature has solved this problem using molecular motors that, through living cells, are assembled into multiscale ensembles with integrated control systems. These systems can scale force production from piconewtons up to kilonewtons. By leveraging the performance of living cells and tissues and directly interfacing them with artificial components, it should be possible to exploit the intricacy and metabolic efficiency of biological actuation within artificial machines. We provide a survey of important advances in this biohybrid actuation paradigm.

pi

link (url) DOI [BibTex]



Learning a model of facial shape and expression from 4D scans

Li, T., Bolkart, T., Black, M. J., Li, H., Romero, J.

ACM Transactions on Graphics, 36(6):194:1-194:17, November 2017, Two first authors contributed equally (article)

Abstract
The field of 3D face modeling has a large gap between high-end and low-end methods. At the high end, the best facial animation is indistinguishable from real humans, but this comes at the cost of extensive manual labor. At the low end, face capture from consumer depth sensors relies on 3D face models that are not expressive enough to capture the variability in natural facial shape and expression. We seek a middle ground by learning a facial model from thousands of accurately aligned 3D scans. Our FLAME model (Faces Learned with an Articulated Model and Expressions) is designed to work with existing graphics software and be easy to fit to data. FLAME uses a linear shape space trained from 3800 scans of human heads. FLAME combines this linear shape space with an articulated jaw, neck, and eyeballs, pose-dependent corrective blendshapes, and additional global expression blendshapes learned from 4D face sequences in the D3DFACS dataset along with additional 4D sequences. We accurately register a template mesh to the scan sequences and make the D3DFACS registrations available for research purposes. In total the model is trained from over 33,000 scans. FLAME is low-dimensional but more expressive than the FaceWarehouse model and the Basel Face Model. We compare FLAME to these models by fitting them to static 3D scans and 4D sequences using the same optimization method. FLAME is significantly more accurate and is available for research purposes (http://flame.is.tue.mpg.de).
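The linear shape-and-expression part of such a model can be sketched as below. This is a toy with random bases and made-up dimensions, not FLAME itself (which uses on the order of 5000 vertices and hundreds of components), and it omits FLAME's articulated jaw, neck, eyeballs, and pose-dependent correctives:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: 20 vertices, 5 shape and 3 expression components.
n_verts, n_shape, n_expr = 20, 5, 3
template = rng.normal(size=(n_verts, 3))             # mean mesh vertices
shape_dirs = rng.normal(size=(n_verts, 3, n_shape))  # linear shape basis
expr_dirs = rng.normal(size=(n_verts, 3, n_expr))    # expression basis

def morph(shape_coeffs, expr_coeffs):
    """Morphed mesh vertices: template plus linear blendshape offsets."""
    return (template
            + shape_dirs @ shape_coeffs
            + expr_dirs @ expr_coeffs)

# Zero coefficients recover the mean mesh exactly.
neutral = morph(np.zeros(n_shape), np.zeros(n_expr))
print(np.allclose(neutral, template))  # True
```

The low-dimensional coefficient vectors are what an optimizer adjusts when fitting such a model to scans, which is why a compact but expressive basis matters.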

ps

data/model video code chumpy code tensorflow paper supplemental Project Page [BibTex]



Investigating Body Image Disturbance in Anorexia Nervosa Using Novel Biometric Figure Rating Scales: A Pilot Study

Mölbert, S. C., Thaler, A., Streuber, S., Black, M. J., Karnath, H., Zipfel, S., Mohler, B., Giel, K. E.

European Eating Disorders Review, 25(6):607-612, November 2017 (article)

Abstract
This study uses novel biometric figure rating scales (FRS) spanning body mass index (BMI) 13.8 to 32.2 kg/m2 and BMI 18 to 42 kg/m2. The aims of the study were (i) to compare FRS body weight dissatisfaction and perceptual distortion of women with anorexia nervosa (AN) to a community sample; (ii) to assess how FRS parameters are associated with questionnaire body dissatisfaction, eating disorder symptoms, and appearance comparison habits; and (iii) to test whether the weight spectrum of the FRS matters. Women with AN (n = 24) and a community sample of women (n = 104) selected their current and ideal body on the FRS and completed additional questionnaires. Women with AN accurately picked the body that aligned best with their actual weight in both FRS. Controls underestimated their BMI in the FRS 14–32 and were accurate in the FRS 18–42. In both FRS, women with AN desired a body close to their actual BMI and controls desired a thinner body. Our observations suggest that body image disturbance in AN is unlikely to be characterized by a visual perceptual disturbance, but rather by an idealization of underweight in conjunction with high body dissatisfaction. The weight spectrum of FRS can influence the accuracy of BMI estimation.

ps

publisher DOI Project Page [BibTex]