I am in the field of Virtual Humans and Affective Computing. I am interested in what makes us perceive an interacting agent as 'human'. Specifically, I am interested in affectivity and appearance. What do the shape, pose, movement, behavior, and style of a person or a virtual human tell us about them? How does interacting with each other affect our own actions? Furthermore, I study how individual factors (such as culture) influence this perception.
In the bigger picture, my goal is to relate social perception, linguistics, and computer vision in order to improve our Virtual Humans. Maybe one day our agents will learn to express themselves and communicate with us the same way we humans do among ourselves.
Besides research, I enjoy web technologies! I am currently supporting the creation of websites for scientific data acquisition and dissemination related to 3D body shape, as well as web development for scientific experiments and perceptual studies.
ACM Trans. Graph. (Proc. SIGGRAPH), 35(4):54:1-54:14, July 2016 (article)
Realistic, metrically accurate, 3D human avatars are useful for games, shopping, virtual reality, and health applications. Such avatars are not in wide use because solutions for creating them from high-end scanners, low-cost range cameras, and tailoring measurements all have limitations. Here we propose a simple solution and show that it is surprisingly accurate. We use crowdsourcing to generate attribute ratings of 3D body shapes corresponding to standard linguistic descriptions of 3D shape. We then learn a linear function relating these ratings to 3D human shape parameters. Given an image of a new body, we again turn to the crowd for ratings of the body shape. The collection of linguistic ratings of a photograph provides remarkably strong constraints on the metric 3D shape. We call the process crowdshaping and show that our Body Talk system produces shapes that are perceptually indistinguishable from bodies created from high-resolution scans and that the metric accuracy is sufficient for many tasks. This makes body “scanning” practical without a scanner, opening up new applications including database search, visualization, and extracting avatars from books.
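The core fitting step described above — learning a linear function from crowdsourced attribute ratings to 3D body shape parameters — can be sketched as a simple least-squares fit. This is a minimal illustration, not the Body Talk implementation; all variable names, dimensions, and the synthetic data are assumptions for the sake of the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: bodies rated by the crowd, linguistic
# attributes (e.g. "curvy", "long-legged"), and shape parameters.
n_bodies, n_attributes, n_shape_params = 200, 30, 8

# Synthetic data with a known linear relation plus small rating noise.
W_true = rng.normal(size=(n_attributes, n_shape_params))
ratings = rng.normal(size=(n_bodies, n_attributes))   # averaged crowd ratings
shapes = ratings @ W_true + 0.01 * rng.normal(size=(n_bodies, n_shape_params))

# Learn the linear rating-to-shape function with ordinary least squares.
W, *_ = np.linalg.lstsq(ratings, shapes, rcond=None)

# Given crowd ratings of a new body (e.g. from a photograph),
# predict its 3D shape parameters.
new_ratings = rng.normal(size=(1, n_attributes))
predicted_shape = new_ratings @ W
```

With enough rated bodies, the least-squares estimate recovers the underlying linear map well, which mirrors the paper's finding that a collection of linguistic ratings strongly constrains metric 3D shape.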
Our goal is to understand the principles of Perception, Action and Learning in autonomous systems that successfully interact with complex environments and to use this understanding to design future systems