I am a postdoctoral researcher in Michael Black's Perceiving Systems department at the Max Planck Institute for Intelligent Systems in Tübingen.
Before that I was a PhD student affiliated with the University of Bonn and the Max Planck Institute for Intelligent Systems in Tübingen, working with Juergen Gall on hand-object interaction.
My alma mater is the Aristotle University of Thessaloniki in Greece, where I collaborated closely with Leontios Hadjileontiadis.
For more info, please visit my personal website
November 2017
The dataset and models for our paper "Embodied Hands: Modeling and Capturing Hands and Bodies Together" are now online.
September 2017
Our paper "Embodied Hands: Modeling and Capturing Hands and Bodies Together" is accepted at SIGGRAPH-Asia 2017. Project's website.
January 2017
PhD thesis defended @ Uni-Bonn.
August 2016
PhD thesis submitted @ Uni-Bonn.
Talk @:
Our paper "Reconstructing Articulated Rigged Models from RGB-D Videos" is accepted for oral presentation at the ECCV-Workshop on Recovering 6D Object Pose Estimation (R6D). Projet's website here.
June 2016
Talk @:
February 2016
Our paper "Capturing Hands in Action using Discriminative Salient Points and Physics Simulation" is accepted from the IJCV special issue "Human Activity Understanding from 2D and 3D data". Project's website here.
November 2015
The dataset, source code, and several videos and viewers for the ICCV'15 paper have been uploaded to the project's website.
October 2015
The extended abstract of our ICCV'15 paper "3D Object Reconstruction from Hand-Object Interactions" was accepted for a poster presentation at the 1st Workshop on Object Understanding for Interaction.
September 2015
Our paper "3D Object Reconstruction from Hand-Object Interactions" got accepted for a poster presentation at ICCV'15. Project's website here.
October 2014
One paper submitted to IJCV is under review. Project's website here.
July 2014
GCPR Paper "Capturing Hand Motion with an RGB-D Sensor, Fusing a Generative Model with Salient Points" accepted for an oral presentation.
Please visit the Project Page.
October 2013
Together with Abhilash Srikantha, Dominik A. Klein, and Fahrettin Gökgöz, we organized the exercises for Juergen Gall's Computer Vision lecture at the University of Bonn (Computer Science department).
June 2013
GCPR Paper "A Comparison of Directional Distances for Hand Pose Estimation" accepted for a poster presentation.
Please visit the Project Page.
May 2012 - Aug 2016 | PhD (Dr. rer. nat.) |
University of Bonn & Max Planck Institute for Intelligent Systems | |
Computer Vision Group (Bonn) / Perceiving Systems Department (MPI) | |
Supervisor: Juergen Gall | |
"Capturing Hand-Object Interaction and Reconstruction of Manipulated Objects" | |
Committee: Juergen Gall, Antonis Argyros, Reinhard Klein, Carsten Urbach | |
November 2009 | Diploma in Engineering (M.Eng. equivalent - 5 year curriculum) |
Department of Electrical and Computer Engineering | |
Aristotle University of Thessaloniki, Greece | |
Diploma Thesis: "Adaptive algorithms for marker detection in an image: Application in the field of Augmented Reality" |
|
Supervisor: Hadjileontiadis Leontios | |
June 2002 | 2nd High School (Lyceum) of Kozani, Greece |
Direction: Science Studies | |
Oct 2016 - Present | Postdoctoral Researcher (with Michael Black) |
Max Planck Institute for Intelligent Systems | |
Perceiving Systems Department | |
July 2015 - Sept 2016 | Research Student (with Juergen Gall) |
Computer Vision Group | |
University of Bonn | |
May 2012 - June 2013 | Research Student (with Juergen Gall) |
Max Planck Institute for Intelligent Systems | |
Perceiving Systems Department | |
Apr. 2011 - July 2011 | Member of the R&D team for the project |
"Epione: An Innovative Pain Management Solution" | |
Under the mentorship of Assoc. Prof. L. Hadjileontiadis | |
Department of Electrical and Computer Engineering | |
Aristotle University of Thessaloniki, Greece | |
Jan. 2010 - May 2011 | IT Support |
A' Army Corps, Greek Army | |
Nov. 2009 - June 2010 | Member of the R&D team for the project |
"Epione: An Innovative Pain Management Solution" | |
Under the mentorship of Assoc. Prof. L. Hadjileontiadis | |
Department of Electrical and Computer Engineering | |
Aristotle University of Thessaloniki, Greece | |
July 2006 - Aug. 2006 | Internship at "Agios Dimitrios" power station |
Capturing Hand-Object Interaction and Reconstruction of Manipulated Objects | |
13.06.16 | FORTH Institute, hosted by Prof. A. Argyros | (link) |
17.06.16 | Max Planck Institute, hosted by Prof. C. Theobalt | |
04.08.16 | Perceiving Systems - MPI for IS, hosted by Prof. M. Black | (link) |
10.08.16 | Microsoft Research Cambridge | |
12.08.16 | University of Oxford, hosted by Prof. P. Torr | (link) |
26.08.16 | Dyson Robotics Lab at Imperial College, hosted by Prof. A. Davison |
Media coverage about the Epione project by various national TV, radio, web, and press outlets |
Honorary Award by the Rectorate of Aristotle University of Thessaloniki for the project “Epione” |
Invited Presentation of "Epione" project to Steve Ballmer, Microsoft's CEO, 8 July 2011, New York |
World finals of Microsoft Imagine Cup 2011 (Software Design) in New York, July 2011 as a member of the "Epione" Team |
1st Place in the National Finals of Imagine Cup 2011 (Software Design) as a member of the "Epione" Team |
Invited presentation of the Epione project at TEDxThessaloniki 2010 (by the team's mentor, Leontios Hadjileontiadis) |
2nd Place in the National Finals of Imagine Cup 2010 (Software Design) as a member of the "Epione" Team |
Epione Project | Invited Presentation to Steve Ballmer, Microsoft's CEO |
Marriott Marquis Hotel | |
8 July 2011, New York | |
Epione Project | Imagine Cup 2011 World Finals |
Marriott Marquis Hotel | |
9 July 2011, New York | |
Epione Project | Imagine Cup 2011 Greek Finals |
Microsoft Hellas | |
Athens, 2-3 May 2011 | |
Epione Project | Imagine Cup 2010 Greek Finals |
Microsoft Innovation Center (MIC) Greece | |
Athens, 3-4 May 2010 | |
Epione Project | 5th StudentGuru Event |
Central Library of Aristotle University of Thessaloniki | |
Thessaloniki, 7 June 2010 | |
since 2007 | IEEE (Institute of Electrical and Electronics Engineers) |
since 2010 | TEE (Technical Chamber of Greece) |
since 2017 | ACM SIGGRAPH |
TPAMI | Transactions on Pattern Analysis and Machine Intelligence | link | |
CVPR | IEEE Conference on Computer Vision and Pattern Recognition | link | |
ECCV | European Conference on Computer Vision | link | |
ICRA | IEEE International Conference on Robotics and Automation | link | |
Sensors | Sensors (Open Access Journal from MDPI) | link |
HRI | ACM/IEEE International Conference on Human-Robot Interaction | link | |
TMM | IEEE Transactions on Multimedia | link | |
August 2012 | Vision and Sports Summer School |
27-31 August 2012 | |
Czech Technical University | |
Prague, Czech Republic | |
July 2007 | BEST Course on Technology |
"Batteries to power, turbines to speed, kick it baby!" | |
4 - 19 July 2007 | |
Ghent, Belgium | |
Greek | - | Mother tongue |
English | C2 | Certificate of Proficiency in English, University of Cambridge |
German | C1 | Goethe Zertifikat, Goethe Institut |
dimitris.tzionas[at]gmail.com | |
dimitris.tzionas[at]tuebingen.mpg.de | |
Office | N03.009 |
Address | MPI for Intelligent Systems |
Perceiving Systems dpt. | |
Max-Planck-Ring 4 | |
72076 Tübingen | |
Skype | dimitris.tzionas.atwork |
Website | www.dimtzionas.com |
Networking | |
Research | Google Scholar |
Research Gate |
Postdoc - MPI | Michael Black |
Javier Romero | |
PhD | Juergen Gall (advisor) |
Aristotle University | Leontios Hadjileontiadis (mentor) |
+ | Stefanos Eleftheriadis |
MS Imagine Cup | Stamatis Georgoulis |
Kostas Vrenas | |
Bonn | Abhilash Srikantha |
Pablo Aponte | |
ETH Zurich | Luca Ballan |
Marc Pollefeys | |
Aristotle University | Tonia Tzemanaki |
Paris Chourtsidis | |
Tilemachos Matiakis - Kenotom P.C.-I.K.E. | |
Panagiotis Petrantonakis | |
Akis Tsiotsios | |
SIGGRAPH-Asia 2017 (TOG) models/dataset (link)
Models & Alignments & Scans, for:
- hand-only (MANO)
- body+hand (SMPL+H)
ECCVw 2016 dataset (link)
RGB-D dataset of an object under manipulation.
The dataset also contains input 3D template meshes
for each object and output articulated models.
IJCV 2016 dataset (link)
Annotated RGB-D + multicamera-RGB dataset of one or two hands
interacting with each other and/or with a rigid or an articulated object
ICCV 2015 dataset (link)
RGB-D dataset of a hand rotating a rigid object for 3D scanning
GCPR 2014 dataset (link)
Annotated RGB-D dataset of one or two hands interacting with each other
GCPR 2013 dataset (link)
Synthetic dataset of two hands interacting with each other
Capturing Hand-Object Interaction and Reconstruction of Manipulated Objects | |
13.06.16 | FORTH Institute, hosted by Prof. A. Argyros | (link) |
17.06.16 | Max Planck Institute, hosted by Prof. C. Theobalt | |
04.08.16 | Perceiving Systems - MPI for IS, hosted by Prof. M. Black | (link) |
10.08.16 | Microsoft Research Cambridge | |
12.08.16 | University of Oxford, hosted by Prof. P. Torr | (link) |
26.08.16 | Dyson Robotics Lab at Imperial College, hosted by Prof. A. Davison | |
Capturing Hand Motion with an RGB-D Sensor, Fusing a Generative Model with Salient Points | ||
05.09.14 | GCPR 2014, Muenster | (link) |
Humans move their hands and bodies together to communicate and solve tasks. Capturing and replicating such coordinated activity is critical for virtual characters that behave realistically. Surprisingly, most methods treat the 3D modeling and tracking of bodies and hands separately. Here we formulate a model of hands and bodies interacting together and fit it to full-body 4D sequences. When scanning or capturing the full body in 3D, hands are small and often partially occluded, making their shape and pose hard to recover. To cope with low-resolution, occlusion, and noise, we develop a new model called MANO (hand Model with Articulated and Non-rigid defOrmations). MANO is learned from around 1000 high-resolution 3D scans of hands of 31 subjects in a wide variety of hand poses. The model is realistic, low-dimensional, captures non-rigid shape changes with pose, is compatible with standard graphics packages, and can fit any human hand. MANO provides a compact mapping from hand poses to pose blend shape corrections and a linear manifold of pose synergies. We attach MANO to a standard parameterized 3D body shape model (SMPL), resulting in a fully articulated body and hand model (SMPL+H). We illustrate SMPL+H by fitting complex, natural, activities of subjects captured with a 4D scanner. The fitting is fully automatic and results in full body models that move naturally with detailed hand motions and a realism not seen before in full body performance capture. The models and data are freely available for research purposes at http://mano.is.tue.mpg.de.
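For readers less familiar with this family of models, the formulation sketched above (a template mesh deformed by identity- and pose-dependent blend shapes and then posed with linear blend skinning) can be summarized in a few lines of NumPy. The snippet below is only a conceptual illustration with assumed array names and shapes; it is not the released MANO/SMPL+H code or data format.

import numpy as np

def posed_vertices(template, shape_dirs, pose_dirs, skin_weights,
                   joint_transforms, betas, pose_feats):
    # template:         (V, 3)    mean template vertices
    # shape_dirs:       (V, 3, S) identity ("shape") blend-shape basis
    # pose_dirs:        (V, 3, P) pose-corrective blend-shape basis
    # skin_weights:     (V, J)    per-vertex skinning weights
    # joint_transforms: (J, 4, 4) world transform of each joint for this pose
    # betas:            (S,)      shape coefficients
    # pose_feats:       (P,)      pose features (e.g. flattened joint rotations)
    # Rest-pose vertices after identity- and pose-dependent offsets.
    v_shaped = template + shape_dirs @ betas + pose_dirs @ pose_feats
    # Linear blend skinning: blend the joint transforms per vertex, then apply.
    per_vertex_T = np.einsum('vj,jab->vab', skin_weights, joint_transforms)
    v_hom = np.concatenate([v_shaped, np.ones((len(v_shaped), 1))], axis=1)
    return np.einsum('vab,vb->va', per_vertex_T, v_hom)[:, :3]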
Although commercial and open-source software exist to reconstruct a static object from a sequence recorded with an RGB-D sensor, there is a lack of tools that build rigged models of articulated objects that deform realistically and can be used for tracking or animation. In this work, we fill this gap and propose a method that creates a fully rigged model of an articulated object from depth data of a single sensor. To this end, we combine deformable mesh tracking, motion segmentation based on spectral clustering and skeletonization based on mean curvature flow. The fully rigged model then consists of a watertight mesh, embedded skeleton, and skinning weights.
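As a rough illustration of the motion-segmentation step mentioned above, the sketch below groups tracked mesh vertices into rigid parts by spectral clustering of their trajectories: vertices on the same rigid part keep a nearly constant mutual distance over time. The affinity definition, the fixed number of clusters, and all names are assumptions for illustration, not the paper's exact formulation.

import numpy as np
from sklearn.cluster import SpectralClustering

def segment_rigid_parts(trajectories, n_parts, sigma=0.05):
    # trajectories: (V, F, 3) position of each of V tracked vertices in F frames
    # n_parts:      assumed number of rigid parts
    diffs = trajectories[:, None] - trajectories[None, :]        # (V, V, F, 3)
    dists = np.linalg.norm(diffs, axis=-1)                       # (V, V, F)
    # Low variance of the pairwise distance over time -> high affinity.
    affinity = np.exp(-np.var(dists, axis=-1) / (2.0 * sigma**2))
    labels = SpectralClustering(n_clusters=n_parts,
                                affinity='precomputed').fit_predict(affinity)
    return labels                                                # part label per vertex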
Hand motion capture is a popular research field, recently gaining more attention due to the ubiquity of RGB-D sensors. However, even most recent approaches focus on the case of a single isolated hand. In this work, we focus on hands that interact with other hands or objects and present a framework that successfully captures motion in such interaction scenarios for both rigid and articulated objects. Our framework combines a generative model with discriminatively trained salient points to achieve a low tracking error and with collision detection and physics simulation to achieve physically plausible estimates even in case of occlusions and missing visual data. Since all components are unified in a single objective function which is almost everywhere differentiable, it can be optimized with standard optimization techniques. Our approach works for monocular RGB-D sequences as well as setups with multiple synchronized RGB cameras. For a qualitative and quantitative evaluation, we captured 29 sequences with a large variety of interactions and up to 150 degrees of freedom.
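The statement that all components are unified in a single objective that standard optimizers can handle is illustrated below with a minimal, hypothetical sketch: weighted data, salient-point, collision, and physics terms are summed and handed to an off-the-shelf optimizer. The term functions, weights, and degree-of-freedom count are placeholders, not the paper's actual implementation.

import numpy as np
from scipy.optimize import minimize

def make_objective(data_term, salient_term, collision_term, physics_term,
                   w_salient=1.0, w_collision=1.0, w_physics=1.0):
    # Each *_term maps a pose vector (hand + object DoF) to a scalar cost.
    def objective(pose):
        return (data_term(pose)
                + w_salient * salient_term(pose)
                + w_collision * collision_term(pose)
                + w_physics * physics_term(pose))
    return objective

# Usage sketch with dummy (zero) terms and an illustrative DoF count.
zero_term = lambda pose: 0.0
objective = make_objective(zero_term, zero_term, zero_term, zero_term)
result = minimize(objective, np.zeros(30), method='L-BFGS-B')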
Recent advances have enabled 3d object reconstruction approaches using a single off-the-shelf RGB-D camera. Although these approaches are successful for a wide range of object classes, they rely on stable and distinctive geometric or texture features. Many objects like mechanical parts, toys, household or decorative articles, however, are textureless and characterized by minimalistic shapes that are simple and symmetric. Existing in-hand scanning systems and 3d reconstruction techniques fail for such symmetric objects in the absence of highly distinctive features. In this work, we show that extracting 3d hand motion for in-hand scanning effectively facilitates the reconstruction of even featureless and highly symmetric objects and we present an approach that fuses the rich additional information of hands into a 3d reconstruction pipeline, significantly contributing to the state-of-the-art of in-hand scanning.
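The core idea, using the rigid motion of the grasping hand to predict the object's motion so that registration no longer depends on object features, can be sketched as below. The 4x4 hand poses and the function interface are illustrative assumptions, not the actual scanning pipeline.

import numpy as np

def transfer_hand_motion(object_points_t, hand_pose_t, hand_pose_t1):
    # object_points_t: (N, 3) object point cloud observed at frame t
    # hand_pose_t/t1:  (4, 4) rigid pose of the grasping hand at frames t and t+1
    # Relative rigid motion of the hand from frame t to t+1.
    T_rel = hand_pose_t1 @ np.linalg.inv(hand_pose_t)
    pts_h = np.concatenate([object_points_t,
                            np.ones((len(object_points_t), 1))], axis=1)
    # Assuming a stable grasp, the object moves rigidly with the hand.
    return (pts_h @ T_rel.T)[:, :3]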
Hand motion capture has been an active research topic in recent years, following the success of full-body pose tracking. Despite similarities, hand tracking proves to be more challenging, characterized by a higher dimensionality, severe occlusions and self-similarity between fingers. For this reason, most approaches rely on strong assumptions, like hands in isolation or expensive multi-camera systems, that limit the practical use. In this work, we propose a framework for hand tracking that can capture the motion of two interacting hands using only a single, inexpensive RGB-D camera. Our approach combines a generative model with collision detection and discriminatively learned salient points. We quantitatively evaluate our approach on 14 new sequences with challenging interactions.
Benchmarking methods for 3d hand tracking is still an open problem due to the difficulty of acquiring ground truth data. We introduce a new dataset and benchmarking protocol that is insensitive to the accumulative error of other protocols. To this end, we create testing frame pairs of increasing difficulty and measure the pose estimation error separately for each of them. This approach gives new insights and allows to accurately study the performance of each feature or method without employing a full tracking pipeline. Following this protocol, we evaluate various directional distances in the context of silhouette-based 3d hand tracking, expressed as special cases of a generalized Chamfer distance form. An appropriate parameter setup is proposed for each of them, and a comparative study reveals the best performing method in this context.
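As an illustration of the kind of directional distance evaluated in that study, the sketch below computes a one-sided, truncated Chamfer distance between two binary silhouettes via a Euclidean distance transform; swapping the arguments gives the other direction. The truncation value and the symmetric average are illustrative choices, not the benchmark's exact parameter setup.

import numpy as np
from scipy.ndimage import distance_transform_edt

def directional_chamfer(silhouette_a, silhouette_b, truncation=20.0):
    # silhouette_a, silhouette_b: boolean (H, W) foreground masks.
    # Distance of every pixel to the nearest foreground pixel of B.
    dist_to_b = distance_transform_edt(~silhouette_b)
    # Truncated distances evaluated at the foreground pixels of A.
    dists = np.minimum(dist_to_b[silhouette_a], truncation)
    return dists.mean() if dists.size else 0.0

def symmetric_chamfer(a, b):
    # Simple symmetric variant: average of both directions.
    return 0.5 * (directional_chamfer(a, b) + directional_chamfer(b, a))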
Bodies in computer vision have often been an afterthought. Human pose is often represented by 10-12 body joints in 2D or 3D. This is inspired by Johannson's moving light displays, which showed that some human actions can be recognized from the motion of the major joints of the body. But the joint...
Michael Black Dimitris Tzionas Timo Bolkart Ahmed Osman Vassilis Choutas Georgios Pavlakos Nima Ghorbani
Deep learning has brought rapid progress for many computer vision problems but current methods require large training datasets with annotated ground truth. Human annotators tend to be reasonably efficient for tasks like sparse 2D joint estimation, however annotation for other tasks like dense optical...
Javier Romero Anurag Ranjan Michael Black Jonas Wulff David Hoffmann Dimitris Tzionas Siyu Tang Naureen Mahmood
Javier Romero Anurag Ranjan Michael Black Jonas Wulff David Hoffmann Dimitris Tzionas Siyu Tang Naureen Mahmood Gul Varol Cordelia Schmid
Little is known about the shape and properties of the human finger during haptic interaction, as such situations are difficult to instrument. Interestingly, th...
David Gueorguiev
Dimitris Tzionas
Michael Black
Katherine J. Kuchenbecker
Hands are important to humans for signaling and communication, as well as for interacting with the physical world. Capturing the motion of hands is a very challenging computer vision problem that is also highly relevant for other areas like computer graphics, human-computer interfaces, and robotics.
...Dimitris Tzionas Javier Romero Michael Black Gul Varol Cordelia Schmid
Gueorguiev, D., Tzionas, D., Pacchierotti, C., Black, M. J., Kuchenbecker, K. J.
Extended abstract presented at the Hand, Brain and Technology conference (HBT), Ascona, Switzerland, August 2018 (misc)
Gueorguiev, D., Tzionas, D., Pacchierotti, C., Black, M. J., Kuchenbecker, K. J.
Work-in-progress paper (3 pages) presented at the IEEE Haptics Symposium, San Francisco, USA, March 2018 (misc)
Romero, J., Tzionas, D., Black, M. J.
ACM Transactions on Graphics (Proc. SIGGRAPH Asia), 36(6):245:1-245:17, ACM, November 2017 (article)
Tzionas, D., Ballan, L., Srikantha, A., Aponte, P., Pollefeys, M., Gall, J.
International Journal of Computer Vision (IJCV), 118(2):172-193, June 2016 (article)
Tzionas, D., Srikantha, A., Aponte, P., Gall, J.
In German Conference on Pattern Recognition (GCPR), pages: 1-13, Lecture Notes in Computer Science, Springer, September 2014 (inproceedings)
Georgoulis, S., Eleftheriadis, S., Tzionas, D., Vrenas, K., Petrantonakis, P., Hadjileontiadis, L. J.
In Proceedings of the 2010 International Conference on Intelligent Networking and Collaborative Systems, pages: 259-266, INCOS ’10, IEEE Computer Society, Washington, DC, USA, 2010 (inproceedings)
Tzionas, D., Vrenas, K., Eleftheriadis, S., Georgoulis, S., Petrantonakis, P. C., Hadjileontiadis, L. J.
In Proceedings of the 3rd International Conference on Software Development for Enhancing Accessibility and Fighting Info-Exclusion, pages: 23-30, DSAI ’10, UTAD - Universidade de Trás-os-Montes e Alto Douro, 2010 (inproceedings)