Among all the potential contexts where computer graphics techniques can be used, cloth animation is a particularly interesting case since, in the real world, clothing is far more than just the physical objects we wear: clothing is a key element in conveying someone’s expressiveness and motion, and it even defines a person’s identity. Consequently, creating interactive digitally dressed characters with photorealistic clothing is enormously relevant and has a direct impact on a number of industries, including the video game industry (creating digital characters, editing apparel, etc.) and the textile and fashion industries (computational design tools, e-commerce, virtual try-on, etc.). Considering the huge size of these industries (the worldwide revenue of the video game industry alone was estimated at over $60B in 2015 and expected to grow 10% yearly, while the global revenue of the fashion industry is estimated at $1,200B), the interest in developing robust real-time techniques for photorealistic clothing is strongly motivated. Important social and economic aspects of our everyday life will directly benefit from research into animatable digitally dressed characters.
The aim of this project was to investigate fundamental cloth simulation methods in order to develop a novel framework to synthesize real-time photorealistic cloth animation. In particular, given a video containing heterogeneous cloth motion, the project has researched methods to incorporate image information from the video into a physical model, enabling interactive animation of video cloth motion. Such algorithms pave the way towards fully digitized fashion and clothing industries, enabling ground-breaking applications such as virtual try-on or digital design of garments.
\"The main objectives and results of the project can be classified into 4 categories:
**** 1. Human pose estimation from monocular color video. ****
The estimation of the human pose of an actor from a single video is a fundamental step towards enabling the editing and reanimation of an actor's clothing and apparel. In this project we have investigated novel methods based on machine learning techniques, namely neural networks, which have proven successful in predicting the 3D pose of an actor from a single video. Given a single video feed, captured for example with a mobile phone, we have contributed and published a new algorithm that predicts the 3D position of each joint in real time. Notice that our approach overcomes the need for body markers, additional sensors and a controlled environment, and is therefore able to predict the 3D pose of a person in unconstrained scenarios, even in vintage footage or YouTube videos. This capability is a fundamental step towards virtual try-on applications where, for example, users can take a selfie and instantly visualize how a particular garment fits their body and pose.
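The per-frame inference described above can be summarized as a small pipeline: a learned regressor maps each frame to one 3D position per joint, and a temporal filter stabilizes the stream into a continuous skeleton. The sketch below is purely illustrative, not the published system: the learned regressor is replaced by a stub, the skeleton is tiny, and the joint names and values are invented for the example; only the exponential temporal smoothing is spelled out in full.

```python
# Hedged sketch: the real method uses a CNN to regress per-joint positions
# from the image; here that regressor is a placeholder so the surrounding
# pipeline (per-frame lifting + temporal smoothing) is runnable.

JOINTS = ["head", "neck", "l_shoulder", "r_shoulder"]  # toy skeleton

def predict_3d_pose(frame):
    """Placeholder for the learned regressor: maps an image frame to one
    (x, y, z) camera-space position (meters) per joint."""
    # A real model would run a neural network here; fixed values for the sketch.
    return {j: (0.1 * i, 1.5, 2.0) for i, j in enumerate(JOINTS)}

def smooth_poses(pose_stream, alpha=0.8):
    """Exponential temporal smoothing of per-frame predictions, yielding a
    stable skeleton suitable for real-time display."""
    prev = None
    for pose in pose_stream:
        if prev is None:
            prev = pose
        else:
            prev = {j: tuple(alpha * p + (1 - alpha) * c
                             for p, c in zip(prev[j], pose[j]))
                    for j in pose}
        yield prev

frames = [None] * 3  # stand-ins for decoded video frames
poses = list(smooth_poses(predict_3d_pose(f) for f in frames))
```

The smoothing factor `alpha` trades latency for stability: values near 1 suppress jitter but lag fast motion, which matters for live mobile-phone capture.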
**** 2. Soft-tissue body animation ****
To accurately synthesize photorealistic garments, we also need to compute how they fit any human body. We therefore need to be able to compute how a garment deforms when in contact with the body surface, a task that is especially challenging due to the non-rigid nature of our skin. For example, the area around the belly and chest is usually softer (or less rigid) than the shoulder. To this end, in this project we have investigated how to create human models that reproduce the stiffness of the skin, paving the way to the simulation of realistic skin-cloth interactions. We published a journal research paper that demonstrates how soft-tissue skin deformations can be computed as a function of body shape and motion. In particular, ours is one of the first methods capable of learning to predict soft-tissue deformations by looking at examples ("data-driven") captured from real users, instead of solving the computationally expensive physical simulation needed to deform a 3D mesh.
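The core of the data-driven idea can be illustrated with a toy example: rather than simulating soft tissue, per-vertex offsets are interpolated from captured examples indexed by a pose descriptor. Everything below is invented for illustration (a 1D pose descriptor, two vertices, made-up offsets); the published model is a nonlinear regressor over pose history, not this lookup table.

```python
# Hedged sketch of "learning from examples instead of simulating":
# captured examples map a scalar pose descriptor to per-vertex
# soft-tissue offsets (toy values, two vertices only).
EXAMPLES = {
    0.0: [0.00, 0.00],  # rest pose: no soft-tissue displacement
    1.0: [0.05, 0.02],  # deep squat: belly/chest vertices bulge
}

def predict_offsets(pose, examples=EXAMPLES):
    """Linearly interpolate the two nearest captured examples."""
    keys = sorted(examples)
    lo = max([k for k in keys if k <= pose], default=keys[0])
    hi = min([k for k in keys if k >= pose], default=keys[-1])
    if lo == hi:
        return list(examples[lo])
    t = (pose - lo) / (hi - lo)
    return [(1 - t) * a + t * b for a, b in zip(examples[lo], examples[hi])]

# Halfway between rest and squat -> halfway offsets, with no physics solve.
mid = predict_offsets(0.5)
```

The payoff of this design is speed: evaluating a learned (or interpolated) model per frame is orders of magnitude cheaper than the finite-element simulation it replaces.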
**** 3. Data-driven cloth animation ****
Natural and realistic cloth deformations are another key aspect of achieving photorealism. In this project we have focused on cloth deformations in garments due to human body shape and pose. In other words, given a certain garment represented by a static 3D mesh, how does it deform when worn by an arbitrary human body? This is a highly important challenge in any virtual try-on application. We have investigated the sources of deformation that a garment undergoes at any time, mainly due to body shape, pose and motion, by looking at a large dataset of physically simulated examples. Our results, currently under submission at a top-tier conference, demonstrate that our novel method is capable of predicting garment deformations at over 250 frames per second, significantly faster than any state-of-the-art method.
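One way such speed is achievable is that, at runtime, the garment can be evaluated as a template mesh plus learned corrective displacements driven by body parameters, with no simulation in the loop. The sketch below shows this evaluation structure only; the two-vertex mesh, the displacement basis and the parameter meanings are all invented for the example, and the actual method is nonlinear and also conditioned on pose and motion.

```python
# Hedged sketch: garment = template + per-vertex corrective displacements
# that depend (here, linearly) on body-shape parameters. Toy 2D mesh.
TEMPLATE = [(0.0, 1.0), (0.1, 1.0)]  # 2 garment vertices

# One displacement basis vector per shape parameter, per vertex (made up).
SHAPE_BASIS = [
    [(0.02, 0.0), (0.02, 0.0)],      # param 0: wider body pushes cloth out
    [(0.0, -0.01), (0.0, -0.01)],    # param 1: taller body pulls cloth down
]

def drape(shape_params):
    """Evaluate template + sum_k beta_k * basis_k at every vertex."""
    out = []
    for v, vertex in enumerate(TEMPLATE):
        dx = sum(b * SHAPE_BASIS[k][v][0] for k, b in enumerate(shape_params))
        dy = sum(b * SHAPE_BASIS[k][v][1] for k, b in enumerate(shape_params))
        out.append((vertex[0] + dx, vertex[1] + dy))
    return out

mesh = drape([1.0, 2.0])  # e.g. one unit "wider", two units "taller"
```

Because each frame is just a few multiply-adds per vertex, this kind of evaluation scales to hundreds of frames per second even on large meshes, which is what makes interactive virtual try-on plausible.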
**** 4. Material reflectance estimation ****
Last but not least, material reflectance estimation is key to digitally reproducing photorealistic fabrics. In this project we have contributed to a research paper that estimates the optical properties of a material (e.g., fabrics) from color images. Our algorithm is the first to recover complex properties such as anisotropy, index of refraction and a second reflectance color, for materials that have tinted specular reflections or whose albedo changes at glancing angles.
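Reflectance estimation of this kind can be framed as inverse rendering: given observed intensities under known view/light configurations, search for the reflectance parameters that best reproduce them. The toy sketch below recovers a single specular exponent of a Phong-style lobe by grid search; the angles, the lobe model and the search strategy are stand-ins chosen for clarity, far simpler than the paper's model (which handles anisotropy, Fresnel effects and a second specular color) and its learning-based estimator.

```python
# Hedged sketch of inverse rendering: fit one BRDF parameter so that a toy
# forward model reproduces the observed pixel intensities.

def render(cos_angle, shininess):
    """Toy Phong-style specular lobe for a given half-angle cosine."""
    return cos_angle ** shininess

ANGLES = [1.0, 0.95, 0.9, 0.8]               # known capture configurations
observed = [render(a, 20) for a in ANGLES]   # synthetic "photographs"

def estimate_shininess(obs, candidates=range(1, 101)):
    """Pick the exponent minimizing squared error vs. the observations."""
    def error(n):
        return sum((render(a, n) - o) ** 2 for a, o in zip(ANGLES, obs))
    return min(candidates, key=error)

est = estimate_shininess(observed)
```

With noise-free synthetic observations the search recovers the generating exponent exactly; real captures require a richer BRDF model and a robust estimator, which is precisely what the published work addresses.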
Altogether, the contributions to the state of the art in the 4 areas investigated in this project represent important steps towards the overall goal of achieving photorealistic cloth animation.
The research carried out throughout this project has led to the publication of the following 5 papers in the areas of Computer Vision and Computer Graphics.
[1] VNect: Real-time 3D Human Pose Estimation with a Single RGB Camera
Dushyant Mehta, Srinath Sridhar, Oleksandr Sotnychenko, Helge Rhodin, Mohammad Shafiei, Hans-Peter Seidel, Weipeng Xu, Dan Casas and Christian Theobalt
ACM Transactions on Graphics (Proc. SIGGRAPH), 2017
https://doi.org/10.1145/3072959.3073596
[2] Real-time Hand Tracking under Occlusion from an Egocentric RGB-D Sensor
Franziska Mueller, Dushyant Mehta, Oleksandr Sotnychenko, Srinath Sridhar, Dan Casas and Christian Theobalt
IEEE/CVF International Conference on Computer Vision (ICCV), 2017
http://doi.ieeecomputersociety.org/10.1109/ICCV.2017.131
[3] GANerated Hands for Real-time 3D Hand Tracking from Monocular RGB
Franziska Mueller, Florian Bernard, Oleksandr Sotnychenko, Dushyant Mehta, Srinath Sridhar, Dan Casas and Christian Theobalt
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2018
[4] Learning Nonlinear Soft-Tissue Dynamics for Interactive Avatars
Dan Casas and Miguel A. Otaduy
Proceedings of the ACM on Computer Graphics and Interactive Techniques (ACM i3D), 2018
https://doi.org/10.1145/3203187
[5] BRDF Estimation of Complex Materials with Nested Learning
Raquel Vidaurre, Dan Casas, Elena Garces and Jorge Lopez-Moreno
IEEE Winter Conference on Applications of Computer Vision (WACV), 2019
Furthermore, we have another work under submission at a top conference in Computer Graphics.
More info: http://dancasas.github.io/projects/photocloth.