Designers draw extensively to externalize their ideas and communicate with others. However, drawings are currently not directly interpretable by computers. To test their ideas against physical reality, designers have to create 3D models suitable for simulation and 3D printing. However, the visceral and approximate nature of drawing clashes with the tediousness and rigidity of 3D modeling. As a result, designers only model finalized concepts, and have no feedback on feasibility during creative exploration.
Our ambition is to bring the power of 3D engineering tools to the creative phase of design by automatically estimating 3D models from drawings. However, this problem is ill-posed: a point in the drawing can lie anywhere in depth. Existing solutions are limited to simple shapes, or require user input to "explain" to the computer how to interpret the drawing. The originality of our approach is to exploit professional drawing techniques that designers have developed to communicate shape most efficiently. Each technique provides geometric constraints that help viewers understand drawings, and that we leverage for 3D reconstruction.
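To make the ambiguity concrete, the minimal sketch below (illustrative only, not project code, with a hypothetical pinhole camera) back-projects a pixel along its viewing ray: every depth along that ray reprojects to the same 2D point, which is why additional constraints, such as those provided by drawing techniques, are needed.

```python
# Minimal illustration of why single-view reconstruction is ill-posed:
# every 3D point along a pixel's viewing ray projects to the same 2D
# location, so depth cannot be recovered without extra constraints.
import numpy as np

# Hypothetical pinhole camera: focal length f, principal point (cx, cy).
f, cx, cy = 500.0, 320.0, 240.0
K = np.array([[f, 0, cx],
              [0, f, cy],
              [0, 0, 1.0]])

def project(p):
    """Project a 3D point (camera coordinates) to pixel coordinates."""
    q = K @ p
    return q[:2] / q[2]

# Direction of the viewing ray through pixel (u, v).
u, v = 400.0, 300.0
ray = np.linalg.inv(K) @ np.array([u, v, 1.0])

# Any depth t > 0 yields a 3D point that reprojects to the same pixel.
for t in (1.0, 2.0, 5.0):
    print(t, project(t * ray))   # all print (400, 300) up to rounding
```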
In addition to tackling the long-standing problem of single-image 3D reconstruction, our research will significantly tighten the loop between design and engineering for rapid prototyping.
The first 30 months of the project allowed us to make significant progress in our understanding of how designers draw, and to propose preliminary solutions to the challenge of reconstructing 3D shapes from design drawings.
To better understand design sketching, we have collected a dataset of more than 400 professional drawings [Gryaditskaya et al. 2019]. We manually labeled the drawing techniques used in each sketch, and we registered all sketches to reference 3D models. Analyzing this data revealed systematic strategies employed by designers to convey 3D shapes, which will inspire the development of novel algorithms for drawing interpretation. In addition, our annotated sketches and associated 3D models form a challenging benchmark to test existing methods.
We proposed several methods to recover 3D information from drawings. A first family of methods employs deep learning to predict what 3D shape is represented in a drawing. We applied this strategy in the context of architectural design, where we reconstruct 3D buildings by recognizing their constituent components (building mass, façade, windows) [Nishida et al. 2018]. We also presented an interactive system that allows users to create 3D objects by drawing from multiple viewpoints [Delanoy et al. 2018, 2019].
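As a rough illustration of this first family, the following PyTorch sketch maps a single line drawing to a coarse voxel occupancy grid with an encoder-decoder; the architecture and dimensions are our own assumptions for illustration, not the networks published in the papers above.

```python
# Illustrative encoder-decoder: a 2D CNN encodes a grayscale sketch into a
# latent vector, and a 3D transposed-convolution decoder predicts a 32^3
# voxel occupancy grid. Architecture details are assumptions, not the
# published models.
import torch
import torch.nn as nn

class SketchToVoxels(nn.Module):
    def __init__(self):
        super().__init__()
        # 2D encoder: 256x256 grayscale sketch -> 512-d latent vector.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),    # 128
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 64
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 32
            nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.ReLU(), # 16
            nn.Flatten(),
            nn.Linear(256 * 16 * 16, 512), nn.ReLU(),
        )
        # 3D decoder: latent vector -> 32^3 occupancy probabilities.
        self.fc = nn.Linear(512, 256 * 4 * 4 * 4)
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(256, 128, 4, stride=2, padding=1), nn.ReLU(), # 8
            nn.ConvTranspose3d(128, 64, 4, stride=2, padding=1), nn.ReLU(),  # 16
            nn.ConvTranspose3d(64, 1, 4, stride=2, padding=1),               # 32
        )

    def forward(self, sketch):                      # sketch: (B, 1, 256, 256)
        z = self.fc(self.encoder(sketch)).view(-1, 256, 4, 4, 4)
        return torch.sigmoid(self.decoder(z))       # (B, 1, 32, 32, 32)

model = SketchToVoxels()
voxels = model(torch.rand(1, 1, 256, 256))  # dummy sketch; trained with a BCE loss
```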
The second family of methods leverages geometric properties of the drawn lines to optimize the 3D reconstruction. In particular, we exploit properties of developable surfaces to reconstruct fashion items from their sketches [Fondevilla et al. 2017].
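The snippet below gives a much simplified instance of this second family; it uses an orthogonality constraint at a cuboid corner for illustration, not the developable-surface constraints of Fondevilla et al. 2017. The geometric property of the drawn lines is written as a residual, and the unknown depths along the viewing rays are recovered by minimizing it.

```python
# Toy geometric reconstruction: the three edges meeting at a drawn cuboid
# corner are assumed mutually orthogonal in 3D, which constrains the unknown
# depths of the 2D edge endpoints under orthographic projection.
import numpy as np
from scipy.optimize import minimize

# Ground-truth corner: three mutually orthogonal edges in a random orientation.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))   # random rotation -> orthonormal rows
edges3d = Q.T

# The "drawing": orthographic projection = drop the z (depth) coordinate.
edges2d = edges3d[:, :2]
true_depths = edges3d[:, 2]

def residual(depths):
    """Sum of squared pairwise dot products of the lifted edge vectors."""
    edges = np.column_stack([edges2d, depths])
    r = 0.0
    for i in range(3):
        for j in range(i + 1, 3):
            r += np.dot(edges[i], edges[j]) ** 2
    return r

res = minimize(residual, x0=np.full(3, 0.5))
print("recovered depths:", res.x)     # matches ground truth up to a global
print("ground truth    :", true_depths)  # sign flip (depth-reversal ambiguity)
```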
A long-term goal of our research is to evaluate the physical validity of a concept directly from a drawing. We obtained promising results towards this goal for the particular case of mechanical objects. We proposed an interactive system where users design the shape and motion of an articulated object, and our method automatically synthesizes a mechanism that animates the object while avoiding collisions [Nishida et al. 2019]. The geometry synthesized by our method is ready to be fabricated for rapid prototyping.
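At a toy scale, the idea of searching for a mechanism that animates within collision constraints can be sketched as follows; the crank-slider linkage, housing box, and rejection-sampling loop are hypothetical simplifications for illustration, not the synthesis algorithm of Nishida et al. 2019.

```python
# Toy mechanism search: sample crank-slider dimensions and keep a candidate
# only if its moving parts stay inside an available housing box over a full
# crank rotation (a containment test standing in for collision avoidance).
import numpy as np

housing = np.array([[-3.0, -1.5], [3.0, 1.5]])   # axis-aligned box: min, max corners

def crank_slider_points(crank_r, rod_l, angle):
    """2D positions of the crank pin and slider for a crank-slider linkage."""
    pin = np.array([crank_r * np.cos(angle), crank_r * np.sin(angle)])
    # Slider constrained to the x-axis: x = r*cos(a) + sqrt(l^2 - (r*sin(a))^2).
    slider_x = pin[0] + np.sqrt(rod_l**2 - pin[1]**2)
    return pin, np.array([slider_x, 0.0])

def fits_in_housing(crank_r, rod_l, samples=90):
    """Check that pin and slider stay inside the housing over a full rotation."""
    for a in np.linspace(0.0, 2 * np.pi, samples):
        for p in crank_slider_points(crank_r, rod_l, a):
            if np.any(p < housing[0]) or np.any(p > housing[1]):
                return False
    return True

rng = np.random.default_rng(1)
for _ in range(200):
    r, l = rng.uniform(0.5, 2.0), rng.uniform(1.0, 3.0)
    if l > r and fits_in_housing(r, l):
        print(f"feasible mechanism: crank radius {r:.2f}, rod length {l:.2f}")
        break
```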
We were among the first to apply deep learning to the problem of sketch interpretation, which allowed our methods to reconstruct 3D shapes from a single drawing. Our interactive system illustrates the potential of this approach to offer a new and effective workflow where designers can seamlessly sketch and navigate around a 3D shape. However, our solutions are currently only capable of interpreting simple drawings. The dataset we collected contains sketches that are much more complex, and we are now working on improving our methods to handle such real-world data. In parallel, we have started working on physical simulation for 3D shapes to offer a greater synergy between design sketching and engineering.
More info: https://ns.inria.fr/d3/.