
Fast Sketch Segmentation and Labeling With Deep Learning

Abstract

We present a simple and efficient method based on deep learning to automatically decompose sketched objects into semantically valid parts. We train a deep neural network to transfer existing segmentations and labelings from three-dimensional (3-D) models to freehand sketches without requiring numerous well-annotated sketches as training data. The network takes the binary image of a sketched object as input and produces a corresponding segmentation map with per-pixel labels as output. A subsequent post-processing step with multilabel graph cuts further refines the segmentation and labeling result. We validate our proposed method on two sketch datasets. Experiments show that our method outperforms the state-of-the-art method in segmentation and labeling accuracy and is significantly faster, enabling its integration into interactive drawing systems. We demonstrate the efficiency of our method in a sketch-based modeling application that automatically transforms input sketches into 3-D models by part assembly.
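
To make the pipeline concrete, the following PyTorch sketch shows a minimal fully convolutional network that takes a binary sketch image and outputs per-pixel part-label logits. The architecture, layer sizes, and the NUM_PARTS constant are illustrative assumptions, not the authors' actual network, and the multilabel graph-cut refinement step is omitted.

    # Illustrative sketch only: a minimal fully convolutional network that maps a
    # 1-channel binary sketch image to a per-pixel part-label map. Layer sizes and
    # NUM_PARTS are assumptions for demonstration, not the paper's architecture.
    import torch
    import torch.nn as nn

    NUM_PARTS = 4  # hypothetical number of semantic part labels

    class SketchSegNet(nn.Module):
        def __init__(self, num_labels: int = NUM_PARTS):
            super().__init__()
            # Encoder: downsample the binary sketch into feature maps.
            self.encoder = nn.Sequential(
                nn.Conv2d(1, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(inplace=True),
            )
            # Decoder: upsample back to input resolution and predict per-pixel logits.
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(inplace=True),
                nn.ConvTranspose2d(32, num_labels, kernel_size=4, stride=2, padding=1),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (batch, 1, H, W) binary sketch; returns (batch, num_labels, H, W) logits.
            return self.decoder(self.encoder(x))

    if __name__ == "__main__":
        net = SketchSegNet()
        sketch = (torch.rand(1, 1, 256, 256) > 0.95).float()  # fake binary sketch input
        logits = net(sketch)
        labels = logits.argmax(dim=1)  # per-pixel labels, to be refined by graph cuts
        print(labels.shape)  # torch.Size([1, 256, 256])

In the actual method, the coarse per-pixel predictions produced by such a network would then be refined with a multilabel graph-cut optimization to obtain cleaner part boundaries along the strokes.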
