Joint Gap Detection and Inpainting of Line Drawings
Kazuma Sasaki, Satoshi Iizuka, Edgar Simo-Serra, Hiroshi Ishikawa
CVPR 2017
Teaser figure: Input | [Darabi et al. 2012] | Ours
Abstract:
We propose a novel data-driven approach for automatically detecting and completing gaps in line drawings with a Convolutional Neural Network. Existing inpainting approaches for natural images generally require masks indicating the missing regions as input. Here, we show that line drawings have enough structure for a CNN to learn to detect and complete the gaps without any such input. Thus, our method can find the gaps in line drawings and complete them without user interaction. Furthermore, the completion realistically preserves the thickness and curvature of the line segments. All the necessary heuristics for such realistic line completion are learned naturally from a dataset of line drawings, where various patterns of line completion are generated on the fly as training pairs to improve the model's generalization. We evaluate our method qualitatively on a diverse set of challenging line drawings and also provide quantitative results with a user study, where it significantly outperforms the state of the art.
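The abstract's key training idea, synthesizing gap patterns on the fly from clean drawings so the network learns to detect and fill them without a mask, can be illustrated with a small sketch. The snippet below is a hypothetical illustration, not the paper's released code: the gap radius, gap count, toy network, and MSE loss are placeholder assumptions chosen only to show the (corrupted, clean) pair setup.

# Hypothetical sketch of on-the-fly training-pair generation: a clean line
# drawing is corrupted by erasing small discs centered on line pixels, and a
# fully convolutional network is trained to reconstruct the clean drawing
# from the corrupted one, with no mask input. All parameters are illustrative.
import numpy as np
import torch
import torch.nn as nn

def make_training_pair(clean, n_gaps=8, max_radius=4, rng=None):
    """clean: (H, W) float array in [0, 1]; 0 = ink, 1 = paper."""
    rng = rng or np.random.default_rng()
    corrupted = clean.copy()
    ys, xs = np.nonzero(clean < 0.5)            # candidate line pixels
    if len(ys) == 0:
        return corrupted, clean
    h, w = clean.shape
    yy, xx = np.mgrid[0:h, 0:w]
    for _ in range(n_gaps):
        i = rng.integers(len(ys))
        r = rng.integers(1, max_radius + 1)
        disc = (yy - ys[i]) ** 2 + (xx - xs[i]) ** 2 <= r ** 2
        corrupted[disc] = 1.0                   # erase ink to create a gap
    return corrupted, clean

# Toy fully convolutional model (a stand-in, not the paper's architecture).
model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
)
optim = torch.optim.Adam(model.parameters(), lr=1e-3)

# One training step on a synthetic pair: a single horizontal stroke.
clean = np.ones((64, 64), dtype=np.float32)
clean[32, 8:56] = 0.0
corrupted, target = make_training_pair(clean)

x = torch.from_numpy(corrupted)[None, None]     # shape (1, 1, H, W)
y = torch.from_numpy(target)[None, None]
loss = nn.functional.mse_loss(model(x), y)      # reconstruct the clean drawing
optim.zero_grad()
loss.backward()
optim.step()

In practice the corruption would be drawn from many gap shapes and sizes each iteration, which is what lets the trained network both locate gaps and complete them with plausible thickness and curvature.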
Paper | Code (GitHub) | BibTeX
Comparisons:
Comparison with PatchMatch [Barnes et al. 2009] and Image Melding [Darabi et al. 2012]. Note that the previous methods require manually specifying the missing region, shown as the magenta mask, while our approach automatically both detects the missing line segments and completes them.