DeepRemaster: Temporal Source-Reference Attention Networks for Comprehensive Video Enhancement
— Additional Results —

Satoshi Iizuka, Edgar Simo-Serra

SIGGRAPH Asia 2019

Overview

We present additional results for remastering, colorization-only, and restoration-only. Results are shown on a test subset of the YouTube-8M dataset. Please refer to the paper for more details.

The approaches we compare against are described in each section below.

Remastering Results:

Comparison with combinations of existing restoration and colorization approaches, i.e., [Zhang et al. 2017b] and [Yu et al. 2018] for restoration, and [Zhang et al. 2017a] and [Vondrick et al. 2018] for colorization. We randomly sample 300-frame subsets from videos in the YouTube-8M dataset and apply both example-based and algorithm-based deterioration effects. As reference color images, we provide every 60th frame, starting from the first frame. In the following results, the videos are, from left to right and top to bottom: the input video, [Zhang et al. 2017b] and [Zhang et al. 2017a], [Yu et al. 2018] and [Zhang et al. 2017a], [Zhang et al. 2017b] and [Vondrick et al. 2018], [Yu et al. 2018] and [Vondrick et al. 2018], and ours.
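For concreteness, the reference sampling described above amounts to keeping every 60th frame of the original color clip. A minimal Python sketch (function and variable names are illustrative, not from the released code):

    # Reference color frames: every 60th frame, starting from the first.
    # Illustrative sketch only; not the released DeepRemaster code.
    def select_reference_indices(num_frames, interval=60):
        """Return the indices of frames provided as color references."""
        return list(range(0, num_frames, interval))

    # A 300-frame clip yields references at frames 0, 60, 120, 180, and 240.
    print(select_reference_indices(300))  # [0, 60, 120, 180, 240]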

Input Video
[Zhang et al. 2017b] and [Zhang et al. 2017a]
[Yu et al. 2018] and [Zhang et al. 2017a]
[Zhang et al. 2017b] and [Vondrick et al. 2018]
[Yu et al. 2018] and [Vondrick et al. 2018]
Ours

Colorization-Only Results:

Comparison with the existing colorization approaches of [Zhang et al. 2017a] and [Vondrick et al. 2018] on videos from the YouTube-8M dataset. As reference color images, we provide every 60th frame, starting from the first frame. In the following results, the videos are, from left to right: the input video, [Zhang et al. 2017a], [Vondrick et al. 2018], and ours.
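In this setting, the input is the video with its color removed. A rough sketch of such input preparation, assuming a standard BGR-to-grayscale conversion with OpenCV (the exact color-space handling in the paper may differ):

    import cv2

    def to_colorization_input(frame_bgr):
        """Keep only a single luminance channel per frame.
        Assumed preprocessing, shown for illustration only."""
        return cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)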

Input Video
[Zhang et al. 2017a]
[Vondrick et al. 2018]
Ours

Restoration-Only Results:

Comparison with the existing restoration approaches of [Zhang et al. 2017b] and [Yu et al. 2018] on synthetically generated, highly deteriorated videos from the YouTube-8M dataset. In the following results, the videos are, from left to right: the input video, [Zhang et al. 2017b], [Yu et al. 2018], and ours.
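The inputs here are produced by synthetically degrading clean footage. As a rough illustration of algorithm-based deterioration, the sketch below combines contrast reduction, blur, and additive noise with NumPy and OpenCV; the specific effects and parameters used to generate these videos are assumptions, not the paper's exact pipeline:

    import cv2
    import numpy as np

    def deteriorate_frame(frame, rng):
        """Apply simple synthetic degradations to one uint8 BGR frame.
        Effects and parameters are illustrative assumptions only."""
        img = frame.astype(np.float32) / 255.0
        # Reduce contrast by blending toward mid-gray.
        alpha = rng.uniform(0.5, 0.9)
        img = alpha * img + (1.0 - alpha) * 0.5
        # Blur to simulate loss of sharpness.
        img = cv2.GaussianBlur(img, (5, 5), 1.0)
        # Add film-grain-like Gaussian noise.
        img = img + rng.normal(0.0, 0.03, img.shape)
        return np.clip(img * 255.0, 0, 255).astype(np.uint8)

    rng = np.random.default_rng(0)
    # dirty = deteriorate_frame(clean_frame, rng)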

Input Video
[Zhang et al. 2017b]
[Yu et al. 2018]
Ours

This work was partially supported by JST ACT-I (Iizuka, Grant Number: JPMJPR16U3), JST PRESTO (Simo-Serra, Grant Number: JPMJPR1756), and JST CREST (Iizuka and Simo-Serra, Grant Number: JPMJCR14D1).