
Learning Representations for Automatic Colorization

Gustav Larsson1, Michael Maire2, and Gregory Shakhnarovich2

1 University of Chicago   2 Toyota Technological Institute at Chicago
[email protected], {mmaire,greg}@ttic.edu

Abstract. We develop a fully automatic image colorization system. Our approach leverages recent advances in deep networks, exploiting both low-level and semantic representations during colorization. As many scene elements naturally appear according to multimodal color distributions, we train our model to predict per-pixel color histograms. This intermediate output can be used to automatically generate a color image, or further manipulated prior to image formation; our experiments consider both scenarios. On both fully and partially automatic colorization tasks, our system significantly outperforms all existing methods.

Keywords: Automatic colorization, convolutional neural networks, hypercolumns, deep learning.

Fig. 1: Our automatic colorization of grayscale input; more examples in Figures 4 and 5.

1 Introduction

Colorization of grayscale images is a simple task for the human imagination. A human need only recall that sky is usually blue and grass is green; for many objects, the mind is free to hallucinate any of several plausible colors. The high-level comprehension required for this process is precisely why the development of fully automatic colorization algorithms remains a challenge. Colorization is therefore intriguing beyond its immediate practical utility in many graphics applications. Automatic colorization also serves as a proxy measure for visual understanding. Our work makes this connection explicit; we unify a colorization pipeline with the type of deep neural network architectures driving recent advances in image classification and object detection.

Both our technical approach and our focus on fully automatic results depart from much past work on colorization. Given its importance across multiple applications (e.g. historical photographs and videos [1], artist assistance [2,3]), much research strives to make colorization cheaper and less time-consuming [4-12]. However, most methods still require some level of user input [5-7,9,10,13].


Fig. 2: System overview. We process a grayscale image through a deep convolutional architecture (VGG) [14] and take spatially localized multilayer slices (hypercolumns) [15-17] as per-pixel descriptors. We train our system end-to-end for the task of predicting hue and chroma distributions for each pixel p given its hypercolumn descriptor. These predicted distributions determine color assignment at test time.

Our work joins the relatively few recent efforts on developing fully automatic colorization algorithms [8,11,12]. Some [11,12] show promising results, especially on photos of certain typical scene types (e.g. beaches and landscapes). However, their success is limited on complex images with foreground objects.

At a technical level, existing automatic colorization methods often employ a strategy of finding suitable reference images and transferring their color onto a target grayscale image [8,11]. This works well if sufficiently similar reference images can be found, which becomes increasingly difficult for unique grayscale input images. Such a strategy also requires availability and search of a large repository of reference images at test time. In contrast, our approach is entirely free from database search and fast at test time. Section 2 provides a more complete view of prior methods, highlighting differences with our proposed algorithm.

Our approach to automatic colorization converts two intuitive observations into design principles. First, semantic information matters. In order to colorize arbitrary images, a system must interpret the semantic composition of the visual scene (what is in the image: faces, animals, cars, plants, ...) as well as localize scene components (where things are). Recent dramatic advances in image understanding rely on deep convolutional neural networks (CNNs) and offer tools to incorporate semantic parsing and localization into a colorization system.

Our second observation is that while some scene elements can be assigned a single color with high confidence, other elements (e.g. clothes or cars) may draw from many suitable colors. Thus, we design our system to predict a color histogram, instead of a single color, at every image location. Figure 2 sketches the CNN architecture we use to connect semantics with color distributions by exploiting features across multiple abstraction levels. Section 3 provides details.

Section 4 reports extensive experimental validation of our algorithm against competing methods [4,11] in two settings: fully (grayscale input only) and partially (grayscale input and reference global color histogram provided) automatic colorization.


Across every benchmark metric and every dataset [18-20], our method achieves the best performance. Lacking an established standard colorization benchmark, exhaustive comparison is necessary. To ease this burden for future research, we propose a new colorization benchmark on the ImageNet dataset [20], which may simplify quantitative comparisons and drive further progress. Our system achieves such a drastic performance leap that its fully automatic colorization output is superior to that of prior methods relying on additional information such as reference images or ground-truth color histograms.

Section 5 summarizes our contributions: (1) a novel technical approach to colorization, bringing semantic knowledge to bear using CNNs, and modeling color distributions; (2) state-of-the-art performance across fully and partially automatic colorization tasks; (3) a new ImageNet colorization benchmark.

2 Related work

Previous colorization methods broadly fall into three categories: scribble-based [3,5,21-23], transfer-based [4,6-10,24], and automatic direct prediction [11,12].

Scribble-based methods, introduced by Levin et al. [5], require manually specifying desired colors of certain regions. These scribble colors are propagated under the assumption that adjacent pixels with similar luminance should have similar color, with the optimization relying on Normalized Cuts [25]. Users can interactively refine results by providing additional scribbles. Further advances within this family of methods extend the notion of similarity to texture [3,23], and exploit edges to reduce color bleeding [21].

Transfer-based methods rely on the availability of related reference image(s), from which color is transferred to the target grayscale image. Mapping between source and target is established automatically, using correspondences between local descriptors [4,7,8], or in combination with manual intervention [6,9]. Excepting [8], reference image selection is at least partially manual.

In contrast to these method families, our goal is fully automatic colorization. We are aware of two recent efforts in this direction. Deshpande et al. [11] colorize an entire image by solving a linear system. This can be seen as an extension of patch-matching techniques [4], adding interaction terms for spatial consistency. Regression trees address the high dimensionality of the system. Inference requires an iterative algorithm. Most of the experiments are focused on a dataset (SUN-6) limited to images of a few scene classes, and best results are obtained when the scene class is known at test time. They also examine another partially automatic task, in which a desired global color histogram is provided.

The work of Cheng et al. [12] is perhaps most related to ours. It combines three levels of features with increasing receptive field: the raw image patch, DAISY features [26], and semantic features [27]. These features are concatenated and fed into a three-layer fully connected neural network trained with an L2 loss. Only this last component is optimized; the feature representations are fixed.


Unlike [11] and [12], our system does not rely on hand-crafted features, is trained end-to-end, and treats color prediction as a histogram estimation task rather than as regression. Experiments in Section 4 justify these principles by demonstrating performance superior to the best reported by [11,12] across all regimes. But first, we elaborate the specifics of our architecture and algorithm.

3 Method

We frame the colorization problem as learning a function f : X → Y. Given a grayscale image patch x ∈ X = [0, 1]^{S×S}, f predicts the color y ∈ Y of its center pixel. The patch size S × S is the receptive field of the colorizer. The output space Y depends on the choice of color parameterization. We implement f according to the neural network architecture diagrammed in Figure 2.

Motivating this strategy is the success of similar architectures for semantic segmentation [16,17,27-29] and boundary detection [15,30-33]. Together with colorization, these tasks can all be viewed as image-to-image prediction problems, in which a value is predicted for each input pixel. Leading methods commonly adapt deep convolutional neural networks pretrained for image classification [14,20]. Such classification networks can be converted to fully convolutional networks that produce output of the same spatial size as the input, e.g. using the shift-and-stitch method [27] or the more efficient à trous algorithm [29]. Subsequent training with a task-specific loss fine-tunes the converted network.

Skip-layer connections, which directly link low- and mid-level features to prediction layers, are an architectural addition beneficial for many image-to-image problems. Some methods implement skip connections directly through concatenation layers [27,29], while others equivalently extract per-pixel descriptors by reading localized slices of multiple layers [15-17]. We use this latter strategy and adopt the recently coined hypercolumn terminology [17] for such slices.

Though we build upon these ideas, our technical approach innovates on two fronts. First, we integrate domain knowledge for colorization, experimenting with output spaces and loss functions. We design the network output to serve as an intermediate representation, appropriate for direct or biased sampling. We introduce an energy minimization procedure for optionally biasing sampling towards a reference image. Second, we develop a novel and efficient computational strategy for network training that is widely applicable to hypercolumn architectures.

3.1 Color spaces

We generate training data by converting color images to grayscale according to L = (R + G + B)/3. This is only one of many desaturation options and is chosen primarily to facilitate comparison with Deshpande et al. [11]. For the representation of color predictions, using RGB is overdetermined, as lightness L is already known. We instead consider output color spaces with L (or a closely related quantity) conveniently appearing as a separate pass-through channel:


Hue/chroma. Hue-based spaces, such as HSL, can be thought of as a color cylinder, with angular coordinate H (hue), radial distance S (saturation), and height L (lightness). The color is constant with respect to S and H at the bottom (black) and top (white) of the cylinder, making S and H arbitrary (and thus unstable) close to those regions. HSV describes a similar color cylinder which is only unstable at the bottom. However, L is no longer one of the channels. We wish to avoid both instabilities and still retain L as a channel. The solution is a color bicone, where chroma (C) takes the place of saturation. This color space only has instability in H around C = 0; this is inherent to all hue-based color spaces and we show how to overcome this when designing the loss function. Conversion to HSV is given by V = L + C/2, S = C/V.

Lab and αβ. Lab (or L*a*b) is designed to be perceptually linear. The color vector (a, b) defines a Euclidean space where the distance to the origin determines chroma. Deshpande et al. [11] use a color space somewhat similar to Lab, denoted ab. To differentiate, we call their color space αβ.
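To make the hue/chroma representation concrete, the sketch below (our illustration, not the authors' released code) computes the desaturated input L = (R + G + B)/3 together with hue and chroma from an RGB image. The bicone lightness is written as (max + min)/2, which is an assumption consistent with the stated relations V = L + C/2 and S = C/V.

```python
import numpy as np

def desaturate(rgb):
    """Grayscale used as network input: L = (R + G + B) / 3."""
    return rgb.mean(axis=-1)

def bicone_lightness(rgb):
    """Lightness of the hue/chroma bicone (assumed (max + min) / 2);
    with chroma C it satisfies V = L + C/2 and S = C/V."""
    return 0.5 * (rgb.max(axis=-1) + rgb.min(axis=-1))

def rgb_to_hue_chroma(rgb):
    """Hue in [0, 1) and chroma in [0, 1] for an RGB image in [0, 1].

    Hue is arbitrary (unstable) wherever chroma is zero, which is exactly
    the instability the loss in Section 3.2 has to handle.
    """
    v = rgb.max(axis=-1)                      # HSV value (max channel)
    c = v - rgb.min(axis=-1)                  # chroma
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    safe_c = np.where(c > 0, c, 1.0)          # avoid division by zero
    hue = np.where(v == r, ((g - b) / safe_c) % 6.0,
          np.where(v == g, (b - r) / safe_c + 2.0,
                           (r - g) / safe_c + 4.0))
    return np.where(c > 0, hue / 6.0, 0.0), c
```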

3.2 Loss

For any output color representation, we require a loss function for measuring prediction errors. A first consideration, also used in [12], is L2 regression in Lab:

L_reg(x, y) = ‖f(x) − y‖²    (1)

where Y = ℝ² describes the (a, b) vector space. However, regression targets do not handle multimodal color distributions well. To address this, we instead predict distributions over a set of color bins, a technique also used in [7]:

L_hist(x, y) = D_KL(y ‖ f(x))    (2)

where Y = [0, 1]^K describes a histogram over K bins, and D_KL is the KL-divergence. The ground-truth histogram y is set as the empirical distribution in a rectangular region of size R around the center pixel. Somewhat surprisingly, our experiments see no benefit to predicting smoothed histograms, so we simply set R = 1. This makes y a one-hot vector and Equation (2) the log loss. For histogram predictions, the last layer of neural network f is always a softmax.

There are several choices of how to bin color space. We bin the Lab axes by evenly spaced Gaussian quantiles (μ = 0, σ = 25). They can be encoded separately for a and b (as marginal distributions), in which case our loss becomes the sum of two separate terms defined by Equation (2). They can also be encoded as a joint distribution over a and b, in which case we let the quantiles form a 2D grid of bins. In our experiments, we set K = 32 for marginal distributions and K = 16 × 16 for joint. We determined these numbers, along with σ, to offer a good compromise of output fidelity and output complexity.
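For illustration, here is a minimal sketch (ours; scipy is assumed available) of the evenly spaced Gaussian-quantile binning described above, with μ = 0 and σ = 25; marginal one-hot targets follow by assigning each value to its nearest bin center.

```python
import numpy as np
from scipy.stats import norm

def gaussian_quantile_bins(k=32, sigma=25.0):
    """Bin centers at evenly spaced quantiles of N(0, sigma^2)."""
    quantiles = (np.arange(k) + 0.5) / k      # midpoints of k equal-mass slices
    return norm.ppf(quantiles, loc=0.0, scale=sigma)

def encode_marginal(values, centers):
    """One-hot encode scalar values (e.g. the a or b channel) by nearest bin."""
    idx = np.abs(np.asarray(values)[..., None] - centers).argmin(axis=-1)
    onehot = np.zeros(idx.shape + (len(centers),))
    np.put_along_axis(onehot, idx[..., None], 1.0, axis=-1)
    return onehot

# A joint K x K target can be formed as the outer product of the two
# marginal one-hot codes (16 x 16 bins when K = 16 per axis).
centers = gaussian_quantile_bins(k=32)
print(encode_marginal([-30.0, 0.0, 30.0], centers).shape)   # (3, 32)
```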

For hue/chroma, we only consider marginal distributions and bin the axes uniformly in [0, 1]. Since hue becomes unstable as chroma approaches zero, we add a sample weight to the hue based on the chroma:

L_hue/chroma(x, y) = D_KL(y_C ‖ f_C(x)) + λ_H y_C D_KL(y_H ‖ f_H(x))    (3)

where Y = [0, 1]^{2×K}, and y_C ∈ [0, 1] is the chroma value at the sample pixel. We set λ_H = 5, which is roughly the inverse expectation of y_C, so that hue and chroma are equally weighted.
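The following sketch (ours, not the actual training code, which runs inside Caffe) evaluates the per-pixel loss of Equation (3) for one-hot targets, where each KL term reduces to a log loss and the hue term is weighted by λ_H times the ground-truth chroma.

```python
import numpy as np

def kl_one_hot(target_bin, pred):
    """D_KL(y || f(x)) when y is a one-hot histogram, i.e. the log loss."""
    return -np.log(pred[target_bin] + 1e-12)

def hue_chroma_loss(pred_hue, pred_chroma, true_hue_bin, true_chroma_bin,
                    true_chroma_value, lambda_h=5.0):
    """Per-pixel loss of Equation (3): chroma term plus chroma-weighted hue term."""
    loss_c = kl_one_hot(true_chroma_bin, pred_chroma)
    loss_h = kl_one_hot(true_hue_bin, pred_hue)
    return loss_c + lambda_h * true_chroma_value * loss_h

# Example with K = 32 uniform bins per channel (softmax outputs sum to one).
k = 32
pred_hue = np.full(k, 1.0 / k)
pred_chroma = np.full(k, 1.0 / k)
print(hue_chroma_loss(pred_hue, pred_chroma, true_hue_bin=3,
                      true_chroma_bin=10, true_chroma_value=0.4))
```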

3.3 Inference

Given network f trained according to a loss function in the previous section, we evaluate it at every pixel n in a test image: ŷ_n = f(x_n). For the L2 loss, all that remains is to combine each ŷ_n with the respective input lightness and convert to RGB. However, when the predictions are histograms, we must infer a final color. We consider several options (a small decoding sketch follows this list):

Sample: Draw a sample from the histogram. If done per pixel, this may create high-frequency color changes in areas of high-entropy histograms.

Mode: Take arg max_k ŷ_{n,k} as the color. This can create jarring transitions between colors, and is prone to vote splitting when two or more bins have proximal centroids.

Median: Compute the cumulative sum of ŷ_n and use linear interpolation to find the value at the middle bin. Undefined for circular histograms, such as hue.

Expectation: Sum over the color bin centroids weighted by the histogram.
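A minimal sketch (ours) of the three deterministic decoding options for a non-circular histogram, given bin centers and predicted probabilities; hue instead needs the circular expectation of Equation (4) below.

```python
import numpy as np

def decode_mode(hist, centers):
    """Mode: color of the highest-probability bin."""
    return centers[np.argmax(hist)]

def decode_expectation(hist, centers):
    """Expectation: bin centers weighted by the histogram."""
    return float(np.dot(hist, centers))

def decode_median(hist, centers):
    """Median via the cumulative sum, linearly interpolated between centers.

    Only meaningful for non-circular quantities such as chroma.
    """
    cdf = np.cumsum(hist)
    return float(np.interp(0.5, cdf, centers))
```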

For Lab output, we achieve the best qualitative and quantitative results using expectations. For hue/chroma, the best results are achieved by taking the median of the chroma. Many objects can appear both with and without chroma, which means C = 0 is a particularly common bin. This mode draws the expectation closer to zero, producing less saturated images. As for hue, since it is circular, we first compute the complex expectation:

z = E_{H∼f_h(x)}[H] ≜ (1/K) Σ_k [f_h(x)]_k e^{iθ_k},    θ_k = 2π (k + 0.5)/K    (4)

We then set hue to the argument of z remapped to lie in [0, 1).

Fig. 3: Artifacts without chromatic fading (left).

In cases where the estimate of the chroma is high and z is close to zero, the instability of the hue can create artifacts (Figure 3). A simple, yet effective, fix is chromatic fading: downweight the chroma if the absolute value of z is too small. We thus re-define the predicted chroma as:

C_faded = min(η⁻¹ |z|, 1) · C    (5)

In our experiments, we set η = 0.03 (obtained via cross-validation).
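Putting Equations (4) and (5) together, the sketch below (our reading, not the released code; the min in the fade is inferred from the stated goal of downweighting chroma when |z| is small) decodes hue from the argument of the complex expectation and applies chromatic fading with η = 0.03.

```python
import numpy as np

def decode_hue(hue_hist):
    """Equation (4): complex expectation; returns hue in [0, 1) and |z|."""
    k = len(hue_hist)
    theta = 2.0 * np.pi * (np.arange(k) + 0.5) / k
    z = np.dot(hue_hist, np.exp(1j * theta)) / k
    return (np.angle(z) / (2.0 * np.pi)) % 1.0, np.abs(z)

def chromatic_fading(chroma, z_magnitude, eta=0.03):
    """Equation (5) as we read it: shrink chroma when |z| falls below eta."""
    return min(z_magnitude / eta, 1.0) * chroma
```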


3.4 Histogram transfer from ground-truth

So far, we have only considered the fully automatic color inference task. Deshpande et al. [11] test a separate task where the ground-truth histogram in the two non-lightness color channels of the original color image is made available.¹

In order to compare, we propose two histogram transfer methods. We refer to the predicted image as the source and the ground-truth image as the target.

Lightness-normalized quantile matching. Take the RGB representations of both the source and the target and divide each by its respective lightness L. Compute marginal histograms over the three resulting color channels. Alter each source histogram to fit the corresponding target histogram by quantile matching. Multiply the resulting image by L. This quantile matching is surprisingly effective and beats the cluster correspondence method proposed in [11] (see Table 3). However, it does not exploit our richer predictions of color distributions.
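A sketch of this transfer as we read it (hypothetical helper names; not the authors' implementation): lightness-normalize both images, quantile-match each of the three channels, then restore the source lightness.

```python
import numpy as np

def quantile_match(source, target):
    """Monotonically remap source values so their quantiles match the target's."""
    src_sorted = np.sort(source.ravel())
    tgt_sorted = np.sort(target.ravel())
    quantiles = np.linspace(0.0, 1.0, len(src_sorted))
    ranks = np.searchsorted(src_sorted, source.ravel()) / len(src_sorted)
    return np.interp(ranks, quantiles, tgt_sorted).reshape(source.shape)

def lightness_normalized_transfer(source_rgb, target_rgb, eps=1e-6):
    """Quantile-matching transfer of Section 3.4, sketched with L = (R+G+B)/3."""
    src_l = source_rgb.mean(axis=-1, keepdims=True)
    tgt_l = target_rgb.mean(axis=-1, keepdims=True)
    src_norm = source_rgb / (src_l + eps)
    tgt_norm = target_rgb / (tgt_l + eps)
    matched = np.stack([quantile_match(src_norm[..., c], tgt_norm[..., c])
                        for c in range(3)], axis=-1)
    return np.clip(matched * src_l, 0.0, 1.0)
```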

Energy minimization. We phrase histogram matching as minimizing energy:

E = (1/N) Σ_n D_KL(ŷ_n ‖ y_n) + λ D_χ²(⟨ŷ⟩, t)    (6)

where N is the number of pixels and y, ŷ ∈ [0, 1]^{N×K} are the predicted and posterior distributions, respectively. The target histogram is denoted by t ∈ [0, 1]^K. The first term contains unary potentials that anchor the posteriors to the predictions. The second term is a symmetric χ² distance that promotes proximity between source and target histograms. The weight λ defines the relative importance of histogram matching. We estimate the source histogram as ⟨ŷ⟩ = (1/N) Σ_n ŷ_n. We parameterize the posterior for all pixels n as ŷ_n = softmax(log y_n + b), where the vector b ∈ ℝ^K can be seen as a global bias for each bin. It is also possible to solve for the posteriors directly; this does not perform better quantitatively and is more prone to introducing artifacts. We solve for b using gradient descent on E and use the resulting posteriors in place of the predictions. In the case of marginal histograms, the optimization is run twice, once for each color channel.
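As an illustration of this procedure, the sketch below (ours) optimizes the global bias b of Equation (6) with a generic optimizer instead of hand-written gradient descent; the weight lam is a placeholder value, since λ is not specified here.

```python
import numpy as np
from scipy.optimize import minimize

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def transfer_energy(b, log_pred, target_hist, lam, eps=1e-12):
    """Energy of Equation (6) as a function of the per-bin bias b.

    log_pred: (N, K) log of predicted per-pixel histograms y_n.
    target_hist: (K,) ground-truth global histogram t.
    """
    pred = np.exp(log_pred)
    post = softmax(log_pred + b)                      # posteriors y_hat_n
    kl = np.sum(post * (np.log(post + eps) - np.log(pred + eps)), axis=1).mean()
    source_hist = post.mean(axis=0)                   # <y_hat>
    chi2 = 0.5 * np.sum((source_hist - target_hist) ** 2
                        / (source_hist + target_hist + eps))
    return kl + lam * chi2

def match_histogram(pred_hists, target_hist, lam=1.0):
    """Solve for b and return the posteriors used in place of the predictions."""
    log_pred = np.log(pred_hists + 1e-12)
    k = pred_hists.shape[1]
    res = minimize(transfer_energy, np.zeros(k),
                   args=(log_pred, target_hist, lam), method="L-BFGS-B")
    return softmax(log_pred + res.x)
```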

3.5 Neural network architecture and training

Our base network is a fully convolutional version of VGG-16 [14] with two changes: (1) the classification layer (fc8) is discarded, and (2) the first filter layer (conv1_1) operates on a single intensity channel instead of mean-subtracted RGB. We extract a hypercolumn descriptor for a pixel by concatenating the features at its spatial location in all layers, from data to conv7 (fc7), resulting in a 12,417-channel descriptor. We feed this hypercolumn into a fully connected layer with 1024 channels (h_fc1 in Figure 2), to which we connect output predictors.

¹ Note that if the histogram of the L channel were available, it would be possible to match lightness to lightness exactly and thus greatly narrow down color placement.


Processing each pixel separately in such a manner would be extremely costly. We can instead run an entire image through a single forward pass of VGG-16 and approximate the hypercolumns using bilinear interpolation. Even with such sharing, extracting hypercolumns densely over the entire image requires upsampling each layer to the same spatial size. For a 256 × 256 input image, the hypercolumns alone require significant memory (1.7 GB).

To fit image batches in memory during training, we instead extract hypercolumns at only a sparse set of locations, implementing a custom Caffe [34] layer to directly compute them.² Extracting batches of only 128 hypercolumn descriptors per input image, sampled at random locations, provides sufficient training signal. In the backward pass of stochastic gradient descent, an interpolated hypercolumn propagates its gradients to the four closest spatial cells in each layer. We ensure atomicity of gradient updates using locks, without incurring any performance penalty. This strategy drops training memory usage for hypercolumns to only 13 MB per image.
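The sketch below (ours, in numpy rather than a Caffe layer) shows the core of this idea: bilinearly interpolating each layer's feature map at a sampled image location and concatenating the results into a hypercolumn.

```python
import numpy as np

def bilinear_sample(feature_map, y, x):
    """Bilinearly interpolate a (C, H, W) feature map at continuous (y, x)."""
    c, h, w = feature_map.shape
    y, x = np.clip(y, 0, h - 1), np.clip(x, 0, w - 1)
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
    wy, wx = y - y0, x - x0
    return ((1 - wy) * (1 - wx) * feature_map[:, y0, x0] +
            (1 - wy) * wx       * feature_map[:, y0, x1] +
            wy       * (1 - wx) * feature_map[:, y1, x0] +
            wy       * wx       * feature_map[:, y1, x1])

def hypercolumn(feature_maps, py, px, image_size):
    """Concatenate sampled features from all layers at image pixel (py, px).

    feature_maps: list of (C_l, H_l, W_l) arrays at different resolutions.
    """
    cols = []
    for fm in feature_maps:
        _, h, w = fm.shape
        # map the pixel location into this layer's (coarser) coordinate frame
        cols.append(bilinear_sample(fm, py * (h - 1) / (image_size[0] - 1),
                                        px * (w - 1) / (image_size[1] - 1)))
    return np.concatenate(cols)
```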

We initialize with a version of VGG-16 pretrained on ImageNet, adapting it to grayscale by averaging over color channels in the first layer and rescaling appropriately. Prior to training for colorization, we further fine-tune the network for one epoch on the ImageNet classification task with grayscale input. As the original VGG-16 was trained without batch normalization [35], the scale of responses in internal layers can vary dramatically, presenting a problem for learning atop their hypercolumn concatenation. Liu et al. [36] compensate for such variability by applying layer-wise L2 normalization. We use the alternative of balancing hypercolumns so that each layer has roughly unit second moment (E[X²] ≈ 1); additional details are provided in the Appendix (Section A.1).

4 Experiments

Starting from the pretrained VGG-16-Gray model described in the previous section, we attach h_fc1 and output prediction layers with Xavier initialization [37], and fine-tune the entire system for colorization. We consider multiple prediction layer variants: Lab output with L2 loss, and both Lab and hue/chroma marginal or joint histogram output with losses according to Equations (2) and (3). We train each system variant end-to-end for one epoch on the 1.2 million images of the ImageNet training set, each resized to at most 256 pixels in the smaller dimension. A single epoch takes approximately 17 hours on an NVIDIA Titan X GPU. At test time, colorizing a single 512 × 512 pixel image takes 0.5 seconds.

We set up two disjoint subsets of the ImageNet validation data for our own use: 1000 validation images (cval1k) and 10000 test images (ctest10k). Each set has a balanced representation of ImageNet categories, and excludes any images encoded as grayscale, but may include images that are naturally grayscale (e.g. closeups of nuts and bolts), where an algorithm should know not to add color. Category labels are discarded; only images are available at test time. We propose ctest10k as a standard benchmark with the following metrics:

² We will make all code and trained models public soon.


Fig. 4: Fully automatic colorization results on ImageNet/ctest10k. Our system reproduces known object color properties (e.g. faces, sky, grass, fruit, wood), and coherently picks colors for objects without such properties (e.g. clothing).

Model \ Metric          RMSE   PSNR
No colorization         0.343  22.98
Lab, L2                 0.318  24.25
Lab, K = 32             0.321  24.33
Lab, K = 16 × 16        0.328  24.30
Hue/chroma, K = 32      0.342  23.77
  + chromatic fading    0.299  24.45

Table 1: ImageNet/cval1k. Validation performance of system variants. Hue/chroma is best, but only with chromatic fading.

Model \ Metric    RMSE   PSNR
data..fc7         0.299  24.45
data..conv5_3     0.306  24.13
conv4_1..fc7      0.302  24.45
conv5_1..fc7      0.307  24.38
fc6..fc7          0.323  24.22
fc7               0.324  24.19

Table 2: ImageNet/cval1k. Ablation study of hypercolumn components.

RMSE: root mean square error in αβ, averaged over all pixels [11].

PSNR: peak signal-to-noise ratio in RGB, calculated per image [12]. We use the arithmetic mean of PSNR over images, instead of the geometric mean as in Cheng et al. [12]; the geometric mean is overly sensitive to outliers.


Fig. 5: Additional results. Top: Our automatic colorizations of these ImageNet examples are difficult to distinguish from real color images. Bottom: B&W photographs.

By virtue of comparing to ground-truth color images, quantitative colorization metrics can penalize reasonable, but incorrect, color guesses for many objects (e.g. red car instead of blue car) more than jarring artifacts. This makes qualitative results for colorization as important as quantitative; we report both.

Figures 1, 4, and 5 show example test results of our best system variant, selected according to performance on the validation set and trained for a total of 10 epochs.


Fig. 6: Comparison with leading methods on SUN-6. GT Scene: test image scene class is available. GT Hist: test image color histogram is available. We obtain colorizations with visual quality better than those from prior work, even though we do not exploit reference images or known scene class. Using our energy minimization transfer method (Section 3.4) when GT Hist is available further improves results. In either mode, our method appears less dependent on spatial priors: note the splitting of the sky in the second row and the correlation of green with actual grass in the last row.

This variant predicts hue and chroma and uses chromatic fading during image generation. Table 1 provides complete validation benchmarks for all system variants, including the trivial baseline of no colorization (returning the grayscale input). On the ImageNet test set (ctest10k), our selected model obtains 0.293 (RMSE, αβ, avg/px) and 24.94 dB (PSNR, RGB, avg/im), compared to 0.333 and 23.27 dB for the no colorization baseline.

Table 2 examines the importance of different neural network layers to colorization; it reports validation performance of ablated systems that include only the specified subsets of layers in the hypercolumn used to predict hue and chroma. Some lower layers may be discarded without much performance loss, yet higher layers alone (fc6..fc7) are insufficient for good colorization.

Our ImageNet colorization benchmark is new to a field lacking an established evaluation protocol. We therefore focus on comparisons with two recent papers [11,12], using their self-defined evaluation criteria. To do so, we run our ImageNet-trained hue and chroma model on two additional datasets:


Method                         RMSE
Grayscale (no colorization)    0.285
Welsh et al. [4]               0.353
Deshpande et al. [11]          0.262
  + GT Scene                   0.254
Our Method                     0.211

Table 3: SUN-6. Comparison with competing methods.

Method                        RMSE
Deshpande et al. (C) [11]     0.236
Deshpande et al. (Q)          0.211
Our Method (Q)                0.178
Our Method (E)                0.165

Table 4: SUN-6 (GT Hist). Comparison using ground-truth histograms. Results for Deshpande et al. [11] use GT Scene.

SUN-A [19] is a subset of the SUN dataset [18] containing 47 object categories. Cheng et al. [12] train a colorization system on 2688 images and report results on 1344 test images. We were unable to obtain the list of test images, and therefore report results averaged over five random subsets of 1344 SUN-A images. We do not use any SUN-A images for training.

SUN-6, another SUN subset, used by Deshpande et al. [11], includes images from 6 scene categories (beach, castle, outdoor, kitchen, living room, bedroom). We compare our results on 240 test images to those reported in [11] for their method as well as for Welsh et al. [4] with automatically matched reference images as in [8]. Following [11], we consider another evaluation regime in which ground-truth target color histograms are available.

Figure 6 shows a comparison of results on SUN-6. Forgoing usage of ground-truth global histograms, our fully automatic system produces output qualitatively superior to methods relying on such side information. Tables 3 and 4 report quantitative performance corroborating this view. The partially automatic systems in Table 4 adapt output to fit global histograms using either: (C) cluster correspondences [11], (Q) quantile matching, or (E) our energy minimization described in Section 3.4.

Fig. 7: SUN-6. Cumulative histogram of per-pixel error (higher = more pixels with lower error), comparing no colorization, Welsh et al., Deshpande et al., and our method, with Deshpande et al. and our method also shown using ground-truth histograms (GTH). Results for Deshpande et al. [11] use GT Scene.

Fig. 8: SUN-A. Histogram of per-image PSNR for Cheng et al. [12] and our method. The highest geometric mean PSNR reported for experiments in [12] is 24.2, vs. 32.7 ± 2.0 for us.


Fig. 9: Failure modes. Top row, left-to-right: texture confusion, too homogeneous, color bleeding, unnatural color shifts (×2). Bottom row: inconsistent background, inconsistent chromaticity, not enough color, object not recognized (upside down face partly gray), context confusion (sky).

Our quantile matching results are superior to those of [11] and our new energy minimization procedure offers further improvement.

Figures 7 and 8 compare error distributions on SUN-6 and SUN-A. As in Table 3, our fully automatic method dominates all competing approaches, even those which use auxiliary information. It is only outperformed by the version of itself augmented with ground-truth global histograms. On SUN-A, Figure 8 shows clear separation between our method and [12] on per-image PSNR.

The Appendix (Figures 13 and 14) provides anecdotal comparisons to one additional method, that of Charpiat et al. [38], which can be considered an automatic system if reference images are available. Unfortunately, source code of [38] is not available and the reported time cost is prohibitive for large-scale evaluation (30 minutes per image). We were thus unable to benchmark [38] on large datasets.

Though we achieve significant improvements compared to the state-of-the-art, our results are not perfect. Figure 9 shows examples of significant failures. Minor imperfections can also be seen in some of the successful colorization results in Figures 4 and 5. We believe a common failure mode may correlate with gaps in semantic interpretation: incorrectly identified or unfamiliar objects and incorrect segmentation. In addition, there are mistakes due to natural uncertainty of color; e.g. the graduation robe at the bottom right of Figure 4 is red, but could as well be purple.

Since our method produces histograms, we can provide an interactive means of biasing uncertain colorizations according to user input preferences. Rather than output a single color per pixel, we can sample color for image regions, driven by the predicted histograms, and evaluate color uncertainty.

Specifically, solving our energy minimization formulation (Equation (6)) with global biases b that are not optimized based on a reference image, but simply rotated through color space, induces changed color preferences everywhere in the image. The effect is modulated by the uncertainty in the predicted color histogram. Figure 10 shows multiple sampled colorizations, together with a visualization of color uncertainty.


Fig. 10: Sampling multiple colorizations. From left: graylevel input; three colorizations sampled from our model; color uncertainty map according to our model.

Here, we calculate uncertainty as the entropy of the predicted hue multiplied by the chroma.
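A sketch (ours) of this uncertainty map under the stated definition: per-pixel entropy of the predicted hue histogram, multiplied by the chroma.

```python
import numpy as np

def color_uncertainty(hue_hists, chroma, eps=1e-12):
    """Per-pixel uncertainty: entropy of the hue histogram times the chroma.

    hue_hists: (H, W, K) predicted hue distributions; chroma: (H, W).
    """
    entropy = -np.sum(hue_hists * np.log(hue_hists + eps), axis=-1)
    return entropy * chroma
```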

We believe our distributional output representation and energy minimization framework open the path for future investigation of interactive human-in-the-loop colorization tools.

5 Conclusion

We present a system that demonstrates state-of-the-art ability to automatically colorize grayscale images. Two novel contributions enable this progress: a deep neural architecture that is trained end-to-end to incorporate semantically meaningful features of varying complexity into colorization, and a color histogram prediction framework that handles the uncertainty and ambiguities inherent in colorization while preventing jarring artifacts. Our fully automatic colorizer produces strong results, improving upon previously leading methods by large margins on all datasets tested; we also propose a new large-scale generic benchmark for automatic image colorization, and establish a strong baseline with our method to facilitate future comparisons. Our colorization results are visually appealing even on complex scenes, and allow for effective post-processing with creative control via color histogram transfer and intelligent, uncertainty-driven color sampling.


6 Acknowledgements

We would like to thank Ayan Chakrabarti for suggesting the lightness-normalized quantile matching method and for useful discussions, and Aditya Deshpande and Jason Rock for a helpful discussion on colorization and on their work. We gratefully acknowledge the support of NVIDIA Corporation with the donation of GPUs used for this research.

References

1. Tsaftaris, S.A., Casadio, F., Andral, J.L., Katsaggelos, A.K.: A novel visualization tool for art history and conservation: Automated colorization of black and white archival photographs of works of art. Studies in Conservation 59(3) (2014)
2. Sykora, D., Burianek, J., Zara, J.: Unsupervised colorization of black-and-white cartoons. In: International symposium on Non-photorealistic animation and rendering. (2004)
3. Qu, Y., Wong, T.T., Heng, P.A.: Manga colorization. ACM Transactions on Graphics (TOG) 25(3) (2006)
4. Welsh, T., Ashikhmin, M., Mueller, K.: Transferring color to greyscale images. ACM Transactions on Graphics (TOG) 21(3) (2002)
5. Levin, A., Lischinski, D., Weiss, Y.: Colorization using optimization. ACM Transactions on Graphics (TOG) 23(3) (2004)
6. Irony, R., Cohen-Or, D., Lischinski, D.: Colorization by example. In: Eurographics Symp. on Rendering. (2005)
7. Charpiat, G., Hofmann, M., Scholkopf, B.: Automatic image colorization via multimodal predictions. In: ECCV. (2008)
8. Morimoto, Y., Taguchi, Y., Naemura, T.: Automatic colorization of grayscale images using multiple images on the web. In: SIGGRAPH: Posters. (2009)
9. Chia, A.Y.S., Zhuo, S., Gupta, R.K., Tai, Y.W., Cho, S.Y., Tan, P., Lin, S.: Semantic colorization with internet images. ACM Transactions on Graphics (TOG) 30(6) (2011)
10. Gupta, R.K., Chia, A.Y.S., Rajan, D., Ng, E.S., Zhiyong, H.: Image colorization using similar images. In: ACM international conference on Multimedia. (2012)
11. Deshpande, A., Rock, J., Forsyth, D.: Learning large-scale automatic image colorization. In: ICCV. (2015)
12. Cheng, Z., Yang, Q., Sheng, B.: Deep colorization. In: ICCV. (2015)
13. Sapiro, G.: Inpainting the colors. In: ICIP. (2005)
14. Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: ICLR. (2015)
15. Maire, M., Yu, S.X., Perona, P.: Reconstructive sparse code transfer for contour detection and semantic labeling. In: ACCV. (2014)
16. Mostajabi, M., Yadollahpour, P., Shakhnarovich, G.: Feedforward semantic segmentation with zoom-out features. In: CVPR. (2015)
17. Hariharan, B., Arbelaez, P., Girshick, R., Malik, J.: Hypercolumns for object segmentation and fine-grained localization. In: CVPR. (2015)
18. Xiao, J., Hays, J., Ehinger, K.A., Oliva, A., Torralba, A.: SUN database: Large-scale scene recognition from abbey to zoo. In: CVPR. (2010)
19. Patterson, G., Xu, C., Su, H., Hays, J.: The SUN attribute database: Beyond categories for deeper scene understanding. International Journal of Computer Vision 108(1-2) (2014)
20. Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., Berg, A.C., Fei-Fei, L.: ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV) 115(3) (2015)
21. Huang, Y.C., Tung, Y.S., Chen, J.C., Wang, S.W., Wu, J.L.: An adaptive edge detection based colorization algorithm and its applications. In: ACM international conference on Multimedia. (2005)
22. Yatziv, L., Sapiro, G.: Fast image and video colorization using chrominance blending. Image Processing, IEEE Transactions on 15(5) (2006)
23. Luan, Q., Wen, F., Cohen-Or, D., Liang, L., Xu, Y.Q., Shum, H.Y.: Natural image colorization. In: Eurographics conference on Rendering Techniques. (2007)
24. Tai, Y.W., Jia, J., Tang, C.K.: Local color transfer via probabilistic segmentation by expectation-maximization. In: CVPR. (2005)
25. Shi, J., Malik, J.: Normalized cuts and image segmentation. Pattern Analysis and Machine Intelligence, IEEE Transactions on 22(8) (2000)
26. Tola, E., Lepetit, V., Fua, P.: A fast local descriptor for dense matching. In: CVPR. (2008)
27. Long, J., Shelhamer, E., Darrell, T.: Fully convolutional networks for semantic segmentation. In: CVPR. (2015)
28. Farabet, C., Couprie, C., Najman, L., LeCun, Y.: Learning hierarchical features for scene labeling. Pattern Analysis and Machine Intelligence, IEEE Transactions on 35(8) (2013)
29. Chen, L.C., Papandreou, G., Kokkinos, I., Murphy, K., Yuille, A.L.: Semantic image segmentation with deep convolutional nets and fully connected CRFs. In: ICLR. (2015)
30. Ganin, Y., Lempitsky, V.S.: N4-fields: Neural network nearest neighbor fields for image transforms. In: ACCV. (2014)
31. Bertasius, G., Shi, J., Torresani, L.: DeepEdge: A multi-scale bifurcated deep network for top-down contour detection. In: CVPR. (2015)
32. Shen, W., Wang, X., Wang, Y., Bai, X., Zhang, Z.: DeepContour: A deep convolutional feature learned by positive-sharing loss for contour detection. In: CVPR. (2015)
33. Xie, S., Tu, Z.: Holistically-nested edge detection. In: ICCV. (2015)
34. Jia, Y., Shelhamer, E., Donahue, J., Karayev, S., Long, J., Girshick, R., Guadarrama, S., Darrell, T.: Caffe: Convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093 (2014)
35. Ioffe, S., Szegedy, C.: Batch normalization: Accelerating deep network training by reducing internal covariate shift. In: ICML. (2015)
36. Liu, W., Rabinovich, A., Berg, A.C.: ParseNet: Looking wider to see better. arXiv preprint arXiv:1506.04579 (2015)
37. Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. In: AISTATS. (2010)
38. Charpiat, G., Bezrukov, I., Altun, Y., Hofmann, M., Scholkopf, B.: Machine learning methods for automatic image colorization. In: Computational Photography: Methods and Applications. CRC Press (2010)


Fig. 11: Histogram predictions. Example of predicted hue/chroma histograms.

Appendix A provides additional training and evaluation details. This is followed by more results and examples in Appendix B.

A Supplementary details

A.1 Re-balancing

To adjust the scale of the activations of layer l by a factor m, without changing any other layer's activation, the weights W and the biases b are updated according to:

W_l ← m W_l,    b_l ← m b_l,    W_{l+1} ← (1/m) W_{l+1}    (7)

The activation x_{l+1} becomes:

x_{l+1} = (1/m) W_{l+1} ReLU(m W_l x_l + m b_l) + b_{l+1}    (8)

The factor m inside the ReLU does not affect whether or not a value is rectified, so the two cases remain the same: (1) negative: the activation will be the corresponding feature in b_{l+1} regardless of m, and (2) positive: the ReLU becomes the identity function and m and 1/m cancel, recovering the original activation.

We set m = 1/√(E[X²]), estimated for each layer separately.
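A sketch (ours) of the rebalancing update of Equation (7) for one layer, with m estimated from sample activations; as argued above, the overall network function is unchanged under ReLU.

```python
import numpy as np

def rebalance_layer(w_l, b_l, w_next, activations):
    """Rescale layer l to roughly unit second moment without changing the network.

    Applies Equation (7): W_l <- m W_l, b_l <- m b_l, W_{l+1} <- W_{l+1} / m,
    with m = 1 / sqrt(E[X^2]) estimated from sample activations of layer l.
    """
    m = 1.0 / np.sqrt(np.mean(activations ** 2))
    return m * w_l, m * b_l, w_next / m
```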

A.2 Color space

The color channels αβ (ab in [11]) are calculated as

α = (B − (R + G)/2) / (L + ε),    β = (R − G) / (L + ε)    (9)

where ε = 0.0001, R, G, B ∈ [0, 1], and L = (R + G + B)/3.³

³ We know that this is how Deshpande et al. [11] calculate it based on their code release.
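A sketch of Equation (9) (our transcription, not the released code of [11]):

```python
import numpy as np

def rgb_to_alpha_beta(rgb, eps=1e-4):
    """Compute the alpha/beta channels of Equation (9) from RGB in [0, 1]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    lightness = (r + g + b) / 3.0
    alpha = (b - 0.5 * (r + g)) / (lightness + eps)
    beta = (r - g) / (lightness + eps)
    return alpha, beta
```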


Fig. 12: Histogram predictions. Example of predicted hue/chroma histograms.

A.3 Error metrics

For M images, each image m with N_m pixels, we calculate the error metrics as:

RMSE = sqrt( (1 / Σ_{m=1}^M N_m) Σ_{m=1}^M Σ_{n=1}^{N_m} ‖ [y_αβ^(m)]_n − [ŷ_αβ^(m)]_n ‖² )    (10)

PSNR = −(1/M) Σ_{m=1}^M 10 log₁₀( ‖ y_RGB^(m) − ŷ_RGB^(m) ‖² / (3 N_m) )    (11)

where y_αβ^(m) ∈ [−3, 3]^{N_m×2} and y_RGB^(m) ∈ [0, 1]^{N_m×3} for all m.
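A sketch (ours) of both metrics as written above; the leading negative sign in the PSNR term follows from the standard definition with peak value 1, matching the positive decibel values reported in Section 4.

```python
import numpy as np

def rmse_alpha_beta(preds, truths):
    """Equation (10): RMSE over all pixels of all images in alpha-beta space.

    preds, truths: lists of (N_m, 2) arrays of alpha-beta values per image.
    """
    sq = np.concatenate([np.sum((p - t) ** 2, axis=1)
                         for p, t in zip(preds, truths)])
    return float(np.sqrt(sq.mean()))

def mean_psnr_rgb(preds, truths):
    """Equation (11): arithmetic mean over images of per-image RGB PSNR (peak = 1).

    preds, truths: lists of (N_m, 3) arrays of RGB values in [0, 1] per image.
    """
    psnrs = [-10.0 * np.log10(np.sum((p - t) ** 2) / (3 * len(p)))
             for p, t in zip(preds, truths)]
    return float(np.mean(psnrs))
```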

A.4 Lightness correction

Ideally the lightness L is an unaltered pass-through channel. However, due to subtle differences in how L is defined, it is possible that the lightness of the predicted image, L̂, does not agree with the input, L. To compensate for this, we add L − L̂ to all color channels in the predicted RGB image as a final corrective step.


Hue          Chroma       CF    RMSE   PSNR
Sample       Sample             0.426  21.41
Mode         Mode               0.304  23.90
Expectation  Expectation        0.374  23.13
Expectation  Expectation  Yes   0.307  24.35
Expectation  Median             0.342  23.77
Expectation  Median       Yes   0.299  24.45

Table 5: ImageNet/cval1k. Comparison of various histogram inference methods for hue/chroma. Mode/mode does fairly well but has severe visual artifacts. (CF = chromatic fading)

B Supplementary results

B.1 Validation

A more detailed list of validation results for hue/chroma inference methods is given in Table 5.

B.2 Examples

A comparison with Charpiat et al. [38] appears in Figures 13 and 14. Examples of how our algorithm can bring old photographs to life appear in Figure 15. More examples on ImageNet (ctest10k) appear in Figures 16 to 19 and Figure 20 (failure cases). Examples of histogram predictions appear in Figures 11 and 12.

Fig. 13: Transfer. Comparison with Charpiat et al. [38] with a reference image. Their method works fairly well when the reference image closely matches (compare with Figure 14). However, it still presents sharp unnatural color edges. We apply our histogram transfer method (energy minimization) using the reference image.


Fig. 14: Portraits. Comparison with Charpiat et al. [38], a transfer-based method using 53 reference portrait paintings. Note that their method works significantly worse when the reference images are not hand-picked for each grayscale input (compare with Figure 13). Our model was not trained specifically for this task and we used no reference images.


Fig. 15: B&W photographs. Old photographs that were automatically colorized. (Source: Library of Congress, www.loc.gov)


Fig. 16: Fully automatic colorization results on ImageNet/ctest10k.


Fig. 17: Fully automatic colorization results on ImageNet/ctest10k.


Fig. 18: Fully automatic colorization results on ImageNet/ctest10k.


Fig. 19: Fully automatic colorization results on ImageNet/ctest10k.


Fig. 20: Failure cases. Examples of the five most common failure cases: the whole image lacks saturation (Too Desaturated); inconsistent chroma in objects or regions, causing parts to be gray (Inconsistent Chroma); inconsistent hue, causing unnatural color shifts that are particularly typical between red and blue (Inconsistent Hue); inconsistent hue and chroma around the edge, commonly occurring for closeups where background context is unclear (Edge Pollution); color boundary not clearly separated, causing color bleeding (Color Bleeding).
