arXiv:1905.08615v1 [cs.CV] 15 May 2019


ROI REGULARIZATION FOR SEMI-SUPERVISED AND SUPERVISED LEARNING

Hiroshi Kaizuka
NS Solutions Corporation
kaizuka.hiroshi.3hn@jp.nssol.nipponsteel.com

Yasuhiro Nagasaki
Texeng Solutions Corporation
nagasaki.yasuhiro.x3@tex-sol.com

Ryo Sako
University of Tsukuba
s1820454@s.tsukuba.ac.jp

ABSTRACT We propose ROI regularization (ROIreg) as a semi-supervised learning method for image classification. ROIreg focuses on the maximum probability of the posterior probability distribution g(x) obtained when an unlabeled data sample x is input into a convolutional neural network (CNN). ROIreg divides the pixel set of x into multiple blocks and evaluates, for each block, its contribution to the maximum probability. A masked data sample xROI is generated by replacing the blocks with relatively small contributions with random images. ROIreg then trains the CNN so that g(xROI) changes as little as possible from g(x). ROIreg can therefore be said to further refine the classification ability of the CNN. In contrast, Virtual Adversarial Training (VAT), an excellent semi-supervised learning method, generates a data sample xVAT by perturbing x in the direction in which g(x) changes most. VAT then trains the CNN so that g(xVAT) changes as little as possible from g(x). VAT can therefore be said to be a method that remedies the CNN's weaknesses. Thus, ROIreg and VAT have complementary training effects. In fact, the combination of VAT and ROIreg improves on the results obtained when using VAT or ROIreg alone. This combination also improves the state of the art on "SVHN with and without data augmentation" and "CIFAR-10 without data augmentation". We also propose a method called ROI augmentation (ROIaug), which applies ROIreg to data augmentation in supervised learning. However, the evaluation function used there differs from the standard cross-entropy. ROIaug improves the performance of supervised learning for both SVHN and CIFAR-10. Finally, we investigate the performance degradation of VAT and VAT+ROIreg when data samples not belonging to the classification classes are included in the unlabeled data.

Document changelog: v1 (initial release).

1 Introduction

When solving the problem of classifying images into K classes with convolutional neural networks (CNNs), images related to the task are collected as training data. Manually assigning a label (indicating the class to which the image belongs) to each image is a time-consuming task when the amount of training data is large. Therefore, a situation may occur where the amount of unlabeled data is much larger than the amount of labeled data. If the amount of labeled data is not sufficient, supervised learning (SL) using only the labeled data cannot achieve high generalization performance. It is therefore desirable to improve the generalization performance of CNNs by utilizing the abundant unlabeled data. Semi-supervised learning (SemiSL) is a method to realize this expectation.

As one SemiSL method, Virtual Adversarial Training (VAT) [15], inspired by Adversarial Training [23, 6], has been proposed. The method combining VAT with the "entropy minimization" term [7] (called ENT in this paper) was the state of the art for SemiSL at the time. The superiority of VAT is also confirmed in thorough performance-comparison experiments on SemiSL [17]. Let g(x;θk) be the posterior probability distribution obtained when a data sample x is input to a CNN (with weights θ = θk). VAT first finds the perturbation direction d (with unit length) of x



where g(x;θk) changes most sensitively. Next, VAT updates θ so that g(x + εd;θ) differs as little as possible from g(x;θk), for a small positive number ε. Therefore, VAT can be interpreted as effectively enhancing the CNN's performance by reinforcing the weak points where the CNN's generalization performance deteriorates most. In this sense, VAT can be said to be a learning method that competes with SL.
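The two-step procedure above can be sketched on a toy model. VAT itself estimates d with a power-iteration gradient approximation on a real CNN; the finite-difference estimate, the linear-softmax stand-in for the CNN, and all constants below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def kl(p, q):
    return float(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12))))

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 8))      # toy stand-in for a CNN: g(x) = softmax(W x)
x = rng.normal(size=8)
g_x = softmax(W @ x)

# Step 1: estimate the unit direction d in which g(x) changes most sensitively,
# here via a finite-difference gradient of KL(g(x) || g(x + delta)) around a
# small random offset (VAT uses a power-iteration gradient estimate instead).
xi, eps = 0.1, 0.5
r = rng.normal(size=8)
r /= np.linalg.norm(r)
base = kl(g_x, softmax(W @ (x + xi * r)))
grad = np.zeros_like(x)
for i in range(len(x)):
    e_i = np.zeros_like(x)
    e_i[i] = xi
    grad[i] = (kl(g_x, softmax(W @ (x + xi * r + e_i))) - base) / xi
d = grad / (np.linalg.norm(grad) + 1e-12)

# Step 2: the VAT loss is the divergence between g(x) and g(x + eps*d); during
# training it is minimized with respect to the weights W (not shown here).
vat_loss = kl(g_x, softmax(W @ (x + eps * d)))
```

During training, g(x;θk) is treated as a fixed target, so the gradient of the consistency term flows only through the perturbed branch.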

We propose a new learning method ROI Regularization (ROIreg) that can complement the learning effect of VAT. The aim is to realize SemiSL with performance superior to VAT by simultaneously using this new learning method and VAT.

Let gmax(x;θk) be the maximum probability in g(x;θk). First, ROIreg extracts from the pixel region of x a region Ω having a relatively small contribution to gmax(x;θk). Next, the masked image x(Ω) is obtained by replacing the pixel values of the pixels included in the region Ω with random numbers. Then, ROIreg updates θ so that g(x(Ω);θ) differs as little as possible from g(x;θk). Here, the complement Ωc of Ω corresponds to the region of interest (ROI), which is the area important for classification. For pixels contained in Ωc, x(Ω) and x have the same pixel values. Therefore, ROIreg can be said to be a learning method that ensures that the classification ability of the CNN does not deteriorate even if the input information is limited to the information included in Ωc. Meanwhile, the reliability of Ωc increases as SL using labeled data progresses. Therefore, ROIreg has the effect of refining the features acquired by SL. In this sense, ROIreg can be said to be a learning method that cooperates with SL. By combining ROIreg, which is cooperative with SL, with VAT, which is competitive with SL, it is therefore highly likely that the two can complement each other's learning effects. In fact, VAT+ROIreg+ENT outperforms VAT+ENT. It also improves the state of the art on "SVHN with and without data augmentation" and "CIFAR-10 without data augmentation".
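The masking procedure can be illustrated with a toy occlusion-based sketch. The block size, the masking ratio, the occlusion-style contribution score, and the linear-softmax stand-in for the CNN below are all assumptions for illustration; the paper's exact contribution measure may differ.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(1)
W = rng.normal(size=(4, 64))          # toy classifier on flattened 8x8 images
x = rng.normal(size=(8, 8))
g = softmax(W @ x.ravel())
top = int(np.argmax(g))               # index of g_max(x)

# Score each 4x4 block by how much g_max drops when that block alone is
# replaced by noise (an occlusion-style contribution estimate).
blocks = [(r, c) for r in range(0, 8, 4) for c in range(0, 8, 4)]
scores = []
for r, c in blocks:
    x_occ = x.copy()
    x_occ[r:r+4, c:c+4] = rng.normal(size=(4, 4))
    scores.append(g[top] - softmax(W @ x_occ.ravel())[top])

# Mask the half of the blocks with the smallest contribution: these form the
# region Omega outside the ROI; blocks inside the ROI are left untouched.
order = np.argsort(scores)
x_roi = x.copy()
for idx in order[:len(blocks) // 2]:
    r, c = blocks[idx]
    x_roi[r:r+4, c:c+4] = rng.normal(size=(4, 4))

# ROIreg consistency loss: keep g(x_roi) close to g(x); during training this
# is minimized with respect to the weights (shown here only as a value).
p, q = g, softmax(W @ x_roi.ravel())
roireg_loss = float(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12))))
```

Note how the high-contribution blocks pass through unchanged, so the loss pressures the network to classify correctly from the ROI alone.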

VAT and ROIreg can also be applied in situations where only labeled data is used. We examine whether they can improve the performance of normal SL in this situation. In this case, since SL is used, ENT is not used.

In addition, we propose a data augmentation method called ROI augmentation (ROIaug) as one variation of ROIreg. However, the evaluation function used there is different from the standard cross-entropy.

Finally, we investigate the issue of Class Distribution Mismatch raised in [17]. In the conventional SemiSL problem setup, an unlabeled data set is created by discarding the label information of the data samples contained in a labeled data set. As a result, every data sample in the unlabeled data set belongs to one of the K classes, which is a problem setting advantageous for SemiSL. However, [17] reports that the performance of SemiSL is greatly degraded when data samples not belonging to any of the K classes are included in the unlabeled data set. We evaluate the robustness of VAT and VAT+ROIreg against such a situation (Class Distribution Mismatch) using the six-class animal classification problem posed in [17].

2 Related Work

Various methods have been proposed for SemiSL. Among them is a group of methods called consistency regularization, which has achieved the best SemiSL results in recent years. Consistency regularization defines two functions ftarget(x;θ) and f(x;θ) determined from a CNN (weights θ, posterior probability distribution g(x;θ)) and an unlabeled data sample x. These are functions for which it is natural to assume that the difference between the two should be small. Then, θ is optimized to minimize the sum of an evaluation function measuring the difference between ftarget(x;θ) and f(x;θ) and the conventional loss function for labeled data samples.
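A minimal numerical sketch of this combined objective, assuming a linear-softmax model, mean squared error as the difference measure, and input noise as the stochastic element (all illustrative choices, not any specific paper's setup):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(y_idx, q):
    return -float(np.mean(np.log(q[np.arange(len(q)), y_idx] + 1e-12)))

def mse(p, q):
    return float(np.mean((p - q) ** 2))

rng = np.random.default_rng(2)
W = rng.normal(size=(5, 3))        # toy linear model: 5 features -> 3 classes

# Labeled batch: the conventional supervised cross-entropy term.
xl = rng.normal(size=(4, 5))
yl = np.array([0, 1, 2, 1])
loss_sup = cross_entropy(yl, softmax(xl @ W))

# Unlabeled batch: the consistency term D(f_target(x), f(x)), where f_target
# and f here come from two noisy forward passes of the same inputs.
xu = rng.normal(size=(8, 5))
f_target = softmax((xu + 0.1 * rng.normal(size=xu.shape)) @ W)
f = softmax((xu + 0.1 * rng.normal(size=xu.shape)) @ W)

lam = 1.0                          # weight of the consistency term
total_loss = loss_sup + lam * mse(f_target, f)
```

The individual methods below differ mainly in how ftarget and f are constructed and in which difference measure is used.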

VAT [15] and our proposed method ROIreg analytically generate perturbed data x̃ from x, and both adopt ftarget(x;θ) = g(x;θ) and f(x;θ) = g(x̃;θ). While VAT adds a linear perturbation to x, ROIreg uses a nonlinear perturbation that replaces the image of a subregion of x with a random image.

In the Π-model [12], x is input to the CNN twice. In each forward propagation, two different values of g(x;θ) are obtained because the realized values of the stochastic elements (noise addition, dropout patterns) differ. The Π-model adopts these as ftarget(x;θ) and f(x;θ). The Π-model can also be viewed as a simplification of the Γ-model of the Ladder Network [20]. The Π-model is a method that enhances the generalization performance of the CNN via an ensemble average over the two stochastic forward passes.
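A minimal sketch of the two-pass idea, assuming a tiny two-layer network with inverted dropout; the layer sizes, dropout rate, and squared-error penalty are arbitrary illustrative choices:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(3)
W1 = rng.normal(size=(16, 10))
W2 = rng.normal(size=(4, 16))

def forward(x, rng):
    """One stochastic forward pass with a fresh dropout mask."""
    h = np.maximum(W1 @ x, 0.0)
    mask = (rng.random(16) > 0.5) / 0.5   # inverted dropout, rate 0.5
    return softmax(W2 @ (h * mask))

# Pi-model: the same x is passed through the network twice; the two dropout
# masks differ, and the squared difference of the predictions is penalized.
x = rng.normal(size=10)
f_target = forward(x, rng)
f = forward(x, rng)
pi_loss = float(np.mean((f_target - f) ** 2))
```

Minimizing pi_loss pushes the network toward predictions that are stable under its own stochastic elements.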

In the Π-model, the two dropout patterns εtarget and ε are determined randomly. However, a method has been proposed that analytically determines ε so that the difference between ftarget(x;θ) and f(x;θ) is maximized over the δ-hypersphere centered on εtarget. This method is called Virtual Adversarial Dropout (VAdD) [18].

Temporal Ensembling [12] is a method where f(x;θ) = g(x;θ) and ftarget(x;θ) is an exponential moving average of the g(x;θ) values from past epochs. Therefore, Temporal Ensembling can be said to adopt an ensemble average over the epochs that appeared in the training process. However, ftarget(x;θ) for each x is updated only once per epoch, so it takes time for Temporal Ensembling to become effective. The method that improves on this is Mean Teacher [24]. Mean Teacher uses a CNN whose weights are an exponential moving average of the θ values obtained during training as a teacher CNN. Mean Teacher



adopts g(x;θ) as f(x;θ) and the posterior probability distribution predicted by the teacher CNN as ftarget(x;θ).
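The key difference from Temporal Ensembling is that the moving average is taken over the weights, not the predictions, so the teacher can be refreshed every step. A minimal sketch of this weight-EMA update, with a random-walk stand-in for the student's gradient updates and an arbitrary decay rate:

```python
import numpy as np

rng = np.random.default_rng(4)

alpha = 0.99                              # EMA decay (illustrative value)
student_W = rng.normal(size=(4, 8))
teacher_W = student_W.copy()

for step in range(100):
    # Stand-in for one gradient update on the student's weights.
    student_W -= 0.01 * rng.normal(size=student_W.shape)
    # Mean Teacher: update the teacher as an EMA of the student, every step.
    teacher_W = alpha * teacher_W + (1 - alpha) * student_W

# The consistency loss would then compare the student's prediction
# softmax(student_W @ x) against the teacher's softmax(teacher_W @ x)
# for unlabeled x (omitted here).
```

Because the teacher lags the student's random-walk updates, its weights form a smoothed trajectory of the training process.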

VAdD and Mean Teacher, which developed from the Π-model, are learning methods complementary to VAT and ROIreg. In fact, the simultaneous use of VAdD and VAT produces results that exceed the re