
  • Adversarial Preprocessing: Understanding and Preventing Image-Scaling Attacks in Machine Learning

    Erwin Quiring, David Klein, Daniel Arp, Martin Johns and Konrad Rieck

    USENIX Security Symposium 2020

  • Motivation

    Preprocessing data is often necessary for machine learning
    Image scaling is omnipresent in machine learning

    [Figure: an image-scaling attack turns a “Do not enter” sign into a “One Way” sign after downscaling]


  • Image-Scaling Attacks

    Manipulated image changes appearance after downscaling

    [Figure: the attacker solves for a modified source A given source image S and target image T, such that S ∼ A while the downscaled output D = scale(A) satisfies T ∼ D]

    Optimization problem:

    min ‖Δ‖₂²  s.t.  ‖scale(S + Δ) − T‖∞ ≤ ε

    Both goals must be achieved: T ∼ D and S ∼ A (Xiao et al. 2019)
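
    As a rough illustration, the constrained problem can be relaxed into a penalty form and solved by gradient descent. The sketch below is illustrative only, assuming PyTorch and bilinear downscaling via F.interpolate; the function name scaling_attack and all hyperparameters are our own, not the authors' implementation.

      import torch
      import torch.nn.functional as F

      def scaling_attack(S, T, eps=0.01, steps=2000, lr=0.01):
          # S: source image, shape (1, C, H, W); T: target, shape (1, C, h, w)
          delta = torch.zeros_like(S, requires_grad=True)
          opt = torch.optim.Adam([delta], lr=lr)
          for _ in range(steps):
              A = (S + delta).clamp(0, 1)                # candidate attack image
              D = F.interpolate(A, size=T.shape[-2:],    # downscaled output
                                mode="bilinear", align_corners=False)
              # hinge penalty softly enforces ||scale(S + delta) - T||_inf <= eps
              loss = (delta ** 2).sum() + 1e4 * F.relu((D - T).abs() - eps).sum()
              opt.zero_grad()
              loss.backward()
              opt.step()
          return (S + delta).clamp(0, 1).detach()        # A ∼ S and scale(A) ∼ T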


  • Threat Scenario in Machine Learning

    Possible attacks:

    False predictions at test time

    Data manipulations at training time

    Capabilities and knowledge:
    The attack is agnostic to the learning model and data
    Only knowledge of the scaling algorithm is needed

    Quiring and Rieck 2020, Xiao et al. 2019


  • Contribution

    Our work provides the first comprehensive analysis of scaling attacks

    Root-cause analysis: understand why and when the attack works

    Prevention defenses: derive secure scaling algorithms

    Adaptive adversaries: show that the defenses are robust against an adaptive adversary


  • Root-Cause: Scaling in General (1D)

    Scaling: a convolution between a source signal s and kernel w
    The output is computed by moving w over s as a sliding window

    [Figure: 1D signal s with pixels 1–9 under kernel w; the downscaled signal consists only of 0.5 · (s[3] + s[4]) and 1 · s[7]]

    Not all pixels contribute equally
    If the step size exceeds the kernel width, some pixels are ignored entirely
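
    This effect is easy to reproduce. The toy function below is a sketch assuming a uniform kernel of width 2; the name downscale_1d and the kernel choice are illustrative, not a library's exact routine. With step size 4, only pixels 2, 3, 6 and 7 of a 9-pixel signal influence the output:

      import numpy as np

      def downscale_1d(s, ratio, kernel_width=2):
          # slide a uniform kernel over s, moving by `ratio` pixels per step
          out = []
          for center in np.arange(ratio / 2, len(s), ratio):
              lo = int(max(center - kernel_width / 2, 0))
              hi = int(min(center + kernel_width / 2, len(s)))
              out.append(s[lo:hi].mean())     # uniform weights inside the window
          return np.array(out)

      s = np.arange(1.0, 10.0)                # pixels 1..9
      print(downscale_1d(s, ratio=4))         # [2.5, 6.5]: pixels 1, 4, 5, 8, 9 are ignored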


  • Root-Cause: Scaling-Attack

    [Figure: manipulated signal a with pixels 1–9 under kernel w; the downscaled signal consists only of 0.5 · (a[3] + a[4]) and 1 · a[7]]

    The adversary only modifies pixels with high weights
    Success depends on two key parameters:
    1. The scaling ratio (→ step size)
    2. The kernel width
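
    Because common scaling routines are linear, an adversary can recover exactly these weights by probing the routine with unit impulses. The sketch below assumes some deterministic, linear 1D scale_fn (e.g. a wrapped library call); the helper name scaling_weights is hypothetical:

      import numpy as np

      def scaling_weights(scale_fn, n_in):
          # column j is the downscaled image of the j-th unit impulse,
          # i.e. the weight input pixel j receives in every output pixel
          cols = [scale_fn(np.eye(n_in)[j]) for j in range(n_in)]
          return np.stack(cols, axis=1)       # shape: (n_out, n_in)

      # Columns that are (near) zero mark pixels the algorithm never reads;
      # the attack concentrates its changes on the few high-weight columns.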


  • Prevention Defenses

    We consider two defenses to prevent the attack
    Neither changes the API of machine-learning workflows

    1. Robust scaling algorithms


    2. Image reconstruction (a minimal sketch follows below)
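
    A minimal sketch of the reconstruction idea, assuming the set of pixels the scaling algorithm reads is known (e.g. from a weight analysis as above): each such pixel is replaced by the median of its neighborhood, preferring neighbors the algorithm ignores, which removes isolated adversarial values. The window size and the names reconstruct and used_mask are illustrative, not the paper's exact procedure.

      import numpy as np

      def reconstruct(image, used_mask, window=2):
          # used_mask flags the pixels the scaling algorithm actually reads
          out = image.copy()
          H, W = image.shape
          for i, j in zip(*np.nonzero(used_mask)):
              lo_i, hi_i = max(i - window, 0), min(i + window + 1, H)
              lo_j, hi_j = max(j - window, 0), min(j + window + 1, W)
              block = image[lo_i:hi_i, lo_j:hi_j]
              keep = ~used_mask[lo_i:hi_i, lo_j:hi_j]   # prefer untouched pixels
              out[i, j] = np.median(block[keep] if keep.any() else block)
          return out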


  • Evaluation

    Common imaging libraries evaluated:
    1. OpenCV, used by Caffe and DeepLearning4j
    2. Pillow, used by PyTorch
    3. tf.image, used by TensorFlow

    All implemented scaling algorithms tested

    ImageNet dataset with VGG19 model used for evaluating predictions

    Russakovsky et al. 2015, Simonyan and Zisserman 2014
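
    For reference, the core step of such an evaluation can be reproduced with the three libraries' standard bilinear resize calls; the file name attack.png and the 224×224 VGG19 input size are placeholders, and the exact evaluation harness is not shown here:

      import cv2
      import numpy as np
      import tensorflow as tf
      from PIL import Image

      A = cv2.imread("attack.png")            # manipulated source image
      h, w = 224, 224                         # VGG19 input resolution

      d_cv  = cv2.resize(A, (w, h), interpolation=cv2.INTER_LINEAR)
      d_pil = np.asarray(Image.fromarray(A).resize((w, h), Image.BILINEAR))
      d_tf  = tf.image.resize(A, (h, w), method="bilinear").numpy()
      # each downscaled output can now be compared against the target T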


  • Attack Performance

    [Figure: attack success rate (%) for nearest, bilinear, and bicubic scaling in OpenCV, Pillow, and TensorFlow; success means the target T ∼ the downscaled output D]

    The attack succeeds: the downscaled output is close to the target image
    However, its visibility depends on the scaling ratio and algorithm


  • Defense Performance

    Reconstruction prevents attacks against all vulnerable algorithms
    The attacker thus cannot achieve T ∼ D
    Reconstruction increases the visual quality
    The defense allows recovering the original prediction

    [Figure: attack image A and its output scale(A) under the original attack, versus restored image R and its output scale(R) with the defense]


  • Conclusion

    Analysis of image-scaling attacks
    Root-cause analysis
    Effective defenses
    Comprehensive evaluation

    → Further information and implementation at: https://scaling-attacks.net


  • References

    [1] Erwin Quiring and Konrad Rieck. “Backdooring and Poisoning Neural Networks with Image-Scaling Attacks”. In: Deep Learning and Security Workshop (DLS). 2020.

    [2] Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. “ImageNet Large Scale Visual Recognition Challenge”. In: International Journal of Computer Vision (IJCV) 115.3 (2015).

    [3] Karen Simonyan and Andrew Zisserman. Very Deep Convolutional Networks for Large-Scale Image Recognition. Tech. rep. arXiv:1409.1556, 2014.

    [4] Qixue Xiao, Yufei Chen, Chao Shen, Yu Chen, and Kang Li. “Seeing is Not Believing: Camouflage Attacks on Image Scaling Algorithms”. In: Proc. of USENIX Security Symposium. 2019.
