ICIP'18 Tutorial on Interpretable Deep Learning
iphome.hhi.de/samek/pdf/ICIP2018_4.pdf · 2018-10-09 · 62 pages

TRANSCRIPT

Page 1: General Images (Bach'15, Lapuschkin'16) [iphome.hhi.de/samek/pdf/ICIP2018_4.pdf, 2018-10-09]
Page 2

ICIP’18 Tutorial on Interpretable Deep Learning 2

LRP revisited

General Images (Bach'15, Lapuschkin'16)
Text Analysis (Arras'16 & '17)
Speech (Becker'18)
Games (Lapuschkin'18)
EEG (Sturm'16)
fMRI (Thomas'18)
Morphing (Seibold'18)
Video (Anders'18)
VQA (Arras'18)
Histopathology (Binder'18)
Faces (Lapuschkin'17)
Gait Patterns (Horst'18)
Digits (Bach'15)

Page 3

LRP revisited

LSTM (Arras'17, Thomas'18)
Convolutional NNs (Bach'15, Arras'17, …)

Bag-of-words / Fisher Vector models (Bach’15, Arras’16, Lapuschkin’17, Binder’18)

One-class SVM (Kauffmann’18)

Local Renormalization Layers (Binder'16)
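The rule these model types share can be illustrated on a single dense layer. A minimal NumPy sketch of the LRP-ε rule (the function name and toy shapes are illustrative, not from the slides):

```python
import numpy as np

def lrp_epsilon_dense(a, W, b, R_out, eps=1e-6):
    """Redistribute output relevance R_out onto the inputs of one dense layer.

    a:     input activations, shape (n_in,)
    W:     weight matrix, shape (n_out, n_in)
    b:     bias, shape (n_out,)
    R_out: relevance of the layer outputs, shape (n_out,)
    """
    z = W @ a + b                       # forward pre-activations
    s = R_out / (z + eps * np.sign(z))  # epsilon-stabilized ratio
    return a * (W.T @ s)                # relevance of the inputs
```

With zero bias and a small ε, the rule is conservative: the input relevances sum to the output relevance, which is what makes layer-by-layer propagation through a whole network meaningful.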

Page 4

MICCAI’18 Tutorial on Interpretable Machine Learning

Application of LRP: Compare Models

Page 5

Application: Compare Classifiers

(Arras et al. 2016 & 2017)

word2vec/CNN:

Performance: 80.19%
Strategy to solve the problem: identify semantically meaningful words related to the topic.

BoW/SVM:

Performance: 80.10%
Strategy to solve the problem: identify statistical patterns, i.e., use word statistics.

Page 6

Application: Compare Classifiers

word2vec/CNN model vs. BoW/SVM model

(Arras et al. 2016 & 2017)
Words with maximum relevance
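Rankings like "words with maximum relevance" come from pooling per-word LRP scores over a corpus. A small illustrative sketch (function name and toy data are assumptions, not from the slides):

```python
from collections import defaultdict

def top_relevant_words(documents, relevances, k=5):
    """Aggregate per-word LRP relevance over a corpus and rank words.

    documents:  list of token lists
    relevances: list of per-token relevance lists (same shapes as documents)
    """
    totals = defaultdict(float)
    for tokens, scores in zip(documents, relevances):
        for tok, r in zip(tokens, scores):
            totals[tok] += r           # pool relevance per word type
    return sorted(totals, key=totals.get, reverse=True)[:k]
```

Ranking pooled relevance (rather than raw word counts) is what separates the word2vec/CNN model's semantically meaningful words from the BoW/SVM model's purely statistical ones.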

Page 7

Visual Object Classes Challenge: 2005 - 2012

LRP in Practice

Page 8

(Lapuschkin et al. 2016)
Same performance -> same strategy?

Application: Compare Classifiers

Page 9

‘horse’ images in PASCAL VOC 2007

Application: Compare Classifiers

Page 10

Application: Compare Classifiers

GoogleNet:
- 22 layers
- ILSVRC: 6.7%
- Inception layers

BVLC:
- 8 layers
- ILSVRC: 16.4%

Page 11

GoogleNet focuses on the faces of the animals -> it suppresses background noise.

BVLC CaffeNet heatmaps are much noisier.

(Binder et al. 2016)

Application: Compare Classifiers

Is it related to the architecture?

Is it related to the performance?

(figure: performance <-> heatmap structure ?)

Page 12

Application of LRP: Quantify Context Use

Page 13

Application: Measure Context Use

How important is the context for the classifier's decision?

importance of context = relevance outside bbox / relevance inside bbox

The LRP decomposition allows meaningful pooling over the bbox!
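Because LRP produces a pixel-wise decomposition, relevance can be pooled over any region, such as an object bounding box. A sketch of the context measure, under the assumption that only positive relevance is pooled (function name and conventions are illustrative):

```python
import numpy as np

def context_use(relevance, bbox):
    """Ratio of positive relevance outside the object bounding box
    to positive relevance inside it (higher = more context use).

    relevance: 2D heatmap, shape (H, W)
    bbox:      (x0, y0, x1, y1) in pixels, exclusive upper bounds
    """
    x0, y0, x1, y1 = bbox
    pos = np.clip(relevance, 0.0, None)   # keep only positive evidence
    inside = pos[y0:y1, x0:x1].sum()
    outside = pos.sum() - inside
    return outside / inside
```

A gradient map would not support this kind of pooling as directly, since its values are local sensitivities rather than shares of the prediction score.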

Page 14

Application: Measure Context Use

(Lapuschkin et al., 2016)

- BVLC reference model + fine-tuning
- PASCAL VOC 2007

Page 15

(Lapuschkin et al. 2016)

Application: Measure Context Use

- Different models (BVLC CaffeNet, GoogleNet, VGG CNN S)
- ILSVRC 2012

(figure: context use per model: BVLC CaffeNet, GoogleNet, VGG CNN S)

Context use is anti-correlated with performance.

Page 16

Application of LRP: Detect Biases & Improve Models

Page 17

(Lapuschkin et al., 2017)


Application: Face analysis
- Compare AdienceNet, CaffeNet, GoogleNet, VGG-16
- Adience dataset, 26,580 images

Age classification Gender classification

A = AdienceNet, C = CaffeNet, G = GoogleNet, V = VGG-16

[i] = in-place face alignment
[r] = rotation-based alignment
[m] = mixing aligned images for training
[n] = initialization on ImageNet
[w] = initialization on IMDB-WIKI

Page 18

(Lapuschkin et al., 2017)


with pretraining

without pretraining

Strategy to solve the problem: focus on chin/beard, eyes & hair; without pretraining, however, the model overfits.

Gender classification

Application: Face analysis

Page 19

(Lapuschkin et al., 2017)


Application: Face analysis
Age classification: predictions

25-32 years old

60+ years old

pretraining on ImageNet

pretraining on IMDB-WIKI

Strategy to solve the problem: focus on the laughing …

Laughing speaks against 60+ (i.e., the model learned that old people do not laugh).

Page 20

(Seibold et al., 2018)


Application: Face analysis

(figure: two real persons morphed into one fake person)

Different training methods

- 1,900 images of different individuals
- pretrained VGG19 model
- different ways to train the models

1. 50% genuine images, 50% complete morphs
2. 50% genuine images, 10% complete morphs and 4 × 10% with one region morphed
3. 50% genuine images, 10% complete morphs, partial morphs with 10% each of one, two, three and four regions morphed
4. partial morphs with zero, one, two, three or four morphed regions; for two-class classification the last layer is reinitialized

Page 21

Application: Face analysis

Semantic attack on the model

Black box adversarial attack on the model

Page 22

(Seibold et al., 2018)


Application: Face analysis

Page 23

(Seibold et al., 2018)


Application: Face analysis (multiclass)

The network seems to compare different structures.

The network seems to identify "original" parts.

Different models have different strategies!

Page 24

Application of LRP: Learn New Representations

Page 25

(Arras et al. 2016 & 2017)

Application: Learn new Representations

(figure: document vector built as a relevance-weighted sum of word2vec embeddings)

document vector = relevance_1 · word2vec_1 + relevance_2 · word2vec_2 + …
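The construction above fits in a few lines; the relevance scores come from applying LRP to a trained classifier, while the embedding matrix and scores in the example below are toy values (function name assumed for illustration):

```python
import numpy as np

def document_vector(word_vectors, relevances):
    """Relevance-weighted sum of word embeddings -> document vector.

    word_vectors: (n_words, d) word2vec embeddings of the document's words
    relevances:   (n_words,) per-word LRP scores from a trained classifier
    """
    word_vectors = np.asarray(word_vectors)
    relevances = np.asarray(relevances)
    # weight each embedding by its relevance, then sum over words
    return (relevances[:, None] * word_vectors).sum(axis=0)
```

Uniform or TFIDF weights slot into the same formula; only the weighting changes, which is what the PCA comparison on the next slide varies.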

Page 26

(Arras et al. 2016 & 2017)

Application: Learn new Representations

(figure: 2D PCA projection of document vectors, uniform vs. TFIDF weighting)

Document vector computation is unsupervised (given we have a classifier).

Page 27

Application of LRP: Interpreting Scientific Data

Page 28

(figure: a DNN's prediction is explained with LRP)

(Sturm et al. 2016)

Brain-Computer Interfacing

Application: EEG Analysis

The neural network learns that imagined left-hand movement leads to desynchronization over the right sensorimotor cortex (and vice versa).

Page 29

(Sturm et al. 2016)

Application: EEG Analysis

Our neural networks are interpretable: we can see for every trial "why" it is classified the way it is.

Page 30

Difficulties in applying deep learning to fMRI:
- high-dimensional data (~100,000 voxels), but only few subjects
- results must be interpretable (key in neuroscience)

Our approach:
- recurrent neural networks (CNN + LSTM) for whole-brain analysis
- LRP allows us to interpret the results

Application: fMRI Analysis

(Thomas et al. 2018)

Dataset:
- 100 subjects from the Human Connectome Project
- N-back task (faces, places, tools and body parts)

Page 31

Application: fMRI Analysis

(Thomas et al. 2018)

Page 32

Our approach:
- classify & explain individual gait patterns
- important for understanding diseases such as Parkinson's

Application: Gait Analysis

(Horst et al. 2018)

Page 33

Application of LRP: Understand the Model & Obtain New Insights

Page 34

(Lapuschkin et al. 2016)

- Fisher Vector / SVM classifier
- PASCAL VOC 2007

Application: Understand the model

Page 35

(Lapuschkin et al. 2016)

Application: Understand the model

Page 36

Motion vectors can be extracted from the compressed video -> this allows very efficient analysis.

Application: Understand the model

(Srinivasan et al. 2017)

- Fisher Vector / SVM classifier
- model of Kantorov & Laptev (CVPR'14)
- Histogram of Flow, Motion Boundary Histogram
- HMDB51 dataset

Page 37

(Arras et al., 2017 & 2018)

Negative sentiment

movie review: ++, --


Application: Understand the model

- bidirectional LSTM model (Li'16)
- Stanford Sentiment Treebank dataset

The model understands negation!

How to handle multiplicative interactions?

Gate neurons indirectly affect the relevance distribution in the forward pass.

Page 38

Application: Understand the model

(Anders et al., 2018)

- 3-dimensional CNN (C3D)
- trained on Sports-1M
- explain predictions for 1000 videos from the test set

Page 39

Application: Understand the model

(Anders et al., 2018)

Page 40

Application: Understand the model

Observation: the explanations focus on the borders of the video, as if the model wants to watch more of it.

Page 41

Application: Understand the model

Idea: play the video in fast forward (without retraining); the classification accuracy then improves.

Page 42

(Becker et al., 2018)


Application: Understand the model

(figure: spectrograms of a female and a male speaker)

The model classifies gender based on the fundamental frequency and its immediate harmonics (see also Traunmüller & Eriksson 1995).

- AlexNet model
- trained on spectrograms
- spoken digits dataset (AudioMNIST)

Page 43

(Arras et al., 2018)


Application: Understand the model

- reimplementation of the model of Santoro et al. (2017)
- test accuracy of 91.0%
- CLEVR dataset

The model understands the question and correctly identifies the object of interest.

Page 44

Application: Understand the model

(Lapuschkin et al., in prep.)

Sensitivity Analysis vs. LRP

Sensitivity analysis does not focus on where the ball is, but on where the ball could be in the next frame.

LRP shows that the model tracks the ball.

Page 45

Application: Understand the model

(Lapuschkin et al., in prep.)

After 0 epochs / After 25 epochs / After 195 epochs

Page 46

Application: Understand the model

(Lapuschkin et al., in prep.)

Page 47

Application: Understand the model

(Lapuschkin et al., in prep.)

The model learns to:
1. track the ball
2. focus on the paddle
3. focus on the tunnel

Page 48

Page 49

Take Home Messages

Pages 50-57

Take Home Messages

High flexibility: Different LRP variants, free parameters

Page 58

CVPR 2018 Tutorial — W. Samek, G. Montavon & K.-R. Müller

More information

Tutorial Paper
Montavon et al., "Methods for Interpreting and Understanding Deep Neural Networks", Digital Signal Processing, 73:1-15, 2018

Keras Explanation Toolbox
https://github.com/albermax/innvestigate

Page 59

References

Tutorial / Overview Papers

G Montavon, W Samek, KR Müller. Methods for Interpreting and Understanding Deep Neural Networks. Digital Signal Processing, 73:1-15, 2018.

W Samek, T Wiegand, KR Müller. Explainable Artificial Intelligence: Understanding, Visualizing and Interpreting Deep Learning Models. ITU Journal: ICT Discoveries, Special Issue 1: The Impact of Artificial Intelligence (AI) on Communication Networks and Services, 1(1):39-48, 2018.

Methods Papers

S Bach, A Binder, G Montavon, F Klauschen, KR Müller, W Samek. On Pixel-wise Explanations for Non-Linear Classifier Decisions by Layer-wise Relevance Propagation. PLOS ONE, 10(7):e0130140, 2015.

G Montavon, S Bach, A Binder, W Samek, KR Müller. Explaining Nonlinear Classification Decisions with Deep Taylor Decomposition. Pattern Recognition, 65:211-222, 2017.

L Arras, G Montavon, KR Müller, W Samek. Explaining Recurrent Neural Network Predictions in Sentiment Analysis. EMNLP'17 Workshop on Computational Approaches to Subjectivity, Sentiment & Social Media Analysis (WASSA), 159-168, 2017.

A Binder, G Montavon, S Lapuschkin, KR Müller, W Samek. Layer-wise Relevance Propagation for Neural Networks with Local Renormalization Layers. Artificial Neural Networks and Machine Learning – ICANN 2016, Part II, Lecture Notes in Computer Science, Springer-Verlag, 9887:63-71, 2016.

J Kauffmann, KR Müller, G Montavon. Towards Explaining Anomalies: A Deep Taylor Decomposition of One-Class Models. arXiv:1805.06230, 2018.

Evaluating Explanations

W Samek, A Binder, G Montavon, S Lapuschkin, KR Müller. Evaluating the Visualization of What a Deep Neural Network Has Learned. IEEE Transactions on Neural Networks and Learning Systems, 28(11):2660-2673, 2017.

Page 60

References

Application to Text

L Arras, F Horn, G Montavon, KR Müller, W Samek. Explaining Predictions of Non-Linear Classifiers in NLP. Workshop on Representation Learning for NLP, Association for Computational Linguistics, 1-7, 2016.

L Arras, F Horn, G Montavon, KR Müller, W Samek. "What is Relevant in a Text Document?": An Interpretable Machine Learning Approach. PLOS ONE, 12(8):e0181142, 2017.

L Arras, G Montavon, KR Müller, W Samek. Explaining Recurrent Neural Network Predictions in Sentiment Analysis. EMNLP'17 Workshop on Computational Approaches to Subjectivity, Sentiment & Social Media Analysis (WASSA), 159-168, 2017.

L Arras, A Osman, G Montavon, KR Müller, W Samek. Evaluating and Comparing Recurrent Neural Network Explanation Methods in NLP. arXiv, 2018.

Application to Images & Faces

S Lapuschkin, A Binder, G Montavon, KR Müller, W Samek. Analyzing Classifiers: Fisher Vectors and Deep Neural Networks. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2912-2920, 2016.

S Bach, A Binder, KR Müller, W Samek. Controlling Explanatory Heatmap Resolution and Semantics via Decomposition Depth. IEEE International Conference on Image Processing (ICIP), 2271-2275, 2016.

F Arbabzadah, G Montavon, KR Müller, W Samek. Identifying Individual Facial Expressions by Deconstructing a Neural Network. Pattern Recognition - 38th German Conference, GCPR 2016, Lecture Notes in Computer Science, 9796:344-354, Springer International Publishing, 2016.

S Lapuschkin, A Binder, KR Müller, W Samek. Understanding and Comparing Deep Neural Networks for Age and Gender Classification. IEEE International Conference on Computer Vision Workshops (ICCVW), 1629-1638, 2017.

C Seibold, W Samek, A Hilsmann, P Eisert. Accurate and Robust Neural Networks for Security Related Applications Exampled by Face Morphing Attacks. arXiv:1806.04265, 2018.

Page 61

References

Application to Video

C Anders, G Montavon, W Samek, KR Müller. Understanding Patch-Based Learning by Explaining Predictions. arXiv:1806.06926, 2018.

V Srinivasan, S Lapuschkin, C Hellge, KR Müller, W Samek. Interpretable Human Action Recognition in Compressed Domain. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 1692-1696, 2017.

Application to Speech

S Becker, M Ackermann, S Lapuschkin, KR Müller, W Samek. Interpreting and Explaining Deep Neural Networks for Classification of Audio Signals. arXiv:1807.03418, 2018.

Application to Sciences

I Sturm, S Lapuschkin, W Samek, KR Müller. Interpretable Deep Neural Networks for Single-Trial EEG Classification. Journal of Neuroscience Methods, 274:141-145, 2016.

A Thomas, H Heekeren, KR Müller, W Samek. Interpretable LSTMs for Whole-Brain Neuroimaging Analyses. arXiv, 2018.

KT Schütt, F Arbabzadah, S Chmiela, KR Müller, A Tkatchenko. Quantum-Chemical Insights from Deep Tensor Neural Networks. Nature Communications, 8:13890, 2017.

A Binder, M Bockmayr, M Hägele et al. Towards Computational Fluorescence Microscopy: Machine Learning-Based Integrated Prediction of Morphological and Molecular Tumor Profiles. arXiv:1805.11178, 2018.

F Horst, S Lapuschkin, W Samek, KR Müller, WI Schöllhorn. What is Unique in Individual Gait Patterns? Understanding and Interpreting Deep Learning in Gait Analysis. arXiv:1808.04308, 2018.


Page 62

References

Software

M Alber, S Lapuschkin, P Seegerer, M Hägele, KT Schütt, G Montavon, W Samek, KR Müller, S Dähne, PJ Kindermans. iNNvestigate Neural Networks! arXiv:1808.04260, 2018.

S Lapuschkin, A Binder, G Montavon, KR Müller, W Samek. The Layer-wise Relevance Propagation Toolbox for Artificial Neural Networks. Journal of Machine Learning Research, 17(114):1-5, 2016.