Loss-based Visual Learning with Weak Supervision
Loss-based Visual Learning with Weak Supervision
M. Pawan Kumar
Joint work with Pierre-Yves Baudin, Danny Goodman,
Puneet Kumar, Nikos Paragios, Noura Azzabou, Pierre Carlier
SPLENDID
Nikos Paragios, Equipe Galen, INRIA Saclay
Daphne Koller, DAGS, Stanford

Machine Learning: Weak Annotations, Noisy Annotations
Applications: Computer Vision, Medical Imaging

Self-Paced Learning for Exploiting Noisy, Diverse or Incomplete Data

2 Visits from INRIA to Stanford; 1 Visit from Stanford to INRIA (ICML 2012)
3 Visits Planned (MICCAI 2013)
Medical Image Segmentation
MRI Acquisitions of the thigh
Segments correspond to muscle groups
Random Walks Segmentation
Probabilistic segmentation algorithm
Computationally efficient
Interactive segmentation (L. Grady, 2006)
Automated shape-prior-driven segmentation (L. Grady, 2005; Baudin et al., 2012)
Random Walks Segmentation
y(i,s): Probability that voxel ‘i’ belongs to segment ‘s’
x: Medical acquisition
min_y E(x,y) = y^T L(x) y + w_shape ||y - y_0||^2

L(x): positive semi-definite Laplacian matrix
y_0: shape prior on the segmentation
w_shape: hand-tuned parameter of the RW algorithm
The objective is convex in y.
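Because the objective is a convex quadratic, its minimizer has a closed form: setting the gradient to zero gives the linear system (L(x) + w_shape I) y = w_shape y_0. A minimal sketch on a hypothetical 4-voxel chain graph (toy data, not the authors' implementation):

```python
import numpy as np

# Minimising E(y) = y^T L y + w_shape * ||y - y0||^2 is a convex quadratic,
# so the optimum solves (L + w_shape * I) y = w_shape * y0.

# Hypothetical 4-voxel chain graph: Laplacian L = D - A
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A

w_shape = 2.0
y0 = np.array([1.0, 1.0, 0.0, 0.0])  # toy shape prior: first half foreground

y = np.linalg.solve(L + w_shape * np.eye(4), w_shape * y0)
# y is a smoothed version of the prior: values stay in [0, 1] and
# decay smoothly across the boundary between voxels 2 and 3
```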
Random Walks Segmentation
Several Laplacians: L(x) = Σ_α w_α L_α(x)

Several shape and appearance priors: Σ_β w_β ||y - y_β||^2
Hand-tuning large number of parameters is onerous
Parameter Estimation
Learn the best parameters from training data
Σ_α w_α y^T L_α(x) y + Σ_β w_β ||y - y_β||^2
Parameter Estimation
Learn the best parameters from training data
w^T Ψ(x,y)
w is the set of all parameters
Ψ(x,y) is the joint feature vector of input and output
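Since the energy is linear in the parameters w, Ψ(x,y) can be built by stacking one scalar per energy term. A small illustrative sketch (the helper name `joint_feature` and the toy matrices are assumptions, not from the talk):

```python
import numpy as np

# The combined energy
#   E(x, y) = sum_a w_a * y^T L_a(x) y + sum_b w_b * ||y - y_b||^2
# is linear in w, so it can be written as w^T Psi(x, y), where Psi
# stacks one scalar per Laplacian / prior term.

def joint_feature(laplacians, priors, y):
    """Psi(x, y): one entry per energy term."""
    quad = [y @ L @ y for L in laplacians]            # y^T L_a y
    prox = [np.sum((y - yb) ** 2) for yb in priors]   # ||y - y_b||^2
    return np.array(quad + prox)

# Toy 2-voxel problem with one Laplacian and one prior
laplacians = [np.array([[1., -1.], [-1., 1.]])]
priors = [np.array([1.0, 0.0])]
w = np.array([0.5, 2.0])   # one weight per energy term
y = np.array([0.8, 0.2])

energy = w @ joint_feature(laplacians, priors, y)
```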
Outline

• Parameter Estimation
  – Supervised Learning
  – Hard vs. Soft Segmentation
  – Mathematical Formulation
• Optimization
• Experiments
• Related and Future Work in SPLENDID
Supervised Learning

Dataset of segmented MRIs
Sample xk, voxel i
z_k(i,s) = 1 if s is the ground-truth segment of voxel i, and 0 otherwise
Probabilistic segmentation??
Supervised Learning
Energy of segmentation ŷ minus energy of the ground truth z_k:

w^T Ψ(x_k, ŷ) - w^T Ψ(x_k, z_k) ≥ Δ(ŷ, z_k) - ξ_k

min_w Σ_k ξ_k + λ||w||^2
Δ(ŷ, z_k) = fraction of incorrectly labeled voxels

Taskar et al., 2003; Tsochantaridis et al., 2004
Structured-output Support Vector Machine
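For intuition, the slack ξ_k is the largest loss-augmented margin violation over all segmentations ŷ. A toy sketch that simply enumerates a handful of candidate segmentations (the stand-in Ψ and the candidate set are hypothetical; real solvers use cutting planes or loss-augmented inference instead):

```python
import numpy as np

# For one sample, the tightest slack is
#   xi_k = max_yhat [ Delta(yhat, z_k) - (w^T Psi(x_k, yhat) - w^T Psi(x_k, z_k)) ]
# clamped at zero. Psi and the candidate list below are toy stand-ins.

def delta(yhat, z):
    """Fraction of incorrectly labeled voxels."""
    return np.mean(yhat != z)

def slack(w, psi, z, candidates):
    psi_gt = psi(z)
    margins = [delta(yh, z) - (w @ psi(yh) - w @ psi_gt) for yh in candidates]
    return max(0.0, max(margins))

# Toy problem: 3 voxels, binary labels; Psi counts disagreements with a
# hypothetical prior, plus the number of foreground voxels
prior = np.array([1, 1, 0])
psi = lambda y: np.array([np.sum(y != prior), np.sum(y)], dtype=float)
z = np.array([1, 0, 0])
candidates = [np.array(c) for c in [(0, 0, 0), (1, 0, 0), (1, 1, 0), (1, 1, 1)]]

w = np.array([1.0, 0.1])
xi = slack(w, psi, z, candidates)
```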
Supervised Learning
Convex with several efficient algorithms
No parameter provides ‘hard’ segmentation
We only need a correct ‘soft’ probabilistic segmentation
Outline

• Parameter Estimation
  – Supervised Learning
  – Hard vs. Soft Segmentation
  – Mathematical Formulation
• Optimization
• Experiments
• Related and Future Work in SPLENDID
Hard vs. Soft Segmentation

Hard segmentation z_k
Don’t require 0-1 probabilities
Hard vs. Soft Segmentation

Soft segmentation y_k: compatible with z_k (binarizing y_k gives z_k)
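The compatibility condition can be checked directly: y_k is compatible with z_k when binarizing y_k (taking the per-voxel argmax over segments) recovers z_k. A minimal sketch with a hypothetical 2-voxel, 2-segment example:

```python
import numpy as np

# y_soft is compatible with z_hard when binarizing it (per-voxel argmax
# over segments) recovers the hard segmentation z_hard.
def compatible(y_soft, z_hard):
    """y_soft: (voxels, segments) probabilities; z_hard: (voxels,) labels."""
    return bool(np.all(np.argmax(y_soft, axis=1) == z_hard))

# Hypothetical 2-voxel, 2-segment soft segmentation
y = np.array([[0.7, 0.3],
              [0.4, 0.6]])
z = np.array([0, 1])
print(compatible(y, z))  # True: the argmax per voxel matches z
```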
Hard vs. Soft Segmentation

Soft segmentation y_k ∈ C(z_k): compatible with z_k

Which y_k to use? The y_k provided by the best parameter, which is itself unknown.
Outline

• Parameter Estimation
  – Supervised Learning
  – Hard vs. Soft Segmentation
  – Mathematical Formulation
• Optimization
• Experiments
• Related and Future Work in SPLENDID
Learning with Hard Segmentation
w^T Ψ(x_k, ŷ) - w^T Ψ(x_k, z_k) ≥ Δ(ŷ, z_k) - ξ_k

min_w Σ_k ξ_k + λ||w||^2
Learning with Soft Segmentation
w^T Ψ(x_k, ŷ) - w^T Ψ(x_k, y_k) ≥ Δ(ŷ, z_k) - ξ_k

min_w Σ_k ξ_k + λ||w||^2
Learning with Soft Segmentation
w^T Ψ(x_k, ŷ) - min_{y_k ∈ C(z_k)} w^T Ψ(x_k, y_k) ≥ Δ(ŷ, z_k) - ξ_k

min_w Σ_k ξ_k + λ||w||^2

Smola et al., 2005; Felzenszwalb et al., 2008; Yu et al., 2009

Latent Support Vector Machine
Outline

• Parameter Estimation
• Optimization
• Experiments
• Related and Future Work in SPLENDID
Latent SVM
Difference-of-convex problem
min_w Σ_k ξ_k + λ||w||^2

w^T Ψ(x_k, ŷ) - min_{y_k ∈ C(z_k)} w^T Ψ(x_k, y_k) ≥ Δ(ŷ, z_k) - ξ_k
Concave-Convex Procedure (CCCP)
CCCP
Repeat until convergence:

1. Estimate soft segmentation: y_k* = argmin_{y_k ∈ C(z_k)} w^T Ψ(x_k, y_k)
   (efficient optimization using dual decomposition)
2. Update parameters: min_w Σ_k ξ_k + λ||w||^2
   s.t. w^T Ψ(x_k, ŷ) - w^T Ψ(x_k, y_k*) ≥ Δ(ŷ, z_k) - ξ_k
   (convex optimization)
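The alternation above can be sketched schematically; the helpers `impute_soft_seg` and `solve_ssvm` are hypothetical stand-ins for the two steps, not the authors' code:

```python
import numpy as np

# Schematic CCCP loop:
#   1. fix w, impute y_k* = argmin over y in C(z_k) of w^T Psi(x_k, y)
#   2. fix the y_k*, solve the resulting convex structured-SVM problem for w
def cccp(samples, w0, impute_soft_seg, solve_ssvm, max_iter=20, tol=1e-6):
    w = w0
    for _ in range(max_iter):
        imputed = [impute_soft_seg(w, x, z) for x, z in samples]
        w_new = solve_ssvm(samples, imputed)
        if np.linalg.norm(w_new - w) < tol:  # parameters stopped moving
            return w_new
        w = w_new
    return w
```

With a fixed imputation the loop reaches a fixed point after one repeated solve, which is the convergence check exercised below.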
Outline

• Parameter Estimation
• Optimization
• Experiments
• Related and Future Work in SPLENDID
Dataset

30 MRI volumes of the thigh
Dimensions: 224 x 224 x 100
4 muscle groups + background
80% for training, 20% for testing
Parameters

4 Laplacians
2 shape priors
1 appearance prior
Grady, 2005; Baudin et al., 2012
Baselines

Hand-tuned parameters
Structured-output SVM
Soft segmentation based on signed distance transform
Hard segmentation
Results
Small but statistically significant improvement
Outline

• Parameter Estimation
• Optimization
• Experiments
• Related and Future Work in SPLENDID
Loss-based Learning
x: Input, a: Annotation
Loss-based Learning
x: Input, a: Annotation, h: Hidden information
a = “jumping”, h = “soft-segmentation”
Loss-based Learning
Annotation Mismatch: min Σ_k Δ(correct a_k, predicted a_k)
x: Input, a: Annotation, h: Hidden information
a = “jumping”, h = “soft-segmentation”
Loss-based Learning
Annotation Mismatch: min Σ_k Δ(correct a_k, predicted a_k)
Small improvement using small medical dataset
Loss-based Learning
Annotation Mismatch: min Σ_k Δ(correct a_k, predicted a_k)
Large improvement using large vision dataset
Loss-based Learning
Output Mismatch: min Σ_k Δ(correct {a_k, h_k}, predicted {a_k, h_k}), modeled using a distribution
Kumar, Packer and Koller, ICML 2012
Inexpensive annotation
No experts required
Richer models can be learnt
Questions?