Outline: Motivation, Algorithm, Experiments

Online, semi-supervised learning for long-term interaction with object recognition systems

Alex Teichman and Sebastian Thrun
Department of Computer Science, Stanford University
July 12, 2012

Alex Teichman and Sebastian Thrun Online, semi-supervised learning // RSS2012
The big picture

What is the desired user interface for object recognition?
Want autonomy with the option for user input.
Online, active, semi-supervised learning...
Experimental setup

Static train/test framework: rigorous evaluation and comparison.
Occasional user interaction.
Infinite unlabeled data stream.
We don't want to overfit to this framework!
Object recognition approaches: sliding window & tracking-by-detection

Spinello and Arras
Spinello, Stachniss, and Burgard
Object recognition approaches: semantic segmentation

Douillard et al.
Combining sliding windows and semantic segmentation: Lai et al.
Object recognition approaches: keypoint matching

Solutions in Perception Challenge
Collet et al.
Problem decomposition

Segmentation — Tracking — Track classification
Descriptors

Oriented bounding box size
Spin images
HOG descriptors computed on virtual orthographic camera images
29 different descriptor spaces
x ∈ R^(~4000)
Tracks
Tracking-based semi-supervised learning

Train classifier → Classify unlabeled tracks → Induct training examples

A large, automatically-labeled background dataset is provided. Often this is easy to collect.
Only positive examples are inducted during semi-supervised learning.
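The loop above (train a classifier, classify unlabeled tracks, induct confident positives back into the training set) can be sketched as plain self-training. Everything below, including the toy nearest-mean "classifier" and its log-odds proxy on 1-D descriptors, is an illustrative stand-in, not the authors' implementation:

```python
# Toy self-training loop: only confidently-positive unlabeled examples
# are inducted; the negative (background) set is fixed up front.

def log_odds(pos_mean, neg_mean, x):
    # Crude confidence proxy: how much closer x is to the positive mean.
    return abs(x - neg_mean) - abs(x - pos_mean)

def self_train(pos, neg, unlabeled, threshold, epochs=3):
    pos, neg = list(pos), list(neg)
    for _ in range(epochs):
        pos_mean = sum(pos) / len(pos)
        neg_mean = sum(neg) / len(neg)
        remaining = []
        for x in unlabeled:
            if log_odds(pos_mean, neg_mean, x) > threshold:
                pos.append(x)        # induct positives only
            else:
                remaining.append(x)  # leave uncertain examples alone
        unlabeled = remaining
    return pos, neg

# One hand-labeled positive, two background examples, four unlabeled.
pos, neg = self_train([5.0], [0.0, 0.2], [4.5, 4.8, 0.1, 2.5], threshold=1.0)
```

The key property the talk relies on is visible even in this toy: the negative set never grows, and examples below the confidence threshold are simply left in the unlabeled pool.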
Tracking-based semi-supervised learning

[Plot: accuracy (%) vs. number of hand-labeled tracks (0-700), comparing supervised and semi-supervised.]

Confusion matrix (rows: predictions; columns: labels):

             car   pedestrian   bicyclist   background
car          792            0           0           17
pedestrian     0           86           0            0
bicyclist      0            4         138            2
background    55           22           2         4917

Semi-supervised method given millions of additional unlabeled examples.
Track classification accuracy is reported. (This does not include segmentation and tracking errors.)
Tracking-based semi-supervised learning

Three hand-labeled training examples of each class + millions of unlabeled examples were used to generate these results.
Outlines are tracked objects. Track classifications are computed offline.
White outlines are tracked objects classified as neither person, bicyclist, nor car.
Offline to online

Train classifier → Classify unlabeled tracks → Induct training examples
Modularity

Segmentation: connected components, background subtraction
Tracking: Kalman filters
Classification: boosting; logistic regression with stochastic gradient descent
Discriminative segmentation and tracking: WAFR 2012
Logistic regression & stochastic gradient descent

Parametric; fast to train and evaluate; easy to incrementally train.

x ∈ R^n, y ∈ {-1, +1}

P(y | x) = 1 / (1 + exp(-y w^T x))

maximize_w  ∏_m P(y^(m) | x^(m))
⇔ minimize_w  ∑_{m=1}^M log(1 + exp(-y^(m) w^T x^(m)))

M might be giant, or you might not have access to all the examples at one time.
Stochastic gradient descent: take gradient steps using just small subsets of the data.
... but this fails badly if applied without thinking.
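The minimization above lends itself to plain stochastic gradient descent, stepping on one example at a time. A minimal sketch, with learning rate, epoch count, and toy data chosen purely for illustration (the talk's own trainer is more careful, as the later slides on step size and buffers show):

```python
import math
import random

def sgd_logistic(data, n_dims, lr=0.1, epochs=50, seed=0):
    """Minimize sum_m log(1 + exp(-y^(m) w^T x^(m))) by SGD."""
    rng = random.Random(seed)
    data = list(data)          # avoid mutating the caller's list
    w = [0.0] * n_dims
    for _ in range(epochs):
        rng.shuffle(data)
        for x, y in data:
            margin = y * sum(wi * xi for wi, xi in zip(w, x))
            # d/dw log(1 + exp(-margin)) = -y * x * sigmoid(-margin)
            g = -y / (1.0 + math.exp(margin))
            for i, xi in enumerate(x):
                w[i] -= lr * g * xi
    return w

def prob_pos(w, x):
    # P(y = +1 | x) under the logistic model.
    return 1.0 / (1.0 + math.exp(-sum(wi * xi for wi, xi in zip(w, x))))

# Toy separable data: label is the sign of the first coordinate;
# the second coordinate acts as a constant bias feature.
data = [([1.0, 1.0], 1), ([2.0, 1.0], 1), ([-1.0, 1.0], -1), ([-2.0, 1.0], -1)]
w = sgd_logistic(data, n_dims=2)
```

Each step touches a single example, which is what makes the model cheap to train incrementally as new examples are inducted from the stream.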
Linear models

log [ P(Y = 1 | x) / P(Y = -1 | x) ] ≈ w^T x

[Histograms of positive vs. negative instance counts for three example features: length of the long side of the object, spin image element value, and object height.]
Feature transform

[The same feature histograms, now discretized: each scalar feature x is expanded into bin indicators x0, x1, x2, ...]

R^4000 → {0, 1}^400000
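One way to read the R^4000 → {0,1}^400000 transform is that each scalar feature is discretized into roughly 100 bins and replaced by a one-hot bin indicator, which lets the linear model fit an arbitrary 1-D response per original feature. A sketch under that assumption, with fixed evenly-spaced bins (the exact binning scheme is not specified in the transcript):

```python
def binarize(x, lo, hi, n_bins=100):
    """Expand each scalar feature into a one-hot bin indicator.

    With a separate weight per bin, a linear model over the output
    can represent any piecewise-constant function of each feature.
    """
    out = []
    for v in x:
        idx = int((v - lo) / (hi - lo) * n_bins)
        idx = min(max(idx, 0), n_bins - 1)   # clamp out-of-range values
        one_hot = [0] * n_bins
        one_hot[idx] = 1
        out.extend(one_hot)
    return out

# Two scalar features, ten bins each -> 20 binary features.
z = binarize([0.5, 2.3], lo=0.0, hi=10.0, n_bins=10)
```

With ~100 bins per dimension, the ~4000-dimensional descriptor expands to the ~400,000 binary features quoted on the slide.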
Supervised performance

[Plot: accuracy vs. number of hand-labeled tracks (0-4500), fully-supervised baseline.]

Linear model reaches a maximum of 94.0%, fully-supervised boosting 98.7%.
Prediction stability

[Plot: error vs. number of frames looked at (up to ~8x10^7), with long periods where no cars are seen marked.]

Fully-supervised, looping through ~7M training examples.
Can't do semi-supervised learning if you forget about objects after not seeing them for a while!
Training buffers

D_S is the stream of examples seen so far.
D_C is a new chunk of data.
Want to maintain D_B, a fixed-size buffer of examples which is representative of D_S.
Resample from D_B and D_C proportionally, relative to how much of the total stream they represent.
Training buffers

[Plots: error vs. number of frames looked at, without buffers (long periods where no cars are seen) and with buffers (cars seen after the first ~6M examples; stable and improving performance).]
Confidence thresholds

[Histogram: number of test examples vs. log odds, split into correct and incorrect, with percent correct overlaid. Test set track results.]

Need to decide when to induct new tracks as positive examples of objects.
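One plausible way to learn an induction threshold from holdout results like the histogram above is to take the lowest log-odds value above which holdout classifications meet a target precision. The function and the target-precision parameter below are assumptions for illustration, not the authors' exact threshold-update rule:

```python
def pick_threshold(scored, target_precision=0.99):
    """scored: list of (log_odds, correct) pairs from a holdout set.

    Return the smallest threshold t such that examples scoring above t
    are correct at least target_precision of the time, or None if no
    threshold achieves that.
    """
    scored = sorted(scored)            # ascending by log odds
    best = None
    correct = total = 0
    # Sweep from the most confident end downward, tracking the
    # cumulative precision of everything above the candidate cutoff.
    for log_odds, ok in reversed(scored):
        total += 1
        correct += 1 if ok else 0
        if correct / total >= target_precision:
            best = log_odds
    return best

scored = [(-2.0, False), (0.5, False), (1.0, True), (3.0, True), (5.0, True)]
t = pick_threshold(scored, target_precision=1.0)
```

Because the precision is cumulative over everything above the cutoff, lowering the threshold only counts as an improvement if the newly admitted examples don't drag the precision below the target.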
Variable confidences

[Histograms of test set track results vs. log odds at two scales: roughly -50 to 10, and on the order of +/-10^7. Confidences vary over an enormous range.]
Confidence threshold learning

[Histograms vs. log odds: test set track results and holdout set frame results, each shown at two scales.]
Algorithm sketch

Hand-labeled training data (x100) and auto-labeled background training data update the classifier.
Hand-labeled holdout data (x100) and auto-labeled background holdout data update the thresholds.
Unlabeled data from the stream: inducted examples are split half and half between training buffers and holdout buffers (one per class).
Online tracking-based semi-supervised learning

[Plot: accuracy and number of hand-labeled tracks vs. number of unlabeled frames looked at (up to ~2x10^7).]

Additional hand-labeled examples can break it out of local minima.
~8M unique unlabeled examples.
Online tracking-based semi-supervised learning

[Plot: precision, recall, and number of hand-labeled tracks vs. number of unlabeled frames looked at; non-car examples (gray) and car examples (green) marked.]

Given lots of negative examples, recall initially drops, then recovers; overall accuracy improves.
Online tracking-based semi-supervised learning

[Plot: accuracy and number of hand-labeled tracks vs. number of unlabeled frames looked at (up to ~1.2x10^8).]

Results after running for ~1 week. Total hand-labeled tracks: 108, vs. ~4000 needed for good performance in the fully-supervised case.
Max accuracy when training on automatically-labeled background and all hand-labeled tracks: 90.1%.
Annotating

[Histograms vs. log odds: holdout set frame results and test set track results.]

The holdout set can tell you where to look for incorrect examples.
Adding classes later

[Plot: accuracy and number of hand-labeled tracks vs. number of unlabeled frames looked at (up to ~1.4x10^8), annotated: adding pedestrians and bicyclists; breaking existing classes out of local minima.]

Max accuracy when training on automatically-labeled background and all hand-labeled tracks: 90.8%.
Adding classes later

[Plot: precision, recall, and number of hand-labeled tracks vs. number of unlabeled frames looked at.]
Causes of failure while developing this

Memory fragmentation
A combined training buffer rather than one per class
A constant stochastic gradient step size
Not weighting the hand-labeled data
Future work: dual induction

[Histograms vs. log odds: test set track results and holdout set frame results, each shown at two scales.]
Future work

Segmentation: connected components, background subtraction
Tracking: Kalman filters
Classification: boosting; logistic regression with stochastic gradient descent
Discriminative segmentation and tracking