Comments on last lecture. - EECS at UC Berkeley (satishr/cs270/sp...)

Comments on last lecture.

It is easy to come up with several Nash equilibria for non-zero-sum games.

Is the game framework only interesting in some infinite horizon?

No. Minimize the worst-case expected loss: the best defense. Best-respond to any prior distribution on the opponent: the best offense.

Rational players should play this way!

"Infinite horizon" is just an assumption of rationality.
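The "best defense" line above can be sketched numerically. A minimal example, assuming a hypothetical 2x2 zero-sum loss matrix (matching pennies, which is not from the slides): grid-search the row player's mixed strategy that minimizes the worst expected loss over the opponent's pure responses.

```python
# Hypothetical loss matrix for the row player: LOSS[i][j] is the loss
# when row plays i and column plays j (matching pennies).
LOSS = [[1.0, -1.0],
        [-1.0, 1.0]]

def worst_case_loss(p):
    """Worst expected loss when row plays 0 with probability p,
    maximized over the column player's pure responses."""
    return max(p * LOSS[0][j] + (1 - p) * LOSS[1][j] for j in range(2))

# Grid-search the minimax ("best defense") strategy.
best_p = min((i / 1000 for i in range(1001)), key=worst_case_loss)
print(best_p, worst_case_loss(best_p))  # 0.5 0.0
```

The minimizer is the uniform strategy with value 0: no prior on the opponent can make the row player do worse than this guarantee.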


Page 12: Comments on last lecture. - EECS at UC Berkeleysatishr/cs270/sp... · Comments on last lecture. Easy to come up with several Nash for non-zero-sum games. Is the game framework only

Today

Finish Maximum Weight Matching Algorithm.

Exact algorithm with dueling players.

Multiplicative Weights Framework.Very general framework of toll/congestion algorithm.


Matching/Weighted Vertex Cover

Maximum Weight Matching.

Given a bipartite graph G = (U, V, E) with edge weights w : E → R, find a maximum weight matching. A matching is a set of edges where no two share an endpoint.

Minimum Weight Cover.

Given a bipartite graph G = (U, V, E) with edge weights w : E → R, find a vertex cover function of minimum total value: a function p : U ∪ V → R where for every edge e = (u, v), p(u) + p(v) ≥ w(e). Minimize ∑_{v ∈ U ∪ V} p(v).

A matching M and a cover p are both optimal iff every e = (u, v) ∈ M is tight (Defn: tight edge, w(e) = p(u) + p(v)) and M is a perfect matching.
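The two problems bound each other: any matching's weight is at most any feasible cover's value (weak duality), and equality certifies that both are optimal. A minimal sketch on a hypothetical 2x2 instance (the graph, weights, and prices are illustrative, not from the slides):

```python
# Hypothetical bipartite instance.
U, V = ["u1", "u2"], ["v1", "v2"]
w = {("u1", "v1"): 3, ("u1", "v2"): 1, ("u2", "v1"): 2, ("u2", "v2"): 4}

def is_feasible_cover(p):
    # Cover constraint: p(u) + p(v) >= w(e) for every edge e = (u, v).
    return all(p[u] + p[v] >= wt for (u, v), wt in w.items())

def cover_value(p):
    return sum(p[x] for x in U + V)

def matching_weight(M):
    return sum(w[e] for e in M)

p = {"u1": 3, "u2": 4, "v1": 0, "v2": 0}  # prices: max incident weight on U, 0 on V
M = [("u1", "v1"), ("u2", "v2")]          # a perfect matching of tight edges

print(matching_weight(M), cover_value(p))  # 7 7
```

Here every matched edge is tight and M is perfect, so matching weight and cover value coincide at 7, certifying both as optimal, exactly the optimality condition stated above.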


Maximum Weight Matching

Goal: a perfect matching on tight edges.

[Figure: bipartite graph with prices p(·); across the cut, prices become p(·)−δ on the S ∩ U side and p(·)+δ on the S ∩ V side.]

Algorithm

Start with an empty matching and a feasible cover function p(·). Add tight edges to the matching, using alternating/augmenting paths of tight edges ("maximum matching algorithm").

No augmenting path? Then there is a cut (S, T) in the directed graph of tight edges! All edges across the cut are not tight (loose). Non-tight edges leaving the cut go from S ∩ U to T ∩ V.

Lower prices in S ∩ U and raise prices in S ∩ V: all explored edges stay tight, and backward edges stay feasible...

... and we get a new tight edge! What's delta? A non-tight edge has p(u) + p(v) > w(e), so δ = min_{e=(u,v) ∈ (S∩U)×(T∩V)} (p(u) + p(v) − w(e)).
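The price-update step can be sketched on a tiny hypothetical instance (the graph, prices, and cut below are assumed for illustration, not from the slides). δ is the smallest slack on a non-tight edge from S ∩ U to T ∩ V; subtracting it on the S ∩ U side and adding it on the S ∩ V side keeps feasibility and makes at least one crossing edge tight:

```python
# Hypothetical instance: (u1, v1) is tight, (u1, v2) is loose.
w = {("u1", "v1"): 5, ("u1", "v2"): 3}   # edge weights
p = {"u1": 5, "v1": 0, "v2": 1}          # feasible cover function
# Cut after a failed search for an augmenting path: v1 was reached
# via a tight edge, so it lies in S; v2 was not reached.
S_U, S_V, T_V = {"u1"}, {"v1"}, {"v2"}

# delta = minimum slack over non-tight edges from S ∩ U to T ∩ V.
delta = min(p[u] + p[v] - w[(u, v)]
            for (u, v) in w if u in S_U and v in T_V)   # slack of (u1, v2): 5+1-3 = 3

for u in S_U: p[u] -= delta   # lower prices on the U side of S
for v in S_V: p[v] += delta   # raise prices on the V side of S

# Explored edge (u1, v1) stays tight: (5-3) + (0+3) = 5 = w.
# Crossing edge (u1, v2) becomes tight: 2 + 1 = 3 = w.
print(delta, p)  # 3 {'u1': 2, 'v1': 3, 'v2': 1}
```

Matched edges inside S keep one endpoint at −δ and one at +δ, so their tightness is preserved, which is why the search tree survives the update.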


Some details/Runtime

Add 0-weight edges, so that the optimal solution contains a perfect matching.

Beginning "Matcher" solution: M = {}. Feasible! Value = 0.

Beginning "Coverer" solution: p(u) = the maximum weight of an edge incident to u for u ∈ U, and 0 otherwise.

Main work: a breadth-first search from the unmatched nodes finds the cut; then update the prices (find the minimum δ).

Simple implementation: each BFS either augments or adds a node to S in the next cut, so O(n) iterations per augmentation and O(n) augmentations.

O(n²m) time.
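The starting "Coverer" solution above is feasible by construction: every edge (u, v) has p(u) at least w(e) and p(v) = 0. A minimal sketch of the initialization on a hypothetical instance:

```python
# Hypothetical bipartite instance (not from the slides).
w = {("u1", "v1"): 3, ("u1", "v2"): 1, ("u2", "v2"): 4}
U = {"u1", "u2"}
V = {"v1", "v2"}

# Initial cover: p(u) = max weight of an edge incident to u; p(v) = 0.
p = {v: 0 for v in V}
for u in U:
    p[u] = max(wt for (uu, _), wt in w.items() if uu == u)

# Feasibility: p(u) + p(v) >= w(e) holds for every edge.
print(p)  # {'v1': 0, 'v2': 0, 'u1': 3, 'u2': 4}
```

This pairs with the empty matching as the starting point of the primal-dual loop: the Matcher starts at value 0, the Coverer at ∑_u p(u), and the algorithm closes the gap from both sides.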

Page 44: Comments on last lecture. - EECS at UC Berkeleysatishr/cs270/sp... · Comments on last lecture. Easy to come up with several Nash for non-zero-sum games. Is the game framework only

Some details/Runtime

Add 0 value edges, so that optimal solution contains perfectmatching.

Beginning “Matcher” Solution: M = {}.

Feasible! Value = 0.

Beginning “Coverer” Solution: p(u) = maximum incidentedge for u ∈ U, 0 otherwise.

Main Work:breadth first search from unmatched nodes finds cut.

Update prices (find minimum delta.)

Simple Implementation:Each bfs either augments or adds node to S in next cut.O(n) iterations per augmentation.O(n) augmentations.

O(n2m) time.

Page 45: Comments on last lecture. - EECS at UC Berkeleysatishr/cs270/sp... · Comments on last lecture. Easy to come up with several Nash for non-zero-sum games. Is the game framework only

Some details/Runtime

Add 0 value edges, so that optimal solution contains perfectmatching.

Beginning “Matcher” Solution: M = {}.Feasible!

Value = 0.

Beginning “Coverer” Solution: p(u) = maximum incidentedge for u ∈ U, 0 otherwise.

Main Work:breadth first search from unmatched nodes finds cut.

Update prices (find minimum delta.)

Simple Implementation:Each bfs either augments or adds node to S in next cut.O(n) iterations per augmentation.O(n) augmentations.

O(n2m) time.


Example

[Figure: a weighted bipartite graph with left vertices u, v, w, x and right vertices a, b, c, d; edge weights and the evolving prices (values 0 to 3) are shown at each step, with the final matched edges marked.]

All matched edges tight. Perfect matching. Feasible price function. Values the same. Optimal!

Notice:
no weights on the right problem.
retain previous matching through price changes.
retain edges in failed search through price changes.
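The optimality argument on this slide (feasible prices plus all matched edges tight implies equal values, hence optimality by weak duality) is easy to check mechanically. A minimal sketch, with hypothetical names:

```python
def certify_optimal(w, match_u, pu, pv):
    """Weak-duality certificate from the slide: prices are feasible
    (p(u) + p(v) >= w(u, v) on every edge), every matched edge is tight
    (equality holds), and the matching's value equals the total price --
    so the matching and the price function are both optimal."""
    n = len(w)
    feasible = all(pu[u] + pv[v] >= w[u][v]
                   for u in range(n) for v in range(n))
    tight = all(pu[u] + pv[match_u[u]] == w[u][match_u[u]]
                for u in range(n))
    value = sum(w[u][match_u[u]] for u in range(n))
    return feasible and tight and value == sum(pu) + sum(pv)
```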



The multiplicative weights framework.

Just finished the analysis of the deterministic weighted majority algorithm in class.
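That analysis bounds the algorithm's mistakes M by (m + log₂ n)/log₂(4/3) ≈ 2.41(m + log₂ n), where m is the best expert's mistake count: each algorithm mistake shrinks the total weight by at least a 3/4 factor, while the best expert retains weight 2⁻ᵐ. A runnable sketch (the names are mine, not the lecture's):

```python
import math
import random

def weighted_majority(advice_rounds, truths, n):
    """Deterministic weighted majority: predict with the weighted vote,
    then halve the weight of every expert that was wrong this round."""
    w = [1.0] * n
    alg_mistakes = 0
    expert_mistakes = [0] * n
    for advice, truth in zip(advice_rounds, truths):
        vote_for_1 = sum(wi for wi, a in zip(w, advice) if a == 1)
        guess = 1 if 2 * vote_for_1 >= sum(w) else 0
        if guess != truth:
            alg_mistakes += 1
        for i, a in enumerate(advice):
            if a != truth:
                w[i] *= 0.5                 # penalize wrong experts
                expert_mistakes[i] += 1
    return alg_mistakes, min(expert_mistakes)

# The mistake bound holds for any input sequence, random or adversarial.
random.seed(1)
n, T = 10, 300
advice_rounds = [[random.randint(0, 1) for _ in range(n)] for _ in range(T)]
truths = [random.randint(0, 1) for _ in range(T)]
M, m = weighted_majority(advice_rounds, truths, n)
bound = (m + math.log2(n)) / math.log2(4 / 3)
```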


Expert’s framework.

n experts.

Every day, each offers a prediction.

“Rain” or “Shine.”

Whose advice do you follow?

“The one who is correct most often.”

Sort of.

How well do you do?


Infallible expert.

One of the experts is infallible!

Your strategy?

Choose any expert that has not made a mistake!

How long to find perfect expert?

Maybe... never! You may never see a mistake.

Better model?

How many mistakes could you make? Mistake Bound.

(A) 1 (B) 2 (C) log n (D) n−1

Adversary designs setup to watch who you choose, and make that expert make a mistake.

n−1!
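The adversary argument is concrete enough to simulate (a toy sketch; `adversary_game` is my name, not the lecture's):

```python
def adversary_game(n):
    """Slide's adversary argument for 'follow any not-yet-wrong expert':
    each round the adversary makes exactly the expert you follow err
    (everyone else is right), costing you one mistake and eliminating
    one expert, until only the infallible expert remains."""
    alive = list(range(n))      # experts with no mistakes so far
    mistakes = 0
    while len(alive) > 1:       # once one remains, it must be infallible
        followed = alive[0]     # you pick some currently-perfect expert
        alive.remove(followed)  # adversary: that expert is wrong today
        mistakes += 1
    return mistakes
```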



Concept Alert.

Note.

Adversary: wants to make you look bad. “You could have done so well”...

but you didn’t! Ha.. ha!

Analysis of Algorithms: do as well as possible!



Back to mistake bound.

Infallible Experts.

Alg: Choose one of the perfect experts.

Mistake Bound: n−1
Lower bound: adversary argument.
Upper bound: every mistake finds a fallible expert.

Better Algorithm?

Making a decision, not trying to find the expert!

Algorithm: Go with the majority of previously correct experts.

What you would do anyway!
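"Go with the majority of previously correct experts" is the halving algorithm: every mistake means at least half the still-perfect experts were wrong, so with one infallible expert you make at most log₂ n mistakes. A hedged sketch of that argument:

```python
import math
import random

def halving_run(n, days, rng):
    """Halving algorithm with an infallible expert (index 0): predict the
    majority vote of experts that have never erred, then eliminate every
    expert that was wrong today. Each mistake at least halves the
    survivor set, so mistakes <= log2(n) regardless of the inputs."""
    alive = set(range(n))
    mistakes = 0
    for _ in range(days):
        advice = [rng.randint(0, 1) for _ in range(n)]
        truth = advice[0]                     # expert 0 is always right
        votes = sum(advice[i] for i in alive)
        guess = 1 if 2 * votes >= len(alive) else 0
        if guess != truth:
            mistakes += 1
        alive = {i for i in alive if advice[i] == truth}
    return mistakes

mistakes = halving_run(16, 200, random.Random(0))
```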


Alg 2: go with the majority of the still-perfect experts.

How many mistakes could you make?

(A) 1   (B) 2   (C) log n   (D) n−1

At most log n!

When the algorithm makes a mistake, the number of “perfect” experts drops by a factor of two:

Initially: n perfect experts
mistake → ≤ n/2 perfect experts
mistake → ≤ n/4 perfect experts
...
mistake → ≤ 1 perfect expert

≥ 1 perfect expert → at most log n mistakes!
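The halving argument above can be checked with a toy simulation (names and the demo setup are illustrative; it again assumes one infallible expert):

```python
import math
import random

def halving_algorithm(expert_predictions, outcomes):
    """Predict with the majority of experts that have never erred.

    Each algorithm mistake means the majority of the still-perfect
    experts were wrong, so the set of candidates at least halves;
    with one infallible expert, at most log2(n) mistakes occur.
    """
    n = len(expert_predictions[0])
    alive = set(range(n))                        # never-wrong experts
    mistakes = 0
    for preds, y in zip(expert_predictions, outcomes):
        votes_for_1 = sum(preds[i] for i in alive)
        guess = 1 if 2 * votes_for_1 > len(alive) else 0
        if guess != y:
            mistakes += 1
        alive = {i for i in alive if preds[i] == y}
    return mistakes

# Demo: expert 0 is infallible; the rest guess at random.
random.seed(0)
n, T = 16, 200
outcomes = [random.randint(0, 1) for _ in range(T)]
preds = [[outcomes[t] if i == 0 else random.randint(0, 1) for i in range(n)]
         for t in range(T)]
M = halving_algorithm(preds, outcomes)
assert M <= math.log2(n)   # at most log n mistakes
```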


Imperfect Experts

Goal?

Do as well as the best expert!

Algorithm. Suggestions?

Go with majority?

Penalize inaccurate experts?

Best expert is penalized the least.

1. Initially: wi = 1.
2. Predict with the weighted majority of the experts.
3. wi → wi/2 if wrong.
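The three-step rule above can be sketched directly; the expert-accuracy model in the demo is made up for illustration (expert i is assumed right with probability 0.5 + i/(2n)):

```python
import math
import random

def weighted_majority(expert_predictions, outcomes):
    """Deterministic weighted majority with halving penalty.

    1. Start every weight at 1.
    2. Predict with the weighted-majority vote.
    3. Halve the weight of every expert that was wrong.
    Returns (algorithm mistakes, per-expert mistake counts).
    """
    n = len(expert_predictions[0])
    w = [1.0] * n
    alg_mistakes = 0
    expert_mistakes = [0] * n
    for preds, y in zip(expert_predictions, outcomes):
        weight_for_1 = sum(wi for wi, p in zip(w, preds) if p == 1)
        guess = 1 if 2 * weight_for_1 > sum(w) else 0
        if guess != y:
            alg_mistakes += 1
        for i, p in enumerate(preds):
            if p != y:
                w[i] /= 2                # penalize inaccurate experts
                expert_mistakes[i] += 1
    return alg_mistakes, expert_mistakes

# Demo with fallible experts of varying quality.
random.seed(1)
n, T = 8, 300
outcomes = [random.randint(0, 1) for _ in range(T)]
preds = [[outcomes[t] if random.random() < 0.5 + i / (2 * n) else 1 - outcomes[t]
          for i in range(n)] for t in range(T)]
M, per_expert = weighted_majority(preds, outcomes)
m = min(per_expert)                      # best expert's mistakes
assert M <= 2.41 * (m + math.log2(n))    # the bound the analysis derives
```

No expert's weight ever reaches 0, so the best expert is penalized the least and keeps the most influence.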


Analysis: weighted majority

1. Initially: wi = 1.
2. Predict with the weighted majority of the experts.
3. wi → wi/2 if wrong.

Goal: the best expert makes m mistakes.

Potential function: ∑i wi. Initially n.

For the best expert b: wb ≥ (1/2)^m.

Each mistake: the total weight of incorrect experts is reduced by... −1? −2? a factor of 1/2?

Each incorrect expert’s weight is multiplied by 1/2!

So the total weight decreases by... a factor of 1/2? a factor of 3/4?

A mistake → ≥ half the weight was on incorrect experts.

Mistake → the potential function drops to at most 3/4 of its value.

We have

(1/2)^m ≤ ∑i wi ≤ (3/4)^M · n,

where M is the number of algorithm mistakes.


Analysis: continued.

(1/2)^m ≤ ∑i wi ≤ (3/4)^M · n,

where m is the best expert’s mistakes and M is the algorithm’s mistakes.

So (1/2)^m ≤ (3/4)^M · n.

Take logs of both sides:

−m ≤ −M log(4/3) + log n.

Solve for M:

M ≤ (m + log n)/log(4/3) ≤ 2.41(m + log n).

Instead of halving, multiply incorrect experts’ weights by 1−ε...

(1−ε)^m ≤ (1−ε/2)^M · n.

Massage...

M ≤ 2(1+ε)m + (2 ln n)/ε.

As ε shrinks, this approaches a factor of two of the best expert’s performance!
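The constant in the halving bound can be checked numerically (a quick sanity check on the derivation, not part of the lecture; m and n below are arbitrary example values):

```python
import math

# From (1/2)^m <= (3/4)^M * n, take base-2 logs:
#   -m <= -M * log2(4/3) + log2(n)   =>   M <= (m + log2(n)) / log2(4/3).
c = 1 / math.log2(4 / 3)
assert 2.40 < c < 2.41           # so M <= 2.41 * (m + log2 n)

# Sanity-check the inequality chain at the boundary for concrete m, n:
m, n = 10, 64
M_max = math.floor((m + math.log2(n)) / math.log2(4 / 3))
assert (1 / 2) ** m <= (3 / 4) ** M_max * n          # M = M_max is feasible
assert not ((1 / 2) ** m <= (3 / 4) ** (M_max + 10) * n)  # much larger M is not
```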

Page 158: Comments on last lecture. - EECS at UC Berkeleysatishr/cs270/sp... · Comments on last lecture. Easy to come up with several Nash for non-zero-sum games. Is the game framework only

Analysis: continued.

12m ≤ ∑i wi ≤

(34

)Mn.

m - best expert mistakes

M algorithm mistakes.

12m ≤

(34

)Mn.

Take log of both sides.

−m ≤−M log(4/3)+ logn.

Solve for M.M ≤ (m + logn)/ log(4/3)≤ 2.4(m + logn)

Multiple by 1− ε for incorrect experts...

(1− ε)m ≤(1− ε

2

)M n.

Massage...

M ≤ 2(1+ ε)m + 2lnnε

Approaches a factor of two of best expert performance!


Best Analysis?

Two experts: A, B. Bad example?

Which is worse?
(A) A right on even days, B right on odd days.
(B) A right for the first half of the days, B right for the second half.

Best expert performance: T/2 mistakes.

Pattern (A): the algorithm makes T − 1 mistakes.

A factor of (almost) two worse!


Randomization!!!!

Better approach? Use randomization!

That is, choose expert i with probability ∝ w_i.

Bad example: A, B, A, B, A, ...

After a while, A and B have made nearly the same number of mistakes, so each is chosen with approximately the same probability, and the algorithm errs around 1/2 of the time.

Best expert makes T/2 mistakes. Roughly optimal!


Randomized analysis.

Some useful inequalities:

For ε ≤ 1 and x ∈ [0,1]:
(1+ε)^x ≤ 1 + εx
(1−ε)^x ≤ 1 − εx

For ε ∈ [0, 1/2]:
−ε − ε² ≤ ln(1−ε) ≤ −ε
ε − ε² ≤ ln(1+ε) ≤ ε

Proof idea: ln(1+x) = x − x²/2 + x³/3 − ···
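These bounds can be sanity-checked numerically. A spot check on a grid of values (not a proof, just a quick confirmation of the stated ranges):

```python
import math

# Check the log bounds for eps in (0, 1/2] and the power bounds for
# x in [0, 1]; the small slack term absorbs floating-point rounding.
for k in range(1, 10):
    eps = k / 20                      # eps in (0, 0.45]
    assert -eps - eps**2 <= math.log(1 - eps) <= -eps
    assert eps - eps**2 <= math.log(1 + eps) <= eps
    for j in range(11):
        x = j / 10                    # x in [0, 1]
        assert (1 + eps) ** x <= 1 + eps * x + 1e-12
        assert (1 - eps) ** x <= 1 - eps * x + 1e-12
```

The power bounds follow from convexity of ε ↦ (1±ε)^x (the curve lies below its chord on [0,1]), which is the fact the randomized analysis actually uses.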


Randomized algorithm. Losses in [0,1].

Expert i loses ℓ_i^t ∈ [0,1] in round t.

1. Initially w_i = 1 for each expert i.
2. Choose expert i with probability w_i/W, where W = ∑_i w_i.
3. Update w_i ← w_i (1−ε)^{ℓ_i^t}.

W(t) = sum of the w_i at time t; W(0) = n.

Best expert loses L* in total → W(T) ≥ (1−ε)^{L*}.

L^t = (∑_i w_i ℓ_i^t)/W is the expected loss of the algorithm in round t.

Claim: W(t+1) ≤ W(t)(1 − εL^t).

Proof: using (1−ε)^x ≤ 1 − εx,

W(t+1) ≤ ∑_i (1 − εℓ_i^t) w_i = ∑_i w_i − ε ∑_i w_i ℓ_i^t
= (∑_i w_i) (1 − ε (∑_i w_i ℓ_i^t)/(∑_i w_i)) = W(t)(1 − εL^t).
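The three numbered steps above translate directly into code. A minimal sketch (function and variable names are my own) that tracks the algorithm's expected loss ∑_t L^t:

```python
import math

def multiplicative_weights(loss_rounds, eps=0.1):
    """Run the randomized multiplicative-weights update on a loss
    sequence.  loss_rounds: list of rounds; each round is a list of
    per-expert losses in [0, 1].  Returns the algorithm's total
    expected loss sum_t L^t."""
    n = len(loss_rounds[0])
    w = [1.0] * n                     # step 1: all weights start at 1
    expected_loss = 0.0
    for losses in loss_rounds:
        W = sum(w)
        # step 2: L^t, the expected loss when expert i is picked
        # with probability w_i / W
        expected_loss += sum(wi * li for wi, li in zip(w, losses)) / W
        # step 3: multiplicative update w_i <- w_i * (1-eps)^{loss_i}
        w = [wi * (1 - eps) ** li for wi, li in zip(w, losses)]
    return expected_loss

# The bad example for the deterministic rule: losses alternate, so
# expert A loses on odd rounds and B on even rounds.
T, n, eps = 1000, 2, 0.1
rounds = [[t % 2, 1 - t % 2] for t in range(T)]
alg = multiplicative_weights(rounds, eps)
best = T / 2                          # each expert loses T/2 in total
# Regret bound from the analysis: alg <= (1+eps) * best + ln(n) / eps.
assert alg <= (1 + eps) * best + math.log(n) / eps
```

On the alternating sequence the algorithm's expected loss stays close to T/2, matching the claim that randomization removes the factor-of-two gap.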


Analysis

(1−ε)^{L*} ≤ W(T) ≤ n ∏_t (1 − εL^t).

Take logs:

L* ln(1−ε) ≤ ln n + ∑_t ln(1 − εL^t).

Use −ε − ε² ≤ ln(1−ε) ≤ −ε:

−L*(ε + ε²) ≤ ln n − ε ∑_t L^t.

Rearranging,

∑_t L^t ≤ (1+ε)L* + (ln n)/ε.

The left-hand side is the total expected loss of the algorithm.

Within (1+ε)-ish of the best expert!

No factor of 2 loss!


Summary: multiplicative weights.

Framework: n experts; each loses a different amount every day.

Perfect expert: log n mistakes.

Imperfect expert: best makes m mistakes.

Deterministic strategy: 2(1+ε)m + (log n)/ε mistakes.

Real-valued losses: best expert loses L* total.

Randomized strategy: (1+ε)L* + (log n)/ε.

Strategy: choose an expert with probability proportional to its weight; update weights by multiplying by (1−ε)^loss.

Multiplicative weights framework!

Applications next!
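The deterministic strategy from the summary (for 0/1 mistakes) can be sketched as weighted majority: predict with the weighted vote, then shrink the weights of the experts that were wrong. This is a minimal sketch; the function name `weighted_majority` and the random instance are illustrative.

```python
import math
import random

def weighted_majority(predictions, outcomes, eps):
    """Deterministic weighted-majority vote over 0/1 predictions.

    predictions[t][i] is expert i's guess on day t; outcomes[t] is the truth.
    Returns (algorithm's mistakes, best expert's mistakes m).
    """
    n = len(predictions[0])
    w = [1.0] * n
    mistakes = 0
    for preds, truth in zip(predictions, outcomes):
        # Predict with the weighted majority of the experts.
        vote_for_1 = sum(wi for wi, p in zip(w, preds) if p == 1)
        guess = 1 if vote_for_1 >= sum(w) / 2 else 0
        if guess != truth:
            mistakes += 1
        # Multiply the weight of each wrong expert by (1 - eps).
        w = [wi * (1 - eps) if p != truth else wi for wi, p in zip(w, preds)]
    best = min(sum(p[i] != o for p, o in zip(predictions, outcomes))
               for i in range(n))
    return mistakes, best

random.seed(1)
n, T, eps = 5, 200, 0.5
predictions = [[random.randint(0, 1) for _ in range(n)] for _ in range(T)]
outcomes = [random.randint(0, 1) for _ in range(T)]
M, best = weighted_majority(predictions, outcomes, eps)
# Guaranteed: M <= 2(1 + eps) * best + (2 / eps) * ln(n).
assert M <= 2 * (1 + eps) * best + 2 * math.log(n) / eps
```

The factor of 2 in the deterministic bound is what the randomized strategy removes: a deterministic algorithm can be forced to err on days when the weighted vote is nearly split, while a random choice loses only about half in expectation on such days.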
