
CSE 473

Chapter 20

Machine Learning Algorithms:

Neural Networks



Recap: Neurons as “Threshold Units”

• Artificial neuron: m binary inputs (−1 or +1) and 1 output (−1 or +1)

• Synaptic weights w_ji and threshold µ_i

• The output is a thresholded weighted sum:

v_i = Θ( Σ_j w_ji u_j − µ_i )

where Θ(x) = 1 if x > 0 and −1 if x ≤ 0

[Figure: a unit with inputs u_j (−1 or +1), weights w_1i, w_2i, w_3i, a weighted sum passed through the threshold Θ, and output v_i (−1 or +1)]
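A minimal Python sketch of such a threshold unit (the weight, input, and threshold values below are illustrative, not from the slides):

```python
import numpy as np

def theta(x):
    """Threshold function: Theta(x) = 1 if x > 0, -1 otherwise."""
    return 1 if x > 0 else -1

def neuron_output(w, u, mu):
    """Threshold unit: v = Theta(sum_j w_j * u_j - mu)."""
    return theta(np.dot(w, u) - mu)

w = np.array([0.5, -0.3, 0.8])      # synaptic weights (illustrative)
u = np.array([1, -1, 1])            # binary inputs in {-1, +1}
print(neuron_output(w, u, mu=0.2))  # prints +1 or -1
```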


“Perceptrons” for Classification

• Fancy name for a single-layer "feed-forward" network

• Uses artificial neurons ("units") with binary inputs and outputs

[Figure: a single-layer feed-forward network]


Perceptrons and Classification

• Consider a single-layer perceptron

• The weighted sum forms a linear hyperplane:

Σ_j w_ji u_j − µ_i = 0

• Everything on one side of this hyperplane is in class 1 (output = +1) and everything on the other side is in class 2 (output = −1)


Example: AND function

• AND is linearly separable:

u1   u2   AND
 1    1     1
 1   −1    −1
−1    1    −1
−1   −1    −1

• With weights w1 = w2 = 1 and threshold µ = 1.5:

v = 1 iff u1 + u2 − 1.5 > 0

[Figure: the linear hyperplane u1 + u2 = 1.5 in the (u1, u2) plane, separating the point (1, 1) from the other three inputs]
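A quick Python check of this unit, using the weights w1 = w2 = 1 and threshold µ = 1.5 implied by the inequality above:

```python
def and_unit(u1, u2):
    """Threshold unit for AND: weights 1, 1 and threshold 1.5."""
    return 1 if u1 + u2 - 1.5 > 0 else -1

# Reproduces the truth table above
for u1 in (1, -1):
    for u2 in (1, -1):
        print(u1, u2, and_unit(u1, u2))
```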

How do we learn the appropriate weights given only examples of (input,output)?

Idea: Change the weights to decrease the error in the output


Perceptron Learning Rule

• Given input pair (u, v^d), where v^d ∈ {+1, −1} is the desired output, adjust w and µ as follows:

1. Calculate the current output v of the neuron:

v = Θ( Σ_j w_j u_j − µ ) = Θ( w^T u − µ )

2. Compute the error signal e = (v^d − v)


Perceptron Learning Rule

3. Change w and µ according to the error:

If the input is positive and the error is positive, then w is not large enough ⇒ increase w

If the input is positive and the error is negative, then w is too large ⇒ decrease w

Similar reasoning for the other cases yields:

w → w + ε (v^d − v) u
µ → µ − ε (v^d − v)

(A → B means "replace A with B")

ε is the "learning rate" (a small positive number, e.g., 0.2)
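A minimal Python sketch of this rule (the zero initialization, epoch cap, and convergence test are assumptions added for the sketch, not part of the slides):

```python
import numpy as np

def perceptron_train(samples, eps=0.2, epochs=100):
    """Perceptron learning rule: w -> w + eps*(vd - v)*u, mu -> mu - eps*(vd - v).

    samples: list of (u, vd) pairs with vd in {+1, -1}.
    """
    w = np.zeros(len(samples[0][0]))
    mu = 0.0
    for _ in range(epochs):
        converged = True
        for u, vd in samples:
            v = 1 if np.dot(w, u) - mu > 0 else -1  # current output
            e = vd - v                              # error signal
            if e != 0:
                w = w + eps * e * np.asarray(u)     # adjust weights
                mu = mu - eps * e                   # adjust threshold
                converged = False
        if converged:
            break
    return w, mu

# Learn AND from (input, output) examples
data = [((1, 1), 1), ((1, -1), -1), ((-1, 1), -1), ((-1, -1), -1)]
w, mu = perceptron_train(data)
```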


What if we want to learn continuous-valued functions?

[Figure: a continuous function mapping a real-valued input to a real-valued output]


Function Approximation

• We want networks that can learn a function: the network maps real-valued inputs to a real-valued output

• Idea: Given data, minimize the errors between the network's output and the desired output by changing the weights

• Continuous output values → can't use binary threshold units anymore

• To minimize errors, a differentiable output function is desirable


Sigmoidal Networks

The most common activation function is the sigmoid:

g(a) = 1 / (1 + e^(−βa))

A non-linear "squashing" function: it squashes its input a to lie between 0 and 1. The parameter β controls the slope.

Output: v = g(w^T u), for input nodes u = (u1 u2 u3)^T and weights w

[Figure: input nodes u feeding a sigmoidal output unit; plot of g(a) rising from 0 to 1]
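A small Python sketch of a sigmoidal unit (the input and weight values are illustrative):

```python
import numpy as np

def g(a, beta=1.0):
    """Sigmoid: g(a) = 1 / (1 + exp(-beta * a)); squashes a into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-beta * a))

# Output of a sigmoidal unit: v = g(w^T u)
u = np.array([0.5, -1.2, 0.3])  # real-valued inputs (illustrative)
w = np.array([0.4, 0.1, -0.7])  # weights (illustrative)
print(g(np.dot(w, u)))          # a value between 0 and 1
```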


Gradient-Descent Learning (“Hill-Climbing”)

• Given training examples (u^m, d^m) (m = 1, …, N), define an error function (cost function or "energy" function):

E(w) = (1/2) Σ_m (d^m − v^m)^2,  where v^m = g(w^T u^m)


Gradient-Descent Learning (“Hill-Climbing”)

• Would like to change w so that E(w) is minimized

• Gradient Descent: change w in proportion to −dE/dw (why? the negative gradient points in the direction of steepest decrease of E)

w → w − ε dE/dw

dE/dw = −Σ_m (d^m − v^m) dv^m/dw = −Σ_m (d^m − v^m) g′(w^T u^m) u^m

where g′ is the derivative of the sigmoid
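A minimal Python sketch of one batch gradient step under these definitions (the vectorized shapes and the explicit β parameter are assumptions added for the sketch):

```python
import numpy as np

def batch_gradient_step(w, U, d, eps=0.2, beta=1.0):
    """One batch gradient-descent step on E(w) = 1/2 * sum_m (d^m - v^m)^2.

    U is an N x K matrix whose rows are the inputs u^m;
    d is a length-N vector of desired outputs d^m.
    """
    v = 1.0 / (1.0 + np.exp(-beta * (U @ w)))  # v^m = g(w^T u^m) for all m
    g_prime = beta * v * (1.0 - v)             # derivative of the sigmoid
    grad = -(U.T @ ((d - v) * g_prime))        # dE/dw = -sum_m (d^m - v^m) g'(w^T u^m) u^m
    return w - eps * grad                      # w -> w - eps * dE/dw
```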


“Stochastic” Gradient Descent

• What if the inputs only arrive one-by-one?

• Stochastic gradient descent approximates the sum over all inputs with an "on-line" running sum, updating after each example (u, d):

w → w − ε dE_1/dw,  where E_1 = (1/2)(d − v)^2 is the error on the current example

dE_1/dw = −(d − v) g′(w^T u) u

Also known as the "delta rule" or "LMS (least mean square) rule"; the "delta" is the error (d − v)


But wait….

• The delta rule tells us how to modify the connections from input to output (a one-layer network)

• One-layer networks are not that interesting (remember XOR?)

• What if we have multiple layers?

[Figure: a multi-layer network with input u = (u1 u2 … uK)^T, output v = (v1 v2 … vJ)^T, and desired output d; the delta rule can adapt the output-layer weights, but how do we adapt the weights of the earlier layers?]


Next Time

• Learning by Backpropagation

• Reinforcement Learning