Unsupervised Learning: Part III Counter Propagation Network


Klinkhachorn: CpE520


    Robert Hecht-Nielsen

    Adjunct Professor, Electrical & Computer Engineering

An authority on neural networks, he introduced the first comprehensive theory of the mammalian cerebral cortex and thalamus in 2002. His research revolves around scientific testing, elaboration, and extension of this theory.

Professor Hecht-Nielsen is an expert on brain theory, associative memory neural networks, and Perceptron theory. His theory of thalamocortex is currently being promulgated and integrated into research worldwide.

Robert Hecht-Nielsen has been an adjunct professor at UCSD since 1986. He teaches the popular ECE 270 three-quarter graduate course Neurocomputing, which focuses on the basic constructs of his theory of thalamocortex and their applications. He is a member of the UCSD Institute for Neural Computation and is a founder of the UCSD Graduate Program in Computational Neurobiology. An IEEE Fellow, he has received the IEEE Neural Networks Pioneer Award and the ECE Graduate Teaching Award. He received his Ph.D. in Mathematics from Arizona State University in 1974.

http://www.jacobsschool.ucsd.edu/FacBios/findprofile.pl?fmp_recid=89


Counter Propagation Training

Training proceeds in two stages (both stages are sketched in code below):

Unsupervised: input vectors are clustered (similar to a SOM, but without a neighborhood).

Supervised: weights from the cluster units to the output units are adapted to produce the desired response.

Fundamentals of Neural Networks, L. Fausett, Prentice Hall, 1994
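A minimal NumPy sketch of the two stages. The function name, the distance-based winner selection, and the initialization from sample inputs are assumptions for illustration, not Fausett's exact formulation:

```python
import numpy as np

def train_counterprop(X, Y, n_clusters, a=0.1, b=0.1, epochs=50, seed=0):
    """Two-stage counterpropagation training (sketch).

    Stage 1 (unsupervised): cluster the inputs, SOM-style but with
    no neighborhood -- only the winning unit's weights move.
    Stage 2 (supervised): adapt cluster-to-output weights toward
    the desired responses.
    """
    rng = np.random.default_rng(seed)
    # Initialize cluster weights from randomly chosen training inputs
    W = X[rng.choice(len(X), n_clusters, replace=False)].astype(float)
    V = np.zeros((Y.shape[1], n_clusters))        # cluster -> output weights

    for _ in range(epochs):                       # stage 1: clustering
        for x in X:
            c = np.argmin(np.linalg.norm(W - x, axis=1))  # winning unit
            W[c] += a * (x - W[c])                # move winner toward input

    for _ in range(epochs):                       # stage 2: output weights
        for x, y in zip(X, Y):
            c = np.argmin(np.linalg.norm(W - x, axis=1))
            V[:, c] += b * (y - V[:, c])          # move toward desired output
    return W, V
```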


Counter Propagation Nets

Two types:

Full counterpropagation: an efficient method to represent a large number of vector pairs by adaptively constructing a lookup table. It produces an approximation of the input-to-output relationship (heteroassociative).

Forward-only counterpropagation: a simplified version of full counterpropagation. It produces a mapping from input to output.

Fundamentals of Neural Networks, L. Fausett, Prentice Hall, 1994


Full Counterpropagation Nets

[Figure not captured in the transcript]

Fundamentals of Neural Networks, L. Fausett, Prentice Hall, 1994


Full CP Nets: Cluster Layer

[Figure not captured in the transcript]

Fundamentals of Neural Networks, L. Fausett, Prentice Hall, 1994


Forward-Only Counter Propagation

[Figure not captured in the transcript]


Counter Propagation Operation

Present an input to the network.

Calculate the output of all neurons in the Kohonen layer.

Determine the winner (the neuron with maximum output).

Set the output of the winner to 1 (all others to 0).

Calculate the output vector, as in the sketch below.
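A direct transcription of these steps, assuming the weight shapes from the earlier sketch. Note that the maximum-dot-product winner rule matches nearest-cluster selection only when weight and input vectors are normalized:

```python
import numpy as np

def counterprop_forward(x, W, V):
    """One recall pass through a trained counterpropagation net.

    W: cluster-layer weights, shape (n_clusters, n_inputs)
    V: output-layer weights, shape (n_outputs, n_clusters)
    """
    net = W @ x                    # output of every Kohonen-layer neuron
    z = np.zeros(len(net))
    z[np.argmax(net)] = 1.0        # winner's output is 1, all others 0
    return V @ z                   # output vector: the winner's column of V
```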


Counter Propagation Training

Present the input vector x.

Determine the winner c in the competitive layer.

Adapt the weights to the winner:

    Wci(t+1) = Wci(t) + a * (xi - Wci(t))

Normalize the weights going to the winner (divide each weight by the magnitude of the weight vector).

Adapt the weights of the output layer, where zi is the output of cluster unit i (1 for the winner, 0 otherwise), Yj is the desired output, and Ŷj is the actual output:

    Vji(t+1) = Vji(t) + b * zi * (Yj - Ŷj)   if i = c
    Vji(t+1) = Vji(t)                        if i != c

A one-step code sketch follows.
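One training step transcribed from the rules above. Reading the output-layer target term as desired minus actual output is an assumption, though it is consistent with Grossberg outstar learning; shapes follow the earlier sketches:

```python
import numpy as np

def cp_train_step(x, y, W, V, a=0.1, b=0.1):
    """One counterpropagation training step, following the rules above."""
    c = np.argmax(W @ x)             # winner in the competitive layer

    # Kohonen update: Wc(t+1) = Wc(t) + a * (x - Wc(t))
    W[c] += a * (x - W[c])
    W[c] /= np.linalg.norm(W[c])     # normalize the winner's weight vector

    # Output-layer update; z_c = 1 and z_i = 0 for i != c, so only the
    # winner's column changes: V_jc(t+1) = V_jc(t) + b * (y_j - yhat_j)
    V[:, c] += b * (y - V[:, c])     # y is desired, V[:, c] is actual
    return W, V
```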


Counter Propagation - Example

Y = 1/X

[Example figures not captured in the transcript]

Fundamentals of Neural Networks, L. Fausett, Prentice Hall, 1994
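To make the example concrete, a toy run using the train_counterprop sketch from earlier (the sampling range and number of cluster units are arbitrary choices): the trained net acts as an adaptive lookup table for y = 1/x, with each cluster unit storing the average of 1/x over the inputs it wins.

```python
import numpy as np

# Training pairs sampled from y = 1/x (illustrative range and sizes)
xs = np.linspace(0.1, 10.0, 200).reshape(-1, 1)
ys = 1.0 / xs

W, V = train_counterprop(xs, ys, n_clusters=20, a=0.1, b=0.1, epochs=50)

# Recall by nearest cluster unit, matching the training-time winner rule
x = np.array([2.0])
c = np.argmin(np.linalg.norm(W - x, axis=1))
print(V[:, c])    # roughly [0.5], the table entry approximating 1/2
```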


Counterpropagation Network Notes

Not as general as Backpropagation.

Trains faster than Backpropagation.

May not generalize well to new patterns.

Input clusters must be well separated and well represented.

The competitive layer can become unstable if not enough units are present.

Uses include pattern classification.
