Continuous Hopfield



    Optimization with Neural

    Networks

    Presented by:

Nasim Zeynolabedini, Shoale Hashemi, Zahra Rashti

    Instructor:

    Dr. S. Bagheri

Sharif University of Technology

Ordibehesht 1387


    Introduction

Optimization Problem:

• A problem with a cost function that is to be minimized or maximized.
  e.g. TSP, Knapsack, Graph Partitioning, Graph Bisection, Graph Coloring, etc.
• Many methods exist to solve such problems, for example linear optimization, simulated annealing, Monte Carlo, and ANNs (artificial neural networks).


    Applications

Applications in many fields, such as:

• Routing in computer networks
• VLSI circuit design
• Planning in operational and logistic systems
• Power distribution systems
• Wireless and satellite communication systems


Optimization Problem Types

An optimization problem consists of two parts: a cost function and constraints.

• Constrained: the constraints are built into the cost function, so minimizing the cost function also satisfies the constraints.
• Unconstrained: there is no constraint on the problem.
• Combinatorial: we separate the constraints and the cost function, minimize each of them, and then add them together.


    "hy Neural Net#or$s%

    % 0rawbacks of conventional computin systems:

    4 Perform poorly on comple' problems

    4 /ack the computational power

    4 0ont utilize the inherent parallelism of problems

    % &dvantaes of artificial neural networks:

    4 Perform well even on comple' problems

    4 2ery fast computational cycles if implemented in hardware

    4 ,an take the advantae of inherent parallelism of problems


Some Efforts to Solve Optimization Problems

• Many ANN algorithms with feedforward and recurrent architectures have been used to solve different optimization problems.
• We've selected:
  - Hopfield NN
  - Self-Organizing Map NN
  - Recurrent NN
  to solve the TSP, the most common benchmark for optimization algorithms.


Continuous Hopfield

• The neuron function is continuous (a sigmoid function).
• The system behavior is described by a differential equation:

$$\frac{dU_i}{dt} = -U_i + \sum_{j=1}^{n} w_{ij} V_j + I_i$$

$$V_i = f(U_i) = \frac{1}{1 + e^{-U_i}}$$
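To make the dynamics concrete, here is a minimal sketch (not from the slides) that Euler-integrates the equation above for a toy two-neuron network; the function name, step size, and example weights are illustrative assumptions.

```python
import numpy as np

def simulate_hopfield(w, I, steps=2000, dt=0.01, seed=0):
    """Euler-integrate dU/dt = -U + w @ V + I with V = sigmoid(U)."""
    rng = np.random.default_rng(seed)
    U = rng.normal(scale=0.1, size=len(I))   # small random initial states
    for _ in range(steps):
        V = 1.0 / (1.0 + np.exp(-U))         # continuous sigmoid outputs
        U += dt * (-U + w @ V + I)           # one Euler step of the ODE
    return 1.0 / (1.0 + np.exp(-U))

# Toy example: two mutually inhibiting neurons with equal external currents
# settle into a stable state.
w = np.array([[0.0, -2.0],
              [-2.0, 0.0]])
I = np.array([1.0, 1.0])
print(simulate_hopfield(w, I))
```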


An electronic implementation


Basic Idea

• Suppose X_1, X_2, ..., X_n are our decision variables and F(X_1, ..., X_n) is our objective function.
• Constraints can be expressed as nonnegative penalty functions C_k(X_1, ..., X_n) that satisfy C_k(X_1, ..., X_n) = 0 only when X_1, ..., X_n represent a feasible solution.
• By combining the penalty functions with F, the original constrained problem may be reformulated as an unconstrained problem in which the goal is to minimize the quantity:

$$F' = F(X_1, \ldots, X_n) + \lambda \sum_{k=1}^{m} C_k(X_1, \ldots, X_n)$$
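As a toy illustration of this reformulation (the example problem and all names are assumptions, not from the slides), consider minimizing F(x, y) = x + y subject to x + y >= 1 and x, y >= 0:

```python
# Each penalty term is nonnegative and vanishes exactly on the feasible set,
# so for large lam the unconstrained objective F' = F + lam * sum(C_k)
# is minimized by feasible points.
def F(x, y):
    return x + y

def penalties(x, y):
    return [max(0.0, 1.0 - (x + y)) ** 2,   # x + y >= 1
            max(0.0, -x) ** 2,              # x >= 0
            max(0.0, -y) ** 2]              # y >= 0

def F_prime(x, y, lam=100.0):
    return F(x, y) + lam * sum(penalties(x, y))

print(F_prime(0.5, 0.5))   # feasible: penalties vanish, F' == F == 1.0
print(F_prime(0.2, 0.2))   # infeasible: the penalty term dominates
```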


Basic Idea (cont.)

• λ is a sufficiently large scaling factor for the penalty terms.
• Minimizing F' yields a minimal, feasible solution to the original problem.
• Furthermore, if F' can be written in the form of an energy function, there is a corresponding neural network whose equilibria represent solutions to the problem.


Simplification of the Energy Function

• E is a Lyapunov function so long as the function V_i = f_i(U_i) is sigmoidal.
• We modify it slightly, but in such a way that it remains sigmoidal: V_i = f_i(U_i) is replaced by V_i = f_i(λU_i), where λ is a gain parameter.


Simplification of the Energy Function (cont.)

• The inverse function can obviously be written as

$$U_i = \frac{1}{\lambda} f_i^{-1}(V_i)$$

• If we use this in the middle term of the energy function

$$E = -\frac{1}{2}\sum_i\sum_j w_{ij} V_i V_j - \sum_i I_i V_i + \frac{1}{\lambda}\sum_i \int_0^{V_i} f_i^{-1}(V)\,dV$$

and let λ become very large, this term becomes negligible. The function is still a sigmoid, so E is still a Lyapunov function, but in this situation (known as the high-gain limit) the middle term can be ignored.
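A quick numerical check of the high-gain argument, assuming the logistic sigmoid (so the inverse is the logit); the helper name and the fixed output value 0.9 are illustrative:

```python
import numpy as np

# (1/lam) * integral from 0 to V of f^{-1}(v) dv for f(u) = 1/(1 + exp(-u)),
# evaluated with the trapezoid rule.
def middle_term(V=0.9, lam=1.0, steps=100000):
    v = np.linspace(1e-9, V, steps)
    logit = np.log(v / (1.0 - v))                  # f^{-1}(v)
    return np.sum(0.5 * (logit[1:] + logit[:-1]) * np.diff(v)) / lam

for lam in (1.0, 10.0, 100.0, 1000.0):
    print(lam, middle_term(lam=lam))   # shrinks toward zero as the gain grows
```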


Traveling Salesman Problem

• Checking out all possible routes: an N-city problem has (N-1)!/2 distinct routes.
• For example, for N = 30 there are about 4.4 × 10^30 routes; checking one route per second would take on the order of 10^23 years.
• In many industrial problems N is 100 or more.
• A continuous Hopfield network can be constructed to quickly provide a good solution to the TSP.
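The route count is easy to reproduce; this snippet assumes a checking rate of one route per second purely for illustration:

```python
from math import factorial

def num_routes(n):
    return factorial(n - 1) // 2        # distinct undirected tours of n cities

for n in (10, 20, 30):
    routes = num_routes(n)
    years = routes / (60 * 60 * 24 * 365)   # one route checked per second
    print(f"N={n}: {routes:.3e} routes, about {years:.3e} years")
```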


• The Hopfield network approach to the TSP involves arranging the network neurons in such a way that they represent the entries in a city-versus-tour-position table.
• For an N-city problem we would require N² neurons; N of them will be turned ON, with the remainder turned OFF.


Objective function


Setting Up the Function to Be Minimized

• V_ia: the state (0 or 1) of the neuron in position (i, a) of the table, i.e., city i at tour position a.
• d_ij: the distance between city i and city j.
• Total length of the tour:

$$D = \frac{1}{2}\sum_i \sum_{j \neq i} \sum_a d_{ij}\, V_{ia}\,(V_{j,a+1} + V_{j,a-1})$$
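The tour-length term can be evaluated directly from the table of outputs; in this sketch (the helper name and the 4-city example are assumptions) tour positions are taken modulo N so the tour is closed:

```python
import numpy as np

def tour_length(V, d):
    """0.5 * sum over i != j and a of d[i,j] * V[i,a] * (V[j,a+1] + V[j,a-1])."""
    n = V.shape[0]
    total = 0.0
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            for a in range(n):
                total += d[i, j] * V[i, a] * (V[j, (a + 1) % n] + V[j, (a - 1) % n])
    return 0.5 * total

# Sanity check on the valid 4-city tour 0 -> 1 -> 2 -> 3 -> 0.
d = np.array([[0, 1, 2, 1],
              [1, 0, 1, 2],
              [2, 1, 0, 1],
              [1, 2, 1, 0]], dtype=float)
V = np.eye(4)                 # city i visited at position i
print(tour_length(V, d))      # 4.0: each tour edge counted once
```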


• If we take into account both the constraints and the objective function, the quantity to be minimized becomes the energy function (in the standard Hopfield-Tank form):

$$E = \frac{A}{2}\sum_i\sum_a\sum_{b\neq a} V_{ia}V_{ib} + \frac{B}{2}\sum_a\sum_i\sum_{j\neq i} V_{ia}V_{ja} + \frac{C}{2}\Big(\sum_i\sum_a V_{ia} - N\Big)^2 + \frac{D}{2}\sum_i\sum_{j\neq i}\sum_a d_{ij}V_{ia}(V_{j,a+1}+V_{j,a-1})$$


Finding the Network Weights and Input Currents

• We should select the weights and currents so that the two equations (the TSP energy and the network's Lyapunov function) become equal.
• First we give the output voltages double subscripts, V_ia, with corresponding weights w_ia,jb and input currents I_ia.


• Note first that the weights w_ia,jb multiply the second-order terms V_ia V_jb, and the currents I_ia multiply the first-order terms V_ia.
• The first-order terms should be equal, which gives

$$I_{ia} = C\,N$$

so the linear part of the C constraint is supplied by the input currents.


• We will need to treat the four sets of second-order terms in E separately.
• Expanding the square, the second-order C terms are given by:

$$\frac{C}{2}\sum_{i,a}\sum_{j,b} V_{ia} V_{jb}$$


• If we set w_ia,jb = -A for every pair, this would add a term over all pairs to the Lyapunov function, but what we want is only the terms for the same city at different tour positions (i = j, a ≠ b).
• Kronecker delta:

$$\delta_{ij} = \begin{cases} 1 & i = j \\ 0 & \text{otherwise} \end{cases}$$

• So we should have w^A_{ia,jb} = -A δ_ij (1 - δ_ab).
• In the same way, w^B_{ia,jb} = -B δ_ab (1 - δ_ij), and for the C terms w^C_{ia,jb} = -C.


• The D term contributes to the Lyapunov function only when b = a + 1 or when b = a - 1, so:

$$w^D_{ia,jb} = -D\,d_{ij}\,(\delta_{b,a+1} + \delta_{b,a-1})$$

• Bringing together all four components of the weights, we have, finally:

$$w_{ia,jb} = -A\,\delta_{ij}(1-\delta_{ab}) - B\,\delta_{ab}(1-\delta_{ij}) - C - D\,d_{ij}(\delta_{b,a+1} + \delta_{b,a-1})$$
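A sketch of how these weights could be assembled in code, using the Tank and Hopfield parameter values quoted on the next slide; the indexing convention (tour positions taken modulo N) and the function names are assumptions:

```python
import numpy as np

# w[(i,a),(j,b)] = -A*delta_ij*(1-delta_ab) - B*delta_ab*(1-delta_ij) - C
#                  - D*d[i,j]*(delta_{b,a+1} + delta_{b,a-1})
def tsp_weights(d, A=500.0, B=500.0, C=200.0, D=500.0):
    n = d.shape[0]
    w = np.zeros((n, n, n, n))               # indexed as w[i, a, j, b]
    for i in range(n):
        for a in range(n):
            for j in range(n):
                for b in range(n):
                    w[i, a, j, b] = (-A * (i == j) * (a != b)
                                     - B * (a == b) * (i != j)
                                     - C
                                     - D * d[i, j] * ((b == (a + 1) % n)
                                                      + (b == (a - 1) % n)))
    return w.reshape(n * n, n * n)

def input_currents(n, C=200.0):
    return C * n * np.ones(n * n)            # first-order terms: I_ia = C*N

rng = np.random.default_rng(0)
pts = rng.uniform(0, 1, (5, 2))
d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)   # symmetric distances
print(tsp_weights(d).shape, input_currents(5).shape)       # (25, 25) (25,)
```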


Applying the Method in Practice

• Suitable values for the parameters A, B, C, D (and the gain λ) must be determined.
• Tank and Hopfield used A = B = D = 500 and C = 200.
• Tank and Hopfield applied the method to random 10-city maps and found that, overall, in about 50% of cases, it found the optimum route from among the 181,440 distinct paths.


• The size of each black square indicates the value of the output of the corresponding neuron.


Self-Organizing Map NN

• In 1982, Teuvo Kohonen introduced a new type of neural network that uses competitive, unsupervised learning.


Self-Organizing Map NN: Summary of the Algorithm

1. Initialization: choose random values for the initial weight vectors w_j(0).
2. Sampling: select an input example x from the training set for use as an input.
3. Identify the winning neuron: find the neuron whose weight vector is closest to the input x.
4. Updating: adjust the weight vectors of the winning neuron and its neighbors toward x, with a learning rate and neighborhood that decay gradually toward zero.
5. Termination: continue by returning to step 2 until there are no further changes in the feature map.

Practicalities

1. The learning-rate parameter should begin with a value close to unity and decrease gradually as learning proceeds.
2. The neighborhoods should also decrease in size as learning proceeds.
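A minimal sketch of steps 1 to 5 for a one-dimensional Kohonen map on 2-D points; the decay schedules for the learning rate and neighborhood radius are illustrative choices, not values from the slides:

```python
import numpy as np

def train_som(data, n_units=20, epochs=200, seed=0):
    rng = np.random.default_rng(seed)
    w = rng.uniform(data.min(), data.max(), (n_units, data.shape[1]))  # 1. init
    for t in range(epochs):
        eta = 0.9 * (1.0 - t / epochs)                        # rate decays from ~1
        radius = max(1.0, (n_units / 2) * (1.0 - t / epochs)) # shrinking neighborhood
        for x in rng.permutation(data):                       # 2. sampling
            winner = np.argmin(np.linalg.norm(w - x, axis=1)) # 3. winning neuron
            grid_dist = np.abs(np.arange(n_units) - winner)   # 1-D grid distance
            h = np.exp(-grid_dist ** 2 / (2 * radius ** 2))   # neighborhood function
            w += eta * h[:, None] * (x - w)                   # 4. update
    return w                                                  # 5. stop after epochs

points = np.random.default_rng(1).uniform(0, 1, (100, 2))
print(train_som(points)[:3])
```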


Self-Organizing Map NN

• One-dimensional neighborhood of the Kohonen SOM
• Classical two-dimensional neighborhood
• Extended two-dimensional neighborhood of the Kohonen SOM


Self-Organizing Map NN

• Self-organization of a network with a two-dimensional neighborhood.
• Self-organization of a network with a one-dimensional neighborhood.


TSP Solving

• The Elastic Net Approach: Durbin and Willshaw first proposed the elastic net method in 1987 as a means of solving the TSP.
• SOM Approach: even before Durbin and Willshaw's work on the elastic net method was published, Fort had been working on the idea of using a self-organizing process to solve the TSP.
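In the spirit of these ring-based methods, here is a compact sketch of a SOM-style TSP heuristic; the unit count, decay schedules, and tour-decoding step are illustrative assumptions, not Fort's or Durbin and Willshaw's exact algorithms:

```python
import numpy as np

def som_tsp(cities, n_units=None, epochs=400, seed=0):
    """Pull a ring of units toward the cities, then read the tour off the ring."""
    rng = np.random.default_rng(seed)
    m = n_units or 3 * len(cities)                   # a few units per city
    ring = rng.uniform(cities.min(), cities.max(), (m, 2))
    for t in range(epochs):
        eta = 0.8 * (1.0 - t / epochs)
        radius = max(1.0, (m / 8) * (1.0 - t / epochs))
        for c in rng.permutation(cities):
            winner = np.argmin(np.linalg.norm(ring - c, axis=1))
            d = np.abs(np.arange(m) - winner)
            d = np.minimum(d, m - d)                 # circular ring distance
            h = np.exp(-d ** 2 / (2 * radius ** 2))
            ring += eta * h[:, None] * (c - ring)
    # Tour order: sort cities by the index of their nearest ring unit.
    nearest = [np.argmin(np.linalg.norm(ring - c, axis=1)) for c in cities]
    return np.argsort(nearest)

cities = np.random.default_rng(2).uniform(0, 1, (10, 2))
print(som_tsp(cities))
```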


The Elastic Net Approach


SOM Approach


Modifications of SOM

• The work of Angéniol et al. is based on the distinctive feature that units in the ring are dynamically created and deleted.
• Burke and Damany use the "conscience" mechanism to solve the problem of multiple cities being mapped to the same unit in the ring.
• Matsuyama adds a new term, previously introduced by Durbin and Willshaw, to the weight update equations.


Elastic Net vs. SOM

• The difference between the two approaches, however, is that Fort's algorithm incorporates stochasticity into the weight adaptations, whereas the elastic net method is completely deterministic in nature.
• There is also no energy minimization involved in the method of Fort.
• Fort's results were not as good as those obtained using the elastic net method.


Elastic Net vs. SOM (cont.)

• EN: Elastic Net
• GN: Guilty Net
• HT: Hopfield-Tank


Advantages of a Self-Organizing Approach

• The greater biological resemblance of the SOFM.
• The reduced number of neurons and synapses needed to perform optimization tasks.
• The Kohonen Self-Organizing Map is substantially superior to the Continuous Hopfield Net.


Hopfield Approach vs. SOM Approach

• The best route for 15 cities (using 8 runs) is 2127 km by the Continuous Hopfield net and 1311 km by the Kohonen Self-Organizing Map. The Kohonen path seems optimal, but this has not been proven.


Recurrent Neural Networks

• A recurrent neural network (RNN) is an artificial neural network which has external inputs in the form of a vector X, a feedforward function F(·) (any feedforward network, including a multilayer perceptron, is appropriate), outputs in the form of a vector Y, and a feedback path which copies the outputs to the inputs.
• The network's behavior is based on its history, and so we must think of pattern presentation as happening in time.


Recurrent Neural Networks

• Simple Recurrent Networks
• Recurrent Multilayer Perceptrons
• Simultaneous Recurrent Networks

[Figure: external inputs feed the hidden layer(s) and output layer; a bank of delays feeds the outputs back to the inputs.]


Simultaneous Recurrent Neural Networks

• A Simultaneous Recurrent Network (SRN) is a feedforward network with simultaneous feedback from the outputs of the network to its inputs, without any time delay.
• Formal description of the SRN:

$$y^{(n+1)} = f\big(y^{(n)}, X, W\big), \qquad Y = \lim_{n \to \infty} y^{(n)} = F(X, W)$$

[Figure: a feedforward mapping f(W) with inputs X and outputs Y, and a feedback path copying the outputs back to the inputs.]
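A minimal sketch of this relaxation, assuming a single sigmoid layer for f and small random weights so that the iteration settles to a fixed point; all names and sizes are illustrative:

```python
import numpy as np

def srn_settle(x, W, n_out, tol=1e-6, max_iter=500):
    """Iterate y(n+1) = f(y(n), x, W) with no time delay until convergence."""
    y = np.zeros(n_out)
    for _ in range(max_iter):
        z = np.concatenate([x, y]) @ W        # external inputs + fed-back outputs
        y_new = 1.0 / (1.0 + np.exp(-z))      # sigmoid feedforward mapping
        if np.max(np.abs(y_new - y)) < tol:   # equilibrium reached
            return y_new
        y = y_new
    return y

rng = np.random.default_rng(0)
x = rng.normal(size=3)
W = rng.normal(scale=0.3, size=(3 + 2, 2))    # small weights keep the map stable
print(srn_settle(x, W, n_out=2))
```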


Simultaneous Recurrent Neural Networks

• The network follows a trajectory in state space and relaxes to an equilibrium; the settled outputs are the network's response.


SRN Training

The standard procedure to train a recurrent network is to define an error measure, which is a function of the network outputs, and to modify the weights using the derivative of the error with respect to the weights themselves. The generic weight-update equation is given by:

$$w_{ij}^{new} = w_{ij}^{old} - \alpha \frac{\partial E}{\partial w_{ij}}$$
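A sketch of this generic update; since the derivative computation is covered on the next slide, the gradient here is simply estimated by finite differences on a toy error, standing in for BPTT or recurrent backpropagation:

```python
import numpy as np

def update_weights(w, error_fn, alpha=0.1, eps=1e-5):
    """w_new = w_old - alpha * dE/dw, with dE/dw from central differences."""
    grad = np.zeros_like(w)
    for idx in np.ndindex(w.shape):
        w_hi = w.copy(); w_hi[idx] += eps
        w_lo = w.copy(); w_lo[idx] -= eps
        grad[idx] = (error_fn(w_hi) - error_fn(w_lo)) / (2 * eps)
    return w - alpha * grad

error = lambda w: float(np.sum((w - 1.0) ** 2))   # toy error, minimum at w == 1
w = np.zeros((2, 2))
for _ in range(50):
    w = update_weights(w, error)
print(w)   # approaches all-ones
```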


Training of the SRN

The derivative values can be computed using a number of techniques:

• Backpropagation Through Time (BPTT) requires knowledge of the desired outputs throughout the trajectory and has no guarantee of yielding exact results in equilibrium.
• Truncation did not provide satisfactory results and needs to be further tested.
• Recurrent Backpropagation (RBP) requires only knowledge of the desired outputs at the end of the trajectory, in equilibrium.


Solving the TSP with an SRN

Network topology for the traveling salesman problem:

[Figure: input layer (the N × N cost matrix) feeding hidden layer(s) of N × N nodes and an output layer of N × N nodes (the output array), which yields the path specification.]


Problem Constraints and Error Functions

These constraints force the row and column sums to equal a value of 1.0, force the neuron outputs to the limiting values of 0.0 and 1.0, eliminate loops in the solution path, and encourage minimum-distance solutions to be identified.
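A sketch of penalty-style error terms of the kind described above; the exact error functions appear only as figures in the deck, so the term definitions and the g_* weights are illustrative assumptions:

```python
import numpy as np

# V is the N x N output array; d is the (symmetric) city distance matrix.
def tsp_errors(V, d, g_row=1.0, g_col=1.0, g_bin=1.0, g_cost=0.1):
    row = np.sum((V.sum(axis=1) - 1.0) ** 2)      # row sums driven to 1.0
    col = np.sum((V.sum(axis=0) - 1.0) ** 2)      # column sums driven to 1.0
    binary = np.sum((V * (1.0 - V)) ** 2)         # outputs pushed to 0.0 / 1.0
    cost = np.sum(d * V)                          # favor minimum-distance paths
    return g_row * row + g_col * col + g_bin * binary + g_cost * cost

rng = np.random.default_rng(0)
d = rng.uniform(1, 10, (5, 5)); np.fill_diagonal(d, 0.0)
V = rng.uniform(0, 1, (5, 5))
print(tsp_errors(V, d))
```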


Problem Constraints and Error Functions

Error functions:


    "eight Ad1ustment

    The formula used for weiht ad


    "eight Ad1ustment


    "eight arameters

    The stability of the network durin trainin reatly

    depends on these constraint weiht parameters- 2ery lare

    values of these parameters will force all the weihts of the

    network to become neative( which tend to make the

    network unstable and all the outputs of the network to

    convere to -A-

    It was also determined throuh e'ploratory

    e'perimentation with the alorithm that these parametervalues needed to be chaned durin trainin sub


Traveling Salesman Problem


Error Computation for TSP

Constraints used for training on the TSP:

• Asymmetry of the path traveled
• Column inhibition and row inhibition
• Cost of the path traveled
• Values of the solution matrix

[Figure: the output matrix, with source cities as rows and destination cities as columns.]


Simulation: Initialization for TSP

[Table: initial values of the asymmetry, row/column, output-value, and cost constraint weights for each number of cities.]


Simulation: Training

[Plot: error function vs. simulation time for TSP.]


Simulation: Results

• The convergence criterion of the network is checked after every 10 relaxations.
• Criterion: 95% of the active nodes have a value greater than 0.9.

[Table: number of cities vs. normalized distance between cities, computational time in minutes per 100 relaxations, average number of relaxations for a solution, and total computational time.]


Simulation: Results

[Plot: normalized distance vs. problem size.]


Simulation: Results

Plot of the number of relaxations required for a solution, and of the values of the constraint weight parameters c and r during training, vs. problem size.

[Plots: problem size vs. number of relaxations; problem size vs. constraint weight parameters c and r.]


Conclusions

• The SRN with RBP was able to find good-quality solutions, in the range of 0.25 to 0.35, for large-scale Traveling Salesman Problems of up to several hundred cities.
• Solutions were obtained within acceptable computational effort.
• The simulator developed does not require the weights to be predetermined before simulation, as is the case with the Hopfield network and its derivatives.
• The initial and incremental values of the constraint weight parameters play a very important role in the training of the network.
• Computational effort and memory requirements increased in proportion to the square of the problem size.
• The number of relaxations required increased with the problem size.


Conclusions (continued)

The Simultaneous Recurrent Neural Network with the Recurrent Backpropagation training algorithm scaled well for large-scale static optimization problems like the Traveling Salesman Problem, within acceptable computation-effort bounds.


Questions?