Particle Swarm optimisation

Metaheuristic

Tabu Search

Tabu Search is an improvement over basic local search that avoids getting stuck in a local minimum by allowing the acceptance of non-improving moves, in the hope of finding the global optimum.

Tabu search also allows us to escape from sub-optimal solutions by the use of a tabu list.

A tabu list is a list of moves, performed on a solution, that are temporarily forbidden. These moves could be swap operations (as in the TSP). When a move is made tabu, it is added to the tabu list with a value called the tabu tenure (tabu length). With each iteration, the tabu tenure is decremented; only when the tabu tenure of a move reaches 0 can the move be performed and accepted again.
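The mechanics above (a tabu list, tenure countdown, and acceptance of non-improving moves) can be sketched as below. The neighbourhood function, tenure value, and the "aspiration" rule that admits a tabu move when it beats the best solution so far are illustrative choices, not details from the slides:

```python
def tabu_search(initial, neighbours, cost, tenure=5, iterations=100):
    """Minimise `cost` starting from `initial`.

    `neighbours(solution)` yields (move, new_solution) pairs; a move
    that was recently applied is tabu until its tenure counts down to 0.
    """
    current = best = initial
    tabu = {}  # move -> remaining tabu tenure

    for _ in range(iterations):
        # Decrement tenures; drop moves whose tenure has reached 0.
        tabu = {m: t - 1 for m, t in tabu.items() if t > 1}

        # Admissible neighbours: non-tabu moves, or tabu moves that beat
        # the best solution so far (a common "aspiration" criterion,
        # added here for illustration).
        candidates = [(move, sol) for move, sol in neighbours(current)
                      if move not in tabu or cost(sol) < cost(best)]
        if not candidates:
            break
        # Accept the best admissible neighbour even if it is worse than
        # `current` -- this is what lets the search escape local minima.
        move, current = min(candidates, key=lambda ms: cost(ms[1]))
        tabu[move] = tenure  # make the applied move tabu
        if cost(current) < cost(best):
            best = current
    return best
```

For example, minimising x² over the integers with moves ±1 walks steadily down to 0 even though each step's reverse move is tabu.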

Intensification / Diversification
• Intensification: penalize solutions far from the current solution
• Diversification: penalize solutions close to the current solution
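One way such penalties could be folded into the objective is sketched below; the 1/(1+d) penalty shape and the `weight` parameter are assumptions for illustration, not from the slides:

```python
def penalised_cost(solution, current, cost, distance, weight, diversify=True):
    """Augment the raw cost with a distance-based penalty.

    Diversification penalises solutions close to `current`, pushing the
    search into new regions; intensification penalises distant ones,
    keeping the search near a promising solution.
    """
    d = distance(solution, current)
    # Illustrative penalty shapes: large when close (diversify) or
    # growing with distance (intensify).
    penalty = weight / (1 + d) if diversify else weight * d
    return cost(solution) + penalty
```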

Tabu Search: Finding the Minimal Value of Peaks Function

TRAVELING SALESMAN PROBLEM

The Travelling Salesman Problem (TSP) is an NP-hard problem. It goes as follows: given a set of cities, with paths connecting each city with every other city, we need to find the shortest tour that starts at one city, visits every other city exactly once, and returns to the starting city. The problem is easy to state, but hard to solve: it has been proved that the TSP is NP-hard and cannot be solved to optimality within polynomially bounded computation time.
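To make the problem statement concrete, here is a sketch of evaluating a closed tour and solving a tiny instance exactly by enumeration; enumeration becomes infeasible as the number of cities grows, which is exactly why heuristics such as tabu search and PSO are used:

```python
import math
from itertools import permutations

def tour_length(cities, tour):
    """Total distance of a closed tour visiting each city once."""
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def brute_force_tsp(cities):
    """Exact solution by enumerating all tours -- only feasible for
    very small instances ((n-1)! tours for n cities)."""
    n = len(cities)
    # Fix city 0 as the start so rotations of the same tour are not
    # counted repeatedly.
    return min(([0] + list(p) for p in permutations(range(1, n))),
               key=lambda t: tour_length(cities, t))
```

On four cities at the corners of a unit square, the optimal tour simply walks around the square, with length 4.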

The largest solved traveling salesman problem: an 85,900-city route (as of 30/01/2013).

Particle Swarm optimisation

These slides are adapted from a presentation by [email protected], one of the main researchers in PSO.

PSO was invented by Russ Eberhart (an engineering professor) and James Kennedy (a social scientist) in the USA.

Explore PSO and its parameters with my app at http://www.macs.hw.ac.uk/~dwcorne/mypages/apps/pso.html

Cooperation example

Particle’s velocity

PSO Algorithm

The new velocity combines three components:

v(k+1) = c0 * v(k) + c1 * rand() * (PBest - x(k)) + c2 * rand() * (GBest - x(k))
          inertia       cognitive                      social

[Figure: vector diagram showing the particle at x(k) moving to x(k+1); the inertia vector v(k), the cognitive pull toward PBest, and the social pull toward GBest sum to the new velocity v(k+1).]

PSO solution update in 2D

x(k) - Current solution: (4, 2)
PBest - Particle's best solution: (9, 1)
GBest - Global best solution: (5, 10)

Inertia: v(k) = (-2, 2)

PSO solution update in 2D

x(k) - Current solution: (4, 2)
PBest - Particle's best solution: (9, 1)
GBest - Global best solution: (5, 10)

Inertia: v(k) = (-2, 2)
Cognitive: PBest - x(k) = (9, 1) - (4, 2) = (5, -1)
Social: GBest - x(k) = (5, 10) - (4, 2) = (1, 8)

PSO solution update in 2D

x(k) - Current solution: (4, 2)
PBest - Particle's best solution: (9, 1)
GBest - Global best solution: (5, 10)

Inertia: v(k) = (-2, 2)
Cognitive: PBest - x(k) = (9, 1) - (4, 2) = (5, -1)
Social: GBest - x(k) = (5, 10) - (4, 2) = (1, 8)
v(k+1) = (-2, 2) + 0.8*(5, -1) + 0.2*(1, 8) = (2.2, 2.8)

PSO solution update in 2D

x(k) - Current solution: (4, 2)
PBest - Particle's best solution: (9, 1)
GBest - Global best solution: (5, 10)

Inertia: v(k) = (-2, 2)
Cognitive: PBest - x(k) = (9, 1) - (4, 2) = (5, -1)
Social: GBest - x(k) = (5, 10) - (4, 2) = (1, 8)
v(k+1) = (-2, 2) + 0.8*(5, -1) + 0.2*(1, 8) = (2.2, 2.8)
x(k+1) = x(k) + v(k+1) = (4, 2) + (2.2, 2.8) = (6.2, 4.8)
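The same arithmetic, checked step by step in code. The weights 0.8 and 0.2 are the fixed coefficients used in this worked example; a real PSO update would also multiply each term by a random factor:

```python
# The slide's 2D worked example, step by step.
x = (4, 2)        # current solution x(k)
v = (-2, 2)       # current velocity v(k) -- the inertia term
pbest = (9, 1)    # particle's best solution
gbest = (5, 10)   # global best solution

cognitive = (pbest[0] - x[0], pbest[1] - x[1])   # (5, -1)
social = (gbest[0] - x[0], gbest[1] - x[1])      # (1, 8)

# New velocity: inertia + 0.8 * cognitive + 0.2 * social.
v_next = (v[0] + 0.8 * cognitive[0] + 0.2 * social[0],
          v[1] + 0.8 * cognitive[1] + 0.2 * social[1])  # (2.2, 2.8)

# New position: current position plus new velocity.
x_next = (x[0] + v_next[0], x[1] + v_next[1])           # (6.2, 4.8)
```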

Pseudocode (http://www.swarmintelligence.org/tutorials.php)

Equation (a): v[] = c0 * v[] + c1 * rand() * (pbest[] - present[]) + c2 * rand() * (gbest[] - present[])
(in the original method, c0 = 1, but many researchers now play with this parameter)

Equation (b): present[] = present[] + v[]

Parameters

Number of particles (swarm size)

C1: importance of personal best

C2: importance of neighbourhood best

Vmax: limit on velocity
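Vmax is typically enforced by clipping each velocity component after applying equation (a), so a particle cannot overshoot the search space in a single step. A minimal sketch:

```python
def clamp_velocity(v, vmax):
    """Clip each velocity component to the range [-vmax, vmax]."""
    return [max(-vmax, min(vmax, vi)) for vi in v]
```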

Pseudocode (http://www.swarmintelligence.org/tutorials.php)

For each particle
    Initialize particle
End

Do
    For each particle
        Calculate fitness value
        If the fitness value is better than its personal best
            Set current value as the new pBest
    End

    Choose the particle with the best fitness value of all as gBest
    For each particle
        Calculate particle velocity according to equation (a)
        Update particle position according to equation (b)
    End
While maximum iterations or minimum error criteria is not attained
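Putting the pseudocode and equations (a) and (b) together, a compact sketch in Python. The parameter defaults here (c0 = 0.7, c1 = c2 = 1.5, swarm size 30, and so on) are common choices, not values from the slides:

```python
import random

def pso(fitness, dim, n_particles=30, c0=0.7, c1=1.5, c2=1.5,
        vmax=1.0, bounds=(-5.0, 5.0), iterations=200, seed=0):
    """Minimise `fitness` over `dim` dimensions using equations (a) and (b)."""
    rng = random.Random(seed)
    lo, hi = bounds

    # Initialize each particle with a random position and zero velocity.
    x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [xi[:] for xi in x]
    pbest_f = [fitness(xi) for xi in x]
    gbest = min(pbest, key=fitness)[:]

    for _ in range(iterations):
        for i in range(n_particles):
            for d in range(dim):
                # Equation (a): inertia + cognitive + social terms.
                v[i][d] = (c0 * v[i][d]
                           + c1 * rng.random() * (pbest[i][d] - x[i][d])
                           + c2 * rng.random() * (gbest[d] - x[i][d]))
                v[i][d] = max(-vmax, min(vmax, v[i][d]))  # Vmax clamp
                # Equation (b): move the particle.
                x[i][d] += v[i][d]
            f = fitness(x[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = x[i][:], f
                if f < fitness(gbest):
                    gbest = x[i][:]
    return gbest
```

On a smooth test function such as the 2D sphere function (sum of squared coordinates), the swarm converges close to the origin within a couple of hundred iterations.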
