Metaheuristics: Tabu Search and Particle Swarm Optimisation (PSO)

  1. Particle Swarm optimisation: Metaheuristic
  2. Particle Swarm optimisation: Tabu Search
  3. Particle Swarm optimisation: Tabu Search. Tabu Search is an improvement over basic local search that tries to avoid getting trapped in a local minimum: it allows the acceptance of non-improving moves, so that the search is not stuck in a locally optimal solution, in the hope of finding the global best. Tabu Search also escapes sub-optimal solutions by using a tabu list. A tabu list is a list of possible moves that could be performed on a solution; these moves could be swap operations (as in the TSP). When a move is made, it is made tabu for a certain number of iterations: it is added to the tabu list with a value called the tabu tenure (tabu length). With each iteration the tabu tenure is decremented, and only when the tabu tenure of a move reaches 0 can that move be performed and accepted again.
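A minimal Python sketch of the tabu-list mechanism described on slide 3. The function and parameter names, and the use of swap moves on a permutation, are assumptions made for the example, not something given in the slides; the cost function can be any objective over permutations, for example a TSP tour length.

    def tabu_search(initial_solution, cost, tenure=7, iterations=200):
        """Basic tabu search over swap moves on a permutation."""
        current = list(initial_solution)
        best = list(current)
        tabu = {}  # move (i, j) -> remaining tabu tenure

        for _ in range(iterations):
            # Decrement tenures; a move becomes usable again only when its tenure reaches 0.
            tabu = {move: t - 1 for move, t in tabu.items() if t - 1 > 0}

            # Evaluate all swap moves that are not currently tabu.
            n = len(current)
            candidates = []
            for i in range(n - 1):
                for j in range(i + 1, n):
                    if (i, j) in tabu:
                        continue
                    neighbour = list(current)
                    neighbour[i], neighbour[j] = neighbour[j], neighbour[i]
                    candidates.append((cost(neighbour), (i, j), neighbour))

            if not candidates:
                continue
            # Accept the best non-tabu move even if it does not improve the current solution.
            c, move, neighbour = min(candidates, key=lambda t: t[0])
            current = neighbour
            tabu[move] = tenure  # this move is now tabu for `tenure` iterations

            if c < cost(best):
                best = list(current)

        return best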
  4. Particle Swarm optimisation: Intensification/Diversification. Intensification: penalize solutions far from the current solution. Diversification: penalize solutions close to the current solution.
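One way to read slide 4 is as a penalty term added to the objective. The helper below is an illustrative sketch (names such as `distance` and `weight` are assumptions for the example): intensification penalises solutions far from a reference solution, while diversification penalises solutions close to it.

    def penalised_cost(solution, reference, cost, distance, weight=1.0, intensify=True):
        """Add an intensification or diversification penalty to the raw cost.

        `distance` is any solution-distance measure, for example the number of
        positions in which two permutations differ.
        """
        d = distance(solution, reference)
        penalty = weight * d if intensify else weight / (1.0 + d)
        return cost(solution) + penalty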
  5. Particle Swarm optimisation
  6. Particle Swarm optimisation: Tabu Search: finding the minimal value of the Peaks function
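The "Peaks" function on slide 6 is presumably MATLAB's standard peaks test surface, a smooth 2-D function with several local optima that is often used to demonstrate search methods. The definition below is that standard form, reproduced from memory of the common benchmark rather than taken from the slides.

    import math

    def peaks(x, y):
        """MATLAB-style 'peaks' test surface: a 2-D benchmark with several local minima and maxima."""
        return (3 * (1 - x) ** 2 * math.exp(-x ** 2 - (y + 1) ** 2)
                - 10 * (x / 5 - x ** 3 - y ** 5) * math.exp(-x ** 2 - y ** 2)
                - (1 / 3) * math.exp(-(x + 1) ** 2 - y ** 2))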
  7. Particle Swarm optimisation
  8. Particle Swarm optimisation
  9. Particle Swarm optimisation: TRAVELING SALESMAN PROBLEM. The Travelling Salesman Problem (TSP) is an NP-hard problem. It goes as follows: given a set of cities, with paths connecting each city to every other city, we need to find the shortest route that starts at a given city, visits every other city without visiting any city more than once, and returns to the starting city. The problem is easy to state but hard to solve; it has been proved that the TSP is NP-hard.
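For concreteness, a TSP tour can be represented as a permutation of city indices. The helper below is an illustrative sketch, not from the slides (`dist` is assumed to be a square distance matrix); it computes the length of a closed tour, which is exactly the quantity the search methods above try to minimise.

    def tour_length(tour, dist):
        """Total length of a closed tour: follow the cities in order, then return to the start."""
        return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))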
  10. Particle Swarm optimisation: the largest solved travelling salesman problem is an 85,900-city route.
  11. Particle Swarm optimisation
  12. Particle Swarm optimisation
  13. Particle Swarm optimisation
  14. These slides are adapted from a presentation by Maurice.Clerc@WriteMe.com, one of the main researchers in PSO. PSO was invented by Russ Eberhart (an engineering professor) and James Kennedy (a social scientist) in the USA.
  15. Explore PSO and its parameters with my app at http://www.macs.hw.ac.uk/~dwcorne/mypages/apps/pso.html
  16. Cooperation example
  17. Particle Swarm optimisation: a particle's velocity in the PSO algorithm combines three terms: v_i(k+1) = inertia + cognitive + social. [Diagram labels: GBest, PBest, x(k), x(k+1), v(k), v(k+1).]
  18. Particle Swarm optimisation: PSO solution update in 2D. x(k), the current solution, is (4, 2); PBest, the particle's best solution, is (9, 1); GBest, the global best solution, is (5, 10). Inertia: v(k) = (-2, 2).
  19. Particle Swarm optimisation: PSO solution update in 2D (continued). Cognitive: PBest - x(k) = (9, 1) - (4, 2) = (5, -1). Social: GBest - x(k) = (5, 10) - (4, 2) = (1, 8).
  20. Particle Swarm optimisation: PSO solution update in 2D (continued). v(k+1) = (-2, 2) + 0.8*(5, -1) + 0.2*(1, 8) = (2.2, 2.8).
  21. Particle Swarm optimisation: PSO solution update in 2D (continued). x(k+1) = x(k) + v(k+1) = (4, 2) + (2.2, 2.8) = (6.2, 4.8).
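The arithmetic on slides 18 to 21 can be checked directly; the snippet below reproduces the worked example with the same weights used there (0.8 on the cognitive term and 0.2 on the social term).

    x_k = (4, 2)       # current solution
    p_best = (9, 1)    # particle's best solution
    g_best = (5, 10)   # global best solution
    v_k = (-2, 2)      # previous velocity (inertia term)

    cognitive = tuple(p - x for p, x in zip(p_best, x_k))  # (5, -1)
    social = tuple(g - x for g, x in zip(g_best, x_k))     # (1, 8)

    v_next = tuple(v + 0.8 * c + 0.2 * s
                   for v, c, s in zip(v_k, cognitive, social))  # (2.2, 2.8)
    x_next = tuple(x + v for x, v in zip(x_k, v_next))          # (6.2, 4.8)

    print(v_next, x_next)  # (2.2, 2.8) (6.2, 4.8), up to floating-point rounding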
  22. Particle Swarm optimisation: Pseudocode (http://www.swarmintelligence.org/tutorials.php). Equation (a): v[] = c0 * v[] + c1 * rand() * (pbest[] - present[]) + c2 * rand() * (gbest[] - present[]) (in the original method c0 = 1, but many researchers now tune this parameter). Equation (b): present[] = present[] + v[].
  23. Particle Swarm optimisation: Parameters. Number of particles (swarm size); c1 (importance of personal best); c2 (importance of neighbourhood best); Vmax (limit on velocity).
  24. Particle Swarm optimisation: Pseudocode (http://www.swarmintelligence.org/tutorials.php).
      For each particle
          Initialize particle
      End
      Do
          For each particle
              Calculate fitness value
              If the fitness value is better than its personal best, set the current value as the new pBest
          End
          Choose the particle with the best fitness value of all particles as gBest
          For each particle
              Calculate particle velocity according to equation (a)
              Update particle position according to equation (b)
          End
      While maximum iterations or minimum error criterion is not attained
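The pseudocode on slide 24 translates almost line for line into the Python sketch below. It is a minimal, illustrative implementation for continuous minimisation; the parameter values, the velocity clamp and the example objective are assumptions made for the sketch, not part of the slides. Equation (a) and equation (b) refer to the update rules on slide 22.

    import random

    def pso(fitness, dim, n_particles=30, c0=0.7, c1=1.5, c2=1.5,
            v_max=1.0, max_iter=100, bounds=(-5.0, 5.0)):
        """Minimise `fitness` over `dim` dimensions with a basic particle swarm."""
        lo, hi = bounds
        # Initialise each particle with a random position and velocity.
        x = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
        v = [[random.uniform(-v_max, v_max) for _ in range(dim)] for _ in range(n_particles)]
        p_best = [list(p) for p in x]
        p_best_val = [fitness(p) for p in x]
        g_best = list(p_best[min(range(n_particles), key=lambda i: p_best_val[i])])

        for _ in range(max_iter):
            for i in range(n_particles):
                # If the fitness value is better than the personal best, update pBest.
                f = fitness(x[i])
                if f < p_best_val[i]:
                    p_best_val[i], p_best[i] = f, list(x[i])
            # Choose the particle with the best fitness value of all particles as gBest.
            g_best = list(p_best[min(range(n_particles), key=lambda i: p_best_val[i])])

            for i in range(n_particles):
                for d in range(dim):
                    # Equation (a): velocity update (inertia + cognitive + social).
                    v[i][d] = (c0 * v[i][d]
                               + c1 * random.random() * (p_best[i][d] - x[i][d])
                               + c2 * random.random() * (g_best[d] - x[i][d]))
                    v[i][d] = max(-v_max, min(v_max, v[i][d]))  # clamp to Vmax
                    # Equation (b): position update.
                    x[i][d] += v[i][d]

        return g_best, fitness(g_best)

    # Example: minimise the 2-D sphere function (an illustrative test objective).
    best, value = pso(lambda p: sum(xi * xi for xi in p), dim=2)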
  25. Particle Swarm optimisation