Different Local Search Algorithms in STAGE for Solving Bin Packing Problem
Gholamreza Haffari
Sharif University of Technology
[email protected]
Overview
Combinatorial Optimization Problems and State Spaces
STAGE Algorithm
Local Search Algorithms
Results
Conclusion and Future Works
Optimization Problems
Objective function: F(x1, x2, …, xn)
Find the vector X = (x1, x2, …, xn) which minimizes (or maximizes) F
Constraints: g1(X) ≤ 0, g2(X) ≤ 0, …, gm(X) ≤ 0
Combinatorial Optimization Problems (COP)
A special kind of optimization problem whose variables are discrete.
Most COPs are NP-Hard, i.e. no polynomial-time algorithm is known for solving them.
Satisfiability (SAT)
Given a formula f(x1, x2, …, xn) in propositional calculus, is there an assignment to its variables making it true?
The problem is NP-Complete (Cook 1971).
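As a small illustration (the formula and the checking function below are invented, not from the slides), verifying a candidate assignment of a CNF formula is easy even though finding one is NP-Complete:

```python
# Illustration: checking one assignment of the small CNF formula
# f(x1, x2, x3) = (x1 OR not x2) AND (x2 OR x3) AND (not x1 OR not x3).
# Clauses are lists of literals; literal +i means xi, -i means not xi.
formula = [[1, -2], [2, 3], [-1, -3]]

def satisfies(assignment, clauses):
    """True if the 1-indexed boolean assignment makes every clause true."""
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

# x1=True, x2=True, x3=False  (index 0 unused, variables are 1-based)
assignment = [None, True, True, False]
print(satisfies(assignment, formula))  # → True
```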
Bin Packing Problem (BPP)
Given a list (a1, a2, …) of items, each with a size s(ai) > 0, and a bin capacity C, what is the minimum number of bins needed to pack all the items?
Problem is NP-Complete (Garey and Johnson 1979)
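As an illustration (the instance below is invented, not from the slides), a quick lower bound on the optimum is ⌈Σ s(ai) / C⌉, and the classic first-fit heuristic produces a feasible packing:

```python
import math

# Hypothetical small BPP instance: item sizes and a bin capacity.
sizes = [4, 8, 1, 4, 2, 1]
C = 10

# Trivial lower bound: the total size must fit into m bins of capacity C.
lower_bound = math.ceil(sum(sizes) / C)

# First-fit heuristic: place each item into the first bin with room.
bins = []  # each bin is the list of item sizes packed into it
for s in sizes:
    for b in bins:
        if sum(b) + s <= C:
            b.append(s)
            break
    else:
        bins.append([s])

print(lower_bound, len(bins))  # → 2 2  (first-fit matches the bound here)
```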
An Example of BPP
[Figure: items a1–a4 packed into bins b1–b4]
Objects list: a1, a2, …, an
Each bin bj has capacity C
Objective function: minimize m, the number of bins used
Constraint: Σ s(ai) over all items ai assigned to bin bj is at most C, for 1 ≤ j ≤ m
Definition of State in BPP
A particular permutation of the items in the object list is called a state.
[Figure: a permutation (a1, a2, a3, a4) is decoded by a greedy algorithm into a packing of bins b1–b4]
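Since a state is a permutation that a greedy algorithm decodes into a packing, the objective F over states can be sketched as follows (first-fit in permutation order is an assumption; the slides do not fix the greedy rule):

```python
# Sketch: a state is a permutation of item indices; a greedy first-fit
# pass in permutation order decodes it into a concrete packing, and the
# objective F(state) is the number of bins used.
def decode(state, sizes, C):
    """Pack items in the order given by `state` (a tuple of item indices)."""
    bins = []                      # remaining free space per open bin
    for i in state:
        for j, free in enumerate(bins):
            if sizes[i] <= free:
                bins[j] -= sizes[i]
                break
        else:
            bins.append(C - sizes[i])   # open a new bin
    return len(bins)               # objective: fewer bins is better

# Different permutations of the same items can decode to different packings:
sizes = [4, 4, 4, 6, 6, 6]
print(decode((0, 1, 2, 3, 4, 5), sizes, C=10))  # → 4
print(decode((0, 3, 1, 4, 2, 5), sizes, C=10))  # → 3 (pairs each 4 with a 6)
```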
State Space of BPP
All permutations of the object list, e.g.:
a1, a2, a3, a4
a1, a2, a4, a3
a1, a4, a2, a3
a2, a4, a3, a1
…
A Local Search Algorithm
1) s0: a random start state
2) for i = 0 to +∞:
   - generate a new solutions set S from the current solution si
   - decide whether si+1 = s′ ∈ S or si
   - if a stopping condition is satisfied, return the best solution found
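A minimal runnable version of this loop, assuming a "first improving neighbor" decision rule and swap neighborhoods over permutations (neither detail is fixed by the slide):

```python
def local_search(s0, objective, neighbors, max_iters=1000):
    """Skeleton of the loop on the slide: generate neighbors S of s_i,
    decide whether s_{i+1} is some s' in S or s_i, and stop when no
    neighbor improves (a local optimum)."""
    s = best = s0
    for _ in range(max_iters):
        improved = False
        for s2 in neighbors(s):
            if objective(s2) < objective(s):   # minimization
                s = s2
                improved = True
                break
        if objective(s) < objective(best):
            best = s
        if not improved:                       # stopping condition
            break
    return best

def swap_neighbors(state):
    """All states reachable by swapping two positions of a permutation."""
    for i in range(len(state)):
        for j in range(i + 1, len(state)):
            t = list(state)
            t[i], t[j] = t[j], t[i]
            yield tuple(t)

# Toy objective (invented): displacement of a permutation from sorted order.
obj = lambda s: sum(abs(v - i) for i, v in enumerate(s))
start = (3, 1, 0, 2)
print(local_search(start, obj, swap_neighbors))  # → (0, 1, 2, 3)
```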
Local Optimum Solutions
The quality of a local optimum resulting from a local search process depends on the starting state.
Multi-Start LSA
Runs the base local search algorithm from different starting states and returns the best result found.
Is it possible to choose a promising new starting state?
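A sketch of the multi-start wrapper (the helper names and the toy objective below are invented for illustration):

```python
import random

def multi_start(local_search_from, random_start, restarts=10, seed=0):
    """Run a base local search from several random starts and keep the
    best local optimum found. `local_search_from` is assumed to return
    (solution, objective_value) for a given start state."""
    rng = random.Random(seed)
    best_sol, best_val = None, float("inf")
    for _ in range(restarts):
        sol, val = local_search_from(random_start(rng))
        if val < best_val:
            best_sol, best_val = sol, val
    return best_sol, best_val

# Toy usage: hill-climb f(x) = (x - 3)**2 over integers from random starts.
def hc(x):
    f = lambda x: (x - 3) ** 2
    while True:
        step = min((x - 1, x + 1), key=f)
        if f(step) >= f(x):
            return x, f(x)           # local (here: global) optimum
        x = step

print(multi_start(hc, lambda rng: rng.randint(-50, 50)))  # → (3, 0)
```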
Other Features of a State
Other features of a state, beyond its objective value, can help the search process (Boyan 1998).
Previous Experiences
There are relationships among the local optima of a COP, so previously found local optima can help to locate more promising start states.
Core Ideas
Use an Evaluation Function (EF) to predict the eventual outcome of doing a local search from a state.
The EF is a function of some features of the state.
The EF is retrained gradually.
STAGE Algorithm
Uses an Evaluation Function to locate a good start state.
Does local search.
Retrains the EF with the newly generated search trajectory.
[Diagram of STAGE (Boyan 98): the algorithm alternates between a Learning Phase, in which the EF is trained to map state features to a prediction of the search outcome, and an Execution Phase, in which the EF is applied to states so that another local search can find a good new starting point.]
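A compressed toy sketch of the two phases (all details here are assumptions: integer states, a single hand-chosen feature, and a least-squares line as the EF; Boyan's STAGE uses richer function approximators and real problem features):

```python
import random

# Toy setting: states are integers, the objective f is "rugged" (mod-7
# term) plus a smooth trend, and the global optimum sits near 30.
f = lambda x: (x % 7) + 0.1 * abs(x - 30)
feat = lambda s: abs(s - 30)     # hand-chosen feature, deliberately informative

def hill_climb(x, g, steps=100):
    """Greedy descent on g over integer states with +-1 neighbors."""
    for _ in range(steps):
        nxt = min((x - 1, x + 1), key=g)
        if g(nxt) >= g(x):
            break                # local optimum of g
        x = nxt
    return x

def fit(xs, ys):
    """Least-squares line y ≈ a + b*x: the EF's learned weights."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    var = sum((x - mx) ** 2 for x in xs) or 1.0
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / var
    return my - b * mx, b

def stage_sketch(seed=0):
    rng = random.Random(seed)
    # Learning phase: run plain local search from random starts, recording
    # (feature of start, outcome of that search) pairs to train the EF.
    xs, ys, best = [], [], None
    for _ in range(10):
        s = rng.randint(-20, 80)
        opt = hill_climb(s, f)
        if best is None or f(opt) < f(best):
            best = opt
        xs.append(feat(s))
        ys.append(f(opt))
    a, b = fit(xs, ys)           # EF(s) ≈ predicted outcome of a search from s
    # Execution phase: hill-climb the EF itself to pick a promising start,
    # then run local search on the true objective from that start.
    start = hill_climb(rng.randint(-20, 80), lambda s: a + b * feat(s))
    final = hill_climb(start, f)
    return final if f(final) < f(best) else best

print(stage_sketch())  # typically the global optimum at 28
```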
Analysis of STAGE
What is the effect of using different local search algorithms?
Local search algorithms studied:
Best Improvement Hill Climbing (BIHC)
First Improvement Hill Climbing (FIHC)
Stochastic Hill Climbing (STHC)
Best Improvement HC
Generates all of the neighboring states, and then selects the best one.
[Figure: every neighbor's value is evaluated and the best is chosen]
First Improvement HC
Generates neighboring states systematically, and then selects the first improving one.
[Figure: neighbors are evaluated in order until an improving one is found]
Stochastic HC
Stochastically generates some of the neighboring states, and then selects the best one.
The size of the sampled neighbor set is called PATIENCE.
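The three selection rules differ only in how the neighbor set S is sampled and how s′ is chosen; they can be sketched as single-step functions (swap neighborhoods and all helper names are assumptions, not from the slides):

```python
import random

def bihc_step(state, f, neighbors):
    """Best Improvement: evaluate every neighbor, move to the best one."""
    best = min(neighbors(state), key=f)
    return best if f(best) < f(state) else None      # None = local optimum

def fihc_step(state, f, neighbors):
    """First Improvement: scan neighbors systematically, take the first
    one that improves on the current state."""
    for s2 in neighbors(state):
        if f(s2) < f(state):
            return s2
    return None

def sthc_step(state, f, neighbors, patience, rng=random):
    """Stochastic: sample PATIENCE random neighbors, take the best of them.
    (patience must not exceed the neighborhood size.)"""
    sample = rng.sample(list(neighbors(state)), patience)
    best = min(sample, key=f)
    return best if f(best) < f(state) else None

def swap_neighbors(state):
    """All states reachable by swapping two positions of a permutation."""
    for i in range(len(state)):
        for j in range(i + 1, len(state)):
            t = list(state)
            t[i], t[j] = t[j], t[i]
            yield tuple(t)

# Toy objective (invented): displacement of a permutation from sorted order.
f = lambda s: sum(abs(v - i) for i, v in enumerate(s))
s = (2, 0, 1)
print(bihc_step(s, f, swap_neighbors))  # → (0, 2, 1)
print(fihc_step(s, f, swap_neighbors))  # → (0, 2, 1)
```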
Different LSAs
Different LSAs for solving the U250_00 instance (http://www.ms.ic.ac.uk/info.html)
Different LSAs, bounded steps
Some Results
The higher the accuracy in choosing the next state, the better the quality of the final solution (comparing STHC1 and STHC2, with PATIENCE1 = 350 and PATIENCE2 = 700).
Deeper steps result in higher-quality and faster solutions (comparing BIHC with the others).
Different LSAs, bounded moves
Some Results
It is better to search the solution space randomly rather than systematically (comparing STHC with the others).
Future Works
Using other learning structures in STAGE.
Verifying these results on other problems (for example, Graph Coloring).
Using other LSAs, such as Simulated Annealing.
Questions