
Page 1: EWRI 2008 May 12, 2008

1

A New Algorithm for Water Distribution System Optimization: Discrete Dynamically Dimensioned Search (DDDS)

EWRI 2008, May 12, 2008

Dr. Bryan Tolson (1), Masoud A. Esfahani (1), Dr. Holger Maier (2), Aaron Zecchin (2)

1. Department of Civil & Environmental Engineering, University of Waterloo, Canada

2. School of Civil, Environmental and Mining Engineering, University of Adelaide

Page 2: EWRI 2008 May 12, 2008

2

Research Goal

• Develop a simple, parsimonious algorithm for constrained, single-objective Water Distribution System (WDS) design optimization

• Algorithm design goals:
  1. Eliminate the need to fine-tune algorithm parameters (regular algorithm + penalty function parameters)
  2. Avoid poor solutions with high reliability

• Build on the efficient and effective DDS algorithm for continuous optimization

Page 3: EWRI 2008 May 12, 2008

3

Background: DDS Algorithm

• Simple and fast approximate stochastic global optimization algorithm
• For continuous optimization problems
• Single-solution search (not population based)
• Designed originally for computationally expensive automatic hydrologic model calibration:
  – generate good* results in the modeler's time frame
  – algorithm parameter tuning is unnecessary
• Tolson & Shoemaker (2007), WRR

Page 4: EWRI 2008 May 12, 2008

4

DDS Description

• General DDS search strategy:

0. User inputs:

- maximum function evaluations

- decision variable ranges

- perturbation size parameter (0.2*)

1. Initialize starting solution

2. Perturb current best solution to generate candidate solution

3. Compare candidate solution to best solution and update best solution if necessary

4. Repeat from step 2 until maximum objective function evaluations completed.

Page 5: EWRI 2008 May 12, 2008

5

DDS Description

• The key to DDS is the perturbation in step 2:
  – search globally at the start of the search by perturbing all decision variables (DVs) from their current best values
  – search locally at the end of the search by perturbing typically only 1 decision variable (DV) from its current best value
  – perturbed DVs are generated from a normal probability distribution centered on the current best value

• The global-to-local search strategy is scaled to the user-specified maximum number of objective function evaluations

• The only information used to direct candidate solution sampling is the current best solution (see the sketch below)
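To illustrate how this global-to-local transition is scaled to the evaluation budget, the short snippet below assumes the decision-variable selection probability used in Tolson & Shoemaker (2007), P(i) = 1 − ln(i)/ln(m), and simply prints the expected number of perturbed DVs at a few points in a search; the numbers and function name are illustrative only.

```python
import math

def expected_perturbed_dvs(i, m, d):
    """Expected # of decision variables perturbed at evaluation i of m (d DVs total)."""
    p_select = 1.0 - math.log(i) / math.log(m)   # shrinks from 1 toward 0 as i -> m
    return max(1.0, p_select * d)                # at least one DV is always perturbed

d, m = 21, 50_000   # e.g. the 21-pipe NYTP sizing problem with a 50,000-evaluation budget
for i in (1, 100, 5_000, 25_000, 50_000):
    print(f"evaluation {i:>6}: ~{expected_perturbed_dvs(i, m, d):.1f} of {d} DVs perturbed")
```

Early in the search essentially all DVs are perturbed (global search); by the final evaluations only a single DV is perturbed (local search).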

Page 6: EWRI 2008 May 12, 2008

6

DDS to Discrete DDS (DDDS)

• The only modification is to discretize the DV perturbation distribution (see the sketch below)

[Figure: discrete probability distribution of candidate solution option numbers for a single decision variable with 16 possible values and a current best solution of xbest = 8. Default DDDS-v1 r-parameter of 0.2*. X-axis: option # for decision variable x (1–16); y-axis: probability (0.00–0.15).]
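One plausible way to produce a discrete distribution with this shape is to sample the usual DDS normal perturbation on the option index and round it back onto the option grid. The sketch below shows that idea only; it is not necessarily the exact DDDS-v1 rule, and the function name is illustrative.

```python
import random

def discrete_candidate(best_option, n_options, r=0.2):
    """Sample a candidate option number (1..n_options) for one decision variable.

    Assumes the continuous DDS perturbation (normal, std. dev. = r * range) is
    rounded to the nearest option and reflected at the ends of the option list.
    """
    sigma = r * n_options                        # perturbation std. dev. on the index scale
    cand = int(round(best_option + sigma * random.gauss(0.0, 1.0)))
    if cand < 1:                                 # reflect at the lower bound
        cand = 2 - cand
    if cand > n_options:                         # reflect at the upper bound
        cand = 2 * n_options - cand
    return min(max(cand, 1), n_options)

# Rough check of the induced distribution for 16 options and a current best of 8,
# analogous to the figure above.
counts = [0] * 17
for _ in range(100_000):
    counts[discrete_candidate(8, 16)] += 1
print([round(c / 100_000, 3) for c in counts[1:]])
```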

Page 7: EWRI 2008 May 12, 2008

7

Global to Local Search

• The key to DDS and DDDS is to search globally at the start of the search and finish by searching locally

• Consider a WDS example with 4 decision variables:

[Diagram: example candidate solutions for Pipe 1–Pipe 4 at the start of the search vs. the end of the search; the best current solution is shown in red.]

Page 8: EWRI 2008 May 12, 2008

8

Global to Local Search (continued)

[Diagram: additional example candidate solutions for Pipe 1–Pipe 4, illustrating the transition from global search (start) to local search (end); the best current solution is shown in red. Bullets repeated from the previous slide.]

Page 9: EWRI 2008 May 12, 2008

9

General Constrained WDS Optimization Formulation

Given the pipe layout, its connectivity, and the nodal demands, choose pipe diameters (the decision variables) that:

Minimize total pipe costs

Subject to:
• meeting minimum nodal pressure requirements
• selecting pipe diameters from a set of discrete alternatives

Note that the hydraulic solver (e.g., EPANET2) determines a flow regime that automatically satisfies the hydraulic constraints (conservation of mass and energy).
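Written out symbolically (the notation below — c(·), L_j, D, H_i — is ours, not from the slides), the formulation is roughly:

$$
\begin{aligned}
\min_{D_1,\dots,D_{n_p}}\quad & \sum_{j=1}^{n_p} c(D_j)\,L_j && \text{total pipe cost (unit cost of diameter } D_j \text{ times pipe length } L_j)\\
\text{subject to}\quad & H_i(D) \;\ge\; H_i^{\min}, \quad i = 1,\dots,\#\text{nodes} && \text{minimum nodal pressure requirements}\\
& D_j \in \{d_1, \dots, d_K\}, \quad j = 1,\dots,n_p && \text{discrete diameter alternatives}
\end{aligned}
$$

Here H_i(D) is the nodal pressure computed by the hydraulic solver (EPANET2) for the diameter vector D.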

Page 10: EWRI 2008 May 12, 2008

10

DDDS for WDS Optimization

1. Add a constraint handling technique to account for the nodal pressure constraints
   – DDDS only explicitly handles DV bound constraints
   – DDDS compares two solutions based only on rank (which one is better) to update the current best solution
     • therefore, objective function scaling is irrelevant
   – use a parameterless penalty function such that the objective (Costs) is defined as:
     • Costs = total pipe costs for feasible solutions, or
     • for infeasible solutions:
   – same as the tournament selection-based method of Deb (2000)

$$
\text{Costs} \;=\; \text{Cost}_{\max} \;+\; \sum_{i=1}^{\#\text{nodes}} \Big(\max\big[\,0,\; H_i^{\min} - H_i(\mathbf{x})\,\big]\Big)^{2}
$$

where Cost_max = total pipe cost evaluated with all pipes at the maximum diameter, H_i^min = minimum required pressure at node i, and H_i(x) = actual pressure at node i for solution x.
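A minimal sketch of this parameterless penalty, assuming the squared-violation form shown above and placeholder `unit_cost` and `simulate_pressures` helpers (the latter standing in for the EPANET2-based hydraulic evaluation):

```python
def penalized_cost(diameters, lengths, unit_cost, h_min, simulate_pressures, cost_max):
    """Objective used to compare candidate WDS solutions by rank.

    diameters          -- chosen pipe diameters (the decision variables)
    lengths, unit_cost -- pipe lengths and a cost-per-length lookup by diameter
    h_min              -- minimum required pressure at each node
    simulate_pressures -- placeholder hydraulic solver call (e.g. an EPANET2 run)
    cost_max           -- total pipe cost with every pipe at its maximum diameter
    """
    cost = sum(unit_cost[d] * l for d, l in zip(diameters, lengths))
    pressures = simulate_pressures(diameters)

    # squared pressure violations over all nodes (zero if feasible)
    violation = sum(max(0.0, h_req - h) ** 2 for h_req, h in zip(h_min, pressures))

    if violation == 0.0:
        return cost              # feasible: plain network cost
    # infeasible: any infeasible solution scores worse than any feasible one,
    # so rank-based comparison needs no penalty parameter (Deb, 2000)
    return cost_max + violation
```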

Page 11: EWRI 2008 May 12, 2008

11

DDDS for WDS Optimization

Second modification:

2. At the end of the search, avoid wasting excessive function evaluations on candidate solutions with only one pipe perturbed from the best solution
   – depending on the # of DVs, this waste can be substantial (e.g., ~50 or fewer DVs)
   – try something more productive: one-pipe perturbations from a good solution will generally not improve the solution, since good solutions are typically 'just' feasible

Page 12: EWRI 2008 May 12, 2008

12

Experimental Approach

1. Determine whether the DDDS extension of DDS for WDS optimization is competitive with high-quality Ant Colony Optimization (ACO) results (HP & NYTP)

2. Assess improvements of the multi-cycle DDDS approach over basic DDDS

3. Apply DDDS to a large-scale WDS optimization problem (hundreds of pipes to size)

No algorithm parameter tuning in any of the steps above.

Page 13: EWRI 2008 May 12, 2008

13

WDS Case Studies

• New York Tunnels (NYTP) – see Maier et al. (2003): 21 decision variables, 16 options each, search space size 16^21 = 1.9×10^25

• Doubled New York Tunnels (2-NYTP) – see Maier et al. (2003): 42 decision variables, 16 options each, search space size 16^42 = 3.74×10^50

• Hanoi (HP) – see Maier et al. (2003): 34 decision variables, 6 options each, search space size 6^34 = 2.8×10^26

• Balerma – introduced by Reca and Martinez (2006): 454 decision variables, 10 options each, search space size 10^454

• EPANET2 is used as the hydraulic solver; library functions from the EPANET Toolkit are linked to the DDDS code in Matlab.

• All previous results in the literature for other algorithms also use EPANET2 as the hydraulic solver.

Page 14: EWRI 2008 May 12, 2008

14

Results in Proceedings Paper

• Evaluated a very simple fix to excessive 1-pipe perturbations by DDDS (called DDDS-v1), which showed:
  – DDDS-v1 results for NYTP of comparable quality to various ACO algorithms in Zecchin et al. (2007)
  – DDDS-v1 results for HP that were better on average than the best ACO algorithm in Zecchin et al. (2007)

• Our new approach shows good potential!

• The remaining slides highlight some new results to appear in an extension to the conference paper …

[Bar chart: Hanoi Problem. Average best objective function value across 20 optimization trials (cost in millions $), roughly 6–9 M$, for the algorithms AS, ACS, ASelite, ASrank, MMAS, ASi-best, and DDDS-v1. DDDS-v1 used 50,000 function evaluations (fevals); all ACO algorithms used 120,000 function evaluations.]

[Bar chart: NYTP Problem. Average best objective function value across 20 optimization trials (cost in millions $), roughly 38–40 M$, for the algorithms AS, ACS, ASelite, ASrank, MMAS, ASi-best, and DDDS-v1. DDDS-v1 used 50,000 function evaluations (fevals); all ACO algorithms used 45,000 function evaluations. The chart also carries an "NFS" annotation.]

Page 15: EWRI 2008 May 12, 2008

15

Basics of Multi-Cycle DDDS

Specify the maximum # of model evaluations, M.

Cycle 1, C1: start a DDDS optimization trial and stop once DDDS is perturbing only 1 decision variable (DV) (50–75% of M)*

Cycle 2, C2 (global): start an independent DDDS optimization trial using the remaining computational budget (25–50% of M); again stop when DDDS is perturbing only 1 DV

Cycle 3, C3 (more local): combine solutions from the first 2 cycles to initialize a third DDDS optimization trial (5–15% of M); again stop when DDDS is perturbing only 1 DV

Cycle 4, C4 (more local): start another DDDS optimization trial that refines the best current solution from the above 3 cycles, considering a smaller-dimension problem (fix some pipe diameters) (<5% of M); again stop when DDDS is perturbing only 1 DV

Cycle 5, 2P (very local search heuristic): apply a simple two-pipe change heuristic to refine the current best solution from cycle 4 (remainder of M); optionally, this can continue until a local minimum is found (2P-stuck)

NOTE:
- the point at which the DDS search perturbs a single DV varies mainly with problem dimension and secondarily with M
- with hundreds of DVs, multiple cycles are unnecessary because this point is not reached until >95% of M is completed (i.e., effort is not being wasted)

(A rough budget-allocation sketch follows below.)
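A minimal sketch of how the multi-cycle budget might be laid out in code; the fractions below are simply midpoints of the ranges quoted on this slide, and the names are illustrative rather than the authors' implementation.

```python
# Nominal share of the total evaluation budget M given to each cycle, taken from
# the ranges on this slide (C1: 50-75%, C2: 25-50%, C3: 5-15%, C4: <5%, 2P: rest).
# Cycles 1-4 actually stop earlier, as soon as DDDS is down to perturbing one DV.
CYCLE_BUDGET_FRACTIONS = {
    "C1": 0.60,   # first full DDDS trial
    "C2": 0.25,   # independent (global) DDDS trial
    "C3": 0.10,   # DDDS trial initialized from the combined C1/C2 solutions
    "C4": 0.03,   # reduced-dimension refinement of the best solution
    "2P": 0.02,   # two-pipe change polishing with whatever budget remains
}

def cycle_budgets(M):
    """Translate the fractions above into evaluation counts for a total budget M."""
    return {name: int(frac * M) for name, frac in CYCLE_BUDGET_FRACTIONS.items()}

print(cycle_budgets(300_000))   # e.g. the ~300,000-evaluation 2-NYTP experiment
```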

Page 16: EWRI 2008 May 12, 2008

16

2-NYTP Case Study

• From Zecchin et al. (2007)

• The 6 ACO algorithms in Zecchin et al. use 500,000 function evaluations
  – optimal algorithm parameters determined for each algorithm using millions of evaluations

• For multi-cycle DDDS, specify an approximate maximum of 300,000 function evaluations
  – no algorithm parameter tuning
  – simply observe the improvement achieved by each cycle

• 20 optimization trials per algorithm

Page 17: EWRI 2008 May 12, 2008

17

2-NYTP Case Study – Cycle 1 performance

[Figure: empirical CDF of best objective function values for the Doubled New York Tunnels problem. X-axis: objective function (cost, millions $), 77–84; y-axis: probability of an equal or better solution, 0–1. Curve shown: C1, fevals = 222,000.]

Page 18: EWRI 2008 May 12, 2008

18

2-NYTP Case Study – impact of cycle 2

[Figure: empirical CDFs for the Doubled New York Tunnels problem (objective function, cost in millions $, 77–84, vs. probability of an equal or better solution). Curves: C1 (fevals = 222,000) and C1+C2 (fevals = 282,000).]

60,000 function evaluations is not long enough for C2 (different result for NYTP)

Page 19: EWRI 2008 May 12, 2008

19

2-NYTP Case Study – impact of cycles 3 and 4

[Figure: empirical CDFs for the Doubled New York Tunnels problem (objective function, cost in millions $, 77–84, vs. probability of an equal or better solution). Curves: C1 (fevals = 222,000), C1+C2 (fevals = 282,000), C1+C2+C3 (fevals = 296,000), and C1+C2+C3+C4 (avg. fevals = 298,000).]

Page 20: EWRI 2008 May 12, 2008

20

2-NYTP Case Study – impact of 2P local search heuristic

[Figure: empirical CDFs for the Doubled New York Tunnels problem (objective function, cost in millions $, 77–84, vs. probability of an equal or better solution). Curves: C1 (fevals = 222,000), C1+C2 (fevals = 282,000), C1+C2+C3 (fevals = 296,000), C1+C2+C3+C4 (avg. fevals = 298,000), C1+C2+C3+C4+2P (avg. fevals = 317,000), and C1+C2+C3+C4+2P(stuck) (avg. fevals = 369,000).]

The 2P change heuristic is a very effective polisher at the end* of the search.

Page 21: EWRI 2008 May 12, 2008

21

2-NYTP Case Study – add best of 6 ACO algorithms (MMAS) from Zecchin et al. (2007)

[Figure: empirical CDFs for the Doubled New York Tunnels problem (objective function, cost in millions $, 77–84, vs. probability of an equal or better solution). Curves: C1 (fevals = 222,000), C1+C2 (fevals = 282,000), C1+C2+C3 (fevals = 296,000), C1+C2+C3+C4 (avg. fevals = 298,000), C1+C2+C3+C4+2P (avg. fevals = 317,000), C1+C2+C3+C4+2P(stuck) (avg. fevals = 369,000), and MMAS (fevals = 500,000).]

Page 22: EWRI 2008 May 12, 2008

22

Constraint Handling Assessment for DDDS

• Consider results for the Hanoi network, where many studies report algorithm difficulty in locating any feasible solution (Eusuff & Lansey, 2003; Zecchin et al., 2005; Zecchin et al., 2007)

Page 23: EWRI 2008 May 12, 2008

23

Constraint Handling Assessment for DDDS: HP

• Simple approach with no penalty parameters works very well

[Figure: empirical CDFs for the Hanoi problem (objective function, network cost in millions $, 6.05–6.65, vs. probability of an equal or better solution). Curves: C1 (avg. fevals = 70,930), C1+C2 (92,457), C1+C2+C3 (98,258), C1+C2+C3+C4 (99,381), C1+C2+C3+C4+2P (107,635), C1+C2+C3+C4+2P(stuck) (109,644), and MMAS (avg. fevals = 120,000) – the best of the 6 algorithms in Zecchin et al. (2007).]

Page 24: EWRI 2008 May 12, 2008

24

Large Scale WDS: Balerma

[Figure: network cost (Euros, 2,000,000–6,000,000) vs. number of function evaluations (EPANET evaluations, 0–1,000,000) for the Balerma network. Individual trials and their average are shown.]

Page 25: EWRI 2008 May 12, 2008

25

Large Scale WDS: Balerma

[Figure: network cost (Euros, 2,000,000–6,000,000) vs. number of function evaluations (EPANET evaluations, 0–1,000,000) for the Balerma network. Individual trials and their average are shown for 1-cycle DDDS with a 1,000,000 function evaluation budget and for 1-cycle DDDS with a 100,000 function evaluation budget.]

Algorithm response to a smaller user-specified computational budget.

Page 26: EWRI 2008 May 12, 2008

26

Large Scale WDS: Balerma

Median Value of Lowest Network Cost of 5 optimization trials

[Figure: median value of the lowest network cost across 5 optimization trials (Euros, 2,000,000–4,000,000) vs. number of function evaluations (EPANET evaluations, 0–10,000,000). Curves: 1-cycle DDDS-1,000,000 and 1-cycle DDDS-100,000.]

Page 27: EWRI 2008 May 12, 2008

27

Large Scale WDS: Balerma

Median Value of Lowest Network Cost of 5 optimization trials

[Figure: median value of the lowest network cost across 5 optimization trials (Euros, 2,000,000–4,000,000) vs. number of function evaluations (EPANET evaluations, 0–10,000,000). Curves: 1-cycle DDDS-1,000,000; 1-cycle DDDS-100,000; Reca and Martinez (2006) with a GA; and Reca et al. (2007) with the best of 3 metaheuristics. Annotations: "only conducted one optimization trial"; "all studies use EPANET2".]

Page 28: EWRI 2008 May 12, 2008

28

Conclusions

• DDDS for WDS optimization is parsimonious:
  – no algorithm parameter tuning
  – no penalty parameter tuning
  – no parameter adjustment here for case studies with 21–454 pipe-size decision variables

• DDDS for WDS optimization is very effective:
  – 1-cycle and multi-cycle DDDS show improved results over alternative algorithm results
  – to the best of our knowledge, DDDS (1-cycle and multi-cycle) found new best-known solutions to two WDS design problems in the literature

• The two-pipe change heuristic appears to be new

Page 29: EWRI 2008 May 12, 2008

29

QUESTIONS ?

Page 30: EWRI 2008 May 12, 2008

30

New York Tunnels problem

[Figure: empirical CDFs for the New York Tunnels problem (objective function, cost in M$, 38.5–47.5, vs. probability of an equal or better solution). Curves: C1 (avg. fevals = 29,726), C1+C2 (42,409), C1+C2+C3 (47,369), C1+C2+C3+C4 (48,171), C1+C2+C3+C4+2P (52,689), C1+C2+C3+C4+2P(stuck) (59,427), and MMAS (fevals = 45,000).]

Page 31: EWRI 2008 May 12, 2008

31

Keys to DDS

• Algorithm scales to user-specified computational limits
• Early in the search it favours global search
• Late in the search it favours local search

STEP 1. Define DDS inputs for a D-dimensional problem:
  - neighborhood perturbation size parameter, r (0.2 is the default)
  - maximum # of function evaluations, m

STEP 2. Evaluate the objective function at the initial solution.

STEP 3. Randomly select a subset of the D decision variables for perturbation from the current best solution.

STEP 4. Perturb the decision variables selected in Step 3 from their current best solution (reflect at decision variable bounds if necessary).

STEP 5. Evaluate the new solution and update the current best solution if necessary.

STEP 6. Update the function evaluation counter, i = i + 1, and check the stopping criterion:
  - IF i = m, STOP
  - ELSE repeat from STEP 3

The size of the subset decreases as the maximum function evaluation limit is approached; normally distributed perturbations with adequate variance ensure global search (see the sketch below).
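Putting Steps 1–6 together, a compact sketch of a continuous DDS trial might look like the following. This is a minimal illustration only: the subset-selection probability 1 − ln(i)/ln(m) follows Tolson & Shoemaker (2007), and the function and variable names are ours.

```python
import math
import random

def dds(objective, lower, upper, m, r=0.2, x0=None):
    """Minimal continuous DDS trial: minimize `objective` over box bounds."""
    d = len(lower)
    # STEP 1-2: inputs and evaluation of the initial solution
    x_best = x0 if x0 is not None else [random.uniform(lo, up) for lo, up in zip(lower, upper)]
    f_best = objective(x_best)

    for i in range(1, m + 1):   # the initial evaluation is counted separately in this sketch
        # STEP 3: select a subset of DVs; the inclusion probability shrinks as the
        # evaluation limit is approached (global -> local search)
        p = 1.0 - math.log(i) / math.log(m)
        subset = [j for j in range(d) if random.random() < p] or [random.randrange(d)]

        # STEP 4: normally distributed perturbation, reflected at the DV bounds
        x_new = list(x_best)
        for j in subset:
            xj = x_best[j] + r * (upper[j] - lower[j]) * random.gauss(0.0, 1.0)
            if xj < lower[j]:
                xj = lower[j] + (lower[j] - xj)
            if xj > upper[j]:
                xj = upper[j] - (xj - upper[j])
            x_new[j] = min(max(xj, lower[j]), upper[j])

        # STEP 5-6: greedy update of the current best solution
        f_new = objective(x_new)
        if f_new <= f_best:
            x_best, f_best = x_new, f_new
    return x_best, f_best

# tiny usage example on a 5-D sphere function
best_x, best_f = dds(lambda x: sum(v * v for v in x), [-5.0] * 5, [5.0] * 5, m=1000)
print(best_f)
```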

Page 32: EWRI 2008 May 12, 2008

32

Robustness of DDS

• DDS has been applied to a number of case studies, for example:
  – 6, 9, 10, 14, 20, 26, 30, 34 & 50 calibration parameters (= decision variables)
  – anywhere from 100 to 100,000 model evaluations
  – uncorrelated to very correlated decision variables

• In each case, DDS was applied with the same algorithm parameter value and typically generated the best comparative results

Page 33: EWRI 2008 May 12, 2008

33

Local Search Procedure for Polishing/Refining

• Use two procedures:
  – one-pipe change
  – two-pipe change

• The one-pipe change procedure cycles through all possible one-increment pipe diameter reductions until none can improve the solution (see the sketch below)

[Diagram: Pipe 1, Pipe 2, Pipe 3, Pipe 4]
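A minimal sketch of that one-pipe polishing loop, assuming a placeholder `is_feasible` hydraulic check (e.g. an EPANET2 run against the minimum pressure requirements). Since dropping a diameter by one increment always lowers the pipe cost, only feasibility needs to be checked.

```python
def one_pipe_polish(options, is_feasible):
    """Reduce pipe diameters one increment at a time while feasibility holds.

    options     -- list of option indices (one per pipe); lower index = smaller/cheaper diameter
    is_feasible -- placeholder hydraulic check, e.g. an EPANET2 run of the candidate design
    """
    best = list(options)
    improved = True
    while improved:                   # keep cycling until no single reduction helps
        improved = False
        for j in range(len(best)):
            if best[j] == 0:          # already at the smallest diameter
                continue
            candidate = list(best)
            candidate[j] -= 1         # one-increment diameter reduction (always cheaper)
            if is_feasible(candidate):
                best = candidate
                improved = True
    return best
```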

Page 34: EWRI 2008 May 12, 2008

34

Two Pipe Change

• An improved solution that differs in two pipes will have one pipe diameter reduced and another increased such that (see the sketch below):

1. the total WDS cost is reduced (*this does not require running EPANET*)

2. the reduced pressures due to the pipe diameter decrease are potentially mitigated by the increase in the other pipe diameter
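A sketch of the two-pipe move, again with placeholder `pipe_cost` and `is_feasible` helpers (the hydraulic check standing in for an EPANET2 run); the cheap cost test screens out candidate pairs before any simulation is needed.

```python
from itertools import permutations

def two_pipe_polish(options, n_options, pipe_cost, is_feasible):
    """Try decreasing one pipe diameter and increasing another (two-pipe change).

    options     -- list of option indices, 0..n_options-1 per pipe (higher = larger diameter)
    n_options   -- number of available diameters per pipe
    pipe_cost   -- pipe_cost(options) -> total network cost (no hydraulic run required)
    is_feasible -- placeholder hydraulic check (e.g. an EPANET2 run) of a candidate design
    """
    best = list(options)
    best_cost = pipe_cost(best)
    improved = True
    while improved:
        improved = False
        # (down, up): reduce the diameter of pipe `down`, increase the diameter of pipe `up`
        for down, up in permutations(range(len(best)), 2):
            if best[down] == 0 or best[up] == n_options - 1:
                continue
            candidate = list(best)
            candidate[down] -= 1
            candidate[up] += 1
            cost = pipe_cost(candidate)
            if cost >= best_cost:
                continue                    # condition 1: must reduce cost (no EPANET needed)
            if is_feasible(candidate):      # condition 2: pressure loss mitigated elsewhere
                best, best_cost = candidate, cost
                improved = True
    return best
```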

Page 35: EWRI 2008 May 12, 2008

35

Two Pipe Change

• How long does this take?
  – How long does it take to confirm that a solution is locally optimal, i.e., that no possible two-pipe change will improve results?

• The maximum number of combinations to be evaluated can be determined and is between:

Page 36: EWRI 2008 May 12, 2008

36

Page 37: EWRI 2008 May 12, 2008

37