Evaluation of a Particle Swarm Algorithm for Biomechanical Optimization

Jaco F. Schutte, Department of Mechanical & Aerospace Engineering, University of Florida, Gainesville, FL 32611-6250
Byung-Il Koh, Department of Electrical and Computer Engineering, University of Florida, Gainesville, FL 32611-6250
Jeffrey A. Reinbolt, Department of Mechanical & Aerospace Engineering, University of Florida, Gainesville, FL 32611-6250
Raphael T. Haftka, Department of Mechanical & Aerospace Engineering, University of Florida, Gainesville, FL 32611-6250
Alan D. George, Department of Electrical & Computer Engineering, University of Florida, Gainesville, FL 32611-6250
Benjamin J. Fregly(1), e-mail: fregly@ufl.edu, Department of Mechanical & Aerospace Engineering and Department of Biomedical Engineering, University of Florida, Gainesville, FL 32611-6250

Evaluation of a Particle Swarm Algorithm for Biomechanical Optimization

Optimization is frequently employed in biomechanics research to solve system identification problems, predict human movement, or estimate muscle or other internal forces that cannot be measured directly. Unfortunately, biomechanical optimization problems often possess multiple local minima, making it difficult to find the best solution. Furthermore, convergence in gradient-based algorithms can be affected by scaling to account for design variables with different length scales or units. In this study we evaluate a recently developed version of the particle swarm optimization (PSO) algorithm to address these problems. The algorithm's global search capabilities were investigated using a suite of difficult analytical test problems, while its scale-independent nature was proven mathematically and verified using a biomechanical test problem. For comparison, all test problems were also solved with three off-the-shelf optimization algorithms—a global genetic algorithm (GA) and multistart gradient-based sequential quadratic programming (SQP) and quasi-Newton (BFGS) algorithms.
For the analytical test problems, only the PSO algorithm was successful on the majority of the problems. When compared to previously published results for the same problems, PSO was more robust than a global simulated annealing algorithm but less robust than a different, more complex genetic algorithm. For the biomechanical test problem, only the PSO algorithm was insensitive to design variable scaling, with the GA algorithm being mildly sensitive and the SQP and BFGS algorithms being highly sensitive. The proposed PSO algorithm provides a new off-the-shelf global optimization option for difficult biomechanical problems, especially those utilizing design variables with different length scales or units. [DOI: 10.1115/1.1894388]

1 Introduction

Optimization methods are used extensively in biomechanics research to predict movement-related quantities that cannot be measured experimentally. Forward dynamic, inverse dynamic, and inverse static optimizations have been used to predict muscle, ligament, and joint contact forces during experimental or predicted movements (e.g., [1-12]). System identification optimizations have been employed to tune a variety of musculoskeletal model parameters to experimental movement data (e.g., [13-17]). Image matching optimizations have been performed to align implant and bone models to in vivo fluoroscopic images collected during loaded functional activities (e.g., [18-20]).

Since biomechanical optimization problems are typically nonlinear in the design variables, gradient-based nonlinear programming has been the most widely used optimization method. The increasing size and complexity of biomechanical models has also led to the parallelization of gradient-based algorithms, since gradient calculations can be easily distributed to multiple processors [1-3]. However, gradient-based optimizers can suffer from several important limitations. They are local rather than global by nature and so can be sensitive to the initial guess. Experimental or numerical noise can exacerbate this problem by introducing multiple local minima into the problem. For some problems, multiple local minima may exist due to the nature of the problem itself. In most situations, the necessary gradient values cannot be obtained analytically, and finite difference gradient calculations can be sensitive to the selected finite difference step size. Furthermore, the use of design variables with different length scales or units can produce poorly scaled problems that converge slowly or not at all [21,22], necessitating design variable scaling to improve performance.

Motivated by these limitations and improvements in computer speed, recent studies have begun investigating the use of nongradient global optimizers for biomechanical applications. Neptune [4] compared the performance of a simulated annealing (SA) algorithm with that of downhill simplex (DS) and sequential quadratic programming (SQP) algorithms on a forward dynamic optimization of bicycle pedaling utilizing 27 design variables. Simulated annealing found a better optimum than the other two methods and in a reasonable amount of CPU time. More recently, Soest and Casius [5] evaluated a parallel implementation of a genetic algorithm (GA) using a suite of analytical test problems with up to 32 design variables and forward dynamic optimizations of jumping and isokinetic cycling with up to 34 design variables. The genetic algorithm generally outperformed all other algorithms tested, including SA, on both the analytical test suite and the movement optimizations.

(1) Address correspondence to: B. J. Fregly, Ph.D., Assistant Professor, Department of Mechanical & Aerospace Engineering, 231 MAE-A Building, P.O. Box 116250, University of Florida, Gainesville, FL 32611-6250. Phone: (352) 392-8157; fax: (352) 392-7303. Contributed by the Bioengineering Division for publication in the JOURNAL OF BIOMECHANICAL ENGINEERING. Manuscript received: July 9, 2003; revision received: January 1, 2005. Associate Editor: Maury Hull.

Journal of Biomechanical Engineering, JUNE 2005, Vol. 127 / 465. Copyright © 2005 by ASME

In this study we evaluate a recent addition to the arsenal of


global optimization methods—particle swarm optimization (PSO)—for use on biomechanical problems. A recently developed variant of the PSO algorithm is used for the investigation. The algorithm's global search capabilities are evaluated using a previously published suite of difficult analytical test problems with multiple local minima [5], while its insensitivity to design variable scaling is proven mathematically and verified using a biomechanical test problem. For both categories of problems, PSO robustness, performance, and scale independence are compared with those of three off-the-shelf optimization algorithms—a genetic algorithm (GA), a sequential quadratic programming algorithm (SQP), and a Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton algorithm. In addition, previously published results [5] for the analytical test problems permit a comparison with a more complex GA algorithm (GA*), a simulated annealing algorithm (SA), a different SQP algorithm (SQP*), and a downhill simplex (DS) algorithm.

2 Theory

2.1 Particle Swarm Algorithm. Particle swarm optimization is a stochastic global optimization approach introduced by Kennedy and Eberhart [23]. The method's strength lies in its simplicity, being easy to code and requiring few algorithm parameters to define convergence behavior. The following is a brief introduction to the operation of the particle swarm algorithm based on a recent implementation by Fourie and Groenwold [24] incorporating dynamic inertia and velocity reduction.

Consider a swarm of p particles, where each particle's position x_k^i represents a possible solution point in the problem design space D. For each particle i, Kennedy and Eberhart [23] proposed that the position x_{k+1}^i be updated in the following manner:

    x_{k+1}^i = x_k^i + v_{k+1}^i    (1)

with a pseudovelocity v_{k+1}^i calculated as follows:

    v_{k+1}^i = w_k v_k^i + c_1 r_1 (p_k^i - x_k^i) + c_2 r_2 (g_k - x_k^i)    (2)

Here, subscript k indicates a (unit) pseudotime increment. The point p_k^i is the best-found cost location by particle i up to time step k, which represents the cognitive contribution to the search velocity v_{k+1}^i. Each component of v_{k+1}^i is constrained to be less than or equal to a maximum value defined in v_{k+1}^max. The point g_k is the global best-found position among all particles in the swarm up to time k and forms the social contribution to the velocity vector. Cost function values associated with p_k^i and g_k are denoted by f_best^i and f_best^g, respectively. Random numbers r_1 and r_2 are uniformly distributed in the interval [0,1]. Shi and Eberhart [25] proposed that the cognitive and social scaling parameters c_1 and c_2 be selected such that c_1 = c_2 = 2 to allow the product c_1 r_1 or c_2 r_2 to have a mean of 1. The result of using these proposed values is that particles overshoot the attraction points p_k^i and g_k half the time, thereby maintaining separation in the group and allowing a greater area to be searched than if the particles did not overshoot. The variable w_k, set to 1 at initialization, is a modification to the original PSO algorithm [23]. By reducing its value dynamically based on the cost function improvement rate, the search area is gradually reduced [26]. This dynamic reduction behavior is defined by w_d, the amount by which the inertia w_k is reduced, v_d, the amount by which the maximum velocity v_{k+1}^max is reduced, and d, the number of iterations with no improvement in g_k before these reductions take place [24] (see algorithm flow description below).
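As a concrete illustration, the per-particle update of Eqs. (1) and (2) might be coded as follows. This is a minimal sketch in C; the function and variable names are ours and are not taken from the paper's implementation.

```c
#include <stdlib.h>

/* uniform random number in [0,1] */
static double uniform01(void)
{
    return rand() / (double)RAND_MAX;
}

/* One update for particle i: Eq. (2) for the pseudovelocity (with the
   component-wise clamp of step 2(b) below), then Eq. (1) for the position.
   x, v, pbest hold this particle's position, velocity, and best-found
   point; gbest is the swarm's best-found point; vmax bounds each
   velocity component. */
void pso_update_particle(int n_dim, double w_k, double c1, double c2,
                         double *x, double *v, const double *pbest,
                         const double *gbest, const double *vmax)
{
    double r1 = uniform01(), r2 = uniform01();  /* one r1, r2 per update */
    for (int i = 0; i < n_dim; i++) {
        v[i] = w_k * v[i] + c1 * r1 * (pbest[i] - x[i])
                          + c2 * r2 * (gbest[i] - x[i]);   /* Eq. (2) */
        if (v[i] >  vmax[i]) v[i] =  vmax[i];              /* clamp   */
        if (v[i] < -vmax[i]) v[i] = -vmax[i];
        x[i] += v[i];                                      /* Eq. (1) */
    }
}
```

Note that r_1 and r_2 are drawn once per update rather than per component, matching the scalar treatment of r_1 and r_2 in Eq. (2) and in the scale-independence proof below.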

Initialization of the algorithm involves several important steps. Particles are randomly distributed throughout the design space, and particle velocities v_0^i are initialized to random values within the limits 0 <= v_0^i <= v_0^max. The particle velocity upper limit v_0^max is calculated as a fraction of the distance between the upper and lower bounds on variables in the design space, v_0^max = κ(x_UB - x_LB), with κ = 0.5 as suggested in [26]. Iteration counters k and t are set to 0. Iteration counter k is used to monitor the total number of swarm iterations, while iteration counter t is used to monitor the number of swarm iterations since the last improvement in g_k. Thus, t is periodically reset to zero during the optimization while k is not.

The algorithm flow can be represented as follows:

1. Initialize
   (a) Set constants κ, c_1, c_2, k_max, v_0^max, w_0, v_d, w_d, and d
   (b) Set counters k = 0, t = 0. Set random number seed
   (c) Randomly initialize particle positions x_0^i ∈ D in R^n for i = 1, ..., p
   (d) Randomly initialize particle velocities 0 <= v_0^i <= v_0^max for i = 1, ..., p
   (e) Evaluate cost function values f_0^i using design space coordinates x_0^i for i = 1, ..., p
   (f) Set f_best^i = f_0^i and p_0^i = x_0^i for i = 1, ..., p
   (g) Set f_best^g to the best f_best^i and g_0 to the corresponding x_0^i

2. Optimize
   (a) Update particle velocity vectors v_{k+1}^i using Eq. (2)
   (b) If v_{k+1}^i > v_{k+1}^max for any component, then set that component to its maximum allowable value
   (c) Update particle position vectors x_{k+1}^i using Eq. (1)
   (d) Evaluate cost function values f_{k+1}^i using design space coordinates x_{k+1}^i for i = 1, ..., p
   (e) If f_{k+1}^i <= f_best^i, then f_best^i = f_{k+1}^i, p_{k+1}^i = x_{k+1}^i for i = 1, ..., p
   (f) If f_{k+1}^i <= f_best^g, then f_best^g = f_{k+1}^i, g_{k+1} = x_{k+1}^i for i = 1, ..., p
   (g) If f_best^g was improved in (e), then reset t = 0; otherwise increment t
   (h) If the maximum number of function evaluations is exceeded, then go to 3
   (i) If t = d, then multiply w_{k+1} by (1 - w_d) and v_{k+1}^max by (1 - v_d)
   (j) Increment k
   (k) Go to 2(a)

3. Report results

4. Terminate
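The stall-counter logic of steps 2(g)-2(i) can be sketched in isolation as follows. This is a hypothetical helper of our own, not the paper's code.

```c
/* Steps 2(g)-2(i): reset the stall counter t when the global best
   improved; otherwise increment it, and after d stalled iterations
   shrink the inertia w by (1 - w_d) and each velocity bound by
   (1 - v_d). Returns the updated stall counter. */
int pso_reduce_if_stalled(int t, int improved, int d,
                          double w_d, double v_d,
                          double *w, double *vmax, int n_dim)
{
    if (improved)
        return 0;                      /* step 2(g): reset t */
    t++;
    if (t == d) {                      /* step 2(i) */
        *w *= (1.0 - w_d);
        for (int i = 0; i < n_dim; i++)
            vmax[i] *= (1.0 - v_d);
    }
    return t;
}
```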

This algorithm was coded in the C programming language by the lead author [27] and was used for all PSO analyses performed in the present study. A standard population size of 20 particles was used for all runs, and other algorithm parameters were also selected based on standard recommendations (Table 1) [27-29]. The C source code for our PSO algorithm is freely available at http://www.mae.ufl.edu/fregly/downloads/pso.zip.

Table 1  Standard PSO algorithm parameters used in the present study

  Parameter   Description                                                       Value
  p           Population size (number of particles)                             20
  c_1         Cognitive trust parameter                                         2.0
  c_2         Social trust parameter                                            2.0
  w_0         Initial inertia                                                   1
  w_d         Inertia reduction parameter                                       0.0
  κ           Bound on velocity fraction                                        0.5
  v_d         Velocity reduction parameter                                      0.0
  d           Dynamic inertia/velocity reduction delay (function evaluations)   200

2.2 Analysis of Scale Sensitivity. One of the benefits of the PSO algorithm is its insensitivity to design variable scaling. To prove this characteristic, we will use a proof by induction to show that all particles follow an identical path through the design space regardless of how the design variables are scaled. In actual PSO runs intended to investigate this property, use of the same random seed in scaled and unscaled cases will ensure that an identical sequence of random r_1 and r_2 values is produced by the computer throughout the course of the optimization.

Consider an optimization problem with n design variables. An n-dimensional constant scaling vector z can be used to scale any or all dimensions of the problem design space:

    z = [z_1, z_2, z_3, ..., z_n]^T    (3)

We wish to show that for any time step k >= 0,

    v_k' = z v_k    (4)

    x_k' = z x_k    (5)

where x_k and v_k (dropping superscript i) are the unscaled position and velocity, respectively, of an individual particle and x_k' = z x_k and v_k' = z v_k are the corresponding scaled versions.

First, we must show that our proposition is true for the base case, which involves initialization (k = 0) and the first time step (k = 1). Applying the scaling vector z to an individual particle position x_0 during initialization produces a scaled particle position x_0':

    x_0' = z x_0    (6)

This implies that

    p_0' = z p_0,  g_0' = z g_0    (7)

In the unscaled case, the pseudovelocity is calculated as

    v_0 = κ(x_UB - x_LB)    (8)

In the scaled case, this becomes

    v_0' = κ(x_UB' - x_LB') = κ(z x_UB - z x_LB) = z[κ(x_UB - x_LB)] = z v_0    (9)

From Eqs. (1) and (2) and these initial conditions, the particle pseudovelocity and position for the first time step can be written as

    v_1 = w_0 v_0 + c_1 r_1 (p_0 - x_0) + c_2 r_2 (g_0 - x_0)    (10)

    x_1 = x_0 + v_1    (11)

in the unscaled case and

    v_1' = w_0 v_0' + c_1 r_1 (p_0' - x_0') + c_2 r_2 (g_0' - x_0')
         = w_0 z v_0 + c_1 r_1 (z p_0 - z x_0) + c_2 r_2 (z g_0 - z x_0)
         = z[w_0 v_0 + c_1 r_1 (p_0 - x_0) + c_2 r_2 (g_0 - x_0)] = z v_1    (12)

    x_1' = x_0' + v_1' = z x_0 + z v_1 = z[x_0 + v_1] = z x_1    (13)

in the scaled case. Thus, our proposition is true for the base case.

Next, we must show that our proposition is true for the inductive step. If we assume that our proposition holds for any time step k = j, we must prove that it also holds for time step k = j + 1. We begin by replacing subscript k with subscript j in Eqs. (4) and (5). If we then replace subscript 0 with subscript j and subscript 1 with subscript j + 1 in Eqs. (12) and (13), we arrive at Eqs. (4) and (5) where subscript k is replaced by subscript j + 1. Thus, our proposition is true for any time step j + 1.

Consequently, since the base case is true and the inductive step is true, Eqs. (4) and (5) are true for all k >= 0. From Eqs. (4) and (5), we can conclude that any linear scaling of the design variables (or subset thereof) will have no effect on the final or any intermediate result of the optimization, since all velocities and positions are scaled accordingly. This fact leads to identical step intervals being taken in the design space for a scaled and an unscaled version of the same problem, assuming infinite precision in all calculations.
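This argument can be checked numerically for a single design variable: one update step applied to a z-scaled copy of a particle, using the same r_1 and r_2 as the unscaled step (as in the proof), should reproduce z times the unscaled result per Eqs. (12) and (13). The function below is illustrative only.

```c
#include <math.h>

/* Returns 1 if one PSO step commutes with scaling by z for a single
   design variable, i.e. v1' = z*v1 and x1' = z*x1 up to round-off. */
int scaling_commutes(double z, double x0, double v0, double p0, double g0,
                     double w0, double c1, double c2, double r1, double r2)
{
    /* unscaled step, Eqs. (10) and (11) */
    double v1 = w0 * v0 + c1 * r1 * (p0 - x0) + c2 * r2 * (g0 - x0);
    double x1 = x0 + v1;
    /* the same step applied to the scaled quantities z*x0, z*v0, ... */
    double v1s = w0 * (z * v0) + c1 * r1 * (z * p0 - z * x0)
                               + c2 * r2 * (z * g0 - z * x0);
    double x1s = z * x0 + v1s;
    return fabs(v1s - z * v1) < 1e-9 && fabs(x1s - z * x1) < 1e-9;
}
```

The finite tolerance reflects the proof's caveat that exact path identity holds only under infinite-precision arithmetic.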

In contrast, gradient-based optimization methods are often sensitive to design variable scaling due to algorithmic issues and numerical approximations. First derivative methods are sensitive because of algorithmic issues, as illustrated by a simple example. Consider the following minimization problem with two design variables (x, y) where the cost function is

    x^2 + y^2/100    (14)

with initial guess (1, 10). A scaled version of the same problem can be created by letting x' = x, y' = y/10 so that the cost function becomes

    x'^2 + y'^2    (15)

with initial guess (1, 1). Taking first derivatives of each cost function with respect to the corresponding design variables and evaluating at the initial guesses, the search direction for the unscaled problem is along a line rotated 5.7° from the positive x axis and for the scaled problem along a line rotated 45°. To reach the optimum in a single step, the unscaled problem requires a search direction rotated 84.3° and the scaled problem 45°. Thus, the scaled problem can theoretically reach the optimum in a single step while the unscaled problem cannot due to the effect of scaling on the calculated search direction.

Second derivative methods are sensitive to design variable scaling because of numerical issues related to approximation of the Hessian (second derivative) matrix. According to Gill et al. [21], Newton methods utilizing an exact Hessian matrix will be insensitive to design variable scaling as long as the Hessian matrix remains positive definite. However, in practice, exact Hessian calculations are almost never available, necessitating numerical approximations via finite differencing. Errors in these approximations result in different search directions for scaled versus unscaled versions of the same problem. Even a small amount of design variable scaling can significantly affect the Hessian matrix so that design variable changes of similar magnitude will not produce comparable magnitude cost function changes [21]. Common gradient-based algorithms that employ an approximate Hessian include Newton and quasi-Newton nonlinear programming methods such as BFGS, SQP methods, and nonlinear least-squares methods such as Levenberg-Marquardt [21]. A detailed discussion of the influence of design variable scaling on optimization algorithm performance can be found in Gill et al. [21].

3 Methodology

3.1 Optimization Algorithms. In addition to our PSO algorithm, three off-the-shelf optimization algorithms were applied to all test problems (analytical and biomechanical—see below) for comparison purposes. One was a global GA algorithm developed by Deb [30-32]. This basic GA implementation utilizes one mutation operator and one crossover operator along with real encoding to handle continuous variables. The other two algorithms were commercial implementations of gradient-based SQP and BFGS algorithms (VisualDOC, Vanderplaats R & D, Colorado Springs, CO).

All four algorithms (PSO, GA, SQP, and BFGS) were parallelized to accommodate the computational demands of the biomechanical test problem. For the PSO algorithm, parallelization was performed by distributing individual particle function evaluations to different processors as detailed in [33]. For the GA algorithm, individual chromosome function evaluations were parallelized as


described in [5]. Finally, for the SQP and BFGS algorithms, finite difference gradient calculations were performed on different processors as outlined in [34]. A master-slave paradigm using the Message Passing Interface (MPI) [35,36] was employed for all parallel implementations. Parallel optimizations for the biomechanical test problem were run on a cluster of Linux-based PCs in the University of Florida High-performance Computing and Simulation Research Laboratory (1.33 GHz Athlon CPUs with 256 MB memory on a 100 Mbps switched Fast Ethernet network).

While the PSO algorithm used standard algorithm parameters for all optimization runs, minor algorithm tuning was performed on the GA, SQP, and BFGS algorithms for the biomechanical test problem. The goal was to give these algorithms the best possible chance for success against the PSO algorithm. For the GA algorithm, preliminary optimizations were performed using population sizes ranging from 40 to 100. It was found that for the specified maximum number of function evaluations, a population size of 60 produced the best results. Consequently, this population size was used for all subsequent optimization runs (analytical and biomechanical). For the SQP and BFGS algorithms, automatic tuning of the finite difference step size (FDSS) was performed separately for each design variable. At the start of each gradient-based run, forward and central difference gradients were calculated for each design variable beginning with a relative FDSS of 10^-1. The step size was then incrementally decreased by factors of 10 until the absolute difference between forward and central gradient results was a minimum. This approach was taken since the amount of noise in the biomechanical test problem prevented a single stable gradient value from being calculated over a wide range of FDSS values (see Sec. 5). The forward difference step size automatically selected for each design variable was used for the remainder of the run.
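The step-size search just described might be sketched as follows for a scalar function of one variable. The 10^-8 lower bound and the sample cubic cost are our additions; the paper applies the search per design variable to the noisy biomechanical cost, not to a smooth test function.

```c
#include <math.h>

/* Starting from a step of 1e-1 and shrinking by factors of 10, return
   the step whose forward and central difference gradients agree best. */
double pick_fd_step(double (*f)(double), double x)
{
    double best_h = 0.1, best_err = HUGE_VAL;
    for (double h = 0.1; h >= 1e-8; h /= 10.0) {
        double fwd = (f(x + h) - f(x)) / h;             /* forward diff */
        double ctr = (f(x + h) - f(x - h)) / (2.0 * h); /* central diff */
        double err = fabs(fwd - ctr);
        if (err < best_err) { best_err = err; best_h = h; }
    }
    return best_h;
}

static double cubic(double x) { return x * x * x; }     /* sample cost */
```

For a noisy cost the disagreement between forward and central gradients stops shrinking once the step is small enough for noise to dominate, which is what makes this criterion useful for choosing a stable step.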

3.2 Analytical Test Problems. The global search capabilities of our PSO implementation were evaluated using a suite of difficult analytical test problems previously published by Soest and Casius [5]. In that study, each problem in the suite was evaluated using four different optimizers: SA, GA*, SQP*, and DS, where an asterisk indicates a different version of an algorithm used in our study. One thousand optimization runs were performed with each optimizer starting from random initial guesses and using standard optimization algorithm parameters. Each run was terminated based on a predefined number of function evaluations for the particular problem being solved. We followed an identical procedure with our four algorithms to permit a comparison between our results and those published in [5]. Since two of the algorithms used in our study (GA and SQP) were of the same general category as algorithms used in [5] (GA* and SQP*), comparisons could be made between different implementations of the same general algorithm. Failed PSO and GA runs were allowed to use up the full number of function evaluations, whereas failed SQP and BFGS runs were restarted from new random initial guesses until the full number of function evaluations was completed. Only 100 rather than 1000 runs were performed with the SQP and BFGS algorithms due to a database size problem in the VisualDOC software.

A detailed description of the six analytical test problems can be found in Soest and Casius [5]. Since the design variables for each problem possessed the same absolute upper and lower bounds and appeared in the cost function in a similar form, design variable scaling was not an issue in these problems. The six analytical test problems are described briefly below.

H1: This simple two-dimensional function [5] has several local maxima and a global maximum of 2 at the coordinates (8.6998, 6.7665).


    H1(x1, x2) = [sin^2(x1 - x2/8) + sin^2(x2 + x1/8)] / (d + 1)    (16)

where

    x1, x2 ∈ [-100, 100]

    d = sqrt((x1 - 8.6998)^2 + (x2 - 6.7665)^2)

Ten thousand function evaluations were used for this problem.

H2: This inverted version of the F6 function used by Schaffer et al. [37] has two dimensions with several local maxima around the global maximum of 1.0 at (0, 0).

    H2(x1, x2) = 0.5 - [sin^2(sqrt(x1^2 + x2^2)) - 0.5] / [1 + 0.001(x1^2 + x2^2)]^2    (17)

    x1, x2 ∈ [-100, 100]

This problem was solved using 20,000 function evaluations per optimization run.

H3: This test function from Corana et al. [38] was used with dimensionality n = 4, 8, 16, and 32. The function contains a large number of local minima (on the order of 10^(4n)) with a global minimum of 0 at |x_i| < 0.05.

    H3(x1, ..., xn) = Σ_{i=1}^{n} { (t·sgn(z_i) + z_i)^2 · c · d_i   if |x_i - z_i| < t
                                  { d_i · x_i^2                      otherwise          (18)

    x_i ∈ [-1000, 1000]

where

    z_i = floor(|x_i/s| + 0.49999) · sgn(x_i) · s,  c = 0.15,  s = 0.2,  t = 0.05

    d_i = 1 (i = 1, 5, 9, ...), 1000 (i = 2, 6, 10, ...), 10 (i = 3, 7, 11, ...), 100 (i = 4, 8, 12, ...)

The use of the floor function in Eq. (18) makes the search space for this problem the most discrete of all problems tested. The number of function evaluations used for this problem was 50,000 (n = 4), 100,000 (n = 8), 200,000 (n = 16), and 400,000 (n = 32).

For all of the analytical test problems, an algorithm was considered to have succeeded if it converged to within 10^-3 of the known optimum cost function value within the specified number of function evaluations [5].

3.3 Biomechanical Test Problem. In addition to these analytical test problems, a biomechanical test problem was used to evaluate the scale-independent nature of the PSO algorithm. Although our PSO algorithm is theoretically insensitive to design variable scaling, numerical round-off errors and implementation details could potentially produce a scaling effect. Running the other three algorithms on scaled and unscaled versions of this test problem also permitted investigation of the extent to which other algorithms are influenced by design variable scaling.

The biomechanical test problem involved determination of an ankle joint kinematic model that best matched noisy synthetic (i.e., computer generated) movement data. Similar to [13], the ankle was modeled as a three-dimensional linkage with two nonintersecting pin joints defined by 12 subject-specific parameters (Fig. 1). These parameters represent the positions and orientations of the talocrural and subtalar joint axes in the tibia, talus, and calcaneous. Position parameters were in units of centimeters and orientation parameters in units of radians, resulting in parameter


values of varying magnitude. This model was part of a larger, 27-degree-of-freedom (DOF) full-body kinematic model used to optimize other joints as well [15].

Given this model structure, noisy synthetic movement data were generated from a nominal model for which the "true" model parameters were known. Joint parameters for the nominal model along with a nominal motion were derived from in vivo experimental movement data using the optimization methodology described below. Next, three markers were attached to the tibia and calcaneus segments in the model at locations consistent with the experiment, and the 27 model DOFs were moved through their nominal motions. This process created synthetic marker trajectories consistent with the nominal model parameters and motion and also representative of the original experimental data. Finally, numerical noise was added to the synthetic marker trajectories to emulate skin and soft tissue movement artifacts. For each marker coordinate, a sinusoidal noise function was used with uniformly distributed random period, phase, and amplitude (limited to a maximum of ±1 cm). The values of the sinusoidal parameters were based on previous studies reported in the literature [39,40].
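Noise of this form is straightforward to generate. In the sketch below, only the ±1 cm amplitude cap comes from the description above; the period and phase ranges are placeholder assumptions, not the values of [39,40]:

```python
import math
import random

def sinusoidal_noise(times, max_amplitude=0.01):
    """Skin/soft-tissue artifact for one marker coordinate: a sinusoid with
    uniformly distributed random amplitude, period, and phase.  Amplitude is
    capped at +/-1 cm (0.01 m); the period and phase ranges are illustrative
    assumptions."""
    a = random.uniform(0.0, max_amplitude)     # amplitude (m), <= 1 cm
    T = random.uniform(0.5, 2.0)               # period (s), assumed range
    phi = random.uniform(0.0, 2.0 * math.pi)   # phase (rad)
    return [a * math.sin(2.0 * math.pi * t / T + phi) for t in times]
```

Drawing a fresh (amplitude, period, phase) triple per marker coordinate keeps the artifact smooth in time but uncorrelated across coordinates.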

An unconstrained optimization problem with bounds on the design variables was formulated to attempt to recover the known joint parameters from the noisy synthetic marker trajectories. The cost function was

\min_p f(p)    (19)

with

f(p) = \sum_{k=1}^{50} \min_q \sum_{j=1}^{6} \sum_{i=1}^{3} (\bar{c}_{ijk} - c_{ijk}(p,q))^2    (20)

where p is a vector of 12 design variables containing the joint parameters, q is a vector of 27 generalized coordinates for the kinematic model, \bar{c}_{ijk} is the ith coordinate of synthetic marker j at time frame k, and c_{ijk}(p,q) is the corresponding marker coordinate from the kinematic model. At each time frame, c_{ijk}(p,q) was computed from the current model parameters p and an optimized model configuration q.

Fig. 1 Experimental shank and foot surface marker configuration (left) for developing a subject-specific kinematic ankle model defined by 12 parameters p1 through p12 (right). Each parameter defines the position or orientation of a joint axis in one of the body segments.

Journal of Biomechanical Engineering

A separate Levenberg–Marquardt nonlinear least-squares optimization was performed for each time frame in Eq. (20) to determine this optimal configuration. A relative convergence tolerance of 10^-3 was chosen to achieve good accuracy with minimum computational cost. A nested optimization formulation [i.e., minimization occurs in both Eqs. (19) and (20)] was used to decrease the dimensionality of the design space in Eq. (19). Equation (20) was coded in Matlab and exported as stand-alone C code using the Matlab Compiler (The Mathworks, Natick, MA). The executable read in a file containing the 12 design variables and output a file containing the resulting cost function value. This approach facilitated the use of different optimizers to solve Eq. (19).
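The nested formulation can be sketched as follows. This is an illustrative NumPy version only: a Gauss–Newton loop stands in for the Levenberg–Marquardt suboptimizer, and `marker_model(p, q)` is a hypothetical callable returning the 6x3 model marker coordinates, not the actual kinematic model of the study:

```python
import numpy as np

def inner_fit(p, measured, marker_model, q0, tol=1e-3, max_iter=50):
    """Inner solve of Eq. (20) for one time frame: find the configuration q
    minimizing the squared marker residuals (Gauss-Newton stand-in for
    Levenberg-Marquardt, with the same loose relative tolerance)."""
    q = np.asarray(q0, dtype=float)
    for _ in range(max_iter):
        r = (marker_model(p, q) - measured).ravel()
        # forward-difference Jacobian of the residuals w.r.t. q
        J = np.empty((r.size, q.size))
        h = 1e-6
        for j in range(q.size):
            qh = q.copy()
            qh[j] += h
            J[:, j] = ((marker_model(p, qh) - measured).ravel() - r) / h
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        q = q + step
        if np.linalg.norm(step) < tol * (1.0 + np.linalg.norm(q)):
            break
    return q, float(np.sum((marker_model(p, q) - measured) ** 2))

def nested_cost(p, frames, marker_model, q0):
    """Outer cost of Eqs. (19)/(20): sum the inner-fit residual over frames,
    warm-starting each frame's q from the previous frame."""
    total, q = 0.0, q0
    for measured in frames:  # each `measured` is a 6x3 array of coordinates
        q, sq = inner_fit(p, measured, marker_model, q)
        total += sq
    return total
```

Because the inner minimization over q is solved frame by frame, the outer optimizer only ever sees the 12-dimensional space of p, which is the dimensionality reduction the text describes.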

To investigate the influence of design variable scaling on optimization algorithm performance, two versions of Eq. (20) were generated. The first used the original units of centimeters and radians for the position and orientation design variables, respectively. Bounds on the design variables were chosen to enclose a physically realistic region around the solution point in design space. Each position design variable was constrained to remain within a cube centered at the midpoint of the medial and lateral malleoli, where the length of each side was equal to the distance between the malleoli (i.e., 11.32 cm). Each orientation design variable was constrained to remain within a circular cone defined by varying its "true" value by ±30°. The second version normalized all 12 design variables to be within [-1,1] using

x_{norm} = \frac{2x - x_{UB} - x_{LB}}{x_{UB} - x_{LB}}    (21)

where the subscripts UB and LB denote the upper and lower bounds, respectively, on the design variable vector [41].
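Equation (21) and its inverse map are one line each in NumPy:

```python
import numpy as np

def normalize(x, lb, ub):
    """Map design variables from [lb, ub] to [-1, 1] per Eq. (21)."""
    x, lb, ub = map(np.asarray, (x, lb, ub))
    return (2.0 * x - ub - lb) / (ub - lb)

def denormalize(xn, lb, ub):
    """Inverse of Eq. (21): map from [-1, 1] back to original units."""
    xn, lb, ub = map(np.asarray, (xn, lb, ub))
    return 0.5 * (xn * (ub - lb) + ub + lb)
```

With this wrapper, an optimizer works entirely in the normalized space while the cost function is still evaluated in centimeters and radians.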

Two approaches were used to compare PSO scale sensitivity to that of the other three algorithms. For the first approach, a fixed number of scaled and unscaled runs (10) was performed with each optimization algorithm starting from different random initial seeds, and the sensitivity of the final cost function value to algorithm choice and design variable scaling was evaluated. The stopping condition for PSO and GA runs was 10,000 function evaluations, while SQP and BFGS runs were terminated when a relative



convergence tolerance of 10^-5 or an absolute convergence tolerance of 10^-6 was met. For the second approach, a fixed number of function evaluations (10,000) was performed with each algorithm to investigate unscaled versus scaled convergence histories. A single random initial guess was used for the PSO and GA algorithms, and each algorithm was terminated once it reached 10,000 function evaluations. Since individual SQP and BFGS runs required many fewer than 10,000 function evaluations, repeated runs were performed with different random initial guesses until the total number of function evaluations exceeded 10,000 at the termination of a run. This approach essentially uses SQP and BFGS as global optimizers, where the separate runs are like individual particles that cannot communicate with one another but have access to local gradient information. Finite difference step size tuning at the start of each run was included in the count of function evaluations. Once the total number of runs required to reach 10,000 function evaluations was known, the lowest cost function value from all runs at each iteration was used to represent the cost over a range of function evaluations equal to the number of runs.
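Splicing repeated local runs into one global convergence history amounts to a running-best calculation. The simplified sketch below indexes each run by individual function evaluation rather than by the blocks-of-evaluations bookkeeping described above:

```python
def multistart_history(runs):
    """Best-so-far cost across several independent local-optimizer runs.

    `runs` is a list of per-run cost histories (one entry per function
    evaluation).  At each evaluation index, the reported value is the lowest
    cost seen so far across all runs, mirroring how separate SQP/BFGS runs
    were treated as non-communicating 'particles'."""
    n = max(len(r) for r in runs)
    best, history = float("inf"), []
    for k in range(n):
        for r in runs:
            if k < len(r) and r[k] < best:
                best = r[k]
        history.append(best)
    return history
```

The resulting history is monotonically nonincreasing by construction, which is what makes it comparable to a single PSO or GA convergence curve.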

4 Results

For the analytical test problems, our PSO algorithm was more robust than our GA, SQP, and BFGS algorithms (Table 2, top half). PSO converged to the correct global solution 100% of the time on four of the six test problems (H1, and H3 with n = 4, 8, and 16). It converged 58% of the time for problem H2 and not at all for problem H3 with n = 32. In contrast, none of the other algorithms converged more than 32% of the time on any of the analytical test problems. Though our GA algorithm typically exhibited faster initial convergence than did our PSO algorithm (Fig. 2, left column), it leveled off and rarely reached the correct final point in design space within the specified number of function evaluations.

When PSO results were compared with previously published results for the same analytical test problems [5], PSO success rates were better than those of SA but worse than those of GA* (Table 2, bottom half). SA was successful 100% of the time only on problem H1, while GA* was successful nearly 100% of the time on all six problems. Thus, for the global algorithms, GA* was the most robust overall, followed by PSO and then SA, with GA exhibiting the worst robustness. PSO converged more slowly than did GA* on all problems with available convergence plots (four of the six problems) and also more slowly than did SA on the one problem for which SA was successful (Fig. 2, right column).

Table 2 Fraction of successful optimizer runs for the analytical test problems. Top half: results from the PSO, GA, SQP, and BFGS algorithms used in the present study. Bottom half: results from the SA, GA, SQP, and DS algorithms used in Soest and Casius [5]. The GA and SQP algorithms used in that study were different from the ones used in our study. Successful runs were identified by a final cost function value within 10^-3 of the known optimum value, consistent with [5].

Study                 Algorithm   H1      H2      H3 (n=4)  H3 (n=8)  H3 (n=16)  H3 (n=32)
Present               PSO         1.000   0.583   1.000     1.000     1.000      0.000
                      GA          0.000   0.034   0.000     0.000     0.000      0.002
                      SQP         0.00    0.11    0.00      0.00      0.00       0.00
                      BFGS        0.00    0.32    0.00      0.00      0.00       0.00
Soest and Casius [5]  SA          1.000   0.027   0.000     0.001     0.000      0.000
                      GA          0.990   0.999   1.000     1.000     1.000      1.000
                      SQP         0.279   0.810   0.385     0.000     0.000      0.000
                      DS          1.000   0.636   0.000     0.000     0.000      0.000

For the biomechanical test problem, only the PSO algorithm was insensitive to design variable scaling, with the GA algorithm



being only mildly sensitive. In ten out of ten trials, unscaled and scaled PSO runs converged to the same point in design space [Fig. 3(a)], while unscaled and scaled GA runs converged to nearly the same point [Fig. 3(b)]. PSO results were the most consistent from trial to trial, converging to a final cost function value between 69 and 71 (Table 3). GA results were the next most consistent, with final cost function values ranging from 71 to 84. Typical unscaled and scaled PSO and GA runs produced root-mean-square (RMS) marker distance, position parameter, and orientation parameter errors of comparable magnitude, with PSO errors generally being slightly smaller.

In contrast, the SQP and BFGS algorithms were highly sensitive to design variable scaling in the biomechanical test problem. For the ten trials, unscaled and scaled SQP or BFGS runs rarely converged to similar points in design space (note the y axis scale in Fig. 3) and produced large differences in the final cost function value from one trial to the next [Figs. 3(c) and 3(d)]. Scaling improved the final result in seven out of ten SQP trials and in five of ten BFGS trials. The best unscaled and scaled SQP final cost function values were 255 and 121, respectively, while those of BFGS were 355 and 102 (Table 3). Thus, scaling yielded the best result found with both algorithms. The best SQP and BFGS trials generally produced larger RMS marker distance errors (up to two times worse), orientation parameter errors (up to five times worse), and position parameter errors (up to six times worse) than those found by PSO or GA.

When detailed convergence histories were plotted over 10,000 function evaluations for the biomechanical test problem (Fig. 4), unscaled and scaled histories for PSO were indistinguishable, while those of GA were similar and those of SQP or BFGS notably different. For PSO, only minute differences in the design variables on the order of 10^-5 were observed, resulting from numerical round-off errors caused by limitations in machine precision. To reach 10,000 function evaluations, 17 unscaled and 26 scaled SQP runs were required compared to 56 unscaled and 44 scaled runs for BFGS. The length and value of the initial flat region in the SQP and BFGS convergence histories was related to the number of FDSS tuning evaluations performed. More runs meant more tuning evaluations as well as an increased likelihood of finding a lower initial cost function value.

Fig. 2 A comparison of convergence history results for the analytical test problems. Left column: results from the PSO, GA, SQP, and BFGS algorithms used in the present study. Right column: results from the SA, GA, SQP, and DS algorithms used in Soest and Casius [5]. The GA and SQP algorithms used in that study were different from the ones used in our study. (a) Problem H1. The SA results have been updated using corrected data provided by Soest and Casius, since the results in [5] accidentally used a temperature reduction rate of 0.5 rather than the standard value of 0.85 as reported. (b) Problem H2. (c) Problem H3 with n = 4. (d) Problem H3 with n = 32. The error was computed using the known cost at the global optimum and represents the average of 1000 runs (100 multistart SQP and BFGS runs in our study) with each algorithm.

5 Discussion

In this paper we evaluated a recent variation of the PSO algorithm with dynamic inertia and velocity updating as a possible addition to the arsenal of methods that can be applied to difficult biomechanical optimization problems. For all problems investigated, our PSO algorithm with standard algorithm parameters performed better than did three off-the-shelf optimizers: GA, SQP, and BFGS. For the analytical test problems, PSO robustness was found to be better than that of two other global algorithms but worse than that of a third. For the biomechanical test problem with added numerical noise, PSO was found to be insensitive to design variable scaling, while GA was only mildly sensitive and SQP and BFGS were highly sensitive. Overall, the results suggest that our PSO algorithm is worth considering for difficult biomechanical optimization problems, especially those for which design variable scaling may be an issue.
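For readers unfamiliar with the method, a minimal global-best PSO with a crude dynamic inertia rule is sketched below. The decay rule and parameter values are illustrative assumptions in the spirit of the dynamic updates of Fourie and Groenwold [24,26], not the exact algorithm evaluated here:

```python
import random

def pso(f, lb, ub, n_particles=20, iters=200, w=1.5, c1=2.0, c2=2.0,
        alpha=0.99, seed=0):
    """Minimal global-best PSO.  The inertia weight w is multiplied by alpha
    whenever an iteration fails to improve the global best (a simple dynamic
    inertia rule); all parameter values are illustrative assumptions."""
    rng = random.Random(seed)
    dim = len(lb)
    x = [[rng.uniform(lb[d], ub[d]) for d in range(dim)]
         for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [list(p) for p in x]          # per-particle best positions
    pcost = [f(p) for p in x]
    g = min(range(n_particles), key=lambda i: pcost[i])
    gbest, gcost = list(pbest[g]), pcost[g]
    for _ in range(iters):
        improved = False
        for i in range(n_particles):
            for d in range(dim):
                v[i][d] = (w * v[i][d]
                           + c1 * rng.random() * (pbest[i][d] - x[i][d])
                           + c2 * rng.random() * (gbest[d] - x[i][d]))
                x[i][d] = min(max(x[i][d] + v[i][d], lb[d]), ub[d])
            c = f(x[i])
            if c < pcost[i]:
                pcost[i], pbest[i] = c, list(x[i])
                if c < gcost:
                    gcost, gbest, improved = c, list(x[i]), True
        if not improved:
            w *= alpha  # shrink inertia when the search stagnates
    return gbest, gcost
```

Note how few problem-specific settings there are: bounds, population size, and an evaluation budget, which is the ease-of-use property the discussion emphasizes.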

Though our biomechanical optimization involved a system identification problem, PSO may be equally applicable to problems involving forward dynamic, inverse dynamic, inverse static, or image matching analyses. Other global methods such as SA and GA have already been applied successfully to such problems [4,5,19], and there is no reason to believe that PSO would not perform equally well. As with any global optimizer, PSO utilization would be limited by the computational cost of function evaluations given the large number required for a global search.

Our particle swarm implementation may also be applicable to some large-scale biomechanical optimization problems. Outside of the biomechanics arena [28,29,42–51], PSO has been used to solve problems on the order of 120 design variables [49–51]. In the present study, our PSO algorithm was unsuccessful on the largest test problem, H3 with n = 32 design variables. However, in a recent study, our PSO algorithm successfully solved the Griewank global test problem with 128 design variables using population sizes ranging from 16 to 128 [33]. When the Corana test problem (H3) was attempted with 128 DVs, the algorithm exhibited worse convergence. Since the Griewank problem possesses a bumpy but continuous search space and the Corana problem a highly discrete search space, our PSO algorithm may work best on global problems with a continuous search space. It is not known how our PSO algorithm would perform on biomechanical problems with several hundred DVs, such as the forward dynamic optimizations of jumping and walking performed with parallel SQP in [1–3].

Fig. 3 Final cost function values for ten unscaled (dark bars) and scaled (gray bars) parallel PSO, GA, SQP, and BFGS runs for the biomechanical test problem. Each pair of unscaled and scaled runs was started from the same initial point in design space, and each run was terminated when the specified stopping criterion was met (see the text).

Table 3 Final cost function values and associated marker distance and joint parameter root-mean-square (RMS) errors after 10,000 function evaluations performed by multiple unscaled and scaled PSO, GA, SQP, and BFGS runs. See Fig. 4 for the corresponding convergence histories.

Optimizer  Formulation  Cost function  RMS marker distance (mm)  RMS orientation parameters (deg)  RMS position parameters (mm)
PSO        Unscaled     70.4           5.49                      4.85                              2.40
PSO        Scaled       70.4           5.49                      4.85                              2.40
GA         Unscaled     77.9           5.78                      2.65                              6.97
GA         Scaled       74.0           5.64                      3.76                              4.01
SQP        Unscaled     255            10.4                      3.76                              14.3
SQP        Scaled       121            7.21                      3.02                              9.43
BFGS       Unscaled     355            12.3                      21.4                              27.5
BFGS       Scaled       102            6.61                      18.4                              8.52

Fig. 4 Convergence history for unscaled (dark lines) and scaled (gray lines) parallel PSO, GA, SQP, and BFGS runs for the biomechanical test problem. Each algorithm run was terminated after 10,000 function evaluations. Only one unscaled and scaled PSO and GA run were required to reach 10,000 function evaluations, while repeated SQP and BFGS runs were required to reach that number. Separate SQP and BFGS runs were treated like individual particles in a single PSO run for calculating convergence history (see the text).

One advantage of global algorithms such as PSO, GA, and SA is that they often do not require significant algorithm parameter tuning to perform well on difficult problems. The GA used in [5] (which is not freely available) required no tuning to perform well on all of these particular analytical test problems. The SA algorithm in [5] required tuning of two parameters to improve algorithm robustness significantly on those problems. Our PSO algorithm (which is freely available) required tuning of one parameter (w, which was increased from 1.0 to 1.5) to produce 100% success on the two problems where it had significant failures. For the biomechanical test problem, our PSO algorithm required no tuning, and only the population size of our GA algorithm required tuning to improve convergence speed. Neither algorithm was sensitive to the two sources of noise present in the problem: noise added to the synthetic marker trajectories, and noise due to the somewhat loose convergence tolerance in the Levenberg–Marquardt suboptimizations. Thus, for many global algorithm implementations, poor performance on a particular problem can be rectified by minor tuning of a small number of algorithm parameters.

In contrast, gradient-based algorithms such as SQP and BFGS can require a significant amount of tuning even to begin to approach global optimizer results on some problems. For the biomechanical test problem, our SQP and BFGS algorithms were highly tuned by scaling the design variables and determining the optimal FDSS for each design variable separately. FDSS tuning was especially critical due to the two sources of noise noted above. When forward and central difference gradient results were compared for one of the design variables using two different Levenberg–Marquardt relative convergence tolerances (10^-3 and 10^-6), a "sweet spot" was found near a step size of 10^-2 (Fig. 5). Outside of that "sweet spot," which was automatically identified and used in generating our SQP and BFGS results, forward and central difference gradient results diverged quickly when the looser tolerance was used. Since most users of gradient-based optimization algorithms do not scale the design variables or tune the FDSS for each design variable separately, and many do not perform multiple runs, our SQP and BFGS results for the biomechanical test problem represent best-case rather than typical results. For this particular problem, an off-the-shelf global algorithm such as PSO or GA is preferable due to the significant reduction in effort required to obtain repeatable and reliable solutions.
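The step-size sweep behind this kind of FDSS tuning can be mimicked with a few lines. The quadratic test function below is only a stand-in for the actual cost, chosen so the exact derivative is known:

```python
def fd_gradient_errors(f, x, exact_grad, steps):
    """For each finite difference step size h, return (h, forward error,
    central error) for estimates of df/dx at x.  In practice the 'sweet
    spot' is the h range where the two estimates agree despite cost
    function noise."""
    out = []
    for h in steps:
        fwd = (f(x + h) - f(x)) / h
        cen = (f(x + h) - f(x - h)) / (2.0 * h)
        out.append((h, abs(fwd - exact_grad), abs(cen - exact_grad)))
    return out

# Example: f(x) = x^2 at x = 1 (exact derivative 2).  Forward error shrinks
# roughly linearly in h until round-off (or suboptimizer noise) takes over.
errors = fd_gradient_errors(lambda x: x * x, 1.0, 2.0, [1e-1, 1e-3, 1e-5])
```

With a noisy cost function, too small a step amplifies the noise and too large a step incurs truncation error, which is why an intermediate step such as 10^-2 emerged as the sweet spot in Fig. 5.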

Another advantage of PSO and GA algorithms is the ease with which they can be parallelized [5,33] and their resulting high parallel efficiency. For our PSO algorithm, Schutte et al. [33] recently reported near ideal parallel efficiency for up to 32 processors. Soest and Casius [5] reported near ideal parallel efficiency for their GA algorithm with up to 40 processors. Though SA has historically been considered more difficult to parallelize [52], Higginson et al. [53] recently developed a new parallel SA implementation and demonstrated near ideal parallel efficiency for up to 32 processors. In contrast, Koh et al. [34] reported poor SQP parallel efficiency for up to 12 processors due to the sequential nature of the line search portion of the algorithm.

Fig. 5 Sensitivity of SQP and BFGS gradient calculations to the selected finite difference step size for one design variable. Forward and central differencing were evaluated using relative convergence tolerances of 10^-3 and 10^-6 for the nonlinear least-squares suboptimizations performed during a cost function evaluation [see Eq. (20)].

The caveat for these parallel efficiency results is that the time required per function evaluation was approximately constant and the computational nodes were homogeneous. As shown in [33], when function evaluations take different amounts of time, parallel efficiency of our PSO algorithm (and any other synchronous parallel algorithm, including GA, SA, SQP, and BFGS) will degrade with an increasing number of processors. Synchronization between individuals in the population or between individual gradient calculations requires slave computational nodes that have completed their function evaluations to sit idle until all nodes have returned their results to the master node. Consequently, the slowest computational node (whether loaded by other users, performing the slowest function evaluation, or possessing the slowest processor in a heterogeneous environment) will dictate the overall time for each parallel iteration. An asynchronous PSO implementation with load balancing, where the global best-found position is updated continuously as each particle completes a function evaluation, could address this limitation. However, the extent to which convergence characteristics and scale independence would be affected is not yet known.
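The degradation described above is easy to quantify with a toy model of one synchronous iteration (round-robin assignment of evaluations to workers is assumed for illustration):

```python
def synchronous_iteration_time(eval_times, n_workers):
    """Wall-clock time of one synchronous parallel iteration: evaluations
    are dealt round-robin to workers, and the iteration ends only when the
    slowest worker finishes, so one slow node stalls the whole swarm."""
    workers = [0.0] * n_workers
    for i, t in enumerate(eval_times):
        workers[i % n_workers] += t
    return max(workers)

def parallel_efficiency(eval_times, n_workers):
    """Speedup relative to serial execution, divided by worker count."""
    serial = sum(eval_times)
    return serial / (synchronous_iteration_time(eval_times, n_workers) * n_workers)
```

With identical evaluation times the efficiency is 1.0; a single slow evaluation (or slow node) drags it down, exactly the idle-time effect the paragraph describes.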

To put the results of our study into the proper perspective, one must remember that optimization algorithm robustness can be influenced heavily by algorithm implementation details, and no single optimization algorithm will work for all problems. For two of the analytical test problems (H2, and H3 with n = 4), other studies have reported PSO results using formulations that did not include dynamic inertia and velocity updating. Comparisons are difficult given differences in the maximum number of function evaluations and number of particles, but in general, algorithm modifications were (not surprisingly) found to influence algorithm convergence characteristics [54–56]. For our GA and SQP algorithms, results for the analytical test problems were very different from those obtained in [5] using different GA and SQP implementations. With seven mutation and four crossover operators, the GA algorithm used in [5] was obviously much more complex than the one used here. In contrast, both SQP algorithms were highly developed commercial implementations. Poor performance by a gradient-based algorithm can be difficult to correct even with design variable scaling and careful tuning of the FDSS. These findings indicate that specific algorithm implementations, rather than general classes of algorithms, must be evaluated to reach any conclusions about algorithm robustness and performance on a particular problem.

6 Conclusion

In summary, the PSO algorithm with dynamic inertia and velocity updating provides another option for difficult biomechanical optimization problems, with the added benefit of being scale independent. There are few algorithm-specific parameters to adjust, and standard recommended settings work well for most problems. The algorithm's main drawback is its high cost in terms of function evaluations, caused by slow convergence in the final stages of the optimization, a common trait among global search algorithms. In biomechanical optimization problems, noise, multiple local minima, and design variables of different scale can limit the reliability of gradient-based algorithms. The PSO algorithm presented here provides a simple-to-use off-the-shelf alternative for consideration in such cases.

Acknowledgment

This study was funded by NIH National Library of Medicine (R03 LM07332) and Whitaker Foundation grants to B. J. Fregly and an AFOSR (F49620-09-1-0070) grant to R. T. Haftka. The authors thank Dr. Knoek van Soest and Dr. Richard Casius for providing the plot data in the right column of Fig. 2, Tushar Goel for providing the C source code for the GA algorithm, and Dr. Vladimir Balabanov of Vanderplaats R & D for assistance with VisualDOC modifications.

References

[1] Anderson, F. C., and Pandy, M. G., 1999, "A Dynamic Optimization Solution for Vertical Jumping in Three Dimensions," Comput. Methods Biomech. Biomed. Eng., 2, pp. 201–231.
[2] Anderson, F. C., and Pandy, M. G., 2001, "Dynamic Optimization of Human Walking," J. Biomech. Eng., 123, pp. 381–390.
[3] Pandy, M. G., 2001, "Computer Modeling and Simulation of Human Movement," Annu. Rev. Biomed. Eng., 3, pp. 245–273.
[4] Neptune, R. R., 1999, "Optimization Algorithm Performance in Determining Optimal Controls in Human Movement Analyses," J. Biomech. Eng., 121, pp. 249–252.
[5] Soest, A. J., and Casius, L. J. R., 2003, "The Merits of a Parallel Genetic Algorithm in Solving Hard Optimization Problems," J. Biomech. Eng., 125, pp. 141–146.
[6] Buchanan, T. S., and Shreeve, D. A., 1996, "An Evaluation of Optimization Techniques for the Prediction of Muscle Activation Patterns During Isometric Tasks," J. Biomech. Eng., 118, pp. 565–574.
[7] Crowninshield, R. D., and Brand, R. D., 1981, "A Physiologically Based Criterion of Muscle Force Prediction in Locomotion," J. Biomech., 14, pp. 793–801.
[8] Glitsch, U., and Baumann, W., 1997, "The Three-Dimensional Determination of Internal Loads in the Lower Extremity," J. Biomech., 30, pp. 1123–1131.
[9] Kaufman, K. R., An, K.-N., Litchy, W. J., and Chao, E. Y. S., 1991, "Physiological Prediction of Muscle Forces—I. Theoretical Formulation," Neuroscience, 40, pp. 781–792.
[10] Lu, T.-W., and O'Connor, J. J., 1999, "Bone Position Estimation From Skin Marker Co-ordinates Using Global Optimisation With Joint Constraints," J. Biomech., 32, pp. 129–134.
[11] Raasch, C. C., Zajac, F. E., Ma, B., and Levine, W. S., 1997, "Muscle Coordination of Maximum-Speed Pedaling," J. Biomech., 30, pp. 595–602.
[12] Prilutsky, B. I., Herzog, W., and Allinger, T. L., 1997, "Forces of Individual Cat Ankle Extensor Muscles During Locomotion Predicted Using Static Optimization," J. Biomech., 30, pp. 1025–1033.
[13] Bogert, A. J., Smith, G. D., and Nigg, B. M., 1994, "In Vivo Determination of the Anatomical Axes of the Ankle Joint Complex: An Optimization Approach," J. Biomech., 27, pp. 1477–1488.
[14] Mommersteeg, T. J. A., Blankevoort, L., Huiskes, R., Kooloos, J. G. M., and Kauer, J. M. G., 1996, "Characterization of the Mechanical Behavior of Human Knee Ligaments: A Numerical–Experimental Approach," J. Biomech., 29, pp. 151–160.
[15] Reinbolt, J. A., Schutte, J. F., Fregly, B. J., Haftka, R. T., George, A. D., and Mitchell, K. H., 2004, "Determination of Patient-Specific Multi-Joint Kinematic Models Through Two-Level Optimization," J. Biomech., 38, pp. 621–626.
[16] Sommer, H. J., III, and Miller, N. R., 1980, "A Technique for Kinematic Modeling of Anatomical Joints," J. Biomech. Eng., 102, pp. 311–317.
[17] Vaughan, C. L., Andrews, J. G., and Hay, J. G., 1982, "Selection of Body Segment Parameters by Optimization Methods," J. Biomech. Eng., 104, pp. 38–44.
[18] Kaptein, B. L., Valstar, E. R., Stoel, B. C., Rozing, P. M., and Reiber, J. H. C., 2003, "A New Model-Based RSA Method Validated Using CAD Models and Models From Reversed Engineering," J. Biomech., 36, pp. 873–882.
[19] Mahfouz, M. R., Hoff, W. A., Komistek, R. D., and Dennis, D. A., 2003, "A Robust Method for Registration of Three-Dimensional Knee Implant Models to Two-Dimensional Fluoroscopy Images," IEEE Trans. Med. Imaging, 22, pp. 1561–1574.
[20] You, B.-M., Siy, P., Anderst, W., and Tashman, S., 2001, "In Vivo Measurement of 3-D Skeletal Kinematics From Sequences of Biplane Radiographs: Application to Knee Kinematics," IEEE Trans. Med. Imaging, 20, pp. 514–525.
[21] Gill, P. E., Murray, W., and Wright, M. H., 1986, Practical Optimization, Academic Press, New York.
[22] Willcox, K., and Wakayama, S., 2003, "Simultaneous Optimization of a Multiple-Aircraft Family," J. Aircr., 40, pp. 616–622.
[23] Kennedy, J., and Eberhart, R. C., 1995, "Particle Swarm Optimization," Proceedings of the IEEE International Conference on Neural Networks, Perth, Australia, Vol. 4, pp. 1942–1948.
[24] Groenwold, A. A., and Fourie, P. C., 2002, "The Particle Swarm Optimization Algorithm in Size and Shape Optimization," Struct. Multidiscip. Optim., 23, pp. 259–267.
[25] Shi, Y., and Eberhart, R. C., 1998, "Parameter Selection in Particle Swarm Optimization," Lect. Notes Comput. Sci., 1447, Springer-Verlag, Berlin, pp. 591–600.
[26] Fourie, P. C., and Groenwold, A. A., 2001, "The Particle Swarm Algorithm in Topology Optimization," Proceedings of the 4th World Congress of Structural and Multidisciplinary Optimization, Dalian, China, pp. 52–53.
[27] Schutte, J. F., 2001, "Particle Swarms in Sizing and Global Optimization," Master's thesis, University of Pretoria, South Africa.
[28] Schutte, J. F., and Groenwold, A. A., 2003, "Sizing Design of Truss Structures Using Particle Swarms," Struct. Multidiscip. Optim., 25, pp. 261–269.
[29] Schutte, J. F., and Groenwold, A. A., 2004, "A Study of Global Optimization Using Particle Swarms," J. Global Optim. (in press).
[30] Deb, K., 2001, Multi-Objective Optimization Using Evolutionary Algorithms, Wiley Interscience Series in Systems and Optimization, Wiley, New York, Chap. 4.
[31] Deb, K., and Agrawal, R. B., 1995, "Simulated Binary Crossover for Continuous Search Space," Complex Syst., 9, pp. 115–148.
[32] Deb, K., and Goyal, M., 1996, "A Combined Genetic Adaptive Search (GeneAS) for Engineering Design," Comput. Sci. Inform., 26, pp. 30–45.
[33] Schutte, J. F., Reinbolt, J. A., Fregly, B. J., Haftka, R. T., and George, A. D., 2004, "Parallel Global Optimization With the Particle Swarm Algorithm," Int. J. Numer. Methods Eng., 61, pp. 2296–2315.
[34] Koh, B. I., Reinbolt, J. A., Fregly, B. J., and George, A. D., 2004, "Evaluation of Parallel Decomposition Methods for Biomechanical Optimizations," Comput. Methods Biomech. Biomed. Eng., 7, pp. 215–225.
[35] Gropp, W., and Lusk, E., 1996, "User's Guide for MPICH, a Portable Implementation of MPI," Argonne National Laboratory, Mathematics and Computer Science Division, http://www.mcs.anl.gov/mpi/mpiuserguide/paper.html.
[36] Gropp, W., Lusk, E., Doss, N., and Skjellum, A., 1996, "A High-Performance, Portable Implementation of the MPI Message Passing Interface Standard," Parallel Comput., 22, pp. 789–828.
[37] Schaffer, J. D., Caruana, R. A., Eshelman, L. J., and Das, R., 1989, "A Study of Control Parameters Affecting Online Performance of Genetic Algorithms for Function Optimization," Proceedings of the 3rd International Conference on Genetic Algorithms, edited by J. D. Schaffer, Morgan Kaufmann Publishers, San Mateo, California, pp. 51–60.
[38] Corana, A., Marchesi, M., Martini, C., and Ridella, S., 1987, "Minimizing Multimodal Functions of Continuous Variables With the 'Simulated Annealing' Algorithm," ACM Trans. Math. Softw., 13, pp. 262–280.
[39] Chèze, L., Fregly, B. J., and Dimnet, J., 1995, "A Solidification Procedure to Facilitate Kinematic Analyses Based on Video System Data," J. Biomech., 28, pp. 879–884.
[40] Lu, T.-W., and O'Connor, J. J., 1999, "Bone Position Estimation From Skin Marker Co-ordinates Using Global Optimization With Joint Constraints," J. Biomech., 32, pp. 129–134.
[41] Reference Manual for VisualDOC C/C++ API, 2001, Vanderplaats Research and Development, Inc., Colorado Springs, CO.
[42] Boeringer, D. W., and Werner, D. H., 2004, "Particle Swarm Optimization Versus Genetic Algorithms for Phased Array Synthesis," IEEE Trans. Antennas Propag., 52, pp. 771–779.
[43] Brandstatter, B., and Baumgartner, U., 2002, "Particle Swarm Optimization—Mass-Spring System Analogon," IEEE Trans. Magn., 38, pp. 997–1000.
[44] Cockshott, A. R., and Hartman, B. E., 2001, "Improving the Fermentation Medium for Echinocandin B Production Part II: Particle Swarm Optimization," Process Biochem. (Oxford, U.K.), 36, pp. 661–669.
[45] Costa, E. F. J., Lage, P. L. C., and Biscaia, E. C., Jr., 2003, "On the Numerical Solution and Optimization of Styrene Polymerization in Tubular Reactors," Comput. Chem. Eng., 27, pp. 1591–1604.
[46] Lu, W. Z., Fan, H.-Y., and Lo, S. M., 2003, "Application of Evolutionary Neural Network Method in Predicting Pollutant Levels in Downtown Area of Hong Kong," Neurocomputing, 51, pp. 387–400.
[47] Pidaparti, R. M., and Jayanti, S., 2003, "Corrosion Fatigue Through Particle Swarm Optimization," AIAA J., 41, pp. 1167–1171.
[48] Tandon, V., El-Mounayri, H., and Kishawy, H., 2002, "NC End Milling Optimization Using Evolutionary Computation," Int. J. Mach. Tools Manuf., 42, pp. 595–605.
[49] Abido, M. A., 2002, "Optimal Power Flow Using Particle Swarm Optimization," Int. J. Electr. Power Energy Syst., 24, pp. 563–571.
[50] Abido, M. A., 2002, "Optimal Design of Power System Stabilizers Using Particle Swarm Optimization," IEEE Trans. Energy Convers., 17, pp. 406–413.
[51] Gies, D., and Rahmat-Samii, Y., 2003, "Particle Swarm Optimization for Reconfigurable Phase-Differentiated Array Design," Microwave Opt. Technol. Lett., 38, pp. 168–175.
[52] Leite, J. P. B., and Topping, B. H. V., 1999, "Parallel Simulated Annealing for Structural Optimization," Comput. Struct., 73, pp. 545–564.
[53] Higginson, J. S., Neptune, R. R., and Anderson, F. C., 2004, "Simulated Parallel Annealing Within a Neighborhood for Optimization of Biomechanical Systems," J. Biomech. (in press).
[54] Carlisle, A., and Dozier, G., 2001, "An Off-the-Shelf PSO," Proceedings of the Workshop on Particle Swarm Optimization, Indianapolis, IN.
[55] Parsopoulos, K. E., and Vrahatis, M. N., 2002, "Recent Approaches to Global Optimization Problems Through Particle Swarm Optimization," Nat. Comput., 1, pp. 235–306.
[56] Trelea, I. C., 2002, "The Particle Swarm Optimization Algorithm: Convergence Analysis and Parameter Selection," Inf. Process. Lett., 85, pp. 317–325.
