

Applied Mathematics and Computation 153 (2004) 651–658

www.elsevier.com/locate/amc

Solving the parameter identification problem of mathematical models using genetic algorithms

Emmanuel Karlo Nyarko a,*, Rudolf Scitovski b

a Faculty of Electrical Engineering, University of Osijek, Kneza Trpimira 2b, HR-31000 Osijek, Croatia
b Department of Mathematics, University of Osijek, Gajev trg 6, HR-31000 Osijek, Croatia

* Corresponding author. E-mail addresses: [email protected] (E.K. Nyarko), [email protected] (R. Scitovski).

Abstract

A method for solving the parameter identification problem for ordinary second-order differential equations using genetic algorithms is given. The method is tested on two numerical examples.

© 2003 Elsevier Inc. All rights reserved.

Keywords: Parameter identification; Genetic algorithms; Mathematical model

1. Introduction

Mathematical models used in applied research (biology, physics, economics, etc.) are often defined by a system of ordinary differential equations. It should be noted that, in general, the solution of such a system need not be an elementary function. The parameters of the mathematical model then have to be determined from the experimental data obtained. This problem is known in the literature as the parameter identification problem.

The parameter identification problem is often solved using the quasi-linearization method (see [1,2]) or the data smoothing method (see [2–4]). In this paper only genetic algorithms (GAs) are used.


GAs are optimization techniques based on the concepts of natural selection and genetics. One of the main advantages of using GAs is that they require no knowledge of or gradient information about the response surface. The results obtained will be compared with earlier results obtained by other methods (see [1]).

2. The parameter identification problem

Let us assume (for simplicity's sake) that the mathematical model is defined either by a first-order differential equation

\frac{dy}{dt} = f(t, y(t); p) \qquad (1)

or by a second-order differential equation

\frac{d^2 y}{dt^2} = f(t, y(t), y'(t); p), \qquad (2)

where p = (p_1, \ldots, p_n)^T is the vector of n unknown real parameters. We are also given the experimental data (t_i, y_i), i = 1, \ldots, m, where t_i represents the values of the independent variable and y_i the measured values of the corresponding dependent variable. Usually we have n \ll m. With the given data one has to estimate the optimal parameter vector p^* and the optimal initial condition for the differential equation (1) or (2) such that

F(p^*) = \min_{p \in \mathbb{R}^n} F(p), \qquad F(p) = \sum_{i=1}^{m} \left[ y(t_i; p) - y_i \right]^2, \qquad (3)

where y(t_i; p) is the solution of Eq. (1) or (2).
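For concreteness, the functional (3) can be evaluated numerically as in the following minimal sketch, which assumes the first-order model (1), a SciPy ODE solver, and placeholder names (f, t_data, y_data, y0) for the model right-hand side, the data and the initial condition:

import numpy as np
from scipy.integrate import solve_ivp

def F(p, f, t_data, y_data, y0):
    """Sum of squared deviations between the model solution y(t_i; p) and the data y_i, as in Eq. (3)."""
    sol = solve_ivp(lambda t, y: f(t, y, p), (t_data[0], t_data[-1]), [y0],
                    t_eval=t_data, method="RK45")
    return np.sum((sol.y[0] - y_data) ** 2)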

In this paper we shall consider the problem of determining the optimal parameters of a mathematical model defined by a second-order differential equation. The initial condition of the model is generally not known. It is not recommended to use the first data point for that purpose, because the error it contains is not known (see [5]). The problem of finding the optimal initial condition of the mathematical model can be posed in the following way (see [1,2]):

Find the minimum of the functional

\Phi(\lambda, \mu) = \sum_{i=1}^{m} \left[ y_{\lambda,\mu}(t_i; p^*) - y_i \right]^2, \qquad (4)

where the function y_{\lambda,\mu} is the solution of the Cauchy problem

y'' = f(t, y(t), y'(t); p^*), \qquad y'(t_1) = \mu, \quad y(t_1) = \lambda \qquad (5)

for Eq. (2).


This minimization problem may be solved by applying either the Nelder–Mead downhill simplex method (see [6]) or GAs.
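As an illustration, this step can be sketched with SciPy's Nelder–Mead implementation; here p_star is assumed to be already estimated, f is the right-hand side of Eq. (2), and the data arrays are placeholders:

import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

def Phi(init, f, p_star, t_data, y_data):
    """Functional (4): squared deviations of the solution of the Cauchy problem (5) from the data."""
    lam, mu = init                                          # y(t_1) = lambda, y'(t_1) = mu
    rhs = lambda t, z: [z[1], f(t, z[0], z[1], p_star)]     # z = (y, y')
    sol = solve_ivp(rhs, (t_data[0], t_data[-1]), [lam, mu], t_eval=t_data)
    return np.sum((sol.y[0] - y_data) ** 2)

# result = minimize(Phi, x0=[y_data[0], 0.0],               # starting guess only
#                   args=(f, p_star, t_data, y_data), method="Nelder-Mead")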

3. Genetic algorithms

GAs are optimization techniques based on the concepts of natural selection, genetics and evolution. The variables are represented as genes on a chromosome. Each chromosome represents a possible solution in the search space. Like nature, GAs solve the problem of finding good chromosomes by manipulating the material in the chromosomes blindly, without any knowledge about the type of problem they are solving. The only information they are given is an evaluation (or fitness) of each chromosome they produce.

GAs operate on a group of candidate solutions (a population) in the search space. Chromosomes with better fitness are found through natural selection and the genetic operators, mutation and recombination. Natural selection ensures that the chromosomes with the best fitness will propagate in future populations. Using the recombination operator, the GA combines genes from two parent chromosomes to form two new chromosomes (children) that have a high probability of having better fitness than their parents. Mutation allows new areas of the response surface to be explored. Given a problem, one must first determine a way of encoding the solutions of the problem in the form of chromosomes and, secondly, define an evaluation function that returns a measurement of the cost value (fitness) of any chromosome in the context of the problem. A GA then consists of the following steps (see [7]); a minimal sketch is given after the list:

1. Initialize a population of chromosomes.
2. Evaluate each chromosome in the population.
3. Create new chromosomes by using GA operators.
4. Delete unsuitable chromosomes of the population to make room for the new members.
5. Evaluate the new chromosomes and insert them into the population.
6. If the stopping criterion is satisfied, then stop and return the best chromosome; otherwise, go to step 3.
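The following is a minimal sketch of steps 1–6 for real-valued (floating-point) chromosomes. It is not the authors' implementation; the particular operators used here (truncation selection, arithmetic crossover, uniform reset mutation) are illustrative assumptions, and fitness is any cost function to be minimized.

import numpy as np

def genetic_algorithm(fitness, bounds, pop_size=50, generations=100,
                      mutation_rate=0.1, rng=np.random.default_rng(0)):
    lo, hi = np.array(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, size=(pop_size, len(bounds)))       # step 1: initialize
    cost = np.array([fitness(c) for c in pop])                    # step 2: evaluate
    for _ in range(generations):                                  # steps 3-6
        parents = pop[np.argsort(cost)[:pop_size // 2]]           # keep the better half
        i, j = rng.integers(0, len(parents), (2, pop_size // 2))
        alpha = rng.random((pop_size // 2, 1))
        children = alpha * parents[i] + (1 - alpha) * parents[j]  # recombination
        mask = rng.random(children.shape) < mutation_rate
        children = np.where(mask, rng.uniform(lo, hi, children.shape), children)  # mutation
        pop = np.vstack([parents, children])                      # steps 4-5: replace the worst half
        cost = np.array([fitness(c) for c in pop])
    return pop[np.argmin(cost)]                                   # best chromosome found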

Due to their nature, the advantages of using GAs are many: they require no knowledge of or gradient information about the response surface, discontinuities present on the response surface have little effect on overall optimization performance, they are resistant to becoming trapped in local optima, they perform very well on large-scale optimization problems and they can be employed for a wide variety of optimization problems. However, they do have some disadvantages, and these include having trouble finding the exact global optimum and requiring a large number of fitness function evaluations or iterations.


This is especially noticeable when the dimensionality of the problem is large.

4. Solving the parameter identification problem using genetic algorithms

The representation or encoding of the variables being optimized has a large impact on search performance, since the optimization is performed on this representation of the variables (see [8]). In the parameter identification problem, the parameters and initial conditions which need to be determined and optimized are encoded into chromosomes using a floating-point implementation. In the GA, after each iteration, the new values y(t_i; p) for each possible solution p are determined using the Runge–Kutta method. The fitness of each chromosome is then determined using Eq. (3). The GA then consists of the following steps (see [7]); a sketch of the fitness evaluation is given after the list:

1. Initialize a population of chromosomes (possible solutions p).
2. Find the values y(t_i; p) for each chromosome p in the population using the Runge–Kutta method.
3. Evaluate the fitness of each chromosome p in the population using Eq. (3).
4. Create new chromosomes by using GA operators.
5. Delete unsuitable chromosomes of the population to make room for the new members.
6. Find the new values y(t_i; p) for each new chromosome p in the population using the Runge–Kutta method.
7. Evaluate the fitness of each new chromosome p in the population using Eq. (3) and insert them into the population.
8. If the stopping criterion is satisfied, then stop and return the best chromosome; otherwise, go to step 4.
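A sketch of how steps 2–3 (and 6–7) can be realized for the second-order model (2): each chromosome encodes the parameters p together with the initial conditions, the Cauchy problem is integrated with a classical fourth-order Runge–Kutta scheme on the data grid, and Eq. (3) is evaluated. The names f, t_data and y_data are placeholders; the resulting fitness function could be passed to a GA skeleton such as the one sketched in Section 3.

import numpy as np

def rk4_solve(f, p, t_grid, y1, dy1):
    """Integrate y'' = f(t, y, y'; p) over t_grid with the classical RK4 method."""
    z = np.array([y1, dy1], dtype=float)                  # z = (y, y')
    rhs = lambda t, z: np.array([z[1], f(t, z[0], z[1], p)])
    ys = [z[0]]
    for t0, t1 in zip(t_grid[:-1], t_grid[1:]):
        h = t1 - t0
        k1 = rhs(t0, z)
        k2 = rhs(t0 + h / 2, z + h / 2 * k1)
        k3 = rhs(t0 + h / 2, z + h / 2 * k2)
        k4 = rhs(t0 + h, z + h * k3)
        z = z + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        ys.append(z[0])
    return np.array(ys)

def fitness(chromosome, f, t_data, y_data):
    """Eq. (3) evaluated for one chromosome = (p_1, ..., p_n, y(t_1), y'(t_1))."""
    *p, lam, mu = chromosome
    return np.sum((rk4_solve(f, p, t_data, lam, mu) - y_data) ** 2)

# best = genetic_algorithm(lambda c: fitness(c, f, t_data, y_data), bounds)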

5. Numerical examples

Example 1. Enzyme effusion problem (see [2]):

y_1' = p_1(27.8 - y_1) + \frac{p_4}{2.6}(y_2 - y_1) + \frac{4991}{t\sqrt{2\pi}} \exp\!\left(-0.5\left(\frac{\ln(t) - p_2}{p_3}\right)^{2}\right),

y_2' = \frac{p_4}{2.7}(y_1 - y_2). \qquad (6)

According to the given data (Table 1), one has to estimate the parameter values p_1, p_2, p_3, p_4 of the model.


Table 1

Data for enzyme effusion problem

t y1 t y1 t y1 t y1

0.1 27.8 21.3 331.9 42.4 62.3 81.1 23.5

2.5 20.0 22.9 243.5 44.4 58.7 91.1 24.8

3.8 23.5 24.9 212.0 47.9 41.9 101.9 26.1

7.0 63.6 26.8 164.1 53.1 40.2 115.4 33.3

10.9 267.5 30.1 112.7 59.0 31.3 138.7 17.8

15.0 427.8 34.1 88.1 65.1 30.0 163.2 16.8

18.2 339.7 37.8 76.2 73.1 30.6 186.7 16.8


Since only data for the function y_1 are given, y_2 is expressed from the first equation of system (6) and substituted into the second equation, thus obtaining a second-order differential equation for the function y_1.
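Concretely, writing g(t) for the last term of the first equation in (6), this elimination can be sketched as follows:

g(t) = \frac{4991}{t\sqrt{2\pi}} \exp\!\left(-\frac{1}{2}\left(\frac{\ln(t) - p_2}{p_3}\right)^{2}\right), \qquad y_2 = y_1 + \frac{2.6}{p_4}\left[ y_1' - p_1(27.8 - y_1) - g(t) \right],

y_2' = \frac{p_4}{2.7}(y_1 - y_2) = -\frac{2.6}{2.7}\left[ y_1' - p_1(27.8 - y_1) - g(t) \right].

Differentiating the expression for y_2 with respect to t, equating it with y_2' and multiplying by p_4/2.6 gives the right-hand side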

f(t, y, y', p) = -\frac{5.3}{7.02}\, p_4 y' - p_1 y' + \frac{p_1 p_4}{2.7}(27.8 - y) + \frac{4991}{t^2\sqrt{2\pi}}\left(\frac{p_4}{2.7}\, t - 1 - \frac{w}{p_3}\right)\exp\!\left(-\frac{w^2}{2}\right), \qquad (7)

where w = (\ln(t) - p_2)/p_3.

Table 2 shows the results obtained using different numbers of generations, denoted by No. of GEN. The values of the parameters p_i, i = 1, 2, 3, 4, are given, as are the optimal initial conditions and the sum of squared deviations \sum_{i=1}^{m} [y_{\lambda,\mu}(t_i; p^*) - y_i]^2, denoted by SS. As can be expected, increasing the number of generations decreases the value of the functional (4), i.e., gives better results. The results are also compared with those obtained by Scitovski and Jukić [2]. Fig. 1 shows the data and the graph of the fitted function y_1.
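As an illustration, the right-hand side (7) can be coded directly in the form expected by the Runge–Kutta/GA sketch of Section 4 (a sketch; the function name is a placeholder):

import numpy as np

def enzyme_rhs(t, y, dy, p):
    """f(t, y, y'; p) from Eq. (7), with w = (ln(t) - p2)/p3."""
    p1, p2, p3, p4 = p
    w = (np.log(t) - p2) / p3
    return (-5.3 / 7.02 * p4 * dy - p1 * dy + p1 * p4 / 2.7 * (27.8 - y)
            + 4991.0 / (t ** 2 * np.sqrt(2.0 * np.pi))
            * (p4 / 2.7 * t - 1.0 - w / p3) * np.exp(-w ** 2 / 2.0))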

Example 2. According to the given data (t_i, y_i), i = 1, \ldots, m (see Table 3), one has to estimate the parameter values p_1, p_2, p_3 and p_4 of the function

y(t; p) = p_1 \exp(p_3 t) + p_2 \exp(p_4 t) \qquad (8)

by minimizing the functional (4).

Table 2
Results obtained using GA

Method          p1        p2        p3        p4        y1(0.1)   y2(0.1)   No. of GEN.   SS
GA              0.31938   2.70104   0.38920   0.07819   21.00     38.75     100           5229.73
GA              0.30501   2.69865   0.40052   0.11663   22.02     39.44     200           4547.34
GA              0.28452   2.67169   0.39268   0.16144   23.99     40.14     500           4068.38
Scitovski [2]   0.3190    2.70100   0.41900   0.10310   22        39        –             5076.6


Fig. 1. y1 for the optimal initial condition (no. of generations = 500).

Table 3
Data for Example 2

t   −1      −2/3    −1/3    0       1/3     2/3     1
y   64.0    66.0    69.5    74.0    80.8    91.0    103.5


The functions p_1 \exp(p_3 t) and p_2 \exp(p_4 t) solve the second-order differential equation

y'' - (p_3 + p_4) y' + p_3 p_4 y = 0, \qquad (9)

from which we can estimate the parameters p_3 and p_4 by solving the parameter identification problem for (9). Solving the linear least squares problem

F(p_1, p_2) = \sum_{i=1}^{m} \left[ y_i - p_1 \exp(p_3 t_i) - p_2 \exp(p_4 t_i) \right]^2, \qquad (10)

we obtain the parameters p_1 and p_2 (see [2]).
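Since (10) is linear in p_1 and p_2 once p_3 and p_4 are fixed, this step reduces to an ordinary linear least-squares problem. A minimal sketch, assuming NumPy and using placeholder names for the data of Table 3:

import numpy as np

def estimate_p1_p2(t, y, p3, p4):
    """Least-squares estimates of p1, p2 in y = p1*exp(p3*t) + p2*exp(p4*t) for fixed p3, p4."""
    A = np.column_stack([np.exp(p3 * t), np.exp(p4 * t)])   # design matrix of (10)
    (p1, p2), *_ = np.linalg.lstsq(A, y, rcond=None)
    return p1, p2

# t = np.array([-1, -2/3, -1/3, 0, 1/3, 2/3, 1]); y = np.array([64.0, 66.0, 69.5, 74.0, 80.8, 91.0, 103.5])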

Table 4 shows the results obtained using different numbers of generations, denoted by No. of GEN. As can be expected, increasing the number of generations decreases the value of the functional (4), denoted by SS, i.e., gives better results.

Page 7: Solving the parameter identification problem of mathematical models using genetic algorithms

Table 4

Results obtained using GA

p1        p2        p3       p4        No. of GEN.   SS
43.2233   30.8774   0.6170   −0.2812   100           0.4396
40.8112   33.3234   0.6400   −0.2464   200           0.3859
37.7414   36.3533   0.6753   −0.2123   500           0.3347

Fig. 2. y for the optimal initial condition (no. of generations = 500).


Fig. 2 shows the data and the graph of the fitted function y.

References

[1] R. Scitovski, D. Jukić, I. Urbiha, Solving the parameter identification problem by using TLp spline, Math. Commun. (Suppl.) 1 (2001) 81–91.
[2] R. Scitovski, D. Jukić, A method for solving the parameter identification problem for ordinary differential equations of the second order, Appl. Math. Comput. 74 (1996) 273–291.
[3] R. Galić, R. Scitovski, T. Marošević, Application of the moving least square method in solving the parameter identification problem of a mathematical model (in Croatian), in: T. Hunjak, Lj. Martić, L. Neralić (Eds.), Proceedings of the 4th Conference on Operational Research, Zagreb, 1994, pp. 181–191.
[4] R. Galić, R. Scitovski, T. Marošević, D. Jukić, Optimal initial condition in mathematical model (in Croatian), in: T. Hunjak, Lj. Martić, L. Neralić (Eds.), Proceedings of the 5th Conference on Operational Research, Zagreb, 1995, pp. 62–71.


[5] J.M. Varah, A spline least squares method for numerical parameter estimation in differential equations, SIAM J. Sci. Stat. Comput. 3 (1982) 28–46.
[6] J.E. Dennis Jr., V. Torczon, Direct search methods on parallel machines, SIAM J. Optimization 1 (1991) 448–474.
[7] C.T. Lin, C.S.G. Lee, Neural Fuzzy Systems: A Neuro-Fuzzy Synergism to Intelligent Systems, Prentice Hall PTR, Prentice-Hall, Inc., 1996, pp. 382–385.
[8] Z. Michalewicz, Genetic Algorithms + Data Structures = Evolution Programs, Springer-Verlag, Berlin, Heidelberg, New York, 1996, pp. 95–105.