
Adaptation and interaction in dynamical systems:

Modelling and rule discovery through

evolving connectionist systems

Nikola Kasabov *

Knowledge Engineering and Discovery Research Institute, Auckland University of Technology,

Private Bag 92006, Auckland 1020, New Zealand

Received 22 September 2004; accepted 10 January 2005

Abstract

The paper presents a methodology for adaptive modelling and discovery of dynamic relationship rules from continuous

data streams. In dynamic processes, underlying rules may change over time and tracing these changes is a difficult task for

computer modelling. Evolving fuzzy neural networks (EFuNN) are used for this purpose here. EFuNNs belong to the group

of evolving connectionist systems (ECOS). These are information systems that learn from data in a supervised mode through

on-line adaptive clustering and allow for rule extraction, each rule representing an input-output relationship within a cluster of

data. Extracted rules, after each consecutive chunk of data is entered into the system, are compared in order to discover new

patterns of interaction between input and output variables. Thus the stability and plasticity of the investigated process are

evaluated. The rules are also used for the prediction of future events. To illustrate the methodology, a mathematical example

is used, along with two real case studies. The first case study is from Macroeconomics and the second one is from

Bioinformatics.

© 2005 Elsevier B.V. All rights reserved.

Keywords: Adaptive systems; Knowledge-based neural networks; Evolving connectionist systems; Macroeconomics; Bioinformatics

www.elsevier.com/locate/asoc

Applied Soft Computing 6 (2006) 307–322

1. Introduction

Many biological and social systems are char-

acterized by a continuous adaptation and by a

complex interaction of many variables over time.

Such systems can be observed at different levels of


the functioning of a living organism, e.g.: molecular,

genetic, cellular, multi-cellular, neuronal, brain

function, evolution. One of the challenges for

information science is to be able to represent the

dynamic processes, to model them, and to reveal ‘‘the

rules’’ that govern the adaptation and the variable

interaction over time.

Decision making, related to complex and dynami-

cally changing processes, requires sophisticated

decision support systems (DSS) that are able to:


- learn and adapt quickly to new data in an on-line mode;
- continuously learn patterns of variable relationship from data streams;
- deal with vague, fuzzy and incomplete information, as well as with crisp information.

In addition to the well-established neural network

methods (see [1]) new methods have been recently

developed that facilitate building on-line decision

support systems. One particular type, called evolving

connectionist systems (ECOS) [2] is explored in this

paper. ECOS were applied in [3,4] for building hybrid

decision support systems in finance and economics. In

this study, we focus mainly on the process of adaptive

modelling and knowledge discovery from series of

data representing complex dynamic processes with

the use of ECOS. The main research question is how dynamic changes of rules can be traced and analysed when chunks of new data are

incrementally fed into an ECOS model. As an illus-

tration, a mathematical example, and two case studies

are presented. The first one is on modelling and

prediction of macroeconomic indicators of world

economies over a period of several years. The second

case study is from the area of Bioinformatics.

2. Evolving connectionist systems – ECOS

The evolving connectionist systems paradigm

(ECOS) is broadly presented in [2]. ECOS are systems

that evolve in time through interaction with the

environment. The functioning of the ECOS is based on

the following general principles:

(1) ECOS learn and adapt in an on-line mode where new data is incrementally presented;
(2) ECOS have an "open" structure, where new inputs, outputs, modules and connections can be introduced at any stage of the system's operation;
(3) ECOS learn a set of local models represented as cluster-based functions;
(4) ECOS facilitate knowledge representation in the form of rules allocated to clusters of data.

Some implementations of ECOS are: the evolving fuzzy neural network (EFuNN) [2,5–7]; the evolving clustering method (ECM) and the dynamic evolving neural-fuzzy inference system (DENFIS) [8]; and the evolving self-organising map (ESOM) [9]. The ECM and EFuNN methods are described briefly in this section, mainly from the point of view of adaptive learning and rule extraction. A method for dynamic rule analysis is presented in Section 3 and illustrated on an example. Section 4 applies the method to a simple case study of macroeconomic data, and Section 5 to a case study from Bioinformatics.

2.1. Evolving clustering

Traditional statistical clustering methods, such as

k-means clustering, fuzzy C-means clustering, etc.,

require that the number of clusters is defined in advance [10]. They work on a static batch of data and

require many iterations until the cluster centres are

calculated. For new incoming data the whole process

has to be repeated on both the new and the old data

together for many iterations.

The evolving clustering method ECM [8] is

concerned with an on-line, incremental creation of

clusters from a continuous stream of data. For each

cluster, information about its current cluster centre,

radius, and number of samples accommodated in the

cluster is maintained. With incoming data existing

clusters may be modified, or new clusters created.

ECM can be used either in an unsupervised mode

(only input data is available), or as part of a supervised

learning (both input data to a system and their desired

output values are available). The latter is the case of

the DENFIS and EFuNN.
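A minimal sketch may help make the one-pass nature of ECM concrete. The code below is a simplification, not the published ECM algorithm: each cluster keeps only a centre, a radius and a sample count, and a single distance threshold Dthr (an assumed parameter) decides whether a sample is absorbed or starts a new cluster.

```python
import numpy as np

def ecm_sketch(stream, dthr=0.3):
    """One-pass, ECM-style clustering sketch: centres, radii and counts only."""
    clusters = []  # each cluster: {"centre": vector, "radius": float, "count": int}
    for x in stream:
        x = np.asarray(x, dtype=float)
        if not clusters:
            clusters.append({"centre": x.copy(), "radius": 0.0, "count": 1})
            continue
        d = [np.linalg.norm(x - c["centre"]) for c in clusters]
        j = int(np.argmin(d))                      # nearest existing cluster
        if d[j] <= clusters[j]["radius"]:
            clusters[j]["count"] += 1              # sample already covered, no change
        elif d[j] <= dthr:
            c = clusters[j]                        # absorb: move centre, grow radius
            c["count"] += 1
            c["centre"] += (x - c["centre"]) / c["count"]
            c["radius"] = max(c["radius"], float(np.linalg.norm(x - c["centre"])))
        else:
            clusters.append({"centre": x.copy(), "radius": 0.0, "count": 1})
    return clusters

rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0.2, 0.05, (30, 2)), rng.normal(0.8, 0.05, (30, 2))])
print(len(ecm_sketch(data)), "clusters evolved from the stream")
```

In a supervised setting (DENFIS, EFuNN) the same kind of loop runs in the joint input-output space, as discussed in Section 3.1.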

2.2. Evolving fuzzy neural networks (EFuNN)

The architecture, the learning (evolving) algorithm,

and the rule extraction and rule insertion algorithms of

EFuNN are given in [2].

An EFuNN has a five-layer structure where nodes

and connections are created/connected as data

examples are presented. An optional short-term

memory layer can be used through a feedback

connection from the rule node layer (also known as

case nodes). The layer of feedback connections could

be used if temporal relationships between input data

are to be embedded in the structure. The third layer of

neurons (rule nodes) in EFuNN evolves through either supervised, or unsupervised learning. The fourth layer of neurons in EFuNN represents a fuzzy quantisation of the output variables, similar to the input fuzzy neuron representation in layer two. The fifth layer represents the output variables (Fig. 1).

Fig. 1. A simplified and exemplified (2 inputs, 1 output, 2 membership functions) diagram of an EFuNN. The rule (case) nodes evolve in time and represent cluster centres. The fuzzy inputs and fuzzy outputs represent membership functions (MF).

Generally speaking, the connection weights coming into the rule nodes represent the coordinates of cluster centres in the input space, for clusters of data samples that also have similar output values. Here

evolving clustering is performed in the input-output

space and only data examples that have similar input

and output values, according to defined criteria, are

clustered together. The connection weights going out of the rule nodes are adjusted based on the output error, with the use of the delta rule [1], and represent an

output function allocated to the cluster represented by

the corresponding rule node.

Different learning, adaptation and optimisation

strategies and algorithms can be applied on an EFuNN

structure. Some of them are: (a) active learning –

learning is performed when a stimulus (input pattern) is

presented and kept active; this is the main learning

mode; (b) passive (inner, sleep, ‘‘echo’’) learning mode

– learning is performed when there is no input pattern

presented to the EFuNN. In this case the process of

further elaboration of the connections in EFuNN is done

in a passive learning phase, when existing connections,

that store previously fed input patterns, are used as

‘‘echo’’ to reiterate the learning process.

Different structure optimisation techniques can be

applied during the learning process: (a) pruning and

forgetting – the nodes and connections that are not

actively participating in the learning process get pruned

according to set criteria; (b) aggregation and abstraction

– rule nodes that are close in the problem space

(accommodate similar exemplars) are merged together.

A simplified learning algorithm for EFuNN is given

in Appendix A (from [2,7]).
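As an illustration of the aggregation step mentioned above, rule nodes whose centres lie closer than some merge distance can be fused into a single node whose centre is the example-weighted mean of the merged centres. The threshold and the weighting scheme below are illustrative assumptions, not the exact aggregation rule of [2,7].

```python
import numpy as np

def aggregate_nodes(nodes, merge_dist=0.1):
    """Merge rule nodes whose centres are closer than merge_dist.
    Each node is (centre, n_examples); merged centres are example-weighted means."""
    merged = []
    for centre, n in nodes:
        centre = np.asarray(centre, dtype=float)
        for m in merged:
            if np.linalg.norm(centre - m["centre"]) < merge_dist:
                total = m["n"] + n
                m["centre"] = (m["centre"] * m["n"] + centre * n) / total
                m["n"] = total
                break
        else:                                   # no nearby node: keep as a separate node
            merged.append({"centre": centre, "n": n})
    return merged

nodes = [([0.20, 0.30], 6), ([0.22, 0.31], 4), ([0.80, 0.75], 9)]
print(len(aggregate_nodes(nodes)), "rule nodes remain after aggregation")  # 2
```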

3. Adaptive modelling and dynamic rule

discovery with ECOS

3.1. Adaptive modelling with ECOS

As discussed in Section 2, ECOS incrementally

evolve rule nodes to represent clusters of input data,

where the first layer W1 of connection weights of these

nodes represent their co-ordinates in the input space,

and the second layer W2 represents the local models

(functions) allocated to each of the clusters.

Data samples are allocated to rule nodes based on

the similarity between the samples and the nodes

calculated either in the input space (this is the case in

some of the ECOS models, e.g. the dynamic neuro-

fuzzy inference system DENFIS), or in both the input

space and the output space (this is the case in the

evolving fuzzy neural network EFuNN [5] – Fig. 1, Appendix A). Samples that have a distance to an existing cluster center (rule node) N of less than a threshold Rmax (for the EFuNN models, the output values of these samples must also differ from the output value associated with this cluster center by no more than an error tolerance E) are allocated to the

same cluster Nc. Samples that do not fit into existing

clusters form new clusters. Cluster centers are

continuously adapted to new data samples, or new

cluster centers are created.
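The allocation rule described in this paragraph can be written down directly; the sketch below is a simplified reading of it, with an assumed learning rate for the centre/output updates and a one-dimensional output for brevity.

```python
import numpy as np

def allocate_or_create(nodes, x, y, rmax=0.25, err_thr=0.15, lr=0.1):
    """Allocate sample (x, y) to the nearest rule node if it is within Rmax in the
    input space AND within the error tolerance E in the output; otherwise create
    a new rule node (cluster). Returns the index of the node used or created."""
    x = np.asarray(x, dtype=float)
    if nodes:
        d = [np.linalg.norm(x - n["w1"]) / np.sqrt(len(x)) for n in nodes]
        j = int(np.argmin(d))
        if d[j] <= rmax and abs(y - nodes[j]["w2"]) <= err_thr:
            n = nodes[j]
            n["count"] += 1
            n["w1"] += lr * (x - n["w1"])   # adapt the cluster centre to the new sample
            n["w2"] += lr * (y - n["w2"])   # adjust the local output (delta rule)
            return j
    nodes.append({"w1": x.copy(), "w2": float(y), "count": 1})
    return len(nodes) - 1

nodes = []
for xi, yi in [([0.10, 0.10], 0.20), ([0.12, 0.11], 0.22), ([0.90, 0.80], 0.70)]:
    allocate_or_create(nodes, xi, yi)
print(len(nodes), "rule nodes")   # the third sample starts a new cluster
```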

The distance between samples and rule nodes can

be measured in different ways. The most popular

measurement is the normalized Euclidean distance. In

a case of missing values for some of the input

variables, a partial normalized Euclidean distance can

be used which means that only the existing values for

the variables in a current sample S(x,y) are used for the

distance measure between this sample and an existing

rule node N (W1N,W2N):

d(S, N) = [ Σ_(i=1,…,n) (xi − W1N(i))² ] / n,    (1)

Table 1

Local prototype rules extracted from the new EFuNN model Mnew for the problem from Fig. 2a

Rule 1: IF x1 is (Low 0.8) and x2 is (Low 0.8) THEN y is (Low 0.8), radius R1 = 0.24; N1ex = 6

Rule 2: IF x1 is (Low 0.8) and x2 is (Medium 0.7) THEN y is (Small 0.7), R2 = 0.26, N2ex = 9

Rule 3: IF x1 is (Medium 0.7) and x2 is (Medium 0.6) THEN y is (Medium 0.6), R3 = 0.17, N3ex = 17

Rule 4: IF x1 is (Medium 0.9) and x2 is (Medium 0.7) THEN y is (Medium 0.9), R4 = 0.08, N4ex = 10

Rule 5: IF x1 is (Medium 0.8) and x2 is (Low 0.6) THEN y is (Medium 0.9), R5 = 0.1, N5ex = 11

Rule 6: IF x1 is (Medium 0.5) and x2 is (Medium 0.7) THEN y is (Medium 0.7), R6 = 0.07, N6ex = 5

Rule 7: IF x1 is (High 0.6) and x2 is (High 0.7) THEN y is (High 0.6), R7 = 0.2, N7ex = 12

Rule 8: IF x1 is (High 0.8) and x2 is (Medium 0.6) THEN y is (High 0.6), R8 = 0.1, N8ex = 5

Rule 9: IF x1 is (High 0.8) and x2 is (High 0.8) THEN y is (High 0.8), R9 = 0.1, N9ex = 6

Rules 7, 8, and 9 are created after the EFuNN model (initially trained on the old data and represented as 6 rule nodes and 6 rules, respectively) is adaptively trained on the new data.

Fig. 2. (a) A 3D plot of the "old" data D0 (data samples denoted as "o") generated from the formula y = 5.1x1 + 0.345x1² − 0.83x1·log10(x2) + 0.45x2 + 0.57·exp(x2^0.2) in the sub-space of the problem space defined by x1 and x2 both having values between 0 and 0.7, and the new data D (samples denoted as "*") defined by x1 and x2 having values between 0.7 and 1; (b) test results of the initial EFuNN model M0 (the dashed line) vs. the new EFuNN model Mnew (the dotted line) on the generated test data D0tst (the first 42 data samples) and on the new test data Dtst (the last 30 samples) (the solid line). The new model Mnew performs well on both the old and the new test data, while the model M0 fails to predict the new test data.

for all n input variables xi that have a defined value in

the sample S and an already established connection

W1N(i) to the cluster node N.
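A direct reading of Eq. (1) with the missing-value convention just described might look as follows; NaN marks a missing input, and the averaging is taken only over the defined variables (whether the published distance also applies a square root is not reconstructed here).

```python
import numpy as np

def partial_distance(sample, centre):
    """Partial normalised Euclidean distance (Eq. (1)): only the input variables
    with a defined value in `sample` contribute; NaN marks a missing value."""
    s = np.asarray(sample, dtype=float)
    c = np.asarray(centre, dtype=float)
    defined = ~np.isnan(s)
    if not defined.any():
        return np.inf                       # nothing to compare on
    diff = s[defined] - c[defined]
    return float(np.sum(diff ** 2) / defined.sum())

# a sample with a missing second variable vs. a rule node centre W1N
print(partial_distance([0.2, np.nan, 0.7], [0.25, 0.9, 0.65]))
```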

At any time of the learning process, rules can be

extracted from the ECOS structure. Each rule

associates a cluster from the input space to a local

output function applied to the data in this cluster, e.g.:

IF [data is in cluster Ncj, defined by a cluster center

Nj, a cluster radius Rj and a number of examples Njex in

this cluster] THEN [the output function is fc]

In the case of DENFIS [8], first order local fuzzy

rule models are derived incrementally from data, for

example:

IF [the value of x1 is in the area defined by a triangular membership function with a center at 0.05, a left point at −0.05 and a right point at 0.14, AND the value of x2 is in the area defined by a triangular membership function (0.15, 0.25, 0.35)] THEN [the output value y is calculated by the formula: y = 0.01 + 0.7x1 + 0.12x2].

In the case of EFuNNs [5], simple local fuzzy rule models are derived, for example:

IF x1 is (Low 0.8) and x2 is (Low 0.8) THEN y is

(Low 0.8), radius R1 = 0.24; N1ex = 6 (see first rule

from Table 1), where low, medium and high are fuzzy

membership functions defined for the range of each of

the variables x1, x2, and y. The number and the type of

the membership functions can either be deduced from

the data through learning algorithms, or they can be predefined based on human knowledge [11–14].
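A rule such as the first one in Table 1 can be produced from a rule node by reporting, for each connection weight, the fuzzy label with the highest membership degree. The sketch below assumes three uniformly spaced triangular membership functions on [0, 1] and illustrative label names; it is one possible read-out, not the exact rule-extraction procedure of [2].

```python
import numpy as np

LABELS = ["Low", "Medium", "High"]

def weight_to_label(w, vmin=0.0, vmax=1.0):
    """Dominant fuzzy label and its degree for one connection weight value."""
    centres = np.linspace(vmin, vmax, len(LABELS))
    width = (vmax - vmin) / (len(LABELS) - 1)
    degrees = np.clip(1.0 - np.abs(w - centres) / width, 0.0, 1.0)
    k = int(np.argmax(degrees))
    return LABELS[k], round(float(degrees[k]), 1)

def node_to_rule(w1, w2, radius, n_examples):
    """Format one rule node (input weights W1, output weight W2) as a prototype rule."""
    antecedents = [f"x{i + 1} is ({lab} {deg})"
                   for i, (lab, deg) in enumerate(weight_to_label(w) for w in w1)]
    out_lab, out_deg = weight_to_label(w2)
    return (f"IF {' and '.join(antecedents)} THEN y is ({out_lab} {out_deg}), "
            f"R = {radius}, Nex = {n_examples}")

print(node_to_rule([0.10, 0.15], 0.12, 0.24, 6))
# -> IF x1 is (Low 0.8) and x2 is (Low 0.7) THEN y is (Low 0.8), R = 0.24, Nex = 6
```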

3.2. Tracing the emergence of new rules during

adaptive incremental learning

Here we will present a methodology for tracing

changes in rules after an already trained ECOS model

M0 on one set of data D0 (we will refer to it as ‘‘old’’

data) is further adaptively trained on a new data set D

that results in a new model Mnew. The old data can be


either collected from an experiment, or can be

generated from an existing model (e.g., a formula)

in order for a new model to be further trained on new

data. The new model should both preserve the old

knowledge and adapt to the new data.

To compare the generalization ability of M0 and

Mnew, the data sets D0 and D are split randomly into

training and test sets – D0tr, D0tst, Dtr, Dtst. The training

sets are used to evolve the initial model M0 and the

new one Mnew and the test sets are used to validate the

models.

The methodology is illustrated here on an example

that includes a data set D0 generated from a non-linear

function y of two variables x1 and x2, and a new data set D (see Fig. 2a).

Fig. 3. The SOM annual macroeconomic map of the 34 countries (14 EU member countries, 11 EU candidate countries (at the time of the last data collection – year 1999), and 9 Asia-Pacific countries – Australia, China, Japan, Hong Kong, Korea, Singapore, New Zealand, Canada and the United States) for the years 1994–1999. Legend: AS, Austria; IT, Italy; AU, Australia; JP, Japan; BE, Belgium; KR, Korea, Rep.; BG, Bulgaria; LT, Lithuania; CA, Canada; LV, Latvia; CH, China; NL, Netherlands; CZ, Czech Rep.; NZ, New Zealand; DE, Germany; PL, Poland; DK, Denmark; PT, Portugal; EE, Estonia; RO, Romania; EL, Greece; SI, Slovenia; ES, Spain; SK, Slovakia; FI, Finland; SN, Singapore; FR, France; SW, Sweden; HK, Hong Kong; TR, Turkey; HU, Hungary; UK, United Kingdom; IR, Ireland; US, USA.

The new data set D is in another

sub-space of the problem space. Data D0tr extracted

from D0 is first used to evolve an EFuNN model M0

(error threshold E = 0.15, and maximum radius

Rmax = 0.25) and six rules are extracted. The model

M0 is equivalent to a set of six local models. The model

M0 is further evolved on Dtr into a new model Mnew,

consisting of nine rules allocated to nine clusters, the

first six representing data D0tr and the last three – data

Dtr (Table 1). While on the test data D0tst both models

performed equally well, Mnew generalizes better on

Dtst (Fig. 2b).

From the analysis of the rules in Table 1 it can be

seen that the new model has the three new rules


evolved from the new data, but the old rules did not

change as there was no overlap between the new data

and the old one.
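The comparison behind Table 1 can be automated by matching each rule node of the adapted model against the nearest node of the old model and flagging nodes without a close match as newly emerged; the matching tolerance below is an illustrative choice.

```python
import numpy as np

def compare_rule_nodes(old_centres, new_centres, tol=0.05):
    """Label each rule node of the adapted model as 'kept' (a near-identical node
    existed in the old model) or 'new' (it emerged from the new data chunk)."""
    report = []
    for k, c in enumerate(new_centres, start=1):
        dists = [np.linalg.norm(np.asarray(c, float) - np.asarray(o, float))
                 for o in old_centres]
        status = "kept" if dists and min(dists) <= tol else "new"
        report.append((f"rule {k}", status))
    return report

old = [[0.1, 0.1], [0.3, 0.4], [0.5, 0.5]]                # cluster centres of model M0
new = [[0.1, 0.1], [0.3, 0.4], [0.5, 0.5], [0.85, 0.9]]   # centres after training on D
print(compare_rule_nodes(old, new))                       # the last node is reported as 'new'
```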

3.3. Adapting ECOS models on new data that

contain new variables or have missing values

The method above is applicable to a large-scale

multidimensional data where new variables may be

added at a later stage. This is possible as partial

Euclidean distance between samples and cluster centers can be measured based on a different number of variables (Eq. (1)).

Fig. 4. EFuNN on-line, incremental learning and prediction of the GDP per capita. Eight input variables are used in the model: CPI, interest rate, unemployment rate, and GDP per capita, for years (t − 1) and (t), to predict the GDP(t + 1) as output. The first 66 data are for the EU countries, the next 44 are for the EU candidates (year 1999), and the last 36 are for the USA and the Asia-Pacific countries. Error measures are also shown (the second section in the bottom).

If a current sample Sj contains a

new variable xnew, having a value xnewj and the sample

falls into an existing cluster Nc based on the common

variables, this cluster center N is updated so that it

takes a coordinate value xnewj for the new variable xnew,

or the new value may be calculated as weighted

k-nearest values derived from k new samples allocated

to the same cluster. Dealing with new variables in a

new model Mnew may help distinguish samples that

have very similar input vectors but different output


values and therefore are difficult to deal with in an

existing model M. For example, samples S1 =

[x1 = 0.75, x2 = 0.824, y = 0.2] and S2 = [x1 = 0.75,

x2 = 0.823, y = 0.8] are easy to be learned in a new

ECOS model Mnew when a new variable x3 is added

that has, for example, values of 0.75 and 0.3

respectively for the samples S1 and S2.

The partial Euclidean distance (Eq. (1)) can be used

not only to deal with missing values, but also to fill in

these values in the input vectors. As every new input

vector xi is mapped into the input cluster (rule node) of

the model Mnew based on the partial Euclidean

distance of the existing variable values, the missing

value in xi, for an input variable, can be substituted

with the weighted average value for this variable

across all data samples that fall in this cluster.
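The imputation scheme described above could be sketched like this: the incomplete vector is first mapped to its nearest rule node using the partial distance, and each gap is then filled with the average of that variable over the samples already allocated to that node (the paper mentions a weighted average; a plain mean is used here for brevity, and the data structures are illustrative).

```python
import numpy as np

def fill_missing(sample, node_centres, samples_per_node):
    """Impute NaNs in `sample` from the cluster it falls into (partial distance)."""
    s = np.asarray(sample, dtype=float)
    defined = ~np.isnan(s)
    # nearest rule node on the defined variables only (partial normalised distance)
    d = [np.sum((s[defined] - np.asarray(c)[defined]) ** 2) / defined.sum()
         for c in node_centres]
    j = int(np.argmin(d))
    cluster = np.asarray(samples_per_node[j], dtype=float)
    filled = s.copy()
    for i in np.where(~defined)[0]:
        filled[i] = cluster[:, i].mean()    # average of this variable within the cluster
    return filled, j

centres = [np.array([0.2, 0.3, 0.4]), np.array([0.8, 0.7, 0.6])]
samples = [[[0.18, 0.28, 0.41], [0.22, 0.33, 0.38]], [[0.81, 0.69, 0.62]]]
print(fill_missing([0.21, np.nan, 0.40], centres, samples))
```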

Fig. 5. The desired vs. the predicted GDP per capita for the year 1999, by an EFuNN trained on the past data of the 14 EU countries

plus the USA for the years 1994–1998. The EFuNN is trained on

eight-element input vectors [CPI(t � 1), Int(t � 1), Unempl(t � 1),

GDP(t � 1), CPI(t), Int(t), Unempl(t), GDP(t)] and on 1-element

output vector [GDP(t + 1)], where t indicates the current year. The

countries are in the following alphabetical order: 1, BE; 2, DK; 3,

DE; 4, EL; 5, ES; 6, FR; 7, IR; 8, IT; 9, NL; 10, AS; 11, PT; 12, FI;

13, SW; 14, UK; 15, USA.

4. A case study from macroeconomics

4.1. Problem description

In this section it is shown how the evolving

connectionist techniques can be applied for the

purpose of tracing changes of variables and their

relationship over time on a simple and intuitive

macroeconomic data set used as a case study (see

Appendix B).

Large amounts of macroeconomic data about the annual or quarterly development of countries can be

collected from many diverse sources such as EURO-

STAT, Datastream, IMF, World Bank, OECD,

statistics departments, central banks of regions and

countries. The problem is how to analyse all this

information, to extract the knowledge from it and

make adequate predictions for the future. For our

simple case study we use four annual macroeconomic

indicators that are: GDP per capita in US dollars;

inflation rate; interest rate; and unemployment rate

[3]. Data of 34 countries that form three regional and

economic groups are used and analysed, namely: EU

countries; candidate EU countries (at the time of the

data collection – last year is 1999); and Asia-Pacific

countries (see Appendix B). EFuNN-based prediction

models for the USA and the EU countries are created

and rules are extracted at different times. This makes it

possible to analyse how the macroeconomic clusters

are evolving and changing.

4.2. A static clustering of macro-economic data

with the use of SOM

The 34 countries' annual macroeconomic development, represented by the four-element vectors of CPI, interest rate, unemployment and GDP per capita, was mapped onto a SOM (Fig. 3). Economically close countries are mapped in the same topological region (close nodes, same colour).
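A static map like the one in Fig. 3 can be reproduced with any SOM implementation; the snippet below uses the third-party minisom package and random placeholder data, since the actual country vectors and the toolkit used by the author are not part of this sketch.

```python
import numpy as np
from minisom import MiniSom  # pip install minisom (assumed third-party tool)

# placeholder data: 34 countries x 6 years, each described by
# [CPI, interest rate, unemployment, GDP per capita], already scaled to [0, 1]
data = np.random.default_rng(1).random((204, 4))
labels = [f"country_year_{i}" for i in range(len(data))]   # placeholder labels

som = MiniSom(x=8, y=8, input_len=4, sigma=1.0, learning_rate=0.5, random_seed=1)
som.train_random(data, num_iteration=2000)

# country-years that share a winning node are "economically close" on the map
for vec, lab in zip(data[:5], labels[:5]):
    print(lab, "->", som.winner(vec))
```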

Here a brief analysis of the map from Fig. 3 is

given. Most of the developed countries form two big

clusters in the left part of the map. The first one

includes Germany99, France99, Italy99, Canada99,

Sweden99, Australia99, NewZealand98–99, and

some other countries. The second one includes

USA94–99, Japan94–99, UK99, Ireland99, Hong Kong94–99, Austria94–99, the Netherlands99, and

other countries. The development of single countries

and the way they ‘‘move’’ from one cluster to another

can be traced on the map. A third large cluster shows

macroeconomic development in years 1998 and 1999

of all countries that were EU candidates in the year

1998: Bulgaria, Czech Republic, Cyprus, Estonia,


Fig. 6. (a) Tracing the evolving macroeconomic clusters in Europe/US. The evolved clusters in the EFuNN predicting model for the GDP(t + 1)

when data from 1994 till 1998 are used. The upper figure shows a plot of the rule nodes - their cluster centers and receptive fields in the input

space ‘‘x = unemployment (t � 1), y = GDP(t � 1)’’. The lower figure shows the same nodes in the input space ‘‘x = CPI(t � 1) and y = Interest

rate(t � 1)’’. The data examples are represented as ‘‘o’’. The rule nodes are numbered in a larger font with the consecutive numbers of their

evolvement. Data examples are numbered from 1 to 45 meaning the consecutive input vectors used for the evolvement of the EFuNN in the

shown order. BE45 for example means the four parameter values for Belgium for the year 1994 and 1995 as (t = 1) and (t) input values to the

EFuNN model. (b) Tracing the evolving macroeconomic clusters in Europe/US in the 1999 model: the evolved clusters in the GDP(t + 1)

prediction EFuNN model updated on the 1999 data in the input space ‘‘x = unemployment rate (t � 1), y = GDP per capita(t � 1)’’ (upper

figure), and in the input space ‘‘x = CPI(t � 1) and y = interest rate (t � 1)’’ (lower figure). The data examples and the cluster centres (rule nodes)

are represented in the same way as in Fig. 6a.


Hungary, Latvia, Lithuania, Malta, Poland, Romania,

Slovakia, and Slovenia. This cluster also contains

small European economies (e.g. Greece) that in

the previous 2 years have been moving towards the

more advanced European economies but still belong

to this cluster. The cluster also contains IR94, ES99

and FI94. A fourth cluster on the map (the bottom

right corner) includes Turkey94–99, some previous

years of development of Romania, Bulgaria and

Lithuania. The fifth cluster (the central bottom part)

shows Korea and China as well as the Czech

Republic in the years 1995, 1997 and Portugal98–99.

However, the map from Fig. 3 does not show how

these clusters developed over time in a dynamic way.

This can be traced with the use of evolving clustering

as part of the supervised learning in an EFuNN as

shown in Fig. 6a,b where the macroeconomic

clusters with their radiuses and membership coun-

tries are shown for the years 1998 and 1999,

respectively.


Fig. 7. Using the model from Fig. 6b for predicting the European/

US economies for the future. The macro-economic parameter GDP

per capita for the year 2000 is predicted as shown on the graph along

with the 1998 and 1999 values. The countries are in the following

alphabetical order: 1, BE; 2, DK; 3, DE; 4, EL; 5, ES; 6, FR; 7, IR; 8,

IT; 9, NL; 10, AS; 11, PT; 12, FI; 13, SW; 14, UK; 15, USA.

4.3. Dynamic modelling and prediction of

macroeconomic development

Experiments on prediction of the four macro-

economic indices have been carried out with the use of

EFuNN supervised learning models. The EFuNN

models learn in an incremental way, so every time a

new input vector is presented to the system, it outputs

the prediction value. Once the actual value becomes

known the system works out its prediction error and

uses it to adjust its connection weights. This is shown

on the experimental plots of the EFuNN simulation for

the prediction of the GDP per capita for all 34

countries (Fig. 4).
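The incremental predict-then-learn protocol described in this paragraph can be expressed as a short loop. The `predict_one`/`learn_one` interface and the toy model below are placeholders for illustration only; any incrementally trainable model (such as an EFuNN) would slot into the same loop.

```python
def online_evaluation(model, stream):
    """Prequential loop: predict each sample first, record the error,
    then let the model adapt on the revealed target value."""
    errors = []
    for x, y_true in stream:
        y_hat = model.predict_one(x)        # prediction before the target is known
        errors.append(abs(y_true - y_hat))
        model.learn_one(x, y_true)          # one-step adaptation on the new example
    return sum(errors) / len(errors)        # mean absolute error over the stream

class RunningMean:
    """Toy stand-in for an EFuNN: predicts the running mean of the targets seen so far."""
    def __init__(self):
        self.n, self.mean = 0, 0.0
    def predict_one(self, x):
        return self.mean
    def learn_one(self, x, y):
        self.n += 1
        self.mean += (y - self.mean) / self.n

stream = [([i], 0.5 + 0.01 * i) for i in range(50)]
print(round(online_evaluation(RunningMean(), stream), 3))
```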

A second prediction model was created that included all the EU macro-economies and the USA economy from clusters 1 and 2 in Fig. 3.

The model can be trained incrementally on any new

data. Fig. 5 shows the testing results of the model – the

model is trained on the years 1994–1998 and tested for

prediction on the year 1999 data for the GDP per capita

(in US dollars). The mean square error of the evolved

EFuNN on the already used examples is very small –

less than 1% of the average GDP value. The test error

for the year 1999 is 1896US$ in absolute value, which is

about 8% of the average GDP per capita for the 15

countries for 1999 (23,600US$). The EFuNN system

was evolved with an error threshold of 0.1. Seven

clusters of countries are evolved.

The same model is further evolved on year 1999

data (taken here in the meaning of new data). The

change in the clusters between 1998 and 1999 is

shown in Fig. 6a,b. The root mean square error is less

than 1% of the average GDP value. Six clusters of

countries are obtained now. Clusters 1 and 7 from

Fig. 6a are aggregated automatically into the first

cluster with changed parameters – geometrical centre,

receptive field, number of examples accommodated.

This is also shown in the rules extracted from the

EFuNN and explained in the next section.

The graphs on Fig. 6a,b show the data and the rule

nodes in a chosen input sub-space. The circles around

the nodes represent their receptive fields. The

receptive field defines the area from the input space

that is ‘‘covered’’ by this rule node (the corresponding

rule). The number of clusters in the 1999 model (6) is

smaller than the number of clusters of the 1998 model

(7). This illustrates the tendency of the European macro-economies to converge into a smaller number of clusters.

The EFuNN model from Fig. 6 can be used to

predict macroeconomic development. The model evolved on the years from 1994 until the last one, 1999, is used to predict values for the year 2000 – see Fig. 7.

Similar to the EFuNN model of the GDP per capita,

models are produced for the rest of the macroeco-

nomic parameters.

4.4. Extracting rules for the prediction of

macroeconomic development

The EFuNN models developed in the previous sub-

section can be used to extract rules at any time of the

system operation. One rule represents the information

and knowledge accumulated in one rule node that

includes: the position of the cluster centre in the input

space, the size of the cluster, the number of examples

accommodated, the associated output cluster of values

from the output space – the centre and its radius; the

radius being the same for all output clusters and equal to

the error threshold.

Table 2 shows the seven rules extracted from the

EU and USA model up to 1998 (see Figs. 5 and 6) for

the prediction of the GDP.

Table 2
Rules extracted from the European model up to the year 1998 (incl.) for the prediction of the GDP(t + 1) on the four parameters used in the model: CPI, interest rate, unemployment, and GDP per capita, for the year (t) (input variables [5] to [8]) and the year (t − 1) (input variables [1] to [4])

Rule  [1] CPI(t−1)  [2] Inter(t−1)  [3] Unem(t−1)  [4] GDP(t−1)  [5] CPI(t)  [6] Inter(t)  [7] Unem(t)  [8] GDP(t)  Cluster radius  Output GDP(t+1)  Numb. examp.
1     (1 0.7)       (2 0.8)         (2 0.7)        (2 0.8)       (1 0.7)     (2 0.8)       (2 0.7)      (2 0.9)     0.15            (2 0.9)          18
2     (1 0.6)       (2 0.8)         (1 0.5)        (2 0.5)       (1 0.6)     (2 0.8)       (1 0.6)      (3 0.5)     0.10            (3 0.5)          4
3     (2 0.6)       (3 0.6)         (2 0.6)        (1 0.8)       (2 0.6)     (2 0.6)       (2 0.6)      (1 0.8)     0.20            (1 0.8)          6
4     (2 0.7)       (2 0.5)         (3 0.8)        (1 0.6)       (2 0.5)     (2 0.6)       (3 0.7)      (1 0.6)     0.11            (1 0.6)          3
5     (2 0.5)       (3 0.6)         (2 0.9)        (2 0.7)       (1 0.6)     (2 0.8)       (2 0.8)      (2 0.8)     0.09            (2 0.8)          8
6     (1 0.5)       (2 0.7)         (1 0.6)        (2 0.8)       (1 0.5)     (2 0.7)       (1 0.6)      (2 0.7)     0.07            (2 0.6)          4
7     (1 0.8)       (2 0.8)         (2 0.6)        (2 0.8)       (1 0.8)     (2 0.7)       (2 0.6)      (2 0.9)     0.12            (2 0.9)          2

In the rules, 1, 2 and 3 denote respectively the membership functions (MF) "Small", "Medium" and "Large", and the number next to the MF number is the membership degree (here the range of the GDP per capita is: GDPmin = 9000 US$; GDPmax = 40,000 US$; the membership functions Small, Medium and Large are triangular, uniformly distributed on the range, i.e. the centre of Small is 9000, the centre of Large is 40,000 and the centre of Medium is 15,500). The respective max/min values for the CPI, IntRate and Unemployment are 12/0, 12/2, and 25/1. The rules represent the clusters from Fig. 6a.

The following parameter

values are used in the EFuNN model: MF = 3;

Er = 0.1; maximum radius of a cluster is 0.2; one-

out-of-n mode; normalisation is used; normalised

fuzzy distance is measured; threshold for the rule

extraction is 0.5. The rules would change with new

data being fed (data for the year 1999 and further).
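To read an entry of Table 2 such as "(2 0.9)" numerically, the membership function number and degree can be mapped back to a range of crisp values, using the uniformly spaced triangular MFs described in the table footnote. This inversion is one possible reading, not a defuzzification step from the paper.

```python
import numpy as np

def decode(mf_index, degree, vmin, vmax, n_mf=3):
    """Crisp values compatible with a Table 2 entry like '(2 0.9)':
    triangular MFs uniformly spaced on [vmin, vmax]; degree d is reached at
    centre +/- (1 - d) * width, clipped to the variable range."""
    centres = np.linspace(vmin, vmax, n_mf)
    width = (vmax - vmin) / (n_mf - 1)
    c = centres[mf_index - 1]                  # MFs are numbered from 1 (Small) to 3 (Large)
    offset = (1.0 - degree) * width
    return tuple(np.clip([c - offset, c + offset], vmin, vmax))

# '(2 0.9)' for GDP per capita on the 9,000-40,000 US$ range used in Table 2
print(decode(2, 0.9, 9000, 40000))    # roughly 23,000-26,000 US$
```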

Table 3 shows the six rules extracted from the EU/

USA model from the 1999 model (see Fig. 6). When

compared, the two sets of rules show similarities and

differences in the macroeconomic development of the

European countries from year to year. The similarities

represent the stable component and the differences

represent the change in the rules. The number of

clusters (rules) of the medium GDP per capita

countries has decreased from 4 in the 1998 model

to 3 in the 1999 model. The number of rules for large GDP and for small GDP has not changed, but the number of countries accommodated in these rules has changed. The rules may further change with new data being fed (data for the year 2000 and further).

Table 3
Rules for the prediction of the GDP per capita (t + 1) extracted from the Europe/USA model up to the year 1999 (incl.) (Fig. 6b)

Rule  [1] CPI(t−1)  [2] Inter(t−1)  [3] Unem(t−1)  [4] GDP(t−1)  [5] CPI(t)  [6] Inter(t)  [7] Unem(t)  [8] GDP(t)  Cluster radius  Output GDP(t+1)  Numb. examp.
1     (1 0.7)       (2 0.8)         (2 0.7)        (2 0.9)       (1 0.8)     (2 0.6)       (2 0.6)      (2 0.9)     0.09            (2 0.9)          24
2     (1 0.6)       (2 0.8)         (1 0.6)        (2 0.6)       (1 0.6)     (2 0.7)       (1 0.6)      (2 0.5)     0.10            (3 0.5)          10
3     (2 0.6)       (2 0.6)         (2 0.6)        (1 0.8)       (2 0.6)     (2 0.6)       (2 0.6)      (1 0.8)     0.19            (1 0.8)          8
4     (2 0.5)       (2 0.7)         (3 0.7)        (1 0.6)       (1 0.6)     (2 0.6)       (3 0.6)      (1 0.6)     0.12            (1 0.6)          4
5     (1 0.6)       (2 0.7)         (2 0.9)        (2 0.7)       (1 0.6)     (2 0.7)       (2 0.9)      (2 0.8)     0.09            (2 0.8)          9
6     (1 0.7)       (2 0.8)         (2 0.6)        (2 0.9)       (1 0.7)     (2 0.6)       (2 0.5)      (2 0.9)     0.15            (2 1)            5

If compared with the rules from Table 2, we can notice the stability and the plasticity of some of the rules, as rules may change from year to year and this change can be traced in an evolving model. The rules represent the clusters from Fig. 6b.

In this particular experiment the number of the

rules is reduced from 14 to 13 (the same as the number

of clusters shown in Fig. 6a,b) after the system was

trained on the 1999 data, which indicates that the

countries are getting closer in terms of the four

parameters used for the experiments here.

In Tables 2 and 3, the rules extracted from the GDP

model are shown. Similar rules for the CPI, the

Interest rates, and the Unemployment rate, are

extracted from the corresponding EFuNN models.

5. A case study from medical decision support

and bioinformatics

In many medical decision support systems, new

data become available continuously and the already

created models need to be adapted to the new data.


How rules (profiles) change during this process of

adaptation can be traced in the ECOS as this is

illustrated in the following case study example.

Fig. 8. The rules (profiles) of class 1 (Survive) (a) and class 2 (Fatal) (b), based on a clinical variable (IPI) and 11 genes as found in Shipp et al. [14], after an ECF model is trained on 50% of the data (28 samples), and the corresponding rules after the ECF model was adapted on the other 50% of the data (c and d). One new rule was added for class one and two new rules for class two, while the other rules were not changed. Red colour represents a high value; green – a low value.

The second case study uses the DLBCL lymphoma

data set, and the problem is predicting the survival outcome over a 5-year period. This data set contains


58 vectors, 30 cured DLBCL lymphoma disease cases,

and 28 refractory [14,15]. There are 6817 gene

expression variables. Clinical data is available for this data set in the form of the IPI (International Prognostic Index), an integrated number representing the overall effect of several clinical variables [14,15]. The

task is, based on the existing data, to: (1) create a

prognostic system that predicts the survival outcome

of a new patient; (2) extract profiles that can be used to provide an explanation for the prognosis; and (3)

trace the change of the profiles when new data is added

to the model.

Fig. 8 shows the rules (profiles) of class 1 (Survive)

– (a), and class 2 (Fatal) – (b), based on a clinical

variable (IPI) and 11 genes as found in Shipp et al. [14], after

an ECF model is trained on 50% of the data (28

samples). The corresponding rules, after the ECF

model was adapted on the other 50% of the data, are

shown in Fig. 8c and d. One new rule was added for

class one, and two new rules – for class two, while the

other rules were not changed. Red colour represents a

high value; green colour – a low value of a variable.
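The protocol of this case study – train on half of the data, extract the rules (profiles), adapt on the other half and compare – can be outlined generically. The `fit_incremental`/`extract_rules` interface and the toy model below are placeholder names for illustration; they are not the ECF implementation used in the paper.

```python
def train_adapt_compare(model, first_half, second_half):
    """Train on the first 50% of the data, adapt on the second 50%,
    and report which extracted rules (profiles) are new."""
    model.fit_incremental(first_half)
    before = set(model.extract_rules())
    model.fit_incremental(second_half)        # incremental adaptation, no retraining
    after = set(model.extract_rules())
    return {"unchanged": before & after, "new": after - before}

class ToyRuleModel:
    """Tiny stand-in that 'learns' one rule (profile) per class label it has seen."""
    def __init__(self):
        self.rules = set()
    def fit_incremental(self, data):
        self.rules |= {f"profile_class_{label}" for _, label in data}
    def extract_rules(self):
        return set(self.rules)

report = train_adapt_compare(ToyRuleModel(),
                             [([0.2, 0.4], 1), ([0.3, 0.5], 2)],   # first 50%
                             [([0.8, 0.1], 1), ([0.9, 0.2], 3)])   # second 50%
print(report)   # the class-3 profile appears only after adaptation
```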

6. Conclusions and directions for further research

The evolving connectionist systems are useful

techniques for modelling, visualisation, prediction and

rule elucidation from complex dynamic processes.

The rules of development can be extracted and traced over time, which may help understand the complexity

and the dynamics of the processes. This is illustrated

in the paper on a mathematical example and on two

simple case studies – one from macroeconomics, and

another – from Bioinformatics.


In the EFuNN models, as well as in other modelling techniques, the input variables for the model (the features) have to be selected in advance, as this may be crucial for the prediction results. For example, the features used in the case study were appropriate for the prediction of the GDP of the European countries,

but did not suit the prediction of the GDP for the USA.

The selected set of features was appropriate for

achieving a good prediction for the CPI, the

unemployment rate, and the interest rates for the

USA economy. A set of difference (or growth) features

would be more appropriate for the prediction of the

GDP for the USA economy, and for the prediction of

the other macro-economic indices for the European

economies. In the second case study example, it is not

clear if all 11 genes are important for the outcome

prognosis, as can also be seen from Fig. 8.

One direction for further research is to analyse the

internal relationship between variables within the

rules and their dynamics over time.

Acknowledgements

This project is partially supported by the New

Economy Research Fund of New Zealand, adminis-

tered by the Foundation for Research, Science and

Technology, project NERF-AUTX02001. The ECOS

simulators used in this paper are part of data mining

and decision support environment NeuCom

(www.theneucom.com) and the SIFTWARE tool

(www.peblnz.com). Using these simulators for com-

mercial purposes is subject to obtaining permission

from KEDRI (www.kedri.info) and Pacific Edge

Biotechnology Ltd (www.peblnz.com).

Appendix A. The EFuNN learning algorithm (from [2,7])

1. Set initial values for the system parameters: number of membership functions; initial sensitivity thresholds (default Sj = 0.9); error threshold E; aggregation parameter Nagg – the number of consecutive examples after which aggregation is performed; pruning parameters OLD and Pr; a value for m (in m-of-n mode); maximum radius limit Rmax; thresholds T1 and T2 for rule extraction.
2. Set the first rule node r0 to memorise the first example (x,y): W1(r0) = xf, and W2(r0) = yf.
3. Loop over presentations of new input-output pairs (x,y)
{
3.1. Evaluate the local normalised fuzzy distance D between xf and the existing rule node connections W1 (formula (1)).

3.2. Calculate the activation A1 of the rule node layer. Find the closest rule node rk (or the closest m rule nodes in case of m-of-n mode) to the fuzzy input vector xf for which A1(rk) >= Sk (the sensitivity threshold for the node rk);
   if there is no such node, create a new rule node for (xf, yf)
   else
      find the activation of the fuzzy output layer A2 = W2 · A1(1 − D(W1, xf)) and the normalised output error Err = ||y − y′||/Nout;
      if Err > E
         create a new rule node to accommodate the current example (xf, yf)
      else
         update W1(rk) and W2(rk) according to (2) and (3) (in the case of an m-of-n system, update all the m rule nodes with the highest A1 activation).
3.3. Apply the aggregation procedure of rule nodes after each group of Nagg examples is presented.
3.4. Update the values of the rule node rk parameters Sk, Rk, Age(rk), TA(rk).
4. Prune rule nodes if necessary, as defined by the pruning parameters.
5. Extract rules from the rule nodes.
}
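For completeness, a compact and runnable reading of the loop above (steps 2–3.2 only, with no aggregation, pruning or m-of-n mode) might look as follows. The scalar output, the simple distance-based activation and the learning rate are simplifying assumptions, so this is a sketch of the idea rather than the published EFuNN algorithm.

```python
import numpy as np

class SimpleEFuNN:
    """Minimal sketch of the Appendix A loop: create a rule node when no node is
    sufficiently activated or the output error exceeds E; otherwise update the
    winning node's W1 and W2. Aggregation and pruning are omitted."""

    def __init__(self, sensitivity=0.9, error_thr=0.15, lr=0.1):
        self.s, self.e, self.lr = sensitivity, error_thr, lr
        self.w1, self.w2 = [], []            # rule-node input centres / output values

    def _activation(self, x):
        # A1 = 1 - normalised distance between the input vector and W1
        return [1.0 - np.linalg.norm(x - w) / np.sqrt(len(x)) for w in self.w1]

    def predict(self, x):
        x = np.asarray(x, dtype=float)
        if not self.w1:
            return 0.0
        return self.w2[int(np.argmax(self._activation(x)))]   # one-out-of-n output

    def learn_one(self, x, y):
        x = np.asarray(x, dtype=float)
        if self.w1:
            a1 = self._activation(x)
            k = int(np.argmax(a1))
            if a1[k] >= self.s and abs(y - self.w2[k]) <= self.e:
                self.w1[k] += self.lr * (x - self.w1[k])   # move the centre (W1 update)
                self.w2[k] += self.lr * (y - self.w2[k])   # delta-rule update of W2
                return
        self.w1.append(x.copy())             # "create a new rule node" branch
        self.w2.append(float(y))

model = SimpleEFuNN()
for t in np.linspace(0, 1, 50):
    model.learn_one([t, 1 - t], 0.3 + 0.4 * t)
print(len(model.w1), "rule nodes evolved; prediction at (0.5, 0.5):",
      round(model.predict([0.5, 0.5]), 2))
```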

Appendix B. The macroeconomic data used in the case study

Each row gives the four annual indicators – CPI, interest rate, unemployment rate and GDP per capita (US$) – for one EU member country, one EU candidate country and one Asia-Pacific/USA country; the country code is followed by the last two digits of the year.

EU member country: CPI, Int. rate, Unempl., GDP cap. | EU candidate country: CPI, Int. rate, Unempl., GDP cap. | Asia-Pacific and USA: CPI, Int. rate, Unempl., GDP cap.

BE94 2.4 6.6 10.0 23501.88 BG94 96.0 102.5 12.8 1070.584 AU94 1.9 5.4 5.6 18864.80

DK94 2.1 5.6 8.2 29203.53 CZ94 10.0 15.0 3.2 3977.484 CA94 0.2 5.8 10.4 19339.92

DE94 2.7 5.6 8.4 25703.15 EE94 47.6 20.0 7.6 1551.433 JP94 0.7 4.5 2.9 37523.78

EL94 10.7 7.7 8.9 9493.891 HU94 18.8 27.3 11.4 4088.381 US94 2.6 7.1 6.1 27064.55

ES94 4.7 8.3 24.1 13069.69 LV94 35.8 35.3 20.0 1387.861 AU95 4.6 7.5 8.5 20011.71

FR94 1.8 6.2 12.3 23603.64 LT94 72.1 100.0 17.3 1128.341 CA95 2.1 7.3 9.4 20022.78

IR94 2.3 7.7 14.3 15249.09 PL94 32.2 42.2 16.0 2552.231 JP95 �0.1 3.4 3.2 41016.32

IT94 4.1 7.7 11.4 18223.49 RO94 137.0 93.1 10.9 1321.433 US95 2.8 8.8 5.6 28159.58

NL94 2.8 5.6 7.1 22839.21 SK94 13.3 17.6 14.8 2575.209 AU96 2.6 7.1 8.6 22125.82

AS94 2.9 5.6 3.8 24893.14 SI94 21.0 37.7 14.2 7228.965 CA96 1.6 4.5 9.6 20393.80

PT94 5.4 7.7 7.0 9406.548 TR94 106.3 104.0 8.1 2136.126 JP96 0.1 3.1 3.4 36635.78

FI94 1.1 5.6 16.6 19814.19 BG95 62.1 79.8 11.1 1450.371 US96 2.9 8.3 5.4 29447.22

SW94 2.4 4.0 9.4 23522.05 CZ95 9.2 14.3 2.9 5042.864 AU97 0.3 5.4 8.6 21893.79

UK94 2.4 6.6 9.6 17748.93 EE95 29.0 15.9 9.7 2323.289 CA97 1.6 3.5 9.1 20823.87

BE95 1.4 7.1 9.9 27688.33 HU95 28.4 32.5 11.3 4371.519 JP97 1.7 2.6 3.4 33470.18

DK95 2.0 8.1 7.2 34468.86 LV95 25.0 28.3 18.9 1760.944 US97 2.3 8.4 4.9 30978.79

DE95 1.7 6.6 8.2 30118.61 LT95 39.7 91.8 17.5 1603.128 AU98 0.8 5.0 8.0 19296.98

EL95 8.9 12.0 9.2 11268.59 PL95 27.9 36.7 15.2 3268.609 CA98 1.0 5.1 8.3 19913.63

ES95 4.7 11.0 22.9 15116.65 RO95 32.3 45.1 9.5 1562.484 JP98 0.7 2.4 4.1 30177.30

FR95 1.7 7.3 11.7 27027.11 SK95 9.9 18.3 13.1 3249.357 US98 1.6 8.4 4.5 32371.24

IR95 2.6 11.7 12.3 18313.38 SI95 13.5 20.7 14.5 9418.932 AU99 1.5 4.8 7.2 20695.62

IT95 5.3 11.7 11.9 19465.62 TR95 93.2 91.5 6.9 2793.032 CA99 1.8 4.9 7.6 20874.28

NL95 1.9 6.6 6.9 26818.29 BG96 121.6 300.3 12.5 1094.422 JP99 �0.3 2.3 4.7 34402.24

AS95 2.2 6.7 3.9 29274.04 CZ96 8.8 13.9 3.5 5618.062 US99 2.1 8.0 4.2 33933.58

PT95 4.2 11.7 7.3 11150.55 EE96 23.0 13.8 10.0 2835.259 CH94 24.1 15 2.8 453.8093

FI95 0.8 6.6 15.4 25519.55 HU96 23.5 27.8 10.7 4437.277 HK94 8.8 7.3 1.9 21844.09

SW95 2.9 9.9 8.8 27153.07 LV96 17.6 19.1 18.3 1981.451 KR94 6.2 12.5 2.4 9035.619

UK95 3.4 8.2 8.7 19207.55 LT96 24.6 62.3 16.4 2099.703 NZ94 2.8 8.4 8.1 14562.10

BE96 2.1 6.6 9.7 26878.00 PL96 19.9 25.0 13.1 3696.670 SN94 3.1 6.5 2.6 23783.86

DK96 2.1 7.2 6.8 34816.05 RO96 38.8 43.5 6.6 1560.051 CH95 17.1 12 2.9 579.6171

DE96 1.4 6.2 8.9 29112.06 SK96 5.8 16.2 12.8 3505.221 HK95 4.5 8 3.2 22765.22

EL96 8.2 10.9 9.6 11897.31 SI96 9.9 21.5 14.5 9486.336 KR95 4.5 12.5 2 10872.87


ES96 3.6 9.0 22.2 15708.41 TR96 79.4 92.8 6.1 2801.376 NZ95 2.9 10.1 6.3 16818.37

FR96 2.0 6.4 12.4 26941.92 BG97 1061.5 209.8 14.0 1136.308 SN95 1.7 6 2.7 27523.79

IR96 1.7 9.9 11.6 19973.93 CZ97 8.5 13.9 5.2 5165.963 CH96 8.3 10.1 3 671.2046

IT96 4.0 9.9 12.0 21842.17 EE97 11.2 18.4 9.7 3036.381 HK96 6.3 8 2.8 24716.34

NL96 2.0 6.2 6.3 26506.04 HU97 18.3 22.4 10.9 4510.119 KR96 4.9 11.1 2 11446.39

AS96 1.5 6.2 4.3 28758.02 LV97 8.4 15.1 14.4 2228.753 NZ96 2.6 10.3 6.1 18166.88

PT96 3.1 8.1 7.3 11580.36 LT97 8.9 27.1 14.1 2550.053 SN96 1.4 5.5 3 28963.74

FI96 0.6 6.2 14.6 25125.13 PL97 14.8 25.0 10.5 3698.310 CH97 2.8 8.6 3 730.2216

SW96 0.8 8.2 9.6 29575.41 RO97 160.9 56.0 8.9 1556.907 HK97 5.8 8 2.2 26623.61

UK96 2.4 7.8 8.2 20060.55 SK97 6.0 15.9 12.5 3623.796 KR97 4.5 15.3 2.6 10381.88

BE97 1.6 5.8 9.4 24336.20 SI97 8.4 19.1 14.9 9548.725 NZ97 0.8 9.4 6.6 17775.99

DK97 2.3 6.3 5.6 31961.49 TR97 85.3 93.4 6.4 2975.614 SN97 2 5.5 2.4 28970.36

DE97 1.9 5.6 9.9 25780.23 BG98 18.7 14.1 12.2 1377.469 CH98 �0.7 7.1 3.1 772.4022

EL97 5.5 9.9 9.8 11514.43 CZ98 10.7 13.5 7.5 5488.726 HK98 2.8 9.9 4.7 24893.97

ES97 1.9 6.4 20.8 14393.66 EE98 10.5 16.5 9.9 3391.875 KR98 7.5 11.1 6.8 6840.121

FR97 1.2 5.6 12.3 24325.34 HU98 14.1 19.7 9.9 4659.209 NZ98 0.4 8.9 7.5 14427.66

IR97 1.5 7.1 9.8 21535.54 LV98 4.7 13.1 13.8 2513.289 SN98 �0.3 5.9 3.2 24496.42

IT97 2.0 7.1 12.1 20586.60 LT98 5.1 21.6 13.3 2863.991 CH99 0 5 3 791.3046

NL97 2.2 5.6 5.2 24130.14 PL98 11.6 24.5 10.9 4059.731 HK99 �4 8.5 6 23639.57

AS97 1.3 5.6 4.4 25615.78 RO98 59.1 38.8 10.4 1839.840 KR99 0.8 8.5 6.3 8711.929

PT97 2.3 7.1 6.8 11041.68 SK98 6.7 16.5 15.6 3786.201 NZ99 0.5 7.1 6.8 14596.51

FI97 1.2 5.6 12.7 24022.38 SI98 7.9 16.0 14.5 10024.23 SN99 0 5.8 3.3 24807.76

SW97 0.9 6.7 9.9 26786.31 TR98 83.7 93.9 6.4 3087.431

UK97 3.2 7.2 7.0 22641.23 BG99 2.6 13.6 13.7 1422.346

BE98 1.0 4.8 9.5 24981.72 CZ99 2.1 9.0 9.4 5180.776

DK98 1.8 5.0 5.1 32903.07 EE99 3.3 8.6 12.0 3503.575

DE98 1.0 4.6 9.4 26232.61 HU99 10.0 16.7 9.6 5070.581

EL98 4.7 8.5 10.7 11535.17 LV99 2.3 13.6 9.1 2622.108

ES98 1.8 4.9 18.7 14995.52 LT99 0.8 14.4 10.0 2817.907

FR98 0.8 4.7 11.7 24958.29 PL99 7.3 17.5 13.3 3977.692

IR98 2.4 5.0 7.8 23025.41 RO99 43.2 35.0 11.5 1507.497

IT98 2.0 5.0 12.2 21050.44 SK99 10.5 14.9 19.2 3555.579

NL98 2.0 4.6 4.0 24925.68 SI99 6.2 12.0 13.1 10802.41

AS98 1.0 4.8 4.7 26109.71 TR99 63.6 79.3 7.3 2889.758

PT98 2.7 5.0 5.1 11669.40

FI98 1.5 4.6 11.4 25167.72

SW98 0.4 5.2 8.3 26818.57

UK98 3.4 5.7 6.3 24097.07

BE99 1.1 4.7 9.0 24760.10

DK99 2.4 5.0 5.2 32727.21

DE99 0.6 4.5 8.7 25782.08

EL99 2.7 6.4 10.4 11873.06

ES99 2.3 4.4 15.9 15368.53

FR99 0.6 4.9 11.3 24593.61

IR99 1.6 4.8 5.7 24529.16

IT99 1.7 4.0 11.4 20734.37

NL99 2.2 4.6 3.3 24987.81

AS99 0.6 4.3 3.7 25793.41

PT99 2.3 4.8 4.5 11823.92

FI99 1.2 4.7 10.2 25194.63

SW99 0.3 5.0 7.2 26869.68

UK99 1.6 5.1 6.2 24632.55


References

[1] M. Arbib (Ed.), The Handbook of Brain Theory and Neural Networks, 2nd ed., MIT Press, Cambridge, MA, 2003.
[2] N. Kasabov, Evolving Connectionist Systems: Methods and Applications in Bioinformatics, Brain Study and Intelligent Machines, Springer-Verlag, London, 2002.
[3] N. Kasabov, et al., Hybrid intelligent decision support systems and applications for risk analysis and discovery of evolving economic clusters in Europe, in: N. Kasabov (Ed.), Future Directions for Intelligent Systems and Information Sciences, Springer-Verlag/Physica-Verlag, Heidelberg, 2000.
[4] L. Rizzi, et al., Simulation of ECB decisions and forecast of short term Euro rate with an adaptive fuzzy expert system, Eur. J. Operat. Res. 145 (2003) 363–381.
[5] N. Kasabov, Evolving fuzzy neural networks for on-line supervised/unsupervised, knowledge-based learning, IEEE Trans. SMC B: Cybernetics 31 (6) (2001) 902–918.
[6] N. Kasabov, Evolving connectionist systems for adaptive learning and knowledge discovery: methods, tools, applications, in: Proceedings of the First International IEEE Symposium on Intelligent Systems, 2002.
[7] N. Kasabov, Adaptive Learning Method and System, 2001, PCT WO 01/78003.
[8] N. Kasabov, Q. Song, DENFIS: dynamic evolving neural-fuzzy inference system and its application for time-series prediction, IEEE Trans. Fuzzy Syst. 10 (2) (2002) 144–154.
[9] D. Deng, N. Kasabov, Evolving self-organizing maps for on-line learning: data analysis and modelling, in: IJCNN 2000 on Neural Networks Neural Computing: New Challenges and Perspectives for the New Millennium, IEEE Press, 2000.
[10] J.C. Bezdek, Analysis of Fuzzy Information, CRC Press, Boca Raton, FL, 1987.
[11] J.C. Bezdek, Pattern Recognition with Fuzzy Objective Function Algorithms, Plenum Press, New York, 1981.
[12] T. Takagi, M. Sugeno, Fuzzy identification of systems and its applications to modeling and control, IEEE Trans. Syst. Man Cybernetics 15 (1985) 116–132.
[13] N. Kasabov, Foundations of Neural Networks, Fuzzy Systems and Knowledge Engineering, MIT Press, 1996.
[14] M.A. Shipp, et al., Diffuse large B-cell lymphoma outcome prediction by gene-expression profiling and supervised machine learning, Nature Med. 8 (1) (2002) 68–74.
[15] M.A. Shipp, et al., Supplementary information for diffuse large B-cell lymphoma outcome prediction by gene-expression profiling and supervised machine learning, Nature Med. 8 (1) (2002) 68–74.