
Expert Systems with Applications 38 (2011) 15194–15201


An efficient CMAC neural network for stock index forecasting

Chi-Jie Lu a, Jui-Yu Wu b,*

a Department of Industrial Engineering and Management, Ching Yun University, Taiwan, ROC
b Department of Business Administration, Lunghwa University of Science and Technology, Taiwan, ROC

Article info

Keywords: Cerebellar model articulation controller; Neural network; Stock index forecasting; Support vector regression; Back-propagation neural network

0957-4174/$ - see front matter © 2011 Elsevier Ltd. All rights reserved. doi:10.1016/j.eswa.2011.05.082

* Corresponding author. Address: Department of Business Administration, Lunghwa University of Science and Technology, Taoyuan, Taiwan. Tel.: +886 2 82093211x6509; fax: +886 2 82093211x6510.

E-mail address: [email protected] (J.-Y. Wu).

Abstract

Stock index forecasting is one of the major activities of financial firms and private investors in making investment decisions. Although many techniques have been developed for predicting stock indexes, building an efficient stock index forecasting model remains an attractive issue, since even the smallest improvement in prediction accuracy can have a positive impact on investments. In this paper, an efficient cerebellar model articulation controller neural network (CMAC NN) is proposed for stock index forecasting. The traditional CMAC NN scheme has been successfully used in robot control owing to its fast learning, reasonable generalization capability and robust noise resistance, but few studies have applied a CMAC NN scheme to forecasting problems. To improve forecasting performance, this paper presents an efficient CMAC NN scheme that employs a high quantization resolution and a large generalization size to reduce generalization error, and uses an efficient and fast hash coding to accelerate the many-to-few mappings. The forecasting results and robustness of the proposed CMAC NN scheme were compared with those of a support vector regression (SVR) model and a back-propagation neural network (BPNN). Experimental results on the Nikkei 225 and Taiwan Stock Exchange Capitalization Weighted Stock Index (TAIEX) closing indexes show that the proposed CMAC NN scheme was superior to the SVR and BPNN models.

© 2011 Elsevier Ltd. All rights reserved.

1. Introduction

Over the past three decades there has been a growing interest in financial time series forecasting (De Gooijer & Hyndman, 2006; Lawrence, Goodwin, O'Connor, & Önkal, 2006). Stock index forecasting is one of the major activities of financial firms and private investors in making investment decisions. Moreover, it is regarded as a challenging task of the financial time series prediction process, since the stock market is a complex, evolutionary, and nonlinear dynamic system (Atsalakis & Valavanis, 2009; Hall, 1994; Yaser & Atiya, 1996).

Many techniques have been applied to the domain of stock index prediction; they are traditionally categorized as statistical methods and spectral analysis (Chatfield, 2001; De Gooijer & Hyndman, 2006; Lawrence et al., 2006). Zhang, Patuwo, and Hu (1998) found that traditional statistical methods based on linear models, such as the Box–Jenkins approach (autoregressive integrated moving average models), have difficulty modeling real systems with nonlinear behavior. Rigozo, Echer, Nordemann, Vieira, and de Faria (2005) compared the advantages and drawbacks of four classical spectral analysis methods. These approaches are difficult for general practitioners to apply; that is, prior knowledge of signal processing is required to adjust specific parameters of these spectral analysis methods.

Neural networks (NNs), which can accurately model nonlinear systems, have been found to be useful techniques for stock index prediction. This is due to their ability to capture subtle functional relationships in empirical data even when the underlying relationships are hard to describe (Atsalakis & Valavanis, 2009; Vellido, Lisboa, & Vaughan, 1999; Zhang et al., 1998). Unlike traditional statistical models, NNs are data-driven, non-parametric models. They do not require strong model assumptions and can map any nonlinear function without a priori assumptions about the properties of the data (Chauvin & Rumelhart, 1995; Haykin, 1999; McNelis, 2004). The back-propagation neural network (BPNN) is the most popular NN algorithm for stock index forecasting (Atsalakis & Valavanis, 2009; Vellido et al., 1999; Zhang et al., 1998). However, BPNNs also suffer from a number of disadvantages, such as the risk of model over-fitting and difficulty in obtaining a stable solution (Cao, 2003; Cao & Tay, 2001; Tay & Cao, 2003).

Support vector regression (SVR), a novel algorithm based on statistical learning theory, has been receiving increasing attention for solving nonlinear regression estimation problems (Vapnik, 1999, 2000). It has been successfully applied to various time series prediction problems, such as production forecasting in the machinery industry, traffic flow prediction and financial time series forecasting (Castro-Neto, Jeong, Jeong, & Han, 2009; Hsu, Hsieh, Chih, & Hsu, 2009; Huang & Tsai, 2009; Kim, 2003; Lu, Lee, & Chiu, 2009; Mohandes, Halawani, Rehman, & Hussain, 2004; Pai & Lin, 2005; Pai, Yang, & Chang, 2009; Tay & Cao, 2001; Tay & Cao, 2003; Thissen, van Brakel, de Weijer, Melssen, & Buydens, 2003).

When constructing a forecasting model, one major concern is increasing the accuracy of predictions. This is especially beneficial in areas like stock index forecasting, where even the smallest improvement in prediction accuracy can have a positive impact on investments. Therefore, an efficient cerebellar model articulation controller neural network (CMAC NN) scheme is proposed for stock index forecasting.

A CMAC is a supervised NN that uses a least mean square (LMS) algorithm. Albus (1975a, 1975b) first introduced the CMAC NN based on the functions of the human cerebellum, which is responsible for muscle control and motor coordination. The cerebellum works as follows: an input signal activates many mossy fibers, each touching a granule cell, and the output of the cerebellum is then the sum of the activated granule cells. The CMAC NN scheme performs these cerebellum functions through a series of mappings, and acts as a clever look-up table. The CMAC NN scheme has the advantages of very fast learning, reasonable generalization ability and robust noise resistance (Wong & Sideris, 1992). Furthermore, Miller, Glanz, and Kraft (1990) found that a CMAC NN scheme converges faster than a standard BPNN. It has been successfully used in many applications, such as control (Lin, Chen, & Chen, 2007; Lin, Peng, & Hsu, 2004; Lu & Tseng, 2005; Peng, 2009), diagnosis (Hung & Wang, 2004; Wang & Jiang, 2004) and classification (Lin, Lee, & Lee, 2008; Wen, Lin, Chang, & Huang, 2009; Wu, 2011). Moreover, CMAC NN schemes have been used for prediction tasks such as electricity price forecasting, power system marginal price forecasting and time series forecasting (Qiaolin, Jing, & Jianxin, 2005; Shi, Gao, & Tilani, 2004; Zhou, Chen, Wu, & Ho, 2003). However, to the best knowledge of the authors, no study has been reported that uses a CMAC NN scheme for stock index forecasting. Therefore, this paper presents an efficient CMAC NN scheme to extend its application field to stock index forecasting.

The proposed method improves the performance of the traditional CMAC NN scheme in two aspects. First, the traditional CMAC NN scheme has only local generalization ability (Wong & Sideris, 1992). To increase generalization ability, the proposed method performs the quantization operation with a high quantization resolution and uses a large generalization size. Second, to perform a fast many-into-few mapping, the proposed CMAC NN scheme employs an efficient hash coding operator (a bitwise XOR operation) that is fast and easy to use (Zobrist, 1969).

Two datasets are used in this paper to evaluate the performance of the proposed efficient CMAC NN model. One is the Nikkei 225 closing cash index collected from the Japanese stock market, and the other is the Taiwan Stock Exchange Capitalization Weighted Stock Index (TAIEX) closing cash index obtained from the Taiwanese stock market. The forecasting performance of the proposed model is compared to that of BPNN and SVR models using prediction error and prediction accuracy as criteria.

The rest of this paper is organized as follows. Section 2 gives a brief introduction to the CMAC NN scheme and the SVR approach. The proposed CMAC NN model is described in detail in Section 3. Section 4 presents the experimental results on the Nikkei 225 and TAIEX closing cash indexes. The paper is concluded in Section 5.

2. Methodology

2.1. CMAC NN scheme

A CMAC NN scheme consists of five cells, the input space (X), sensory cell (S), association cell (A), physical memory cell (P) and output cell (Y), and transforms input values into output values through a series of mappings. Fig. 1 displays a functional scheme with three input vectors and a single output vector, where the input column vectors are denoted x1, x2, . . . , xn and the actual output is Yo. Fig. 1 also illustrates the mappings of input pattern 1 (consisting of three input components and a desired output yd,1) in the CMAC algorithm, as described below.

Fig. 1. A CMAC NN scheme.

(1) X → S: A quantization operator transforms the components of the input vectors into discrete quantization indexes. Each component xi of input pattern j is quantized individually by

$$s_{ij} = \left[ \frac{N_i}{x_i^{\max} - x_i^{\min}} \left( x_{ij} - x_i^{\min} \right) \right] - 1, \quad i = 1, 2, \ldots, n; \; j = 1, 2, \ldots, D_{total}, \qquad (1)$$

$$s_{ij} = 0, \quad \text{if } s_{ij} < 0, \qquad (2)$$

where $s_{ij}$ is the quantization index of component $x_i$ of input pattern j; $x_{ij}$ is component $x_i$ of input pattern j; $x_i^{\min}$ and $x_i^{\max}$ are the minimum and maximum values of input vector $x_i$; $N_i$ is the quantization resolution of input vector $x_i$; n is the dimension of the input space; $D_{total}$ is the total number of input patterns; and $[\cdot]$ denotes the rounded number. The resolution of this quantization relies upon $x_i^{\min}$, $x_i^{\max}$ and $N_i$. Larger values of $N_i$ improve the accuracy of representation, but require a large weight table w.
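As a concrete illustration, Eqs. (1) and (2) reduce to a few lines of code. The following is a minimal Python sketch; the index range and resolution used in the example are hypothetical, not values taken from the experiments.

```python
import numpy as np

def quantize(x, x_min, x_max, N):
    """Quantization of one input component, following Eqs. (1) and (2).

    x     : raw component value x_ij
    x_min : minimum value of input vector x_i
    x_max : maximum value of input vector x_i
    N     : quantization resolution N_i
    """
    s = round(N / (x_max - x_min) * (x - x_min)) - 1  # Eq. (1)
    return max(s, 0)                                  # Eq. (2): clip negative indexes

# Hypothetical example: resolution N = 50 over an assumed index range of 8000-18000
print(quantize(12600.0, 8000.0, 18000.0, 50))  # -> 22
```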

(2) S → A: A sensory cell comprises random tables (Lee, 1996). The size of each random table can be calculated by

$$C_i = N_i + g - 1, \quad i = 1, 2, \ldots, n, \qquad (3)$$

where $C_i$ is the size of the random table of input vector $x_i$ and g is the generalization size. The indexes $s_{ij}$ are mapped to random table i, and segment mappings are created based on g. The quantization and segment mappings generate natural interpolation, and give the CMAC NN scheme the ability to generalize (Handelman, Lane, & Gelfand, 1990). A large g increases generalization ability, but reduces approximation accuracy. A hash coding, which compresses a huge virtual address space into a compact amount of memory and minimizes the probability of physical address collisions, is employed to create an address table. The contents of the address table are then mapped into an association vector a, which consists of "0" and "1" values, where 1 denotes an activated address and 0 an unactivated address.
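The hash coding can be illustrated with a small Zobrist-style sketch in Python. The table sizes below, and the choice of Ni = 50 and g = 10 behind them, are assumptions for illustration; the XOR-and-fold scheme follows the description above (Zobrist, 1969).

```python
import numpy as np

rng = np.random.default_rng(0)
K = 10_000              # assumed size of the physical weight table w (Section 3 uses k = 10000)
C_sizes = [59, 59, 59]  # C_i = N_i + g - 1 for three inputs, assuming N_i = 50 and g = 10

# One table of uniform random integers per input dimension, drawn from [0, K/2]
random_tables = [rng.integers(0, K // 2, size=c) for c in C_sizes]

def hash_address(quantized_indexes):
    """XOR the random numbers selected by the quantized indexes (Zobrist-style
    hashing), then fold the virtual address into the physical address space."""
    h = 0
    for table, s in zip(random_tables, quantized_indexes):
        h ^= int(table[s])
    return h % K

print(hash_address([21, 35, 7]))  # a physical address in [0, K)
```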

(3) A → P → Y: A vector a in cell A corresponds to a table w in cell P. The table w stores the weights. An actual output value in cell Y can be computed by the matrix operation:

$$y_o = \mathbf{a}^T \mathbf{w}, \qquad (4)$$

where $y_o$ is the actual output,

$$\mathbf{a} = \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_k \end{bmatrix}, \qquad \mathbf{w} = \begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_k \end{bmatrix},$$

k is the size of w, and T denotes the transpose of a matrix.

2.1.1. Learning rule

The CMAC NN scheme uses the LMS algorithm to update the weights, as follows:


$$w_l(h+1) = w_l(h) + \frac{\beta \left( y_d - y_o \right)}{g}, \quad l = 1, 2, \ldots, k; \; h = 1, 2, \ldots, epoch_{max}, \qquad (5)$$

where $\beta$ is the learning rate; $y_d$ is the desired output; $w_l(h)$ is the value of weight l in epoch h; $w_l(h+1)$ is the value of weight l in epoch h + 1; and $epoch_{max}$ is the maximum number of epochs.

Only activated weights need to be modified in the CMACalgorithm.
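A minimal Python sketch of Eqs. (4) and (5), assuming the activated addresses have already been produced by the hash coding; the toy values of g, β, and yd are illustrative only.

```python
import numpy as np

def cmac_forward(active_addresses, w):
    """Eq. (4): y_o = a^T w. Since a is binary, the output is just the sum
    of the g activated weights."""
    return w[active_addresses].sum()

def lms_update(w, active_addresses, y_d, y_o, beta, g):
    """Eq. (5): spread the correction beta * (y_d - y_o) / g evenly over the
    g activated weights; inactive weights are untouched."""
    w[active_addresses] += beta * (y_d - y_o) / g
    return w

# Toy usage: g = 4 activated addresses out of k = 10000 weights (values illustrative)
w = np.zeros(10_000)
active = [17, 503, 884, 9021]      # addresses produced by the hash coding
y_o = cmac_forward(active, w)      # 0.0 before any training
w = lms_update(w, active, y_d=0.6, y_o=y_o, beta=0.1, g=4)
print(cmac_forward(active, w))     # 0.06, moving toward the target 0.6
```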

2.2. Support vector regression

The SVR approach is an artificial intelligence forecasting tool based on statistical learning theory and the structural risk minimization principle (Vapnik, 1999, 2000). It can be expressed as the following equation:

$$f(x) = \left( w \cdot \Phi(x) \right) + b, \qquad (6)$$

where w is the weight vector, b is a bias, and $\Phi(x)$ is a kernel mapping that nonlinearly transforms the input into a high-dimensional feature space in which a linear model can be built.

Traditional regression obtains the coefficients by minimizing the squared error, which can be considered as an empirical risk based on a loss function. Vapnik (2000) introduced the so-called ε-insensitive loss function ($L_\varepsilon$) to SVR. It can be expressed as:

$$L_\varepsilon(f(x), Y_d) = \begin{cases} |f(x) - Y_d| - \varepsilon & \text{if } |f(x) - Y_d| \geq \varepsilon \\ 0 & \text{otherwise} \end{cases} \qquad (7)$$

where ε defines the width of the ε-insensitive region: when the predicted value falls inside the band, the loss is zero; conversely, when the predicted value falls outside the band, the loss equals the difference between the predicted value and the margin.
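For clarity, Eq. (7) can be written directly as a small Python function; the numeric values in the usage lines are illustrative.

```python
def eps_insensitive_loss(f_x, y_d, eps):
    """Eq. (7): zero loss inside the eps-tube, linear loss outside it."""
    deviation = abs(f_x - y_d)
    return deviation - eps if deviation >= eps else 0.0

print(eps_insensitive_loss(1.05, 1.00, eps=0.1))  # 0.0  (inside the tube)
print(eps_insensitive_loss(1.25, 1.00, eps=0.1))  # 0.15 (outside the tube)
```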

The weight vector w and bias b in Eq. (6) can be estimated by minimizing the following regularized risk function (Vapnik, 1999, 2000):

$$R(C) = C \frac{1}{D_{train}} \sum_{i=1}^{n} L_\varepsilon \left( f(x_i), Y_d \right) + \frac{1}{2} \|w\|^2, \qquad (8)$$

where $L_\varepsilon(f(x), Y_d)$ is the ε-insensitive loss function of Eq. (7); $\frac{1}{2}\|w\|^2$ is the regularization term, which controls the trade-off between the complexity and the approximation accuracy of the regression model to ensure that the model possesses improved generalization performance; and C is the regularization constant used to specify the trade-off between the empirical risk and the regularization term. Both C and ε are user-determined parameters.

Two positive slack variables, $\xi_j$ and $\xi_j^*$, $j = 1, 2, \ldots, D_{train}$, can be used to measure the deviation $(Y_d - f(x_i))$ from the boundaries of the ε-insensitive zone. That is, they represent the distance from the actual values to the corresponding boundary values of the ε-insensitive zone. Using these slack variables, Eq. (8) is transformed into the following constrained form:

$$\text{Minimize:} \quad R_{reg}(f) = \frac{1}{2}\|w\|^2 + C \sum_{j=1}^{D_{train}} \left( \xi_j + \xi_j^* \right),$$

$$\text{subject to} \quad \begin{cases} Y_d - \left( w \cdot \Phi(x_i) \right) - b \leq \varepsilon + \xi_j \\ \left( w \cdot \Phi(x_i) \right) + b - Y_d \leq \varepsilon + \xi_j^* \\ \xi_j, \, \xi_j^* \geq 0 \end{cases} \quad \text{for } i = 1, \ldots, n; \; j = 1, 2, \ldots, D_{train}. \qquad (9)$$

Applying Lagrangian multipliers and the Karush–Kuhn–Tucker conditions to Eq. (9) yields the following dual Lagrangian form (Vapnik, 1999, 2000):


$$\text{Maximize:} \quad L_d(\alpha, \alpha^*) = -\varepsilon \sum_{j=1}^{D_{train}} \left( \alpha_j^* + \alpha_j \right) + \sum_{j=1}^{D_{train}} \left( \alpha_j^* - \alpha_j \right) y_{d,j} - \frac{1}{2} \sum_{j=1}^{D_{train}} \left( \alpha_j^* - \alpha_j \right) \left( \alpha_j^* - \alpha_j \right) K(x_i, x_i'), \quad i = 1, 2, \ldots, n; \; j = 1, 2, \ldots, D_{train},$$

$$\text{subject to the constraints:} \quad \begin{cases} \sum_{j=1}^{D_{train}} \left( \alpha_j^* - \alpha_j \right) = 0 \\ 0 \leq \alpha_j \leq C, \quad j = 1, 2, \ldots, D_{train} \\ 0 \leq \alpha_j^* \leq C, \quad j = 1, 2, \ldots, D_{train} \end{cases} \qquad (10)$$

The Lagrangian multipliers in Eq. (10) satisfy the equality $\alpha_j \alpha_j^* = 0$. The Lagrangian multipliers $\alpha_j$ and $\alpha_j^*$ are calculated, and an optimal desired weight vector of the regression hyperplane is $w^* = \sum_{j=1}^{D_{train}} (\alpha_j - \alpha_j^*) K(x_i, x_i')$, $i = 1, 2, \ldots, n$. Hence, the general form of the SVR-based regression function can be written as (Vapnik, 1999, 2000):

$$f(x, w) = f(x, \alpha, \alpha^*) = \sum_{j=1}^{D_{train}} \left( \alpha_j - \alpha_j^* \right) K(x_i, x_i') + b, \quad i = 1, 2, \ldots, n; \; j = 1, 2, \ldots, D_{train}, \qquad (11)$$

where $K(x_i, x_i')$ is the kernel function. Cherkassky and Ma (2004) and Vapnik (2000) proposed that the radial basis function (RBF) is suited to most forecasting problems. Thus, the RBF is applied as the kernel function in this paper.
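As an illustrative sketch of this formulation (not the LIBSVM implementation used later in the paper), an ε-SVR with an RBF kernel can be fit with scikit-learn, whose SVR class wraps LIBSVM; the synthetic data and the mapping gamma = 1/(2σ²) are assumptions made for the example.

```python
import numpy as np
from sklearn.svm import SVR

# Synthetic stand-in data: 200 patterns with four scaled inputs (assumed)
rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, size=(200, 4))
y = np.sin(X.sum(axis=1)) + 0.05 * rng.standard_normal(200)

# RBF kernel: gamma = 1 / (2 * sigma^2) reproduces exp(-||x - x'||^2 / (2 sigma^2))
sigma = 0.8
model = SVR(kernel="rbf", C=2.0**3, epsilon=2.0**-9, gamma=1.0 / (2.0 * sigma**2))
model.fit(X[:160], y[:160])
pred = model.predict(X[160:])
print(np.sqrt(np.mean((pred - y[160:]) ** 2)))  # test RMSE
```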

3. The proposed CMAC NN scheme

The proposed CMAC NN scheme was developed and applied to the two closing cash index cases. Fig. 2 shows the pseudo-code of the proposed CMAC NN scheme, which is described below.

Step 1: Quantize and normalize. Eqs. (1) and (2) are used to quantize the components of the input patterns. Each desired output $y_{d,j}$ ($j = 1, 2, \ldots, D_{total}$) undergoes normalization, as follows:

$$y'_{d,j} = \frac{y_{d,j} - y_d^{\min}}{y_d^{\max} - y_d^{\min}} \left( E_{\max} - E_{\min} \right) + E_{\min}, \quad j = 1, 2, \ldots, D_{total}, \qquad (12)$$

where $y'_{d,j}$ is the normalized desired output j; $y_d^{\min}$ and $y_d^{\max}$ are the minimum and maximum values of the desired output vector $Y_d = [y_{d,1}, y_{d,2}, \ldots, y_{d,D_{total}}]^T$; and $E_{\min}$ and $E_{\max}$ are the minimum and maximum values of the expected output. The values $[E_{\min}, E_{\max}]$ are generally set to [0.2, 0.8].

Fig. 2. The pseudo-code of the proposed CMAC NN scheme.

Step 2: Create random tables and weight table w. Eq. (3) is used to compute the size of each random table, and the random tables are then created. To control the range of the indexes of the generated address table, each random table consists of uniform random numbers created from the interval [0, k/2]. A table w is generated based on k as defined in Section 2.1, with initial weights set to "0". To reduce hash collisions, this paper uses a large value of k = 10000.

Step 3: Generate address table. When the parameter k is much greater than the index of the address table in cell A, hash coding is not required. To implement the many-into-few mapping, the proposed CMAC NN scheme nevertheless uses a hash coding based on the bitwise XOR operator (Zobrist, 1969). The implementation of the hash coding can be found in the literature (Wu, 2011). The numbers in the address table are then mapped onto vector a.

Step 4: Calculate actual output. Eq. (4) is used to compute the values $y_{o,j}$ ($j = 1, 2, \ldots, D_{train}$).

Step 5: Learning rule. Eq. (5) is employed to modify the weights in w.

Step 6: Evaluate training accuracy. A normalized root mean square error (NRMSE) is used as a performance index to evaluate the training accuracy, as follows:

$$\text{NRMSE} = \sqrt{ \frac{ \sum_{j=1}^{D_{train}} \left( y'_{d,j} - y_{o,j} \right)^2 }{ D_{train} } }, \qquad (13)$$

Steps 3–6 are repeated until the termination condition $epoch_{max}$ is met.

Step 7: Measure test accuracy. Input patterns j ($j = D_{train} + 1, D_{train} + 2, \ldots, D_{total}$) are used for testing. The actual outputs of the recall of the CMAC NN scheme are calculated using the w obtained in the training stage (Steps 1–6), and the generalization accuracy of the CMAC NN scheme is then measured by a test NRMSE.
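Steps 1 and 6 can be made concrete with a short Python sketch of Eqs. (12) and (13); note that, as defined in Eq. (13), the "normalized" RMSE is an ordinary RMSE computed on the normalized outputs. The sample values are illustrative.

```python
import numpy as np

def normalize_output(y_d, e_min=0.2, e_max=0.8):
    """Eq. (12): linearly map the desired outputs into [E_min, E_max]."""
    return (y_d - y_d.min()) / (y_d.max() - y_d.min()) * (e_max - e_min) + e_min

def nrmse(y_true, y_pred):
    """Eq. (13): RMSE computed over the (normalized) training patterns."""
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

y_d = np.array([9500.0, 10200.0, 11800.0, 10900.0])  # illustrative closing prices
y_norm = normalize_output(y_d)
print(y_norm)                        # all values fall inside [0.2, 0.8]
print(nrmse(y_norm, y_norm + 0.01))  # ~0.01 for a uniform 0.01 error
```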

4. Empirical study

4.1. Datasets and performance criteria

To evaluate the performance of the proposed CMAC NN forecasting scheme, the daily Nikkei 225 and TAIEX closing cash indexes are used in this paper. In forecasting the Nikkei 225 closing cash index, four forecasting variables are used. One is the previous day's cash market closing index, and the others are three Nikkei 225 index futures prices, since futures price changes lead price changes in the cash market (Lee & Chen, 2002; Lee & Chiu, 2002). The three Nikkei 225 index futures contracts are traded on the Singapore Exchange-Derivative Trading Limited (SGX-DT), Osaka Securities Exchange (OSE) and Chicago Mercantile Exchange (CME) markets. The daily futures and cash prices of the Nikkei 225 cash index from December 6, 2004 to December 30, 2008, provided by Bloomberg, are collected and used in this paper. There are 1000 data points in total in the dataset, and the daily Nikkei 225 closing cash prices are shown in Fig. 3.

Fig. 3. The daily Nikkei 225 closing cash prices from December 6, 2004 to December 30, 2008.

For forecasting the TAIEX closing cash index, technical indicators are used as forecasting variables, since technical indicators are the most widely used features in stock index prediction (Balachandher, Fauzias, & Lai, 2002; Leigh, Hightower, & Modani, 2005). Six technical indicators, determined by the review of domain experts and the literature (Atsalakis & Valavanis, 2009; Leigh et al., 2005; Wood, 2002), are selected as forecasting variables: the previous day's cash market high, low, volume and closing index, the 6-day relative strength indicator (RSI 6), and the 10-day total amount weighted stock price index (TAPI 10). For details about these technical indicators, please refer to Leigh et al. (2005) and Wood (2002). The daily technical indicators and cash prices of the TAIEX cash index from December 13, 2004 to December 31, 2008, provided by Capital Futures Corporation, Taipei, are collected as a dataset. Fig. 4 depicts the daily TAIEX closing cash index. There are 1000 data points in total in the dataset.

Fig. 4. The daily TAIEX closing cash prices from December 13, 2004 to December 31, 2008.

The prediction performance is evaluated using the following performance measures: the root mean square error (RMSE), mean absolute difference (MAD), mean absolute percentage error (MAPE), directional accuracy (DA), correct up trend (CP) and correct down trend (CD). The definitions of these criteria can be found in Table 1. RMSE, MAD and MAPE are used to evaluate the prediction error; DA, CP and CD are utilized to evaluate the prediction accuracy.

4.2. Forecasting results

The proposed CMAC NN scheme was coded in MATLAB and applied to predict the Nikkei 225 and TAIEX closing cash indexes. The first 800 data points (80% of the total sample points) of each dataset are used as the training sample, while the remaining 200 data points (20% of the total sample points) are used as the test sample. This paper employed $epoch_{max}$ = 200 as a termination criterion, and β = 0.1. The optimal settings of the parameters ($N_i$, g) for the proposed CMAC NN scheme were evaluated using $N_i$ = {10, 20, 30, 40, 50} and g = {10, 20, 30, 40, 50}. Table 2 lists the model selection results of the proposed CMAC NN scheme. When using a large value of $N_i$ = 50, increasing g decreased the training accuracy, but enhanced the generalization ability of the CMAC NN scheme. When using small values of $N_i$ = 10 and $N_i$ = 20 against various g = {10, 20, 30, 40, 50}, poor training and generalization accuracies were found. As shown in Table 2, the CMAC NN scheme obtained good training and test RMSEs using the parameter settings ($N_i$, g) = (50, 40), i.e., a high quantization resolution and a large generalization size. Although the parameter settings ($N_i$, g) = (20, 50) produced the best training RMSE, the corresponding test RMSE was poor.

To build the BPNN forecasting model, the NN toolbox of the MATLAB software is adopted in this paper. The original data are scaled into the range [−1.0, 1.0] when building the BPNN forecasting model. This linear scaling ensures that input variables with large values do not overwhelm smaller-valued inputs, which helps to reduce prediction errors.

Since a single-hidden-layer network is sufficient to model any complex system with the desired accuracy (Chauvin & Rumelhart, 1995), the BPNN model designed in this paper has only one hidden layer. The performance of a BPNN is mainly affected by the network topology, i.e., the number of nodes in each layer, and by the learning rate. There are no general rules for the choice of network topology; the selection is usually based on the trial-and-error (also called cross-validation) method. In this paper, the optimal network topology of the BPNN model is determined by trial and error.

In modeling the BPNN for forecasting the Nikkei 225 closing index, the input layer has four nodes, as four forecasting variables are used. Since there are no general rules for choosing the number of hidden nodes, the numbers tested were set to 7, 8, 9 and 10. The network has only one output node, the forecasted closing cash price index. As lower learning rates tend to give the best network results (Chauvin & Rumelhart, 1995), learning rates of 0.01, 0.02, 0.03, 0.04 and 0.05 were tested during the training process. The convergence criterion used for training the BPNN model is an RMSE less than or equal to 0.0001, or a maximum of 1000 iterations. The network topology with the minimum test RMSE is considered the optimal network.

The test results of the BPNN model with combinations of different hidden-node counts and learning rates are summarized in Table 3. From Table 3, it can be observed that the {4–9–1} topology with a learning rate of 0.04 gives the best forecasting result (minimum test RMSE) and hence is the best topology setup for the BPNN model in forecasting the Nikkei 225 closing cash index. Here, {4–9–1} denotes four nodes in the input layer, nine nodes in the hidden layer and one node in the output layer.
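The trial-and-error search described above amounts to a simple double loop. The sketch below uses scikit-learn's MLPRegressor as a stand-in for the MATLAB NN toolbox, with synthetic data; the candidate grids match those reported in the paper, but everything else is an assumption made for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic stand-in data: 1000 patterns, four scaled forecasting variables (assumed)
rng = np.random.default_rng(2)
X = rng.uniform(-1.0, 1.0, size=(1000, 4))
y = 0.5 * X[:, 0] + 0.3 * np.tanh(X[:, 1:].sum(axis=1))
X_tr, y_tr, X_te, y_te = X[:800], y[:800], X[800:], y[800:]

best = (np.inf, None)
for hidden in (7, 8, 9, 10):                   # candidate hidden-layer sizes
    for lr in (0.01, 0.02, 0.03, 0.04, 0.05):  # candidate learning rates
        net = MLPRegressor(hidden_layer_sizes=(hidden,), learning_rate_init=lr,
                           max_iter=1000, tol=1e-4, random_state=0)
        net.fit(X_tr, y_tr)
        rmse = np.sqrt(np.mean((net.predict(X_te) - y_te) ** 2))
        if rmse < best[0]:
            best = (rmse, (hidden, lr))
print(best)  # the {4-h-1} topology and learning rate with the minimum test RMSE
```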

To build the SVR forecasting model, the LIBSVM package proposed by Chang and Lin (2001) is adopted in this paper. The original datasets are first scaled into the range [−1.0, 1.0] when using the LIBSVM package. In building the SVR forecasting model, the first step is to select the kernel function. As mentioned in Section 2.2, the RBF kernel function is adopted in this paper. It is well known that the performance (estimation accuracy) of SVR depends on the setting of its parameters. Thus, the selection of three parameters, the regularization constant C, the loss function parameter ε and σ (the width of the RBF), is important to the forecasting accuracy of an SVR model.

Cherkassky and Ma (2004) pointed out that for multivariate d-dimensional problems, the RBF width parameter σ is set such that $\sigma_d \approx (0.1, 0.5)$, where d is the number of input variables. To simplify the parameter setting, σ = 0.8 is used for all experiments in this paper. For the setting of parameters C and ε, there are no general selection rules. The analytic parameter selection method proposed by Cherkassky and Ma (2004) and the grid search proposed by Lin, Hsu, and Chang (2003) are used in this paper for setting C and ε. The analytic method is based on sketching the structure of the training data to determine the best parameter values. The grid search is a straightforward method that uses exponentially growing sequences of C and ε to identify good parameters (for example, C = 2^−5, 2^−3, 2^−1, . . . , 2^15). To combine the advantages of these two approaches, this paper first uses the analytic method to select a parameter set of C and ε; the grid search then uses that set as the starting point for searching. The parameter set of C and ε that generates the minimum forecasting error (RMSE) is considered the best parameter set.
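A minimal sketch of such a grid search in Python, again using scikit-learn's SVR in place of LIBSVM; the grids below mirror the exponential sequences of Table 4, while the synthetic data and the helper name grid_search_svr are hypothetical.

```python
import numpy as np
from sklearn.svm import SVR

def grid_search_svr(X_tr, y_tr, X_te, y_te, sigma=0.8,
                    C_grid=2.0 ** np.arange(-3, 7, 2),       # 2^-3 ... 2^5
                    eps_grid=2.0 ** np.arange(-11, -1, 2)):  # 2^-11 ... 2^-3
    """Exponentially growing grid over (C, epsilon), scored by test RMSE."""
    gamma = 1.0 / (2.0 * sigma**2)
    best = (np.inf, None)
    for C in C_grid:
        for eps in eps_grid:
            model = SVR(kernel="rbf", C=C, epsilon=eps, gamma=gamma)
            model.fit(X_tr, y_tr)
            rmse = np.sqrt(np.mean((model.predict(X_te) - y_te) ** 2))
            if rmse < best[0]:
                best = (rmse, (C, eps))
    return best

rng = np.random.default_rng(3)
X = rng.uniform(-1.0, 1.0, size=(300, 4))
y = np.sin(X.sum(axis=1))
print(grid_search_svr(X[:240], y[:240], X[240:], y[240:]))
```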


Table 1
Performance measures and their definitions.

Metrics | Calculation
RMSE | $\text{RMSE} = \sqrt{ \sum_{j=1}^{n_{total}} \left( y_{d,j} - y_{o,j} \right)^2 / n_{total} }$
MAD | $\text{MAD} = \sum_{j=1}^{n_{total}} \left| y_{d,j} - y_{o,j} \right| / n_{total}$
MAPE | $\text{MAPE} = \sum_{j=1}^{n_{total}} \left| \left( y_{d,j} - y_{o,j} \right) / y_{d,j} \right| / n_{total}$
DA | $\text{DA} = \frac{100}{n_{total}} \sum_{j=1}^{n_{total}} d_j$, where $d_j = 1$ if $(y_{o,j} - y_{o,j-1})(y_{d,j} - y_{d,j-1}) \geq 0$, and $d_j = 0$ otherwise
CP | $\text{CP} = \frac{100}{n_1} \sum_{j=1}^{n_{total}} d_j$, where $d_j = 1$ if $(y_{o,j} - y_{o,j-1})(y_{d,j} - y_{d,j-1}) \geq 0$ and $y_{d,j} - y_{d,j-1} \geq 0$, and $d_j = 0$ otherwise
CD | $\text{CD} = \frac{100}{n_2} \sum_{j=1}^{n_{total}} d_j$, where $d_j = 1$ if $(y_{o,j} - y_{o,j-1})(y_{d,j} - y_{d,j-1}) \geq 0$ and $y_{d,j} - y_{d,j-1} < 0$, and $d_j = 0$ otherwise

Note that $y_d$ and $y_o$ represent the actual and predicted values, respectively; $n_{total}$ is the number of data points for training or for testing; $n_1$ is the number of data points belonging to an up trend; and $n_2$ is the number of data points belonging to a down trend.
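The criteria of Table 1 translate directly into code. The following Python sketch computes all six measures for a pair of actual/predicted series; the handling of the trend subsets follows the table note (n1 up-trend points, n2 down-trend points), and the sample series is illustrative.

```python
import numpy as np

def forecast_metrics(y_d, y_o):
    """The six criteria of Table 1. Trend handling follows the table note:
    n1 = number of up-trend points, n2 = number of down-trend points."""
    y_d, y_o = np.asarray(y_d, float), np.asarray(y_o, float)
    rmse = np.sqrt(np.mean((y_d - y_o) ** 2))
    mad = np.mean(np.abs(y_d - y_o))
    mape = np.mean(np.abs((y_d - y_o) / y_d)) * 100
    dd, do = np.diff(y_d), np.diff(y_o)  # actual and predicted day-to-day changes
    hit = (dd * do) >= 0                 # direction predicted correctly
    da = 100 * hit.mean()
    cp = 100 * hit[dd >= 0].mean()       # accuracy restricted to up-trend days
    cd = 100 * hit[dd < 0].mean()        # accuracy restricted to down-trend days
    return rmse, mad, mape, da, cp, cd

print(forecast_metrics([100, 102, 101, 105], [100.5, 101.5, 101.8, 104.0]))
```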

Table 2
Model selection results of the proposed CMAC NN model – Nikkei 225.

Ni   g    Training RMSE   Test RMSE
10   10   0.01023   0.03300
10   20   0.00952   0.02485
10   30   0.00916   0.02379
10   40   0.00896   0.02850
10   50   0.00905   0.03029
20   10   0.00940   0.02657
20   20   0.00925   0.02582
20   30   0.00896   0.02375
20   40   0.00873   0.02490
20   50   0.00861   0.02499
30   10   0.00958   0.02882
30   20   0.00951   0.02530
30   30   0.00925   0.02678
30   40   0.00889   0.02433
30   50   0.00883   0.02507
40   10   0.01139   0.02533
40   20   0.00994   0.02304
40   30   0.00999   0.02446
40   40   0.01118   0.02531
40   50   0.00987   0.02505
50   10   0.00899   0.02379
50   20   0.00897   0.02426
50   30   0.00929   0.02485
50   40   0.00878   0.02293
50   50   0.00919   0.02358

The bold values represent the best training and test RMSEs.


In the modeling of the SVR model, C = 1.25 and ε = 0.0019 were obtained from the analytic method. Since C = 1.25 is near C = 2^1 and ε = 0.0019 is close to ε = 2^−9, the parameter set (C = 2^1, ε = 2^−9) is used as the starting point of the grid search. The test results of the SVR model with combinations of different parameter sets are summarized in Table 4. From Table 4, it can be found that the parameter set (C = 2^3, ε = 2^−9) gives the best forecasting result (minimum test RMSE) and is the best parameter set for the SVR model in forecasting the Nikkei 225 closing cash index.

The Nikkei 225 closing cash price index forecasting results using the BPNN, SVR and proposed CMAC NN models are listed in Table 5. From Table 5, it can be found that the RMSE, MAD and MAPE of the CMAC NN model are 55.36, 35.50 and 0.35%, respectively. These values are smaller than those of the BPNN and SVR models, indicating a smaller deviation between the actual and predicted values when using the proposed CMAC NN model. Moreover, compared to the BPNN and SVR models, the CMAC NN model has the highest DA (directional accuracy), CP (correct up trend) and CD (correct down trend) ratios, which are 81.58%, 83.33% and 79.62%, respectively. DA, CP and CD provide a good measure of the consistency in predicting the price direction. Thus, it can be concluded that the proposed CMAC NN model provides better forecasting results than the BPNN and SVR models in terms of both prediction error and prediction accuracy.

The proposed CMAC NN scheme also performs well in forecasting the TAIEX closing cash prices. Note that the processes of selecting the best parameter settings for the CMAC NN, SVR and BPNN models are omitted here. Table 6 summarizes the TAIEX closing cash price forecasting results using the BPNN, SVR and proposed CMAC NN models. It can also be observed from Table 6 that the proposed CMAC NN model has the smallest RMSE, MAD and MAPE values and the highest DA, CP and CD values in comparison with the BPNN and SVR models. Thus, the proposed method produces lower prediction errors and higher prediction accuracy on the direction of price changes, and outperforms the BPNN and SVR models in forecasting the TAIEX closing cash prices.

4.3. Robustness evaluation

To evaluate the robustness of the proposed CMAC NN method, the performance of the BPNN, SVR and proposed models was tested using different ratios of training and test sample sizes. The test plan is based on the relative ratio of the training dataset size to the complete dataset size. In this section, four relative ratios, 60%, 70%, 80% and 90%, are considered. The prediction results for the TAIEX and Nikkei 225 closing cash indexes obtained by the three methods are summarized in Table 7 in terms of two criteria, MAPE and DA.

Based on the findings in Table 7, it can be observed that the proposed CMAC NN scheme outperforms the other benchmarked tools under all four ratios in terms of the MAPE and DA criteria. It produces the lowest prediction error and the highest prediction accuracy under all relative ratios on the two test datasets, indicating that the proposed CMAC NN model indeed provides better forecasting accuracy than the other two approaches.
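This robustness test plan amounts to re-fitting each model at several train/test split ratios and recording the test MAPE and DA. A generic Python sketch, in which fit_predict and the naive baseline are hypothetical placeholders for any of the three models:

```python
import numpy as np

def evaluate_ratios(X, y, fit_predict, ratios=(0.6, 0.7, 0.8, 0.9)):
    """Re-fit a model at several train/test split ratios and report the test
    MAPE (%) and DA (%). fit_predict(X_tr, y_tr, X_te) -> predictions."""
    results = {}
    for r in ratios:
        cut = int(len(y) * r)
        y_te = y[cut:]
        pred = fit_predict(X[:cut], y[:cut], X[cut:])
        mape = np.mean(np.abs((y_te - pred) / y_te)) * 100
        hit = (np.diff(y_te) * np.diff(pred)) >= 0
        results[r] = (mape, 100 * hit.mean())
    return results

# Demo with a naive last-value baseline standing in for any of the three models
X = np.arange(100.0).reshape(-1, 1)
y = np.linspace(9000.0, 10000.0, 100)
naive = lambda X_tr, y_tr, X_te: np.full(len(X_te), y_tr[-1])
print(evaluate_ratios(X, y, naive))
```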


Table 3
Model selection results of the BPNN forecasting model – Nikkei 225.

Number of hidden nodes   Learning rate   Training RMSE   Test RMSE
7    0.01   0.07378   0.06115
7    0.02   0.07491   0.06115
7    0.03   0.06964   0.06076
7    0.04   0.06876   0.06123
7    0.05   0.06940   0.06130
8    0.01   0.07453   0.06131
8    0.02   0.07443   0.06123
8    0.03   0.06886   0.06128
8    0.04   0.07424   0.06112
8    0.05   0.07491   0.06144
9    0.01   0.06897   0.06124
9    0.02   0.06871   0.06132
9    0.03   0.06826   0.05980
9    0.04   0.06804   0.05978
9    0.05   0.06842   0.06099
10   0.01   0.06976   0.06114
10   0.02   0.07015   0.06161
10   0.03   0.07116   0.06165
10   0.04   0.06873   0.06130
10   0.05   0.07382   0.06123

The bold values represent the best training and test RMSEs.

Table 4
Model selection results of the SVR forecasting model – Nikkei 225.

C      ε       Training RMSE   Test RMSE
2^−3   2^−11   0.04267   0.08201
2^−3   2^−9    0.06183   0.08230
2^−3   2^−7    0.13460   0.08802
2^−3   2^−5    0.27802   0.09998
2^−3   2^−3    0.52487   0.11573
2^−1   2^−11   0.03066   0.07279
2^−1   2^−9    0.03990   0.07345
2^−1   2^−7    0.09498   0.08465
2^−1   2^−5    0.25241   0.09595
2^−1   2^−3    0.43210   0.26134
2^1    2^−11   0.02691   0.06780
2^1    2^−9    0.02706   0.06768
2^1    2^−7    0.02788   0.08112
2^1    2^−5    0.17302   0.09023
2^1    2^−3    0.39440   0.25323
2^3    2^−11   0.02519   0.06448
2^3    2^−9    0.02503   0.04999
2^3    2^−7    0.02993   0.06708
2^3    2^−5    0.05305   0.07328
2^3    2^−3    0.14103   0.10479
2^5    2^−11   0.02796   0.06486
2^5    2^−9    0.04253   0.06598
2^5    2^−7    0.15244   0.07015
2^5    2^−5    0.39440   0.05693
2^5    2^−3    0.91715   0.08226

The bold values represent the best training and test RMSEs.

Table 5
The Nikkei 225 closing cash price forecasting results using the BPNN, SVR and proposed CMAC NN models.

Models    RMSE    MAD     MAPE (%)   DA (%)   CD (%)   CP (%)
BPNN      61.65   43.27   0.39       79.39    81.67    76.85
SVR       61.58   43.20   0.39       78.95    81.67    75.93
CMAC NN   55.36   35.50   0.35       81.58    83.33    79.62

The bold values represent the best results.

Table 6
The TAIEX closing cash price forecasting results using the BPNN, SVR and proposed CMAC NN models.

Models    RMSE    MAD     MAPE (%)   DA (%)   CD (%)   CP (%)
BPNN      48.06   36.82   0.59       76.77    79.46    74.32
SVR       50.22   38.54   0.61       74.84    79.46    70.62
CMAC NN   40.77   26.90   0.50       79.35    82.16    76.79

The bold values represent the best results.

Table 7
Robustness evaluation of the BPNN, SVR and proposed CMAC NN models with different training and test sample sizes.

Relative ratio (%)   Models    TAIEX test MAPE (%)   TAIEX test DA (%)   Nikkei 225 test MAPE (%)   Nikkei 225 test DA (%)
60   BPNN      0.74   81.75   0.48   81.08
60   SVR       0.74   80.89   0.49   82.86
60   CMAC NN   0.69   82.69   0.45   85.30
70   BPNN      0.72   73.21   0.40   85.37
70   SVR       0.71   75.36   0.42   84.36
70   CMAC NN   0.64   79.67   0.39   86.25
80   BPNN      0.59   76.77   0.39   79.38
80   SVR       0.61   74.84   0.39   78.94
80   CMAC NN   0.50   79.35   0.35   81.58
90   BPNN      0.67   70.25   0.36   84.26
90   SVR       0.67   70.11   0.34   85.22
90   CMAC NN   0.65   74.50   0.27   87.37

The bold values represent the best results.


5. Conclusions

Stock index forecasting is a challenging task that has drawn serious attention during the past three decades. This paper has developed an efficient CMAC NN scheme for stock index forecasting. The proposed method improves the forecasting performance of the traditional CMAC NN scheme in two aspects: one is to reduce the generalization error by using a high quantization resolution and a large generalization size, and the other is to employ a fast hash coding to speed up the many-to-few mappings. The experiments evaluated two datasets, the Nikkei 225 and TAIEX closing cash indexes, and compared the performance of the proposed method with those of SVR and BPNN models using prediction error and prediction accuracy as criteria. The experimental results show that the proposed model produces lower prediction errors and higher prediction accuracy, and outperformed the comparison methods used in this paper. Moreover, the proposed scheme is much easier to use than traditional statistical and spectral analysis methods. Therefore, it can be concluded that the proposed efficient CMAC NN scheme is a good alternative for stock index forecasting.

Acknowledgement

The authors would like to thank the National Science Council of the Republic of China, Taiwan, for financially supporting this research under Contract Nos. NSC 98-2221-E-262-014 and NSC 98-2221-E-231-005.

References

Albus, J. S. (1975a). Data storage in the cerebellar model articulation controller (CMAC). ASME Journal of Dynamic Systems, Measurement, and Control, 97, 228–233.
Albus, J. S. (1975b). A new approach to manipulator control: The cerebellar model articulation controller (CMAC). ASME Journal of Dynamic Systems, Measurement, and Control, 97, 220–227.
Atsalakis, G. S., & Valavanis, K. P. (2009). Surveying stock market forecasting techniques – part II: Soft computing methods. Expert Systems with Applications, 36, 5932–5941.
Balachandher, K. G., Fauzias, M. N., & Lai, M. M. (2002). An examination of the random walk model and technical trading rules in the Malaysian stock market. Quarterly Journal of Business and Economics, 41, 81–104.
Cao, L. (2003). Support vector machines experts for time series forecasting. Neurocomputing, 51, 321–339.
Cao, L., & Tay, F. E. H. (2001). Financial forecasting using support vector machines. Neural Computing & Applications, 10, 184–192.
Castro-Neto, M., Jeong, Y. S., Jeong, M. K., & Han, L. D. (2009). Online-SVR for short-term traffic flow prediction under typical and atypical traffic conditions. Expert Systems with Applications, 36, 6164–6173.
Chang, C. C., & Lin, C. J. (2001). LIBSVM: A library for support vector machines. <http://www.csie.ntu.edu.tw/~cjlin/libsvm>.
Chatfield, C. (2001). Time-series forecasting. New York, USA: Chapman & Hall/CRC.
Chauvin, Y., & Rumelhart, D. E. (1995). Backpropagation: Theory, architectures, and applications. New Jersey: Lawrence Erlbaum Associates.
Cherkassky, V., & Ma, Y. (2004). Practical selection of SVM parameters and noise estimation for SVM regression. Neural Networks, 17, 113–126.
De Gooijer, J. G., & Hyndman, R. J. (2006). 25 years of time series forecasting. International Journal of Forecasting, 22, 443–473.
Hall, J. W. (1994). Adaptive selection of US stocks with neural nets. In G. J. Deboeck (Ed.), Trading on the edge: Neural, genetic and fuzzy systems for chaotic financial markets. New York: Wiley.
Handelman, D. A., Lane, S. H., & Gelfand, J. J. (1990). Integrating neural networks and knowledge-based systems for intelligent robotic control. IEEE Control Systems Magazine, 10, 77–87.
Haykin, S. (1999). Neural network: A comprehensive foundation. New Jersey: Prentice Hall.
Hsu, S. H., Hsieh, J. J. P.-A., Chih, T. C., & Hsu, K. C. (2009). A two-stage architecture for stock price forecasting by integrating self-organizing map and support vector regression. Expert Systems with Applications, 36, 7947–7951.
Huang, C. L., & Tsai, C. Y. (2009). A hybrid SOFM–SVR with a filter-based feature selection for stock market forecasting. Expert Systems with Applications, 36, 1529–1539.
Hung, C. P., & Wang, M. H. (2004). Diagnosis of incipient faults in power transformers using CMAC neural network approach. Electric Power Systems Research, 71, 235–244.
Kim, K. J. (2003). Financial time series forecasting using support vector machines. Neurocomputing, 55, 307–319.
Lawrence, M., Goodwin, P., O'Connor, M., & Önkal, D. (2006). Judgmental forecasting: A review of progress over the last 25 years. International Journal of Forecasting, 22, 493–518.
Lee, J. (1996). Measurement of machine performance degradation using a neural network model. Computers in Industry, 30, 193–209.
Lee, T. S., & Chen, N. J. (2002). Investigating the information content of non-cash-trading index futures using neural networks. Expert Systems with Applications, 22, 225–234.
Lee, T. S., & Chiu, C. C. (2002). Neural network forecasting of an opening cash price index. International Journal of Systems Science, 33, 229–237.
Leigh, W., Hightower, R., & Modani, N. (2005). Forecasting the New York stock exchange composite index with past price and interest rate on condition of volume spike. Expert Systems with Applications, 28, 1–8.
Lin, C. M., Chen, L. Y., & Chen, C. H. (2007). RCMAC hybrid control for MIMO uncertain nonlinear systems using sliding-mode technology. IEEE Transactions on Neural Networks, 18, 708–720.
Lin, C. J., Hsu, C. W., & Chang, C. C. (2003). A practical guide to support vector classification. Taipei, Taiwan: Department of Computer Science and Information Engineering, National Taiwan University.
Lin, C. J., Lee, J. H., & Lee, C. Y. (2008). A novel hybrid learning algorithm for parametric fuzzy CMAC networks and its classification applications. Expert Systems with Applications, 35, 1711–1720.
Lin, C. M., Peng, Y. F., & Hsu, C. F. (2004). Robust cerebellar model articulation controller design for unknown nonlinear systems. IEEE Transactions on Circuits and Systems II: Express Briefs, 51, 354–358.
Lu, C. J., Lee, T. S., & Chiu, C. C. (2009). Financial time series forecasting using independent component analysis and support vector regression. Decision Support Systems, 47, 115–125.
Lu, H. C., & Tseng, T. Y. (2005). Design and implementation of CMAC-based controller for permanent magnet synchronous motor. Electric Power Components and Systems, 33, 1015–1037.
McNelis, P. D. (2004). Neural networks in finance: Gaining predictive edge in the market. New York: Academic Press.
Miller, W. T., Glanz, F. H., & Kraft, L. G. (1990). CMAC: An associative neural network alternative to backpropagation. Proceedings of the IEEE, 78, 1561–1567.
Mohandes, M. A., Halawani, T. O., Rehman, S., & Hussain, A. A. (2004). Support vector machines for wind speed prediction. Renewable Energy, 29, 939–947.
Pai, P. F., & Lin, C. S. (2005). A hybrid ARIMA and support vector machines model in stock price forecasting. Omega, 33, 497–505.
Pai, P. F., Yang, S. L., & Chang, P. T. (2009). Forecasting output of integrated circuit industry by support vector regression models with marriage honey-bees optimization algorithms. Expert Systems with Applications, 36, 10746–10751.
Peng, Y. F. (2009). Robust intelligent sliding model control using recurrent cerebellar model articulation controller for uncertain nonlinear chaotic systems. Chaos, Solitons & Fractals, 39, 150–167.
Qiaolin, D., Jing, T., & Jianxin, L. (2005). Application of new FCMAC neural network in power system marginal price forecasting. In The 7th international power engineering conference, Singapore (pp. 1–57).
Rigozo, N. R., Echer, E., Nordemann, D. J. R., Vieira, L. E. A., & de Faria, H. H. (2005). Comparative study between four classical spectral analysis methods. Applied Mathematics and Computation, 168, 411–430.
Shi, D. M., Gao, J. B., & Tilani, R. (2004). Univariate time series forecasting with fuzzy CMAC. In 2004 international conference on machine learning and cybernetics, Singapore (Vol. 7, pp. 4166–4170).
Tay, F. E. H., & Cao, L. (2001). Application of support vector machines in financial time series forecasting. Omega, 29, 309–317.
Tay, F. E. H., & Cao, L. J. (2003). Support vector machine with adaptive parameters in financial time series forecasting. IEEE Transactions on Neural Networks, 14, 1506–1518.
Thissen, U., van Brakel, R., de Weijer, A. P., Melssen, W. J., & Buydens, L. M. C. (2003). Using support vector machines for time series prediction. Chemometrics and Intelligent Laboratory Systems, 69, 35–49.
Vapnik, V. N. (1999). An overview of statistical learning theory. IEEE Transactions on Neural Networks, 10, 988–999.
Vapnik, V. N. (2000). The nature of statistical learning theory. New York: Springer.
Vellido, A., Lisboa, P. J. G., & Vaughan, J. (1999). Neural networks in business: A survey of applications (1992–1998). Expert Systems with Applications, 17, 51–70.
Wang, S., & Jiang, Z. (2004). Valve fault detection and diagnosis based on CMAC neural networks. Energy and Buildings, 36, 599–610.
Wen, C., Lin, T. C., Chang, K. C., & Huang, C. H. (2009). Classification of ECG complexes using self-organizing CMAC. Measurement, 42, 399–407.
Wong, Y. F., & Sideris, A. (1992). Learning convergence in the cerebellar model articulation controller. IEEE Transactions on Neural Networks, 3, 115–121.
Wood, S. (2002). Float analysis: Powerful technical indicators using price and volume. New York: John Wiley and Sons.
Wu, J. Y. (2011). MIMO CMAC neural network classifier for solving classification problems. Applied Soft Computing, 11, 2326–2333.
Yaser, S. A. M., & Atiya, A. F. (1996). Introduction to financial forecasting. Applied Intelligence, 6, 205–213.
Zhang, G., Patuwo, B. E., & Hu, M. Y. (1998). Forecasting with artificial neural networks: The state of the art. International Journal of Forecasting, 14, 35–62.
Zhou, H., Chen, J., Wu, H., & Ho, S. L. (2003). CMAC-based short-term electricity price forecasting. In Sixth international conference on advances in power system control, operation and management, Hong Kong (Vol. 1, pp. 348–353).
Zobrist, A. L. (1969). A new hashing method with application for game playing. Madison, Wisconsin, USA: Computer Sciences Department, University of Wisconsin.