

Journal of Multinational Financial Management 7 (1997) 345-356

Comparing the forecasting performance of neural networks and forward exchange rates

Mona R. El Shazly a,*, Hassan E. El Shazly b

a Department of Business and Economics, Columbia College, Columbia, SC 29203, USA
b Department of Management Science, University of South Carolina, Columbia, SC 29208, USA

Abstract

Neural networks are model-free nonparametric estimators which, when applied to forecasting currency prices, outperform forward rates of exchange. In this paper, the one-month forecasting performance of a neural network model is tested and compared with that of the forward rate for three currencies: the British pound, the German mark and the Japanese yen. Two criteria are applied to evaluate performance: accuracy and the ability to correctly predict the direction of the exchange rate movement. The neural network results for the three currencies tested outperformed the forward rate both in terms of accuracy and correctness. © 1997 Elsevier Science B.V.

Keywords: Foreign exchange forecasting; Neural networks

JEL classification: F31; F47; G15

1. Introduction

The extreme volatility of the foreign exchange market has served as a catalyst in the ongoing effort to improve the accuracy of currency forecasts. The significance of this endeavor could be measured by the amount of research aimed at improving the predictive power of forecasts. To do so, forecasting models attempt to identify the patterns and forces which influence future values and which may be characterized by trends, cycles, or nonstationarity.

The key element in improving the forecasting performance is the acquisition of superior information. The source of such an advantage could stem either from ‘inside’ information, or from processing available information more efficiently. In both cases, the superior information edge is hard to hold on to, for as soon as transactions yielding excess returns are exercised, the market renders the information public, thereby eroding its value.

* Corresponding author. Tel.: +803 786 3676; fax: +803 786 3804; e-mail: [email protected]

1042-444X/97/$17.00 © 1997 Elsevier Science B.V. All rights reserved. PII S1042-444X(97)00018-2


This paper relies on more efficient processing of information to improve forecasts using neural networks and applies this technique to forecast the one-month future spot rate of exchange for the British pound, the German mark and the Japanese yen. The model is designed using the Brainmaker Software Package, produced by California Scientific Software. The network's forecasting performance is evaluated by comparing its mean absolute forecast errors (MAFE) to those of the forward rate. Predictions of future spot rates of exchange are also assessed by calculating the percentage of times the forecast is on the 'correct side' of the realized future value of the spot rate. While MAFE is a test of accuracy, the correctness criterion tests the model's ability to predict the direction of the movement. The significance of the second test relates to the rewards which market participants can realize from hedging or speculation.

The organization of this paper is as follows. Section 2 presents an overview of the literature on the subject. Section 3 describes the methodology and design of the neural network model. Section 4 reports the results obtained and their performance evaluation. Concluding remarks and directions for future work are presented in Section 5.

2. Literature overview

Whereas most traditional statistical forecasting methods have relied on linear models (see Chatfield and Collins, 1986), their drawbacks have led to increased interest in nonlinear modeling. Two of the forecasting techniques which allow for the detection and modeling of nonlinear data are 'rule induction' and 'neural networks' (Nabney et al., 1996).

Rule induction identifies patterns in the data and expresses them as rules. Nabney and Jenkins (1992), Quinlan (1986) and Race (1988) applied the rule induction technique to identify patterns and relationships in data. The effectiveness of this method, however, depends on the quality of the attributes used in classification, and the method suffers from a number of drawbacks. First, the algorithm upon which rule induction is based produces a decision tree, which is difficult to interpret. Second, it is aimed at analyzing small data sets and, third, it fails to extract all the knowledge from the data (Nabney et al., 1996).

The second technique, neural networks, has gained popularity following the realization of its powerful pattern recognition capabilities. Whereas standard econometric models rely on data which is processed simultaneously, neural networks may be trained through a system of weight correction until perfect mapping is achieved (Bansal et al., 1993). Applications of neural networks include function estimation and classification, data compression, feature extraction and statistical clustering. Forecasting, an example of function estimation, is the focus of this paper.

Financial applications have been a prime area for implementing artificial intelligence techniques. Kaastra and Boyd (1995) designed expert systems to forecast monthly futures trading volume on the Winnipeg Commodity Exchange. Their networks were able to predict up to 9 months ahead and outperform naive models


for four out of the six commodities tested. Other forecasting applications include the work of Grudnitski and Osburn (1993), in which neural networks were used to predict S&P and gold futures prices, and Haefke and Helmenstein (1996), whose network was designed to forecast Austrian IPOs.

Fueled by the drive to maintain a competitive edge, Coats and Fant (1993) developed a neural network system capable of identifying financial distress patterns. Brockett et al. (1994) reported that their neural network model was able to provide early warning signals for predicting insurer insolvency.

Neural networks, as a technique, may be considered a "multivariate nonlinear nonparametric inference technique that is data driven and model free" (Azoff, 1994). By 'multivariate' is meant that inputs comprise many variables whose interdependence and causal influences are used in predicting future values. The lack of assumptions regarding the relationship between the input variables and the forecast future values, or output, renders it nonparametric and model-free. The network is trained by adaptation; it is free from model constraints. The relationships constructed in the network are identified from within the input data. The nonlinearity characteristic allows it to process complicated relationships between the input data and the target output.

As an exercise, the viability of forecasting rests on acknowledging the existence of market inefficiencies. According to the efficient market hypothesis (EMH), in its weak form, investors cannot systematically earn excess returns by developing trading rules based on historic data. The existence of varying degrees of inefficiency may be traced by the amount of information captured in market prices (Azoff, 1994).

Results which point to weaknesses in the EMH assumption have been reported by Ding et al. (1993) and Peters (1994), whose work indicates that financial markets display signs of predictability. Brock et al. (1992) found evidence supporting nonlinearities in market prices. Their work has shown that the use of technical analysis indicators based on moving averages and trading range breaks may generate, under certain assumptions, profitable trading rules.

Wong et al. designed an intelligent system for stock selection by integrating fuzzy logic with neural networks. The system was trained by processing a data set of 800 stocks, which included a broad range of variables. Their input set consisted of both expert rules and fundamental data (Wong et al., 1992). In a similar application, Swales and Yoon (1992) developed a network capable of distinguishing between well-performing and poorly performing stocks. Their inputs included information on total returns and market valuation given by the Fortune 500 and Business Week's top 1000.

Studies of nonlinear dynamic models of option prices include the work of Savit (1989, 1990), in which chaos analysis led to improved forecasts. The basic feature of chaos is that simple deterministic systems can generate what appears to be random behavior (Grabbe, 1996). Nonlinear dynamic models of exchange rates have been classified by Scheinkman (1990) and tested by Tata and Vassilicos (1991) and De Grauwe and Dewachter (1990).

Azoff (1994) reviewed the search for chaos in time series data. Early applications for forecasting chaotic time series using neural networks have been performed by


Lapedes and Farber (1987). Weigend et al. (1991) and Refenes et al. (1993) applied neural network models to forecast foreign exchange prices. Weigend's model performance was tested in terms of accuracy, giving support to nonrandom behavior. This work extends Weigend's research by adding a test of correctness to the model's performance and compares the results to those of the forward rate, thereby providing added support to the forecasting ability of neural networks in the foreign exchange market.

3. Methodology and design of the neural network model

The neural network architecture design used in this paper is that of 'supervised learning'. Under this class of learning, the network's output target is known during training. The difference between the desired target and the actual output, which is the error, is fed back to the network to improve its performance, and hence the name backpropagation is derived. The derivation of the backpropagation algorithm is detailed in Azoff (1994).

3.1. Network architecture

The model's design is that of the multi-layer perceptron (MLP), which is the most commonly used. The model consists of three layers: an input layer, a hidden layer and an output layer. Fig. 1 shows the basic structure of a neural network architecture. The input layer is passive; it receives data which it then transmits to the network.

Fig. 1. Neural network architecture.


The hidden or intermediate layer does not connect to the outside world, but does connect to other layers of neurons. The output layer is what produces the network's results.

The input layer in the model contains four variables: the 1 month Eurorate on US dollar deposits, the 1 month Eurorate on the foreign currency deposit, the spot rate of exchange and the 1 month forward premium on the foreign currency. The data set consists of weekly observations covering the period 8 January 1988-8 April 1994 and is obtained from the Harris Bank data set. The network consists of one hidden layer with ten neurons. The output layer is the 1 month forecast of the future spot rate of exchange. The same model specification is run to train and forecast values for the British pound, German mark and the Japanese yen.
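To make the data flow concrete, here is a minimal sketch of how such input facts might be assembled. The series names and the four-weeks-per-month alignment are our illustrative assumptions, not details taken from the Harris Bank data set:

```python
import numpy as np

HORIZON = 4  # roughly 1 month of weekly observations -- an alignment assumption

def build_facts(usd_eurorate, fx_eurorate, spot, fwd_premium):
    """Stack the paper's four inputs for each week; the target is the spot
    rate HORIZON weeks ahead. The argument names are placeholders, not the
    Harris Bank field names."""
    X = np.column_stack([usd_eurorate, fx_eurorate, spot, fwd_premium])
    return X[:-HORIZON], spot[HORIZON:]   # inputs paired with 1-month-ahead spot
```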

Fig. 2 traces the network design. During training, the input layer transmits input facts to the hidden nodes, which, through a transfer function, calculate a weighted sum of inputs (panel 1). The hidden nodes broadcast the results to the output node (panel 2). The output node then calculates a weighted sum and passes it through the same transfer function to generate actual results. These results, or outputs, are then subtracted from the desired results, or pattern, to find the output error (panel 3). The errors are then propagated back to the hidden layer, where the weighted sum of error derivatives is computed to determine each node's contribution to the output error (panel 4). Weights continue to be adjusted according to a prespecified rule, such as minimizing the model's sum of squared errors. The learning process continues until the desired accuracy level is achieved. The network then saves the weights determined during training and uses them for testing (Hammerstrom, 1993).
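Brainmaker performs these steps internally. Purely as an illustration, a bare-bones numpy re-creation of one backpropagation pass for the paper's 4-10-1 sigmoid network could look like the following; the weight initialization and update bookkeeping are our assumptions, while the learning rate of 1.0 is the value the paper reports using:

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(0.0, 0.1, (4, 10)), np.zeros(10)   # input -> 10 hidden neurons
W2, b2 = rng.normal(0.0, 0.1, (10, 1)), np.zeros(1)    # hidden -> 1 output neuron

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_step(x, target, lr=1.0):        # learning rate of 1.0, as in the paper
    global W1, b1, W2, b2
    h = sigmoid(x @ W1 + b1)               # panel 1: hidden weighted sums + transfer fn
    y = sigmoid(h @ W2 + b2)               # panel 2: output through the same transfer fn
    err = y - target                       # panel 3: actual output minus desired pattern
    dy = err * y * (1.0 - y)               # error derivative at the output node
    dh = (dy @ W2.T) * h * (1.0 - h)       # panel 4: error propagated to the hidden layer
    W2 -= lr * np.outer(h, dy); b2 -= lr * dy   # gradient steps that reduce
    W1 -= lr * np.outer(x, dh); b1 -= lr * dh   # the sum of squared errors
    return float((err ** 2).sum())
```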

The total number of facts fed to the network was 321, divided into a training set of 289 facts and a testing set of 32 facts. The Brainmaker software randomly selects 10% of the facts from the data set and uses them for testing. Using a train/test approach, the training process is periodically interrupted and the network tested. If the current network tested is better, it is saved as the best network. Training stops when measured system performance is repeatedly worse than the current best performance.
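This train/test procedure can be sketched in the same spirit, continuing the previous fragment (`train_step` and the weights come from that sketch); the `patience` stopping rule is an assumption, since the paper does not publish Brainmaker's exact criterion:

```python
import copy
import numpy as np

def predict(x):
    # Forward pass using the weights from the previous sketch.
    return sigmoid(sigmoid(x @ W1 + b1) @ W2 + b2)

def train_with_testing(X_tr, y_tr, X_te, y_te, epochs=500, patience=20):
    """Interrupt training periodically, test, and keep the best network seen."""
    best_err, best_net, misses = float("inf"), None, 0
    for _ in range(epochs):
        for x, t in zip(X_tr, y_tr):
            train_step(x, t)               # one backpropagation pass per fact
        err = np.mean([(predict(x) - t) ** 2 for x, t in zip(X_te, y_te)])
        if err < best_err:                 # current network is better: save it
            best_err, best_net, misses = err, copy.deepcopy([W1, b1, W2, b2]), 0
        else:
            misses += 1                    # repeatedly worse than the current best
            if misses >= patience:
                break
    return best_net, best_err
```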

The selection of the number of hidden neurons was a result of experimentation. This is the ‘art’ of the design, which aims at identifying the optimal architecture, which yields the best performance when outcomes are tested. In general, as a starting point the following guidelines have been suggested:

Number of hidden neurons = training facts x error tolerance
Number of hidden neurons = (number of inputs + number of outputs)/2
Number of hidden neurons = 5-10% of training facts
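Plugging this paper's own figures into the guidelines (289 training facts, a 10% final training tolerance, four inputs and one output) brackets the choice actually made:

```python
training_facts, tolerance = 289, 0.10   # figures reported in this paper
n_inputs, n_outputs = 4, 1

rule1 = training_facts * tolerance                        # ~28.9 hidden neurons
rule2 = (n_inputs + n_outputs) / 2                        # 2.5 hidden neurons
rule3 = (0.05 * training_facts, 0.10 * training_facts)    # ~14.5 to ~28.9
print(rule1, rule2, rule3)
```

The ten hidden neurons ultimately used lie between the second guideline and the range implied by the first and third.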

Owens and Mocella (1991) and Plutowski and White (1990) warn against using too few or too large a number of hidden neurons. If the number of hidden neurons is too large, the network will memorize the facts and produce poor forecasting results. If, however, the neurons are too few, the network will not be able to learn enough of the facts, increasing the number of iterations and again yielding weak results.


Fig. 2. Neural network backpropagation (panels 1-4).


Table 1
Realized, predicted and forward rates for the British pound

Date       Realized   Predicted   Forward

02/12/88   1.749      1.7654      1.7813
04/22/88   1.8845     1.8575      1.8852
07/01/88   1.7055     1.7126      1.819
09/09/88   1.7055     1.7154      1.6965
11/11/88   1.843      1.8295      1.759
01/27/89   1.747      1.7788      1.7755
04/07/89   1.699      1.6837      1.7121
06/16/89   1.565      1.5626      1.6005
08/25/89   1.5655     1.5702      1.6143
11/03/89   1.5718     1.583       1.5524
01/12/90   1.6425     1.6848      1.6133
03/23/90   1.6465     1.6219      1.6458
06/01/90   1.682      1.6975      1.6712
08/10/90   1.9135     1.8991      1.8052
10/19/90   1.9638     1.9384      1.8617
12/28/90   1.917      1.9312      1.9404
03/22/91   1.737      1.7886      1.901
05/31/91   1.711      1.7168      1.7094
08/16/91   1.6755     1.6707      1.6566
10/25/91   1.74       1.7129      1.7348
01/03/92   1.8045     1.8701      1.8008
03/13/92   1.7055     1.6836      1.7518
05/22/92   1.8315     1.8556      1.7692
08/07/92   1.923      1.9282      1.8965
10/16/92   1.6145     1.6574      1.7005
12/24/92   1.5365     1.5268      1.5559
03/12/93   1.491      1.4725      1.4511
05/23/93   1.5605     1.5271      1.57
07/30/93   1.49       1.4872      1.4759
10/15/93   1.4795     1.5018      1.53
12/24/93   1.479      1.4959      1.4913
04/04/94   1.5009     1.4969      1.46

The selection of the number of neurons in the model design was based on the configuration that optimized performance.

The tolerance level used in training was set at 10%, while that for testing was 20%. The initial tolerance levels were much looser: 20% for training and 40% for testing. These were then gradually tightened after the network had gotten all the facts correct during training.
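The tolerance test determines when a fact counts as 'correct'. The paper does not spell out Brainmaker's exact formula, so the sketch below assumes the tolerance is taken relative to the target value, which is one plausible reading:

```python
def fact_is_correct(output, target, tolerance):
    # One reading of the tolerance test: output within tolerance * |target|.
    return abs(output - target) <= tolerance * abs(target)

# Tightening schedule described above: (initial, final) tolerance levels.
TRAIN_TOL, TEST_TOL = (0.20, 0.10), (0.40, 0.20)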

The neuron transfer function applied is a sigmoid function, which exhibits desirable properties, such as being nonlinear and continuously differentiable, and which seems to work well with the backpropagation algorithm. The Gaussian transfer function was also tried; while it resulted in a much faster training process, the network's results were less accurate when tested. The learning rate specified was 1.0.
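For reference, the two transfer functions compared; the Gaussian form shown is one common choice, not necessarily Brainmaker's exact implementation:

```python
import numpy as np

def sigmoid(z):
    # Nonlinear and continuously differentiable: d/dz sigmoid = s * (1 - s).
    return 1.0 / (1.0 + np.exp(-z))

def gaussian(z):
    # Faster to train in the authors' runs, but less accurate when tested.
    return np.exp(-z ** 2)
```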


Table 2
Realized, predicted and forward rates for the German mark

Date       Realized   Predicted   Forward

02/12/88   0.5858     0.5712      0.5994
04/22/88   0.5994     0.5863      0.6057
07/01/88   0.5461     0.5406      0.5844
09/09/88   0.5333     0.5356      0.5292
11/11/88   0.5824     0.562       0.561
01/27/89   0.5329     0.5329      0.5543
04/07/89   0.5339     0.5323      0.5355
06/16/89   0.5152     0.514       0.5099
08/25/89   0.507      0.5212      0.5289
11/03/89   0.5385     0.5421      0.5262
01/12/90   0.5846     0.5947      0.5855
03/23/90   0.5894     0.585       0.5814
06/01/90   0.5903     0.5914      0.6133
08/10/90   0.6445     0.6216      0.6092
10/19/90   0.659      0.6536      0.6388
12/28/90   0.6611     0.6585      0.6753
03/22/91   0.5858     0.6051      0.6505
05/31/91   0.5576     0.5749      0.5772
08/16/91   0.571      0.5636      0.5749
10/25/91   0.6019     0.5851      0.5954
01/03/92   0.6361     0.6495      0.649
03/13/92   0.5965     0.5986      0.6038
05/22/92   0.6234     0.6197      0.6042
08/07/92   0.6798     0.6797      0.6605
10/16/92   0.681      0.6823      0.6641
12/24/92   0.6255     0.6376      0.6202
03/12/93   0.602      0.5985      0.6013
05/23/93   0.6158     0.6387      0.6286
07/30/93   0.5734     0.5718      0.5867
10/15/93   0.6199     0.6176      0.6217
12/24/93   0.5882     0.5748      0.58'7
04/04/94   0.583      0.5721      0.5748

Before feeding the raw data into the network in the form of inputs, it must be preprocessed. Preprocessing requires the normalization and the transformation of the data (Mendelsohn, 1995). Normalization of the raw data restricts the range of values from 0 to 1. The scaling of the data keeps the weights within a prescribed range of acceptable values and is automatically executed by the Brainmaker software package. Data transformation is achieved by computing differences between input values rather than by feeding raw data directly. This is done in order to reduce the noise component, which may obscure relationships underlying the variables, and to speed the network's training time.
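A minimal sketch of the two preprocessing steps (Brainmaker applies the scaling automatically; the differencing shown is ordinary first differencing, which is our reading of the transformation described):

```python
import numpy as np

def normalize(series):
    """Min-max scaling of a series to the [0, 1] range."""
    lo, hi = series.min(), series.max()
    return (series - lo) / (hi - lo)

def difference(series):
    """First differences: changes between input values instead of raw levels."""
    return np.diff(series)
```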

This suggestion was applied to initial runs and then discarded. It was found that although the differenced data set did indeed speed the training time by reducing the noise during training, the network yielded poor forecasts when tested. Since the objective was to improve testing performance rather than to speed training, the decision was made to use raw data during training.

4. Network results

Results of the network's predictions of the one-month future spot rate of exchange are reported in Tables 1-3. The first column in Table 1 shows the test sample dates, which were randomly selected from the data set. Column two shows the actual realized spot rates of exchange, which are compared with those predicted by the network and those forecast by the forward rate. All rates are expressed as US dollars per unit of foreign currency (values for the yen were multiplied by 100).

Table 3
Realized, predicted and forward rates for the Japanese yen

Date       Realized   Predicted   Forward

02/12/88   0.7692     0.7582      0.7848
04/22/88   0.8001     0.8044      0.8075
07/01/88   0.7534     0.7445      0.8037
09/09/88   0.7456     0.75        0.7545
11/11/88   0.8261     0.8104      0.7954
01/27/89   0.7743     0.7769      0.7946
04/07/89   0.7559     0.7559      0.7167
06/16/89   0.7189     0.696       0.7159
08/25/89   0.6885     0.701       0.7217
11/03/89   0.6994     0.7002      0.6957
01/12/90   0.6859     0.6934      0.7004
03/23/90   0.6333     0.6723      0.6689
06/01/90   0.6521     0.6763      0.6569
08/10/90   0.6777     0.6771      0.6717
10/19/90   0.7806     0.7881      0.7242
12/28/90   0.7479     0.7381      0.7624
03/22/91   0.7077     0.7135      0.7424
05/31/91   0.7117     0.7095      0.7206
08/16/91   0.7305     0.7164      0.7304
10/25/91   0.768      0.7508      0.7688
01/03/92   0.7896     0.8007      0.7746
03/13/92   0.744      0.736       0.7772
05/22/92   0.7834     0.7672      0.7962
08/07/92   0.7825     0.7852      0.7954
10/16/92   0.8354     0.8364      0.8026
12/24/92   0.8085     0.825       0.8304
03/12/93   0.8491     0.8784      0.8304
05/23/93   0.9083     0.9166      0.9046
07/30/93   0.9583     0.9358      0.9756
10/15/93   0.9337     0.9411      0.9406
12/24/93   0.9015     0.9246      0.9208
04/04/94   0.9518     0.9401      0.925


Table 4
Accuracy results

Currency   TAFE_p   TAFE_f   MAFE_p   MAFE_f

BP         0.3698   0.7365   1.16%    2.30%
DM         0.3656   0.7661   1.14%    2.39%
JY         0.4821   1.3943   1.51%    4.36%

TAFE_p = total absolute forecast error of the prediction as a percent of the realized value. TAFE_f = total absolute forecast error of the forward rate as a percent of the realized value. MAFE_p = mean absolute forecast error of the prediction. MAFE_f = mean absolute forecast error of the forward rate. With P_t the predicted rate, F_t the forward rate and R_t the realized spot rate on each of the 32 test dates:

$$\mathrm{TAFE}_p = \sum_{t=1}^{32} \frac{|P_t - R_t|}{R_t}, \qquad \mathrm{TAFE}_f = \sum_{t=1}^{32} \frac{|F_t - R_t|}{R_t},$$

$$\mathrm{MAFE}_p = \frac{\mathrm{TAFE}_p}{32}, \qquad \mathrm{MAFE}_f = \frac{\mathrm{TAFE}_f}{32}.$$
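Under these definitions, both evaluation criteria are straightforward to compute. In the sketch below, the direction ('correct side') test compares the move implied by each forecast with the realized move from the spot rate at forecast time; this is our reading of the criterion, since the paper does not spell out the computation:

```python
import numpy as np

def evaluate(realized, predicted, forward, spot_at_forecast):
    """Accuracy (TAFE/MAFE) and correctness for one currency's test dates."""
    n = len(realized)
    tafe_p = np.sum(np.abs(predicted - realized) / realized)
    tafe_f = np.sum(np.abs(forward - realized) / realized)
    correct_p = np.mean(np.sign(predicted - spot_at_forecast)
                        == np.sign(realized - spot_at_forecast))
    correct_f = np.mean(np.sign(forward - spot_at_forecast)
                        == np.sign(realized - spot_at_forecast))
    return {"MAFE_p": tafe_p / n, "MAFE_f": tafe_f / n,
            "correct_p": correct_p, "correct_f": correct_f}
```

Applied to the 32 test rows of Tables 1-3, this should reproduce the MAFE figures in Table 4; the correctness figures additionally depend on the spot rates at forecast time, which the tables do not list.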

Table 5
Correctness results

Currency   Predicted   Forward

BP         62.5%       62.5%
DM         62.5%       46.87%
JY         53.13%      48.87%


The network's evaluation results are reported in Tables 4 and 5 and compared with those of the forward rate. The accuracy results in Table 4 show that the network's total absolute forecast error and mean absolute forecast error are significantly smaller than those of the forward rate for all three currencies tested.

In terms of correctly forecasting the direction of the change in the exchange rate, Table 5 shows that the model outperforms the forward rate in the case of the German mark and the Japanese yen and does as well as the forward rate for the British pound. Although the correctness results provide added support for the model's predictive powers, the network's improvement over the forward rate is not as pronounced as in the accuracy results.

5. Concluding remarks

This paper tested the forecasting ability of neural networks in the foreign exchange market and compared their performance to that of the forward rate. The results reported for the three currencies tested confirm the credibility and potential of this technique. Although the design process remains somewhat of an art, neural networks, when applied to time series predictions, develop forecasts from the data using 20-20 hindsight. This, coupled with their ability to find relationships between inputs and outputs even when the patterns are ill defined, gives them a definite edge over standard econometric techniques.

Despite the advantage of increased efficiency in information processing derived from the exploitation of computing power, neural network models suffer from a number of limitations. The lack of explanatory capability, the difficulty of including structured knowledge and a bias towards quantitative data are the main drawbacks of such systems (Deboeck, 1994). Fishman et al. (1991) noted that:

Neural nets are truly black boxes. Once you have trained a neural net and are generating predictions, you still do not know why the decisions are being made and can’t find out by just looking at the net. It is not unlike attempting to capture the structure of knowledge by dissecting a human brain.

As an extension of this work, future research will focus on testing the model’s forecasting performance using different time horizons. The model’s design and parameters will be altered in an effort to further improve its performance.

References

Azoff, E.M., 1994. Neural Network Time Series Forecasting of Financial Markets. Wiley, New York.
Bansal, A., Kaufman, R., Weitz, R., 1993. Comparing the modeling performance of regression and neural networks as data quality varies: a business approach. J. Mgmt Inf. Syst. 10 (1), 110-132.
Brock, W.A., Lakonishok, J., LeBaron, B., 1992. Simple technical trading rules and the stochastic properties of stock returns. J. Finance 47 (5), 1731-1764.
Brockett, P.L., Cooper, W.W., Golden, L.L., Pitaktong, U., 1994. A neural network method for obtaining early warning of insurer insolvency. J. Risk Insurance 61 (3), 402-424.
Chatfield, C., Collins, A.J., 1986. Introduction to Multivariate Analysis. Chapman and Hall, London.
Coats, P.K., Fant, L.F., 1993. Recognizing financial distress patterns using a neural network tool. Financial Mgmt 22 (3), 142-155.
Deboeck, G.J. (Ed.), 1994. Trading on the Edge: Neural, Genetic and Fuzzy Systems for Chaotic Financial Markets. John Wiley & Sons Inc., New York.
De Grauwe, P., Dewachter, H., 1990. A chaotic monetary model of exchange rates. CEPR Discussion Paper, no. 46.
Ding, Z., Granger, C.W.J., Engle, R.F., 1993. A long memory property of stock market returns and a new model. J. Empirical Finance 1 (1), 83-106.
Fishman, M.B., Barr, D., Dean, S., Walter, W.J., 1991. Using neural nets in market analysis. Tech. Anal. Stocks Commodities 9 (4), 18.
Grabbe, O.J., 1996. International Financial Markets, 3rd ed. Prentice Hall, Englewood Cliffs, New Jersey, p. 382.
Grudnitski, G., Osburn, L., 1993. Forecasting S&P and gold futures prices: an application of neural networks. J. Futures Markets 13 (6), 631-643.
Haefke, C., Helmenstein, C., 1996. Forecasting Austrian IPOs: an application of linear and neural network error-correction models. J. Forecasting 15 (3), 237-251.
Hammerstrom, D., 1993. Neural networks at work. IEEE Spectrum, June, 26-32.
Kaastra, I., Boyd, M.S., 1995. Forecasting futures trading volume using neural networks. J. Futures Markets 15 (8), 953-970.
Lapedes, A., Farber, R., 1987. Nonlinear Signal Processing using Neural Network Prediction and System Modeling. Theoretical Division, Los Alamos National Laboratory, NM, Report no. LA-UR-87-2662.
Mendelsohn, L., 1995. Global trading utilizing neural networks: a synergistic approach. In: Freedman, R.S., Klein, R.A., Lederman, J. (Eds.), Artificial Intelligence in the Capital Markets. Probus, pp. 79-135.
Nabney, I., Dunis, C., Sullaway, R., Leong, S., Redshaw, W., 1996. Leading edge forecasting techniques for exchange rate prediction. In: Forecasting Financial Markets. Wiley, London.
Nabney, I.T., Jenkins, P.G., 1992. Rule induction in finance and marketing. IBC Conference on Data Mining in Finance and Marketing, September.
Owens, A.J., Mocella, M.T., 1991. An experimental design advisor and neural network analysis package. In: Proceedings of the IWANN.
Peters, E.E., 1994. Fractal Market Analysis: Applying Chaos Theory to Investment and Economics. John Wiley and Sons Inc., New York.
Plutowski, M., White, H., 1990. Active selection of training examples for network learning in noiseless environments. Technical Report CS 91-180, University of California, San Diego.
Quinlan, J.R., 1986. Induction of decision trees. Mach. Learning 1, 81-106.
Race, P.R., 1988. Rule induction in investment appraisal. J. Ops Res. Soc. 12, 1113-1131.
Refenes, A.N. et al., 1993. Currency exchange rate prediction and neural network design strategies. Neural Comput. Applic. 1, 46-58.
Savit, R., 1989. Nonlinearities and chaotic effects in options prices. J. Futures Markets 9, 507-518.
Savit, R., 1990. Chaos on the trading floor. New Scientist, 11 August, p. 48.
Scheinkman, J.A., 1990. Nonlinearities in economic dynamics. Economic J. 334.
Swales Jr., G.S., Yoon, Y., 1992. Applying artificial neural networks to investment analysis. Financial Anal. J. 48 (5), 78-80.
Tata, F., Vassilicos, C., 1991. Is there chaos in economic series? A study of the stock and foreign exchange markets. LSE Financial Markets Group Discussion Paper Series, no. 120.
Weigend, A.S., Huberman, B.A., Rumelhart, D.E., 1991. Generalisation by weight-elimination with application to forecasting. In: Lippman, R.P., Moody, J.E., Touretzky, D.S. (Eds.), Advances in Neural Information Processing Systems, Vol. 3. Morgan Kaufmann, San Mateo, CA, pp. 875-882.
Wong, F.S., Wang, P.Z., Goh, T.H., Quek, B.K., 1992. Fuzzy neural systems for stock selection. Financial Anal. J. 48 (1), 47-52.