Groundwater level forecasting using artificial neural networks
Ioannis N. Daliakopoulosa, Paulin Coulibalya, Ioannis K. Tsanisb,*
a Department of Civil Engineering, McMaster University, Hamilton, Ont., Canada
b Department of Environmental Engineering, Technical University of Crete, Polytechnioupolis, Chania 73100, Greece
Received 17 June 2004; revised 1 November 2004; accepted 8 December 2004
Abstract
A proper design of the architecture of Artificial Neural Network (ANN) models can provide a robust tool in water resources
modeling and forecasting. The performance of different neural networks in groundwater level forecasting is examined in order
to identify an optimal ANN architecture that can simulate the decreasing trend of the groundwater level and provide acceptable
predictions up to 18 months ahead. Messara Valley in Crete (Greece) was chosen as the study area because its groundwater resources
have been overexploited during the last fifteen years and the groundwater level has been decreasing steadily. Seven different
combinations of network architecture and training algorithm are investigated and compared in terms of model prediction efficiency
and accuracy. The results show that accurate predictions for up to 18 months ahead can be achieved, with a standard feedforward
neural network trained with the Levenberg–Marquardt algorithm providing the best performance.
© 2004 Published by Elsevier B.V.
Keywords: Artificial neural networks; Groundwater level forecasting; Non-linear modeling; Messara valley; Aquifer overexploitation
1. Introduction
Although conceptual and physically-based models
are the main tool for depicting hydrological variables
and understanding the physical processes taking place
in a system, they do have practical limitations. When
data are insufficient and obtaining accurate predictions
is more important than capturing the actual physics,
empirical models remain a good alternative
and can provide useful results without costly
calibration effort. ANN models are such ‘black box’
models with particular properties which are greatly
0022-1694/$ - see front matter © 2004 Published by Elsevier B.V.
doi:10.1016/j.jhydrol.2004.12.001
* Corresponding author.
E-mail address: [email protected] (I.K. Tsanis).
suited to dynamic nonlinear system modeling. The
advantages of ANN models over conventional
simulation methods have been discussed in detail by
French et al. (1992). ANN applications in hydrology
vary from real-time to event-based modeling. They
have been used for rainfall–runoff modeling,
precipitation forecasting and water quality modeling
(Govindaraju and Ramachandra Rao, 2000). One of
the most important features of ANN models is their
ability to adapt to recurrent changes and detect
patterns in a complex natural system. More concepts
and applications of ANN models in hydrology have
been discussed by Govindaraju and Ramachandra Rao
(2000) and by the ASCE Task Committee on
Application of Artificial Neural Networks in
Journal of Hydrology 309 (2005) 229–240
www.elsevier.com/locate/jhydrol
Hydrology (2000). Neural networks have also been
previously applied with success in groundwater level
prediction (Coulibaly et al., 2001a,b,c). In this paper,
several different neural networks are evaluated in
order to reach conclusions regarding the efficiency of
this forecasting technique in groundwater level
prediction.
Fig. 2. Typical feedforward neural network.
2. Methodology
2.1. Training with different ANN architectures
Neural networks are massively parallel processors
composed of simple artificial neurons. Fig. 1 shows a
typical single neuron with a sigmoid activation
function, three input synapses and one output synapse.
Synapses represent the structure where weight values
are stored. In this paper, three different neural
networks are used in order to identify the one
that gives the best results in predicting mean
monthly groundwater level values. They are described
below.
2.1.1. Feedforward neural network (FNN)
Feedforward neural networks have been applied
successfully in many different problems since the
advent of the error backpropagation learning
algorithm. This network architecture and the corresponding
learning algorithm can be viewed as a generalization
of the popular least-mean-square (LMS) algorithm
(Haykin, 1999).

Fig. 1. Typical artificial neuron.
A multilayer perceptron network consists of an
input layer, one or more hidden layers of computation
nodes, and an output layer. Fig. 2 shows a typical
feedforward network with one hidden layer consisting
of three nodes, four input neurons and one output.
The input signal propagates through the network in a
forward direction, layer by layer. Their main advan-
tage is that they are easy to handle, and can
approximate any input/output map, as established by
Hornik et al. (1989). The key disadvantages are that
they train slowly, and require lots of training data
(typically three times more training samples than
network weights).
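The feedforward architecture and backpropagation training described above can be sketched as follows. This is a minimal illustration in Python with NumPy (standing in for the Matlab implementation used in the paper); the layer sizes, learning rate and toy target are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class FNN:
    """One-hidden-layer feedforward net: sigmoid hidden units, linear output."""
    def __init__(self, n_in, n_hidden, n_out):
        # Small random initial weights; biases start at zero.
        self.W1 = rng.normal(0, 0.5, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0, 0.5, (n_hidden, n_out))
        self.b2 = np.zeros(n_out)

    def forward(self, X):
        self.h = sigmoid(X @ self.W1 + self.b1)   # hidden activations
        return self.h @ self.W2 + self.b2          # linear output layer

    def train_step(self, X, y, lr=0.1):
        # One step of error backpropagation on the mean squared error.
        y_hat = self.forward(X)
        err = y_hat - y                            # output-layer error
        dW2 = self.h.T @ err / len(X)
        dh = err @ self.W2.T * self.h * (1 - self.h)  # sigmoid derivative
        dW1 = X.T @ dh / len(X)
        self.W2 -= lr * dW2; self.b2 -= lr * err.mean(0)
        self.W1 -= lr * dW1; self.b1 -= lr * dh.mean(0)
        return float((err ** 2).mean())

# Toy usage: learn y = x1 + x2 on random data.
X = rng.uniform(-1, 1, (200, 2))
y = X.sum(axis=1, keepdims=True)
net = FNN(2, 3, 1)
losses = [net.train_step(X, y) for _ in range(2000)]
```

The training error should fall steadily over the 2000 batch updates, illustrating (in miniature) why plain backpropagation trains slowly compared with the second-order methods discussed below.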
2.1.2. Elman or recurrent neural network (RNN)
Fully recurrent networks, introduced by Elman
(1990), feed the outputs of the hidden layer back
into that same layer. Partially recurrent networks start with a fully
recurrent net and add a feedforward connection that
bypasses the recurrence, effectively treating the
recurrent part as a state memory. Fig. 3 shows
a typical recurrent network consisting of four input
nodes, a hidden layer with 3 nodes and one output. A
context layer is interconnected with the hidden layer
and plays the role of the network memory. These
recurrent networks can have an infinite memory depth
and thus find relationships through time as well as
through the instantaneous input space (Haykin, 1999).
Most real-world data contains information in its time
structure.

Fig. 3. Typical recurrent neural network.

Recurrent networks are the state of the art in
nonlinear time series prediction, system identification,
and temporal pattern classification (Zhang et al.,
1998).
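The context-layer mechanism described above can be sketched as a simple forward pass. The sizes follow Fig. 3 (four inputs, three hidden nodes, one output), but the weights here are random and untrained — this is an illustration of the recurrence only, not the trained model.

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_hid, n_out = 4, 3, 1
W_in = rng.normal(0, 0.5, (n_in, n_hid))    # input -> hidden
W_ctx = rng.normal(0, 0.5, (n_hid, n_hid))  # context (previous hidden) -> hidden
W_out = rng.normal(0, 0.5, (n_hid, n_out))  # hidden -> output

def run_sequence(xs):
    context = np.zeros(n_hid)               # the network "memory", initially empty
    outputs = []
    for x in xs:
        # Hidden layer sees both the current input and the previous hidden state.
        hidden = np.tanh(x @ W_in + context @ W_ctx)
        outputs.append(hidden @ W_out)
        context = hidden                    # copy hidden state into the context layer
    return np.array(outputs)

seq = rng.uniform(-1, 1, (12, n_in))        # e.g. twelve monthly input vectors
out = run_sequence(seq)
```

Because each output depends on the whole preceding sequence through the context layer, the network can, in principle, exploit relationships through time as well as in the instantaneous input space.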
2.1.3. Radial basis function network (RBF)
Radial basis function (RBF) networks are non-
linear hybrid networks typically containing a single
hidden layer of computation nodes. This layer uses
Gaussian transfer functions, rather than the standard
sigmoidal functions employed by an FNN.

Fig. 4. Typical radial basis function.

Fig. 4 shows a typical radial basis function network consisting of a hidden
layer of four nodes, four inputs and three outputs. The
centers and widths of the Gaussians are set by
unsupervised learning rules, and supervised learning
is applied to the output layer (Haykin, 1999). Radial
basis function networks tend to learn much faster than
a FNN.
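The hybrid RBF training scheme — centers set without supervision, output weights fitted with supervision — can be sketched as below. The center-selection step here is simply random sampling of training points and the output layer is fitted by least squares; both are simplifying assumptions for illustration, not the exact rules used in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def rbf_design(X, centers, width):
    # Gaussian activation of each hidden node for each sample.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

# Toy regression target.
X = rng.uniform(-1, 1, (100, 2))
y = np.sin(X[:, 0]) + X[:, 1] ** 2

# "Unsupervised" step: pick 10 centers from the data; width fixed by hand.
centers = X[rng.choice(len(X), 10, replace=False)]
width = 0.5

# Supervised step: linear least squares on the output weights only.
Phi = rbf_design(X, centers, width)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
y_hat = Phi @ w
```

Since only the linear output layer is fitted by supervised learning, training reduces to one least-squares solve — one reason RBF networks tend to train much faster than an FNN.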
2.2. Training with different algorithms
Three different training algorithms are used in order
to identify the one that trains a given network most
efficiently.
2.2.1. Gradient descent with momentum and adaptive
learning rate backpropagation (GDX)
This method uses backpropagation to calculate the
derivatives of the performance (cost) function with
respect to the weight and bias variables of the
network. Each variable is adjusted according to
gradient descent with momentum. At each step of
the optimization, if the error decreases, the
learning rate is increased. This is probably the
simplest and most common way to train a network
(Haykin, 1999).
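The GDX-style update rule can be sketched as follows. The adaptation factors (1.05 to grow the rate, 0.7 to shrink it) and the rejection of worsening steps are illustrative assumptions following common practice, not values taken from the paper.

```python
import numpy as np

def gdx_minimize(grad, cost, w, lr=0.1, momentum=0.9, steps=100):
    """Gradient descent with momentum and an adaptive learning rate."""
    velocity = np.zeros_like(w)
    prev_cost = cost(w)
    for _ in range(steps):
        velocity = momentum * velocity - lr * grad(w)
        w_new = w + velocity
        c = cost(w_new)
        if c < prev_cost:
            lr *= 1.05                  # error fell: speed up
            w, prev_cost = w_new, c
        else:
            lr *= 0.7                   # error rose: back off ...
            velocity[:] = 0.0           # ... and reset the momentum term
    return w

# Toy quadratic cost with minimum at (1, -2).
target = np.array([1.0, -2.0])
cost = lambda w: float(((w - target) ** 2).sum())
grad = lambda w: 2 * (w - target)
w_opt = gdx_minimize(grad, cost, np.zeros(2))
```

Because a step is only accepted when it lowers the cost, the method never gets worse; the momentum term helps it coast through shallow local features of the error surface.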
2.2.2. Levenberg–Marquardt (LM)
The Levenberg–Marquardt method is a modifi-
cation of the classic Newton algorithm for finding an
optimum solution to a minimization problem. It uses
an approximation to the Hessian matrix in the
following Newton-like weight update:

x_{k+1} = x_k − [J^T J + μI]^{−1} J^T e    (1)

where x is the vector of network weights, J the Jacobian
matrix of the performance criterion to be minimized, μ
a scalar that controls the learning process, and e the
residual error vector.
When the scalar μ is zero, Eq. (1) is just
Newton’s method, using the approximate Hessian
matrix. When μ is large, Eq. (1) becomes gradient
descent with a small step size. Newton’s method is
faster and more accurate near an error minimum, so
the aim is to shift towards Newton’s method as
quickly as possible.
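A single Levenberg–Marquardt step implementing Eq. (1), with the usual μ-adaptation (shrink on success, grow on failure), can be sketched on a toy nonlinear least-squares problem. The exponential model, starting point and μ schedule are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def lm_fit(residual, jacobian, x, mu=1e-2, steps=50):
    """Levenberg-Marquardt: x <- x - (J^T J + mu*I)^-1 J^T e, per Eq. (1)."""
    for _ in range(steps):
        e = residual(x)
        J = jacobian(x)
        step = np.linalg.solve(J.T @ J + mu * np.eye(len(x)), J.T @ e)
        x_new = x - step
        if (residual(x_new) ** 2).sum() < (e ** 2).sum():
            x, mu = x_new, mu * 0.5     # success: shift toward Newton's method
        else:
            mu *= 2.0                   # failure: shift toward gradient descent
    return x

# Toy problem: fit y = a * exp(b * t) to noiseless data with a = 2, b = -1.
t = np.linspace(0, 2, 20)
y = 2.0 * np.exp(-1.0 * t)
residual = lambda x: x[0] * np.exp(x[1] * t) - y
jacobian = lambda x: np.stack([np.exp(x[1] * t),
                               x[0] * t * np.exp(x[1] * t)], axis=1)
params = lm_fit(residual, jacobian, np.array([1.0, 0.0]))
```

Note that the step requires forming and solving with the full J^T J matrix, which illustrates why the method's memory cost grows quickly with the number of weights.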
Levenberg–Marquardt has large computational
and memory requirements and thus it can only be
used in small networks (Maier and Dandy, 1998).
Nevertheless, many researchers have been success-
fully using it (Anctil et al., 2004; Coulibaly et al.,
2000; Coulibaly et al., 2001a,b,c; Maier and Dandy,
1998; Maier and Dandy, 2000; Toth et al., 2000). The
Levenberg–Marquardt algorithm is often character-
ized as more stable and efficient. Also, both Coulibaly
et al. (2000) and Toth et al. (2000) point out that it is
faster and less easily trapped in local minima than
other optimization algorithms.
2.2.3. Bayesian regularization (BR)
Bayesian regularization is an algorithm that
automatically sets optimum values for the parameters
of the objective function. In the approach used,
the weights and biases of the network are assumed to
be random variables with specified distributions.
Statistical techniques are used to estimate the
regularization parameters, which are related to the
unknown variances. The advantage of this algorithm
is that, whatever the size of the network, the function
will not be over-fitted. Bayesian regularization
has been used effectively in the literature (Anctil
et al., 2004; Coulibaly et al., 2001a,b,c; Porter et al.,
2000).
2.2.4. Network architecture
Several aspects of the architecture of neural
networks that focus on the prediction of variables
associated with hydrology are covered by Maier
and Dandy (2000). Their suggestions were followed
in the development of the current model. The
structure of the network was determined by trial and
error. The sizes of the input and hidden layers of the
network varied depending on the
prediction horizon, whereas the output layer has a
single node. The number of nodes in the hidden
layer and the stopping criteria were optimized in
terms of obtaining precise and accurate output.
Finally, the activation function of the hidden layer
was set to a hyperbolic tangent sigmoid function as
this proved by trial and error to be the best in
depicting the non-linearity of the modeled natural
system, among a set of other options (linear and
log sigmoid). It is noteworthy that there is no well
established direct method for selecting the number
of hidden nodes for an ANN model for a given
problem. Thus the common trial-and-error approach
remains the most widely used method.
Although special learning parameters (e.g.
momentum factor, learning rate etc.) can help to
avoid local minima, no guarantee of finding the global
minimum can be given. The probability of finding the
global minimum was enhanced by selecting various
random start positions.
2.2.5. Criteria of evaluation
Two different criteria are used in order to evaluate
the effectiveness of each network and its ability to
make precise predictions. The Root Mean Square
Error (RMSE) is calculated by

RMSE = \sqrt{ \frac{ \sum_{i=1}^{N} (y_i - \hat{y}_i)^2 }{ N } }    (2)

where y_i is the observed data, \hat{y}_i the calculated data
and N is the number of observations. RMSE indicates
the discrepancy between the observed and calculated
values. The lower the RMSE, the more accurate the
prediction.
Also, the R² efficiency criterion, given by

R^2 = 1 - \frac{ \sum (y_i - \hat{y}_i)^2 }{ \sum y_i^2 - \frac{ (\sum y_i)^2 }{ n } }    (3)

represents the percentage of the initial uncertainty
explained by the model. The best fit between observed
and calculated values, which is unlikely to occur,
would have RMSE = 0 and R² = 1.
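The two criteria of Eqs. (2) and (3) can be computed directly from an observed series y and a modeled series ŷ; the short series below is a toy illustration, not data from the study.

```python
import numpy as np

def rmse(y, y_hat):
    """Root Mean Square Error, Eq. (2)."""
    return float(np.sqrt(np.mean((y - y_hat) ** 2)))

def r_squared(y, y_hat):
    """R2 efficiency criterion, Eq. (3): share of initial uncertainty explained."""
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum(y ** 2) - np.sum(y) ** 2 / len(y)
    return float(1.0 - ss_res / ss_tot)

y = np.array([10.0, 12.0, 15.0, 19.0, 24.0])      # observed levels (m), toy values
y_hat = np.array([10.5, 11.5, 15.5, 18.5, 24.5])  # modeled levels (m), toy values
print(rmse(y, y_hat), r_squared(y, y_hat))        # RMSE = 0.5 m here
```

A perfect model would give RMSE = 0 and R² = 1; the deliberately small ±0.5 m errors above give an RMSE of exactly 0.5 m and an R² just below 1.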
3. Study area and data description
The neural networks were tested with data taken
from Messara Valley, a basin of 398 km2 at the
southern part of the island of Crete, in Greece (Fig. 5).
The main geological cover of the valley is
Quaternary alluvial clays, silts, sands and gravels,
deposited unevenly, causing great heterogeneity in
the hydro-geological features of the area. The
hydrogeological basin has an area of approximately
112 km², is approximately 25 km long and about 3 km
wide, as shown in Fig. 5 by the shaded area.
Furthermore, the main land-use coverage of the
Messara Valley is olive and vine cultivation, which
is typical for that part of the island of Crete (Croke
et al., 2000).

Fig. 5. Area of study, Messara Valley, Crete, Greece. Black dots represent the locations of pumping wells in the area.
Messara Valley faces a severe problem of
depletion of groundwater resources, mainly used in
agriculture. Many wells in the region are illegitimate
and pumping is not regulated resulting in over-
exploitation of the aquifer and as a consequence the
sinking of the groundwater level over the years. The
long term mean annual precipitation for the area has
noticeably decreased from 588 to 516 mm/yr during
the last few years (1985–1995) (Croke et al., 2000).
With an estimated 65% of total evaporation and a
measured discharge of 21 mm/yr, the annual recharge
of the aquifer can be calculated to 159 mm/yr. About
108 mm/yr are lost from subsurface outflow and if
constant abstractions for irrigation are assumed at
97 mm/yr then the average net loss of water resources
sums up to 46 mm/yr. Taking into account the
porosity and area of Messara Valley the annual
decrease of the groundwater level can be calculated to
no less than 1.5 m/yr which is consistent with the
20 m observed drop over an approximately 10-year
period (Croke et al., 2000).
The data acquired from the area consists of rainfall
and temperature time series measured at Pompia, a
village in Messara Valley, the depth of a well and the
discharge of Geropotamos, the main stream of the
valley at the catchment’s outlet at Phaistos. The
time series used in this project are summarized in
the following figures. Monthly precipitation at
Pompia station shows the typical characteristics of
the Mediterranean climate comprised of high rainfall
during the winter months and no rainfall during the
summer months, as shown in Fig. 6. The average
precipitation at Pompia station is 500 mm/yr, slightly
less than the average annual precipitation in the basin.
From the bold line in Fig. 6 it can be inferred that the
average precipitation at the station has had no
noticeable changes for the past 20 years (1981–
2001). The wet (1985, 1988, 1991, 1994 and 1996)
and dry years (1983, 1986, 1990, 1993, 1997, and
2000) can also be distinguished and appear to be
equally distributed with a period of 2–3 years. Table 1
presents the annual precipitation for the station used in
the case study for the period 1981–2001. The
maximum precipitation of this 20-year period
occurred in 1981–1982, giving a total value of
699 mm. In the same table the effective precipitation
is shown, given by
P_eff = P − ET − R    (4)

where P_eff is the effective precipitation, P is the actual
precipitation measured at the station, ET the amount
of water lost to evapotranspiration, equal to 65% of P,
and R the runoff of the stream of Geropotamos, all
measured in mm/yr. In Table 1, runoff is given both
in Mm³/yr and in mm/yr. The conversion is made
by dividing the total volume of annual runoff by
the area of the watershed (398 km²).

Table 1
Annual precipitation, surface discharge and annual effective precipitation, for the station of Pompia in Messara Valley

Year        Precipitation (mm/yr)  Surface discharge (Mm³/yr)  Surface discharge (mm/yr)  Effective precipitation (mm/yr)
1980–1981   672.5   46.4   116.6   118.8
1981–1982   699     18.8    47.2   197.4
1982–1983   282.5   29.8    74.9    24.0
1983–1984   607.9   48.8   122.6    90.2
1984–1985   646.4   13.5    33.9   192.3
1985–1986   363     10.8    27.1    99.9
1986–1987   466     20.5    51.5   111.6
1987–1988   553     11.7    29.4   164.2
1988–1989   423.5    4.3    10.8   137.4
1989–1990   255      5.2    13.1    76.2
1990–1991   456      2.6     6.5   153.1
1991–1992   330.5    0.0     0.0   115.7
1992–1993   344.5    7.1    17.8   102.7
1993–1994   505.9    4.9    12.3   164.8
1994–1995   488.7   12.0    30.2   140.9
1995–1996   679.9    2.4     6.0   231.9
1996–1997   483.5    0.9     2.3   167.0
1997–1998   472.2    1.3     3.3   162.0
1998–1999   528.6    0.0     0.0   185.0
1999–2000   328.9    0.7     1.8   113.4
2000–2001   512.5    0.0     0.0   179.4

Fig. 6. Monthly precipitation in mm versus time in months for the past 15 years in Messara.

The measured
discharge at the catchment’s outlet at Phaistos for the
period 1985–1995 was averaged at 21 mm/y (Croke
et al., 2000).
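Eq. (4) can be applied directly to any row of Table 1; the check below uses the 1981–1982 values (P = 699 mm/yr, R = 47.2 mm/yr), which reproduce the tabulated effective precipitation of about 197.4 mm/yr.

```python
def effective_precipitation(p_mm, runoff_mm):
    """Effective precipitation, Eq. (4): P_eff = P - ET - R, with ET = 0.65 * P."""
    et = 0.65 * p_mm            # evapotranspiration taken as 65% of precipitation
    return p_mm - et - runoff_mm

# 1981-1982 row of Table 1: P = 699 mm/yr, R = 47.2 mm/yr.
peff = effective_precipitation(699.0, 47.2)   # ~197.45 mm/yr, matching Table 1
```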
Temperature also plays an important role in the
water budget as it affects evapotranspiration. Fig. 7
shows monthly temperature measurements at the
meteorological station of Pompia. Values appear to
vary steadily through the years with a slight increasing
trend as depicted by the bold line in Fig. 7. When
compared with longer datasets this minor trend is
neutralized, showing no significant temperature
variation over a long period of time. Nevertheless, for
the short period of our study, this increase may be of
some importance, since during drier years the amount
of groundwater pumped for irrigation is bound to
increase as well.
Fig. 8 represents the monthly discharge of
Geropotamos Stream, the main seasonal stream that
runs through Messara Valley. The water level of
Geropotamos Stream has been steadily decreasing for
the past 20 years as shown by the bold line in Fig. 8
due to the overexploitation of the water resources, part
of which would be otherwise discharged into the sea.
The groundwater level reduction has caused wetlands
in the area to dry up (Croke et al., 2000) and the
stream to have no flow during most of the year.
Fig. 7. Monthly temperature in degrees Celsius versus time in months for the past 15 years in Messara.
This steadily decreasing trend is also depicted in
Fig. 8, where from a maximum of 14 Mm³ in 1983
and 1985 the discharge diminished to zero even
during the wet period of 2000.
The monthly level of a characteristic well located
in Pompia is represented in Fig. 9. Missing depth
values for this well have been interpolated from
Fig. 8. Monthly discharge of Geropotamos stream in Mm³ versus time in months for the past 15 years in Messara.
existing measurements with the help of a cubic spline.
Tests showed that data infilling with the use of a cubic
spline was the best among a series of interpolation
techniques, giving acceptable results even when
monthly data were interpolated from measurements
5 months apart. Fig. 10 shows an example of
interpolated and measured data. In this example,
Fig. 9. Monthly depth in m versus time in months for the past 15 years for a well in Messara.
a cubic spline has been drawn between peaks and lows
(every 6 months), depicting the available monthly
data very well, giving R² = 0.98 and RMSE = 0.48 m.
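The spline infilling can be sketched as below, using SciPy's `CubicSpline` on a synthetic well record (the trend-plus-seasonal series and the 5-month measurement spacing are illustrative assumptions, not the actual well data).

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Synthetic "true" monthly depth: linear decline plus an annual cycle.
months = np.arange(0, 36)                        # three years of monthly steps
true_depth = 20 + 0.12 * months + 2 * np.sin(2 * np.pi * months / 12)

# Suppose only every fifth month was measured, as in the sparsest gaps.
known = months[::5]
spline = CubicSpline(known, true_depth[known])   # spline anchored at measurements

filled = spline(months)                          # read off the missing monthly values
err = float(np.sqrt(np.mean((filled - true_depth) ** 2)))
```

The spline reproduces the known measurements exactly and, because the underlying seasonal transition is smooth, recovers the intermediate months with an error well below the seasonal amplitude.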
For the rest of the timeline the results were equally
good. This fact also indicates that the transition
between different levels of the aquifer through
the year is smooth.

Fig. 10. Example of spline interpolation in a small part of the dataset.

The level of this well is
representative of the groundwater level of the area
when compared to the other wells in the area. For the
past 20 years precipitation and temperature appear
to have fluctuated steadily, while the discharge of
Geropotamos and the groundwater level have been
steadily decreasing due to over-pumping. One of the
goals of this paper is to evaluate the ability of artificial
neural networks to simulate this groundwater level
drop successfully.

Table 2
R² and RMSE goodness-of-fit criteria for each of the 7 network–algorithm combinations used

Criterion   FNN-LM   FNN-BR   FNN-GDX   RNN-LM   RNN-BR   RNN-GDX   RBF
R²           0.985    0.592    0.993     0.911    0.609    0.830    0.744
RMSE (m)     2.11     9.84     5.68      3.31     9.32     5.63     5.23
For each one of the input variables, the time series
was divided into three different subsets: one subset for
training the neural network (1988–1998), one for
model calibration (1998–2000) and one for model
testing (2000–2002). These subsets are indicated with
vertical dotted lines in Figs. 6–9. The reason the whole
timeline was not used is that the statistical properties
of the groundwater level depth values appear to
change significantly after 1988. Intensification of
irrigation after 1988 caused a downward trend,
whereas the increase in the amplitude of the oscillation
of the groundwater level can be attributed to the
smaller porosity of the geological formations in larger
depths. Since our goal is to predict future groundwater
depths, any information concerning aquifer fluctuation
before intensive irrigation began was considered redundant,
as it would degrade the efficiency of our data-driven model.
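The chronological three-way split described above can be sketched as a simple filter over dated records; the `(year, month, value)` record layout is an illustrative assumption.

```python
def split_series(records):
    """Split (year, month, value) records into the paper's three periods:
    training 1988-1998, calibration 1998-2000, testing 2000-2002."""
    train = [r for r in records if 1988 <= r[0] < 1998]
    calib = [r for r in records if 1998 <= r[0] < 2000]
    test = [r for r in records if 2000 <= r[0] < 2002]
    return train, calib, test

# Toy monthly series spanning 1988-2001 inclusive.
records = [(y, m, 0.0) for y in range(1988, 2002) for m in range(1, 13)]
train, calib, test = split_series(records)
```

A chronological (rather than random) split matters here: the statistical properties of the series change over time, so the model must be tested on strictly later data than it was trained on.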
4. Results
All tests and results were derived through programming
in Matlab 6. By means of trial and error, an optimum
network and parameter configuration for all three
networks was derived. During calibration, the values
that correlated best with all networks were those of
a 5-month moving window through the data series.
Table 3
Seasonal residuals (in meters) for each of the seven ANN-algorithm combinations

Time     FNN-LM   FNN-BR   FNN-GDX   RNN-LM   RNN-BR   RNN-GDX   RBF
Oct-00   −0.20     1.64     4.51      0.44     1.70    −0.38     0.49
Apr-01    1.43    15.45    12.90      8.91    15.68    12.03    13.64
Oct-01   −3.84     6.26     0.47     −3.05     6.58    −0.76     2.66
Apr-02    1.00    18.21    10.88      9.98    16.31    13.19     7.88
Thus the input layer in all networks consisted of 20
input nodes; a 5-month time lag was included (time
lags t, t−1, t−2, t−3 and t−4, where x_t is the
value of a given variable at the present time step) for
precipitation, temperature, stream flow and ground-
water level. The output of the network is a prediction
of the well level at time step t+1.
hidden neurons for both the RNN and FNN was
set to 3, as suggested in the literature (Coulibaly
et al., 2001a,b,c) and confirmed through trial and error. This
number of neurons seems to be both time-efficient and
adequate to handle the rather small amount of data
in our problem. The number of hidden nodes of the
RBF network was determined automatically to be 25.
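The 20-node input construction described above can be sketched as follows: a 5-month window (lags t down to t−4) over the four monthly variables, with the groundwater level one step ahead as the target. The array layout (level stored in the last column) is an illustrative assumption.

```python
import numpy as np

def make_lagged_inputs(series, lags=5):
    """series: (T, 4) array of monthly values for precipitation, temperature,
    stream flow and groundwater level (last column). Returns the (T-lags, 4*lags)
    input matrix and the (T-lags,) one-step-ahead level targets."""
    T, n_vars = series.shape
    X = np.stack([series[t - lags + 1:t + 1].ravel()   # window t-4 .. t, flattened
                  for t in range(lags - 1, T - 1)])
    y = series[lags:, -1]                              # level at step t+1
    return X, y

rng = np.random.default_rng(3)
data = rng.normal(size=(60, 4))      # five years of four monthly variables (toy)
X, y = make_lagged_inputs(data)      # X: (55, 20) inputs, y: (55,) targets
```

Each input row therefore holds 5 lags × 4 variables = 20 values, matching the 20-node input layer used for all networks.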
Other parameters that were adjusted in order to
achieve more accurate results were the goal value of
the error function of the network during calibration,
calculated by the Mean Square Error (MSE), the
learning rate of the training algorithms, the number of
epochs or feeds of each network and the spread of the
RBF network. The need to adjust these
parameters arises from the danger of overtraining a
network, an effect analogous to over-fitting a
polynomial function.
Tables 2 and 3 summarize the results of the
testing for every network configuration. As expected,
the efficiency of all methods decays as the prediction
period increases. The most efficient method is the
one whose efficiency decays at a slower rate
and at the same time explains the unknown function
better.

Fig. 11. (a) Comparison of validation results to observed values. (b) Comparison of validation results to observed values.

The best overall performance for the given
problem was achieved by the feedforward network
trained with the Levenberg–Marquardt algorithm
and the second best by the recurrent neural network
trained with the same algorithm. As can be seen from
Table 2, even though the feedforward network
trained with the GDX algorithm seems to explain
the groundwater level change best (R² = 0.993), its
results are shifted, rendering the method unsuitable
for the problem. The most unsuitable network was
a recurrent neural network trained with the Bayesian
regularization algorithm. This may indicate that
RNN requires more complex training algorithms
(Coulibaly et al., 2001a,b,c).

Fig. 12. Comparison of observed groundwater levels with simulated results for 1, 6, 12 and 18 months ahead using a feedforward network trained with the Levenberg–Marquardt algorithm.

The rest of the
networks performed relatively well but tended to
overestimate the observed dataset. Also all the
networks performed very well for 1 month ahead
predictions. Since the given problem is aiming at
predicting the level of an aquifer facing depletion,
overestimating models are not of particular interest.
Thus, the most promising technique seems to be
the feedforward neural network trained
with the Levenberg–Marquardt algorithm, which under-
estimates part of the groundwater level during the
dry season. The physical meaning of this result is
that the structure of this model allows its weights to
adjust to values that depict the trends of the natural
system we are simulating.
For validation purposes, an 18-month-ahead
prediction is made and compared with observed
values. Fig. 11a and b show how the best 6 out of a
total of 7 combinations predicted the groundwater
level for this 18-month period. Furthermore, Fig. 12
shows a comparison of observed and calculated
groundwater levels for predictions from 1 to 18
months ahead by the best performing network. The
different model results show relatively good prediction
of the trend of the groundwater level; however,
the model prediction accuracy decreases slightly
with increasing prediction horizon.
5. Conclusions
Neural networks have proven to be an extremely
useful method of empirical forecasting of hydro-
logical variables. In this paper we made an attempt
to identify the most stable and efficient neural
network configuration for predicting groundwater
level in the Messara Valley. The groundwater level in the
area has been steadily decreasing since the late
1980s due to overexploitation driven by intensive
irrigation. A total of seven different ANN
configurations were tested in terms of optimum results for a
prediction horizon of 18 months. The most suitable
configuration for this task proved to be a 20-3-1
feedforward network trained with the Levenberg–
Marquardt method as it showed the most accurate
predictions of the decreasing groundwater levels.
From the results of the study it can also be inferred
that the Levenberg–Marquardt algorithm is more
appropriate for this problem since the RNN also
performs well when trained with this method.
Moreover, combining two or more methods of
prediction should also be considered as in our case
the FNN-LM method tended to underestimate events
when the rest of the methods overestimated them. In
general, the results of the case study are satisfactory
and demonstrate that neural networks can be
a useful prediction tool in the area of groundwater
hydrology. Most importantly, this paper presents
indications that neural networks can also be applied
in cases where the datasets manifest trends and
shifts and the desired output lies outside the
range of previously introduced input, as shown by
the results.
Acknowledgements
The authors gratefully acknowledge the Ministry
of Agriculture for providing the field data. Special
thanks to the reviewers for their useful comments, which
helped improve the paper. The present work was
financially supported by the National Science and
Engineering Research Council (NSERC) Grants
RGP157914-02 and RGP249582-02.
References
Anctil, F., Perrin, C., Andreassian, V., 2004. Impact of the length of
observed records on the performance of ANN and of conceptual
parsimonious rainfall-runoff forecasting models. Environ.
Modeling Software 19 (4), 357–368.
ASCE Task Committee on Application of Artificial Neural
Networks in Hydrology, 2000. Artificial neural networks in
hydrology, parts I and II. J. Hydrol. Eng. 5 (2), 115–137.
Coulibaly, P., Anctil, F., Bobee, B., 2000. Daily reservoir inflow
forecasting using artificial neural networks with stopped
training approach. J. Hydrol. 230, 244–257.
Coulibaly, P., Anctil, F., Aravena, R., Bobee, B., 2001a. Artificial
neural network modeling of water table depth fluctuations.
Water Resour. Res. 37 (4), 885–896.
Coulibaly, P., Anctil, F., Bobee, B., 2001b. Multivariate reservoir
inflow forecasting using temporal neural networks. J. Hydrol.
Eng. 9–10, 367–376.
Coulibaly, P., Bobee, B., Anctil, F., 2001c. Improving extreme
hydrologic events forecasting using a new criterion for artificial
neural network selection. Hydrol. Process. 15, 1533–1536.
Croke, B., Cleridou, N., Kolovos, A., Vardavas, I.,
Papamastorakis, J., 2000. Water resources in the desertifica-
tion-threatened Messara valley of Crete: estimation of the
annual water budget using a rainfall-runoff model. Environ.
Modeling Software 15, 387–402.
Elman, J.L., 1990. Finding structure in time. Cognitive Sci. 14, 179–
211.
French, M.N., Krajewski, W.F., Cuykendall, R.R., 1992. Rainfall
forecasting in space and time using a neural network. J. Hydrol.
137, 1–31.
Govindaraju, R.S., Ramachandra Rao, A., 2000. Artificial Neural
Networks in Hydrology. Kluwer Academic Publishing, The
Netherlands.
Haykin, S., 1999. Neural Networks, A Comprehensive Foundation,
second ed. Prentice-Hall, Englewood Cliffs, NJ.
Hornik, K., Stinchcombe, M., White, H., 1989. Multilayer feed
forward networks are universal approximators. Neural Net-
works 2, 359–366.
Maier, H.R., Dandy, G.C., 1998. Understanding the behavior and
optimizing the performance of back-propagation neural networks:
an empirical study. Environ. Modeling Software 13, 179–191.
Maier, H.R., Dandy, G.C., 2000. Neural networks for the prediction
and forecasting of water resources variables: a review of
modeling issues and applications. Environ. Modeling Software
15, 101–124.
Porter, D.W., Gibbs, P.G., Jones, W.F., Huyakorn, P.S.,
Hamm, L.L., Flach, G.P., 2000. Data fusion modeling for
groundwater systems. J. Contaminant Hydrol. 42, 303–335.
Toth, E., Brath, A., Montanari, A., 2000. Comparison of short-term
rainfall prediction models for real-time flood forecasting.
J. Hydrol. 239, 132–147.
Zhang, G., Patuwo, B.E., Hu, M.Y., 1998. Forecasting with
artificial neural networks: the state of the art. Int. J. Forecasting
14, 35–62.