

J. Construct. Steel Res. Vol. 44, Nos 1-2, pp. 91-105, 1997
© 1997 Elsevier Science Ltd. All rights reserved. Printed in Great Britain
PII: S0143-974X(97)00039-4    0143-974X/97 $17.00 + 0.00

A Neural Network Approach to the Modelling, Calculation and Identification of Semi-Rigid Connections in Steel Structures

G. E. Stavroulakis (a), A. V. Avdelas (b)*, K. M. Abdalla (c) & P. D. Panagiotopoulos (b,d)

(a) Institute for Applied Mechanics, Department of Civil Engineering, Technical University Braunschweig, D-38106 Braunschweig, Germany

(b) Institute of Steel Structures, Department of Civil Engineering, Aristotle University, GR-54006 Thessaloniki, Greece

(c) Department of Civil Engineering, Jordan University of Science and Technology, Irbid, Jordan

(d) Faculty of Mathematics and Physics, RWTH Aachen, D-52062 Aachen, Germany

ABSTRACT

A two-stage neural network approach is proposed for the elastoplastic analysis of steel structures with semi-rigid connections. At the first stage, the moment-rotation law of the connection is obtained from experimental results by the use of a neural network based on the perceptron model. At the second stage, the elastoplastic analysis problem is formulated for the given moment-rotation law as a Quadratic Programming Problem and solved by a neural network based on the Hopfield model. © 1997 Elsevier Science Ltd.

1 INTRODUCTION

The highly nonlinear effects that have to be considered in the detailed modelling of semi-rigid steel structure connections (e.g. unilateral contact and friction 'prying effects' arising between adjacent parts of the connection [1-3], local plastification effects, etc.) require the use of time-consuming and complicated software not always available to the practising engineer. Another approach, permitted by modern design codes [4], is their treatment by means of simplified nonlinear relations. In the present paper, an alternative to the

*To whom correspondence should be addressed.


above procedures is proposed: estimation of the mechanical behaviour of the steel structure connection by the use of learning algorithms in an appropriately defined neural network environment of experimental steel connection data. The result is an estimated simplified law, which is then used as a moment-rotation constitutive law for the joints in the structural analysis and design of the steel structure.

By the use of the error correcting back-propagation algorithm [5], a multilayer feedforward neural network can be trained to recognize and generalize the experimental data. In our case, the experimentally measured moment-rotation curves for various design parameters of the steel structure connection are used as training paradigms. Next, the neural network reproduces the moment-rotation law for a given set of design variables and thus it can be used in every design or structural analysis procedure.

The neural network theory, which provides a solid basis for the construction of the model estimator, is considered highly suitable for the study of complex problems in mechanics and engineering. Some recent representative applications are mentioned next, without any aim of completeness: structural analysis and design parameter identification problems [6-9], material modelling [10,11], structural analysis [8,12,13] and optimization problems [14].

The experimental data considered here concern the case of single angle beam to column bolted steel structure connections. These experiments have been reported in [15] and are included in the steel connection databases [16,17]. The neural network analysis is given next (see [14,18,19]). The experimentally measured moment-rotation laws for steel structure connections are first preprocessed in a form suitable for neural network treatment. These data are next used for training a multilayer feed-forward back-propagation neural network. The trained network provides a model-free predictor, with satisfactory accuracy.

It should be mentioned here that other artificial neural network models can be used as well, instead of the back-propagation method. It is also worth mentioning that the methodology proposed in this paper could be generalized to include more complicated effects (for instance dynamic behaviour, fatigue and creep effects).

2 BACK-PROPAGATION AND NEURAL NETWORK THEORY

A neural network is defined by its node characteristics, the learning rules and the network topology. The learning rules control the improvement of the network performance through appropriate adaptive changes of the weights of the links. Furthermore, one of the most remarkable properties of neural networks


is their considerable fault tolerance, due to the large number of locally connected processing nodes.

Supervised learning is the case in which the required output data, with respect to a given set of input data, are known. Then, the whole set of input-output learning paradigms can be used to adjust the values of the connection weights, or some variables of the activation functions, so as to be able to reconstruct the implicit highly nonlinear mapping between input and output variables. It must be noted that no specific model has been assumed for the mapping between input and output variables; thus, the trained neural network provides us with a model-free estimator which simulates, for instance, the mechanical behaviour of a structural component [9,10] or a structure [10,20].

The supervised learning procedure of a back-propagation neural network can be formulated in the following way, using the same notation as in [21]. Take one training example $p$ from the set of available training examples $T = \{1,\ldots,t\}$, with input-output data vectors denoted by $[x_p, y_p]$, respectively.

Algorithm 1.

1. Apply the input vector $x_p = (x_{p,1}^{(1)},\ldots,x_{p,n_1}^{(1)})^T$ to the input nodes of layer (1).
2. Execute the feedforward signal processing

$x_j = f_j(r_j), \qquad r_j = \sum_i x_i w_{ij}.$  (1)

A sigmoidal activation function of the form $f_j(r_j) = 1/(1 + e^{-r_j})$ can be used here.

3. Calculate the error terms $\delta_{p,i}^{(m)}$ for the processing units of the output layer (last layer $m$).
4. Back-propagate the error and calculate, for each previous layer $j = (m-1),\ldots,1$, the error terms $\delta_{p,i}^{(j)}$ in all units of the $j$th hidden layer, $i = 1,\ldots,n_j$.
5. Update the weights of the various layers.

Note that while in step 5 the network is assumed to be fully connected, i.e. all nodes of layer $j-1$ are connected to each of the nodes in layer $j$, the generalization to more flexible interconnection schemes is obvious.
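To make the steps concrete, the following is a minimal NumPy sketch of the per-example version of Algorithm 1 for a network with one hidden layer and the sigmoidal activation of eqn (1); the two-layer restriction, the array names and the learning rate are our illustrative assumptions, not part of the original formulation.

```python
import numpy as np

def sigmoid(r):
    # Sigmoidal activation of eqn (1): f(r) = 1 / (1 + exp(-r))
    return 1.0 / (1.0 + np.exp(-r))

def train_online(X, Y, n_hidden=10, eta=0.5, epochs=1000, seed=0):
    """Per-example (on-line) back-propagation, following Algorithm 1.

    X: (t, n_1) array of input vectors x_p; Y: (t, n_m) array of targets y_p.
    """
    rng = np.random.default_rng(seed)
    # Initial weights taken in a small range, e.g. [-0.3, +0.3] (see text)
    W1 = rng.uniform(-0.3, 0.3, (X.shape[1], n_hidden))
    W2 = rng.uniform(-0.3, 0.3, (n_hidden, Y.shape[1]))
    for _ in range(epochs):
        for xp, yp in zip(X, Y):                     # one training example p
            h = sigmoid(xp @ W1)                     # steps 1-2: feedforward
            out = sigmoid(h @ W2)
            d_out = (yp - out) * out * (1.0 - out)   # step 3: output error terms
            d_hid = (d_out @ W2.T) * h * (1.0 - h)   # step 4: back-propagation
            W2 += eta * np.outer(h, d_out)           # step 5: weight updates
            W1 += eta * np.outer(xp, d_hid)
    # Epoch error of eqn (2), summed over all examples and output units
    E = float(np.sum((Y - sigmoid(sigmoid(X @ W1) @ W2)) ** 2))
    return W1, W2, E
```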

One pass through all available learning examples (i.e. execution of the algorithm for all $p \in T$) is called a learning epoch. The error at this epoch is given by

$E = \sum_{p\in T}\ \sum_{i=1,\ldots,n_m} \left( y_{p,i} - x_{p,i}^{(m)} \right)^2.$  (2)


Learning should continue until a reasonably small error is obtained.

Since back-propagation of the error is performed for each individual learning example $[x_p, y_p]$ sequentially, this variant of the back-propagation learning algorithm is known as the on-line, per example or pattern learning version. Another variant of the learning algorithm is the off-line or batch mode training, where weight correction is performed once per training epoch, only after all changes due to error back-propagation have been accumulated for all learning examples $p$, $p \in T = \{1,\ldots,t\}$ (see e.g. [22], p. 111; [14], pp. 129, 139). In this case the iteration step 5 of Algorithm 1 is given by

$w_{il}(t+1) = w_{il}(t) + \eta \sum_{p\in T} \delta_{p,i}\, x_{p,l} + \alpha\, \Delta w_{il}(t-1)$  (3)

where $0 \le \alpha \le 1$ (a value $\alpha = 0.9$ is proposed in [14], p. 133) and $\Delta w_{il}(t-1)$ is the adaptation of $w_{il}$ performed in the previous iteration step. A momentum (inertial) term has been added here to the weight adjustment step in order to enhance the speed of convergence of the back-propagation algorithm. This more refined version has been used in the computer implementation. Note here that learning algorithms are actually optimization algorithms and can easily be adapted to the solution of minimum problems (cf. e.g. [8,14,22-24]).
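In the batch variant, the corrections are accumulated over all examples before the weights are touched once per epoch. A hedged sketch of the update of eqn (3), with the momentum coefficient α = 0.9 proposed in [14] (function and variable names are ours):

```python
def batch_step(W, grad_accum, prev_delta, eta=0.5, alpha=0.9):
    """One epoch of the batch-mode correction, eqn (3).

    grad_accum: the sum over all p in T of the delta_(p,i) * x_(p,l)
    terms for this weight matrix, accumulated during the epoch;
    prev_delta: the weight adaptation Delta_w(t-1) of the previous epoch.
    """
    delta = eta * grad_accum + alpha * prev_delta  # momentum (inertial) term
    return W + delta, delta                        # keep delta for epoch t+1
```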

Using again the same notation as in [21], the production mode (generalization) after the training of the network takes the following steps.

Algorithm 2.

1. Apply input vector xi,, = (x~ 1) ..... x~11)) T to the input nodes of layer (1). 2. Feed-forward the signal in layers 2 ..... m and in each processing

element compute ~ and apply the activation (transfer) function x~, ). 3. Vector xout (xtm),. ~m) X = ..,X,, m) is the response of the neural network to

the input xin.

The algorithm starts from initial values of the weights $w_{ij}$, which are produced by taking random values in a reasonably small range (e.g. $[-1, +1]$ according to [7], or even $[-0.3, +0.3]$). Nevertheless, it must be noted here that learning in a neural network is algorithmically a hard problem (in general it is proved to be NP-hard, see [25]); therefore, the size of both the network and the set of training examples must be kept to the minimum necessary. For this reason, engineering experience and a set of good, representative experimental data must be used as learning examples for the neural network.
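The production mode of Algorithm 2 is then a single feedforward pass through the trained weights; a one-function sketch, reusing sigmoid, W1 and W2 from the training sketch above:

```python
def predict(x_in, W1, W2):
    # Algorithm 2: feed x_in forward through the trained layers and
    # return x_out, the response of the network to this input.
    return sigmoid(sigmoid(x_in @ W1) @ W2)
```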

It is well known that neural computers are not yet commercially available.


Neural network computations are best suited to a parallel computing environment, which even permits fairly good hardware implementation [14]. Here, all the computer simulations have been performed on a classical serial computer, on which the neural environment has been emulated.

3 NEURAL NETWORKS AND ELASTOPLASTIC ANALYSIS

One of the main applications of neural networks is the solution of optimization problems. This constitutes one of their major advantages. Therefore, in order to adapt the structural analysis methods to a neural computing framework, the structural analysis problem must be formulated as a minimization problem [8,12,13,26,27]. For the elastoplastic analysis, corresponding optimization problems have been derived in [28-36]. At this point it should be remarked that the main advantages of a neural network environment concerning the calculations are as follows.

• In a neural network environment, the treatment of an inequality constrained Quadratic Programming Problem (QPP) is as time consuming as that of an unconstrained QPP, which is equivalent to a system of linear equations. Moreover, the neural network approach does not need large computer memory.

• In a neural computing environment, the parameter identification problem and the sensitivity analysis are treated directly, in a natural way, as supervised learning (cf. the two versions of the fifth step of Algorithm 1) and unsupervised [19] learning problems, respectively, without the need for complicated algorithms.

In the following, the elastoplastic analysis problem will be formulated for a given M-φ law as a QPP, solved again through a neural network model. Thus, we use two neural networks: the first one gives the M-φ law from the experimental results and the second one solves the resulting QPP. The first neural network is based on the perceptron model [14,19], the second on the Hopfield model [37,38], which seems to be more direct for the calculation of the solution of the plasticity QPP.

A very interesting application of mathematical programming in mechanics is the holonomic and the incremental elastoplastic analysis of structures. The minimum propositions used in this paper have been derived by Maier [28] by the use of the theory of Linear Complementarity Problems (LCP). Here, only the incremental elastoplastic analysis will be briefly presented. The holonomic analysis leads to QPPs having the same structure as those of the incremental theory. The relations of incremental elastoplastic analysis are given, formulated for the assembled structure, under the assumptions described by Maier in [28-34]:

$\mathbf{e} = \mathbf{e}_0 + \mathbf{e}_E + \mathbf{e}_P$  (4)

$\mathbf{e}_E = \mathbf{F}_0 \mathbf{s}, \qquad \mathbf{e}_P = \mathbf{W}\boldsymbol{\lambda}, \qquad \mathbf{F} = \mathbf{N}^T \mathbf{s} - \mathbf{H}\boldsymbol{\lambda}$  (5)

$\boldsymbol{\lambda} \ge 0, \qquad \mathbf{F} \le 0, \qquad \mathbf{F}^T \boldsymbol{\lambda} = 0$  (6)

where $\mathbf{e}_0$, $\mathbf{e}_E$ and $\mathbf{e}_P$ are the initial, the elastic and the plastic strain vectors, respectively, $\mathbf{F}_0$ is the natural flexibility matrix, $\mathbf{s}$ is the stress vector, $\mathbf{N}$ is the gradient of the yield functions which are zero, $\mathbf{W}$ is the gradient of the corresponding plastic potentials, $\boldsymbol{\lambda}$ is the plastic multipliers vector, $\mathbf{F}$ is the yield functions vector, and $\mathbf{H}$ is the workhardening matrix depicting the moment-rotation relationship (M-φ curve) of the structural components [34]. Further, the equilibrium $\mathbf{G}\mathbf{s} + \mathbf{K}_G \mathbf{u} = \mathbf{p}$ and compatibility $\mathbf{e} = \mathbf{G}^T \mathbf{u}$ equations hold, where $\mathbf{G}$ is the equilibrium matrix, $\mathbf{K}_G$ the geometric stiffness matrix, $\mathbf{u}$ the displacement vector, and $\mathbf{p}$ the load vector. As is well known, relations (6) form an LCP [28,33,34]. By using the same notation, assumptions and substitutions as in [13,35] and taking into account the above equilibrium and compatibility equations, it can be proved, either by the LCP approach proposed by Maier or by the use of variational inequalities as in [35,36], that relations (4)-(6) are equivalent to the QPP (primal problem)

$\min\left\{ P(\mathbf{x}) = \tfrac{1}{2}\mathbf{x}^T \mathbf{M}\mathbf{x} + \mathbf{q}^T\mathbf{x} \mid \mathbf{x} \ge 0 \right\}.$  (7)

Matrix $\mathbf{M}$ is generally symmetric and positive semi-definite. Under the assumption that matrix $\mathbf{K}$ is nonsingular, the problem is formulated with respect to the plastic multipliers $\boldsymbol{\lambda}$. Through suitable substitutions and by the use again either of the LCP approach or of variational inequalities, and if matrix $\mathbf{D}$ is symmetric ($\mathbf{N} = \mathbf{W}$, $\mathbf{H} = \mathbf{H}^T$) and positive semi-definite, the minimization problem

$\min\left\{ R(\boldsymbol{\lambda}) = \tfrac{1}{2}\boldsymbol{\lambda}^T \mathbf{D}\boldsymbol{\lambda} - (\mathbf{N}^T \mathbf{s}_E)^T \boldsymbol{\lambda} \mid \boldsymbol{\lambda} \ge 0 \right\}$  (8)

is obtained. Thus, the main problem may be put in the following form. Find $\mathbf{x} \in \mathbb{R}^n$ such as to solve the problem

$\min\left\{ P(\mathbf{x}) = \tfrac{1}{2}\mathbf{x}^T \mathbf{M}\mathbf{x} + \mathbf{q}^T\mathbf{x} \mid \mathbf{x} \ge 0 \right\}.$  (9)


Here $\mathbf{M} = \{M_{ij}\}$ is a given matrix and $\mathbf{q} = \{q_i\}$ is a given vector. For the solution of eqn (9), the Hopfield neural network model is applied [37,38]. Denote by $f_j$ the general performance of the neurons, by $\rho_i$ (respectively $C_i$), $i = 1,\ldots,n$, the input resistors (respectively capacitors), by $V_j$ the output voltage of the amplifier $j$ and by $u_j$ the input voltage. If a synapse of the neurons $i$, $j$ is excitatory (respectively inhibitory) and the synapse conductance $T_{ij}$ is realized through a resistor $R_{ij}$, whose absolute value is equal to $1/|T_{ij}|$, we connect the resistor to the positive (respectively negative) output of the amplifier $j$. This can be written as

$R_{ij} = R_{ij}^{+}$ if $T_{ij} > 0$, with $R_{ij}^{+} = \dfrac{1}{T_{ij}}$; $\qquad R_{ij} = R_{ij}^{-}$ if $T_{ij} < 0$, with $R_{ij}^{-} = -\dfrac{1}{T_{ij}}$  (10)

and

$T_{ij} = \dfrac{1}{R_{ij}^{+}} - \dfrac{1}{R_{ij}^{-}}.$  (11)

Each neuron also receives an externally supplied input current $I_j$. The evolution with time $t$ of the circuit is described by equations derived by means of Kirchhoff's law:

$C_j\, \dfrac{du_j}{dt} = \sum_{i=1}^{n} T_{ij} V_i - \dfrac{u_j}{R_j} + I_j, \qquad V_j = f_j(u_j).$  (12)

Here $R_j^{-1} = \rho_j^{-1} + \sum_{i=1}^{n} R_{ij}^{-1}$, that is, a parallel connection of the input resistor $\rho_j$ and the resistors $R_{ij}$ of the synapses, and $f_j$ is a possibly multivalued monotone response function of the $j$-neuron. For given initial values of the neuron inputs $u_i$ at $t = 0$, the integration of eqn (12) in a digital computer supplies the numerical results for the network under consideration. As can be proved [37,38], if in the fictitious network which we have introduced $T_{ij} = T_{ji}$ and obviously $T_{ii} = 0$ (the original Hopfield model), the solution of eqn (12) converges to solutions with outputs $u_i$ of all neurons constant. Each such solution is called a stable state and makes stationary (here minimum) the quantity

$E = -\tfrac{1}{2}\displaystyle\sum_{i,j=1}^{n} T_{ij} V_i V_j + \sum_{i=1}^{n} \dfrac{1}{R_i}\int_0^{V_i} f_i^{-1}(V)\, dV - \sum_{i=1}^{n} I_i V_i$  (13)


which is also called the Liapunov function of the system. If $f_i$ is the identity mapping, the above relation takes the following simpler form:

$E = -\tfrac{1}{2}\displaystyle\sum_{i,j=1}^{n} T_{ij} V_i V_j + \sum_{i=1}^{n} \dfrac{V_i^2}{2R_i} - \sum_{i=1}^{n} I_i V_i.$  (14)

This is valid, for instance, if $f$ is given by Fig. 1(b). In this case, the minimum of the quantity $E$ will be sought for $V_i \ge 0$. Indeed, $V_i = f_i(u_i) = u_i$ if $u_i > 0$, and zero if $u_i \le 0$. It must be noted that the second term in eqn (14) influences the position of the minimum only for shallow sigmoidal curves.
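As the text notes, eqn (12) can be integrated on a digital computer. A minimal explicit-Euler sketch, assuming a symmetric T with zero diagonal and the piecewise-linear response of Fig. 1(b); the step size, step count and names are our assumptions:

```python
import numpy as np

def hopfield_evolve(T, I, R, C, u0, dt=1e-3, steps=20000):
    """Explicit Euler integration of eqn (12):
    C_j du_j/dt = sum_i T_ij V_i - u_j/R_j + I_j,  V_j = f_j(u_j),
    with f(u) = max(u, 0), the response of Fig. 1(b)."""
    u = u0.astype(float).copy()
    for _ in range(steps):
        V = np.maximum(u, 0.0)                # neuron outputs
        u += dt * (T.T @ V - u / R + I) / C   # (T.T @ V)_j = sum_i T_ij V_i
    return np.maximum(u, 0.0)                 # outputs at the stable state
```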

Fig. 1. Certain common types of responses of neurons.


For curves tending to the dotted line of Fig. 1(c), that is, for narrow sigmoidal curves, the influence of the integral in the second term of the right-hand side of eqn (14) is negligible.

Thus, the network always fulfils the constraints $x_i \ge 0$, $i = 1,\ldots,n$, as is obvious from the function $f$ used for the calculation.
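The connection to the QPP (9) can be sketched as follows: with the response f(u) = max(u, 0), choosing T = −M and I = −q makes the Liapunov function of eqn (14) coincide, up to the resistive term (negligible for steep responses), with P(x), so the stable state of the circuit is the constrained minimizer. A hedged illustration reusing hopfield_evolve from the sketch above; the test matrix and vector are made-up data:

```python
import numpy as np

# Map the QPP (9) onto the Hopfield network: T = -M, I = -q.
M = np.array([[2.0, 0.5],
              [0.5, 1.0]])       # symmetric, positive definite (made-up)
q = np.array([-1.0, 0.5])

x = hopfield_evolve(T=-M, I=-q,
                    R=1e6 * np.ones(2),   # large R: resistive term negligible
                    C=np.ones(2),
                    u0=np.zeros(2),
                    dt=1e-3, steps=50000)
# x approaches the minimizer of 0.5 x^T M x + q^T x over x >= 0
# (here approximately [0.5, 0.0]); x_i >= 0 holds throughout, since the
# response f clips negative inputs to zero.
```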

4 NEURAL NETWORK CONFIGURATION FOR STEEL STRUCTURE CONNECTIONS

The experimental results of semi-rigid steel connections have been taken from the databases described in [16,17]. In particular, the case of single web-angle beam to column connections, with the angle bolted to both the beam and the column, has been considered here (see Fig. 2).

The values of the design variables that describe the experimentally tested connections and which have been considered in the neural network model are summarized in Table 1. A detailed description of the experiments is given in [15].

A back-propagation artificial neural network such as the one described previously is able to learn an input-output relation between appropriately preprocessed data. All input-output variables should lie in the range of the activation function $f(r)$.

Fig. 2. Single angle beam to column connection.


TABLE 1
Experimental Data for Beam to Column Connection [cm]

Experiment   No. of bolts   Angle thickness   Angle length
    1             2             0.625            13.75
    2             3             0.625            21.25
    3             4             0.625            28.75
    4             6             0.625            36.25
    5             6             0.625            43.75
    6             4             0.7813           28.75

Without restriction of the generality, the interval $[0+\varepsilon,\, 1-\varepsilon]$ is assumed here, where the real scalar $\varepsilon \ge 0$ is sufficiently small. In this way, the values do not lie near the saturation points 0 and +1. For each experiment, the design variables of the tested steel connections are taken as input data and the measured moment-rotation curve is considered to be the output data. For instance, the bolted steel structure connections used in the examples are determined by three design variables (Table 1). Thus, an input vector $Z = \{z_1, z_2, z_3\}$ is constructed in this case by an analogous preprocessing of the design variables. The experimental moment-rotation curve of a steel structure connection, shown in Fig. 3(a), is first considered. A number of points $\phi_i$, $M_i$, $i = 1,\ldots,m$, placed on the curve are used for discretization (Fig. 3(b)). The output of the neural network model is the pairs $\{\phi_i, M_i\}$, $i = 1,\ldots,m$ (or actually only the $\{M_i\}$, $i = 1,\ldots,m$, values if the division of the $[0, \max\phi]$ interval is assumed to be equidistant).

Fig. 3. Moment-rotation experimental curve for use with the neural network model: (a) measured curve (schematic); (b) normalized curve.


The scaling factors $a_1$ and $a_2$ must additionally be included in the set of output variables. Thus, the set of variables $X = \{a_1, a_2, M_1,\ldots,M_m\}$ uniquely determines the preprocessing transformation from the M-φ curve to the discretized M′-φ′ one (see Fig. 3). For the steel connection under consideration, the design variables are the input variables for the neural network. So, the input vector takes the form $Z = \{z_1, z_2, z_3\}$, after a suitable preprocessing of the three design variables to be represented by values in the interval $[0+\varepsilon,\, 1-\varepsilon]$.

In the numerical example, $m = 20$ values have been used for the discretization of the experimental M-φ curve. Thus, the neural network model has three input variables (the design variables which identify the steel connection, as given in Table 1) and 22 output variables. A fully connected feedforward network has been chosen. Numerical experimentation has been used for the determination of the number of hidden layers and the number of nodes in each of these layers.
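A minimal sketch of this preprocessing, assuming a measured curve given as arrays phi and M: resample at m equidistant rotations on [0, max φ], normalize the moments by a2 = max M, and map everything into [ε, 1−ε]. The reference maxima used to normalize the scaling factors themselves are hypothetical database-wide bounds, not values from the paper:

```python
import numpy as np

def preprocess_curve(phi, M, m=20, eps=0.05, phi_ref=0.1, M_ref=100.0):
    """Turn a measured M-phi curve into the m + 2 network outputs:
    m normalized moments M_i on an equidistant rotation grid plus the
    two scaling factors a1 = max phi and a2 = max M (cf. Fig. 3)."""
    a1, a2 = float(np.max(phi)), float(np.max(M))
    phi_grid = np.linspace(0.0, a1, m)      # equidistant division of [0, max phi]
    M_grid = np.interp(phi_grid, phi, M)    # moments M_i at the grid points

    def squeeze(v):
        # Keep values away from the saturation points 0 and 1
        return eps + (1.0 - 2.0 * eps) * v

    scales = np.array([a1 / phi_ref, a2 / M_ref])  # hypothetical normalization
    return np.concatenate([squeeze(M_grid / a2), squeeze(scales)])
```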

The experimental results (see Table 1) and the neural network predictions of the problem are shown in Fig. 5. A 3-100-100-100-100-100-22 network has been used. All available experiments have been used for both training and testing of the neural network. Fig. 5 depicts the quality obtained in loading the experimentally gained information into the neural network. The training phase of the network took approximately 9000 epochs to reach an error equal to or less than 0.00001. A serial computer implementation has been used; the computation, on an HP755 workstation, has been completed in approximately 90 min CPU time. Fig. 4 shows a schematic representation of the neural network model which has been developed. In [38], the case of single plate beam to column connections with the plate bolted to the beam and welded to the column has also been treated in a similar way.

Fig. 4. Neural network model for the connection problem; schematic representation. [Diagram: input layer — connection design variables (number of bolts, angle thickness, angle length); hidden layer(s); output layer — max M, max φ and the normalized moment-rotation curve (m points); post-processing yields the predicted moment-rotation curve (m + 2 points).]


Fig. 5. Neural network prediction vs experimental results. [Panels for experiments 1-6: measured moment-rotation curves (legend: EXPERIMENT) plotted against the NN PREDICTION, moment over rotation.]


Fig. 6. A typical artificial neuron; schematic representation.

The above M-φ law can be used for the elastoplastic calculation of any required steel structure. We formulate, according to [34], the hardening matrix $\mathbf{H}$ and the corresponding QPP in the form of relation (8) for the holonomic case, characterizing the position of equilibrium. Then the Hopfield neural network is obtained, permitting the calculation of the solution of the minimum problem (8), i.e. the position of equilibrium of the elastoplastic structure. The reader is referred to [12,13] for more details about numerical applications to the treatment of elastoplastic analysis problems such as QPPs by the use of the Hopfield model. In Fig. 6, the corresponding circuit realizing this model is given ([14], p. 43).

The observations derived from the numerical implementation of the method are briefly presented in the following.

• The performance of the neural network model may be seriously influenced by the choice of the network configuration; the relevant variables must be chosen, for each case, after numerical experimentation.

• If the network interpolates within a given training set, the quality of the neural network prediction is better than in the case of extrapolation.

• In the method presented above, the neural network model treats all the parameters describing the experimental curves with the same accuracy, not taking into account their different importance and contribution. Research results on this subject will be presented by the authors in the near future.

REFERENCES

1. Abdalla, K. M. and Stavroulakis, G. E., Zur rationalen Berechnung des 'Prying actions' Phänomens in Schraubenverbindungen. Stahlbau, 1989, 58(8), 233-238.

2. Baniotopoulos, C. C. and Abdalla, K. M., Steel column-to-column connections under combined load: a quadratic programming approach. Computers and Structures, 1993, 46(1), 13-20.

3. Abdalla, K. M., Alshegeir, A. and Chen, W. F., Strut-tie approach for the design of flat plates with drop panels and capitals. Research Report CE-STR-93-19, Purdue University, Structural Engineering Department, 1993.


4. Eurocode No. 3, Design of steel structures, Part 1-1: general rules and rules for buildings, ENV 1993-1-1, 1992.

5. Rumelhart, D. E. and McClelland, J. L., Parallel Distributed Processing, Vols I, II, III. MIT Press, Cambridge, MA, 1986.

6. Adeli, H. and Yeh, C., Perceptron learning in engineering design. Microcomputers in Civil Engineering, 1989, 4, 247-256.

7. Vanluchene, R. D. and Roufei, S., Neural networks in structural engineering. Microcomputers in Civil Engineering, 1990, 5, 207-215.

8. Theocaris, P. S. and Panagiotopoulos, P. D., Neural networks for computing in fracture mechanics. Methods and prospects of applications. Computer Methods in Applied Mechanics and Engineering, 1993, 106, 213-228.

9. Chassiakos, A. G. and Masri, S. F., Identification of the internal forces of structural systems using feedforward multilayer networks. Computing Systems in Engineering, 1991, 2(1), 125-134.

10. Ghaboussi, J., Garrett, J. H. and Wu, X., Knowledge-based modelling of material behaviour with neural networks. Journal of Engineering Mechanics, ASCE, 1991, 117(1), 132-153.

11. Pidaparti, R. M. V. and Palakal, M. J., Material model for composites using neural networks. AIAA Journal, 1993, 31(8), 1533-1535.

12. Kortesis, S. and Panagiotopoulos, P. D., Neural networks for computing in structural analysis: methods and prospects of applications. International Journal for Numerical Methods in Engineering, 1993, 36, 2305-2318.

13. Avdelas, A. V., Panagiotopoulos, P. D. and Kortesis, S., Neural networks for computing in elastoplastic analysis of structures. Meccanica, 1995, 30, 1-15.

14. Cichocki, A. and Unbehauen, R., Neural Networks for Optimization and Signal Processing. Wiley, New York, 1993.

15. Lipson, S. L., Single-angle and single-plate beam framing connections. Canadian Structural Engineering Conference, Toronto, Ont., 1968, pp. 141-162.

16. Kishi, N. and Chen, W. F., Database of steel beam-to-column connections, Vols I and II. Structural Engineering Report No. CE-STR-86-20, School of Civil Engineering, Purdue University, West Lafayette, IN, 1986.

17. Abdalla, K. M., Chen, W. F. and Kishi, W., Expanded database of semi-rigid steel connections. Research Report CE-STR-93-14, Purdue University, Structural Engineering Department, 1993.

18. Kosko, B., Neural Networks and Fuzzy Systems: A Dynamical Systems Approach to Machine Intelligence. Prentice-Hall, Englewood Cliffs, NJ, 1992.

19. Beale, R. and Jackson, T., Neural Computing. An Introduction. Adam Hilger, Bristol, 1990.

20. Berke, L. and Hajela, P., Applications of artificial neural nets in structural mechanics. Structural Optimization, 1992, 4, 90-98.

21. Abdalla, K. M. and Stavroulakis, G. E., A backpropagation neural network model for semi-rigid connections. Microcomputers in Civil Engineering, 1995, 10, 77-87.

22. Brause, R., Neuronale Netze. Teubner, Stuttgart, 1991.

23. Theocaris, P. S. and Panagiotopoulos, P. D., Plasticity including the Bauschinger effect, studied by a neural network approach. Acta Mechanica, 1995, 113, 63-75.

24. Theocaris, P. S. and Panagiotopoulos, P. D., Hardening plasticity approximated via anisotropic elasticity. The Fokker-Planck equation in a neural network environment. ZAMM, 1995, 75(12), 889-900.


25. Judd, J. S., Neural Network Design and the Complexity of Learning. MIT Press, Cambridge, MA, 1990.

26. Antes, H. and Panagiotopoulos, P. D., The Boundary Integral Approach to Static and Dynamic Contact Problems. Equality and Inequality Methods. Birkhäuser, Basel, 1992.

27. Panagiotopoulos, P. D., Hemivariational Inequalities. Application in Mechanics and Engineering. Springer, Berlin, 1993.

28. Maier, G., Incremental plastic analysis in the presence of large displacements and physical instabilizing effects. International Journal of Solids and Structures, 1971, 7, 345-372.

29. Maier, G., A quadratic programming approach for certain classes of nonlinear structural problems. Meccanica, 1968, 3, 121-130.

30. Maier, G., Quadratic programming and theory of elastic-perfectly plastic struc- tures. Meccanica, 1968, 3, 265-273.

31. Maier, G., Linear flow-laws of elastoplasticity, a unified general approach. Rendu Acc. Naz. Lincei, 1969, VIII, 266-276.

32. Maier, G., A matrix structural theory of piece-wise linear elastoplasticity with interacting yield planes. Meccanica, 1970, 8, 54-66.

33. Maier, G., Mathematical programming methods in structural analysis. In Proceedings of International Conference on Variational Methods in Engineering, Vol. II, ed. C. A. Brebbia and H. Tottenham. Southampton University Press, Southampton, 1973, pp. 1-32.

34. Cohn, M. J. and Maier, G. (ed.), Engineering Plasticity by Mathematical Pro- gramming, Proc. NATO-ASI, Waterloo, Canada, 1977. Pergamon Press, New York, 1979.

35. Panagiotopoulos, P. D., Inequality Problems in Mechanics and Applications, Convex and Nonconvex Energy Functions. Birkhäuser, Basel, 1985 (Russian translation MIR, Moscow, 1989).

36. Panagiotopoulos, P. D., Baniotopoulos, C. C. and Avdelas, A. V., Certain propositions on the activation of yield modes in elastoplasticity and their applications to deterministic and stochastic problems. ZAMM, 1984, 64, 491-501.

37. Hopfield, J. J., Neural networks and physical systems with emergent collective computational abilities. Proceedings of the National Academy of Sciences, 1982, 79, 2554-2558.

38. Hopfield, J. J. and Tank, D. W., 'Neural' computation of decisions in optimization problems. Biological Cybernetics, 1985, 52, 141-152.

