Probabilistic Model Code (JCSS PMC2000)

Preface

This document is a first attempt to put together, in a consistent way, some - certainly not all - of the rules, regulations and explanations that are necessary for the design of new structures, or the assessment of existing ones, from a probabilistic point of view. The document, of course, is also useful for background calculations of non-probabilistic codes.

From a probabilistic point of view, designing new structures, or accepting existing ones as sufficiently safe, is the result of a decision-making process guided by some optimality criteria. This process links, in a logical and consistent way, the requirements and expectations of the client or owner of a structure, the loads and actions to be expected, the characteristics of the materials to be used or found in the proposed or existing structure, the calculation models, the grades of workmanship expected or observed on the site, the behaviour of the users and, finally, in an ideal case, the perceptions of society with respect to environmental impact and sustainable development.

The aim of this document is threefold. First, it is the attempt of a number of people interested in such an approach to see whether, at this point in time, the main problems in the development of such a document can be mastered. Second, it is intended to put a text into the hands of structural engineers who are willing to apply new approaches in their work now. Third, the Joint Committee on Structural Safety (JCSS) is convinced that such a document will spur the development of a Probabilistic Code covering all aspects of Structural Engineering.

There are people who advocate staying with traditional non-probabilistic codes, claiming that data is not sufficient for full probabilistic methods. There is much truth in the statement that data is often scarce; but this holds for both approaches, and what remains is, in essence, probabilistic.

Important in this respect is the meaning of the word “probability”. In this document a “probability” is not necessarily considered as a “relative frequency that can be observed in reality”. Such a straightforward interpretation is possible for dice and card games, but not for structural design, where uncertainties must be modelled by complicated probabilistic models that interact in a complex way. Here, probabilities are understood in the Bayesian way, expressing degrees of belief in relation to the various uncertainties and suitable for decision-making processes. At best, probabilities can be interpreted as “best estimates” of the relative frequencies, sometimes being wrong on the one side, sometimes on the other, the degree of deviation from reality being a direct function of the state of knowledge. More discussion on this topic can be found in Annex X of Part 1, Basis of Design.

The present version of this JCSS Probabilistic Model Code is available on the Internet at www.jcss.ethz.ch. It is intended that the document will be adapted and extended a number of times in the years to come. To obtain the best possible and most efficient improvements, all users are invited to send their questions, comments and suggestions via www.jcss.ethz.ch.

The JCSS hopes that this document - the most recent piece of its pre-codification work since its creation in 1972 - will find its way into the practical work of structural engineers.

Michael Faber, Reporter of the JCSS
Ton Vrouwenvelder, President of the JCSS
March 2001




Joint Committee on Structural Safety - 12th draft - JCSS-OSTL/DIA/VROU-10-11-2000

PROBABILISTIC MODEL CODE

Part 1 - BASIS OF DESIGN


Contents

1. Introduction
2. Requirements
2.1. Basic requirements
2.2. Reliability differentiation
2.3. Requirements for durability
3. Principles of limit state design
3.1. Limit states and adverse states
3.2. Limit State Function
3.3. Design situations
4. Basis of uncertainty modelling
4.1. Basic variables
4.2. Types of uncertainty
4.3. Definition of populations
4.4. Hierarchy of uncertainty models
5. Models for physical behaviour
5.1. General
5.2. Action models
5.3. Geometrical models
5.4. Material models
5.5. Mechanical models
5.6. Model uncertainties
6. Reliability
6.1. Reliability measures
6.2. Component reliability and system reliability
6.3. Methods for reliability analysis and calculation
7. Target Reliability
7.1. General Aspects
7.2. Recommendations
7.2.1. Ultimate Limit States
7.2.2. Serviceability Limit State
8. Annex A: The Robustness Requirement
8.1. Introduction
8.2. Structural and nonstructural measures
8.3. Simplified design procedure
8.4. Recommendation
9. Annex B: Durability
9.1. Probabilistic Formulations
9.2. Modelling of deterioration processes
9.3. Effect of inspection
9.4. Example
10. Annex C: Reliability Analysis Principles
10.1. Introduction
10.2. Concepts
10.2.1. Limit States
10.2.2. Structural Reliability
10.2.3. System Concepts
10.3. Component Reliability Analysis
10.3.1. General Steps
10.3.2. Probabilistic Modelling
10.3.3. Computation of Failure Probability
10.3.4. Recommendations
10.4. System Reliability Analysis
10.4.1. Series Systems
10.4.2. Parallel Systems
10.5. Time-Dependent Reliability
10.5.1. General Remarks
10.5.2. Transformation to Time-Independent Formulations
10.5.3. Introduction to Crossing Theory
10.6. Figures
10.7. Bibliography
11. Annex D: Bayesian Interpretation of Probabilities
11.1. Introduction
11.2. Discussion
11.3. Conclusion


1. Introduction

This part treats the general principles for a probabilistic design of load-bearing structures. The more detailed aspects dealing with the probabilistic description of loads are treated in Part 2. In the same way, the probabilistic description of structural resistance parameters is treated in Part 3. This part does not give detailed information about methods for the calculation of probabilities; it is assumed that the user of a probabilistic code is familiar with such methods. A clause on the interpretation of the probabilities treated in this document is provided in Annex D.

2. Requirements

2.1. Basic requirements

Structures and structural elements shall be designed, constructed and maintained in such a way that they are suited for their use during the design working life and in an economic way. In particular they shall, with appropriate levels of reliability, fulfil the following requirements:
- They shall remain fit for the use for which they are required (serviceability limit state requirement).
- They shall withstand extreme and/or frequently repeated actions occurring during their construction and anticipated use (ultimate limit state requirement).
- They shall not be damaged by accidental events like fire, explosions, impact or consequences of human errors, to an extent disproportionate to the triggering event (robustness requirement, see Annex A).

2.2. Reliability differentiation

The expression "with appropriate levels of reliability" used above means that the degree of reliability should be adopted to suit the use of the structure, the type of structure or structural element and the situation considered in the design, etc. The choice of the various levels of reliability should take into account the possible consequences of failure in terms of risk to life or injury, the potential economic losses and the degree of social inconvenience, as described in chapter 8. It should also take into account the amount of expense and effort required to reduce the risk of failure. It is further noted, that the

Page 7: Probabilistic Model Code Jcss Pmc2000

4

term "failure" as used in this document refers to either inadequate strength or inadequate serviceability of the structure. The consequences of a failure generally depend on the mode of failure, specially in those cases when the risk to human life or injury exists. In order to provide a structure corresponding to the requirements and to the assumptions made in the design, appropriate quality measures shall be adopted. These measures comprise definition of reliability requirements, organisational measures and controls at the stages of design, execution and use and the maintenance of the structure.

2.3. Requirements for durability

The durability of the structure in its environment shall be such that it remains fit for use during its design working life. This requirement can be considered in one of the following ways:
a) by using materials that, if well maintained, will not degenerate during the design working life;
b) by giving such dimensions that deterioration during the design working life is compensated;
c) by choosing a shorter lifetime for structural elements, which may be replaced one or more times during the design working life;
d) by inspection at fixed or condition-dependent intervals and appropriate maintenance activities.
In all cases the reliability requirements for long and short term periods should be met. Analysis aspects of durability are described in Annex B.

3. Principles of limit state design

3.1. Limit states and adverse states

The structural performance of a whole structure or part of it should be described with reference to a specified set of limit states which separate desired states of the structure from adverse states. The limit states are divided into the following two basic categories:


- the ultimate limit states, which concern the maximum load-carrying capacity as well as the maximum deformability;
- the serviceability limit states, which concern the normal use.

The exceedance of a limit state may be irreversible or reversible. In the irreversible case the damage or malfunction associated with the limit state being exceeded will remain until the structure has been repaired. In the reversible case the damage or malfunction will remain only as long as the cause of the limit state being exceeded is present. As soon as this cause ceases to act, a transition from the adverse state back to the desired state occurs.

It is further noted here that in some cases a limit between the aforementioned limit state types may be defined. This can be done by an artificial discretization of the continuous situation between the serviceability and the ultimate limit state. By applying such a procedure a so-called “partial damage limit state” can be defined. For example, in the case of earthquake damage of plant structures, such a limit state is associated with the safe shutdown of the plant.

Ultimate limit states may correspond to the following adverse states:
- loss of equilibrium of the structure or of a part of the structure, considered as a rigid body (e.g. overturning);
- attainment of the maximum resistance capacity of sections, members or connections by rupture or excessive deformations;
- rupture of members or connections caused by fatigue or other time-dependent effects;
- instability of the structure or part of it;
- sudden change of the assumed structural system to a new system (e.g. snap-through).

The exceedance of an ultimate limit state is almost always irreversible and the first time that this occurs causes failure.

Serviceability limit states may correspond to the following adverse states:
- local damage (including cracking) which may reduce the durability of the structure or affect the efficiency or appearance of structural or non-structural elements;
- observable damage caused by fatigue or other time-dependent effects;
- unacceptable deformations which affect the efficient use or appearance of structural or non-structural elements or the functioning of equipment;
- excessive vibrations which cause discomfort to people or affect non-structural elements or the functioning of equipment.


In the cases of permanent local damage or permanent unacceptable deformations the exceedance of a serviceability limit state is irreversible and the first time that this occurs causes failure. In other cases the exceedance of a serviceability limit state may be reversible, and then failure occurs:
a) the first time the serviceability limit state is exceeded, if no exceedance is considered as acceptable;
b) if exceedance is acceptable but the time during which the structure is in the undesired state is longer than specified;
c) if exceedance is acceptable but the number of times the serviceability limit state is exceeded is larger than specified;
d) if a combination of the above criteria occurs.
These cases may involve temporary local damage (e.g. temporarily wide cracks), temporary large deformations and vibrations. Limit values for the serviceability limit state should be defined on the basis of utility considerations.

3.2. Limit State Function

For each specific limit state the relevant basic variables should be identified, i.e. the variables which characterize:
- actions and environmental influences;
- properties of materials and soils;
- geometrical parameters.
Such variables may be time dependent. Models, which describe the behaviour of a structure, should be established for each limit state. These models include mechanical models, which describe the structural behaviour, as well as other physical or chemical models, which describe the effects of environmental influences on the material properties. The parameters of such models should in principle be treated in the same way as basic variables. Serviceability constraints (limit values according to 4.1) should in principle be regarded as random and may in many cases be treated in the same way as basic variables. Where calculation models are available, the limit state can be described with the aid of a function, g, of the basic variables X(t) = (X1(t), X2(t), ...) so that


g(X(t)) = 0     (1)

Eq. (1) is called the limit state equation, and

g(X(t)) < 0     (2)

identifies the adverse state. In a component analysis where there is one dominating failure mode the limit state condition can normally be described by one equation according to eq. (1). In a system analysis, where more than one failure mode may be determining, there are several such equations.
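As a purely illustrative aid (not part of the code text), the failure probability P[g(X) < 0] of eq. (2) for a time-invariant component limit state can be estimated by crude Monte Carlo simulation. The sketch below assumes an elementary margin g = R - S with a lognormal resistance R and a Gumbel annual-maximum load S; the distribution types and all parameter values are assumptions chosen only for the example.

# Sketch: crude Monte Carlo estimate of P[g(X) < 0] for g = R - S.
# Distribution types and parameters are illustrative assumptions, not code values.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 1_000_000

# Resistance R: lognormal with mean 300 and CoV 0.10 (assumed)
mean_R, cov_R = 300.0, 0.10
sigma_ln = np.sqrt(np.log(1.0 + cov_R**2))
mu_ln = np.log(mean_R) - 0.5 * sigma_ln**2
R = rng.lognormal(mu_ln, sigma_ln, n)

# Annual-maximum load S: Gumbel with mean 150 and CoV 0.25 (assumed)
mean_S, cov_S = 150.0, 0.25
scale_g = mean_S * cov_S * np.sqrt(6.0) / np.pi        # Gumbel scale parameter
loc_g = mean_S - 0.5772156649 * scale_g                # Gumbel location parameter
S = rng.gumbel(loc_g, scale_g, n)

g = R - S                                              # limit state function, eq. (1)
pf = np.mean(g < 0.0)                                  # estimate of P[g(X) < 0], eq. (2)
beta = -stats.norm.ppf(pf)                             # generalized reliability index
print(f"Pf ≈ {pf:.2e}, beta ≈ {beta:.2f}")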

3.3. Design situations

Actions, environmental influences and structural properties may vary with time. Such variations, which occur throughout the lifetime of the structure, should be considered by selected design situations, each one representing a certain time interval with the associated hazards, conditions and relevant structural limit states. The design situations may be classified as:
- persistent situations, which refer to conditions of normal use of the structure and are generally related to the working life of the structure;
- transient situations, which refer to temporary conditions of the structure, in terms of its use or its exposure;
- accidental situations, which refer to exceptional conditions of the structure or its exposure.

4. Basis of uncertainty modelling

4.1. Basic variables

The calculation model for each limit state considered should contain a specified set of basic variables, i.e. physical quantities which characterize actions and environmental influences, material and soil properties and geometrical quantities. The model should also contain model parameters which characterize the model itself and which are treated as basic variables (compare 4.2). Finally, there are also parameters which describe the requirements (e.g. serviceability constraints) and which may be treated as basic variables. The basic variables (in the wide sense given above) are assumed to carry the entire input information to the calculation model. The basic variables may be random variables (including the special case of deterministic variables), stochastic processes or random fields. Each basic variable is defined by a number of parameters such as the mean, the standard deviation, parameters determining the correlation structure, etc.

4.2. Types of uncertainty

Uncertainties from all essential sources must be evaluated and integrated in a basic variable model. Types of uncertainty to be taken into account are:
- intrinsic physical or mechanical uncertainty;
- statistical uncertainty, when the design decisions are based on a small sample of observations or when there are other similar conditions;
- model uncertainties (see 5.6).
Within given classes of structural design problems the types of probability distributions of the basic variables should be standardized. These standardizations are defined in Parts 2 and 3 of the Probabilistic Model Code.

4.3. Definition of populations

The random quantities within a reliability analysis should always be related to a meaningful and consistent set of populations. The description of the random quantities should correspond to this set, and the resulting failure probability is only valid for the same set. The basis for the definition of a population is in most cases the physical background of the variable. Factors which may define the population are:
- the nature and origin of a random quantity;
- the spatial conditions (e.g. the geographical region considered);
- the temporal conditions (e.g. the intended time of use of the structure considered).
The choice of a population is to some extent a free choice of the designer. It may depend on the objective of the analysis, the amount and nature of the available data and the amount of work that can be afforded.


In connection with the theoretical treatment of data and with the evaluation of observations it is often convenient to divide the largest population into sub-populations, which in turn are further divided into smaller sub-populations, etc. It is then possible to study and distinguish variability within a population and variability between different populations. In an analysis for a specific structure it may be efficient to define a population as small as possible as far as use, shape and location of the structure are concerned (microzonation). When the results are used for design in a national or international code, it may be necessary or convenient to combine the sub-populations into the larger population again in order not to obtain too complicated rules (randomizing). This means that the variability within the population is increased.

4.4. Hierarchy of uncertainty models

This section contains a convenient and recommended mathematical description, in general terms, of a hierarchical model which can be used for different kinds of actions and materials. The details of this model have to be stated more precisely for each specific variable. The model is associated with a hierarchical set of subpopulations. The hierarchical model assumes that a random quantity X can be written as a function of several variables, each one representing a specific type of variability:

X_ijk = f(Y_i, Y_ij, Y_ijk)     (3)

The Y represent various origins, time scales of fluctuation or spatial scales of fluctuation. For instance, Y_i may represent the building-to-building variation, Y_ij the floor-to-floor variation in building i and Y_ijk the point-to-point variation on floor j in building i. In a similar way, Y_i may represent the variability that is constant in time, Y_ij a slowly fluctuating time process and Y_ijk a fast fluctuating time process.
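A minimal numerical sketch of the hierarchical model of eq. (3) is given below. It assumes, purely for illustration, that f is an additive combination of independent normal terms with the building/floor/point interpretation mentioned above; the number of levels and all standard deviations are assumptions of the sketch.

# Sketch: sampling a hierarchical variability model X_ijk = f(Y_i, Y_ij, Y_ijk).
# Here f is taken as a simple sum of independent normal terms (an assumption).
import numpy as np

rng = np.random.default_rng(0)

n_buildings, n_floors, n_points = 5, 10, 20
mu = 2.0          # overall mean of the quantity X (illustrative)
s_building = 0.3  # building-to-building standard deviation (Y_i)
s_floor = 0.2     # floor-to-floor standard deviation within a building (Y_ij)
s_point = 0.1     # point-to-point standard deviation within a floor (Y_ijk)

Y_i = rng.normal(0.0, s_building, size=(n_buildings, 1, 1))
Y_ij = rng.normal(0.0, s_floor, size=(n_buildings, n_floors, 1))
Y_ijk = rng.normal(0.0, s_point, size=(n_buildings, n_floors, n_points))

X = mu + Y_i + Y_ij + Y_ijk    # eq. (3) with f(.) taken as a sum of contributions

# Variability within one floor versus between buildings
print("std within a single floor :", X[0, 0, :].std())
print("std of building averages  :", X.mean(axis=(1, 2)).std())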

5. Models for physical behaviour

5.1. General

Calculation models shall describe the structure and its behaviour up to the limit state under consideration, accounting for relevant actions and environmental influences. Models should generally be regarded as simplifications which take account of the decisive factors and neglect the less important ones. One can often distinguish between:
- action models;
- structural models, which give action effects (internal forces, moments, etc.);
- resistance models, which give resistances corresponding to the action effects and are based on material models and geometry models.
However, in some cases it is not possible or convenient to make this distinction, for example if the instability or loss of equilibrium of an entire structural system is studied, or if interactions between loads and structural response are of interest.

5.2. Action models

A complete action model should describe several properties of the action, such as its magnitude, position, direction, duration, etc. In some cases there is an interaction between the different properties and also between these properties and the response of the structure. Such interactions should be taken into account. The magnitude F of an action may often be described by two different types of variables, so that

F = ϕ(Fo, W)     (4)

where
ϕ is an appropriate function;
Fo is a basic action variable, often with time- and space-dependent variations (random or non-random), and is generally independent of the structure;
W is a random or non-random variable, or a random field, which may depend on the structural properties and which transforms Fo to F.

Eq. (4) should be regarded as a symbolic expression where Fo and W may represent several variables. One example may be snow load, where Fo is the time-dependent snow load on the ground and W is the conversion factor from snow load on the ground to snow load on the roof, which normally is assumed to be time independent.
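The snow load example of eq. (4) can be sketched numerically as follows. The product form ϕ(Fo, W) = W * Fo, the Gumbel model for the ground snow load Fo and the lognormal model for the conversion factor W, together with their parameters, are illustrative assumptions only; the standardized action models are given in Part 2.

# Sketch of eq. (4): roof snow load F = phi(Fo, W) = W * Fo,
# with Fo the annual-maximum ground snow load and W the ground-to-roof conversion factor.
import numpy as np

rng = np.random.default_rng(42)
n = 200_000

# Fo: annual-maximum ground snow load, Gumbel (assumed parameters, kN/m^2)
Fo = rng.gumbel(loc=0.8, scale=0.2, size=n)

# W: conversion factor, lognormal with mean about 0.8 and CoV 0.15 (assumed)
cov_W = 0.15
sigma = np.sqrt(np.log(1.0 + cov_W**2))
W = rng.lognormal(np.log(0.8) - 0.5 * sigma**2, sigma, size=n)

F = W * Fo    # roof snow load; here phi(Fo, W) is a simple product (assumption)

print("mean roof load :", F.mean())
print("98% fractile   :", np.quantile(F, 0.98))   # a characteristic-value type quantity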


Further information on action models is provided in part 2. It is noted that action models may include material properties (earthquake action depends for example on material damping).

5.3. Geometrical models

A structure can generally be described by a model consisting of one-dimensional elements (beams, columns, cables, arches, etc.), two-dimensional elements (slabs, walls, shells, etc.) and three-dimensional elements. The geometrical quantities which are included in the model generally refer to nominal values, i.e. the values given in drawings, descriptions, etc. Normally, the geometrical quantities of a real structure differ from their nominal values, i.e. the structure has geometrical imperfections. If the structural behaviour is sensitive to such imperfections, these shall be included in the model. In many cases the deformation of a structure causes significant deviations of geometrical quantities from their nominal values. If such deformations are of importance for the structural behaviour, they have to be considered in the design in principle in the same way as imperfections. The effects of such deformations are generally denoted geometrically non-linear or second-order effects and should be accounted for.

5.4. Material models

When strength or stiffness is considered, the material model normally consists of relations between forces or stresses and deformations, i.e. constitutive relationships. The parameters of such relations are the modulus of elasticity, the yield limit, the ultimate strength, etc., which generally are considered as random variables. Sometimes they are time dependent or space dependent. There is often a correlation between the parameters, e.g. between the modulus of elasticity and the ultimate strength of concrete. Other material properties, e.g. the resistance against material deterioration, may often be treated in a similar way. However, the principles are strongly dependent on the type of material and the property considered. Further information related to models of several material types is given in Part 3.


5.5. Mechanical models

The following types of mechanical models may be distinguished:
a) models describing static response
b) models describing dynamic response
c) models for fatigue

a) Models describing static response

In almost all design calculations some assumptions concerning the relation between forces or moments and deformations (or deformation rates) are necessary. These assumptions can vary and depend on the purpose and type of calculation. The most general relationship regarding structural response is considered to be elastic behaviour developing into plastic behaviour in certain parts of the structure at high action effects, with intermediate stages occurring in other parts of the structure. Such relationships may be used generally. However, the use of any theory taking into account inelastic or post-critical behaviour may have to take into account repetitions of variable actions that are free. Such actions may cause great variations of the action effects, repeated yielding and exhaustion of the deformation capacity. The theory of elasticity may be regarded as a simplification of a more general theory and may generally be used provided that forces and moments are limited to those values for which the behaviour of the structure is still considered as elastic. However, the theory of elasticity may also be used in other cases if it is applied as a conservative approximation. Theories in which fully developed plasticity is assumed to occur in certain zones of the structure (plastic hinges in beams, yield lines in slabs, etc.) may also be used, provided that the deformations which are needed to ensure plastic behaviour occur before the ultimate limit state is reached. Thus the theory of plasticity should be used with care to determine the load-carrying capacity of a structure if this capacity is limited by:
- brittle failure
- failure due to instability

b) Models for dynamic response

In most cases the dynamic response of a structure is caused by a rapid variation of the magnitude, position or direction of an action. However, a sudden change of the stiffness or resistance of a structural element may also cause dynamic behaviour.


The models for dynamic response consist in general of:
• a stiffness model
• a damping model
• an inertia model

c) Models for fatigue

Fatigue models are used for the description of fatigue failures caused by fluctuating actions. Two types of models are distinguished:
a) the S-N model, based on experiments
b) the fracture mechanics model

It is further noted here that other types of degradation, such as chemical attack or fire, can modify the parameters entering the aforementioned models, or the models themselves.

5.6. Model uncertainties

A calculation model is a physically based or empirical relation between relevant variables, which are in general random variables:

Y = f(X1, X2, ..., Xn)     (5)

where
Y = model output
f(...) = model function
Xi = basic variables

The model f(...) may be complete and exact, so that, if the values of Xi are known in a particular experiment (from measurements), the outcome Y can be predicted without error. This, however, is not normally the situation. In most cases the model will be incomplete and inexact. This may be the result of lack of knowledge, or a deliberate simplification of the model for the convenience of the designer. The difference between the model prediction and the real outcome of the experiment can be written as:

Y = f′(X1, ..., Xn, θ1, ..., θm)     (6)


θi are referred to as parameters which contain the model uncertainties and are treated as random variables. Their statistical properties can in most cases be derived from experiments or observations. The mean of these parameters should be determined in such a way that, on average, the calculation model correctly predicts the test results.
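A common way of quantifying a single multiplicative model uncertainty parameter θ is to compare test results with model predictions, θ = Y_test / Y_model, and to fit a distribution to the observed ratios. The sketch below illustrates this procedure with invented data; the lognormal choice and all numbers are assumptions of the sketch, not prescribed values.

# Sketch: estimating a multiplicative model uncertainty theta = Y_test / Y_model
# from paired test results and model predictions (all data invented for illustration).
import numpy as np

y_model = np.array([120.0, 95.0, 150.0, 80.0, 110.0, 130.0])   # model predictions
y_test  = np.array([131.0, 99.0, 148.0, 88.0, 118.0, 127.0])   # corresponding test results

theta = y_test / y_model            # observed model-uncertainty ratios

# Lognormal fit (an assumption): work with ln(theta)
ln_theta = np.log(theta)
mean_ln, std_ln = ln_theta.mean(), ln_theta.std(ddof=1)

mean_theta = np.exp(mean_ln + 0.5 * std_ln**2)
cov_theta = np.sqrt(np.exp(std_ln**2) - 1.0)

print(f"mean of theta ≈ {mean_theta:.3f}  (close to 1.0 means an unbiased model)")
print(f"CoV of theta  ≈ {cov_theta:.3f}")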

6. Reliability

6.1. Reliability measures

A standard reliability measure may be chosen to be the generalized reliability index. It is defined as:

β = -Φ⁻¹(Pf)     (7)

where Pf is the probability of failure and Φ⁻¹(·) is the inverse of the standard normal distribution function. Another equivalent reliability measure is the probability of the complement of the adverse event,

Ps = 1 - Pf     (8)

The probability Pf should be calculated on the basis of the standardized joint distribution type of the basic variables and the standardized distributional formalism for dealing with both model uncertainty and statistical uncertainty. In special situations, distribution types other than the standardized ones can be relevant for the reliability evaluation. In such cases the distributional assumptions must be tested on a suitably representative set of observation data. Reliability analysis principles, including time-dependent reliability problems, are described in Annex C.
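Eq. (7) is a one-line transformation between the failure probability and the generalized reliability index; a short sketch using the standard normal distribution function is given below (the numerical values are illustrative).

# Sketch of eq. (7): beta = -Phi^{-1}(Pf) and its inverse Pf = Phi(-beta).
from scipy.stats import norm

def beta_from_pf(pf: float) -> float:
    """Generalized reliability index from the failure probability, eq. (7)."""
    return -norm.ppf(pf)

def pf_from_beta(beta: float) -> float:
    """Failure probability corresponding to a given reliability index."""
    return norm.cdf(-beta)

for pf in (1e-3, 1e-4, 1e-5, 1e-6):
    print(f"Pf = {pf:.0e}  ->  beta = {beta_from_pf(pf):.2f}")

print(f"beta = 4.2  ->  Pf = {pf_from_beta(4.2):.1e}")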

6.2. Component reliability and system reliability

Component reliability is the reliability of one single structural component which has one dominating failure mode.


System reliability is the reliability of a structural system composed of a number of components, or the reliability of a single component which has several failure modes of nearly equal importance. The following types of systems can be distinguished:
- redundant systems, where the components are “fail safe”, i.e. local failure of one component does not directly result in failure of the structure;
- non-redundant systems, where local failure of one component leads rapidly to failure of the structure.

Probabilistic structural design is primarily concerned with component behaviour. System behaviour is, however, of concern because system failure is usually the most serious consequence of structural failure. Therefore the likelihood of system failure following an initial component failure should be assessed. In particular, it is necessary to determine the system characteristics in relation to damage tolerance or robustness with respect to accidental events. The requirements for the reliability of the components of a system should depend upon the system characteristics. A probabilistic system analysis should therefore be carried out to establish:
- the redundancy (alternative load-carrying paths);
- the state and complexity of the structure (multiple failure modes).

Further aspects of system reliability are provided in Annex C.
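For illustration, the simple first-order bounds for a series system, together with a Monte Carlo estimate for components with equicorrelated normal safety margins, can be evaluated as sketched below; the number of components, the component reliability index and the correlation are assumptions of the sketch, not requirements of this code.

# Sketch: series-system failure probability for m components with equal
# reliability index beta_c and equicorrelated normal safety margins.
# The numbers (m, beta_c, rho) are illustrative assumptions.
import numpy as np
from scipy.stats import norm

m, beta_c, rho = 5, 3.1, 0.5
p_c = norm.cdf(-beta_c)                       # component failure probability

# Simple first-order series bounds: max(p_i) <= p_sys <= sum(p_i)
lower, upper = p_c, min(1.0, m * p_c)

# Monte Carlo with an equicorrelated normal model: Z_i = sqrt(rho)*T + sqrt(1-rho)*E_i
rng = np.random.default_rng(7)
n = 500_000
T = rng.standard_normal((n, 1))
E = rng.standard_normal((n, m))
Z = np.sqrt(rho) * T + np.sqrt(1.0 - rho) * E
p_sys = np.mean((Z < -beta_c).any(axis=1))    # failure of any component fails the series system

print(f"component Pf      : {p_c:.2e}")
print(f"series bounds     : [{lower:.2e}, {upper:.2e}]")
print(f"Monte Carlo p_sys : {p_sys:.2e}")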

6.3. Methods for reliability analysis and calculation

The numerical value of the reliability measure is obtained by a reliability analysis and calculation method (see Annex C). The reliability method used should be capable of producing a sensitivity analysis, including importance factors for the uncertain parameters. The choice of the method should in general be justified. The justification can, for example, be based on a comparison with another relevant computation method or on reference to appropriate literature. Due to the computational complexity, a method giving an approximation to the exact result is generally applied. Two fundamental accuracy requirements are:
- overestimation of the reliability due to the use of an approximate calculation method shall be within limits generally accepted for the specific type of structure;
- the overestimation of the reliability index should not exceed 5 % with respect to the target level.


The accuracy of the reliability calculation method is linked to the sensitivity with respect to structural dimensions and material properties in the resulting design.

7. Target Reliability

7.1. General Aspects

In terms of a reliability-based approach, the structural risk acceptance criteria correspond to a required minimum reliability, herein defined as the target reliability. The requirements for the safety of the structure are consequently expressed in terms of the accepted minimum reliability index or the accepted maximum failure probability. In a rational analysis the target reliability is considered as a control parameter subject to optimization: the parameter assigns a particular investment to the material placed in the structure, and the more material - invested in the right places - the smaller the expected loss. Such optimization is mainly possible when economic loss components dominate over life, injury and cultural components. When the expected loss of life or limb is important, the optimal reliability level becomes more controversial. Frequently, this leads to the problem of the economic equivalent of human life; risk-benefit analyses are then applied to circumvent this difficulty, and the reliability of the system is translated into a cost per life saved. The target reliability may then be chosen such that the cost per life saved is at acceptable levels (for example, comparable to other similar systems). In a practical approach the required reliability of the structure is controlled by:
i) a set of assumptions about quality assurance and quality management measures; these measures are for example related to design and construction supervision and are intended to avoid gross errors;
ii) formal failure probability requirements, conditional upon these assumptions, defined by specified target values for the various classes of structures and structural members.

7.2. Recommendations

Target reliability values are provided in the next paragraphs. They are based on optimization procedures and on the assumption that for almost all engineering facilities the only reasonable reconstruction policy is systematic rebuilding or repair.

7.2.1. Ultimate Limit States

Target reliability values for ultimate limit states are proposed in Table 1. The values in Table 1 are obtained based on cost-benefit analysis for the public at characteristic and representative but simple example structures and are compatible with calibration studies and statistical observations.

Table 1: Tentative target reliability indices β (and associated target failure rates) related to a one-year reference period and ultimate limit states

Relative cost of safety measure | Minor consequences of failure | Moderate consequences of failure | Large consequences of failure
Large (A)  | β = 3.1 (pF ≈ 10⁻³) | β = 3.3 (pF ≈ 5·10⁻⁴) | β = 3.7 (pF ≈ 10⁻⁴)
Normal (B) | β = 3.7 (pF ≈ 10⁻⁴) | β = 4.2 (pF ≈ 10⁻⁵) | β = 4.4 (pF ≈ 5·10⁻⁶)
Small (C)  | β = 4.2 (pF ≈ 10⁻⁵) | β = 4.4 (pF ≈ 5·10⁻⁶) | β = 4.7 (pF ≈ 10⁻⁶)

The shaded value in Table 1, β = 4.2 (normal relative cost of safety measures, moderate consequences of failure), should be considered as the most common design situation. In order to make the right choice in this table the following guidelines may be of help:

♦ Consequence classes

A classification into consequence classes is based on the ratio ρ, defined as the ratio between total costs (i.e. construction costs plus direct failure costs) and construction costs.

Class 1, Minor consequences: ρ is less than approximately 2. Risk to life, given a failure, is small to negligible and economic consequences are small or negligible (e.g. agricultural structures, silos, masts).

Class 2, Moderate consequences: ρ is between 2 and 5. Risk to life, given a failure, is medium or economic consequences are considerable (e.g. office buildings, industrial buildings, apartment buildings).

Class 3, Large consequences: ρ is between 5 and 10. Risk to life, given a failure, is high, or economic consequences are significant (e.g. main bridges, theaters, hospitals, high-rise buildings).

If ρ is larger than 10 and the absolute value of H is also large, the consequences should be regarded as extreme and a full cost-benefit analysis is recommended. The conclusion might be that the structure should not be built at all.
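The classification by the cost ratio ρ described above can be expressed as a small helper function; the class boundaries follow the values given in this clause, with the treatment of the boundary values themselves being an assumption of the sketch.

# Sketch: consequence class from rho = (construction costs + direct failure costs) / construction costs.
def consequence_class(rho: float) -> str:
    if rho < 2.0:
        return "Class 1: minor consequences"
    if rho <= 5.0:
        return "Class 2: moderate consequences"
    if rho <= 10.0:
        return "Class 3: large consequences"
    return "Extreme consequences: full cost-benefit analysis recommended"

for rho in (1.5, 3.0, 8.0, 20.0):
    print(rho, "->", consequence_class(rho))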


One should be aware of the fact that the failure consequences also depend on the type of failure, which can be classified according to:
a) ductile failure with reserve strength capacity resulting from strain hardening
b) ductile failure with no reserve capacity
c) brittle failure

Consequently, a structural element which would be likely to collapse suddenly without warning should be designed for a higher level of reliability than one for which a collapse is preceded by some kind of warning which enables measures to be taken to avoid severe consequences. The values given relate to the structural system or, in approximation, to the dominant failure mode or the structural component dominating system failure. Therefore, structures with multiple, equally important failure modes should be designed for a higher level of reliability.

♦ Relative cost of safety measures classification

The normal class (B) should be associated with:
• medium variabilities of the total loads and resistances (0.1 < V < 0.3),
• relative cost of safety measure,
• normal design life and normal obsolescence rate, compared to construction costs of the order of 3 %.

The given values are for structures or structural elements as designed (not as built). Failures due to human error or ignorance and failures due to non-structural causes are not covered by Table 1. Values outside the given ranges may lead to a higher or lower classification. In particular, attention may be given to the following aspects:

♦ Degree of uncertainty

In the case of a large uncertainty in either loading or resistance (coefficients of variation larger than 40 %), as for instance in many accidental and seismic situations, a lower reliability class should be used. The point is that for these large uncertainties the additional costs to achieve a high reliability are prohibitive. If, on the other hand, both acting and resisting variables have coefficients of variation smaller than 10 %, as for most dead loads and well-known small resistance variabilities, a higher class can be achieved with very little effort and this should be done.

♦ Quality assurance and inspections


Quality assurance (for new structures) and inspections (for existing structures) increase costs. This in itself would lead to a lower reliability class. On the other hand, due to QA and inspections the uncertainty will normally decrease and a higher class becomes economically more attractive. General rules are difficult to give.

♦ Existing structures

For existing structures the costs of achieving a higher reliability level are usually high compared to those for structures under design. For this reason the target level for existing structures usually should be lower.

♦ Service life and/or obsolescence

For structures designed for a short service life or otherwise rapid obsolescence (say less than 10 years) the beta values can be lowered by half a class or one class. By definition, serviceability failures are not associated with loss of human life or limb. For existing structures the demand will be more related to the actual situation in performance and use. No general rules are given in this document.

7.2.2. Serviceability Limit State

When setting target values for serviceability limit states (SLS) it is important to distinguish between irreversible and reversible serviceability limit states. Target values for SLS can be derived based on decision analysis methods. For irreversible serviceability limit states tentative target values are given in Table 2. A variation from the target serviceability indices of the order of 0.3 can be considered. For reversible serviceability limit states no general values are given.

Table 2: Tentative target reliability indices (and associated probabilities) related to a one-year reference period and irreversible serviceability limit states

Relative cost of safety measure | Target index (irreversible SLS)
High   | β = 1.3 (pF ≈ 10⁻¹)
Normal | β = 1.7 (pF ≈ 5·10⁻²)
Low    | β = 2.3 (pF ≈ 10⁻²)


8. Annex A: The Robustness Requirement

8.1. Introduction

In clause 2.1 the following robustness requirement has been formulated:

“A structure shall not be damaged by events like fire, explosions or consequences of human errors, deterioration effects, etc. to an extent disproportionate to the severity of the triggering event.”

This annex is intended to give some further guidance. No attention is paid to terrorist actions and actions of war. The general idea is that, whatever the design, deliberate destructive actions can always be successful.

8.2. Structural and nonstructural measures

In order to attain adequate safety in relation to accidental loads, one or more of the following strategies may be followed:

1. reduction of the probability that the action occurs or reduction of the action intensity (prevention)
2. reduction of the effect of the action on the structure (protection)
3. making the structure strong enough to withstand the loads
4. limiting the amount of structural damage
5. mitigation of the consequences of failure

Strategies 1, 2 and 5 are so-called non-structural measures. These measures are considered to be very effective for some specific accidental actions. Strategies 3 and 4 are so-called structural measures. In general, strategy 3 is extremely expensive in most cases. Strategy 4, on the other hand, accepts that some members fail, but requires that the total damage is limited. This means that the structure should have sufficient redundancy and possibilities to mobilise so-called alternative load paths.

In the ideal design procedure, the occurrence and effects of an accidental action (impact, explosion, etc.) are simulated for all possible action scenarios. The damage to the structural members is calculated and the stability of the remaining structure assessed. Next, the consequences are estimated in terms of the number of casualties and economic losses. Various measures can then be compared on the basis of economic criteria.


8.3. Simplified design procedure

The approach sketched in 8.2 has two disadvantages:
(1) it is extremely complicated;
(2) it does not work for unforeseeable hazards.

As a result other, more global design strategies have been developed, like the classical requirements on sufficient ductility and tying of elements. Another approach is to consider the situation that a structural element (beam, column) has been damaged, by whatever event, to such an extent that its normal load-bearing capacity has vanished almost completely. For the remaining part of the structure it is then required that, for some relatively short period of time (repair period T), the structure can withstand the "normal" loads with some prescribed reliability:

P(R < S in T | one element removed) < ptarget     (A1)

The target reliability in (A1) depends on:
- the normal safety target for the building;
- the period under consideration (hours, days or months);
- the probability that the element under consideration is removed (by causes other than those already considered in the design).

The probability that some element is removed by some cause not yet considered in the design depends on the sophistication of the design procedure and on the type of structure. For a conventional structure it should, at least in theory, be possible to include all relevant collapse origins in the design. Of course, it will always be possible to think of failure causes not covered by the design, but those will have a remote likelihood and may be disregarded on the basis of decision-theoretical arguments. For unconventional structures this certainly will not be the case.
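Eq. (A1) can be evaluated, for a given damage scenario, with the same tools as any other conditional reliability problem. The sketch below checks the conditional failure probability of a structure with one element removed against a target value by Monte Carlo simulation; the residual-resistance model, the load model for the repair period and all numbers are assumptions of the sketch.

# Sketch of eq. (A1): P(R < S during repair period T | one element removed) < p_target.
# Residual resistance and load models are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
n = 500_000

# Residual system resistance after removal of one element (lognormal, assumed)
R_damaged = rng.lognormal(mean=np.log(200.0), sigma=0.15, size=n)

# Maximum "normal" load during an assumed repair period T of a few months (Gumbel, assumed)
S_T = rng.gumbel(loc=120.0, scale=15.0, size=n)

p_cond = np.mean(R_damaged < S_T)     # conditional failure probability, left-hand side of (A1)
p_target = 1e-2                       # assumed target for the damaged condition

print(f"P(R < S in T | one element removed) ≈ {p_cond:.2e}")
print("requirement (A1) satisfied:", p_cond < p_target)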

8.4. Recommendation

For unconventional structures, as for instance large structures, the probability of having some unspecified failure cause is substantial. If in addition new materials or new design concepts are used, unexpected failure causes become more likely. This would indicate that for unconventional structures the simplified approach should be recommended. For conventional structures there is a choice:


(1) One might argue that, as one never succeeds in dealing with all failure causes explicitly in a satisfactory way, there is no use in making refined analyses including system effects, accidental actions and so on; this leads to the use of the simplified procedure.

(2) One might also eliminate the use of the explicit robustness requirement (A1) as much as possible by taking as many aspects as possible explicitly into the design.

Stated as such, it seems that the second approach is more rational, as it offers the possibility to reduce the risks in the most economical way, e.g. by sprinklers (for fire), barriers (for collision), QA (for errors), relief openings (for explosions), artificial damping (for earthquakes), maintenance (for deterioration) and so on.


9. Annex B: Durability

9.1. Probabilistic Formulations

Loads as well as material properties may vary in time as stationary or non-stationary processes. Time may also be present in the limit state function as an explicit parameter. As a result, the failure probability of a structure is also time dependent. The general formulation of the failure probability for a period of time t may be presented as:

PF(t) = P[ min g(x(τ); τ) < 0  for 0 < τ < t ]     (B1)

where
g(.) = limit state function
x(τ) = vector of basic variables at time τ
t = period of time under consideration
τ = time

The failure may be of the ULS as well as of the SLS type. One should keep in mind that, even in the case of a non-deteriorating, time-independent resistance and a stationary loading condition, the failure probability is time dependent due to the random fluctuations of the load. This, however, is usually not considered as a durability problem. Given (B1), the conditional failure rate (also referred to as the risk function) at time t may be found as:

r(t) = P[ failure in (t, t+Δt) | survival up to t ] / Δt = pF(t) / (1 - PF(t))     (B2)

where

pF(t) = dPF(t)/dt     (B3)

is the failure time density. For small values of t, the failure probability PF(t) is close to zero, which makes the conditional failure rate and the density almost numerically equal. For durability problems, the conditional failure rate is usually increasing in time. Reliability limits set in section 7 may be related to (B2) or (B3), whichever is appropriate. If failure of a structural element leads automatically to its replacement by a similar element, one may alternatively use the renewal density h, defined as:


h( t ) = [ ]

t

tttinnnumberelementoffailurePn

Δ

Δ+∑∞

=

),(1 (B4)

For small t the result will be equal to (B2) and (B3). For large t the value of h will asymptotically lead to 1/μ and where μ is the mean time to failure, defined as:

μ = ∫0∞ t pF( t ) dt = ∫0∞ [ 1 − PF( t ) ] dt   (B5)
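As a purely illustrative numerical sketch of (B2), (B3) and (B5), the fragment below assumes a Weibull lifetime model for PF(t); the distribution type and its parameters are assumptions chosen for illustration only, not values prescribed by this code.

```python
# Illustrative sketch of eqns (B2), (B3) and (B5), assuming (only for the
# purpose of illustration) a Weibull lifetime model P_F(t) = 1 - exp(-(t/eta)^k).
import numpy as np

eta, k = 80.0, 2.5                    # assumed scale [years] and shape parameters

def P_F(t):                           # cumulative failure probability
    return 1.0 - np.exp(-(t / eta) ** k)

def p_F(t):                           # failure time density, eqn (B3)
    return (k / eta) * (t / eta) ** (k - 1) * np.exp(-(t / eta) ** k)

def r(t):                             # conditional failure (hazard) rate, eqn (B2)
    return p_F(t) / (1.0 - P_F(t))

t = np.linspace(0.0, 400.0, 4001)     # time grid long enough for the integral in (B5)
mu = np.sum(1.0 - P_F(t)) * (t[1] - t[0])   # mean time to failure, eqn (B5)

print("r(10) =", r(10.0), "  r(50) =", r(50.0))   # increasing with t, as expected
print("mean time to failure mu ~", mu, "years")
```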

The calculation procedure for PF( t ) depends on the nature of the limit state function g(.). If g(.) is a smooth, monotonically decreasing function that does not depend explicitly on random process variables, the minimum value is reached at the end of the period, and we simply have:

PF( t ) = P[ g( x; t ) < 0 ]   (B6)

If g(.) depends on random process variables and, therefore, is not monotonically decreasing, we have a first passage problem. In that case the following upper bound approximation may be useful:

PF( t ) = PF( 0 ) + ∫0t ν−( τ ) dτ   (B7)

where PF(0) is the failure probability at the start and ν− is the outcrossing rate or unconditional failure rate, which is given by:

ν−( τ ) = P[ g( τ ) > 0 ∩ g( τ+Δτ ) < 0 ] / Δτ   (B8)

In general, the limit state function g(.) may be quite complex due to a combination of physical, chemical and mechanical processes. Take as an example the deterioration processes due to carbonation and/or chloride ingress of concrete. After some period of time the carbonation or chloride fronts may reach the reinforcement and corrosion may start, resulting eventually in spalling and later even in failure by collapse due to some large mechanical load (see figure B1). Many parameters like the outside climate, the cover of the concrete, the diffusion properties, the corrosion speed and so on may play a role.


Figure B1: Failure due to a combination of physical and chemical processes and a variable mechanical load

9.2. Modelling of deterioration processes

In this Annex we will restrict the discussion to a family of relatively simple damage accumulation processes that can be described by the following differential equation:

dy/dt = y^k h( z )   (B9)

where
y(t) = damage indicator
z(t) = random process of disturbances
h(.) = positive definite function of z
k = parameter determining the nature of the process

From (B9) we may arrive at:

∫y(0)y(t) y^(−k) dy = ∫0t h( z(τ) ) dτ   (B10)



Defining Ψ(y) as the integral function of y^(−k) and χ(t) as the right hand side integral of (B10), this can be written as:

Ψ( y(t) ) − Ψ( y(0) ) = χ( t )

If z(t) is stationary and ergodic, χ(t) may asymptotically be taken as

χ( t ) = t E[ h( z(t) ) ]   (B11)

implying that the damage increases smoothly. Failure will occur if the damage y(t) exceeds some critical value ycr, which leads finally to the following expression for the limit state function:

g( t ) = Ψ( ycr(t) ) − Ψ( y(0) ) − χ( t )   (B12)

The critical value ycr may be a constant or time dependent. If ycr is a constant we may use (B6) to find the failure probability. If ycr is time dependent we have a first passage problem.

Characteristic examples

1. Abrasion / corrosion modelling Abrasion and/or corrosion mechanisms can be modelled by k=0 and h(z) = z. In that case (B9) reduces to:

dy/dt = z( t )

For abrasion or corrosion the damage parameter y corresponds to the thickness of the lost material and z is the abrasion or corrosion rate. In this case Ψ is simply equal to y itself. Assuming that z(t) is a stationary and ergodic random process with mean μz, we may use (B12) and arrive at:

g( t ) = ycr − yo − μz t

The value yo may be 0 (or random) and the critical value ycr may be related to the load and material strength, for instance:


ycr = do − S/f

where do is the original material thickness, S the load per unit length and f the material rupture strength. It can easily be seen that ycr is constant in time for a constant load S and that ycr is time dependent for a fluctuating load.

2. Duration of load
We consider again the case k=0 and h(z) = z. Let now, however, y represent the relative reduction of the material strength R, that is R(t) = Ro(1 − y). Let further the disturbance z be proportional to the mechanical load S. In other words: the presence of a load will lead to a damage or strength reduction, and more so if the load is higher. Such a model can be used to represent duration of load effects. If we define z = S/So, with So some random material parameter, we arrive at:

g( t ) = ycr − yo − μS t / So

Letting yo = 0 and letting ycr correspond to R(t) = Ro(1 − ycr) = S(t), we finally arrive at:

g( t ) = ( 1 − μS t / So ) − S( t ) / Ro

or equivalently:

g( t ) = Ro( 1 − μS t / So ) − S( t )

Again, if S is a constant load we may use (B6); if not, we have a first passage problem. The resulting time dependent strength for a constant load S is presented in figure B2.


Figure B2: Load duration dependent strength under constant load
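To illustrate how PF(t) may be evaluated for such simple damage models, the following Monte Carlo sketch treats the abrasion/corrosion limit state of example 1, g(t) = (do − S/f) − yo − μz t, under a constant load so that (B6) applies. All distribution types and parameter values below are illustrative assumptions, not values prescribed by this code.

```python
# Monte Carlo sketch for example 1 (abrasion/corrosion) with a constant load,
# i.e. eqn (B6) applies:  g(t) = (d0 - S/f) - y0 - mu_z * t.
# All distributions and numbers are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(1)
N = 200_000

d0   = 10.0                                   # original material thickness [mm]
S    = rng.normal(50.0, 10.0, N)              # load per unit length (assumed normal)
f    = rng.lognormal(np.log(20.0), 0.10, N)   # rupture strength (assumed lognormal)
y0   = 0.0                                    # initial loss of thickness assumed zero
mu_z = rng.lognormal(np.log(0.05), 0.30, N)   # mean corrosion rate [mm/year] (assumed)

def P_F(t):
    g = (d0 - S / f) - y0 - mu_z * t          # limit state at time t
    return np.mean(g < 0.0)

for t in (50.0, 100.0, 150.0):
    print(f"t = {t:5.0f} years   P_F(t) ~ {P_F(t):.2e}")
```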

3. Fatigue Crack Propagation
Due to load fluctuations some initial small crack in a structure may grow and weaken the cross section. Finally some large load amplitude may lead to collapse of the structural element (see figure B3). The differential equation for the crack growth a is given by:

da/dn = C Y( a ) [ ΔS( n ) √(πa) ]^m

where ΔS represents the stress range, Y(a) represents a geometrical function, C and m are material constants and n is the stress cycle number. Note that in this example the time t has been replaced by the load cycle number n and that k in (B9) corresponds to m/2. The functions Ψ and χ are then given by (assuming ΔS to be stationary and ergodic):

Ψ = [ 1 / (C Y^m) ] [ 2 / (2−m) ] π^(−m/2) a^(1−m/2)

χ = n E[ (ΔS)^m ]

The limit state function is then given by:

g( t ) = Ψ( acr ) − Ψ( a0 ) − χ

where a0 is the initial crack length and acr the critical crack length, which again may be time independent or time dependent. In the first case (B6) may be used; in the second case we have a first passage problem. Alternatively, one may formulate the limit state function in the crack domain:

g( t ) = acr − a( n )   with   a( n ) = { a0^(1−m/2) + ( 1 − m/2 ) C Y^m π^(m/2) n E[ (ΔS)^m ] }^( 2/(2−m) )

or in the time domain:

g( t ) = N − n   with   N = [ Ψ( acr ) − Ψ( a0 ) ] / E[ (ΔS)^m ]


These alternative formulations are fully equivalent to the first one.

Figure B3: Fatigue fracture under cyclic loading
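The closed-form expressions above can be evaluated directly. The sketch below computes the number of cycles N to reach the critical crack size and checks that the crack-domain expression a(n) returns acr at n = N; Y is treated as a constant, m ≠ 2 is assumed, and all parameter values are illustrative assumptions.

```python
# Sketch of the fatigue example: cycles to failure N and crack size a(n) from
# the closed-form expressions (Y constant, m != 2).  All parameter values are
# illustrative assumptions only.
import numpy as np

C, m, Y = 2.0e-13, 3.0, 1.12            # assumed material constants and geometry factor
a0, acr = 1.0e-3, 25.0e-3               # initial and critical crack sizes [m]
E_dSm   = 60.0 ** m                     # assumed E[(Delta S)^m] for a 60 MPa stress range

def psi(a):                              # Psi(a) = [1/(C*Y^m)] [2/(2-m)] pi^(-m/2) a^(1-m/2)
    return (1.0 / (C * Y ** m)) * (2.0 / (2.0 - m)) * np.pi ** (-m / 2) * a ** (1.0 - m / 2)

def a_of_n(n):                           # crack size after n cycles (crack-domain form)
    base = a0 ** (1.0 - m / 2) + (1.0 - m / 2) * C * Y ** m * np.pi ** (m / 2) * n * E_dSm
    return base ** (2.0 / (2.0 - m))

N = (psi(acr) - psi(a0)) / E_dSm         # cycles to reach the critical crack size
print("cycles to failure N ~", N)
print("a(N) =", a_of_n(N), "  (should equal acr =", acr, ")")
```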

9.3. Effect of inspection

In the case of deterioration processes it may be uneconomical to design a structure in such a way that the reliability is sufficient for a normal design life of 50 years. In those cases a more economical solution can be obtained by defining an inspection scheme. Failure will then not occur if the inspection reveals some predefined deterioration criterion and the structure is repaired adequately. The sequence of events can be represented in an event tree as indicated in Figure B4. Let the first inspection I1 be planned at time t1. In that case we may have three possibilities:

1) a failure occurs before t1 (branch F)
2) the inspection detects a serious defect and repair is necessary (branch R)
3) no serious defect is detected and a next inspection at t = t2 is planned

If the structure is repaired, one may usually assume that all variables are reset to the initial situation. From every event R then a new event tree of the same type as the one in figure B4 is started. For reasons of simplicity we will start by having one inspection only. Using the total probability theorem, the probability of failure for a period t may then formally be written as:


PF( t ) = P[ F | Zi > 0 ] P( Zi > 0 ) + P[ F | Zi < 0 ] P( Zi < 0 )   (B13)

where
F = failure
Zi = result of the inspection at time ti (negative values correspond to the detection of damage)

If we assume that in the case of serious damage revealed at the inspection (that is Zi < 0) the structure will be repaired adequately, (B13) may be reduced to (replacing F by minτ g(τ) < 0, where g(.) is the limit state function and 0 < τ < t):

PF( t ) = P[ minτ g( τ ) < 0 | Zi > 0 ] P( Zi > 0 )

or simply:

PF( t ) = P[ minτ g( τ ) < 0 ∩ Zi > 0 ]

If more inspections at fixed intervals are present, we arrive at:

PF( t ) = P[ minτ g( x(τ); τ ) < 0 ∩ ( ∩i Zi( x(ti); ti ) > 0 )  for 0 < τ < t ]   (B14)

ti = time of inspection; only inspections with ti < τ are relevant

Note: whether or not an inspection is planned is, of course, a matter of economy.

9.4. Example

Figure B5 clarifies formula (B14) for the case of fatigue. As discussed before, the g-function for the load cycle at time τ is given by:

g = acr − a( τ )

Let the crack a(τ) be monitored by a yearly inspection. If the measured crack am is larger than some limit alim, the structure will be adequately repaired. An inspection failure may then be modelled as Zins < 0 with:

Zins = alim − am( ti )


In present practice alim usually corresponds to the detection limit, and the probability distribution of alim is then equal to the so called POD curve (probability of detection). Failure will occur only if the measured value am(ti) is below the limit value alim at inspection ti while the actual crack grows beyond acrit before the next inspection. In this way the failure probability can be reduced by shorter inspection intervals or by more refined or accurate inspection techniques. Note that an implication of this method is that these probability of detection (POD) curves and measurement accuracies must be known to the designer in order to decide whether or not a certain structure meets the reliability requirements. Note further that the probability of repair is given by:

P = P[ Zins < 0 ]

Repair may be considered like some serviceability limit state. The designer should also make sure that the probability of repair is below some economic limit value.
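A hedged Monte Carlo sketch of eqn (B14) is given below: failure requires the crack to exceed the critical size within the reference period while remaining undetected (Zi > 0) at the earlier inspections. The crack growth model, the measurement error model (a simplified stand-in for a POD curve) and all parameter values are illustrative assumptions.

```python
# Monte Carlo sketch of eqn (B14): failure occurs only if the crack exceeds
# a_crit within the period while all earlier inspections give Z_i > 0.
# Growth model, measurement error and all numbers are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
N = 500_000

a0   = rng.lognormal(np.log(0.5), 0.4, N)        # initial crack size [mm] (assumed)
rate = rng.lognormal(np.log(0.06), 0.5, N)       # exponential growth rate [1/year] (assumed)
a_cr, a_lim = 10.0, 2.0                          # critical size and repair criterion [mm]
t_insp, T = [10.0, 20.0], 40.0                   # inspection times and reference period [years]

a = lambda t: a0 * np.exp(rate * t)              # assumed monotone crack growth

undetected = np.ones(N, dtype=bool)
for ti in t_insp:
    a_meas = a(ti) * rng.lognormal(0.0, 0.2, N)  # measured crack, multiplicative error
    undetected &= (a_meas < a_lim)               # Z_i = a_lim - a_m(t_i) > 0
fails = a(T) > a_cr                              # crack exceeds a_crit within (0, T]

print("P_F(T), no inspection   ~", np.mean(fails))
print("P_F(T), with inspections~", np.mean(fails & undetected))
print("P(repair at some inspection) ~", np.mean(~undetected))
```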


Figure B4: Event tree representation of an inspected component: R = Repair or maintenance action; F = Failure, Ii = Inspection at time ti


Figure B5: Fatigue failure in the interval ti, ti + Δti with a(τ) < alim at the beginning of the interval.


10. Annex C: Reliability Analysis Principles

10.1. Introduction

In recent years, practical reliability methods have been developed to help engineers tackle the analysis, quantification, monitoring and assessment of structural risks, undertake sensitivity analysis of inherent uncertainties and make appropriate decisions about the performance of a structure. The structure may be at the design stage, under construction or in actual use. This Annex C summarizes the principles and procedures used in formulating and solving risk related problems via reliability analysis. It is neither as broad nor as detailed as available textbooks on this subject, some of which are included in the bibliography. Its purpose is to underpin the updating and decision-making methodologies presented in part 2 of this document. Starting from the principles of limit state analysis and its application to codified design, the link is made between unacceptable performance and probability of failure. It is important, especially in assessment, to distinguish between components and systems. System concepts are introduced and important results are summarized. The steps involved in carrying out a reliability analysis, whose main objective is to estimate the failure probability, are outlined and alternative techniques available for such an analysis are presented. Some recommendations on formulating stochastic models for commonly used variables are also included.

10.2. Concepts

10.2.1. Limit States

The structural performance of a whole structure or part of it may be described with reference to a set of limit states which separate acceptable states of the structure from unacceptable states. The limit states are divided into the following two categories:
- ultimate limit states, which relate to the maximum load carrying capacity;
- serviceability limit states, which relate to normal use.
The boundary between acceptable (safe) and unacceptable (failure) states may be distinct or diffuse but, at present, deterministic codes of practice assume the former. Thus, verification of a structure with respect to a particular limit state is carried out via a model describing the limit state in terms of a function (called the limit state function) whose value depends on all relevant design parameters. In general terms, attainment of the limit state can be expressed as:


g (s, r) = 0 (C.1)

where s and r represent sets of load (actions) and resistance variables. Conventionally, g(s, r) ≤ 0 represents failure; in other words, an adverse state. The limit state function, g(s, r), can often be separated into one resistance function, r(.), and one loading (or action effect) function, s(.), in which case equation (C.1) can be expressed as:

r (r) - s (s) = 0 (C.2)

10.2.2. Structural Reliability

Load, material and geometry parameters are subject to uncertainties, which can be classified according to their nature, see section 3. They can, thus, be represented by random variables (this being the simplest possible probabilistic representation, whereas more advanced models might be appropriate in certain situations, such as random fields). The variables S and R are often referred to as "basic random variables" (where the upper case letter is used for denoting random variables) and may be collectively represented by a random vector X. In this context, failure is a probabilistic event and its probability of occurrence, Pf, is given by:

Pf = Prob{ g( X ) ≤ 0 } = Prob{ M ≤ 0 }   (C.3a)

where, M = g (X). Note that M is also a random variable, called the safety margin. If the limit state function is expressed in the form of eqn (C.2), eqn (C.3a) can be written as

Pf = Prob{ r( R ) ≤ s( S ) } = Prob{ R ≤ S }

where R = r(R) and S = s(S) are random variables associated with resistance and loading respectively. This expression is useful in the context of the discussion in section 2.2 on code formats and partial safety factors but will not be further used herein. The failure probability defined in eqn (C.3a) can also be expressed as follows:

Pf = ∫g(x)≤0 fX( x ) dx   (C.3b)


where fX(x) is the joint probability density function of X. The reliability, Ps, associated with the particular limit state considered is the complementary event, i.e.

Ps = 1 - Pf (C.4)

In recent years, a standard reliability measure, the reliability index β, has been adopted which has the following relationship with the failure probability

β = − Φ^(−1)( Pf ) = Φ^(−1)( Ps )   (C.5)

where Φ^(−1)(.) is the inverse of the standard normal distribution function, see Table C.1.

Table C.1: Relationship between β and Pf

Pf   10^-1   10^-2   10^-3   10^-4   10^-5   10^-6   10^-7
β    1.3     2.3     3.1     3.7     4.2     4.7     5.2
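The relationship in Table C.1 follows directly from eqn (C.5) and can be checked with any implementation of the standard normal distribution; the short fragment below uses scipy (values are shown with two decimals, whereas the table rounds to one).

```python
# beta = -Phi^{-1}(P_f), eqn (C.5), evaluated for the P_f values of Table C.1.
from scipy.stats import norm

for pf in (1e-1, 1e-2, 1e-3, 1e-4, 1e-5, 1e-6, 1e-7):
    print(f"P_f = {pf:.0e}   beta = {-norm.ppf(pf):.2f}")
```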

In most engineering applications, complete statistical information about the basic random variables X is not available and, furthermore, the function g(.) is a mathematical model which idealizes the limit state. In this respect, the probability of failure evaluated from eqn (C.3a) or (C.3b) is a point estimate given a particular set of assumptions regarding probabilistic modelling and a particular mathematical model for g(.). The uncertainties associated with these models can be represented in terms of a vector of random parameters Q, and hence the limit state function may be re-written as g(X, Q). It is important to note that the nature of uncertainties represented by the basic random variables X and the parameters Q is different. Whereas uncertainties in X cannot be influenced without changing the physical characteristics of the problem (e.g. changing the steel grade), uncertainties in Q can be influenced by the use of alternative methods and collection of additional data. In this context, eqn (C.3b) may be recast as follows:

Pf( θ ) = ∫g(x,θ)≤0 fX|Θ( x | θ ) dx   (C.6)

where Pf(θ) is the conditional probability of failure for a given set of values of the parameters θ and fX|θ (x| θ) is the conditional probability density function of X for given θ.


In order to account for the influence of parameter uncertainty on failure probability, one may evaluate the expected value of the conditional probability of failure, i.e.

Pf = E[ Pf( θ ) ] = ∫θ Pf( θ ) fΘ( θ ) dθ   (C.7a)

where fθ (θ) is the joint probability density function of θ. The corresponding reliability index is given by

β = − Φ^(−1)( Pf )   (C.7b)

The main objective of reliability analysis is to estimate the failure probability (or, the reliability index). Hence, it replaces the deterministic safety check with a probabilistic assessment of the safety of the structure, e.g. eqn (C.3) or eqn (C.7). Depending on the nature of the limit state considered, the uncertainty sources and their implications for probabilistic modeling, the characteristics of the calculation model and the degree of accuracy required, an appropriate methodology has to be developed. In many respects, this is similar to the considerations made in formulating a methodology for deterministic structural analysis but the problem is now set in a probabilistic framework.

10.2.3. System Concepts

Structural design is, at present, primarily concerned with component behaviour. Each limit state equation is, in most cases, related to a single mode of failure of a single component. However, most structures are an assembly of structural components and even individual components may be susceptible to a number of possible failure modes. In deterministic terms, the former can be tackled through a progressive collapse analysis (particularly appropriate in redundant structures), whereas the latter is usually dealt with by checking a number of limit state equations. However, the system behaviour of structures is not well quantified in limit state codes and requires considerable innovation and initiative from the engineer. A probabilistic approach provides a better platform from which system behaviour can be explored and utilised. This can be of benefit in assessment of existing structures where strength reserves due to system effects can alleviate the need for expensive strengthening. There are two fundamental systems, see Fig. C.1:

(1) A series system is a system which fails if one or more of its components fail. (2) A parallel system is a system which fails when all its components have failed.


The probability of system failure is given by

Pf,sys = P[ E1 ∪ E2 ∪ ... ∪ En ]   for a series system   (C.8a)

Pf,sys = P[ E1 ∩ E2 ∩ ... ∩ En ]   for a parallel system   (C.8b)

where Ei (i = 1, ..., n) is the event corresponding to failure of the ith component. In the case of parallel systems, which are designed to provide some redundancy, it is important to define the state of the component after failure. In structures, this can be described in terms of a characteristic load-displacement response, see Fig. C.2, for which two convenient idealisations are the 'brittle' and the 'fully ductile' case. Intermediate, often more realistic, cases can also be defined. The above expressions can be difficult to evaluate in the case of large systems with stochastically dependent components and, for this reason, upper and lower bounds have been developed, which may be used in practical applications. In order to appreciate the effect of system behaviour on failure probabilities, results for two special systems comprising equally correlated components with the same failure probability for each component are shown in Fig. C.3(a) and C.3(b). Note that in the case of the parallel system, it is assumed that the components are fully ductile. More general systems can be constructed by combining the two fundamental types. It is fair to say that system methods are more developed for skeletal rather than continuous structures. Important results from system reliability theory are summarized in section 4.

10.3. Component Reliability Analysis

The framework for probabilistic modeling and reliability evaluation is outlined in this section. The focus is on the procedure to be followed in assessing the reliability of a critical component with respect to a particular failure mode.

10.3.1. General Steps

The main steps in a component reliability analysis are the following:
(1) select appropriate limit state function
(2) specify appropriate time reference
(3) identify basic variables and develop appropriate probabilistic models
(4) compute reliability index and failure probability


(5) perform sensitivity studies

Step (1) is essentially the same as for deterministic analysis. Step (2) should be considered carefully, since it affects the probabilistic modeling of many variables, particularly live loading. Step (3) is perhaps the most important because the considerations made in developing the probabilistic models have a major effect on the results obtained, see section 3.2. Step (4) should be undertaken with one of the methods summarized in section 3.3, depending on the application. Step (5) is necessary insofar as the sensitivity of any results (deterministic or probabilistic) should be assessed before a decision is taken.

10.3.2. Probabilistic Modelling

For the particular failure mode under consideration, uncertainty modeling must be undertaken with respect to those variables in the corresponding limit state function whose variability is judged to be important (basic random variables). Most engineering structures are affected by the following types of uncertainty:

- intrinsic physical or mechanical uncertainty; when considered at a fundamental level, this uncertainty source is often best described by stochastic processes in time and space, although it is often modelled more simply in engineering applications through random variables.

- measurement uncertainty; this may arise from random and systematic errors in the measurement of these physical quantities

- statistical uncertainty; due to reliance on limited information and finite samples
- model uncertainty; related to the predictive accuracy of calculation models used

The physical uncertainty in a basic random variable is represented by adopting a suitable probability distribution, described in terms of its type and relevant distribution parameters. The results of the reliability analysis can be very sensitive to the tail of the probability distribution, which depends primarily on the type of distribution adopted. A proper choice of distribution type is therefore important. For most commonly encountered basic random variables there have been studies (of varying detail) that contain guidance on the choice of distribution and its parameters. If direct measurements of a particular quantity are available, then existing, so-called a priori, information (e.g. probabilistic models found in published studies) should be used as prior statistics with a relatively large equivalent sample size (n' ≈ 50). The following comments may also be helpful in selecting a suitable probabilistic model.


Material properties
- frequency of negative values is normally zero
- log-normal distribution can often be used
- distribution type and parameters should, in general, be derived from large homogeneous samples and with due account of established distributions for similar variables (e.g. for a new high strength steel grade, the information on properties of existing grades should be consulted); tests should be planned so that they are, as far as possible, a realistic description of the potential use of the material in real applications.

Geometric parameters
- variability in structural dimensions and overall geometry tends to be small
- dimensional variables can be adequately modelled by the normal or log-normal distribution
- if the variable is physically bounded, a truncated distribution may be appropriate (e.g. location of reinforcement); such bounds should always be carefully considered to avoid entering into physically inadmissible ranges
- variables linked to manufacturing can have large coefficients of variation (e.g. imperfections, misalignments, residual stresses, weld defects).

Load variables
- loads should be divided according to their time variation (permanent, variable, accidental)
- in certain cases, permanent loads consist of the sum of many individual elements; they may be represented by a normal distribution
- for single variable loads, the form of the point-in-time distribution is seldom of immediate relevance; often the important random variable is the magnitude of the largest extreme load that occurs during a specified reference period for which the probability of failure is calculated (e.g. annual, lifetime)
- the probability distribution of the largest extreme could be approximated by one of the asymptotic extreme-value distributions (Gumbel, sometimes Frechet)
- when more than one variable load acts in combination, load modelling is often undertaken using simplified rules suitable for FORM/SORM analysis.

In selecting a distribution type to account for physical uncertainty of a basic random variable afresh, the following procedure may be followed:

- based on experience from similar types of variables and physical knowledge, choose a set of possible distributions

- obtain a reasonable sample of observations ensuring that, as far as possible, the sample points are from a homogeneous group (i.e. avoid systematic variations within the sample) and that the sampling reflects potential uses and applications


- evaluate by an appropriate method the parameters of the candidate distributions using the sample data; the method of maximum likelihood is recommended but evaluation by alternative methods (moment estimates, least-square fit, graphical methods) may also be carried out for comparison.

- compare the sample data with the resulting distributions; this can be done graphically (histogram vs. pdf, probability paper plots) or through the use of goodness-of-fit tests (Chi-square, Kolmogorov-Smirnov tests)

If more than one distribution gives equally good results (or if the goodness-of-fit tests are acceptable at the same significance level), it is recommended to choose the distribution that will result in the smaller reliability. This implies choosing distributions with heavy left tails for resistance variables (material properties, geometry excluding tolerances) and heavy right tails for loading variables (manufacturing tolerances, defects and loads). Capturing the essential features of physical uncertainty in a load or in a structure property through a random variable model is perhaps the simplest way of modeling uncertainty and quantifying its effect on failure probability. In general, loads are functions of both time and position on any particular structure. Equally, material properties and dimensions of even a single structural member, e.g. a RC floor slab, are functions which vary both in time and in space. Such random functions are usually denoted as random (or stochastic) processes when time variation is the most important factor and as random fields when spatial variation is considered. Fig. C.4(a) shows schematically a continuous stochastic process, e.g. wind pressure at a particular point on a wall of a structure. The trace of this process over time is obtained through successive realisations of the underlying phenomenon, in this case wind speed, which is clearly a random variable taking on different values within each infinitesimally small time interval, δt. Fig. C.4(b) depicts a two-dimensional random field, e.g. the spatial variation of concrete strength in a floor slab just after construction. Once again, a random variable, in this case describing the possible outcomes of, say, a core test obtained from any given small area, δA, is the basic kernel from which the random field is built up. In considering either a random process or a random field, it is clear that, apart from the characteristics associated with the random variable describing uncertainty within a small unit (interval or area), laws describing stochastic dependence (or, in simpler terms, correlation) between outcomes in time and/or in space are very important.


The other three types of uncertainty mentioned above (measurement, statistical, model) also play an important role in the evaluation of reliability. As mentioned in section 2.3, these uncertainties are influenced by the particular method used in, for example, strength analysis and by the collection of additional (possibly, directly obtained) data. These uncertainties could be rigorously analysed by adopting the approach outlined by eqns (C.6) and (C.7). However, in many practical applications a simpler approach has been adopted insofar as model (and measurement) uncertainty is concerned, based on the differences between results predicted by the mathematical model adopted for g(x) and some more elaborate model believed to be a closer representation of actual conditions. In such cases, a model uncertainty basic random variable Xm is introduced, where

Xm = actual value / predicted value

and the following comments offer some general guidance in estimating the statistics of Xm:
- the mean value of the model uncertainty associated with code calculation models can be larger than unity, reflecting the in-built conservatism of code models
- the model uncertainty parameters of a particular calculation model may be evaluated vis-a-vis physical experiments or by comparing the calculation model with a more detailed model (e.g. a finite element model)

- when experimental results are used, use of measured rather than nominal or characteristic quantities is preferred in calculating the predicted value

- the use of numerical experiments (e.g. finite element models) has some advantages over physical experiments, since the former ensure well-controlled input.

- the choice of a suitable probability distribution for Xm is often governed by mathematical convenience and a normal distribution has been used extensively.

10.3.3. Computation of Failure Probability

As mentioned above, the failure probability of a structural component with respect to a single failure mode is given by

Pf = ∫g(x)≤0 fX( x ) dx   (C.3b)

where X is the vector of basic random variables, g(x) is the limit state (or failure) function for the failure mode considered and fX(x) is the joint probability density function of X. An important class of limit states are those for which all the variables are treated as time independent, either by neglecting time variations in cases where this is considered acceptable or by transforming time-dependent processes into time-invariant variables (e.g. by using extreme value distributions). The methods commonly used for calculating Pf in such cases are


outlined below. Guidelines on how to deal with time-dependent problems are given in section 5. Note that after calculating Pf via one of the methods outlined below, or any other valid method, a reliability index may be obtained from equation (C.5), for comparative or other purposes.

Asymptotic approximate methods

Although these methods first emerged with basic random variables described through 'second-moment' information (i.e. with their mean value and standard deviation, but without assigning any probability distributions), it is nowadays possible in many cases to have a full description of the random vector X (as a result of data collection and probabilistic modelling studies). In such cases, the probability of failure could be calculated via first or second order reliability methods (FORM and SORM respectively). Their implementation relies on:

(1) Transformation techniques:

T : X = (X1, X2, ... Xn) → U = (U1, U2, ... Un) (C.9)

where U1, U2, ... Un are independent standard normal variables (i.e. with zero mean value and unit standard deviation). Hence, the basic variable space (including the limit state function) is transformed into a standard normal space, see Fig. C.5. The special properties of the standard normal space lead to several important results, as discussed below.

(2) Search techniques:

In standard normal space, the objective is to determine a suitable checking point: this is shown to be the point on the limit-state surface which is closest to the origin, the so-called 'design point'. In this rotationally symmetric space, it is the most likely failure point; in other words, its co-ordinates define the combination of variables that are most likely to cause failure. This is because the joint standard normal density function, whose bell-shaped peak lies directly above the origin, decreases exponentially as the distance from the origin increases. To determine this point, a search procedure is required in all but the most simple of cases (the Rackwitz-Fiessler algorithm is commonly used). Denoting the co-ordinates of this point by

u* = ( u1*, u2*, ... un* )

its distance from the origin is clearly equal to

( Σi=1..n ui*^2 )^(1/2)

This scalar quantity is known as the Hasofer-Lind reliability index, βHL, i.e.

βHL = ( Σi=1..n ui*^2 )^(1/2)   (C.10)

Note that u* can also be written as

u* = βHL α (C.11a)

where α = (α1, α2, ... αn) is the unit normal vector to the limit state surface at u*, and, hence, αi (i=1,...n) represent the direction cosines at the design point. These are also known as the sensitivity factors, as they provide an indication of the relative importance of the uncertainty


in basic random variables on the computed reliability. Their absolute value ranges between zero and unity and the closer this is to the upper limit, the more significant the influence of the respective random variable is to the reliability. The following expression is valid for independent variables

Σi=1..n αi^2 = 1   (C.11b)

(3) Approximation techniques:

Once the checking point is determined, the failure probability can be approximated using results applicable to the standard normal space. Thus, in a first-order approximation, the limit state surface is approximated by its tangent hyperplane at the design point. The probability content of the failure set is then given by

PfFORM = Φ(−βHL) (C.12a)
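The following is a minimal FORM sketch based on the steps just described, using an HL-RF (Rackwitz-Fiessler type) iteration for independent normal basic variables; the limit state g(x) = x1 x2 − x3 and all parameter values are illustrative assumptions, not part of this code.

```python
# Minimal FORM sketch (HL-RF iteration) for independent normal basic variables.
# The limit state and all numbers are illustrative assumptions only.
import numpy as np
from scipy.stats import norm

mu    = np.array([40.0, 50.0, 1000.0])     # assumed means (strength, geometry, load effect)
sigma = np.array([5.0, 2.5, 200.0])        # assumed standard deviations

def g_u(u):                                 # limit state mapped to standard normal space, eqn (C.9)
    x = mu + sigma * u
    return x[0] * x[1] - x[2]

def grad(f, u, h=1e-6):                     # numerical gradient
    return np.array([(f(u + h * e) - f(u - h * e)) / (2 * h) for e in np.eye(len(u))])

u = np.zeros(3)
for _ in range(100):                        # HL-RF search for the design point u*
    gv, gr = g_u(u), grad(g_u, u)
    u_new = (gr @ u - gv) / (gr @ gr) * gr
    if np.linalg.norm(u_new - u) < 1e-8:
        u = u_new
        break
    u = u_new

beta  = np.linalg.norm(u)                   # Hasofer-Lind index, eqn (C.10)
alpha = u / beta                            # direction cosines (sensitivity factors), eqn (C.11a)
print("beta_HL ~", beta)
print("Pf_FORM ~", norm.cdf(-beta), "   alpha ~", alpha)
```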

In some cases, a higher order approximation of the limit state surface at the design point is merited, if only to check the accuracy of FORM. The result for the probability of failure assuming a quadratic (second-order) approximation of the limit state surface is asymptotically given by

Pf,SORM = Φ( −βHL ) Πj=1..n−1 ( 1 − βHL κj )^(−1/2)   (C.12b)

for βHL → ∞, where κj are the principal curvatures of the limit state surface at the design point. An expression applicable to finite values of βHL is also available.

Simulation Methods

In this approach, random sampling is employed to simulate a large number of (usually numerical) experiments and to observe the result. In the context of structural reliability, this means, in the simplest approach, sampling the random vector X to obtain a set of sample values. The limit state function is then evaluated to ascertain whether, for this set, failure (i.e. g(x)≤0) has occurred. The experiment is repeated many times and the probability of failure, Pf, is estimated from the fraction of trials leading to failure divided by the total number of trials. This so-called Direct or Crude Monte Carlo method is not likely to be of use in practical problems because of the large number of trials required in order to estimate with a certain degree of confidence the failure probability. Note that the number of trials increases as the failure probability decreases. Simple rules may be found, of the form N > C/Pf, where N is the required sample size and C is a constant related to the confidence level and the type of function being evaluated. Thus, the objective of more advanced simulation methods, currently used for reliability evaluation, is to reduce the variance of the estimate of Pf. Such methods can be divided into two categories, namely indicator function methods and conditional expectation methods.


An example of the former is Importance Sampling, where the aim is to concentrate the distribution of the sample points in the vicinity of likely failure points, such as the design point obtained from FORM/SORM analysis. This is done by introducing a sampling function, whose choice would depend on a priori information available, such as the co-ordinates of the design point and/or any estimates of the failure probability. In this way, the success rate (defined here as the probability of obtaining a result in the failure region in any particular trial) is improved compared to Direct Monte Carlo. Importance Sampling is often used following an initial FORM/SORM analysis. A variant of this method is Adaptive Sampling, in which the sampling density is updated as the simulation proceeds. Importance Sampling could be performed in basic variable or standard normal space, depending on the problem and the form of prior information. A powerful method belonging to the second category is Directional Simulation. It achieves variance reduction using conditional expectation in the standard normal space, where a special result applies pertaining to the probability bounded by a hypersphere centred at the origin. Its efficiency lies in that each random trial generates precise information on where the boundary between safety and failure lies. However, the method does generally require some iterative calculations. It is particularly suited to problems where it is difficult to identify 'important' regions (perhaps due to the presence of multiple local design points). The two methods outlined above have also been used in combination, which indicates that when simulation is chosen as the basic approach for reliability assessment, there is scope to adapt the detailed methodology to suit the particular problem in hand.
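As a sketch of the importance sampling idea in standard normal space, the fragment below centres the sampling density at the design point of a deliberately simple limit state g(u) = β − u1, for which the exact answer Φ(−β) is known; the limit state, β and the sample size are illustrative assumptions.

```python
# Importance sampling sketch in standard normal space, sampling density centred
# at the design point u*.  The toy limit state g(u) = beta - u[0] has the exact
# solution Pf = Phi(-beta); all numbers are illustrative assumptions.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
n_dim, beta, N = 2, 4.0, 20_000

g = lambda u: beta - u[:, 0]                       # failure for g(u) <= 0
u_star = np.array([beta] + [0.0] * (n_dim - 1))    # design point of this toy limit state

v = rng.standard_normal((N, n_dim)) + u_star       # samples from N(u*, I)
w = np.exp(-v @ u_star + 0.5 * u_star @ u_star)    # weight = phi_n(v) / phi_n(v - u*)
pf_is = np.mean((g(v) <= 0.0) * w)

print("importance sampling Pf ~", pf_is)
print("exact Phi(-beta)       =", norm.cdf(-beta))
```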

10.3.4. Recommendations

As with any other analysis, choosing a particular method must be justified through experience and/or verification. Experience shows that FORM/SORM estimates are adequate for a wide range of problems. However, these approximate methods have the disadvantage of not being quantified by error estimates, except for a few special cases. As indicated, simulation may be used to verify FORM/SORM results, particularly in situations where multiple design points might be suspected. Simulation results should include the variance of the estimated probability of failure, though good estimates of the variance could increase the computations required. When using FORM/SORM, attention should be given to the ordering of dependent random variables and the choice of initial points for the search algorithm. Not least, the results for the design point should be assessed to ensure that they do not contradict physical reasoning.


10.4. System Reliability Analysis

As discussed in section 3, individual component failure events can be represented by failure boundaries in basic variable or standard normal space. System failure events can be similarly represented, see Fig. C.6(a) and C.6(b), and, once more, certain approximate results may be derived as an extension to FORM/SORM analysis of individual components. In addition, system analysis is sometimes performed using bounding techniques and some relevant results are given below.

10.4.1. Series systems

The probability of failure of a series system with m components is defined as

Pf,sys = P[ ∪j=1..m Fj ]   (C.13)

where Fj is the event corresponding to the failure of the jth component. By describing this event in terms of a safety margin Mj,

P[ Fj ] = P[ Mj ≤ 0 ] ≈ Φ( −βj )   (C.14)

where βj is its corresponding FORM reliability index, it can be shown that in a first-order approximation

Pf,sys = 1 − Φm( β; ρ )   (C.15a)

where Φm[.] is the multi-variate standard normal distribution function, β is the (m x 1) vector of component reliability indices and ρ is the (m x m) correlation matrix between safety margins with elements given by

ρjk = Σi=1..n αij αik ,   j, k = 1, 2, ..., m   (C.15b)

where αij is the sensitivity factor corresponding to the ith random variable in the jth margin. In some cases, especially when the number of components becomes large, evaluation of equation (C.15) becomes cumbersome and bounds to the system failure probability may prove sufficient. Simple first-order linear bounds are given by

Maxj=1..m [ P( Fj ) ] ≤ Pf,sys ≤ Min[ Σj=1..m P( Fj ), 1 ]   (C.16a)

but these are likely to be rather wide, especially for large m, in which case second-order linear bounds (Ditlevsen bounds) may be needed. These are given by


P[ F1 ] + Σj=2..m Max{ P( Fj ) − Σk=1..j−1 P( Fj ∩ Fk ), 0 } ≤ Pf,sys

Pf,sys ≤ P[ F1 ] + Σj=2..m { P( Fj ) − Maxk<j P( Fj ∩ Fk ) }   (C.16b)

The narrowness of these bounds depends in part on the ordering of the events. The optimal ordering may differ between the lower and the upper bound. In general, these bounds are much narrower than the simple first-order linear bounds given by equation (C.16a). The intersections of events may be calculated using a first-order approximation, which appears below in the presentation of results for parallel systems.
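A short sketch of these series-system bounds is given below; the component reliability indices and the (equal) correlation between safety margins are illustrative assumptions, and the pairwise intersection probabilities P(Fj ∩ Fk) are evaluated with the bivariate normal distribution, consistent with the first-order description of the margins in (C.14)–(C.15).

```python
# Sketch of the simple bounds (C.16a) and the Ditlevsen bounds (C.16b) for a
# series system; component betas and the correlation value are assumptions.
import numpy as np
from scipy.stats import norm, multivariate_normal

betas = np.array([3.0, 3.2, 3.5, 4.0])       # assumed component reliability indices
rho   = 0.5                                  # assumed equal correlation between margins

p  = norm.cdf(-betas)                        # component failure probabilities, eqn (C.14)
m  = len(betas)
p2 = np.zeros((m, m))                        # pairwise intersections P(Fj ∩ Fk)
biv = multivariate_normal(mean=[0.0, 0.0], cov=[[1.0, rho], [rho, 1.0]])
for j in range(m):
    for k in range(m):
        if j != k:
            p2[j, k] = biv.cdf([-betas[j], -betas[k]])

lo1, hi1 = p.max(), min(p.sum(), 1.0)        # simple first-order bounds, eqn (C.16a)

# Ditlevsen bounds, eqn (C.16b), with the components taken in the given order
lo2 = p[0] + sum(max(p[j] - p2[j, :j].sum(), 0.0) for j in range(1, m))
hi2 = p[0] + sum(p[j] - p2[j, :j].max() for j in range(1, m))

print("simple bounds   :", lo1, "<= Pf,sys <=", hi1)
print("Ditlevsen bounds:", lo2, "<= Pf,sys <=", hi2)
```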

10.4.2. Parallel Systems

Following the same approach and notation as above, the failure probability of a parallel system with m components is given by

Pf,sys = P[ ∩j=1..m Fj ] = P[ ∩j=1..m ( Mj ≤ 0 ) ]   (C.17)

and the corresponding first-order approximation is

Pf,sys = Φm( −β; ρ )   (C.18)

Simple bounds are given by

0 ≤ Pf,sys ≤ Minj=1..m [ P( Fj ) ]   (C.19a)

These are usually too wide for practical applications. An improved upper bound is

Pf,sys ≤ Minj,k [ P( Fj ∩ Fk ) ]   (C.19b)

The error involved in the first-order evaluation of the intersections, P[ Fj ∩ Fk ], is, to a large extent, influenced by the non-linearity of the margins at their respective design points. In order to obtain a better estimate of the intersection probabilities, an improvement on the selection of linearisation points has been suggested.

10.5. Time-Dependent Reliability

10.5.1. General Remarks

Even in considering a relatively simple safety margin for component reliability analysis such as M = R - S, where R is the resistance at a critical section in a structural member and S is the corresponding load effect at the same section, it is generally the case that both S and resistance R are functions of time. Changes in both mean values and standard deviations could


occur for either R(t) or S(t). For example, the mean value of R(t) may change as a result of deterioration (e.g. corrosion of reinforcement in an RC bridge implies loss of area, hence a reduction in the mean resistance) and its standard deviation may also change (e.g. uncertainty in predicting the effect of corrosion on loss of area may increase as the periods considered become longer). On the other hand, the mean value of S(t) may increase over time (e.g. due to higher traffic flow and/or higher individual vehicle weights) and, equally, the estimate of its standard deviation may increase due to lower confidence in predicting the correct mix of traffic for longer periods. A time-dependent reliability problem could thus be schematically represented as in Fig. C.7, the diagram implying that, on average, the reliability decreases with time. Although this situation is usual, the converse could also occur in reliability assessment of existing structures, for example through strengthening or a favourable change in use. Thus, the elementary reliability problem described through equations (C.3a) and (C.3b) may now be formulated as:

Pf( t ) = Prob{ R( t ) ≤ S( t ) } = Prob{ g( X(t) ) ≤ 0 }   (C.20a)

where g( X(t) ) = M( t ) is a time-dependent safety margin, and

Pf( t ) = ∫g(x(t))≤0 fX(t)( x(t) ) dx(t)   (C.20b)

is the instantaneous failure probability at time t, assuming that the structure was safe at times less than t. In time-dependent reliability problems, interest often lies in estimating the probability of failure over a time interval, say from 0 to tL. This could be obtained by integrating Pf(t) over the interval [0, tL], bearing in mind the correlation characteristics in time of the process X(t) - or, sometimes more conveniently, of the process R(t) and the process S(t), as well as any cross correlation between R(t) and S(t). Note that the load effect process S(t) is often composed of additive components, S1(t), S2(t), ..., for each of which the time fluctuations may have different features (e.g. continuous variation, pulse-type variation, spikes). Interest may also lie in predicting when S(t) crosses R(t) for the first time, see Figure C.8, or the probability that such an event would occur within a specified time interval. These considerations give rise to so-called ‘crossing’ problems, which are treated using stochastic process theory. A key concept for such problems is the rate at which a random process X(t) ‘upcrosses’ (or crosses with a positive slope) a barrier or level ξ, as shown in Figure C.9. This


upcrossing rate is a function of the joint probability density function of the process and its derivative, and is given by Rice’s formula

νξ+ = ∫0∞ ẋ fXẊ( ξ, ẋ ) dẋ   (C.21)

where the rate in general represents an ensemble average at time t. For a number of common stochastic processes, useful results have been obtained starting from Equation (C.21). An important simplification can be introduced if individual crossings can be treated as independent events and the occurrences may be approximated by a Poisson distribution, which might be a reasonable assumption for certain rare load events. Another class of problems calling for a time-dependent reliability analysis is that related to damage accumulation, such as fatigue and fracture. This case is depicted in Fig. C.10 via a fixed threshold (e.g. critical crack size) and a monotonically increasing time-dependent load effect or damage function (e.g. actual crack size at any given time). It is evident from the above remarks that the best approach for solving a time-dependent reliability problem would depend on a number of considerations, including the time frame of interest, the nature of the load and resistance processes involved, their correlation properties in time, and the confidence required in the probability estimates. All these issues may be important in determining the appropriate idealisations and approximations.
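For the particular case of a stationary Gaussian process, Rice's formula (C.21) reduces to a well-known closed form, νξ+ = [ σẊ / (2π σX) ] exp( −(ξ − μX)^2 / (2 σX^2) ); the small fragment below evaluates this expression for assumed process parameters.

```python
# Closed-form upcrossing rate obtained from Rice's formula (C.21) for a
# stationary Gaussian process; the process parameters are assumed values.
import numpy as np

def upcrossing_rate(xi, mu_x=0.0, sigma_x=1.0, sigma_xdot=2.0):
    # nu+ = sigma_xdot / (2*pi*sigma_x) * exp(-(xi - mu_x)^2 / (2*sigma_x^2))
    return sigma_xdot / (2.0 * np.pi * sigma_x) * np.exp(-(xi - mu_x) ** 2 / (2.0 * sigma_x ** 2))

for xi in (2.0, 3.0, 4.0):
    print(f"barrier level xi = {xi}:  nu+ ~ {upcrossing_rate(xi):.3e} per unit time")
```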

10.5.2. Transformation to Time-Independent Formulations

Although time variations are likely to be present in most structural reliability problems, the methods outlined in Sections 3 and 4 have gained wide acceptance, partly due to the fact that, in many cases, it is possible to transform a time dependent failure mode into a corresponding time independent mode. This is especially so in the case of overload failure, where individual time-varying actions, which are essentially random processes, p(t), can be modelled by the distribution of the maximum value within a given reference period T, i.e. X = maxT p(t) rather than the point-in-time distribution. For continuous processes, the probability distribution of the maximum value (i.e. the largest extreme) is often approximated by one of the asymptotic extreme value distributions. Hence, for structures subjected to a single time-varying action, a random process model is replaced by a random variable model and the principles and methods given previously may be applied. The theory of stochastic load combination is used in situations where a structure is subjected to two or more time-varying actions acting simultaneously. When these actions are


independent, perhaps the most important observation is that it is highly unlikely that each action will reach its peak lifetime value at the same moment in time. Thus, considering two time-varying load processes p1(t), p2(t), 0 ≤ t ≤ T, acting simultaneously, for which their combined effect may be expressed as a linear combination p1(t) + p2(t), the random variable of interest is:

X = maxT { p1( t ) + p2( t ) }   (C.22a)

If the loads are independent, replacing X by maxT p1(t) + maxT p2(t) leads to very conservative results. However, the distribution of X can be derived in a few cases only. One possible way of dealing with this problem, which also leads to a relatively simple deterministic code format, is to replace X with the following:

X' = max{ maxT p1( t ) + p2( t ) ;  p1( t ) + maxT p2( t ) }   (C.22b)

This rule (Turkstra's rule) suggests that the maximum value of the sum of two independent load processes occurs when one of the processes attains its maximum value. This result may be generalised for several independent time-varying loads. The conditions which render this rule adequate for failure probability estimation are discussed in standard texts. Note that the failure probability associated with the sum of a special type of independent identically distributed processes (the so-called FBC process) can be calculated in a more accurate way, as will be outlined below. Other results have been obtained for combinations of a number of other processes, starting from Rice's barrier crossing formula. The FBC (Ferry Borges-Castanheta) process is generated by a sequence of independent identically distributed random variables, each acting over a given (deterministic) time interval. This is shown in Fig. C.11 where the total reference period T is made up of ni repetitions, where ni = T/τi. Hence, the FBC process is a rectangular pulse process with changes in amplitude occurring at equal intervals. Because of independence, the maximum value in the reference period T is given by


FmaxT Xi( xi ) = [ FXi( xi ) ]^ni   (C.23)
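Equation (C.23) is easily checked by simulation; in the sketch below the pulse amplitudes are assumed standard normal and n = 50, both purely for illustration.

```python
# Simulation check of eqn (C.23): the maximum of an FBC process over the
# reference period has distribution F_max(x) = [F_X(x)]^n.  The pulse
# distribution (standard normal) and n are illustrative assumptions.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
n, trials, x = 50, 100_000, 2.0

pulses = rng.standard_normal((trials, n))     # iid pulse amplitudes of the FBC process
empirical = np.mean(pulses.max(axis=1) <= x)  # empirical P[max over T <= x]

print("empirical P[max <= x] ~", empirical)
print("[F_X(x)]^n            =", norm.cdf(x) ** n)
```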

When a number of FBC processes act in combination and the ratios of their repetition numbers within a given reference period are given by positive integers it is, in principle, possible to obtain the extreme value distribution of the combination through a recursive formula. More importantly, it is possible to deal with the sum of FBC processes by implementing the Rackwitz-Fiessler algorithm in a FORM/SORM analysis. A deterministic code format, compatible with the above rules, leads to the introduction of combination factors, ψoi, for each time-varying load i. In principle, these factors express ratios between fractiles in the extreme value and point-in-time distributions so that the probability of exceeding the design value arising from a combination of loads is of the same order as the probability of exceeding the design value caused by one load. For time-varying loads, they would depend on distribution parameters, target reliability and FORM/SORM sensitivity factors and on the frequency characteristics (i.e. the base period assumed for stationary events) of loads considered within any particular combination.

10.5.3. Introduction to Crossing Theory

In considering a time-dependent safety margin, i.e. M(t) = g( X(t) ), the problem is to establish the probability that M(t) becomes zero or less in a reference time period, tL. As mentioned previously, this constitutes a so-called 'crossing' problem. The time at which M(t) becomes less than zero for the first time is called the 'time to failure' and is a random variable, see Fig. C.12(a), or, in a basic variable space, Fig. C.12(b). The probability that M(t) ≤ 0 occurs during tL is called the 'first-passage' probability. Clearly, it is identical to the probability of failure during time tL. The determination of the first passage probability requires an understanding of the theory of random processes. Herein, only some basic concepts are introduced in order to see how the methods described above have to be modified in dealing with crossing problems. The first-passage probability, Pf(tL), during a period [0, tL] is

Pf (tL) = 1 - P[ N (tL)=0 | X (0) ∈D] P[X (0) ∈D] (C.24a)

where X(0) ∈ D signifies that the process X(t) starts in the safe domain and N(tL) is the number of outcrossings in the interval [0, tL]. The second probability term is equivalent to 1 - Pf(0), where Pf(0) is the probability of failure at t = 0. Equation (C.24a) can be re-written as


Pf( tL ) = Pf( 0 ) + ( 1 − Pf( 0 ) ) ( 1 − P[ N( tL ) = 0 ] )   (C.24b)

from which different approximations may be derived depending on the relative magnitude of the terms. A useful bound is

Pf (tL) ≤ Pf (0) + E[N (tL)] (C.25)

where the first term may be calculated by FORM/SORM and the expected number of outcrossings, E[N(tL)], is calculated by Rice's formula or one of its generalisations. Alternatively, parallel system concepts can be employed.


10.6. Figures

Figure C.1: Schematic representation of series and parallel systems

Figure C.2: Idealised load-displacement response of structural elements

Figure C.3: Effect of element correlation and system size on failure probability (a) series system (b) parallel system


Figure C.4: Schematic representations

(a) random process (b) random field

Figure C.5: Limit state surface in basic variable and standard normal space


Figure C.6(a): Failure region as union of component failure events for series system

Figure C.6(b): Failure region as intersection of component failure events for parallel system

Figure C.7: General time-dependent reliability problem


Figure C.8: Schematic representation of crossing problems (a) slowly varying resistance (b) rapidly varying resistance

Figure C.9: Fundamental barrier crossing problem

Figure C.10: Damage accumulation problem


Figure C.11: Realization of an FBC process

Figure C.12: Time-dependent safety margin and schematic representation of vector outcrossing


10.7. Bibliography

[C1] Ang A H S and Tang W H, Probability Concepts in Engineering Planning and Design, Vol. I & II, John Wiley, 1984.
[C2] Augusti G, Baratta A and Casciati F, Probabilistic Methods in Structural Engineering, Chapman and Hall, 1984.
[C3] Benjamin J R and Cornell C A, Probability, Statistics and Decision for Civil Engineers, McGraw Hill, 1970.
[C4] Bolotin V V, Statistical Methods in Structural Mechanics, Holden-Day, 1969.
[C5] Borges J F and Castanheta M, Structural Safety, Laboratorio Nacional de Engenharia Civil, Lisboa, 1985.
[C6] Ditlevsen O, Uncertainty Modelling, McGraw Hill, 1981.
[C7] Ditlevsen O and Madsen H O, Structural Reliability Methods, J Wiley, 1996.
[C8] Madsen H O, Krenk S and Lind N C, Methods of Structural Safety, Prentice-Hall, 1986.
[C9] Melchers R E, Structural Reliability: Analysis and Prediction, 2nd edition, J Wiley, 1999.
[C10] Thoft-Christensen P and Baker M J, Structural Reliability Theory and its Applications, Springer-Verlag, 1982.
[C11] Thoft-Christensen P and Murotsu Y, Application of Structural Systems Reliability Theory, Springer-Verlag, 1986.
[C12] CEB, First Order Concepts for Design Codes, CEB Bulletin No. 112, 1976.
[C13] CEB, Common Unified Rules for Different Types of Construction and Materials, Vol. 1, CEB Bulletin No. 116, 1976.


[C14] Construction Industry Research and Information Association (CIRIA), Rationalisation of Safety and Serviceability Factors in Structural Codes, Report 63, London, 1977.

[C15] International Organization for Standardization (ISO), General Principles on Reliability for Structures, ISO 2394, Third edition.


11. Annex D: Bayesian Interpretation of Probabilities

11.1. Introduction

This JCSS Probabilistic Model Code offers distribution functions and corresponding parameter models for loads and structural properties in order to carry out failure probability calculations for comparison with specified reliability targets. This annex gives guidance on the interpretation of both input and results of those calculations. For the sake of discussion, three possible alternatives of interpretation will be mentioned:
1. the frequentistic interpretation
2. a purely formal interpretation
3. the Bayesian interpretation
They will be discussed in the following section.

11.2. Discussion

The frequentistic interpretation is quite straightforward. It means that if one observes, for a long period of time, say T, a large set of, say, N similar structures, all having a failure rate of p [1/year], one expects the number of failures not to deviate too far from pTN. The deviation should fall within the rules of combinatory probabilistic calculations. Such an interpretation, however, can only be justified in a stationary world where the amount of statistical or theoretical evidence for all the distribution functions is very large. It should be clear that such a frequentistic interpretation of the failure probabilities is out of the question in the field of structural design. In almost all cases the data is too scarce and often only of a very generic nature. Note, however, that a frequentistic interpretation can still be used in a conditional sense. The statement that, given a set of statistical models for the various variables, a structure has some failure probability, is meaningful and helpful. The second interpretation mentioned above, the formal approach, gives full credit to the fact that the numbers used in the analysis are based on (common) ideas rather than statistical evidence. The probabilistic design is considered as a strictly formal procedure without any meaning or interpretation. Such a procedure can be believed to be a richer


and more consistent design procedure compared to, for instance, a Partial Factor Method or an Allowable Stress Method. The basic philosophy is that a probabilistic design procedure which produces, on average, the same design results as its successful predecessors is at least as good as, or even better than, those other methods. So calibration on the average result is the key point, and the absolute values of the distributions and the failure probabilities have no meaning at all. An alternative code, prescribing higher standard deviations (resulting in higher failure probabilities) and correspondingly higher target probabilities, is considered as fully equivalent. This formal interpretation has advantages, but it is difficult to maintain. In many cases it is at least convenient if the various values in the probabilistic calculations have some meaning in the real world. It should be possible, for instance, to consider the distribution functions in this code as the best estimate to describe our lack of knowledge and to use them as priors for Bayesian updating procedures when new data become available. It should also be possible to use the models for decision making in structural design and in optimisation procedures for target reliabilities. If this cannot be done, the method loses much of its attraction. This leads in the direction of a Bayesian probability interpretation, where probabilities are considered as the best possible expression of the degree of belief in the occurrence of a certain event. The Bayesian interpretation does not claim that probabilities are direct and unbiased predictors of occurrence frequencies that can be observed in practice. The only claim is that, if the analysis is carried out carefully, the probabilities will be correct if averaged over a large number of decision situations. The requirement to fulfil that claim, of course, is that the purely intuitive part is neither systematically too optimistic nor systematically too pessimistic. The calibration to common practice on the average may be considered as an adequate means to achieve that goal. The above statement may sound vague and unsatisfactory at first sight. There seems to be an almost unlimited freedom to make unproven assessments based only on very individual intuition. In this respect, one should keep in mind that:

(1) where data is lacking, statistical parameters like means and standard deviations are not taken as deterministic point estimates but as random quantities, usually with a wide scatter; in this code the scatter is not the opinion of an individual engineer, but is based on the judgement of a group of engineers;


(2) where data is available, estimates can (and often should) be improved on the basis of it; the minimum requirement is that intuitive probability models should not be in contradiction with the data.

Within Bayesian Probability Theory these starting points have been rigorously formalised. As long as no data is available, a so-called uninformative or vague prior estimate is chosen. Given observations, the prior can be updated to a so-called posterior distribution, using Bayes' Theorem. For details the reader is referred to Part 3.0, Material Properties, General Principles, Annex A. It should be noted that, in the case of sufficient data, this procedure will tend to probability statements that can be interpreted in a purely frequentistic way. Data may of course become available in blocks: in such a case the posterior distribution resulting from the first block may be used as a prior distribution for the second data block. That is, in fact, precisely what is present in the various chapters of Parts 2 and 3: the distributions given can often be considered as "data based priors" based on data from a generic world wide population. These models can be "updated" if data for a specific site or a specific producer are available. Practically speaking, a lack of statistical data may lead to (1) uncertainties in statistical parameters (mean, standard deviation, etc.) and (2) uncertainty in the type of distribution (normal, lognormal, Weibull, etc.). It turns out that the latter type of uncertainty requires an unrealistically large amount of data to achieve a substantial reduction, while calculation results may be very sensitive to it. Also, such large data sets fulfilling the stationarity requirement may hardly be available. It is exactly on this point that there is a need to standardise the input. It should be noted that in this code most distribution types have the nature of a "point estimate", neglecting to some extent the distribution uncertainty.
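As an illustration of the updating step described above, the following minimal sketch shows a conjugate Bayesian update of a mean value, assuming a normal likelihood with known standard deviation and a normal prior on the mean; all numerical values are illustrative assumptions, not values from this code.

# Sketch of conjugate Bayesian updating of a mean value (illustrative numbers only).
# Assumption: observations are normal with known standard deviation sigma;
# the prior on the mean is normal with parameters mu0 (mean) and tau0 (std. dev.).
import math

def update_normal_mean(mu0, tau0, sigma, observations):
    """Return posterior mean and posterior standard deviation of the mean."""
    n = len(observations)
    xbar = sum(observations) / n
    # Precision-weighted combination of prior information and data.
    post_var = 1.0 / (1.0 / tau0**2 + n / sigma**2)
    post_mean = post_var * (mu0 / tau0**2 + n * xbar / sigma**2)
    return post_mean, math.sqrt(post_var)

# Example: a generic ("data based") prior updated with 5 hypothetical site-specific tests.
mu_post, tau_post = update_normal_mean(mu0=30.0, tau0=4.0, sigma=5.0,
                                        observations=[33.1, 28.4, 31.7, 35.2, 30.9])
print(mu_post, tau_post)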

11.3. Conclusion

The conclusion of the foregoing is that the distributions and probabilities in this Model Code should be given a Bayesian, degree-of-belief type interpretation. One may use the distributions as a starting point for updating in the presence of specific structure-related data and as a basis for optimisation. Some closing reflections:


(1) The numbers given in this code do not include the effect of gross errors. This is one of the main sources of the deviation between calculated probabilities and failure frequencies observed in practice.

(2) The justification for a Bayesian probabilistic approach in decision making is that it makes the inevitable element of judgement explicit and minimises its influence. A return to so-called deterministic procedures because of a lack of statistical data is not a realistic alternative.

(3) In the Bayesian procedure the prior, if no explicit data is available, is often referred to as "subjective" or "person dependent". In the case of this code this would not be the right terminology. The priors given are not the result of the ideas and experience of a single individual, but of a large group of experts. This gives the distributions some flavour of objectivity, although, of course, still on an intuitive basis.

(4) The system of given distributions and their use in Bayesian updating and Bayesian decision procedures may be considered as a formal procedure in itself.


JCSS PROBABILISTIC MODEL CODE
PART 2: LOAD MODELS

2.0 GENERAL PRINCIPLES

Table of contents:

2.0.1 Introduction
2.0.2 Classifications
2.0.3 Modelling of actions
2.0.4 Models for fluctuations in time
2.0.4.1 Types of models
2.0.4.2 Distribution of extremes for single processes
2.0.4.3 Distribution of extremes for hierarchical processes
2.0.5 Models for spatial variability
2.0.5.1 Hierarchical model
2.0.5.2 Equivalent uniformly distributed load (EUDL)
2.0.6 Dependencies between different actions
2.0.7 Combination of actions

ANNEX 1 - DEFINITIONS
ANNEX 2 - DISTRIBUTION FUNCTIONS
ANNEX 3 - MATHEMATICAL COMBINATION TECHNIQUES


2.0 GENERAL PRINCIPLES

2.0.1 Introduction

The environment in which structural systems function gives rise to internal forces, deformations, material deterioration and other short-term or long-term effects in these systems. The causes of these effects are termed actions. The environment from which the actions originate can be of a natural character, for example, snow, wind and earthquake. It can also be associated with human activities such as living in a domestic house, working in a factory, etc.

The following concepts of actions are used in this document.

1) An action is an assembly of concentrated or distributed forces acting on the structure. This kind of action is also denoted by "load".

2) An action is the cause of imposed displacements or thermal effects in the structure. This kind of action is often denoted by "indirect action".

3) An action is an environmental influence which may cause changes with time in the material properties or in the dimensions of a structure.

Action descriptions are in most cases based on suitably simple mathematical models, describing the temporal, spatial and directional properties of the action across the structure. The choice of the level of richness of details is guided by a balance between the quality of the available or obtainable information and a reasonably accurate modelling of the action effect. The choice of the level of realism and accuracy in predicting the relevant action effects is, in turn, guided by the sensitivity of the implied design decisions to variations of this level and the economical weight of these decisions. Thus the same action phenomenon may give rise to several very different action models dependent on the effect and structure under investigation.


2.0.2 Classifications

Loads can be classified according to a number of characteristics. With respect to the type of the loads, the following subdivision can be made:

- self weight of structures
- occupancy loads in buildings, e.g. loads from persons and equipment
- actions caused by industrial activities, e.g. silo loads
- actions caused by transport: traffic, liquids in pipelines, cranes, impact, etc.
- climatic actions, e.g. snow, wind, outdoor temperature, etc.
- hydraulic actions, e.g. water and ground water pressures
- actions from soil or rock, including earthquake

This classification does not cover all possible actions, but most of the common types of actions can be included in one or more classes. Some of the classes belong as a whole either to uncontrollable actions or to controllable actions. Other actions may belong to both, e.g. water pressure.

With respect to the variations in time the following classification can be made:

- permanent actions, whose variations in time around their mean are small and slow (e.g. self weight, earth pressure) or which monotonically approach a limiting value (e.g. prestressing, imposed deformation from construction processes, effects from temperature, shrinkage, creep or settlements)

- variable actions, whose variations in time are frequent and large (e.g. all actions caused by the use of the structure and by most of the external actions such as wind and snow)

- exceptional actions, whose magnitude can be considerable but whose probability of occurrence for a given structure is small in relation to the anticipated time of use. Frequently the duration is short (e.g. impact loads, explosions, earth and snow avalanches).

As far as the spatial fluctuations are concerned it is useful to distinguish between fixed and free actions. Fixed actions have a given spatial intensity distribution over the structure. They are completely defined if the intensity is specified at a particular point of the structure (e.g. earth or water pressure). For free actions the spatial intensity distribution is variable (e.g. regular occupancy loading).


2.0.3 Modelling of actions

There are two main aspects of the description of an action: one is the physical aspect, the other is the statistical aspect. In most cases these aspects can be clearly separated. Then the physical description gives the types of physical data which characterise the action model, for example, vertical forces distributed over a given area. The statistical description gives the statistical properties of the variables, for example, a probability distribution function. In some cases the physical and statistical aspects are so integrated that they cannot be considered separately.

A complete action model consists, in general, of several constituents which describe the magnitude, the position, the direction, the duration, etc. of the action. Sometimes there is an interaction between the components. There may in certain cases also be an interaction between the action and the response of the structure.

One can in many cases distinguish between two kinds of variables (constituents), Fo and W, describing an action F (see also Part 1, Basis of Design).

F = ϕ (Fo, W) (2.0.3.1)

Fo is a basic action variable which is directly associated with the event causing the action and which should be defined so that it is, as far as possible, independent of the structure. For example, for snow load Fo is the snow load on ground, on a flat horizontal surface.

W is a kind of conversion factor or model parameter appearing in the transformation from the basic action to the action F which affects the particular structure. W may depend on the form and size of the structure, etc. For the snow load example W is the factor which transforms the snow load on ground to the snow load on roof and which depends on the roof slope, the type of roof surface, etc.

ϕ(·) is a suitable function, often a simple product.

The time variability is normally included in Fo, whereas W can often be considered as time independent. A systematic part of the space variability of an action is in most cases included in W, whereas a possible random part may be included in Fo or in W. Eq. (2.0.3.1) should be regarded as a schematic equation. For one action there may be several variables Fo and several variables W.

Any action model contains a set of parameters and variables that must be evaluated before the model can be used. In probabilistic modelling all action variables are in principle assumed to be random variables or processes, while other parameters may be time or spatial co-ordinates, directions, etc. Sometimes parameters may themselves be random variables, for example when the model allows for statistical uncertainty due to small sample sizes.

An action model often includes two or more variables of different character as is described by eq. (2.0.3.1). For each variable a suitable model should be chosen so that the complete action model consists of a number of models for the individual variables.


These models may be described in terms of:

- stochastic processes or random fields
- sequences of random variables
- individual random variables
- deterministic values or functions

The definition of the models for these quantities requires probability distributions (see Annex 2) and a description of the correlation patterns.


2.0.4 Models for fluctuations in time

2.0.4.1 Types of models

To describe time dependent loads, one needs the probability distribution for the "arbitrary point in time values" and a description of the variations in time. Some typical process models are (see Figure 2.0.4.1):

a) Continuous and differentiable process
b) Random sequence
c) Point pulse process with random intervals
d) Rectangular wave process with random intervals
e) Rectangular wave process with equidistant intervals ∆

If the load intensities in subsequent time intervals of model (e) are independent, the model is referred to as an FBC model (Ferry Borges-Castanheta model).

In many applications a combination of models is used, e.g. for wind the long term average is often modelled as an FBC model while the short term gust process is a continuous Gaussian process. Such models are referred to as hierarchical models (see Part 1, Basis of Design, Section 5.4). Each term in such a model describes a specific and independent part of the time variability. For a number of further definitions and notions, reference is made to Annex 1.

Figure 2.0.4.1: Various types of load models


2.0.4.2 Distribution of extremes for single processes

In design the main interest is normally directed to the maximum value of the load in some reference period of time to. A quite general and useful upper bound formula to calculate the distribution of the maximum is given by:

FFmax(a) ≅ exp[-to ν+(a)] (2.0.4.1)

The upcrossing frequency ν+(a) is given by:

ν+(a) = P[F(t) < a and F(t+dt) > a] / dt (2.0.4.2)

For the FBC model ν+(a) is simply given by:

ν+(a) = (1-FF(a)) FF(a) / ∆t ≅ (1-FF(a)) / ∆t (2.0.4.3)

And for a continuous Gaussian process:

ν+(a) = (1/2π) √(-ρ''(0)) exp(-β²/2) (2.0.4.4)

where β = (a - µ(F)) / σ(F) and ρ is the correlation function.
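The expressions (2.0.4.1), (2.0.4.3) and (2.0.4.4) can be evaluated in a few lines of code. The sketch below is illustrative only: the normal point-in-time distribution, its parameters, the reference period and the value of -ρ''(0) are assumptions chosen just to make the example self-contained.

# Sketch: distribution of the maximum load via upcrossing rates, eqs. (2.0.4.1)-(2.0.4.4).
# All parameter values are illustrative assumptions.
import math

def phi_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def fmax_fbc(a, t0, dt, point_in_time_cdf):
    """Eq. (2.0.4.1) with the FBC upcrossing rate of eq. (2.0.4.3)."""
    nu_plus = (1.0 - point_in_time_cdf(a)) / dt
    return math.exp(-t0 * nu_plus)

def fmax_gauss(a, t0, mu, sigma, rho_dd0):
    """Eq. (2.0.4.1) with the Gaussian upcrossing rate of eq. (2.0.4.4);
    rho_dd0 is -rho''(0), the negative curvature of the correlation function at 0."""
    beta = (a - mu) / sigma
    nu_plus = math.sqrt(rho_dd0) / (2.0 * math.pi) * math.exp(-0.5 * beta**2)
    return math.exp(-t0 * nu_plus)

# Illustrative: 50-year maximum of an FBC load with a normal point-in-time distribution.
cdf = lambda a: phi_cdf((a - 0.5) / 0.2)     # mean 0.5, std 0.2 (assumed)
print(fmax_fbc(a=1.2, t0=50.0, dt=1.0, point_in_time_cdf=cdf))
print(fmax_gauss(a=1.2, t0=50.0, mu=0.5, sigma=0.2, rho_dd0=4.0))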


2.0.4.3 Distribution of extremes for hierarchical processes

Consider the case that the load model contains slowly and rapidly varying parts, as well as random variables that are constant in time (see Figure 2.0.4.2).

F = R + Q + S (2.0.4.5)

R = random variables, constant in time
Q = slow rectangular process with mean renewal rate λ
S = fast varying process

In that case the following expression (see Annex 3, A.3.5) can be used:

FFmax(a) = ER[ exp[-λ to (1 - EQ[ exp(-∆t νS+(a|R,Q)) ])] ] (2.0.4.6)

νs+(a|RQ) = upcrossing rate of level “a” for process S, conditional upon R and Q

∆t = 1/λ = time interval for the rectangular process Q

ER and EQ denote the expectation operator over all variables R and Q respectively.

Figure 2.0.4.2: Hierarchical model for time dependent loads
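Equation (2.0.4.6) nests two expectation operations, which can be evaluated directly by Monte Carlo sampling over R and Q. The sketch below assumes normal distributions for R and Q and an S-process upcrossing rate of the Gaussian form (2.0.4.4); all distributions and numbers are illustrative assumptions, not prescriptions of this code.

# Sketch: evaluation of eq. (2.0.4.6) by nested sampling over R and Q (illustrative).
import math, random

def fmax_hierarchical(a, t0, lam, nu_s_given, sample_R, sample_Q, n_r=2000, n_q=200):
    """F_Fmax(a) = E_R[ exp(-lam*t0*(1 - E_Q[ exp(-dt*nu_S(a|R,Q)) ])) ], dt = 1/lam."""
    dt = 1.0 / lam
    outer = 0.0
    for _ in range(n_r):
        r = sample_R()
        inner = 0.0
        for _ in range(n_q):
            q = sample_Q()
            inner += math.exp(-dt * nu_s_given(a, r, q))
        inner /= n_q
        outer += math.exp(-lam * t0 * (1.0 - inner))
    return outer / n_r

# Illustrative model: R ~ N(0, 0.1), Q ~ N(0.3, 0.1), S Gaussian with sigma = 0.1.
nu_s = lambda a, r, q: (math.sqrt(50.0) / (2 * math.pi)) * math.exp(-0.5 * ((a - r - q) / 0.1)**2)
print(fmax_hierarchical(a=1.0, t0=50.0, lam=0.2,
                        nu_s_given=nu_s,
                        sample_R=lambda: random.gauss(0.0, 0.1),
                        sample_Q=lambda: random.gauss(0.3, 0.1)))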


2.0.5 Models for Spatial variability

2.0.5.1 Hierarchical models

As an example of the spatial modelling of actions using a hierarchical model, consider the live load in an office building:

F = m + ∆F1 + ∆F2 + ∆F3(x,y) (2.0.5.1)

where:

m is a general mean value for the whole population

∆F1 is a stochastic variable which describes the variation between the loads on different floors. The distribution function for ∆F1 has the mean value zero and the standard deviation σ1

∆F2 is a stochastic variable which describes the variation between the loads in rooms on the same floor but with different floor areas. The distribution function for ∆F2 has the mean value zero and the standard deviation σ2

∆F3 is a random field which describes the spatial variability of the load within a room.

The total variability of the samples taken from the total population is described by ∆F1 + ∆F2 + ∆F3. The variability within the subpopulation of floors is described by ∆F2 + ∆F3.
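A minimal sampling sketch of the hierarchical model (2.0.5.1): here the random field ∆F3 is represented only by its point variance (its spatial correlation structure is not modelled), and all standard deviations are illustrative assumptions.

# Sketch: sampling the hierarchical live load model of eq. (2.0.5.1).
# DF3 is represented only by a zero-mean point value (no spatial correlation);
# m, sigma1, sigma2, sigma3 are assumed, illustrative values.
import random

def sample_point_load(m=0.5, sigma1=0.2, sigma2=0.15, sigma3=0.3, n_floors=3, n_rooms=4):
    """Return a nested sample: loads[floor][room] at one point in each room."""
    loads = []
    for _ in range(n_floors):
        dF1 = random.gauss(0.0, sigma1)          # between-floor variation
        floor = []
        for _ in range(n_rooms):
            dF2 = random.gauss(0.0, sigma2)      # between-room variation
            dF3 = random.gauss(0.0, sigma3)      # within-room (point) variation
            floor.append(m + dF1 + dF2 + dF3)
        loads.append(floor)
    return loads

print(sample_point_load())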

2.0.5.2 Equivalent uniformly distributed load (EUDL)

Consider a simple hierarchical distributed load model given by:

q(x,y) = qo + qloc(x,y) (2.5.0.2)

qo = the variability between the various structures or structural elements
qloc = the small scale or point to point fluctuation

In many cases the random field q is replaced by a so-called Equivalent Uniformly Distributed Load (EUDL). This load is defined as:

qEUDL(t) = ∫ q(x,y,t) i(x,y) dA / ∫ i(x,y) dA (2.5.0.3)

where i(x,y) is the influence function for some specific load effect (e.g. the midspan bending moment).

For given statistical properties of the load field q(x,y) the mean and standard deviation of qEUDL can be evaluated. For a homogeneous field, that is a random field where the statistical properties of q(x,y) do not depend on the location, the resulting formulas are:

µ(qEUDL) = µ(qo) (2.5.0.4)

σ²(qEUDL) = σ²(qo) + σ²(qloc) ∫∫∫∫ i(x,y) i(ξ,η) ρ(∆r) dx dy dξ dη / [∫∫ i(x,y) dx dy]² (2.5.0.5)


Here ρ(∆r) is the correlation function describing the correlation of the small scale load qloc between the two points (x,y) and (ξ,η). This function may be of the form:

ρ(∆r) = exp[-∆r²/dc²] (2.5.0.6)

with ∆r² = (x-ξ)² + (y-η)², ∆r being the distance between the two points, and dc some scale distance. The correlation function tends to zero for distances ∆r much larger than dc.

If the field can be schematised as an FBC-field, the formula for σ²(qEUDL) can be simplified to:

σ²(qEUDL) = σ²(qo) + σ²(qloc) κ Ao/A (2.5.0.7)

Here Ao is the reference area of the FBC-field and A stands for the total area under consideration, the so-called tributary area. The formula is valid only for A > Ao.

The parameter κ is a factor depending on the shape of the influence line i(x,y). Values are presented in Figure 2.5.0.1. The value κ = 1 corresponds to a constant value of i(x,y).

Figure 2.5.0.1: Random fields and corresponding κ-values (κ = 1.0, 1.4, 2.0 and 2.4 for the influence shapes shown).
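A short numerical illustration of eq. (2.5.0.7): the field parameters below are taken to be of office type (cf. Table 2.2.1 in Part 2.2), while the value of κ, the tributary area A and the treatment of A ≤ Ao follow the rule stated in Part 2.2; the specific numbers are assumptions for illustration only.

# Sketch: standard deviation of the EUDL per eq. (2.5.0.7); numbers are assumptions.
import math

def sigma_eudl(sigma_q0, sigma_qloc, kappa, A0, A):
    """sigma^2(qEUDL) = sigma^2(qo) + sigma^2(qloc) * kappa * Ao/A, valid for A > Ao."""
    ratio = A0 / A if A > A0 else 1.0   # for A <= Ao take Ao/A = 1 (cf. Part 2.2, eq. (4))
    return math.sqrt(sigma_q0**2 + sigma_qloc**2 * kappa * ratio)

# Example with assumed values: sigma(qo) = 0.3, sigma(qloc) = 0.6 kN/m2, Ao = 20 m2.
print(sigma_eudl(sigma_q0=0.3, sigma_qloc=0.6, kappa=2.0, A0=20.0, A=60.0))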


2.0.6 Interactions and correlations between actions

For describing dependencies between various actions it is useful to distinguish between:

- actions of the same nature
- actions of a different nature

Actions of the same nature are for instance floor loads on different floors in one building or the wind loads on the front and back wall. The combination of wind and snow is a typical example of the combination of actions of a different type. Note that sometimes the distinction may be less clear: it may be difficult to decide whether floor loads of a completely different type in one building (say office loads and storage loads) are loads of the same nature or of a different nature.

If the actions are of the same nature, one might better consider them as components of one action. The various components are normally described by similar probabilistic models. The basic question is then to model the statistical dependency between the processes. In general this is a purely mathematical problem. Details of the mathematical description of the dependencies depend on the nature of the physical relationship and the nature of the processes themselves. One possibility is to construct a hierarchical model as has been explained in Section 2.0.5.1. For two stationary continuous Gaussian processes x(t) and y(t) the correlation may be described either by the cross correlation function Rxy(τ) or alternatively by the cross spectrum Sxy(ω) (see Annex 1). For pulse type processes we may have to distinguish between the correlation in amplitude, arrival time and duration. Floor loads in multi-storey buildings are a good example where all three correlations are of importance.

If the actions are of a different nature, they may sometimes show quite complex physical interactions. Typical examples are:

Snow and wind
If snow and wind act together, the result may be that the wind reduces the accumulated snow load on the roof. For some building configurations, however, the combined action of wind and snow may result in much higher loads at some specific locations. This dependency between wind and snow is present even if wind and snow are statistically completely independent processes (which is not the case). In such cases we need a more complex model of the type (2.0.3.1) where the final load is calculated as a function of both wind speed and snow intensity. In addition one may need a statistical correlation between wind and snow as components of the same multicomponent atmospheric system.

Earthquake and fire
Earthquakes are often followed by fire: due to damage to pipes and heating systems gas may escape and a fire may be started. The earthquake is said to act as a trigger mechanism for fire. In order to treat this interaction properly, one should consider

1. the probability of a fire starting given that an earthquake has occurred, and
2. the probability of collapse, given earthquake and fire.

The second analysis should take into account that all extinction devices may not be working and that the structure may already be damaged by the earthquake.

In addition, of course, one still needs to consider the standard cases of collapse under earthquake alone and collapse under fire alone.


Wind and traffic on bridges
Traffic on bridges increases the wind load, but heavy wind will reduce the traffic. One may need a model expressing the wind force given traffic and wind speed, and a model expressing the conditional probability density of the traffic intensity as a function of wind speed.

So in all the above examples one needs to build a more advanced physical model on the one hand, and conditional probability models of one load given the (extreme) condition of the other on the other hand. In most cases it may be convenient to define one of the processes as the "leading one" and to describe arrival times and amplitudes of the second process conditional upon the occurrence and amplitude of the first one.

In this model code little or no guidance is presented on this matter. However, the user of this model code should always be aware of these possible correlations and interactions. It is stressed that these interactions may be of great importance to the reliability of the structure.


2.0.7 Combinations of actions

From a mathematical modelling point of view the load on a structure is a joint set F(t) of time varying random fields. This set of loads gives a vectorial load effect E(t) in a given cross section or point of the structure at time t as a function of F(t) (i.e. a random process). In the scalar case we have:

E(t) = c1 F1(t) + c2 F2(t) + .. (2.0.7.1)

The reliability problem related to the considered point is to evaluate the probability Pf that the load effect E(t) leaves the non-failure domain V at some time during the anticipated period of use, V being the domain defined by the strength properties at the considered point and the limit state.

The load combination problem is to formulate a reasonably simple, but for the considered engineering purpose sufficiently realistic, mathematical model that defines F(t). The needed level of detail in the modelling of F(t) depends on the filtering effect of the function that maps F(t) into the load effect E(t). This filtering effect is judged under due consideration of the sensitivity of the probability Pf to the detailing of the models. The sensitivity question is tied to the last part of the load combination problem, which is actually to compute the value of Pf. Thus, to be operational, the modelling of F(t) should be simple enough to enable at least a computer simulation of the scalar process E(t) to an extent that allows an estimation of Pf.

First the relevant set of different action types is identified. This identification defines the number of elements in the set F(t) and the subdivision of F(t) into stochastically independent subsets. The modelling is next concentrated on each of these subsets with dependent components.

The mathematical difficulty in evaluating outcrossing probabilities for processes of the type (2.0.7.1) lies in the possibly very different nature of the various contributors Fi. Each of these processes may be of a completely different nature, including all kinds of continuous and intermittent processes. Numerical solutions will often prove to be necessary, but analytical solutions may also prove to be very helpful. Reference is made to Annex 3 and to the literature.


ANNEX 1 - DEFINITIONS

Covariance function

The covariance function r (t1, t2) is defined by:

r(t1, t2) = E[(F(t1) - m1)(F(t2) - m2)]

m1 = E[F(t1)]   m2 = E[F(t2)]

Stationary processes

The process is defined for -∞ < t < ∞. If, for all values ti and for all values τ, chosen such that 0 < ti < to and 0 < ti + τ < to, the stochastic variable x(ti + τ) has the same distribution function as the stochastic variable x(ti), the stochastic process x(t) is stationary.

If the mean value function m(t) is constant and the covariance function r(t1, t2) depends solely on the difference τ = (t2 - t1), the process is said to be wide-sense stationary.

Thus the covariance function for a stationary or a wide sense stationary process may be written

r (τ) = E [(F(t + τ) - m)(F(t) - m)]

The concept of stationarity applied to action processes should in most cases be interpreted as wide-sense stationarity.

Ergodic processes

A process is ergodic if averaging over several realisations and averaging with respect to time (or another index parameter) give the same result.

For ergodic processes a relation between the point-in-time value distribution function F and the excursion time t is determined, for a chosen reference period to, by

1 - FF (F) = t/to

The correlation function

The correlation function for a stationary process is:

ρ(τ) = r(τ) / r(0)

For ergodic processes ρ(τ = ∞) = 0

Spectrum

A stationary stochastic process may be characterised with the aid of a spectrum:

S(n) = ∫[-∞,∞] e^(-i2πnτ) r(τ) dτ


S(n) may be regarded as a measure of how the process is built up of components with different frequencies. The total variance of the process is:

Var Q = 2 ∫[0,∞] S(n) dn

Gaussian processes

A stochastic process S(t) is a Gaussian process if the multidimensional probability distribution functions for all the stochastic variables S(ti) are Gaussian. The stochastic properties of a Gaussian process are completely determined by the mean value and the covariance function or by the spectrum.

Scalar Nataf Processes

A special but important class of non-Gaussian, scalar and differentiable processes is built by a memoryless transformation from a normal process, i.e.

S(t) = h(U(t))

where U(t) is a standard normal process and h(u) is an arbitrary function. For S(t) any admissible (unimodal) distribution function can be chosen, thus defining a certain class of functions h(u). In addition the autocorrelation function ρS(t1,t2) has to be specified. However, there are some restrictions on the type of autocorrelation function.
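The memoryless transformation can be sketched numerically as follows: a standard normal value u is mapped through h(u) = F⁻¹(Φ(u)) onto the target marginal distribution. The example below uses a lognormal target with the parameters λ and ζ of Annex 2; it ignores the autocorrelation aspect entirely and is illustrative only.

# Sketch: memoryless (Nataf-type) transformation of a standard normal variable to a
# lognormal marginal, S = h(U) with h(u) = F^-1(Phi(u)). Autocorrelation is not treated.
import math, random

def h_lognormal(u, lam, zeta):
    """For a lognormal target with parameters lam, zeta: F^-1(Phi(u)) = exp(lam + zeta*u)."""
    return math.exp(lam + zeta * u)

samples = [h_lognormal(random.gauss(0.0, 1.0), lam=0.0, zeta=0.2) for _ in range(5)]
print(samples)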

Scalar Hermite Processes

The Hermite process is a special case of the Nataf process. All marginal distributions must be of Hermite type. For this process the solution of the integral equation occurring for the autocorrelation function of the equivalent (or better, generating) standard normal process is analytic. The standard Hermite process has the representation (i.e. a special case of the function h(u)):

S(t) = κ ( U(t) + h̃3 (U(t)² - 1) + h̃4 (U(t)³ - 3 U(t)) )

with the coefficients depending on the first four moments of the marginal distribution of the non-normal process. In addition, the Hermite process requires specification of the autocorrelation function of S(t). Again, there are certain restrictions on the moments of the marginal distributions as well as on the autocorrelation function.

Scalar Rectangular Wave Renewal Processes

Scalar rectangular wave renewal processes are useful models for processes changing their amplitude at random renewal points in a random fashion. A scalar rectangular wave renewal process is characterised by the jump rate λ and the distribution function of the amplitude. The renewals occur independently of each other. No specific distribution is assigned to the interarrival times. Therefore, a renewal process characterised only by a jump rate captures only long term statistics. The mean duration of pulses is asymptotically equal to 1/λ. For the special case of a Poisson rectangular wave process the interarrival times, and thus the durations of the pulses, are exponentially distributed with mean 1/λ. In the special case of a Ferry Borges-Castanheta process the durations are constant and the repetition number r = (t2 - t1)/∆, with ∆ the duration of pulses, is equal to λ(t2 - t1). Also, the sequence of amplitudes is an independent sequence.


The jump rate can be a function of time, as can the parameters of the distribution function of the amplitudes.

It is assumed that rectangular wave processes jump from a random value S(t) to a new value S+(t+δ) with δ → 0 at a renewal, without returning to zero. Rectangular wave renewal processes must be regular processes, i.e. the probability of occurrence of two or more renewals in a small time interval must be negligible (of o-order). Non-stationary rectangular wave renewal processes are processes which have either time-dependent parameters of the amplitude distributions and/or time-dependent jump rates.

Random fields

A random field may be regarded as a one-, two- or three-dimensional stochastic process. The time t is substituted by the space co-ordinates x, y, z.

For the two-dimensional case the covariance function is written (for a stationary random field)

r (dx, dy) = E [(F (x + dx, y + dy) - m) (F (x, y) - m)]

The concepts of stationarity, ergodicity, etc. are in principle the same as for stochastic processes.

Vector processes

Two stationary Gaussian processes F1 and F2 are statistically completely described by their mean values, auto-spectra and the cross spectrum. The latter is defined by:

r12(τ) = E [(F1(t + τ) - m1)(F2(t) - m2)]

S12(n) = ∫[-∞,∞] e^(-i2πnτ) r12(τ) dτ

A vector of n stationary Gaussian processes can be described by n mean values, n auto-spectra and n(n-1) cross spectra. Note that Sij is the complex conjugate of Sji.


ANNEX 2 - DISTRIBUTION FUNCTIONS

Rectangular, a ≤ x ≤ b
fX(x) = 1/(b - a)
Parameters: a, b
Moments: m = (a + b)/2 ; s = (b - a)/√12

Normal, σ > 0
fX(x) = 1/(σ√(2π)) exp[-(x - µ)²/(2σ²)]
Parameters: µ, σ
Moments: m = µ ; s = σ

Lognormal, x > 0, ζ > 0
fX(x) = 1/(x ζ√(2π)) exp[-(ln x - λ)²/(2ζ²)]
Parameters: λ, ζ
Moments: m = exp(λ + ζ²/2) ; s = exp(λ + ζ²/2) √(exp(ζ²) - 1)

Shifted Lognormal, x > ε, ζ > 0
fX(x) = 1/((x - ε) ζ√(2π)) exp[-(ln(x - ε) - λ)²/(2ζ²)]
Parameters: λ, ζ, ε
Moments: m = ε + exp(λ + ζ²/2) ; s = exp(λ + ζ²/2) √(exp(ζ²) - 1)

Shifted Exponential, x ≥ ε, λ > 0
fX(x) = λ exp[-λ(x - ε)]
Parameters: λ, ε
Moments: m = ε + 1/λ ; s = 1/λ

Shifted Gamma, x ≥ 0, b > 0, p > 0
fX(x) = b^p/Γ(p) exp[-b(x - ε)] (x - ε)^(p-1)
Parameters: p, b, ε
Moments: m = ε + p/b ; s = √p / b

Beta, a ≤ x ≤ b, r, t ≥ 1
fX(x) = (x - a)^(r-1) (b - x)^(t-1) / [(b - a)^(r+t-1) B(r,t)]
Parameters: a, b, r, t
Moments: m = a + (b - a) r/(r + t) ; s = [(b - a)/(r + t)] √(r t/(r + t + 1))

Gumbel (Maximum), -∞ < x < +∞, α > 0
fX(x) = α exp[-α(x - u) - exp(-α(x - u))]
Parameters: u, α
Moments: m = u + 0.577216/α ; s = π/(α√6)

Frechet (Maximum), ε ≤ x < +∞, u, k > 0
fX(x) = (k/(u - ε)) ((x - ε)/(u - ε))^(-k-1) exp[-((x - ε)/(u - ε))^(-k)]
Parameters: u, k, ε
Moments: m = ε + (u - ε) Γ(1 - 1/k) ; s = (u - ε) √(Γ(1 - 2/k) - Γ²(1 - 1/k))

Weibull (Maximum), ε ≤ x < +∞, u, k > 0
fX(x) = (k/(u - ε)) ((x - ε)/(u - ε))^(k-1) exp[-((x - ε)/(u - ε))^k]
Parameters: u, k, ε
Moments: m = ε + (u - ε) Γ(1 + 1/k) ; s = (u - ε) √(Γ(1 + 2/k) - Γ²(1 + 1/k))

ANNEX 3 MATHEMATICAL TECHNIQUES FOR LOAD COMBINATIONS


Combination of two rectangular processes (Ferry Borges-Castanheta model)

Consider the case that two actions Q1(t) and Q2(t) are to be combined. Assume that these actions can be described as rectangular or square wave models (Figure A3.1). The following assumptions are made about the processes:

- Q1(t) and Q2(t) are stationary ergodic processes
- all intervals τ1 are equal; all intervals τ2 are equal and τ1 ≥ τ2
- Q1 and Q2 are constant during each interval τ1 and τ2 respectively
- the values of Q1 for the different intervals are mutually independent; the same holds for Q2
- Q1 and Q2 are independent

Figure A3.1: Square wave processes for Q1 (t) and Q2 (t)

Define Q2c as the maximum value of Q2 occurring during the interval τ1, with the probability distribution function:

FQ2c(Q) = [FQ*(Q)]^(τ1/τ2) (A3.1)

where FQ* is the arbitrary point in time distribution for Q2.

Assume a linear relationship between the action effect E and the actions:

E = c1 Q1 + c2 Q2 (A3.2)

The maximum action effect Emax from Q1 and Q2 during the reference period to can then be written as:



Emax = max{c1 Q1 + c2 Q2c} (A3.3)

The maximum should be taken over all intervals τ1 within the reference period to.

As an approximation, the resulting action effect could be calculated as the maximum of the following two combinations (Turkstra's rule):

- E{Q1max, Q2c} if Q1 is considered as the dominating action
- E{Q2max, Q1c} if Q2 is considered as the dominating action

Written as a formula for the case E = c1 Q1 + c2 Q2

Emax = max{c1 Q1max + c2 Q2c ; c1 Q1c + c2 Q2max} (A3.4)

It should be noted that the Turkstra Rule gives a lower bound for the failure probability.
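Once the distributions of the maxima Q1max, Q2max and of the companion values Q1c, Q2c are available, Turkstra's rule (A3.4) can be evaluated directly, for instance by simulation. The sketch below assumes Gumbel distributions for the maxima and normal distributions for the companion values; all parameters are assumptions chosen only to make the example self-contained.

# Sketch: Turkstra's rule, eq. (A3.4), evaluated by simulation (illustrative assumptions).
import math, random

def gumbel_max(u, alpha):
    """Sample a Gumbel (maximum) variable with parameters u, alpha (cf. Annex 2)."""
    return u - math.log(-math.log(random.random())) / alpha

def turkstra_sample(c1, c2):
    q1_max, q1_c = gumbel_max(1.0, 5.0), random.gauss(0.6, 0.15)   # assumed
    q2_max, q2_c = gumbel_max(0.8, 4.0), random.gauss(0.4, 0.10)   # assumed
    return max(c1 * q1_max + c2 * q2_c, c1 * q1_c + c2 * q2_max)

samples = sorted(turkstra_sample(1.0, 1.0) for _ in range(100000))
print("98% fractile of Emax:", samples[int(0.98 * len(samples))])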

Outcrossing approach

Consider the more general event that the random state vector Z(τ), representative for a given problem, enters the failure domain

V = { Z(τ) | g(Z(τ),τ) < 0, 0 < τ < t }

where g(.) is the limit state function. Z(τ) may conveniently be separated into three components as:

Z(τ)T = (RT, Q(τ)T, S(τ)T)

where R is a vector of random variables independent of time t, Q(τ) is a slowly varying random vector sequence and S(τ) is a vector of not necessarily stationary but sufficiently mixing random process variables having fast fluctuations as compared to Q(τ).

In the general case where all the different types of random variables R, Q(τ) and S(τ) are present, the failure probability Pf(t) not only must be integrated over the time invariant variables R, but an expectation operation must also be performed over the slowly varying variables Q(τ):

Pf(tmin, tmax) ≈ 1 - ER[ exp[ -n (1 - EQ[ exp(-E[N+(∆t, R, Q)]) ]) ] ] (A3.5a)

∆t is the characteristic fluctuation time of Q and n = ( tmax - tmin) / ∆t

Or, one step further simplified:

Pf(tmin, tmax) ≈ 1 - ER[ exp[ -EQ[ E[N+(tmin, tmax; R, Q)] ] ] ] (A3.5b)

It should be observed that the expectation operation with respect to Q is performed inside the exponent, whereas the expectation operation with respect to R is performed outside the exponent operator. If the point process of exits is a regular process, which can be assumed in most cases, the conditional expectation of the number of exits in the time interval [tmin, tmax] can be determined from:


E[N+(tmin, tmax; r, q)] = ∫[tmin,tmax] ν+(τ; r, q) dτ (A3.6)

where ν+(τ; r, q) is the outcrossing rate defined by:

ν+(τ; r, q) = lim(∆→0) (1/∆) P( {S(τ) ∉ V} ∩ {S(τ+∆) ∈ V} | r, q ) (A3.7)

If the vector S consists of n components (S1, ..., Sn), all of rectangular wave type, the following formula can be used:

ν+ = Σ(i=1..n) νi P( {(S1, S2, ..., Si-, ..., Sn) ∉ V} ∩ {(S1, S2, ..., Si+, ..., Sn) ∈ V} ) (A3.8)

where Si- and Si+ are two realisations of Si, one before and one after some particular jump, and νi is the jump rate of Si.

Intermittent processes

Intermittent processes are a practically important generalisation for all types of random processes. Although more general forms are known, only the simplest type of intermittency is discussed below. The renewals of the times where the process is "on" follow a Poisson renewal process with rate κ (or mean interarrival time 1/κ). At a renewal the process activates an "on"-state (state "1"). The "off"-states are denoted by "0". The initial durations of "on"-states have an exponential distribution with mean 1/µ, independent of the arrival times. However, it is assumed that an "on"-time is also finished if a next renewal occurs, so that the durations have a truncated distribution. By assuming random initial conditions, the probabilities of the "on/off"-states are then determined by

poff(t) = µ/(κ+µ) + κ/(κ+µ) exp[-(κ+µ)t] (A.3.9)

pon(t) = κ/(κ+µ) - κ/(κ+µ) exp[-(κ+µ)t] (A.3.10)

In general it is assumed that the "on/off"-process is already in its stationary state, where the last terms in these equations vanish. In contrast to rectangular wave renewal processes, where the duration of the rectangular pulse lasts exactly until the next renewal and, for a Poissonian renewal process, is exponentially distributed with mean 1/λ, the "on"-times are now truncated at the next renewal. It is easily shown that the effective durations of the "on"-times then are also exponential, but with mean 1/(κ+µ). The so-called interarrival-duration intensity is defined by ρ = κ/µ. For ρ = κ/µ → ∞ the processes are almost always active. For κ/µ → 0 one obtains spike-like processes.

Intermittencies can also be defined for differentiable processes. If this is a dependent vector process, the entire vector process must have a common ρ, that is all components of the vector must have the same κ and µ. Independent differentiable vector processes, however, can have different ρ's.

In the case of a single intermittent process with κto > 1 and µto << 1 the periods where the intermittent load is present can conveniently be put together. The failure probability is then approximately given by:


Pf (tmin , tmax) = νon T + νoff (to - T) (A.3.11)

where T = κ to / µ = the total expected time that the intermittent load is active and to = tmax - tmin; νon and νoff are the failure rates for present and absent intermittent load respectively.

In the case of two mostly absent uncorrelated intermittent loads, the same approximation principle can be applied, leading to:

Pf(tmin, tmax) = (κ1/µ1)(κ2/µ2) νon,on to
              + (κ1/µ1)(1 - κ2/µ2) νon,off to
              + (1 - κ1/µ1)(κ2/µ2) νoff,on to
              + (1 - κ1/µ1)(1 - κ2/µ2) νoff,off to (A.3.12)

where νon,on is the failure rate for both intermittent loads present, etc.
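Equations (A.3.11) and (A.3.12) are simple weighted sums of conditional failure rates, as the following short sketch illustrates; all rates, κ/µ ratios and the reference period are assumed, illustrative values only.

# Sketch: failure probability with intermittent loads, eqs. (A.3.11) and (A.3.12).
# All numerical values below are assumed for illustration.
def pf_one_intermittent(kappa, mu, t0, nu_on, nu_off):
    T = kappa * t0 / mu                       # expected total "on" time
    return nu_on * T + nu_off * (t0 - T)

def pf_two_intermittent(k1, m1, k2, m2, t0, nu_on_on, nu_on_off, nu_off_on, nu_off_off):
    p1, p2 = k1 / m1, k2 / m2                 # interarrival-duration intensities
    return (p1 * p2 * nu_on_on + p1 * (1 - p2) * nu_on_off
            + (1 - p1) * p2 * nu_off_on + (1 - p1) * (1 - p2) * nu_off_off) * t0

print(pf_one_intermittent(kappa=2.0, mu=200.0, t0=50.0, nu_on=1e-4, nu_off=1e-6))
print(pf_two_intermittent(2.0, 200.0, 1.0, 100.0, 50.0, 1e-3, 1e-5, 1e-5, 1e-7))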


JCSS PROBABILISTIC MODEL CODE
PART 2 - LOAD MODELS

2.1 SELF WEIGHT

Table of contents:

2.1.1 Introduction
2.1.2 Basic model
2.1.3 Probability density distribution functions
2.1.4 Weight density
2.1.5 Volume

List of symbols:

d = correlation length
V = volume described by the boundary of the structural part
γ = weight density of the material
γav = average weight density for a structural part
ρo = correlation between two far away points in one member
∆r = distance between two points within a member


2.1.1 Introduction

The self weight concerns the weight of structural and non-structural components. The main characteristics of the self weight can be described as follows:

- the probability of occurrence at an arbitrary point-in-time is close to one
- the variability with time is normally negligible
- the uncertainty of the magnitude is normally small in comparison with other kinds of loads

Concerning the uncertainties one can distinguish between (hierarchical model):

- variability within a structural part
- variability between different structural parts of the same structure
- variability between various structures

The variability within a structural part is normally small and can often be neglected. However, for some types of problem (e.g. static equilibrium) it may be important.

2.1.2 Basic model

The self weight, G, of a structural part is determined by the relation

G = ∫Vol γ dV (1)

where:

V is the volume described by the boundary of the structural part. The volume of V is Vol.
γ is the weight density of the material.

For a part where the material can be assumed to be reasonably homogeneous eq. (1) can be written

G = γav V (2)

where

γav is an average weight density for the part (see further section 2.1.4).

2.1.3 Probability density distribution functions

The weight density and the dimensions of a structural part are assumed to have Gaussian distributions. To simplify the calculations the self weight, G, may as an approximation be assumed to have a Gaussian distribution.


2.1.4 Weight density

Total variability

Mean values, µγ, and coefficients of variation, Vγ, for the total variability of the weight density of some common building materials are given in Table 2.1.1.

Table 2.1.1: Mean value and coefficient of variation for weight density 1)

Material                                 Mean value [kN/m3]   Coefficient of variation
Steel                                    77                   < 0.01
Concrete:
  Ordinary concrete 2)                   24                   0.04
  High strength concrete                 24-26 4)             0.03
  Lightweight aggregate concrete         4)                   0.04-0.08
  Cellular concrete                      4)                   0.05-0.10
  Heavy concrete for special purposes    4)                   0.01-0.02
Masonry                                  -                    ≈ 0.05
Timber 3):
  Spruce, fir (Picea)                    4.4                  0.10
  Pine (Pinus)                           5.1                  0.10
  Larch (Larix)                          6.6                  0.10
  Beech (Fagus)                          6.8                  0.10
  Oak (Quercus)                          6.5                  0.10

1) The values refer to large populations. They are based on data from various sources.
2) The values are valid for concrete without reinforcement and with stable moisture content. In case of continuous drying under elevated temperature the stable volume weight after 50 days is 1.0-1.5 kN/m3 lower.
3) Moisture content 12%. An increase in moisture content from 12% to 22% causes a 10% rise in weight density.
4) Depends on mix, composition and treatment.

Spatial correlations

Between the densities at two points within one member, the following correlation can be considered to be present:

ρ(∆r) = ρo + (1-ρo) exp[-(∆r/d)²] (3)

where

d is a so-called correlation length which characterises the correlation structure
∆r is the distance between two points within a member
ρo is the correlation between two far away points in one member

Only the correlation in the length dimensions of a structural part is of importance. For beams the weight density over the cross section, and for plates over the height, may be considered as fully correlated.

Between points in two different members, but within one building, a constant correlation ρm is assumed to be present.


In the absence of more detailed information the following values can be used:

d: 10 m (beam/column), 6 m (plate), 3 m (volume)
ρo: 0.85
ρm: 0.70

Note: For large members the variability of the weight density may be taken as V ρo; for a whole structure consisting of many members the variability may be taken as V ρm, where V is the total variability according to Table 2.1.1.
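The within-member correlation model of eq. (3) with the indicative values of d and ρo given above can be evaluated directly; the sketch below is illustrative only and uses the beam/column values.

# Sketch: within-member correlation of weight density, eq. (3), with the indicative
# parameter values given above (d = 10 m for a beam/column, rho_o = 0.85).
import math

def rho_within_member(delta_r, d=10.0, rho_o=0.85):
    return rho_o + (1.0 - rho_o) * math.exp(-(delta_r / d) ** 2)

for dr in (0.0, 2.0, 5.0, 10.0, 30.0):
    print(dr, round(rho_within_member(dr), 3))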

2.1.5 Volume

In most cases it may be assumed that the mean values of the dimensions are equal to the nominal values, i.e. the dimensions given on drawings, in descriptions, etc. The mean value of the volume, V, of the structural parts is calculated directly from the mean values of the dimensions.

The standard deviation of the volume, V, is calculated directly from the values of the standard deviation for the dimensions. Standard deviations for cross section dimensions are given in Table 2.1.2 for some common building materials and types of structural elements.

Table 2.1.2: Mean values and standard deviations for deviations of cross-section dimensions from their nominal values 1)

Structure or structural member      Mean value     Standard deviation
Rolled steel:
  steel profiles, area A            0.01 Anom      0.04 Anom
  steel plates, thickness t         0.01 tnom      0.02 tnom
Concrete members 2):
  anom < 1000 mm                    0.003 anom     4 mm + 0.006 anom
  anom > 1000 mm                    3 mm           10 mm
Masonry members:
  unplastered                       0.02 anom      0.04 anom
  plastered                         0.02 anom      0.02 anom
Structural timber:
  sawn beam or strut                0.05 anom      2 mm
  laminated beam, planed            ≈ 0            1 mm

1) The values refer to large populations. They are based on data from various sources and concern members with currently accepted dimensional accuracy.
2) The values are valid for concrete members cast in situ. For concrete members produced in a factory the deviations may be considerably smaller.

The variability within a component (e.g. the variability of the cross section area along a beam) may be treated according to the same principles as presented for the weight density in Section 2.1.4.

Reference


CIB W81, Actions on Structures, Self weight, Report no. 115, Rotterdam


JCSS PROBABILISTIC MODEL CODE
PART 2: LOAD MODELS

2.2 LIVE LOAD

Table of contents:

2.2.1 Basic Model
2.2.2 Stochastic Model
2.2.3 Variations in Time
2.2.4 Load Parameters

List of symbols:

A = area [m2]
dp = duration of intermittent load [d]
i = influence function
m = mean load intensity [kN/m2]
p = intermittent load [kN/m2]
q = sustained load [kN/m2]
S = load effect [kN/m2]
T = reference time [year]
V = zero mean normally distributed variable [kN/m2]
W = load intensity [kN/m2]

λ = occurrence rate of sustained load changes [1/year]
ν = occurrence rate of intermittent load changes [1/year]


2.2.1 Basic Model

The live loads on floors in buildings are caused by the weight of furniture, equipment, stored objects and persons. Not included in this type of load are any structural or non-structural elements, partition walls or extraordinary equipment. The live load is distinguished according to the intended user category of the building, i.e. domestic buildings, hotels, hospitals, office buildings, schools and stores. At the design stage consideration should also be given to possible changes of use during the life-time. Areas dedicated to the storage of goods, materials, etc. must be treated separately. Live loads vary in time and space in a random manner. The spatial variations are assumed to be homogeneous in a first approximation. With respect to the variation in time, the load is divided into two components, the sustained load and the intermittent load.

The sustained load contains the weight of furniture and heavy equipment. The load magnitude according to the model represents the time average of the real fluctuating load. Changes are usually related to changes in use and of users in a building. Short term fluctuations are included in the uncertainties of this load part.

The intermittent load represents all kinds of live loads which are not covered by the sustained load. Typical sources are the gathering of people, crowded rooms during special events, or the stacking of furniture during remodelling. The relative duration of an intermittent load is fairly small.

2.2.2 Stochastic Model

The load intensity is represented by a stochastic field W(x,y), whereby the parameters depend on the user category of the building:

W(x,y) = m + V + U(x,y) (1)

where m is the overall mean load intensity for a particular user category, V is a zero mean normally distributed variable and U(x,y) is a zero mean random field with a characteristic skewness to the right. The quantities V and U are assumed to be stochastically independent.

The load effects calculated from the model shall describe the load effects caused by the real load with sufficient accuracy. For linear elastic systems, where superposition is possible, the load effect S is written as:

S = ∫A W(x,y) i(x,y) dA (2)

where W(x,y) is the load intensity and i(x,y) is the influence function for the load effect over a considered area A.

For non-linear structural response a stepwise linearity can be assumed, whereby the proposed relation for the load effect can be used in each step. The load intensity W is substituted by the step ∆W, and the influence function i(x,y) must reflect the total load situation, which results in a corresponding step ∆S for the load effect. When applying the theory of plasticity, the influence function is proportional to the deflection corresponding to the mechanism.

The equivalent uniformly distributed load for the sustained load per unit area is that load which has the same load effect as the original load field, i.e.


q = ∫A W(x,y) i(x,y) dA / ∫A i(x,y) dA (3)

The statistical parameters of the sustained load are:

E[q] = m
Var[q] = σV² + σU² κ A0/A (4)

whereby the factor κ is given in Figure 2.5.0.1 of Part 2.0. Note that for A < A0 one should take A0/A = 1.

The variable V describes the variability of sustained loads related to areas A1 and A2, which are assumed to be independent and non-overlapping. These areas can be either on the same floor or on different floors. The covariance between the corresponding loads q1 and q2 is given as:

Cov[q1, q2] = σV² (5)

The variable V is assumed to be normally distributed. The random field U(x,y) has a specific skewness to the right, and in consequence so have the load effect S and the sustained load q. A Gamma distribution for the sustained load fits the actual observations best, with parameters defined through the relations E[q] = k µU and Var[q] = k µU².

The load intensity for the intermittent load p is represented by the same stochastic field as the sustained load, whereby the parameters depend on the user category of the building. The intermittent load can generally be considered as a concentrated load. But, for design purposes, the same approach as for the sustained load is used. The duration of the intermittent load dp is considered as deterministic.

The equivalent uniformly distributed load for intermittent loads p has the same statistical properties as that for the sustained load and can be evaluated in the same manner. Generally, there is a lack of data for this load. The standard deviation normally takes values of the same order of magnitude as the mean value E[p] = µp. Therefore, the intermittent load is assumed to be exponentially distributed.

2.2.3 Variations in Time

The time between load changes is assumed to be exponentially distributed; the number of load changes is then Poisson distributed. The probability function for the maximum sustained load is given by:

Fqmax(x) = exp[-λT (1 - Fq(x))] (6)

where Fq(x) is the probability function of the sustained load, T is the reference time, like the anticipated lifetime of the building, and λ is the occurrence rate of sustained load changes. Thus λT is the mean number of occupancy changes.
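Combining eq. (4) with eq. (6), the distribution of the maximum sustained load can be sketched as below, using the Gamma marginal for q stated above. The office parameters are taken from Table 2.2.1; the tributary area A, the factor κ, the reference time T and the use of scipy for the Gamma CDF are assumptions of this sketch, not requirements of the code.

# Sketch: distribution of the maximum sustained office load over T years, eqs. (4) and (6).
import math
from scipy.stats import gamma   # used only for the Gamma CDF

def var_sustained(sig_v, sig_u, A0, A, kappa):
    """Eq. (4): Var[q] = sigma_V^2 + sigma_U^2 * kappa * A0/A (A0/A = 1 for A <= A0)."""
    return sig_v**2 + sig_u**2 * kappa * (A0 / A if A > A0 else 1.0)

def f_qmax(x, mean_q, var_q, lam, T):
    """Eq. (6): F_qmax(x) = exp(-lam*T*(1 - F_q(x))) with a Gamma marginal for q,
    using E[q] = k*mu_U and Var[q] = k*mu_U^2 to fix shape k and scale mu_U."""
    k, scale = mean_q**2 / var_q, var_q / mean_q
    return math.exp(-lam * T * (1.0 - gamma.cdf(x, k, scale=scale)))

# Office (Table 2.2.1): m_q = 0.5, sigma_V = 0.3, sigma_U = 0.6 kN/m2, A0 = 20 m2,
# occupancy changes every 5 years; A = 40 m2, kappa = 2.0 and T = 50 a are assumed.
vq = var_sustained(0.3, 0.6, 20.0, 40.0, 2.0)
print(f_qmax(1.5, mean_q=0.5, var_q=vq, lam=0.2, T=50.0))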

The maximum of the intermittent load is defined to occur as a Poisson process in time with the mean occurrence rate ν. The average duration of the intermittent load depends on the process, i.e. personnel, emergency or remodelling.


The maximum load which will occur in a building is a combination of sustained load and intermittent load. Assuming stochastic independence between both load types, the maximum load during one occupancy is obtained from the convolution integral. The total maximum load during the reference time T is obtained by employing extreme value theory.

In cases with a high share of sustained load the duration statistics become of interest, especially for creep and shrinkage problems. Generally, the intermittent load will then be of little interest. From the assumed extreme value distribution the statistical quantities of the excursion time τ over a certain level x can be derived:

E[τ(x)] = T (1 - Fq(x))
Var[τ(x)] = 2 T (1 - Fq(x)) / λ (7)

2.2.4 Load Parameters

The parameters listed in Table 2.2.1 are to be used in the live load model.


                                 Sustained load                          Intermittent load
Type of use          A0      mq      σV      σU      1/λ       mp      σU      1/ν      dp
                     [m2]    [kN/m2] [kN/m2] [kN/m2] [a]       [kN/m2] [kN/m2] [a]      [d]
Office               20      0.5     0.3     0.6     5         0.2     0.4     0.3      1-3
Lobby                20      0.2     0.15    0.3     10        0.4     0.6     1.0      1-3
Residence            20      0.3     0.15    0.3     7         0.3     0.4     1.0      1-3
Hotel guest room     20      0.3     0.05    0.1     10        0.2     0.4     0.1      1-3
Patient room         20      0.4     0.3     0.6     5-10      0.2     0.4     1.0      1-3
Laboratory           20      0.7     0.4     0.8     5-10      -       -       -        -
Libraries            20      1.7     0.5     1.0     >10       -       -       -        -
School classroom     100     0.6     0.15    0.4     >10       0.5     1.4     0.3      1-5
Merchant/retail:
  first floor        100     0.9     0.6     1.6     1-5       0.4     1.1     1.0      1-14
  upper floor        100     0.9     0.6     1.6     1-5       0.4     1.1     1.0      1-14
Storage              100     3.5     2.5     6.9     0.1-1.0   -       -       -        -
Industrial:
  light              100     1.0     1.0     2.8     5-10      -       -       -        -
  heavy              100     3.0     1.5     4.1     5-10      -       -       -        -
Concentration of
people               20      -       -       -       -         1.25    2.5     0.02     0.5

Table 2.2.1: Parameters for live loads depending on the user category.

References
[1] CIB W81, Actions on Structures - Live Loads in Buildings, Conseil International du Bâtiment pour la Recherche l'Etude et la Documentation (CIB), Report 116, Rotterdam, 1989.
[2] EC 1 - Part 2.1: Actions on structures - Densities, self-weight, imposed loads. Eurocode 1 - Basis of Design and Actions on Structures, Comité Européen de Normalisation (CEN), Pre-standard draft, Brussels, 1994.
[3] Rackwitz R: Live Loads in Buildings, Manuscript, unpublished, Munich, 1995.
[4] PMC Part 1: Basis of Design. Probabilistic Model Code - third draft, Joint Committee on Structural Safety (JCSS), 1995.


JCSS PROBABILISTIC MODEL CODE
PART 2: LOAD MODELS

2.6 LOADS IN CAR PARKS

Table of contents:

2.6.1 Basic Model
2.6.2 Stochastic Model

List of symbols:

i = influence coefficient
td = busy time per day [h]
ty = busy days per year [d]
L = weight of car [kN]
S = load effect
T = reference time [years]
N = number of parking places
λd = renewal rate [1/d]
τ = mean dwell time [h]


2.6.1 Basic Model

In car parks the loads on parking areas and drive ways may be distinguished. In general, the loads forregulated parking are dominating the loads for spatially free parking. Further, the entries and parking places aresuch that only certain categories of vehicles can use the facility. It is sufficient to distinguish between facilitiesfor light vehicles like normal passenger cars, station wagons and vans and for heavy vehicles like trucks andbusses. For each parking facility it can conservatively be assumed that the vehicles form an independentsequence each vehicle with random weight remaining the same at arrival and when leaving the place. At thebeginning of the busy periods it can conservatively be assumed that parking places left by a car willimmediately be occupied by another car. Thus the loading process due to vehicles is a rectangular waverenewal process.

2.6.2 Stochastic Model

With respect to the temporal fluctuations one can distinguish the following usage categories for lightvehicles:• car parks belonging to residential areas• car parks belonging to factories, offices etc.• car parks belonging to commercial areas• car parks belonging to assembly halls, sport facilities etc.• car parks connected with railway stations airports etc.

The temporal fluctuations are summarized in table 1. For parking facilities for heavy vehicles similardistinctions can be made.

The mean weight of light vehicles can be assumed to be about E[L] ≈ 15 kN with coefficient ofvariation of 15 to 30% depending on the usage of the parking facility and the traffic mixture. The parking placecovers an area of about 2.4 ⋅ 5.0 m2. A normal distribution can be assumed. In general, light vehicles can bemodeled by point loads located in the middle of the parking places.

Location ofcar park

Busy daysper year

ty[d]

Busy timesper day

td[h]

Mean dwelltimeτ[h]

Number of carsper dayλd[1/d]

Commercial areas 312 84

2.4 3.2

Railway stationsAirports

30 14-18 10-14 1.3

Assembly halls 50-150 2.5 2.5 1.0Office factories 260 8-12 8-12 1.0Residential areas 360 17 8 2.1

Table 1: Typical temporal fluctuations in car parks

Calculation of load effects has to take proper account of influence functions according to

∑=

=n

ijjjLi)t(S (1)

Page 99: Probabilistic Model Code Jcss Pmc2000

99-CON-DYN/M0037Februari 2001

34

If the negative parts of the influence functions can be neglected the distribution of extreme load effects can becomputed from

≥λ−≈ ∑

=

n

1jjjdydSmax xLiTPttexp)x(F (2)

with

[ ]

[ ]

−Φ≈

>

∑∑

=

=

=2/1

n

1jjj

n

1jjj

n

1jjj

LVari

LEix

xLiP (3)

T is the reference time. On driveways where only one vehicle determines the load effect one has

( )

−Φ−λ−≈2/1ydSmax

]L[Var

]L[Ex1TNtexp)x(F (4)

where N ist the numer of parking places associated with the drive way.

References

CIB W81, Actions on Structures: Live Load in Buildings, Rep. N0. 116, Rotterdam, 1989

Page 100: Probabilistic Model Code Jcss Pmc2000

99-CON-DYN/M0037Februari 2001

35

JCSS PROBABILISTIC MODEL CODEPART 2: LOAD MODELS

2.12 SNOW LOAD

Table of contents:

2.12. Snow Load2.12.1 Basic Model for Snow Load on roofs2.12.2 Probabilistic Model for Sg

2.12.3 Conversion ground to roof snow load2.12.3.1 General2.12.3.2 The exposure coefficient Ce

2.12.3.3 The thermal coefficient Ct

2.12.3.4 The redistribution coefficient Cr

List of symbols:

Ce = exposure coefficientCr = redistribution (due to wind) coefficientCt = deterministic thermal coefficientd = snow depthh = altitute of the building sitehr = reference altitudek = coefficient for altitude conversionr = conversion factor of snow load on ground to snow load on roofsSr = snow load on the roofSg = snow load on ground at the weather stationγ (d) = average weight density of the snow for depth d

ηa = shape coefficient

Page 101: Probabilistic Model Code Jcss Pmc2000

99-CON-DYN/M0037Februari 2001

36

2.12 SNOW LOAD

2.12.1. Basic Model for Snow Load on roofs

The snow load on roofs, Sr , is determined by the relation

r g/

S = S r k h hr (1)

where

Sg is the snow load on ground at the weather stationr is a conversion factor of snow load on ground to snow load on roofs (see 2.12.3).h is the altitute of the building sitehr is a reference altitude (= 300 m)k is a coefficient: k = 1.25 for coastal regions, k = 1.5 for inland mountainous regions

The snow load Sr acts vertically and refers to a horizontal projection of the area of the roof. Sg is timedependent but not space dependent within a specified region with similar climatic conditions and withapproximately the same altitude.

The characteristics of the ground snow load Sg should be determined on the basis of observations fromweather stations. The results of such observations are either water-equivalents of snow or depths of snow. In thefirst case the values can be used directly to determine the ground snow load. In the second case the data on snowdepth must be converted to snow load by the relation

gS = d (d)γ (2)

where

d is the snow depthγ (d) is the average weight density of the snow

The density γ (d) follows from:

γ λγ γγ λ

(d) =( )

ln( )

( )[exp( ) ]

∞ +∞

d

d1

01 (3)

where

γ(∞) = 5 kN/m3, γ(0) = 1.70 kN/m3 and x = 0.85 m

Page 102: Probabilistic Model Code Jcss Pmc2000

99-CON-DYN/M0037Februari 2001

37

2.12.2. Probabilistic model for Sg

A probability model of the ground snow load Sg is presented by:

- a probability distribution function for the total duration T of the load- a probability distribution function for the maximum load Sgmax within one year.

The distribution function Fsg max, its mean µ and its coefficient of variation V are denoted as:

for maritime climate : Fs1 , µ1 , V1

for continental climate : Fs2 , µ2 , V2

The probability distribution functions in these two cases are gamma distributions. The parameters shouldbe based on local observations. As prior distribution a vague prior should be used. In some cases data from"similar stations" can be used as prior with n' = 3 and ν' = 2.

In those cases when the climate is a mixture of maritime and continental climate, a part p of theobservations are associated with a continental climate and a part 1-p with a maritime climate. The combinedprobability distribution function F for the mixed climates can then be written as s s1 s2F = (1- p) F + p F .

2.12.3. Conversion ground to roof snow load

2.12.3.1 General

The conversion factor r is subdivided into a number of factors and terms according to the expression

r Ct= C + Ca e rη (6)

where

ηa is a shape coefficient, a random variable according to 2.12.3.2Ce is a deterministic exposure coefficient according to 2.12.3.2Ct is a deterministic thermal coefficient according to 2.12.3.3Cr is a redistribution (due to wind) coefficient, a random variable according to 2.12.3.4. If redistribution is not

taken into account Cr = 0

Page 103: Probabilistic Model Code Jcss Pmc2000

99-CON-DYN/M0037Februari 2001

38

2.12.3.2 The exposure coefficient Ce and shape factor ηηηηa

The exposure coefficient, Ce and the shape factor ηa are a reduction coefficients taking account of theexposure to wind of a building and the slope of the roof α:

α = 0o Ceηa = 0.4 + 0.6 exp (-0,1 u(H))α = 25o Ceηa = 0.7 + 0.3 exp (-0,1 u(H))α = 60o Ceηa = 0

u(H) is the wind speed, averaged over a period of one week, at roof level H.For intermediate values of α linear interpolation should be used.

2.12.3.3 The thermal coefficient Ct

The thermal coefficient, Ct , accounts for the reduction of snow load on roofs with high thermaltransmittance, in particular glass covered roofs. Ct is equal to 1.0 for buildings which are not heated and forbuildings where the roofs are highly insulated. A value of 0,8 shall be used for most other cases.

2.1.3.4 The redistribution coefficient Cr

The redistribution coefficient, Cr , takes account of the redistribution of the snow on the roof caused bywind but in some cases also by other causes.

For monopitch roofs the redistribution of snow load may be neglected.

For symmetrical duopitch roofs the coefficient Cr is assumed to be constant and equal to ± Cro for eachhalf of the roof according to FIG 1. Cro has a β-distribution with µ(Cro) according to FIG 2; the coefficient ofvariation of Cr is equal to 1.0. For other types of roofs the numerical values given in ENV 1991-2-3 and ISO 4355shall be used. These values can assumed to correspond to the mean value plus one standard deviation.

Page 104: Probabilistic Model Code Jcss Pmc2000

99-CON-DYN/M0037Februari 2001

39

Figure 1: The redistributed snow load on a duopitch roof

Figure 2: Cro as function of the roof angle

Cro

Cro

ρ2

ρ1 = CeCtηa + Cro

ρ2 = CeCtηa - Cro

CeCtηa

ρ1

0 20° 40° 60°

0.15

0.10

0.05

0 α

Cro

Page 105: Probabilistic Model Code Jcss Pmc2000

99-CON-DYN/M0037Februari 2001

40

Summary of snow load variables

X designation distribution mean scatterSg

dg

snow depth on the grund snowloadon the ground

gamma observation1) observation1)

ρ climate type parameter det observation observationk parameter det 1.5/1.25 m -hr reference height det 300 m -γ(0) unit weight at t = 0 det 1,7 kN/m3 -γ(∞) unit weight at t = ∞ det 5.0 kN/m3 -λ parameter det 0.85 m -Ceηa shape coefficient beta 2.13.3.2 V = 0.15Ct insulation parameter det 0.8-1.0 -Cro redistribution coefficient beta Fig. 2 V = 1.0

1) Data from similar stations can be used as prior with n' = 3 and ν' = 2.

Page 106: Probabilistic Model Code Jcss Pmc2000

99-CON-DYN/M0037Februari 2001

41

JCSS PROBABILISTIC MODEL CODEPART 2 : LOADS

2.13 WIND

Table of contents:

2.13.1 Introduction2.13.2 Wind forces2.13.3 Mean wind velocity2.13.4 Terrain roughness (category)2.13.5 Variation of the mean wind with height2.13.6 Intensity of turbulence2.13.7 Power spectral density and autocorrelation function of gustiness2.13.8 Coherence function2.13.9 Peak velocities2.13.10 Mean velocity pressure and the roughness factor2.13.11 Gust factor for velocity pressure2.13.12 Exposure factor for peak velocity pressure2.13.13 Aerodynamic shape factors2.13.14 Uncertainties consideration

Related Literature and References

List of symbols:

fc = Coriolis parameter (= 2Ω sin φ)f0 = mean frequency of zero up crossing, in Hzg = peak factor (no dimension)Gu(n), Gv(n), Gw(n) = half-sided power spectral density for longitudinal, transversal and

vertical components of velocity fluctuationsIu(z) = turbulence intensity of longitudinal velocity fluctuations (dimensionless)k = von Karman`s constant (= 0.4)Lu(z) = integral length scale for longitudinal velocity fluctuations, in mLv(z) = integral length scale for transversal velocity fluctuations, in mLw(z) = integral length scale for vertical velocity fluctuations, in mN = number of reference time, in yearsn = frequency, in Hertznu,n v, nw = dimensionless frequency of fluctuations in longitudinal, transversal and

vertical directionQref = reference wind velocity pressure

Page 107: Probabilistic Model Code Jcss Pmc2000

99-CON-DYN/M0037Februari 2001

42

List of symbols:

Q z( ) = mean velocity pressure at height z (=(1/2) ρ U z2( ) )Sij(n) = cross spectral power densityT = reference time

T up( ) = mean recurrence interval of maximum annual mean velocity, in years

Uref = reference wind velocity, in m/s

U z( ) = mean longitudinal velocity of the wind at height zu1 = mode of the maximum annual mean wind speed in Gumbel distributionu(x,z,t)=u = longitudinal component of the wind velocity fluctuations, in m/sv(y,z,t)=v = transversal component of wind velocity fluctuations, in m/sw(z,t)=w = vertical component of wind velocity fluctuations, in m/sz = height above ground, in mz0 = roughness length, in mzr = a reference height above ground, in mzref = the reference height above ground (10 - 30 m)α1 = dispersion parameter for the maximum annual mean wind speed in

Gumbel distributionδ = height of the atmospheric boundary layerκ = surface drag coefficient (dimensionless) (=[k/ln(zref/z0)]

2)λk = k-th moment of spectral densityν(x) = mean upcrossing rate for level xφ = geographical latitudeρ = air density (= 1.25 kg/m3)σu, σv,σw = standard deviation of velocity fluctuations in x-, y- and z-direction, in m/s

Page 108: Probabilistic Model Code Jcss Pmc2000

99-CON-DYN/M0037Februari 2001

43

2.13.1 Introduction

Wind effects on buildings and structures depend on the general wind climate, the exposure ofbuildings, structures and their elements to the natural wind, the dynamic properties, the shape and dimensionsof the building (structure). The section presents basic data and procedures for the estimation of wind loads onbuildings and structures. Tropical cyclones, tornados, thunderstorms and orographic wind phenomena requireseparate treatment.

The field of wind velocities over horizontal terrain is decomposed into a mean wind (average over 10minutes) in the direction of general air flow (x-direction) averaged over a specified time interval and afluctuating, turbulent part with zero mean and components in the longitudinal (x) direction, the transversal (y-)direction and the vertical (z-) direction

2.13.2 Wind forces

The wind force acting per unit area of structure is determined with the relations:

(i) For rigid structures of smaller dimensions:

w = c c c c ca g r a eQ Qref ref= (1)

(ii) For structures sensitive to dynamic effects (natural frequency < 1Hz) and for large rigid structures:

w = c c cd a e Qref (2)

where:Qref = the reference (mean) velocity pressurecr = roughness factorcg = gust factorca = aerodynamic shape factorcd = dynamic factor.

2.13.3 Mean wind velocity

The reference wind velocity, U ref is the mean velocity of the wind averaged over a time interval of 10min = 600 s, determined at an elevation of 10 m above ground, in horizontal open terrain exposure (z0 = 0.03m).1

The distribution of the mean wind velocities (for any terrain category, height above ground andaveraging time interval) is the Weibull distribution:

F xx

U

k

( ) exp= − −

11

2 σ(3)

with k close to 2.

1 For other than 10 min averaging intervals, in open terrain exposure, the following relationships maybe used: 105 10 0 67. . .U U 0.84U U1h 10min 1min(fastest mile) 3sec= = = .

Page 109: Probabilistic Model Code Jcss Pmc2000

99-CON-DYN/M0037Februari 2001

44

The same distribution is valid for direction dependent mean wind flows. Generally, it can not beassumed that the mean wind direction is uniformly distributed over the circle.

Mean wind velocities vary over the year. If no data are available it can be assumed in the northern hemispherethat σ(t) ≈ σ[1+ a cos(2π(t-t0)/365] with the constant a between 1/3 and 1/2 and t0 ≈ 15 to 45, with t in days.

The mean wind velocities are highly autocorrelated. Mean wind velocities with separation of about 4 to12 (8 on average) hours can be considered as independent in most practical applications.

If seasonal variations are neglected, the mean period the mean wind velocities are between levels x1

and x2 ( )x x2 1≥ is asymptotically

[ ]E T T F x F xx x U U1 2 2 1, [ ( ) ( )]= − (4)

with T the reference time. For higher levels of x2 the distribution of individual times above x is approximately[ ( )] / ( )1− F x x

Uν with ν( )x the mean upcrossing rate for level x.

The maximum mean wind speeds for longer periods follows a Gumbel distribution for maxima.Generally, it is not possible to infer the maxima over more years from observations covering only a few years.If the annual maxima are used, provided that the maximum annual data are homogenous as exposure andaveraging time, the distribution function is:

F x x uUmax

( ) exp exp[ ( )]= − − −α1 1 (5)

The mode u and the parameter α1 of the distribution are determined from the mean m1 and the standarddeviation σ1 of the set of maximum annual velocities: u = m1 1 − 0 577 1. / α , α σ1 11282= . / . The coefficient

of variation of maximum annual wind speed, V1 = σ1 / m1 depends on the climate and is normally between 0.10and 0.35. For reliable results, the number of the years of available records must be of the same order ofmagnitude like the required mean recurrence interval.

The lifetime (N years) maxima of wind velocity is also Gumbel distributed and the mean and thestandard deviation of lifetime maxima are functions of the mean and of the standard deviation of annualmaxima: mN + 0.78 σ1 ln N, σN = σ1.

Under special climatic conditions, the distribution of mean wind speeds is a mixed distributionreflecting different meteorological phenomena.

For load combination purposes it is proposed to model storms, for example those wind regimes where amean velocity > 10 m/s lasts for some time, as an intermittent rectangular wave renewal process. The numberof storms per year is approximately 50 corresponding to the frequency with which weather systems pass by, atleast in middle Europe. The mean duration of the storm is approximately 8 hours. Consecutive storms areindependent. The representative mean wind velocity in a storm can also be modeled by a Weibull distribution.The exponent of the Weibull distribution should be around 2. The location parameter should be based on localdata.

2.13.4 Terrain roughness (category)

The roughness of the ground surface is aerodynamically described by the roughness length z0, which isa measure of the size and spacing of obstacles on the ground surface. Alternatively, the terrain roughness canbe described by the surface drag coefficient, κ corresponding to the roughness length z0:

Page 110: Probabilistic Model Code Jcss Pmc2000

99-CON-DYN/M0037Februari 2001

45

0

2

lnz

zk

ref

=κ (6)

where k ≅ 0.4 is von Karman´s constant and zref is the reference height (Table 2, Table 3). Various terraincategories are classified in Table 1 according to their approximate roughness lengths. The distribution of thesurface roughness with wind direction must be considered.

Table 1. Roughness length z0, in meters, for various terrain categories 1) 2)

Terraincategory

Terrain description Range of z0 Recom-mendedvalue

A. Open sea.Smooth flatcountry

Areas exposed to the wind coming from large bodies ofwater; snow surface;Smooth flat terrain with cut grass and rare obstacles.

0.0001|

0.00050.003

B. Open country High grass (60 cm) hedges, and farmland with isolatedtrees;Terrain with occasional obstructions having heights lessthan 10 m (some trees and some buildings)

0.01|

0.10.03

C. Sparsely built-upurban areas.Wooded areas

Sparsely built-up areas, suburbs, fairly wooded areas(many trees)

0.1|

0.70.3

D. Densely built-upurban areas.Forests

Dense forests in which the mean height of trees is about15m;Densely built-up urban areas; towns in which at least15% of the surface is covered with buildings havingheights over 15m

0.7|

1.21.0

E. Centers ofvery largecities

Numerous large high closely spaced obstructions: morethan 50% of the buildings have a height over 20m

1.0≥ 2.0 2.0

1) Smaller values of z0 provoke higher mean velocities of the wind2) For the full development of the roughness category, the terrains of types A to D must prevail in the up winddirection for a distance of at least of 1000m, respectively. For category E this distance is more than 5 km.

2.13.5 Variation of the mean wind with height

The variation of the mean wind velocity with height over horizontal terrain of homogenous roughnesscan be described by the logarithmic law. The logarithmic profile is valid for moderate and strong winds (meanhourly velocity > 10 m/s) in neutral atmosphere (where the vertical thermal convection of the air may beneglected).

U(z)1

ku ln

z

z

z-1.33 + 0.25*

0

2 3 4

= + −

( ) . .z

z z z0 5 75 187

δ δ δ δ(z > d0 >> z0) (7)

where:

Page 111: Probabilistic Model Code Jcss Pmc2000

99-CON-DYN/M0037Februari 2001

46

u(z0) =

052

z

zln.

)z(U= friction velocity in m/s

δ =c

*

f

)z(u

60 = depth of boundary layer in m

U z( ) = mean velocity of the wind at height z above ground in m/sz = height above ground in mz0 = roughness length in mk = von Karman’s constant (k ≅ 0.4d0 = the lowest height of validity of Eq.(7) in mfc = 2Ωsin(φ) = Coriolis parameter in 1/sΩ = 0.726 10-4 = angular rotation velocity in rad/sφ = latitude of location in degree

For lowest 0.1 δ or 200m of the boundary layer only the first term needs to be taken into account(Harris and Deaves, 1981). The lowest height of validity for Eq.(7), d0, is close to the average height ofdominant roughness elements : i.e. from less than 1 m, for smooth flat country to more than 15 m, for centers ofcities. For z0 ≤ z ≤ d0 a linear interpolation is recommended. In engineering practice, Eq.(7) is conservativelyused with d0 = 0.

With respect to the reference (open terrain) exposure, the relation between wind velocities in twodifferent roughness categories at two different heights can be written approximately as (Bietry, 1976, Simiu,1986):

07.0

,0

0

,0

0

ln

ln)(

=

ref

ref

refref z

z

z

zz

z

U

zU. (8)

At the reference height zref, the ratio of the mean wind velocity in various terrain categories to the meanwind velocity in open terrain is given by the factor p in Table 2. The corresponding ratio for the mean velocitypressure is p2 .

Table 2. Scale factors for the mean velocity (and the mean velocity pressure) at reference height in variousterrain exposure

Terraincategory

A. Open sea.Smooth flat

country

B. Open country C. Sparselybuilt-up

urban areas.Wooded areas

D. Denselybuilt-up

urban areas.Forests

E. Centers oflarge cities

zref, m 10 10 10 15 30p 1.19 1.00 0.71 0.56 0.39

2.13.6 Intensity of turbulence

The turbulent fluctuations of the wind velocity can be assumed to be normally distributed with meanzero. The root mean squared value of the velocity fluctuations in the airflow, deviating from the longitudinalmean velocity, may be normalised to the friction velocity as follows:

Page 112: Probabilistic Model Code Jcss Pmc2000

99-CON-DYN/M0037Februari 2001

47

σβ

δu

uu

z

*

= −

1 Longitudinal (9a)

σβ

δv

vu

z

*

= −

1 Transversal (9b)

σβ

δw

wu

z

*

= −

1 Vertical . (9c)

The approximate linear variation with height (Hanna, 1982) can be used only in moderate and strongwinds. For neutral atmosphere, the ratios σv/σu and σw/σu near the ground are constant irrespective theroughness of the terrain (ESDU 1993):

σσ

πδ

v

u

z= −

1 0 25

24. cos (10a)

σσ

πδ

w

u

z= −

1 055

24. cos (10b)

For z<<δ the variance of the velocity fluctuations can be assumed independent of height above ground :

σ βu u u= * (11a)

σ βv v u= * (11b)

σ βw w u= * (11c)

and, for z < 0.1 δ:σσ

v

u

≅ 0 75. (12a)

σσ

w

u

≅ 0 50. (12b)

The variance of the longitudinal velocity fluctuations can also be expressed from non-linear regressionof measurement data, as function of terrain roughness (Solari, 1987):

4 5 4 5 0 856 7 52. . . ln .≤ = − ≤β u z0 (13)

The longitudinal intensity of turbulence is the ratio of the root mean squared value of the longitudinalvelocity fluctuations to the mean wind velocity at height z (i.e. the coefficient of variation of the velocityfluctuations at height z :

( ) ( )( ) ( )I z

u z, t

U z U zuu= =

21 2/

( )σ z(14)

The turbulence intensity at height z can be approximated by:

Page 113: Probabilistic Model Code Jcss Pmc2000

99-CON-DYN/M0037Februari 2001

48

I zz

z

z

z

u( ). ln ln

= ≈β

2 5

1

0 0

(15)

The transversal and vertical intensities of turbulence can be determined by multiplication of thelongitudinal intensity Iu(z) by the ratios σv/σu and σw/σu. Representative values for intensity of turbulence at thereference height are given in Table 3.

Table 3: Wind parameters depending on terrain category

Terraincategory

A. Open sea.Smooth flat

country

B.Open country C. Sparselybuilt-up

urban areas.Wooded areas

D. Denselybuilt-up

urban areas.Forests

E. Centers oflarge cities

z0 [m] 0.01 0.05 0.3 1.0 2.0d0 [m] 0 2 8 15 30

κ 0.0024 0.0047 0.013 0.022 0.022βu 3.1 2.7 2.3 2.1 2.0βv 2.3 2.1 1.8 1.6 1.5βw 1.55 1.35 1.15 1.05 1.0

zref [m] 10 10 10 15 30I(zref) 0.15 0.19 0.26 0.31 0.39

2.13.7 Power spectral density and autocorrelation functions of gustiness

The normalised half-sided von Karman power spectral densities and autocorrelation functions of gustvelocity are given in Table 4.

Table 4. The von Karman model of isotropic turbulence

Component of gustvelocity

Normalised spectral densitynG ni

i

( )

σ 2

Normalised autocorrelation functionρi(τi)

LongitudinalI = u ( )

4

1 70 8 2 5 6

n

n

u

u+ ./ ( ) ( )2

1 3

2 31/3

1/3

/

/Γτ τu uK

TransversalI = v

Verticali = w

( )( )

2 1 188 6

1 708

2

2 11 6

n n

n

i i

i

+

+

.

./

2

1 3

1

2

2 31 3

1 3 2 3

//

/ /( / )( ) ( )

Γτ τ τ τi i i iK K−

The notations in Table 4 are as follows:

σ i2 = variance of velocity fluctuations in direction i, in m2/s2; i = u, v or w

ni = ni(z) =n L z

U zi ( )

( )= is a non-dimensional height dependent frequency

n = frequency, in Hertz

Page 114: Probabilistic Model Code Jcss Pmc2000

99-CON-DYN/M0037Februari 2001

49

U z( ) = longitudinal mean velocity at height z, in m/sLi(z) = length of integral scale of turbulence in direction i, in m/s.

τ τi

i

U z

aL z= ( )

( )= non-dimensional time (a = 1.339)

Kµ ( ) = modified Bessel function of second kind of order µτ = time lag, in s

The integral length scale of turbulence in direction i at the height z is:

Li(z) = U(z) ρ τ τi i id( )0

∫ (16)

where the autocorrelation ρi(τi) is the Fourier transform of spectral density. An estimation of the length of theintegral scale of longitudinal turbulence, for heights up to 300 m is given by ESDU (1993), as:

Lu(z) =A u z

K z h z hu

z

3 2 3

3 2 22 5 1 1 5 75

/*

/

( / )

. ( / ) ( . / )

σ− +

(17)

where

A = 0.115 1 0 315 16 2 3

+ −

.

/z

δKz = 0.188[1-(1-z/zc)

2]1/2

zc/δ = 0.39u

f zc

*

/

0

1 8

For the lateral and vertical direction (ESDU, 1993):

Lv(z) = 0.5 (σv/σu)3 Lu(z) (18a)

Lw(z) = 0.5 (σw/σu)3 Lu(z) (18b)

Lv(z) ≅ 0.24 Lu(z) (18c)Lw(z) ≅ 0.08 Lu(z) (18d)

2.13.8 Coherence functions

The cross-spectral density for two separated points P1 and P2 with distance r perpendicular to directioni are given in terms of the point spectra and the coherence function by:

( ) ( ) ( ) ( )S n P P S n P P S n P P Coh n P Pij ii jj ij, , , , , , , ,/ / /1 2

1 21 2

1 21 2

1 21 2≈ ⋅ (19)

with:

Longitudinal ( ) ( ) ( ) ( )[ ] 516165

65

21 151265

2 .uu/uu/

/u

/uu .exp(KK

/k,rCoh ψψψψ

Γ

ψ

−≈−

= (20a)

Page 115: Probabilistic Model Code Jcss Pmc2000

99-CON-DYN/M0037Februari 2001

50

Transversal ( ) ( ) ( ) ( )( )

( ) ).exp(Kkr

krK

/k,rCoh .

vv/v

v

v/

/v

/vv

316122

2

65

65

21 65053

62

65

2 ψψψψ

ψΓ

ψ

−≈

++

=

(20b)

Vertical ( ) ( ) ( ) ( )( )

( ) ).exp(Kkr

rLK

/k,rCoh .

ww/w

w

w/

/w

/ww

316122

2

65

65

21 65053

62

65

2 ψψψψ

ψΓ

ψ

−≈

+−

=

(20c)

where kn

Um

=2π

and ( )ψ i ir k r L2 2 2 2 2= + / . All coherence function ( )Coh n P Pij1 2

1 2/ , , with i≠j can be

assumed to vanish.

The longitudinal coherence can also be approximated by (Kareem, 1987):

( )Coh n rr

L

nr

U

r

zuuu m m

1 2

2 2 2 1 2

1211/

/

, exp≈ −

+

+

(21)

implying a coherence coefficient of C r zm= +12 11 / and where

z z zm = 1 2

U U z U zm = 1 1 2 2( ) ( ) .

For structures of small dimension, i.e. r much smaller than Lu, r can be taken as zero.

2.13.9 Peak velocities

Spectral moments, λi of higher than the i = 0 order formally do not exist for turbulence spectra

(including von Karman and other spectra) fulfilling the Kolmogorov asymptote (asymptotic f −5 3/ behaviour).However, for high frequencies the spectra fall off more rapidly so that truncation of these spectra at frequenciesof 5÷20 Hz makes them finite. Also, filtering by finite areas on which the wind blows removes thismathematical inconvenience. Then, the distribution of extreme gust velocities, umax is asymptotically a Gumbeldistribution with mean:

[ ] ( )E u t t tmax , , ln / lnλ λ ν γ ν λ0 2 0 0 02 2= + (22)

and variance:

[ ]Var u t tmax , , [( / ) / ln ]λ λ π ν λ0 22

0 06 2= (23)

Page 116: Probabilistic Model Code Jcss Pmc2000

99-CON-DYN/M0037Februari 2001

51

γ=0.5772 is Euler´s constant, t = 600 s and ν0 is the mean frequency of zero upcrossing, in Hz:

ν λ λ0 2 0= / . (24)

The mean and standard deviation of the random peak factor for gust velocities, g are defined as:

ttg 00 ln2/577.0ln2 ν+ν= (25)

σg =π

ν6

1

2 0ln t(26)

The calculation of g from turbulence spectra is sensitive to the choice of cut-off frequency (5-20 Hz).Empirically and theoretically one can assume that the mean of g is about 3.2 for 1 hour (3.8 for 8 hours) and itsstandard deviation about 0.4. Since the fluctuating velocity pressure is a linear function of fluctuating velocityof gusts, the above values of g and σg also apply to the peak pressure.

2.13.10 Mean velocity pressure and exposure factor

The mean wind velocity pressure 2) at height z is defined by:

Q(z) 2= 1

2ρU z( ) (27)

where ρ is the air density (ρ=1.25 kg/m3 for standard air).

The coefficient of variation of the maximum annual velocity pressure is approximately the double ofthe coefficient of variation of the maximum annual velocity, V1 : VQ ≅ 2 V1 .

The roughness factor describes the variation of the mean velocity pressure with height above groundand terrain roughness as function of the reference velocity pressure. From Eq.(13) one gets:

c zQ z

Q

U z

U

z

z

z

z

z

zrref ref

ref

ref

ref

( )( ) ( )

lnln

,

.

,

= = =

2

2

0

0 07

0

2

0

2

(28)

and Q(z) = cr(z) Qref (29)

2.13.11 Gust factors for velocity pressure

2 Conversion of the open country velocity pressure for different averaging time intervals can be guidedby the following values obtained from Section 2.13.2:

11 0 7 0 44. . .Q Q Q Q1h 10min 1min(fastest mile) 3s= = =

Page 117: Probabilistic Model Code Jcss Pmc2000

99-CON-DYN/M0037Februari 2001

52

The gust factor for velocity pressure is the ratio of the peak velocity pressure to the mean velocitypressure of the wind:

( )( )

( )( )

( ) ( )[ ]c zq z

Q z

Q z g

Q zg V g I zg

peak q

q u= =+ ⋅

= + ⋅ = + ⋅σ

1 1 2 (29)

where:Q(z) = the mean velocity pressure of the wind

σq =212 /

)t,z(q = the root mean squared value of the longitudinal velocity pressurefluctuations from the mean

VQ = coefficient of variation of the velocity pressure fluctuations (approximately equalto the double of the coefficient of variation of the velocity fluctuations):

VQ ≅ 2 I(z)g = the peak factor for velocity pressure.

Approximately, the longitudinal velocity pressure fluctuation, q(z,t) is a linear function of the velocityfluctuation. Since:

[ ]1

2

1

2

1

2

1

22 2 2 2 2ρ ρ ρ ρ ρ ρU z u z t U z U z u z t u z t U z U z u z t( ) ( , ) ( ) ( ) ( , ) ( , ) ( ) ( ) ( , )+ = + + ≅ +

it is:

Q z U z

q z t U z u z t

( ) ( )

( , ) ( ) ( , )

=

1

22ρ

ρ

and consequently, the mean value and the standard deviation of the peak factor for 10 min. velocity pressureare the same like that for the gust velocity g ≅ 3.2 and σg ≅ 0.4. The values of the peak factor depend on theaveraging time interval of the reference velocity.3

2.13.12 Exposure factor for peak velocity pressure

The peak velocity pressure at the height z above ground is the product of the gust factor: the roughnessfactor and the reference velocity pressure;

Qg(z) = cg(z) cr(z) Qref (30)

The exposure factor is defined as the product of the gust and roughness factors:

ce(z) = cg(z) cr(z). (31)

3 Since: ( ) ( ) ( )1hrefr

h1g

10minrefr

min10g

1minrefr

min1gpeak QccQcc=Qcc=q = from Section 2.13.8, the following

approximate relations hold: c cg1 min

g10 min= 0 7. = h1

gc

Page 118: Probabilistic Model Code Jcss Pmc2000

99-CON-DYN/M0037Februari 2001

53

2.13.13 Aerodynamic shape factors

The aerodynamic shape factor, ca is the ratio of the aerodynamic pressure exerted by the wind on thesurface of a structure and its components to the velocity pressure. The aerodynamic pressure is acting normal tothe surface. By convention ca is assumed positive for pressures and negative for suctions.

As the pressure exerted on a surface is not uniformly distributed over the whole area of the surface oron the different faces of a building, the aerodynamic coefficients should be assessed separately for the differentparts and faces of a building.

The aerodynamic shape factors refer either to the mean pressure or to the peak pressure of the wind.

The shape factors are dependent on the geometry and the dimensions of building, the angle of attack ofthe wind i.e. the relative position of the body in the airflow, terrain category, Reynolds number, etc.

In certain cases the aerodynamic factors for external pressure must be combined with those for internalpressure.

There are two different approaches to the practical assessment of wind effects on rigid structures: usingpressure coefficients and using force coefficients.• In the former case the wind force is the result of the summation of the aerodynamic forces normal to acertain surface. It is intended for parts of the structure.• In the later case, the wind force is the product of the velocity pressure multiplied by the overall forcecoefficient times the frontal area of the building. This approach is used within the procedures for calculatingthe structural response.

Typical values of the aerodynamic shape factors can be selected from appropriate national andinternational documents or from wind tunnel tests. The aerodynamic shape factors should be determined inwind tunnels capable of modelling the atmospheric boundary layer.

2.13.14 Uncertainties consideration

The factors involved in the assessment of the wind forces on structures contain uncertainties.

The mean and the coefficient of variation of the wind forces expressed through the product ofuncorrelated variables in Eq.(1) or Eq.(2) may be written as follows:

E(w) = E(cg) E(ca) E(cr) E(Qref) (32)

V V V V Vw c c c Qg a r ref

2 2 2 2 2= + + + (33)

and

E(w) = E(cd) E(ca) E(cr) E(Qref) (34)

V V V V Vw c c c Qd a r ref

2 2 2 2 2= + + + . (35)

Statistics of the above factors are suggested in Table 5.

Page 119: Probabilistic Model Code Jcss Pmc2000

99-CON-DYN/M0037Februari 2001

54

Table 5 Statistics of random variables involved in the assessment of the wind loading

VariableRatio

Expected

Computed

Coefficient of variation,V

Reference

Qref

cr

ca - pressure coefficients- force coefficients

cg

cd

Structure period- small amplitudes- large amplitudes

Structure damping- small amplitudes- large amplitudes

0.80.81.01.01.01.0

0.851.15

0.81.2

0.2 - 0.30.1 - 0.20.1 - 0.30.1 - 0.150.1 - 0.150.1 - 0.2

0.3 - 0.350.3 - 0.35

0.4 - 0.60.4 - 0.6

Davenport,1987

Vanmarcke,1992

Generally, but not necessarily, the lognormal distribution is the recommended probability distribution functionfor each of the partial factors involved in Eq. (32) and Eq. (34).

Relevant Literature and References

Arya S.P., 1993. Atmospheric boundary layer and its parametrization. Proceedings of the NATO Advanced Study Instituteon Wind Climate in Cities, Waldbronn, Germany, July 5-16, Kluwer Academic Publishers, Dordrecht/Boston/London,p.41-66ASCE 7-93, 1993 and Draft of ASCE7-95, 1995. Minimum design loads for buildings and other structures. AmericanSociety of Civil Engineers, New YorkCIB W81 Commission, 1994. Actions on structures. Wind loads, 6th draft, MayDavenport N.G., 1995. The response of slender structures to wind. In the wind climate and cities. Kluwer AcademicPublishers, p.209-239Davenport A.G., 1987. Proposed new international (ISO) wind load standard. High winds and building codes. Proceedingsof the WERC/NSF Wind engineering symposium. Kansas City, Missouri, Nov., p.373-388Davenport A.G., 1967. Gust loading factors. Journal of the Structural Division, ASCE, Vol.93, No.3, p.1295-1313Davenport A.G., 1964. Note on the distribution of the largest value of a random function with application to gust loading.Proceedings. Institution of Civil Engineering, London, England, Vol. 28 June, p.187-195Davenport A.G., 1961. The application of statistical concepts to the wind loading of structures. Proceedings, Institution ofCivil Engineering, London, England, Vol.19, Aug., p.449-472ESDU 85020, Characteristics of atmospheric turbulence near the ground. Part II: single point data for strong winds (neutralatmosphere), April 1993, 36 p. London, U.K.ESDU 86010, Characteristics of atmospheric turbulence near the ground. Part III: variation in space and time for strongwinds (neutral atmosphere), Sept. 1991, 33 p., London, U.K.European Prestandard ENV 1991-2-4, 1994. EUROCODE 1: Basis of design and actions on structures, Part 2.4 : Windactions, CENGerstoft P., 1986. An assessment of wind loading on tower shaped structures. Technical University of Denmark, Lingby,Serie R, No.213Ghiocel D., Lungu D., 1975. Wind, snow and temperature effects on structures, based on probability. Abacus Press,Tunbridge Wells, Kent, U.K.Harris R.I., Deaves D.M., 1980. The structure of strong winds. The wind engineering in the eighties. Proceedings of CIRIAConference 12/13 Nov., Construction Industry, Research and Information Association, London, p.4.1-4.93ISO / TC 98 / SC3 Draft International Standard 4354, 1990. Wind actions on structures. International Organisation forStandardisationJoint Committee on Structural Safety CEB-CECM-CIB-FIP-IABSE, 1974. Basic data on loads. Second draft. LisbonKareem, A., Wind Effects on Structures, Prob. Eng. Mech., 2, 4, 1987, pp. 166-200Karman v., T., 1948. Progress in statistical theory of turbulence. Proceedings, National Academy of Science, WashingtonD.C., p.530-539

Page 120: Probabilistic Model Code Jcss Pmc2000

99-CON-DYN/M0037Februari 2001

55

Lumley J.L., Panofsky H.A., 1964. The structure of atmospheric turbulence. J.Wiley & SonsLungu D., Gelder P., Trandafir R., 1995. Comparative study of Eurocode 1, ISO and ASCE procedures for calculating windloads. IABSE Colloquium, Basis of design and actions on structures, Background and application of Eurocode 1, Delft, TheNetherlands, 1996NBC of Canada, 1990. Code National du Bâtiment du Canada, 1990 and Supplement du Code, Comité Associé du CodeNational du Bâtiment, Conseil National de Recherche, CanadaPlate E.J., 1993. Urban climates and urban climate modelling: An introduction. Proceedings of the NATO Advanced StudyInstitute on Wind Climate in Cities, Waldbronn, Germany, July 5-16, Kluwer Academic Publishers,Dordrecht/Boston/London, p.23-40Plate E.J., Davenport A.G., 1993. The risk of wind effects in cities. Proceedings of the NATO Advanced Study Institute onWind Climate in Cities, Waldbronn, Germany, July 5-16, Kluwer Academic Publishers, Dordrecht/Boston/London, p.1-20Ruscheweyh H., 1995. Wind loads on structures from Eurocode 1, ENV 1991-2-3. In Wind climate in cities. KluwerAcademic Publishers, p.241-258Schroers H., Lösslein H., Zilch K., 1990. Untersuchung der Windstructur bei Starkwind und Sturm. Meteorol. Rdsch., 42,Oct., p.202-212Simiu E., Scanlan R.H., 1986. Wind effects on structures. Second edition. J. Wiley & SonsSimiu E., 1980. Revised procedure for estimating along-wind response. Journal of the Structural Division, ASCE, Vol.106,No.1, p.1-10Simiu E., 1974. Wind spectra and dynamic along-wind response. Journal of the Structural Division, ASCE, Vol.100, No.9,p.1897-1910Solari G., 1993. Gust buffeting. I Peak wind velocity and equivalent pressure. Journal of Structural Engineering, ASCE,Vol.119, No.2, p.365-382Solari G., 1993. Gust buffeting. II Dynamic along-wind response. Journal of Structural Engineering, Vol.119, No.2, p.383-398Solari G., 1988. Equivalent wind spectrum technique: theory and applications. Journal of Structural Engineering ASCE,Vol.114, No.6, p.1303-1323Solari G., 1987. Turbulence modelling for gust loading. Journal of Structural Engineering, ASCE, Vol.113, No.7, p.1150-1569Theurer W., Bachlin W., Plate E.J., 1992. Model study of the development of boundary layer above urban areas. Journal ofWind Engineering and Industrial Aerodynamics, Vol. 41-44, p.437-448, ElsevierUniform Building Code, 1991 Edition. International Conference of Building Officials, Whittier, CaliforniaVellozi J., Cohen E., 1968. Gust response factors. Journal of the Structural Division, ASCE, Vol.97, No.6, p.1295-1313Vickery B.J., 1994. Across - wind loading on reinforced concrete chimneys of circular cross-section. ACI StructuralJournal, May-June, p.355-356Vickery B.J., Basu R., 1983. Simplified approaches to the evaluation of the across-wind response of chimneys. Journal ofWind Engineering and Industrial Aerodynamics, Vol.14, p. 153-166.Vickery B.J., 1970. On the reliability of gust loading factors. Proceedings, Technical meeting concerning wind loads onbuildings and structures, Building Science Series 30, National Bureau of Standards, Washington D.C., p.93-104Vickery B.J., 1969. Gust response factors. Discussion. Journal of the Structural Division, ASCE, ST3, March, p.494-501Wieringa J., 1993. Representative roughness parameters for homogenous terrain. Boundary Layer Meteorology, Vol.63,No.4, p.323-364Wind loading and wind-induced structural response, 1987. State of the art report prepared by the Committee on Windeffects of the Structural Division of ASCE. ASCE, N.Y.

Page 121: Probabilistic Model Code Jcss Pmc2000

99-CON-DYN/M0037Februari 2001

56

JCSS PROBABILISTIC MODEL CODEPART 2: LOAD MODELS

2.18 IMPACT LOAD

Table of contents:

2.18 Impact Load2.18.1 Basic Model for Impact Loading2.18.1.1 Introduction2.18.1.2 Failure probability2.18.1.3 Distribution function for the impac tload2.18.2 Impact from vehicles2.18.2.1 Distribution of impact force2.18.2.2 Specifications of impact force2.18.3 Impact from ships2.18.3.1 Distribution of impact force2.18.3.2 Specifications of impact force2.18.4 Impact from airplanes2.18.4.1 Distribution of impact force

List of symbols:

a = decelerationAb = the area of the building including the shadow aread = distance from the structural element to the roadfs(y) = distribution of initial object position in y directionFc(x) = static compression strength at a distance x from the nosek = stiffnessm = massm'(x) = mass per unit lengthn = number of vehicles, ships or planes per time unitn(t) = number of moving objects per time unit (traffic intensity)Pa = the probability that a collision is avoided by human intervention.

Page 122: Probabilistic Model Code Jcss Pmc2000

99-CON-DYN/M0037Februari 2001

57

List of symbols:

Pf q(xy) = the probability of structural failure given a mechanical or human failure on the ship,vehicle, etc. at point (x,y).

r = d/sin α = the distance from "leaving point" to "impact point"R = radius of airport influence circleT = period of time under considerationvc = the object velocity at impactvc(t) = velocity of the crashed partvc (xy) = object velocity at impact, given initial failure at point (x,y)vo = velocity of the vehicle when leaving the trackx,y = coordinate system;

α = angle between collision course and track directionΛ(r) = collision rate for crash at distance r from the airport with r < Rλ(x,t) = failure intensity as a function of the coordinate x and the time t.

Page 123: Probabilistic Model Code Jcss Pmc2000

99-CON-DYN/M0037Februari 2001

58

2.18 IMPACT LOAD

2.18.1 Basic Model for Impact Loading

2.18.1.1Introduction

The basic model for impact loading constitutes of (see figure 2.18.1):

- potentially colliding objects (vehicles, ships, airplanes) that have an intended course, which may be thecentre line of a traffic lane, a shipping lane or an air corridor; the moving object will normally have somedistance to this centre line;

- the occurrence of a human or mechanical failure that may lead to a deviation of the intended course; theseoccurences are described by a homogeneous Poison process;

- the course of the object after the initial failure, which depend on both object properties and environment;- the mechanical impact between object and structure, where the kinetic energy of the colliding object is

partly transferred into elastic-plastic deformation or fracture of the structural elements in both the buildingstructure and the colliding object.

Figure 2.18.1: Probabilistic collision model

Q

object

structure

B

y

x

x = 0

Page 124: Probabilistic Model Code Jcss Pmc2000

99-CON-DYN/M0037Februari 2001

59

2.18.1.2Failure probability

The probability that a single object, moving in x-direction, suffers from a human or mechanical failure inthe square [dx, dy] (see figure 2.1.8.1) and causes collapse at some structure is given by:

Pfq(x,y) fs(y) dy λ(x,t) dx

where:

fs(y) = distribution of initial object position in y direction (see figure 2.18.1)Pf q(xy) = the probability of structural failure given a mechanical or human failure on the ship, vehicle,

etc. at point (x,y).x,y = coordinate system; the x coordinate follows the centre line of the traffic lane, while the y

coordinate represents the (horizontal) distance of the object to the centre; the structure thatpotentially could be hit, is located at the point with coordinates x=0 and y=d.

λ(x,t) = failure intensity as a function of the coordinate x and the time t. The length dependencyexpresses the variability in circumstances along the centre line (for instance curved versusstraight trajectories). The time dependency indicates differences in summer and winter, dayand night, etc. Note that although λ(x,t) is a function of x and t, its dimension is [1/Length].

The probability of structural failure for a period T can then be presented as:

dtdydx(y)f s(xy)Pfqt)(x,n(t)exp--1=(T)Pf λ∫∫∫ (2.18.1)

or for small probability and constant n and λ:

dydx(y)f sy)(x,PfqnT=(T)Pf ∫∫λ (2.18. 2)

where:

T = period under considerationn(t) = number of moving objects per time unit (traffic intensity)

2.18.1.3Distribution function for the impact load

In principle, impact is an interaction phenomenon between the object and the structure. It is not possibleto formulate a separate action and a separate resistance function. However, an upper bound for the impact load canbe found using the "rigid structure" assumption. If the colliding object is modelled as an elastic single degree offreedom system, with equivalent stiffness k and mass m, the maximum possible resulting interaction force equals:

Fc = vc √ (km)(2.18.3)

vc = the object velocity at impact

Note that (2.18.3) gives the maximum for the external load; dynamic effects within the structure still needto be considered. Note further that simple upperbounds also may be obtained if the structure and or the objectbehaves plastic: Fc = min[Fys, Fyo] where Fys = yield force of the structure and Fyo = yield force of the object; theduration of this load is ∆t = mvc/Fc.

Based on formulation (2.18.3) the distribution function for the load Fc can be found:

Page 125: Probabilistic Model Code Jcss Pmc2000

99-CON-DYN/M0037Februari 2001

60

dxdydt(y)f sX]>km(xy)vcP[nexp--1=X<FcP λ∫∫∫ (2.18.4)

vc (xy) = object velocity at impact, given initial failure at point (x,y)

For small probabilities:

dydx(y)f sX]>kmvcP[nT=(T)Pf=X>FcP ∫∫λ (2.18. 5)

For the designation of the variables, see clause 2.18.1.2.

2.18.2 Impact from vehicles

2.18.2.1Distribution of impact force

Consider a structural element in the vicinity of a road or track. Impact will occur if some vehicle,travelling over the track, leaves its intended course at some critical place with sufficient speed (see Figure 2.18.2).

Figure 2.18.2: A vehicle leaves the intended course at point Q with velocity v0 and angle a. A structuralelement at distance r is hit with velocity vr.

The collision force probability distribution based on (2.18.5), neglecting the variability in y-direction isgiven by:

P(Fc > X) = n T λ ∆x P [ √ m k (vo2 - 2ar) > X] (2.18.6)

n = number of vehicles per time unitT = period of time under considerationλ = probability of a vehicle leaving the road per unit length of track∆x = part of the road from where collisions may be expectedvo = velocity of the vehicle when leaving the tracka = deceleration

x

Q

d

bB

v0ϕ

r

α

Page 126: Probabilistic Model Code Jcss Pmc2000

99-CON-DYN/M0037Februari 2001

61

r = d/sin α = the distance from "leaving point" to "impact point"d = distance from the structural element to the roadα = angle between collision course and track direction

λ ∆x is the probability that a passing vehicle leaves the road at the interval ∆x, which is approximated by:

∆x = b / sin µ(α) (2.18.7)

The value of b depends on the structural dimensions. However, for small objects such as columns aminimum value of b follows from the width of the vehicle, so b > 2.5 m.

Numerical values and probabilistic models can be found in Table 2.18.1.

variable designation type mean stand devλ accident rate deterministic 10-10 m-1 -α angle of collision course rayleigh 10 ° 10°v vehicle velocity

- motorway- urban area- court yard- parking garage

lognormallognormallognormallognormal

80 km/hr401510

10 km/hr765

a deceleration lognormal 4 m2/s 1.3 m/s2

m vehicle mass- truck- car

normalnormal

20.000 kg*1500 kg

12.000 kg*400 kg

k vehicle stiffness lognormal 300 kN/m 60 kN/m*Combined with F = k√mv these estimates are quite conservative. One might consider possiblereductions due to transformation of energy into rotational movements, etc. e.g. by the concept of“effective mass”

Table 2.18.1: Numerical values for vehicle impact

Page 127: Probabilistic Model Code Jcss Pmc2000

99-CON-DYN/M0037Februari 2001

62

2.18.2.2Specifications of impact force

The collission force is a horizontal force; only the force component perpendicular to the structural surfaceneeds to be considered.

The collision force for passenger cars affects the structure at 0.5 m above the level of the driving surface;for trucks the collision force affects it at 1.25 m above the level of the driving surface. The force application areais 0.25 m (height) times 1.50 m (width).

For impact loads on horizontal structural elements above traffic lanes the following rules hold (see Figure2.18.3):a) on vertical surfaces the impact actions follow from 2.18.2.1 and the height

reduction as specified at c)b) on horizontal lower side surfaces upward inclination of 10% should be considered. The force application

area is 0.25 m (heigh) times 0.25 m (width).c) for free heights h larger than 6.0 m the forces are equal to zero; for free

heights between 4.0 m and 6.0 m a linear interpolation should be used

Figure 2.18.3: Impact loads on horizontal structural elements above traffic lanes

2.18.3 Impact from ships

2.18.3.1Distribution of impact force

A co-ordinate system (x,y) is introduced as indicated in Figure 2.18.4. The x coordinate follows the centreline of the traffic lane, while the y co-ordinate represents the (horizontal) distance of the ship to the centre. Thestructure that potentially could be hit is located at the point with co-ordinates x=0 and y=d.

Figure 2.18.4: Ingredients for a ship collision model

x

y

m

point (x,y)

v0

d

f0 (y)

object

driving direction

F

F

hh

10° 10°

Page 128: Probabilistic Model Code Jcss Pmc2000

99-CON-DYN/M0037Februari 2001

63

Ship impact may be the result of:(a) either a ship being on collision course, while no avoidance action is taken(b) a mechanical or human failure leading to a change of course.

In case (a) a ship is on collission course, which is not corrected due to inattendance, bad visibility, oldcards and so on. In case (b) the orginal course is correct, but changed, due to e.g. rudder problems ormisjudgement.

Both origins (a) and (b) are present in the following model which is a modification of (2.18.1):

P(F > X) dydx(y)f sX]>kmy)(x,vcP[y

P anTn∆∫∫=

dydx(y)f s]X>kmy)(x,vcP[Tn+∞+

∞−∫∫λ (2.18.8)

T = period of time under considerationn = number of ships per time unit (traffic intensity)λ = probability of a failure per unit travelling distancev(x,y) = impact velocity of ship, given error at point (x,y)k = stiffness of the shipm = mass of the shipfs(y) = distribution of initial ship position in y directionPna = the probability that a collision is not avoided by human intervention, given collision course∆y = values of y coinciding with a collision course

For the evaluation in practical cases, it may be necessary to evaluate (2.18.8) for various ship types andtraffic lanes, and add the results in a proper way at the end of the analysis.

Page 129: Probabilistic Model Code Jcss Pmc2000

99-CON-DYN/M0037Februari 2001

64

Table 2.18.2 gives a number of standard ship characteristics and velocities that could be chosen by thedesigner.

variable designation type mean standard dev

Pna avoidanceprobability- small- medium- large- very large

-

0.0450.0030.0020.001

-

λ failure rate - 10-6 km-1 -

v velocity- harbour- canal- sea

lognormallognormallognormal

1.5 m/s36

0.5 m/s1.01.5

m mass- small- medium- large- very large

lognormallognormallognormallognormal

1000 ton4000

20000200000

2000 ton800040000200000

k equivalent stiffness lognormal 15 MN/m 3 MN/m

Table 2.18.2: Numerical values for ship impact

2.18.3.2Specifications of impact force

Bow, stern and broad side impact shall be considered where relevant; for side and stern impact the designimpact velocities may be reduced.

Bow impact shall be considered for the main sail direction with a maximum deviation of 30o.

If a wall structure is hit under an angle a, the following forces should be considered:- perpendicular to the wall: Fy = F sinα- in wall direction: Fx = f F sinα

where F is the collision force at α = 90° and f = 0.3 is the friction coefficient.

Impact is to be considered as a free horizontal force; the point of impact depends on the geomertry of thestructure and the size of the vessel. As a guideline one could take the most unfavourable point ranging from 0.1 Lbelow to 0.1 L above the design water level. The impact area is 0.05 L * 0.1 L unless the stuctural element issmaller.

L is the typical ship length (L = 15, 40, 100 and 300 m for respectively small, medium, large and verylarge ship size).

Page 130: Probabilistic Model Code Jcss Pmc2000

99-CON-DYN/M0037Februari 2001

65

The forces on the superstructure of the bridge depend on the height of the bridge and the type of ships tebe expected. In general the force on the superstructure of the bridge will be limited by the yield strenght of theships superstructure. A maximum of 10 000 kN for large and very large ships and 3000 kN for small and mediumships can be taken as a guideline averages.

2.18.4 Impact from airplanes

2.18.4.1 Distribution of impact force

The probability of a structure being hit by an airplane is very small. Only for exceptional structures likenuclear power plants, where the consequences of failure may be very large, is it mandatory to account for aircraftimpact during design.

For air corridors, using (2.18.3) and for small probabilities:

(y)f simpact)|X>FcP(PnaAbTn=X)>FcP( λ (2.18.9)

n = number of planes passing per time unit through an air corridor (traffic intensity)T = time period of interest (for instance reference period)λ = probability of a crash per unit distance of flyingfs(y) = distribution of ground impact perpendicular to the corridor direction, given a crashAb = the area of the building including the shadow areaPna = probability of not avoiding a collision, given an airplane on collision course

The area Ab is the area of the building itself, enlarged by a so called shadow area (see figure 2.18.5). Thestrike angle α is random.

For the vicinity of an airport (at a distance r) the impact force distribution is based on:

impact|X>FcPbA)r(PnaTn=X)>FcP( Λ (2.18.10)

r2R

=(r)ΛΛ (2.18.11)

Λ−

= average air plane collision rate for a circular area with radius R = 8 kmΛ(r) = collision rate for crash at distance r from the airport with r < Rn = number of planes approaching the airport per windtunnelR = radius of airport influence circler = distance to the airport

Numerical values are presented in Table 2.18.3

Page 131: Probabilistic Model Code Jcss Pmc2000

99-CON-DYN/M0037Februari 2001

66

Figure 2.18.5: Strike area Ab for an airplane crash.

For airplanes the impact model (2.18.3) is not sufficient. A better model is given by:

(t)v2c)(m,+)(Fc=(t)Fc ξξ (2.18.12)

ττ∫ξ d)(vct0= (2.18.13)

Fc(x) = static compression strength at a distance x from the nosem'(x) = mass per unit length at a distance x from the nosevc(t) = velocity of the crashed part of the plane at time t

Sometimes vc(t) is taken as constant and equal to vr for further simplification. Results from calculationsbased on this model can be found in table (2.18.4).

It is recommended to make the analysis for each type of aircraft (small, large, civil, military) separatelyand add the results afterwards.

λ Crash rate- military plane- civil plane

10-8 km-1

10-9 km-1

Λ Average collission rate for airport area- small planes (< 6 ton)- large planes (> 6 ton) 10-4 yr-1 km-2

4 10-5 yr-1 km-2

R Radius of aiport influence circle 8 km

α Strike angle mean 10o

standard deviation 10o

Rayleigh Distribution

Table 2.18.3: Numerical values for the air plane impact model

10°

building

H

H/tan 10° = 6Hshadow area

Page 132: Probabilistic Model Code Jcss Pmc2000

99-CON-DYN/M0037Februari 2001

67

Table 2.18.4: Impact characteristics for various aircrafts (perpendicular on immovable walls

A = cross sectional area of the plane or enginem = massvr = velocity at impact

Type t[ms] F [MN]

Cessna 210A 0 0m = 1.7 ton 3 7v = 100 m/s 6 7A = 7 m2 18 4engine m = 0.2 ton 18 4

A = 0.5 m2

Lear Jet 23 A 0 0m = 5.7 ton 20 2v = 100 m/s 35 6A = 12 m2 50 6

70 1280 20

100 0

MRCA (Multi Role Combat) 0 0m = 25 ton 10 55v = 215 m/s 30 55A = 4 m2 40 154engine m = 1.2 ton 50 154

A = 0.5 m2 701 0

Boeing 707-320 0 0m = 90 ton 30 20v = 100 m/s 150 20A = 36 m2 200 90

230 90250 20320 10330 0

t

F

t

F

t

F

t

F

Page 133: Probabilistic Model Code Jcss Pmc2000

99-CON-DYN/M0037Februari 2001

68

JCSS PROBABILISTIC MODEL CODEPART 2: LOAD MODELS

2.20 FIRE

Table of contents:

2.20 Fire2.20.1 Fire ignition model2.20.2 Flashover occurrence2.20.3 Combustible material modelling2.20.4 Temperature-time relationship2.20.4.1 Scientific models2.20.4.2 Engineering models

List of symbols:

Af = floor areaAi = area of the vertical opening i in the fire compartment [m2]At = total internal surface areaf = ventilation openingHi = specific combustible energy for material imi = derating factor between 0 and 1, describing the degree of combustionMki = combustible mass present at ∆A for material iqo = fire load density per unit floor areat = timeteq = equivalent time of fire duration

α = parameterβf = coefficient (model uncertainty)θ = temperature in the compartmentθo = temperature at the start of the fireθA = parameter

Page 134: Probabilistic Model Code Jcss Pmc2000

99-CON-DYN/M0037Februari 2001

69

2.20. FIRE

2.20.1 Fire ignition model

The probability of a fire starting in a given building or area is modelled as a Poisson process with constantoccurrence rate:

P ignition in (t,t+dt) in a compartment = νfire dt (2.20.1)

The occurrence rate νfire can be written as a summation of local values over the floor area:

fire = (x, y) dxdyν λ∫ ∫Af

(2.20.2)

where λ(x,y) corresponds to the probability of fire ignition per year per m2 for a given occupancy type; Af is thefloor area of the fire compartment. As in most applications λ(x,y) can be simplied as a constant, and (2.20.2) canbe simplified to:

νfire = Af λ (2.20.3)

Values for l are presented in Table 2.20.1.

Type of building λ [m-2 year-1]

dwelling/school 0.5 to 4 * 10-6

shop/office 1 to * 10-6

industrial building 2 to 10 * 10-6

Table 2.20.1: Example values of annual fire probabilities λ per unit floor area for several types ofoccupancy.

2.20.2 Flashover occurrence

After ignition there are various ways in which a fire can develop. The fire might extinguish itself after acertain period of time because no other combustible material is present. The fire may be detected very early and beextinguished by hand. An automatic sprinkler system may operate or the fire brigade may arrive in time to preventflash over. Only in a minority of cases does a fire develop into a fully developed room or compartment fire;sometimes the fire may break through a barrier and start a fire in another compartment. From the structural pointof view only these fully developed or post flashover fires (see Figure 2.20.1) may lead to failure. For very largefire compartments having a very large concentration of fire loads, e.g. industrial buildings, a local fire of highintensity also may lead to (localised) structural damage.

The occurrence rate of flashover is given by:

νflash over = Pflash over | ignition νfire (2.20.4)

The probability of a flashover once a fire has taken place, can obviously be influenced by the presence ofsprinklers and fire brigades. Numerical values for the analysis are presented in Table 2.20.2.

Page 135: Probabilistic Model Code Jcss Pmc2000

99-CON-DYN/M0037Februari 2001

70

Figure 2.1: Schematic presentation of a temperature-time curve* Curve (a) represents the temperature-time curve when a sprinkler system or a timely fire brigade action is

successful.* Curve (b) presents the temperature-time relation for a fully developed fire.* Curve (c) indicates the limited influence of a fire brigade arriving after flashover has taken place.* Curve (d) indicates the ISO-standard temperature curve (see section 2.20.4.2).

Protection method Pflashover|ignition

Public fire brigade 10-1

Sprinkler 10-2

High standard fire brigade on site, combinedwith alarm system (industries only)

10-3 to 10-2

Both sprinkler and high standard residential firebrigade

10-4

Table 2.20.2: Probability of flashover for given ignition, depending on the type of active protection measures

d

c

b

a

T

t

flash over

ignition

taq

flamephase

ignitionphase

coolingphase

Page 136: Probabilistic Model Code Jcss Pmc2000

99-CON-DYN/M0037Februari 2001

71

2.20.3 Combustible material modelling

The available combustible material can be considered as a random field, which in general might benonhomogeneous as well as nonstationary. The intensity of the field q at some point in space and time is definedas:

q = HA

iΣm Mi ki

f

(2.20.5)

mi = derating factor between 0 and 1, describing the degree of combustionMki = combustible mass present at Af for material iHi = specific combustible energy for material iAf = considered floor area

In some cases the intensity q may also depend on a vertical ordinate.

The non-dimensional factor µi is a function of the fuel type, the geometrical properties of the fuel, and theposition of the fuel in the fire compartment, among other things. For some types of fire load components, mi

depends on the time of fire duration and on the gas temperature-time characteristics of the compartment fire.Probabilistic models for q are presented in tabel 2.20.3.

Type of fire compartment Mean valueµ(q0) [MJm-2]

Coefficient of variationV(q0)

1 : Dwellings 500 0.202 : Offices 600 0.303 : Schools 350 0.204: Hospitals 450 0.305: Hotels 300 0.25

Table 2.20.3: Recommended values for the average fire load intensity qo

2.20.4 Temperature-time relationship

2.20.4.1 Scientific models

For known characteristics of both the combustible material and the compartment, the post flash overperiod of the temperature time curve can be calculated from energy and mass balance equations.

Many variables can be introduced as random in the model, for instance:- the amount and spatial distributions of combustible material;- the effective energy value;- the rate of combustion;- the ventilation characteristics;- air use and gas production parameters;- thermal conductivity properties;- model uncertainties.

In addition, the development of the fire may depend on events like collapse of windows or containments,which may change the ventilation conditions or the available amount of combustible material respectively.

Page 137: Probabilistic Model Code Jcss Pmc2000

99-CON-DYN/M0037Februari 2001

72

As a simplification the following assumptions may be used.

1. the combustible material is wood;2. the wood is spread uniformly over the floor area;3. the fire compartment is of a standard building material (brick, concrete);4. the fire is controlled by ventilation and not by the amount of fuel load

(this is conservative);5. the initial temperature is 20 o C .

In this case the temperature time curve depends on two parameters:

- the floor averaged fire load density qo ;- the opening factor f.

The opening factor f is defined as:

f =A

Ah; h = A h

A; A = A

t

i ii

v

vvwith

∑∑ (2.20.7)

where:

At = total internal surface area of the fire compartment, i.e. the area of the walls, floor and ceiling,including the openings [m2]

Ai = area of the vertical opening i in the fire compartment [m2]hi = value of the height of opening i [m]

For a fire compartment which also contains horizontal openings, the opening factor can be calculatedfrom a similar expression. In calculating the opening factor, it is assumed that ordinary window glass isimmediately destroyed when fire breaks out.

In many cases it will be possible to indicate a physical maximum fmax. The actual value of f in a fire shouldbe modelled as a random quantity according to:

f = fmax (1 - ζ) (2.20.8)

ζ = random parameter (see Table 2.20.4)

To avoid negative values of f, this lognormal distribution should be cut off at ζ = 1. In addition one shouldmultiply the resulting temperatures by an overall model uncertainty factor θmodel.

Page 138: Probabilistic Model Code Jcss Pmc2000

99-CON-DYN/M0037Februari 2001

73

2.20.4.2 Engineering models

In many engineering applications, use is made of equivalent standard temperature-time-relationshipaccording to ISO 834:

θ = θo + θA log10 αt + 1 for 0 < t < teq (2.20.9)

with:

eqf o f

tt =

q A

A f

β(2.20.10)

θ = temperature in the compartmentθo = temperature at the start of the fireθA = parameterα = parametert = timeteq = equivalent time of fire durationβf = coefficient (model uncertainty)qo = fire load density per unit floor areaAf = floor areaAt = total internal surface areaf = ventilation opening (see 2.20.7, 2.20.8)

Numerical values and probabilistic models are given in Table 2.20.4.

Variable Distribution Mean Standard deviation

ζβf

θo

θA

α

truncated lognormal 1)

lognormaldeterministicdeterministicdeterministic

0.24.0 sm2.25/MJ20oC345 K0.13 s-1

0.21.0---

1) values of ζ > 1 should be supressed

Table 2.20.4: Numerical values for random variables

Page 139: Probabilistic Model Code Jcss Pmc2000

1 10.10.2000

JCSS PROBABILISTIC MODEL CODE

PART 3: MATERIAL PROPERTIES

3.0. GENERAL PRINCIPLES

Table of contents:

3.0.1 Introduction3.0.2 Material properties3.0.3 Uncertainties in material modelling3.0.4 Scales of modelling variations3.0.5 Hierarchical modelling3.0.6 Quality control strategies3.0.6.1 Types of strategies3.0.6.2 Sampling3.0.6.3 Updating versus selecting

Annex A: Bayesian evaluation procedure for the normal and lognormal distribution – charactersitcvaluesAnnex B: Bayesian evaluation procedure for regression – characteristic value

List of symbols:

fx(x|q) = the variability of property x within a given lotfq(q) = the variability of the parameters q from lot to lot; statistical description of the

productionf'(q) = prior distribution of qf''(q) = posterior distribution of qL(data|q) = likelihood functionq = vector of distribution parameters (e.g. mean and std. dev.)C = normalising constantd = decision rule

Page 140: Probabilistic Model Code Jcss Pmc2000

2 10.10.2000

3.0.1 Introduction

The description of each material property consists of a mathematical model (e.g. elastic-plasticmodel, creep model, etc.) and random variables or random fields (e.g. modulus of elasticity, creepcoefficient). Functional relationships between the various variables may be part of the material model(e.g. the relation between tensile stress and compressive stress for concrete).

In general, it is the response to static and time dependent mechanical loading that matters forstructural design. However, also the response to physical, chemical and biological actions is importantas it may affect the mechanical properties and behaviour.

It is understood that modelling is an art of reasonable simplification of reality such that theoutcome is sufficiently explanatory and predictive in an engineering sense. An important aspect of anengineering models also is its operationability, i.e. the ease in handling it inapplications.

Models and values should follow from (standardised) tests, representing the actualenvironmental and loading conditions as good as possible. The set of tested specimen should berepresentative for the production of the relevant fabrication sites, cover a sufficient long period of timeand may include the effect of standard quality control measures. Allowance should be made for possibledifferences between test circumstances and structural environment (conversion).

For the classical building materials, knowledge about the various properties is generally available fromexperience and from tests in the past. For new materials models and values should be obtained from anextensive and well defined testing program.

3.0.2 Material properties

Material properties are defined as the properties of material specimens of defined size andconditioning, sampled according to given rules, subjected to an agreed testing procedure, the results ofwhich are evaluated according to specified procedures.

The main characteristics of the mechanical behaviour is described by the one dimensional σ-ε-diagram, as presented in figure 3.0.1. As an absolute minimum for structural design the

• modulus of elasticity• the strength of the material

both for tensile and compression should be known. Other important parameters in the one-dimensionalσ-ε-diagram are:

• yield stress (if present)• limit of proportionality• strain at rupture and strain at maximum stress

The strain at rupture is a local phenomenon and the value obtained may heavily depend on theshape and dimensions of the test specimen.

Page 141: Probabilistic Model Code Jcss Pmc2000

3 10.10.2000

σσσσ

f

E

εεεεεεεεu

Figure 3.0.1: Stress strain relationship

Additional to the one dimensional σ-ε-diagram, information about a number of other quantitiesand effects is of importance, such as:

• Multi-axial stress condition• Duration and strain rate effects• Temperature effects• Humidity effects• Effects of notches and flaws• Effects of chemical influences

In general, the various properties of one material may be correlated.

In the present version of this JCSS model code not all properties will be considered.

3.0.3 Uncertainties in material modelling

Material properties vary randomly in space: The strength in one point of a structure will not bethe same as the strength in another point of the same structure or another one. This item will be furtherdeveloped in the sections 3.04 and 3.05.

Additional to spatial variations of materials, the following uncertainties between measuredproperties of specimen and properties of the real structure should be accounted for.

1. Systematic deviations identified in laboratory testing by relating the observed structuralproperty to the predicted property, suggesting some bias in prediction.

2. Random deviations between the observed and predicted structural property, generallysuggesting some lack of completeness in the variables considered in the model.

3. Uncertainties in the relation between the material incorporated in the structural sample and thecorresponding material samples.

Page 142: Probabilistic Model Code Jcss Pmc2000

4 10.10.2000

4. Different qualities of workmanship affecting the properties of (fictitious) material samples, i.e.when modelling the material supply as a supply of material samples.

5. The effect of different qualities of workmanship when incorporating the material in actualstructures, not reflected in corresponding material samples.

6. Uncertainties related to alterations in time, predictable only by laboratory testing, fieldobservations, etc.

3.0.4 Scales of modelling variations

Material properties vary locally in space and, possibly, in time. As far as the spacial variationsare concerned, it is useful to distinguish between three hierarchical levels of variation: global (macro),local (meso) and micro (see table 3.0.1).

For example, the variability of the mean and standard deviation of concrete cylinder strengthper job or construction unit as shown in figure 3.0.2 is a typical form of global parameter variation. Thisvariation primarily is the result of production technology and production strategy of the concreteproducers. Parameter variations between objects are conveniently denoted as macroscale variations. Theunit of that scale is in the order of a structure or a construction unit. Parameter variations may also bedue to statistical uncertainties.

Given a certain parameter realisation in a system the next step is to model the local variationswithin the system in terms of random processes or fields. Characteristically, spatial correlations(dependencies) become negligible at distances comparable to the size of the system. This is a directconsequence of the hierarchical modelling procedure where it is natural to assume that the variationwithin the system is conditional on the variations between systems and the first type of variation isconditionally independent of the second. At this level one may speak of meso-scale variations.Examples are the spatial variation of soils within a given (not too large) foundation site or the number,size and spatial distribution of flaws along welding lines given a welding factory (or welding operator).The unit of this scale is in the order of the size of the structural elements and probably mostconveniently measured in meters.

At the third level, the micro-level, one focuses on rapidly fluctuating variations andinhomogenities which basically are uncontrollable as they originate from physical facts such as therandom distribution of spacing and size of aggregates, pores or particles in concrete, metals or othermaterials. The scale of these variations is measured in particle sizes, i.e. in centimeters down to the sizeof crystals.

The modelling process normally uses physical arguments as far as possible. Quite generally, theobject is taken as an arrangement of a large number of small elements. The statistical properties of theseelements usually can only be assessed qualitatively as well as their type of interaction. This, however, issufficient to perform some basic operations such as extreme value, summation or intersection operationswhich describe the overall performance. The large number of elements greatly facilitates suchoperations because one can make use of certain limit theorems in probability theory. The advantage ofusing asymptotic concepts rests on the fact that the description of the element properties can then bereduced to some few essential characteristics. The central limit theorem of probability theory,asymptotic extreme value concepts, convergence theorems to the Poisson distribution, etc. will play animportant role. In particular, size effects have to be taken into account at this level.

Page 143: Probabilistic Model Code Jcss Pmc2000

5 10.10.2000

A useful concept is to introduce a reference volume of the material which in general is chosenon rather practical grounds. It most frequently corresponds to some specified test specimen volume atwhich material testing is carried out. This volume generally neither corresponds to the volume of thevirtual strength elements introduced at the micro-scale modelling level nor to a characteristic volume forin situ strength. It needs to be related to the latter one and these operations can involve not only simplesize scaling but more complicated functional relationships if the material produced is subject to furtheruncertain factors once put in place. Concrete is the most obvious example for the necessity of suchadditional consideration. Of course, scale effects may also be present at the meso-scale level ofmodelling.

The reason for this concept of modelling at several levels (steps) is the requirement foroperationability not only in the probabilistic manipulations but also in sampling, estimation and qualitycontrol. This way of modelling and the considerations below are, of course, only valid under certaintechnical standards for production and its control. At the macro-scale level it is assumed that theproduction process is under control. This simply means that the outcome of production is stationary insome sense. Should a trend develop, production control corrects for it immediately or with somesufficiently small delay. Therefore, it is assumed that at least for some time interval (or spatial region)whose length (size) is to be selected carefully, approximate stationarity on the meso- and micro-scale isguaranteed. Quite frequently, the operational, mathematical models available so far even requireergodicity. Variations at the macro scale level, therefore, can be described by stationary sequences. Ifthe sequences are or can be assumed independent, it is possible to handle macro-scale variations by theconcept of random variables. Stationarity is also assumed at the lower levels. However, it may benecessary to use the theory of random processes (fields) at the lower levels, especially in order to takeinto account of significant effects of dependencies in time or space.

3.0.5 Hierarchical modelling

Consider a random material property X which is described by a probability density functionfx (x | q), where q = (q1, q2, ...) is the statistical parameter vector, e.g. q1 is the mean and q2 the standarddeviation. The density function fx (x | q) applies to the property of a finite reference volume, identicalwith or clearly related to the volume of the test specimen within a given unit of material. Guidance onthe type of distribution may be obtained by assessing the performance of the reference volume undertesting conditions in terms of some micro system behaviour. The performance of test specimens,regarded as a system of micro elements, can usually be interpreted by one of the following strengthmodels

- Weakest link model- Full plasticity model- Daniel’s bundle of threads model

When applying these models to systems with increasing number of elements, they generallylead to specific distributions for the properties of the system at the meso-scale level. The weakest linkmodel leads to a Weibull distribution, the other two models to a normal distribution. For largercoefficients of variation the normal distribution must be replaced by a lognormal distribution in order toavoid physically impossible, negative strength values.

In the next step (see Table 3.0.1) a unit (a structural member) is considered as meso-scale (localvariations). The respective unit is regarded as being constituted from a sequence of finite volumes.Hence, a property in this unit is modelled by a random sequence X1, X2, X3 ... of reference volume

Page 144: Probabilistic Model Code Jcss Pmc2000

6 10.10.2000

properties. The Xi may have to be considered as correlated, with a coefficient of correlation dependingon the distance ∆rij and a correlation parameters ρo and dc, for example:

ρ(∆rij) = ρo + (1-ρo) exp [ - (∆rij / dc )2 ] (3.0.1)

In general ρo =0.In the subsequent step the complete structure or some relevant part of it is considered as a lot.

A lot is defined as a set of units, produced by one producer, in a relatively short period, with no obviouschanges in production circumstances and intended for one building site. In practice lots correspond toe.g.:

- the production of ready-mix concrete for a set of elements- structural steel from one melt processed according to the same conditions- foundation piles for a specific site

As a lot is a set of units it can also be conceived as a set of reference volumes Xi. Normally theparameters q defined before are defined on the lot level. The correlation between the Xi values withindifferent members normally can be modelled by a single parameter

ρij = ρ0 (3.0.2)

Finally, at the highest macro scale level, we have a sequence of lots, represented by a random sequenceof lot parameters (in space or in time). Here we are concerned with the estimation of the distribution oflot parameters, either from one source or several sources. The individual lots may be interpreted asrandom samples taken from the enlarged population or gross supply. The gross supply comprises allmaterials produced (and controlled) according to given specifications, within a country or groups ofcountries. The macro-scale model may be used if the number of producers and structures is large ordifferences between producers can be considered as approximately random.

Table 3.0.1: Scales of fluctuation

Scale Population Reference name Descriptionmacro (global) set of structures gross supply X

meso set of elements lot X | q and ρρρρo

meso (local) one element unit X | q and ρρρρ(∆∆∆∆r)

micro aggregate level referencevolume

type of distribution

Page 145: Probabilistic Model Code Jcss Pmc2000

7 10.10.2000

a

b

c

Figure a) production description fq(q)

Figure b) lot description fx(x|q)

Figure c) total supply fx(x)

Figure 3.0.2: Description of production parameters, lots and total supply

Page 146: Probabilistic Model Code Jcss Pmc2000

8 10.10.2000

The gross supply is described by f(q). Type and parameters should follow from a statisticalsurvey of the fluctuations of the various lots which belong to the production under consideration. It willbe assumed here that f(q) is known without statistical uncertainty. If statistical uncertainties cannot beneglected, they can be incorporated. The distribution f(q) should be monitored more or less continuouslyto find possible changes in production characteristics.

The probability density function (predictive density function) for an arbitrary unit (arbitrarymeans that the lot is not explicitly identified) can be found from the total probability theorem:

dq(q)fq)|(xf=(x)f qxx ∫ (3.0.3)

The density function f(x) may be conceived as the statistical description of x within a largenumber of randomly selected lots. For some purposes one could also identify f(x) directly.

3.0.6 Definition of characteristic value

The characteristic value of a material with respect to a given property X is defined as the px – quantile inthe predictive distribution, i.e.

xc = Fx-1(p) (3.0.4)

Examples for predictive distributions can be found in Annex A. Others may be found in [1], [2] and [3].

3.0.7 Quality control strategies

3.0.7.1 Types of strategies

Normally the statistical parameters of the material properties are based on general tests, takinginto account standard production methods. For economic reasons it might be advantageous to have morespecific forms of quality control for a particular work or a particular factory.

Quality control may be of a total (all units are tested) or a statistical nature (a sample is tested).Quality control will lead to more economical solutions, but has in general the disadvantage that theresult is not available at the time of design. In those cases, the design value has to be based on thecombination of the unfiltered production characteristics and the expected effect of the quality controlselection rules.

Various quality control procedures can be activated, each one leading to a different designvalue. In Figure 3.0.3 an overview is presented. The easiest procedure is to perform no additionalactivities (option "no tests"). This means that the units, lots, production should be defined, theirdescriptions f(x|q) and f(q) should be established and only f(q) should be checked for long term changesin production characteristics.

If on the other hand tests are performed one may distinguish between a total (unit by unit)control and sampling on the one hand and between selection and updating on the other hand. Thevarious options will be discussed.

Page 147: Probabilistic Model Code Jcss Pmc2000

9 10.10.2000

Page 148: Probabilistic Model Code Jcss Pmc2000

10 10.10.2000

no test

test

update

selection

sample

total

sample

total

Figure 3.0.3: Strategies for Quality Control

3.0.7.2 Total testing versus Sampling

Both for updating and for selection one may test all units which go into a structure (total testing)or one may test a (random) sample only (statistical testing).

If the control is total, every produced unit is inspected. The acceptance rules imply that a unit isjudged as good (accepted) or bad (not accepted). This type of control is also referred to as unit by unitcontrol. Typically, testing all units requires a non-destructive testing technique. Therefore some kind ofmeasurement error has to be included resulting in a smooth truncation of the distribution.

If the control is statistical only a limited member of units is tested. The procedure generallyconsists of the following parts:

- batching the products;- sampling within each lot;- testing the samples;- statistical judgement of the results;- decision regarding acceptance.

One normally takes a random sample. In a random sample each unit of the lot has the sameprobability of being sampled. Where knowledge on the inherent structure of the lot is available, thiscould be utilised, rendering more efficient sampling techniques, e.g.:

- sampling at weak points, when trends are known;- sampling at specified intervals;- stratified sampling;

Page 149: Probabilistic Model Code Jcss Pmc2000

11 10.10.2000

The larger efficiency results in smaller sample sizes for obtaining the same filtering capabilityof a test. No further guidance, however, will be given here.

3.0.6.3 Updating versus selecting

Testing can be done with two purposes:

(1) to update the probability density function f(x) or f(q) of some particular lot or item (updating);(2) to identify and reject inadequate lots or units on the basis of predefined sampling procedures

and selection rules (selection).

The basic formula for the first option is given by:

′′ ′f (q) = C L(data|q))f (q) (3.0.5)

where:

f''(q) = posterior distribution of qf'(q) = prior distribution of qL(data/q) = likelihood of the dataq = vector of distribution parameters (e.g. mean and std. dev.)

C = normalising constant = dq(q)fq))|L(data ′∫For the normal case more detailed information is presented in Annex A.

The first option can only be used after production of the lot or item under consideration. This data maynot be known at the time of the design (e.g. ready mix concrete). The second option, on the other hand,offers the possibility to predict the posterior f”(q) for the filtered supply for a given combination off(x|q), f(q) and a selection rule d. In such a case the control may lead to two possible outcomes:

- the lot (or unit) is rejected : d ∉A

- the lot (or unit) is accepted : d ∈ A

Here d is a function of the test result of a single unit or of the combined test result of the units ina sample and A is the acceptance domain.

Page 150: Probabilistic Model Code Jcss Pmc2000

12 10.10.2000

One may then calculate the posterior distributions for an arbitrary accepted lot:

f”(q | d ∈ A) = C P(d ∈ A | q ) f’(q) (3.0.6)

Here f(q) is the distribution function for the unfiltered supply and the acceptance probability P(d ∈ A | q)should be calculated from the decision rule.

The updated distribution for X can be obtained through (3.0.3) with f(q) replaced by f ”(q).

More information about the effect of quality cobntrol on the distribution of material properties can befound in [4].

Page 151: Probabilistic Model Code Jcss Pmc2000

13 10.10.2000

Annex A: Bayesian evaluation procedure for the normal and lognormal distribution – characteristicvalues

If X has a normal distribution with parameters q1 = µ and q2 = σ it is convenient to assume a priordistribution for µ and σ according to:

)m-(n+s2

1exp-k=),(f 22

21)+)n(+(- ′µ′′ν′

σσσµ′ ′δν′ (1)

k = normalizing constantδ(n') = 0 for n' = 0δ(n') = 1 for n' > 0

This special choice enables a further analytical treatment of the various operations. The priordistribution (1) contains four parameters: m', n', s' and ν'.

Using equation (3.0.4) one may combine the prior information characterised by (1) and a testresult of n observations with sample mean m and sample standard deviation s. The result is a posteriordistribution for the unknown mean and standard deviation of X, which is again given by (1), but withparameters given by the following updating formula's:

n” = n’ + n (2)

ν” = ν’ + ν + δ(n’) (3)

m”n” = n’m’ + nm (4)

[ν”s”2 + n”m”2] = [ν’s’2 + n’m’2] + [ν s2 + n m2] (5)

Then, using equation (3.0.3) the predictive value of X can be found from:

5.0

)n"

1+(1s"tm"= ν ′′+X

(6)

where tν'' has a central t-distribution.

In case of known standard deviation σ eq. (2) and (4) still hold for the posterior mean. The predictivevalue of X is

5.0

)n"

1+(1m"= σ+ uX

(7)

where u has a standard normal distribution.

The characteristic value is thus defined as

Page 152: Probabilistic Model Code Jcss Pmc2000

14 10.10.2000

σ++

σ+σ+=

ν unknownforn

sptm

knownforn

pumx

x

x

c5.0"

5.0

)"

11()("

)"

11()("

"

(8)

For n”,ν” → ∞ xc = m” + u(px) s” in both cases with s” = σ.

If X has a lognormal distribution, Y = ln (X) has a normal distribution. One may then use theformer formula’s on Y and use X = exp(Y) for results on X.

Page 153: Probabilistic Model Code Jcss Pmc2000

15 10.10.2000

Annex B: Bayesian evaluation procedure for regression – characteristic value

If only indirect measurements for the quantity of interest are possible and a linear regression modely = a 0 + a1 x is suitable the predictive value of y has also a t-distribution given by

y a a x t sn

x x

x xii

n= + + + + −

F

H

GGGG

I

K

JJJJ=∑

0 10

2

2

1

1 2

11

ν( )

( )

/

where

a y a x

ax y nxy

x nx

xn

x

yn

y

sn

y a a x

n

i ii

n

ii

n

ii

n

ii

n

i ii

n

0 1

11

2 2

1

1

1

20 1

2

1

1

1

1

2

2

= −

=−

=

=

=−

− −

= −

=

=

=

=

=

∑b gν

The characteristic value corresponding to the quantile p is

y a a x T p sn

x x

x xc

ii

n= + + + + −

F

H

GGGG

I

K

JJJJ−

=∑

0 11 0

2

2

1

1 2

11

( , )( )

( )

/

ν

For example, for S-N curves, it is y = ln(N), x = ln(∆σ), a1 = -m und a0 = lna. The characteristic valueof N for given ln(∆σE) = x0 is Nc = exp[yc].

Page 154: Probabilistic Model Code Jcss Pmc2000

16 10.10.2000

References

[1] Aitchison, J., Dunsmore, I.R., Statistical Prediction Analysis, Cambridge University Press,Cambridge, 1975

[2] Raiffa, H., Schlaifer, R., Applied Statistical Decision Theory, MIT Press, Cambridge, 1968

[3] Engelund, S., Rackwitz, R., On Predictive Distribution Functions for the Three AsymptoticExtreme Value Distributions, Structural Safety, Vol. 11, 1992, pp. 255-258

[4] Kersken-Bradley, M., Rackwitz, R., Stochastic Modeling of Material Properties and QualityControl, JCSS Working Document, IABSE-publication, March 1991

Page 155: Probabilistic Model Code Jcss Pmc2000

1 10.10.2000

JCSS PROBABILISTIC MODEL CODE PART 3: RESISTANCE MODELS

3.1 CONCRETE PROPERTIES Table of contents: 3.1.1 Basic Properties 3.1.2 Stress-strain-relationship 3.1.3 The probabilistic model 3.1.4 Distribution for Ykj 3.1.5 Distribution for fco List of symbols: fco = basic concrete compression strength Mj = the logarithmic mean at strength job j Σj = the logarithmic strength standard deviation at job j Y1,j = a log-normal variable representing additional variations due to the special placing, curing

and hardening conditions of in situ concrete at job j Uij = a standard normal variable λ = lognormal variable with mean 0.96 and coefficient of variation 0.005; generally it

suffices to take λ deterministically α(t,τ) = is a deterministic function which takes into account the concrete age at the loading time t

and the duration of loading τ ϕ(t,τ) = is the creep coefficient. βd = total load and depends from the type of the structure Ec = modulus of elasticity fc = in situ strength εe = strain at yielding εu = ultimate strain

Page 156: Probabilistic Model Code Jcss Pmc2000

2 10.10.2000 3.1.1 Basic Properties The reference property of concrete is the compressive strength fco of standard test specimens (cylinder of 300 mm height and 150 mm diameter) tested according to standard conditions and at a standard age of 28 days (see ISO/DIS 2736 and ISO 3893). Other concrete properties are related to the reference strength of concrete according to:

In situ compressive strength: cf = (t, ) α τ λfco [MPa] (1)

Tensile strength: ct c2/3f = 0.3 f [MPa] (2)

Modulus of elasticity: ))(t,+1

1( f 10.5 = Ed

3/1cc τϕβ

[GPa] (3)

Ultimate compression strain: ))(t,+(1f10.6 = d

6/1-c

3u τϕβε − [m/m] (4)

λ is a factor taking into account the systematic variation of in situ compressive strength and

strength of standard tests (see 3.1.3) α(t,τ) is a deterministic function which takes into account the concrete age at the loading time t [days]

and the duration of loading τ [days]. The function is given by: α(t,τ) = α1(τ) α2(t) (5a) α1(τ) = α3(∞) + [1-α3(∞)]exp[-a τ τ] with α3(∞) ≈ 0.8 and a τ = 0.04. α2(t) = a + b 1n(t) (5b) In most applications α1(τ) = 0.8 can be used. The coefficients a and b in α2(t) depend on the

type of cement and the climatical environment; under normal conditions a = 0.6 and b = 0.12. ϕ(t,τ) is the creep coefficient according to some modern code assumed to be deterministic. βd is the ratio of the permanent load to the total load and depends on the type of the structure;

generally βd is between 0.6 and 0.8. 3.1.2 Stress-strain-relationship For concrete under compression the following simplified stress-strain relationship holds: σ = Ec ε for ε < εe (6) σ = fc for εe < ε < εu (7) εe = fc/Ec (8)

Page 157: Probabilistic Model Code Jcss Pmc2000

3 10.10.2000 For calculations where the form of the stress-strain relationships is important the following relationship should be used:

σ εε

= − −

fcs

k

1 1

(9) εs = 0.0011 fc

1/6 (10)

k

Efc s

c=

ε

(11) The relationship holds for 0 < ε < εs. 3.1.3 The probabilistic model The strength of concrete at a particular point i in a given structure j as a function of standard strength fc0 is given as: fc,ij = α(t,τ) (fco,ij)

λ Y1,j (12) fco,ij = exp((Uij Σj + Mj )) (13) in which fc0,ij = log-normal variable, independent of Y1,j , with distribution parameters Mj and Σj Mj = the logarithmic mean at job j Σj = the logarithmic standard deviation at job j Y1,j = a log-normal variable representing additional variations due to the special placing, curing

and hardening conditions of in situ concrete at job j Uij = a standard normal variable representing the variability within one structure λ = lognormal variable with mean 0.96 and coefficient of variation 0.005; generally it

suffices to take λ deterministically

Page 158: Probabilistic Model Code Jcss Pmc2000

4 10.10.2000 The variable Y1,j can also be taken as a spatially varying random field whose mean value function takes account of systematic influences in space. Correspondingly, for the other three basic properties:

Y f 0.3 = f j2,3/2

ijc,ct ij, (14)

1

j3,3/1ijc,ij, )),(1(Y f 10.5 = E −τϕβ+ tdc (15)

))(t,+(1Y f 10 6 = j4,

6/-1ijc,

-3iju, τϕβε d (16)

where the variables Y2,j to Y4,j mainly reflect variations due to factors not well accounted for by concrete compressive strength (e.g., gravel type and size, chemical composition of cement and other ingredients, climatical conditions). The variables Uij and Ukj within one member are correlated by:

ρ−+ρ=ρ2

c

2kjij

kjijd

)rr(exp)1()U,U( (17)

where dc = 5 m and ρ = 0.5. For different jobs Uij and Ukj are uncorrelated. 3.1.4 Distributions of Ykj Unless direct measurements are available, the parameters of the variables Yk,j can be taken from Table 3.1.1. The variables are distributed according to the log-normal distibution. The variability of the variables Yk,j can further be split into a part depending only on the job under consideration and a part representing spatial variability. If direct measurements are available, the parameters in Table 3.1.1 are taken as parameters of an equivalent prior sample with size n' = 10 (see Part 1 for the details of updating). Variable Distribution type Mean Coefficient of variation Related to Y1,j LN 1.0 0.06 compression Y2,j LN 1.0 0.30 tension Y3,j LN 1.0 0.15 E-modulus Y4,j LN 1.0 0.15 ultimate strain

Table 3.1.1: Data for parameters Yi 3.1.5 Distribution for fco The distribution of xij = ln(fco,ij) is normal provided that its parameters M and Σ obtained from an ideal infinite sample. In general it must be assumed that concrete production varies from production unit, site, construction period, etc. and that sample sizes are limited. Therefore, the parameters M and Σ must also be treated as random variables. Then, xij has a student distribution according to:

Page 159: Probabilistic Model Code Jcss Pmc2000

5 10.10.2000

+= −

ν

5.0)"

11("

)"/ln()(" ns

mxFxF tx

where Ftν′′ is the Student distribution for ν′′ degrees of freedom. fco,ij can be represented as

5.0"

"", )11(exp( " n

stmf ijco ++= ν

The values of m”, n”, s” and ν” depend on the amount of specific information. Table 3.1.2 gives the values if no specific information is available (prior information). Table 3.1.2: Prior parameters for concrete strength distribution (fco in MPa) [1, 2] Concrete type Concrete grade Parameters m’ n’ s’ ν’ Ready mixed C15

C25 C35 C45 C55

3.40 3.65 3.85 3.98 -

3.0 3.0 3.0 3.0 -

0.14 0.12 0.09 0.07 -

10 10 10 10 -

Pre-cast elements C15 C25 C35 C45 C55

- 3.80 3.95 4.08 4.15

- 3.0 3.0 4.0 4.0

- 0.09 0.08 0.07 0.05

- 10 10 10 10

The prior parameters may depend on the geographical area and the technology with which concrete is produced. If n”,ν” > 10, a good approximation of the concrete strength distribution is the log-normal distribution

with mean m” and standard deviation s”

21 "

"

"

"

−νν

−nn

.

References [1] Kersken-Bradley, M., Rackwitz, R., Stochastic Modeling of Material Properties and Quality Control, JCSS Working Document, IABSE-publication, March 1991 [2] Rackwitz, R., Predictive Distribution of Strength under Control, Materials & Structures, 16, 94, 1983, pp. 259 - 267

Page 160: Probabilistic Model Code Jcss Pmc2000

PROBABILISTIC MODEL CODE, PART 3, RESISTANCE MODELS

3.* Static Properties of Structural Steel (Rolled Sections)

Properties Considered

The following properties of structural steel are dealt with herein:

fy = yield strength [MPa]fu = ultimate tensile strength [MPa]E = modulus of elasticity [MPa]

ν = Poisson's ratio

εu = ultimate strain

A probabilistic model is proposed for the random vector X = (fy, fu, E, ν, εu) to be used forany particular steel grade, which may be defined in terms of nominal values verified bystandard mill tests (e.g. following the procedures of EN 10025 for sampling and selection oftest pieces and the requirements of EN 10002-1 for testing) or in terms of minimum(hereinafter referred to as code specified) values given in material specifications (e.g. EN10025: 1990).

Only distinct points or parts of the full stress-strain curve are considered, thus the proposedmodel can be used in applications where this type of information is compatible with theparameters of the mechanical model used for strength analysis.

In applications where strain-hardening (and in particular the extent of the yield plateau andthe initial strain-hardening) are important (e.g. inelastic local buckling) a more detailedmodel, which describes the full stress-strain behaviour, may be warranted. Severaldeterministic models exist in the literature which would allow a probabilistic model to bedeveloped. The parameters of the model chosen to describe the full stress-strain curve shouldbe selected in a way that does not invalidate the statistics given below for the key points ofthe stress-strain diagram.

In certain cases, where an absence of a yield phenomenon is noted, the values given for theyield strength may be used instead for the 0.2% proof strength. However, it should beemphasised that most of the data examined refers to steels exhibiting a yield phenomenon,hence this is only a tentative proposal.

Probabilistic Models and Range of Applicability

Mean values and coefficients of variation for the above vector are given in Table A whereasthe correlation matrix is given in Table B. A multi-variate log-normal distribution isrecommended. The values given are valid for static loading.

The values in Table A may be used for steel grades and qualities given in EN 10025: 1990,which have code specified yield strength of up to 380 MPa. Some studies suggest that it is thestandard deviation of the yield strength, rather than its coefficient of variation (CoV), thatremains constant, whilst others point to the converse.

Page 161: Probabilistic Model Code Jcss Pmc2000

A practice which creates problems with sample homogeneity, and hence with consistency ofestimated statistical properties, is downgrading of material, i.e. re-classifying higher gradesteel to a lower grade if it fails to meet the code specified values for the higher grade on thebasis of quality control tests. This practice produces bi-modal distributions and is clearly seenin some of the histograms reported in the studies referenced below. Higher mean values butalso significantly higher CoV’s than those given in Table A are to be expected in such cases.

The values given in Tables A and B should not be used for ultra high strength steels (e.g. withcode specified fy = 690 MPa) without verification. In any case, ultra high strength carbon steel(and stainless steel) grades are characterised by a non-linear uniaxial stress-strain response,usually modelled through the Ramberg-Osgood expression. Practically no statistical data havebeen found for the three parameters describing the Ramberg-Osgood law (initial modulus,0.2% proof stress and non-linearity index).

The CoV values refer to total steel production and are based primarily on European studiesfrom 1970 onwards. In US and Canada higher CoV’s have been used (on average, about 50%higher). The main references on which these estimates are based are given below.

The estimates for ultimate strain, εu, are very sensitive to test instrumentation and rate ofloading up to the point of failure. Both significantly higher and lower CoV’s have, onoccasions, been reported.

Within-batch COVs can be taken as one fourth of the values given in Table A but within-batch variability for the modulus of elasticity, E, and Poisson’s ratio, ν, may be neglected.Variations along the length of a rolled section are normally small and can be neglected.

If direct measurements are available, the numbers in Table A should be used as prior statisticswith a relatively large equivalent sample size (e.g. n' ≈ 50).

For applications involving seismic loads, a random variable called ‘yield ratio’, denoted by rand defined as the ratio of yield to ultimate strength, is often of interest. The statisticalproperties of this ratio can be derived from those given in Tables A and B for the two basicrandom variables. Given the positive correlation between fy and fu , it follows that there is alsoa positive correlation between r and fy. It can also be shown that the CoV for r lies betweenthe CoV’s for fy and fu.

Table A: Mean and COV values

Property Mean Value, E[.] COV, v

fy fysp . α. exp (-u . v) - C 0.07

fu B. E[fu] 0.04

E Esp 0.03

ν νsp 0.03

εu εusp 0.06

Table B: Correlation Matrix

fy fu E ν εu

Page 162: Probabilistic Model Code Jcss Pmc2000

fy 1 0.75 0 0 -0.45

fu 1 0 0 -0.60

E 1 0 0

ν 1 0

εu 1

Definitions and Remarks

- the suffix (sp) is used for the code specified or nominal value for the variable considered

- α is spatial position factor (α=1.05 for webs of hot rolled sections and α=1 otherwise)

- u is a factor related to the fractile of the distribution used in describing the distancebetween the code specified or nominal value and the mean value; u is found to be in therange of -1.5 to -2.0 for steel produced in accordance with the relevant EN standards; ifnominal values are used for fysp the value of u needs to be appropriately selected.

- C is a constant reducing the yield strength as obtained from usual mill tests to the staticyield strength; a value of 20 MPa is recommended but attention should be given to therate of loading used in the tensile tests.

- B = 1.5 for structural carbon steel= 1.4 for low alloy steel= 1.1 for quenched and tempered steel

References

[1] Baker M J, 'Variability in the Strength of Structural Steels - A Study in MaterialVariability; Part 1: Material Variability', CIRIA Technical Note 44, 1972.

[2] Edlund L, 'Coefficients of Variation for the Yield Strength of Steel', 2nd Colloquium onStability of Steel Structures, Final Report, Liege, 1977.

[3] Galambos T V and Ravindra M K, 'Properties of Steel for Use in LRFD', J. Str. Div.,ASCE, Vol. 104, ST9, 1978.

[4] Kennedy D J L and Baker K A, 'Resistance Factors for Steel Highway Bridges', Can. J.Civ. Eng., Vol. 11, 1984, 324-34.

[5] Yamanouchi H, Kato B and Aoki H, 'Statistical Features of Mechanical Properties ofCurrent Japanese Steels', Document for ECCS TC13: Seismic Design, 1990.

[6] Manzocchi G M E, Chryssanthopoulos M K and Elnashai A S, 'Statistical Analysis ofSteel Tensile Test Data and Implications on Seismic Design Criteria', ESEE Report 92-7,Imperial College, 1992.

[7] Agostoni N, Ballio G and Poggi C, 'Statistical Analysis of the Mechanical Properties ofStructural Steel', Costruzioni Metalliche, No.2, 1994, pp. 31-39.

Page 163: Probabilistic Model Code Jcss Pmc2000

JCSS PROBABILISTIC MODEL CODEPART 3: RESISTANCE MODELS

3.2 STATIC PROPERTIES OF REINFORCING STEEL

Table of contents:

3.2.1 Basic Model3.2.2 The probabilistic model3.2.3 Effect of Prior Investigations and Statistical Quality Control3.2.4 Strength of Bundles of Bars

List of symbols:

fy = basic yield stressσ = standard deviation

Page 164: Probabilistic Model Code Jcss Pmc2000

3.2.1 Basic Model

Reinforcing steel generally is classified and produced according to grades, for exampleS300, S400 and S500, the numbers denoting a specified (minimum) yield stress limit.The basic mechanical property is the static yield strength fy defined at strain 0.2‰. Thestress-strain curve for hot rolled steels can be approximated by a bi-linear relationship upto strains of 1% to 2%. The (initial) modulus of elasticity can be taken as constantEa=205[Gpa]. The stress-strain relationship for cold worked steel can also be representedby a bi-linear law but more realistically by a continuous curve for which severalconvenient analytical forms exist.

3.2.2 Probabilistic Model

The yield stress, denoted by X1, can be taken as the sum of three independent Gaussianvariables

1312111 )( XXXdX ++= [MPa] (1)

where X11~N(µ11(d), σ11) represents the variations in the global mean of different mills,X12~N(0, σ12) he variations in a mill from batch(melt) to bath and X13~N(0, σ13) thevariations within a melt. D is the nominal bar diameter in [mm]. For high standard steelproduction the following values have been found: σ11=19 [MPa], σ12 =22 [MPa], σ13=8[MPa] resulting in an overall standard deviation σ1 of about 30 [MPa]. The mean µ11 = µ1

is under controlled conditions Sxxx + 2 σ1. Strength fluctuations along bars arenegligible. The value of µ1(d) is defined as the overall mean from the entire productiongiven a particular bar diameter.

[ ] 111 )08.0exp13.087.0()( −−+µ=µ dd [MPa] (2)

Statistical parameters of some other relevant properties are given in the following table:Quantity Mean σ C.o.V. ρij

Bar area[mm2]

Nom.Area

- 0.02 1.00 0.50 0.35 0

Yieldstress[MPa]

Snom +2σ

30 - 1.00 0.85 -0.50

Ultimatestrength[MPa]

- 40 -sym

1.00 -0.55

δ10

[%]- - 0.09 1.00

For these quantities a normal distribution can be adopted.

3.2.3 Effect of Prior Investigations and Statistical Quality Control

Tests of the lot of reinforcing steel to be used can considerably diminish steel variations,if the lot is known to belong to the production of a specific mill and if it originates from

Page 165: Probabilistic Model Code Jcss Pmc2000

the same batch. Very few direct tests are necessary. Acceptance control for a given lotcan be very efficient to eliminate bad quality lots.

3.2.4 Strength of Bundles of Bars

The yield forces of bundles of bars under static loading is the sum of the yield forces ofeach contributing bar (full plasticity model). In general, it can be assumed that allreinforcing steel used at a job originates from a single (but unknown) mill. Thecorrelation coefficient between yield forces of individual bars of the same diameter canthen be taken as 0.9. The correlation coefficient between yield forces of bars of differentdiameter and between the yield forces in different cross-sections in different beams in astructure can be taken as 0.4. Along structural members the correlation is unity withindistances of roughly 10m (representative for bar length) and vanishes outside.

Page 166: Probabilistic Model Code Jcss Pmc2000

1

JCSS PROBABILISTIC MODELCODEPART 3: RESISTANCE MODELS

3.9 MODEL UNCERTAINTIES

Table of Contents

3.9.1 General3.9.2 Types of models for structural analysis3.9.3 Recommendations for practice

List of Symbols

Y = response of the structure according to the modelY′ = real response of the structuref( ) = model functionƒ′( ) = model function including model uncertaintiesXi = basic variableθi = model uncertainty

Page 167: Probabilistic Model Code Jcss Pmc2000

2

3.9 MODEL UNCERTAINTIES

3.9.1 General

In order to calculate the response of a structure with certain (random) properties under certain(random) actions use is made of models (see Part I, section 5). In general such a model can bedescribed as a functional relation of the type:

Y = f (X1,X2,…Xn) (3.9.1)

Y = response of the structuref( ) = model functionXi = basic variables (actions and structural properteis)

The model function f (..) is usually not complete and exact, so that the outcome Y cannot bepredicted without error, even if the values of all random basic variables are known. The realoutcome Y’ of the experiment can formally be written down as:

Y′ = ƒ′ (X1… Xn , θ1…θ2) (3.9.2)

The variables θi are referred to as parameters which contain the model uncertainties and aretreated as random variables. The model uncertainties account for:

• random effects that are neglected in the models• simplifications in the mathematical relations

Ideally model uncertainties should be obtained from a set of representative laboratoryexperiments and measurements on real structures where all values of Xi are measured orcontrolled. In those case a model uncertainty has the nature of an intrinsic uncertainty. If thenumber of measurements is small the statistical uncertainty may be large. Additional theremay be uncertainty due to measurement errors both in the Xi and in the Y. Bayesian regressiananalysis usually is the appropriate tool to deal with the above situation.

In many cases, however, a good and consistent set of experiments is lacking and statisticalproperties for model uncertainties are purely based on engineering judgement. Sometimes acomparison between various models may help to defend certain propositions.

The most common way of introducing the model uncertainty into the calculation model is asfollows:

Y′ = θ1 ƒ (X1… Xn ) (3.9.3)or

Y′ = θ1 + ƒ (X1… Xn ) (3.9.4)

or a combination of both. The first definition is clarified in Figure 3.9.1

It should be kept in mind that this way the statistical properties of the model uncertaintiesdepend on the exact definition of the model output. A theoretical elegant way to avoid thesedefinition dependency is to link model uncertainties directly to the basic variables, that is tointroduce X’i = θ1 Xi..

Page 168: Probabilistic Model Code Jcss Pmc2000

3

θ = Y′ / f(X1,..Xn)

1 2 3 4 5 6 7 8 experiment number

Figure 3.9.1: estimation of model uncertainty statistics on a number of tests followingdefinition 3.9.3

3.9.2 Types of models for structural analysis

Model uncertainties can be subdivided into:

-load calculations models-load effect calculation models-local stiffness and resistance models

For the model uncertainties in the load models reference is made to Part 2.

The load effect calculation models have to do with the linear or nonlinear calculation ofstresses, axial forces, shear forces and bending and torsional moments in the various structuralelements. The model uncertainties are usually the result of negligence of for example 3D-effects, inhomogenities, interactions, boundary effects, simplification of connectionbehaviour, imperfections and so on. The scatter of the model uncertainty will also depend onthe type of structure (frame, plates, shell, solids, etc).

The local models are used to define the behaviour of an element, a typical cross section oreven of the material in a single point. One may think in this respect of the visco-elastic model,the elastic plastic model, the yield condition (Von Mises, Tresca, Coulomb), the hardeningand softening behaviour, the thermal properties and so on.

3.9.3. Recommendations for practice

Models may be of a numerical , analytical or empirical nature. In the recommended values inTable 3.9.1 a more or less standard structural Finite Element Model has been kept in mind.

The model uncertainties are assumed to be partly correlated throughout the structure: on onepoint of the structure the circumstances will usually be different from another point whichmakes it unlikely that a full correlation exists. For that reason the Table 3.9.1 also includes anestimate for the degree of correlation between various points or critical cross sections in onestructure.


Table 3.9.1 Recommended probabilistic models for Model Uncertainties

Model type                              Distr   Mean    CoV     Correlation
load effect calculation
  moments in frames                     LN      1.0     0.1
  axial forces in frames                LN      1.0     0.05
  shear forces in frames                LN      1.0     0.1
  moments in plates                     LN      1.0     0.2
  forces in plates                      LN      1.0     0.1
  stresses in 2D solids                 N       0.0     0.05
  stresses in 3D solids                 N       0.0     0.05
resistance models steel (static)
  bending moment capacity (1)           LN      1.0     0.05
  shear capacity                        LN      1.0     0.05
  welded connection capacity            LN      1.15    0.15
  bolted connection capacity            LN      1.25    0.15
resistance models concrete (static)
  bending moment capacity (1)           LN      1.2     0.15
  buckling                              -       -       -
  shear capacity                        LN      1.4     0.25
  connection capacity                   LN      1.0     0.1

(1) including the effects of normal and shear forces.
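Where Table 3.9.1 is used in a simulation-based reliability analysis, the partial correlation between critical cross sections can be introduced by sampling the model uncertainties jointly. The sketch below does this for a lognormal load-effect uncertainty (mean 1.0, CoV 0.1, as for moments in frames); the correlation value of 0.7 and the number of sections are assumptions made only for the illustration, not values recommended by this code.

```python
# Sketch: sampling a lognormal model uncertainty for several critical cross
# sections with an assumed pairwise correlation of 0.7 (applied in log space).
import numpy as np

mean, cov, rho, n_sections, n_samples = 1.0, 0.10, 0.7, 4, 100_000

# Lognormal parameters from mean and coefficient of variation
zeta = np.sqrt(np.log(1.0 + cov**2))          # std of ln(theta)
lam = np.log(mean) - 0.5 * zeta**2            # mean of ln(theta)

# Equicorrelated correlation matrix and its Cholesky factor
R = np.full((n_sections, n_sections), rho) + (1.0 - rho) * np.eye(n_sections)
L = np.linalg.cholesky(R)

rng = np.random.default_rng(1)
u = rng.standard_normal((n_samples, n_sections)) @ L.T
theta = np.exp(lam + zeta * u)                # correlated lognormal samples

print("sample means:", theta.mean(axis=0).round(3))
print("sample correlation(1,2):", np.corrcoef(theta[:, 0], theta[:, 1])[0, 1].round(3))
```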


JCSS PROBABILISTIC MODEL CODE
PART 3: RESISTANCE MODELS

3.10 DIMENSIONS

Table of contents:

3.10.1 External dimensions of concrete components
3.10.2 Concrete cover
3.10.3 Differences between concrete columns, slabs and beams
3.10.4 Cross-section dimensions of hot rolled steel products
3.10.5 Theoretical models
3.10.6 Correlations
3.10.7 References


3.10 DIMENSIONS

3.10.1 External dimensions of concrete components

In the following only time-independent effects are considered. The dimensional deviation of a dimension X is described by the statistical characteristics of its deviation Y from the nominal value Xnom:

Y = X - Xnom (1)

Concerning the external (perimeter) dimensions of reinforced concrete cross-sections of horizontal members (beams, plates), the available data are quite extensive, although not conclusive. The following general remarks follow from a recent analysis of large samples of measurements [1,2,3,4]. It has been observed that the following aspects do not significantly affect the dimensional deviations of reinforced concrete cross-sections:
- the type of the element (reinforced, prestressed),
- the shape of the cross-section (rectangular, I, T, L),
- the class of concrete (strength of concrete),
- the dimension orientation (depth, width),
- the position of the cross-section (mid-span, support).

It has been found [4] that the external dimensions of concrete cross-sections are only slightly dependent on the mode of production (precast, cast in situ).

When precast and cast in situ elements are taken together [2], the mean and standard deviation of Y (the normal distribution seems to be satisfactory) may be expected to lie within the limits:

0 < µy = 0.003 Xnom < 3 mm (2)

σy = 4 mm + 0.006 Xnom < 10 mm (3)

These formulae are valid for nominal values Xnom up to about 1000 mm (no significant dependence is observed beyond this size). Note that the recent European document [6] on the execution of concrete structures is in good agreement with the above-mentioned data: a maximum permitted deviation of ±19 mm (corresponding to about σy = 12 mm) is specified for Xnom = 1000 mm.
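A minimal sketch evaluating formulae (2) and (3), including the stated caps, for a given nominal dimension:

```python
# Sketch: mean and standard deviation of Y = X - Xnom per formulae (2) and (3),
# valid for nominal sizes up to about 1000 mm.
def deviation_stats(x_nom_mm: float):
    """Return (mean, standard deviation) of Y in mm for a nominal size in mm."""
    mu = min(0.003 * x_nom_mm, 3.0)            # 0 < mu_Y = 0.003 Xnom < 3 mm
    sigma = min(4.0 + 0.006 * x_nom_mm, 10.0)  # sigma_Y = 4 mm + 0.006 Xnom < 10 mm
    return mu, sigma

print(deviation_stats(300.0))   # e.g. a 300 mm wide beam -> (0.9, 5.8)
```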

3.10.2 Concrete cover

Top steel

According to the data reported in [1], the average concrete cover to the top steel of beams and slabs is systematically greater than the nominal value (by about 10 mm); the standard deviation is also around 10 mm (within an interval from 5 to 15 mm). Reasonable average formulae (with great uncertainty) for the cover to the top steel of beams and plates may be written in an approximate form as:

5 mm < µy < 15 mm (4)


5 mm < σy < 15 mm (5)

Bottom steel

Even more scattered and less conclusive are the data indicated in [1] for the cover to the bottom steel of beams and slabs. Depending on the type of spacers (and perhaps on many other production conditions), the data reported in [1] indicate that the mean µy may be expected within an extremely broad range from −20 mm to +20 mm, while the standard deviation seems to be relatively small, around 5 mm only; thus

-20 mm < µy < 20 mm (6)

σy ≅ 5 mm (7)

Effective depth

Obviously, the above relations provide only gross estimates and particular values must be chosen taking into account other specific conditions. Nevertheless, they are in reasonable agreement with observations concerning the effective depth of the cross-section (the depth and concrete cover could be highly correlated). If no further information is available, it is indicated in [2] that the characteristics may be assessed by:

µy ≅ 10 mm (8)

σy ≅ 10 mm (9)

Further experimental measurements (related to a specified production procedure), with a special emphasis on internal dimensions of horizontal as well as vertical elements, are obviously needed.

3.10.3 Differences between concrete columns, slabs and beams

Concerning external dimensions, no significant differences have been found between columns, slabs and beams [4]. There are, however, some differences in the concrete cover of these elements. Table 1 shows characteristics of the concrete cover based on data reported in [1] and on data collected in the UK.

Table 1. Characteristics of concrete cover.

Concrete cover                             Mean µY [mm]   Standard deviation σY [mm]
in column - [1] (two samples)              1; 3           0.2; 7
in wall - [1] (one sample, 241 obs.)       1              12
of slab bottom steel - [1]                 −8 to 5        7 to 23
of slab bottom steel - UK                  6 to 15        3 to 4
of slab top steel - [1]                    −13 to 11      5 to 16
of slab top steel - UK                     11 to 17       6 to 16
of beam bottom steel in UK                 −17* to 3      2 to 5
of beam top steel in UK                    1 to 12        8 to 14

* Note: The negative mean of deviations was observed when using plastic spacers.

According to the data in Table 1 the following characteristics of concrete cover may be considered as first approximations (the intervals indicated for the mean and standard deviation represent reasonable bounds which depend on the particular conditions and quality of production):

- column and wall:
  µY = 0 to 5 mm   (10)
  σY = 5 to 10 mm   (11)
- slab bottom steel:
  µY = 0 to 10 mm   (12)
  σY = 5 to 10 mm   (13)
- beam bottom steel:
  µY = −10 to 0 mm   (14)
  σY = 5 to 10 mm   (15)
- slab and beam top steel:
  µY = 0 to 10 mm   (16)
  σY = 10 to 15 mm   (17)

Obviously, these values represent only very gross estimates of the basic statistical characteristics of concrete cover, and particular values should be chosen in accordance with the relevant production conditions. Further experimental measurements (related to a given production procedure), with a specific emphasis on internal dimensions of horizontal as well as vertical elements, are obviously needed.

Note that the recent European document [6] on the execution of concrete structures is in good agreement with the above-mentioned data. The minimum permitted deviation of concrete cover is −10 mm (corresponding to about σy = 6 mm); the maximum permitted deviation ranges from 10 mm up to 20 mm (corresponding to about σy from 6 to 13 mm).

3.10.4 Cross-section dimensions of hot rolled steel products

In the Czech Republic [5], data on dimensional deviations of cross-sections of rolled products (profiles I, L, T) are currently being collected and evaluated. Preliminary results obtained for profile I (IPE 80 to 200) indicate that the mean and standard deviation of Y for the basic dimensions (height, width and thickness) are less than 1 mm, while the coefficient of skewness is negligible; thus the indicative values of the deviation characteristics are

− 1,0 mm ≤ µY ≤ + 1,0 mm (18)

σY ≤ 1,0 mm (19)

For the cross-section area and section modulus it has been found that, independently of the profile height, the means of both quantities differ insignificantly from their nominal values (the differences are practically zero); the coefficient of variation is about 3.2 % for the cross-section area and about 4.0 % for the section modulus. The normal distribution seems to be a fully satisfactory model for all geometrical properties.

3.10.5 Theoretical models


Several theoretical models were considered in previous studies [1] and [2]. It appears that, unless further data are available, the normal distribution provides a good general model for the external dimensions of both reinforced concrete and steel elements and also for the effective depth of reinforced concrete cross-sections.

However, the concrete cover to reinforcement in cross-sections of various concrete elements is a special random variable, which may hardly be described by a normal distribution. In this case different types of one- or two-side-limited distributions should be considered.

Taking into account the various combinations of the coefficient of variation w = σ/µ and the skewness α (the subscripts are omitted here), the following commonly used distributions could be considered:

- for all w and α: beta distribution with general lower and upper bounds a and b, denoted Beta(µ;σ;a;b),
- for all w and α > 0: shifted lognormal distribution with lower bound a, denoted sLN(µ;σ;a),
- for all α < 2w: beta distribution with the lower bound a at zero (a = 0) and a general upper bound b, denoted Beta(µ;σ;0;b),
- for α = 3w + w³: lognormal distribution with the lower bound a at zero (a = 0),
- for α = 2w: gamma distribution (which has the lower bound a at zero (a = 0) by definition), denoted Gamma(µ;σ).
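The selection rules above can be summarised programmatically; the sketch below simply returns the candidate distributions for given values of w and α (the tolerance used for the equality conditions is an arbitrary choice made for the example):

```python
# Sketch of the selection rules listed above: given the coefficient of variation
# w = sigma/mu and the skewness alpha, list the commonly used candidate
# distributions for a one- or two-side-limited variable such as concrete cover.
def candidate_distributions(w: float, alpha: float, tol: float = 1e-6):
    candidates = ["Beta(mu; sigma; a; b)"]                 # valid for all w and alpha
    if alpha > 0.0:
        candidates.append("shifted lognormal sLN(mu; sigma; a)")
    if alpha < 2.0 * w:
        candidates.append("Beta(mu; sigma; 0; b)")
    if abs(alpha - (3.0 * w + w**3)) < tol:
        candidates.append("lognormal with lower bound a = 0")
    if abs(alpha - 2.0 * w) < tol:
        candidates.append("Gamma(mu; sigma)")
    return candidates

print(candidate_distributions(w=0.5, alpha=1.0))   # alpha = 2w, so the gamma model qualifies
```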

3.10.6 Correlations

It has been found [4] that the external dimensions of concrete cross-sections are only slightly dependent on the mode of production (precast, cast in situ). No significant correlation (the correlation coefficients being around 0.12) has been found between vertical and horizontal dimensions. No data are available concerning the correlation of internal (concrete cover) and external dimensions, even though the depth and concrete cover of some elements could be highly correlated. There may be a strong auto-correlation along the element; the correlation distance may be assessed as a multiple (say 3 to 5 times) of the cross-section height or as a part of the span (say 1/4 to 1/2).

3.10.7 References

[1] Casciati, F., Negri, I., Rackwitz, R.: Geometrical Variability in Structural Members and Systems. JCSS Working Document, January 1991.

[2] Tichý, M.: Dimensional Variations. In: Quality Control of Concrete Structures, RILEM, Stockholm, June 1979, pp. 171-180.

[3] Tichý, M.: Variability of Dimensions of Concrete Elements. In: Quality Control of Concrete Structures, RILEM, Stockholm, June 1979, pp. 225-227.

[4] Bouska, P., Holický, M.: Statistical analysis of geometric parameters of concrete cross sections. Research report, Building Research Institute, Prague, 1983 (in Czech; summary in English provided).


[5] Fajkus, M., Holický, M., Rozlívka, L., Vorlíček, M.: Random Properties of Steel Elements Produced in the Czech Republic. Proc. Eurosteel '99, Paper No. 90, Prague, 1999.

[6] ENV 13670-1 Execution of concrete structures - Part 1: Common, Brussels, 2000.


JCSS PROBABILISTIC MODEL CODE
PART 3: RESISTANCE MODELS

3.11 ECCENTRICITIES

Table of contents:

3.11.1 Introduction
3.11.2 Basic model
3.11.3 Probabilistic models
3.11.4 References

List of symbols:

ρ(i,j) = coefficient of correlation for two columns i and j
e = average eccentricity
f = central eccentricity due to curvature
φ = out of plumbness
µ = mean
σ = standard deviation


3.11.1 Introduction

The bearing capacity of slender elements depends to some extent on the difference between the actual and theoretical alignment, the so-called eccentricity. In this section the models for the eccentricities of columns in braced and unbraced frames are presented.

3.11.2 Basic model

In the analysis three types of eccentricities can be distinguished (see Figure 3.11.1):

• the average eccentricity
• the initial curvature
• the out of plumbness φ

For a braced frame the out of plumbness is only relevant for the bracing system, not for the column under consideration; for an unbraced frame the out of plumbness is usually dominant over the end point eccentricity and the curvature.

Figure 3.11.1: The three basic eccentricities e, f and φ

3.11.3 Probabilistic models

Distribution type, mean and scatter

The probabilistic models for the three basic parameters are presented in Table 3.11.1. For all three cases it is assumed that the distribution is symmetrical around zero and that small eccentricities are more likely than large ones, although large ones are more dangerous. Note that in special cases non-symmetrical cross sections may have µ(f) ≠ 0 due to the fabrication process.

In many cases only the absolute values of the eccentricities are important. From the table it can be derived that these absolute values follow a truncated normal distribution, the truncation point being the mean of the untruncated distribution. The absolute value has a mean of about 0.80 times the standard deviation of the untruncated distribution; the coefficient of variation is 0.75.

X    Description              Type     µ        σ
e    average eccentricity     normal   0 m      L/1000
f    out of straightness      normal   0 m      L/1000
φ    out of plumbness         normal   0 rad    0.0015 rad

Table 3.11.1 Statistical properties of eccentricities (for steel and concrete columns)

All eccentricity parameters e, f and φ shall be regarded as independent variables.
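A minimal numerical check of the statement above on absolute eccentricities (half-normal mean of about 0.80 σ and coefficient of variation of about 0.75):

```python
# Check: for a zero-mean normal eccentricity, the absolute value is half-normal
# with mean ~0.80*sigma and coefficient of variation ~0.75.
import math

sigma = 1.0                                        # any sigma; results scale linearly
mean_abs = sigma * math.sqrt(2.0 / math.pi)        # E|e| = sigma*sqrt(2/pi)
std_abs = sigma * math.sqrt(1.0 - 2.0 / math.pi)   # standard deviation of |e|
print(round(mean_abs, 3), round(std_abs / mean_abs, 3))   # -> 0.798 0.756
```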

Time and spatial dependency

In general eccentricities may be treated as time independent. An exception might be timber, where in particular the initial curvature may depend on the moisture content.

For the spatial fluctuation the dependency between the various columns in one building is important. In this code the average eccentricity e as well as the out of straightness f will be considered as uncorrelated for all members. For φ the following correlation pattern is recommended:

ρ(φi, φj) = 0.5 for two columns on the same floor
ρ(φi, φj) = 0 for columns on different floors

In this model a possible negative correlation between columns in the vertical direction, resulting from (over)corrections of the out of plumbness on lower storeys, is not considered. This is a conservative assumption.

Note on applications

The limit state function for a simple slender column, clamped at the bottom and free at the top, may be presented as:

Z = Mp − P φ h PE / (PE − P)

Mp = plastic moment
P = vertical load
PE = Euler buckling load
h = height of the column
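A crude Monte Carlo sketch of this limit state is given below, using the out of plumbness model of Table 3.11.1. The column height, the loads, the plastic moment and their scatter are assumed values chosen only to make the example runnable; they are not values prescribed by this code.

```python
# Illustrative Monte Carlo evaluation of Z = Mp - P*phi*h*PE/(PE - P).
# phi follows Table 3.11.1; all other inputs are assumed example values.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

h = 4.0                                          # column height [m] (assumed)
PE = 6000.0                                      # Euler buckling load [kN] (assumed)
phi = rng.normal(0.0, 0.0015, n)                 # out of plumbness [rad], Table 3.11.1
P = rng.lognormal(np.log(1500.0), 0.10, n)       # vertical load [kN] (assumed)
Mp = rng.lognormal(np.log(40.0), 0.08, n)        # plastic moment [kNm] (assumed)

# The sign of phi is irrelevant for failure, so its absolute value is used.
Z = Mp - np.abs(phi) * P * h * PE / (PE - P)
print(f"estimated failure probability: {np.mean(Z < 0.0):.1e}")
```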


3.11.4. References

Ellingwood, B., Galambos, Th., MacGregor, J., Cornell, A.: Development of a probability based load criterion for American national standard A58. Building code requirements for minimum design loads in buildings and other structures. NBS Special Publication 577, June 1980.

Bjorhovde, R.: A probabilistic approach to maximum column strength. Reliability of Metal Structures, ASCE Speciality Conference, Pittsburgh, 1972.

Geometrical and cross-sectional properties of steel structures. Chapter 2, European Convention for Constructional Steelwork, Second International Colloquium on Stability, Introductory Report, Sept. 1976, pp. 19-46, 58-59.

Edlund, B., Leopoldson, U.: Monte Carlo simulation of the load carrying capacity of steel beams. Chalmers University of Technology, Division of Steel and Timber Structures, Publ. S71-3 & S71-5, Göteborg, 1973.

Alpsten, G.A.: Statistical investigation of the strength of rolled and welded structural steel shapes. Report 39.4, Swedish Institute of Steel Construction, Stockholm.

Hardwick, T.R., Milner, R.M.: Dimensional Variations - Frame structures for schools. The Architects' Journal Information Library, 20 September 1967, Vol. 146, Technical Study AJ SfB Ba4, pp. 745-748.

Klingberg, L.: Studies on the dimensional accuracy of a column and beam framework. National Swedish Building Research Summaries, R38:1970.

Klingberg, L.: Studies of dimensional accuracy in prefab building with flexible joints. National Swedish Building Research Summaries, R28:1971.

Maass, G.: Statistische Untersuchungen von geometrischen Abweichungen an ausgeführten Stahlbetonbauteilen. Teil II: Messergebnisse geometrischer Abweichungen bei Stützen, Wänden, Balken und Decken des Stahlbetonhochbaus. Berichte zur Zuverlässigkeitstheorie der Bauwerke, TU München, Sonderforschungsbereich 96, Heft 28/1978.

Fiorato, A.E.: Geometric imperfections in concrete structures. National Swedish Building Research, Document D5:1973.


JCSS PROBABILISTIC MODEL CODE

EXAMPLE APPLICATIONS

Ton Vrouwenvelder

Milan Holicky

Jana Markova

1 REINFORCED CONCRETE SLAB

2 STEEL BEAM

3 TWO STOREY STEEL FRAME

4 REINFORCED CONCRETE COLUMN IN MULTI STOREY FRAME


EXAMPLE 1: REINFORCED CONCRETE SLAB

Figure 1.1 Simply supported reinforced concrete slab and its cross-section.

Table 1.1 Probabilistic models for the reinforced concrete slab example (acc. to JCSS Probabilistic Model Code 2001).

Basic variable                         Symbol   Distr. type   Dimension   Mean      St. dev.   V       λ          ρ
Compression concrete strength          fc       lognormal     MPa         30        5          0.17
Yield strength                         fy       lognormal     MPa         560       30         0.05
Span of the slab                       L        determin.     m           5         -
Reinforcement area                     As       determin.     m2          nom.      -
Slab depth                             h        normal        m           0.2       0.005      0.025
Distance of bars to the slab bottom    a        gamma         m           c + φ/2   0.005      0.17
Density of concrete                    γcon     normal        MN/m3       0.025     0.00075    0.03
Imposed long-term load                 qlt      gamma         kN/m2       0.5       0.75       1.5     0.2/year   perm.
Imposed short-term load                qst      exponenc.     kN/m2       0.2       0.46       2.3     1/year     1/365
Uncertainty of resistance              θR       lognormal     -           1.1       0.077      0.07
Uncertainty of load effect             θE       lognormal     -           1         0.2        0.2



The simply supported reinforced concrete slab has a span of 5 m and a cross-sectional depth of 0.20 m. The slab carries a permanent load g and an imposed load q (office areas) which cause the bending moment. The model of the permanent load is determined as the weight of a concrete floor of a uniform equivalent thickness of 0.25 m (including the weight of the slab and floor layers). The following material characteristics for concrete and reinforcing steel are considered: concrete class C 20/25 and reinforcing steel S 500.

The reliability of the designed slab is verified using probabilistic methods. The limit state function Z for the slab may be expressed as

Z = θR As fy (h – a – 0.5 As fy / fc) – θE (g + qlt + qst) L²/8   (1.1)

where a is the axial distance of the reinforcement from the slab bottom (a = c + φ/2, c is the concrete cover, φ is the bar diameter). The basic variables applied in the reliability analysis are listed in Table 1.1. Statistical properties of the random variables are further described by the moment characteristics, the mean and standard deviation. The models of the variables follow the recommendations of JCSS [1]. Some of the basic variables are assumed to be deterministic values (As, L), while the others are considered as random variables having normal, lognormal, exponential and gamma distributions.

The coefficients of model uncertainty θR and θE are random variables introduced to cover the imprecision and incompleteness of the relevant theoretical models for resistance and load effects. The imposed load q is represented by the long-term load qlt and the short-term load qst for office areas [1].

The mean and standard deviation of the imposed long-term load correspond to the distribution of the 5-year maximum. This is expressed in Table 1.1 by means of the renewal rate λ = 1/5. The interarrival-duration intensity ρ is considered permanent. Following the recommendations of JCSS [1] for office areas, the mean of the long-term load is m = 0.5 kN/m², the standard deviations are σ(v) = 0.3 kN/m² and σ(u) = 0.6 kN/m², and the reference area is A0 = 20 m². The influence area A in this example is assumed to be A = 30 m², and the factor for the shape of the influence line i(x,y) is κ = 2. Following JCSS [1], Clause 2.2.2, the standard deviation of the long-term load is given as

σ(qlt) = √[σ(v)² + σ(u)² κ A0/A] = √[0.3² + 0.6²·2·20/30] = 0.75 kN/m²   (1.2)

For the short-term imposed load the renewal rate is λ = 1 (1 occurrence per year) and the interarrival-duration intensity ρ = 1/365 corresponds to an arrival rate of once per year with a mean duration of one day. For the short-term imposed load JCSS [1] gives m = 0.2 kN/m² and σ(u) = 0.4 kN/m². The standard deviation of the short-term load is assessed as

σ(qst) = √[σ(u)² κ A0/A] = √[0.4²·2·20/30] = 0.46 kN/m²   (1.3)
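A quick numerical check of equations (1.2) and (1.3):

```python
# Equivalent-area reduction of the JCSS live-load field variance for
# A0 = 20 m2, A = 30 m2 and kappa = 2, per equations (1.2) and (1.3).
import math

kappa, A0, A = 2.0, 20.0, 30.0

sigma_lt = math.sqrt(0.3**2 + 0.6**2 * kappa * A0 / A)   # long-term load, eq. (1.2)
sigma_st = math.sqrt(0.4**2 * kappa * A0 / A)            # short-term load, eq. (1.3)
print(round(sigma_lt, 2), round(sigma_st, 2))            # -> 0.75 0.46
```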

The model of the reinforcement cover follows Section 3.10 of JCSS [1]: mean µ = 0.03 m (nominal value), σ = 0.005 m (bottom steel), coefficient of variation V = 0.17, gamma distribution.

The model uncertainties are considered according to Section 3.9 of JCSS [1]. The model uncertainty for the bending moment capacity θR has the mean µ = 1.1 and standard deviation σ = 0.077; the model uncertainty for the load effect θE has the mean µ = 1 and standard deviation σ = 0.2; both are lognormally distributed.

The software product Comrel [2] is used for the time-dependent reliability analysis of the reinforced concrete slab. A reference period of fifty years is taken into account. Figure 1.2 shows, for a slab depth of 0.20 m, the reliability index β1 (lower bound of Pf) increasing from 1.9 to 4.5 and β2 (upper bound of Pf) increasing from 0.4 to 3.9 as the designed reinforcement ratio As/[b(h – a)] varies from 0.2 % to 0.5 %. The reliability index β of the slab is considered to lie between the lower and upper bounds β1 and β2. The target value βt = 3.8 for a fifty-year reference period and the ultimate limit states is recommended for common cases in the JCSS Probabilistic Model Code [1], in ISO 2394 General principles on reliability for structures and in EN 1990 Basis of structural design.
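For orientation, the sketch below samples the limit state (1.1) with the models of Table 1.1 and reports a crude Cornell-type index mean(Z)/std(Z). It is only a point-in-time check: the short-term load is left out, the reinforcement area As is an assumed design value (ratio of about 0.35 %), and the result is not directly comparable with the time-dependent FORM results (β1, β2) quoted above.

```python
# Crude point-in-time Monte Carlo check of limit state (1.1) with Table 1.1 models.
# As is an assumed design value; the short-term load is neglected here.
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000

def lognormal(mean, cov, size):
    """Lognormal samples parameterised by mean and coefficient of variation."""
    zeta = np.sqrt(np.log(1.0 + cov**2))
    return rng.lognormal(np.log(mean) - 0.5 * zeta**2, zeta, size)

L_span = 5.0                          # m (deterministic)
As = 0.000595                         # m2 per metre width (assumed design value)
fc = lognormal(30.0, 0.17, n)         # MPa
fy = lognormal(560.0, 0.054, n)       # MPa
h = rng.normal(0.20, 0.005, n)        # m
a = rng.gamma((0.03 / 0.005)**2, 0.005**2 / 0.03, n)    # m, mean 0.03, std 0.005
gamma_c = rng.normal(0.025, 0.00075, n)                 # MN/m3
g = gamma_c * 0.25 * 1000.0           # kN/m2, equivalent floor thickness 0.25 m
q_lt = rng.gamma((0.5 / 0.75)**2, 0.75**2 / 0.5, n)     # kN/m2, mean 0.5, std 0.75
theta_R = lognormal(1.1, 0.07, n)
theta_E = lognormal(1.0, 0.2, n)

# Moments per metre width [kNm/m]; the factor 1000 converts MN to kN (fy, fc in MPa)
MR = theta_R * As * fy * 1000.0 * (h - a - 0.5 * As * fy / fc)
ME = theta_E * (g + q_lt) * L_span**2 / 8.0
Z = MR - ME
print(f"mean(Z) = {Z.mean():.1f} kNm/m, std(Z) = {Z.std():.1f}, crude index = {Z.mean()/Z.std():.2f}")
```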


Fig. 1.2 The reliability of reinforced concrete slab versus reinforcement ratio.

The selected sensitivity factors α shown in Table 1.2 express the influence of the basic variables on the resulting reliability of the reinforced concrete slab. The active imposed load is considered.

Table 1.2 Sensitivity factors α of selected basic variables for active imposed load.

Basic variable               Sensitivity factor α      Basic variable                    Sensitivity factor α
cross-sectional height h      0.05                     concrete strength in compr. fc     0.01
distance of bars a           -0.10                     yield strength fy                  0.16
imposed load qst             -0.18                     uncertainty of resistance θR       0.21
imposed load qlt             -0.71                     uncertainty of load θE            -0.61

References

[1] JCSS Probabilistic Model Code, 2001.
[2] Comrel, RCP Consulting software, version 7.10, Munich, 1999.



EXAMPLE 2: STEEL BEAM

Figure 2.1 Steel beam as a load-bearing floor element in shopping areas.

Table 2.1 Probabilistic models for the steel beam example (acc. to JCSS Probabilistic Model Code 2001).

Basic variable                         Symbol   Distr. type   Dimension   Mean     St. dev.   V       λ          ρ
Yield strength                         fy       lognormal     MPa         280      19.6       0.07
Span of the beam                       L        determin.     m           5        -
Section modulus                        W        determin.     m3          param.   -
Concrete density                       γcon     normal        MN/m3       0.024    0.00096    0.04
Slab depth                             h        normal        m           0.25     0.01       0.04
Distance of beams                      d        determin.     m           3        -
Imposed long-term load categ. D        qlt      gamma         kN/m2       0.9      2.15               0.2/year   perm.
Imposed short-term load categ. D       qst      exponenc.     kN/m2       0.4      1.42               1/year     14/365
Uncertainty of resistance              θR       lognormal     -           1        0.05       0.05
Uncertainty of load effect             θE       lognormal     -           1        0.2        0.2

The simply supported beam of a rolled steel I-section is a load-bearing floor element in shopping areas of category D [1]; its span is 5 m. The beam carries a permanent load g due to its self-weight, the weight of the concrete slab and the floor layers. The distance between beams is d = 3 m. The model of the permanent load is considered here as the weight of a reinforced concrete floor of a uniform equivalent thickness of 0.25 m, including the self-weight of the steel beam. The beam is made of steel grade S 235 (nominal yield strength fyk = 235 MPa).

The reliability of the designed steel beam is verified using probabilistic methods. The limit state function Z for the beam is expressed as

Z = θR W fy – θE d (g + qlt + qst) L²/8   (2.1)

The basic variables applied in the reliability analysis are listed in Table 2.1. The models of the variables follow the recommendations of JCSS [2]. Some of the basic variables are assumed to be deterministic values (d, L, W), while the others are considered as random variables having normal, lognormal, exponential and gamma distributions.



The coefficients of model uncertainty θR and θE are random variables introduced to cover the imprecision and incompleteness of the relevant theoretical models for resistance and load effects. The imposed load q for shopping areas is represented by the long-term load qlt and the short-term load qst, following JCSS, Clause 2.2.4 [2] for live load models.

The mean and standard deviation of the imposed long-term load for shopping areas correspond to the distribution of the 1- to 5-year maximum (here 5 years is assumed). This is expressed in Table 2.1 by means of the renewal rate λ = 0.2. The interarrival-duration intensity ρ is considered permanent. Following the recommendations for shopping areas indicated in JCSS [2], the mean of the long-term load is m = 0.9 kN/m², the standard deviations are σ(v) = 0.6 kN/m² and σ(u) = 1.6 kN/m², and the reference area is A0 = 100 m². The influence area A is assumed to be A = 120 m² in this example and the factor for the shape of the influence line i(x,y) is κ = 2. Following JCSS, Clause 2.2.2 [2], the standard deviation of the long-term load is given as

σ(qlt) = √[σ(v)² + σ(u)² κ A0/A] = √[0.6² + 1.6²·2·100/120] = 2.15 kN/m²   (2.2)

For the short-term imposed load, 14 occurrences per year are considered (the range may be from 1 to 14 occurrences per year depending on the shopping process). Thus the interarrival-duration intensity is ρ = 14/365. For the short-term load m = 0.4 kN/m² and σ(u) = 1.1 kN/m². The standard deviation of the short-term load is assessed as

σ(qst) = √[σ(u)² κ A0/A] = √[1.1²·2·100/120] = 1.42 kN/m²   (2.3)

The model uncertainties are considered according to Table 3.9.1 of JCSS [2]. The model uncertainty for the bending moment capacity of steel has the mean µ = 1 and standard deviation σ = 0.05; the model uncertainty for the load effect has the mean µ = 1 and standard deviation σ = 0.1; both are lognormally distributed.

The software product Comrel [3] is used for the time-dependent reliability analysis of the steel beam. A reference period of fifty years is taken into account. The reliability index β1 (lower bound of Pf) increases from 3.1 to 4.9, and β2 (upper bound of Pf) increases from 2.3 to 4.3, with increasing section modulus W of the steel beam (see Figure 2.2). The reliability index β of the beam is assumed to lie between the lower and upper bounds β1 and β2. The horizontal dashed line in Figure 2.2 indicates the recommended reliability index βt = 3.8 for the ultimate limit states following the recommendations of JCSS [2].


Fig. 2.2 The reliability index β of a steel beam versus section modulus W.

The selected sensitivity factors α shown in Table 2.2 express the influence of the basic variables on the resulting reliability of the steel beam. The active imposed load is considered.

Table 2.2 Sensitivity factors α of selected basic variables.

Basic variable             Sensitivity factor α      Basic variable               Sensitivity factor α
imposed load qst           -0.06                     uncertainty of load θE       -0.33
imposed load qlt           -0.92                     yield strength fy             0.16
resistance uncert. θR       0.11                     concrete density             -0.01

References

[1] prEN 1991 Actions on Structures, Part 1.1: Densities, Self-weight and Imposed Loads on Buildings. European Committee for Standardisation, CEN/TC 250, Final Draft, July 2001.
[2] JCSS Probabilistic Model Code, 2001.
[3] Comrel, RCP Consulting software, version 7.10, Munich, 1999.



EXAMPLE 3: TWO STOREY STEEL FRAME

Figure 3.1. Two storey steel frame

Consider the simple two storey steel frame of Figure 3.1. The floors are supposed to be of concrete. Let the limit state function for a particular member failure be given by:

Z = R – 0.16 mE h (G + Q + W)   (3.1)

where R is the resisting bending moment, G the self weight, Q the live load and W the wind load. The factor 0.16 is the result of a structural analysis; the details of that analysis are not relevant for this example. mE is the model factor and h the storey height. The resistance R and the forces G, Q and W are respectively given by:

R = mR Zp fy (3.2a)

G = a b t ρc g (3.2b)

Q = a b (qlong + qshort) (3.2c)

W = 2 h b ca cg cr (0.5 mq ρa U²)   (3.2d)

The designation of the variables as well as their deterministic values or probabilistic models as derived from the JCSS Probabilistic Model Code are given in Table 3.1.



Table 3.1 Probabilistic models for the steel frame example (according to the JCSS Probabilistic Model Code 2001)

X Designation Distribution Mean V λ

a in plane column distance Deterministic 6 m -

b frame to frame distance Deterministic 5 m -

h storey height Deterministic 3 m -

t thickness concrete floor slab Normal 0.20 m 0.03

Zp plastic section modulus Normal 0.0007m3 0.02

fy steel yield stress Lognormal 300 MPa 0.07

g acceleration of gravity Deterministic 10 m/s2 -

ρc mass density concrete Normal 2.4 ton/m3 0.04

qlong long term live load (sustained) Gamma 0.50 kN/m2 1.15 0.2/year

qshort short term live load (1 day) Exponential 0.20 kN/m2 1.60 1.0/year

ρa mass density air Deterministic 1.25 kg/m3 -

ca aerodynamic shape factor Normal 1.10 0.12

cg gust factor Normal 3.05 0.12

cr roughness factor Normal 0.58 0.15

u ref wind speed (8 hours) Weibull 5 m/s 0.60 3.0/day

U ref wind speed (one year) Gumbel 30 m/s 0.10 1.0/year

mq model factor wind pressure Normal 0.80 0.20

mR model factor resistance Normal 1.00 0.05

mE model factor load effect Normal 1.00 0.10

The information in Table 3.1 can be derived from the JCSS Probabilistic Model Code. For some of the variables some clarification is given below.

The gust factor cg in the wind model is defined as:

cg(z) = 1 + 2 gp Iu(z) (3.3)

where z is the building height, gp is the peak factor and Iu(z) = 1/ln(z/z0) the turbulence intensity. The building height in this example is equal to two times the storey height, so z = 2h. The peak factor gp for a storm period of 8 hours is about 4.2. Finally, the roughness parameter z0 is assumed to be 0.10 m. The coefficient of variation depends primarily on the variability of gp.
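A one-line numerical check of equation (3.3) with these parameters, reproducing the mean cg = 3.05 of Table 3.1:

```python
# Gust factor per equation (3.3) with z = 2h = 6 m, z0 = 0.10 m and gp = 4.2.
import math

z, z0, gp = 6.0, 0.10, 4.2
Iu = 1.0 / math.log(z / z0)          # turbulence intensity
cg = 1.0 + 2.0 * gp * Iu
print(round(cg, 2))                  # -> 3.05, the mean value used in Table 3.1
```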


The roughness factor cr is given by:

cr(z) = 0.8 (z0/z0ref)^0.07 [ln(z) − ln(z0)] / [ln(zref) − ln(z0ref)]   (3.4)

The reference values zref and z0ref have the standard values 10 m and 0.03 m respectively. The factor 0.8 is an average correction factor. For the coefficient of variation of the roughness model the JCSS code recommends 0.15.

For the steel resistance the Model Code gives:

µ(fy) = fynom exp(1.64V) – 20 MPa and V = 0.07 (3.5)

Starting from a nominal yield stress of 290 MPa we arrive at µ(fy) = 300 MPa. Normally for steel no job-specific tests are performed, so the "gross supply estimates" for µ(fy) and V are used.

For the long term live load the model code gives for offices m = 0.50 kN/m², σ(v) = 0.30 kN/m², σ(u) = 0.60 kN/m², and A0 = 20 m². The influence area in this example is A = 2ab = 60 m² and let κ = 2. In that case we have:

σ(qlong) = √[σ(v)² + σ(u)² κ A0/A] = 0.57 kN/m²   (3.6)

According to the code the average renewal time is 5 years. This is expressed in the last column of Table 3.1 by means of the renewal rate λ = 1/5 per year.

For the short term load the model code gives m = 0.2 kN/m² and σ(u) = 0.4 kN/m². In that case we find:

σ(qshort) = √[σ(u)² κ A0/A] = 0.33 kN/m²   (3.7)

According to the code the average renewal time for the short term load is one year and each time the duration of the short term load is 1 day. For the wind speed u the Model Code recommends a 2-parameter Weibull distribution to describe the daily fluctuations. A certain wind condition is supposed to last for about 8 hours. For yearly extremes the Gumbel distribution is recommended. For both distributions the parameters depend on the local wind climate.

Given these data, the failure probability for a design life time of 50 years can be determined as follows. First the probability that the structure fails in a period of 5 years is calculated, assuming the short term live load to be absent all the time. The loading scheme, presented as FBC models, is given in Figure 3.2. The floor load Q (sustained part) is defined for a period of 5 years (λ = 0.2/yr), so that is okay. The wind speed distribution in Table 3.1 is of the Gumbel type and defined as a one year extreme. The maximum for a 5 year period can be found by raising the mean of the one year period according to µ[5 yr] = µ[1 yr] + 0.78 σ ln(5) = 34 m/s. The standard deviation σ of the wind does not change. There is no need to adjust the distribution of the permanent variables.

Given these data, the failure probability for an assumed design lifetime of 50 years can be determined. We will follow a simplified procedure, comprising two load cases:

Load case 1: Self weight, Long term live load and Wind

The short-term live load will be neglected in this part of the analysis. First the probability is calculated that the structure fails in a period of 5 years, this being the average renewal period of the long-term live load. This means that we can directly use the data from Table 3.1 for the live load, the self-weight and the resistance. The wind speed distribution in Table 3.1 is defined for the one year extreme, so an adjustment to 5 years has to be made. According to the theory of extreme values the mean value for the 5 year extreme should be taken as µ[5 years] = 30 m/s + 0.78 σ ln(5) = 34 m/s. A standard time-independent FORM analysis for this case leads to a reliability index β = 4.1 and a failure probability PF = 2.3·10⁻⁵. For the assumed design life of 50 years, using a simple upper bound approximation for convenience, we find PF = 10 × 2.3·10⁻⁵ = 2.3·10⁻⁴. The FORM influence coefficients α can be found in Table 3.2.

Load case 2: Self weight, Long and Short term live load and Wind.

Next we look at a single day on which the short-term load is active. Now the short-term floor load is present in the limit state equation with the distribution as given in Table 3.1. For the long-term load we also take the distribution from the table. The wind speed model assumes an FBC model with ∆t = 8 hrs, so we have to consider the maximum of three Weibull distributed variables with parameters as given in the table. Using standard FORM again we arrive at β = 4.4 and PF = 5.0·10⁻⁶. Recall that this result holds for a single day with the short-term floor load present. According to the model code there is one such day every year, so during the design period of 50 years we have 50 days of short-term live load activity. An upper bound approximation, neglecting correlation due to the resistance parameters and the permanent and sustained loads, gives PF = 2.5·10⁻⁴.

Load combination and checking with the target

Adding the results for the two cases conservatively, one arrives at PF = (2.3 + 2.5)·10⁻⁴ = 4.8·10⁻⁴, or β = 3.3 for the lifetime. As an average over the lifetime we have a nominal failure rate of PF = 10⁻⁵ per year, corresponding to β = 4.3. Comparing with the model code this seems quite acceptable and we consider the design to be satisfactory.
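The bookkeeping of the two load cases can be reproduced with a few lines; small differences with respect to the values quoted above come only from the rounding of the β values:

```python
# Converting between beta and PF with the standard normal distribution and
# applying the simple upper-bound combinations used for the two load cases.
from statistics import NormalDist

nd = NormalDist()
beta_to_pf = lambda b: nd.cdf(-b)
pf_to_beta = lambda p: -nd.inv_cdf(p)

pf_case1 = 10 * beta_to_pf(4.1)        # ten 5-year periods in the 50-year life
pf_case2 = 50 * beta_to_pf(4.4)        # fifty single days with the short-term load
pf_total = pf_case1 + pf_case2         # conservative addition of both cases

print(f"case 1: {pf_case1:.1e}, case 2: {pf_case2:.1e}, total: {pf_total:.1e}")
print(f"lifetime beta = {pf_to_beta(pf_total):.2f}, yearly beta = {pf_to_beta(pf_total/50):.2f}")
```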

Of course, this analysis could have been performed in a more accurate way. In principle, the Model Code recommends the outcrossing approach to deal with time-fluctuating phenomena.

Figure 3.2: Self weight, sustained floor load and wind load (yearly maximum) as a function of time, based on FBC models.


Table 3.2 Probabilistic influence coefficients (alpha values) from the FORM analysis for the case with the short term load absent

X Designation α

Zp plastic section modulus 0.12

fy steel yield stress 0.41

ρc mass density concrete -0.20

t thickness concrete floor slab -0.15

qlong long term live load (sustained) -0.65

ca aerodynamic shape factor -0.02

cg gust factor -0.02

cr roughness factor -0.03

U ref wind speed (one year) -0.01

mq model factor wind pressure -0.03

mR model factor resistance 0.37

mE model factor load effect -0.42


EXAMPLE 4: REINFORCED CONCRETE COLUMN IN MULTI STOREY FRAME

Figure 4.1. Multistorey Concrete frame

The multi-storey concrete structure considered in this study is schematically shown in Figure 4.1. Each planar frame in the transversal direction of the structure may be considered as an unbraced sway frame. These transversal sway frames consist of four columns at a constant distance a1; in the longitudinal direction of the structure they are located at a constant distance a2. The edge bottom column, having the height L and a rectangular cross section with h/b = 2, is considered. The column is considered as fully clamped at its top and bottom ends.

The axial column force N is considered as a simple sum of the axial forces due to all the considered actions:

N = NW + Nimp + Nwind   (4.1)

where NW is the axial force due to self weight, Nimp is the axial force due to the long and/or short term imposed load and Nwind is the axial force due to wind action (positive values correspond to compressive forces).



Table 4.1 Probabilistic models for the concrete column example (acc to JCSS Model Code 2001)

X Designation Distribution Mean V λ

a1 column distance in plane deterministic 5 m -

a2 perpend. dist. of column deterministic 5 m -

t plate thickness (conventional) deterministic 0.30 m 0.03

hs storey height deterministic 3 m -

L height of bottom column deterministic 6 m -

n number of floors deterministic 10 -

b width of cross section normal 350 mm 0.007

h height of cross section normal 700 mm 0.014

As reinforcement area deterministic 0.01 bh -

d1(2) distance of bars from edge normal 75 mm 0.07

ζ initial overall sway(1) normal 0 σ=15 mrad

g acceleration of gravity deterministic 10 m/s2 -

ρc mass density concrete normal 2.4 ton/m3 0.04

qlong long term live load (sustained) gamma 0.50 kN/m2 0.75 (1) 0.2/year

qshort short term live load (1 day) exponential 0.20 kN/m2 1.60 1.0/year

ρa mass density air deterministic 1.25 kg/m3 -

ca aerodynamic shape factor normal 1.10 0.12

cg gust factor normal 4.06 0.12

cr roughness factor normal 0.45 0.15

u ref wind speed (8 hours) weibull 4 m/s 0.60 3.0/day

U ref wind speed (one year) gumbel 24 m/s 0.10 1.0/year

fc concrete strength (C35) logstudent 30 MPa 0.18 (2)

α long term reduction factor normal 0,85 0,10

fy yield strength normal 560 MPa 0.06

E modulus of elasticity for steel deterministic 200 GPa -

mq model factor wind pressure normal 0.80 0.20

mR uncertainty of column normal 1,10 0,11

mE model factor load effect normal 1.00 0.10

(1) Including the effect of correlation over the various floors (ρ =0.5)

(2) More precisely: the logarithm of fc in MPa has a Student distribution with m=3.85 s=0.12, n=3 and ν=6


These axial force contributions are given by:

NW = (n + 1) a1 a2 t ρc g / 2   (4.2)

Nimp = n a1 a2 qimp / 2   (4.3)

Nwind = (1/2) (L + n hs)² a2 ca cg cr mq qref / (3 a1)   (4.4)

For the designation of all the variables as well as for their values and probabilistic models, reference is made to Table 4.1. In this analysis the weight of beams and columns is incorporated in the plate thickness for convenience. In (4.4) qref stands for 0.5 ρa U². The bending end moment M is given by:

M = M0 + N (ea + e2)   (4.5)

where M0 is the first order moment. Assume that the total horizontal wind force

W = ca cg cr mq qref (L + n hs) a2

is taken equally by the four columns, each column thus carrying W/4 and a bending moment of 0.5 (W/4) L at its top and bottom. In that case the wind part of the first order moment is

M0 = L [ca cg cr mq qref (L + n hs) a2] / 8   (4.6)

The first order moments due to self weight are zero if we sum them over the four columns. As a consequence there is no need to take them into account if plastic redistribution is assumed. A similar argument holds for the contribution of the imposed load. Therefore, (4.6) represents the total first order bending moment M0.

The additional eccentricity ea and the second order eccentricity e2 in (4.5) are given by (according to Eurocode 2, Design of Concrete Structures):

ea = ζ L / 2   (4.7)

e2 = 0.2 L² K2 fy / (0.9 Es (h − d1))   (4.8)

where ζ is the initial sway and K2 is given by (according to Eurocode 2):


K2 = (Nu - N) / ( Nu - Nbal) ≤ 1 (4.9)

In K2 the symbol N stands for the normal force according to (4.1); Nu and Nbal are given by Nu = α b h fc + As fy and Nbal = α b h fc / 2 respectively. The limit state function Z for the right hand lower edge column may be expressed as the difference between the resistance moment and the load-induced end moment about the centroid:

Z = mR MR - mE M (4.10)

The two coefficients mR and mE are the model uncertainties. The bending moment M is according to (4.5). Using a calibrated approximation for the resistance model according to Eurocode 2, we can elaborate MR to:

MR = As fy (h − 2 d1) / 2 + h N (1 − N / (α b h fc)) / 2   (4.11)

MR = K2 [As fy (h − 2 d1) / 2 + α b h² fc / 8]   (4.12)

for N < α b h fc / 2 and for N > α b h fc / 2 respectively. All basic variables applied in the model are listed in Table 4.1.
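To show how the pieces of the model fit together, the sketch below evaluates equations (4.1)–(4.12) once at the mean values of Table 4.1 (in kN, m and kPa). The sway ζ is taken at one standard deviation (0.015 rad) purely to make the ea term visible, and the short-term load is omitted; this is an illustration of the deterministic model only, not the FORM analysis reported below.

```python
# One deterministic evaluation of the column model at (mostly) mean values.
a1, a2, t, hs, L, n_fl = 5.0, 5.0, 0.30, 3.0, 6.0, 10
b, h, d1 = 0.350, 0.700, 0.075
As = 0.01 * b * h
g, rho_c = 10.0, 2.4                                        # m/s2, ton/m3
q_imp = 0.5                                                 # kN/m2 (long-term mean)
rho_a, ca, cg, cr, mq, U = 1.25, 1.10, 4.06, 0.45, 0.80, 24.0
fc, alpha, fy, Es = 30_000.0, 0.85, 560_000.0, 200_000_000.0  # kPa
mR, mE, zeta = 1.10, 1.00, 0.015                            # zeta at one sigma (assumption)

q_ref = 0.5 * rho_a * U**2 / 1000.0                         # kN/m2
NW = (n_fl + 1) * a1 * a2 * t * rho_c * g / 2.0             # (4.2)
Nimp = n_fl * a1 * a2 * q_imp / 2.0                         # (4.3)
Nwind = 0.5 * (L + n_fl * hs)**2 * a2 * ca * cg * cr * mq * q_ref / (3.0 * a1)   # (4.4)
N = NW + Nimp + Nwind                                       # (4.1)

M0 = L * ca * cg * cr * mq * q_ref * (L + n_fl * hs) * a2 / 8.0   # (4.6)
Nu, Nbal = alpha * b * h * fc + As * fy, alpha * b * h * fc / 2.0
K2 = min((Nu - N) / (Nu - Nbal), 1.0)                       # (4.9)
ea = zeta * L / 2.0                                         # (4.7)
e2 = 0.2 * L**2 * K2 * fy / (0.9 * Es * (h - d1))           # (4.8)
M = M0 + N * (ea + e2)                                      # (4.5)

if N < alpha * b * h * fc / 2.0:                            # (4.11)
    MR = As * fy * (h - 2 * d1) / 2.0 + h * N * (1.0 - N / (alpha * b * h * fc)) / 2.0
else:                                                       # (4.12)
    MR = K2 * (As * fy * (h - 2 * d1) / 2.0 + alpha * b * h**2 * fc / 8.0)

Z = mR * MR - mE * M                                        # (4.10)
print(f"N = {N:.0f} kN, M = {M:.0f} kNm, MR = {MR:.0f} kNm, Z = {Z:.0f} kNm")
```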

Given these data, the failure probability for a design life time of 50 years can be determined as follows. First the probability that the structure fails within a period of 5 years is calculated, assuming the short term live load to be absent all the time. For a more detailed explanation, see Example 3. The wind speed distribution in Table 4.1 is defined for the one year extreme, so an adjustment has to be made. According to the theory of extreme values the mean value should be taken as µ = 30 m/s + 0.78 σ ln(5) = 34 m/s. A FORM analysis for this case leads to a reliability index β = 3.8 and a failure probability PF = 6.9·10⁻⁵. The alpha values are presented in Table 4.2. For a period of 50 years, using a simple upper bound approximation, we find PF = 6.9·10⁻⁴. In this case the short term live load is of little significance, which means that the final result is P = 1.4·10⁻⁵ per year, corresponding to β = 3.6. This is an acceptable result according to the target values in Part 1, Basis of Design.


Table 4.2 Probabilistic influence coefficients (alpha values) from the FORM analysis for the case with the short term load absent

X Designation α

t plate thickness (conventional) 0.01

b width of cross section 0.08

h height of cross section 0.10

d1(2) distance of bars from edge 0.07

ζ initial overall sway(1) 0.55

ρc mass density concrete 0.02

qlong long term live load (sustained) 0.02

ca aerodynamic shape factor -0.19

cg gust factor -0.19

cr roughness factor -0.22

U ref wind speed (one year) -0.40

fc concrete strength (C35) 0.03

α long term reduction factor 0.02

fy yield strength -0.05

mq model factor wind pressure 0.28

mR uncertainty of column 0.56

mE model factor load effect -0.35