
CHAPTER 2

LITERATURE REVIEW

Design optimization is the application of numerical algorithms and

techniques to engineering systems to assist designers in improving the

system's performance. Optimization is a process of maximizing or minimizing

a desired objective function while satisfying the prevailing constraints.

Typical engineering systems are described by a very large number of

variables, and it is the designer's task to specify appropriate values for these

variables. Optimization techniques can be applied during the product

development stage to ensure that the finished design will have high

performance, high reliability, low weight, and/or low cost. Alternatively,

optimization methods can be applied to existing products to achieve potential

design improvements. Design optimization techniques have been used in a

number of fields, including automobile design, naval architecture, electronics,

computers, and electricity distribution. However, the largest number of

applications has been in the field of aerospace engineering, such as aircraft

and spacecraft design.

An overview of design optimization considering uncertainty and an insight into the Reliability Based Design Optimization problem are presented in this chapter. First, a review of traditional design optimization is presented and

several mathematical models of uncertainty in engineering are introduced.

Then, a literature survey of the existing formulations of non-deterministic

design optimization problems is reported. The concepts of design

optimization under uncertainty with emphasis on reliability based design, as


well as the fundamental differences between the robust design and the

Reliability-based Design Optimization (RBDO) are presented. A review of

“RBDO formulation, methodologies and applications” is then presented. The

application of design optimization to composite laminates is also reported.

2.1 TRADITIONAL DESIGN OPTIMIZATION

Design Optimization is a process of obtaining the best parameters

such as thickness, height, length, module, number of teeth etc. of a component

or a system under certain given circumstances (Belegundu 1981). In a

traditional design optimization problem, the free parameters that need to be

determined to obtain the desired performance are known as the design

parameters. The function for evaluating the merits of a design is called the

objective function. Generally, a number of restrictions must be satisfied in a

design optimization problem. These restrictions define the feasible domain in

the design variable space and are referred to as the design constraints.

Additionally, bound limits may be imposed on the design variables and they

are known as side constraints. In a design optimization problem, the objective

function and the constraints are often expressed as implicit functions of the

design variables and the evaluation of these functions generally involves

numerical simulation techniques such as the finite element method

(Rao 1996).

The traditional deterministic design optimization problem is

mathematically represented as given below (Belegundu and

Chandrupatla 2003).

find d

minimize f(d)

subject to gi(d) ≤ 0 (i = 1, 2, …, k),

hj(d) = 0 (j = 1, 2, …, p)

dL < d < dU

where d ∈ Rⁿ is the vector of design variables, f the objective function, gi (i = 1, 2, …, k) the inequality constraint functions, hj (j = 1, 2, …, p) the equality constraints, and dL and dU denote the lower and upper bound limits of the design variables, respectively. The design variables can be the

parameters defining the geometrical dimensions, the shape or the topology of

the structure. In practical applications, it is common to make use of the design

variable linking technique to reduce the number of the independent design

variables by imposing a relationship between coupled design parameters. The

objective function and the constraint functions are usually the cost, the

material volume/weight, performances such as nodal displacements, stresses,

natural frequencies, and buckling loads. Since these functions are typically

implicit functions of the design variables, a structural analysis such as the

finite element analysis must be performed whenever their values are required.

In the deterministic formulation of the design optimization problems, the

design variables and other parameters are assumed to be deterministic and the

objective functions as well as the constraints are determined based on their

nominal values (Arora 1990). A solution to an optimization problem specifies

the values of the decision variables, and also the value of the objective

function which must be feasible and optimal. A feasible solution satisfies all

constraints and an optimal solution is the one that minimizes the objective

function. Figure 2.1 shows the feasible region which satisfies all the

constraints (Deb 2003).
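To make the formulation concrete, the following minimal sketch (not part of the original text) solves a small instance of this problem with SciPy's SLSQP solver; the objective, constraint, and bounds are invented for illustration.

    # Deterministic design optimization sketch: minimize f(d) subject to
    # g(d) <= 0 and side constraints dL <= d <= dU (all values illustrative).
    from scipy.optimize import minimize

    f = lambda d: d[0]**2 + d[1]**2          # objective f(d)
    g = lambda d: 1.0 - d[0] - d[1]          # inequality written as g(d) <= 0

    res = minimize(
        f, x0=[2.0, 2.0],
        constraints=[{"type": "ineq", "fun": lambda d: -g(d)}],  # SciPy expects fun >= 0
        bounds=[(0.0, 5.0), (0.0, 5.0)],     # side constraints
        method="SLSQP",
    )
    print(res.x, res.fun)                    # optimum near d = (0.5, 0.5)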


Figure 2.1 Feasible region

2.1.1 Classification

Design Optimization problems are classified based on the existence

of constraints, the number of variables, nature of variables, nature of

expressions, and permitted values of the design variables. They are

summarized as below:

According to the existence of constraints, an optimization problem

can be classified as a constrained or unconstrained problem (Johnson 1961).

Based on the nature of the design variables encountered,

optimization problems can be classified into Static and Dynamic Optimization

problems (Papalambros and Wilde 1988).

Another important classification of optimization problems is based

on the nature of expressions for the objective function and the constraints.

According to this, optimization problems can be classified as linear,

nonlinear, geometric and quadratic programming problems (Arora 1990).



Depending on the values permitted for the design variables,

optimization problem may be classified as integer and real-valued

programming problems (Rao 1996).

According to the types of the design parameters to be considered,

design optimization problems can be broadly classified into three categories:

Sizing optimization: design variables are geometrical

dimensions such as cross sectional areas of truss members,

beam section parameters and plate thickness (Arora 1990,

Haftka and Gurdal 1992, Deb 2003).

Shape optimization: design variables are the geometry

parameters describing the shape of the designed parts

(Belegundu and Rajan 1988, Gu and Cheng 1990,

Broudiscou et al 1995).

Topology and layout optimization: the number and locations

of voids in a continuous structure or the number and

connectivity of members in a discrete structure (e.g. truss

and frame structure) are to be determined (Kirsch 1989,

Bendsoe and Mota Soares 1993, Buhl et al 2000).

Research on the methods and applications of design optimization

has increased rapidly during the past decades. A variety of numerical

techniques have also been developed and applied to both linear and nonlinear

problems (Arora 1990, Buhl et al 2000).


2.1.2 Methodologies

Basically, the solution methods for design optimization problems

can be classified into Optimality Criteria (OC) methods and Mathematical

Programming methods. In the Optimality Criteria method (Zou and

Haftka 1995), the optimality conditions for a given type of problem are

derived based on the Karush-Kuhn-Tucker condition or by heuristic

assumptions, and then the optimal design satisfying these conditions is to be

sought using different forms of resizing rules. Such methods are recognized to

be especially efficient for problems involving a large number of design

variables.

The Mathematical Programming method may be broadly classified

into gradient-based methods (requiring derivatives of the functions) (Johnson

1961, Arora 1990) and non-gradient or direct methods (requiring no

derivatives) (Fox 1971, Rao 1996, Deb 2003).

The use of the gradient-based method for minimization was first presented by Cauchy. Modern optimization methods were pioneered by Courant’s paper on penalty functions, Dantzig’s paper on the simplex method for linear programming, and the work of Karush, Kuhn and Tucker, who derived the “KKT” optimality conditions for constrained problems (Johnson 1961).

Particularly in the 1960s, several numerical methods to solve nonlinear

optimization problems were developed. Mixed integer programming received

an impetus from the branch and bound technique, originally developed by

Land and Doig, and the cutting plane method by Gomory (Fox 1971).

Methods of unconstrained minimization include the Conjugate gradient

methods of Fletcher and Reeves, and the variable metric methods of

Davidon-Fletcher-Powell (DFP) (Siddall 1972).


Constrained optimization methods were pioneered by Rosen’s gradient projection method, Zoutendijk’s method of feasible directions, the generalized reduced gradient method of Abadie, Carpentier and Hensgen, and Fiacco and McCormick’s SUMT techniques (Siddall 1972). The traditional interval search methods, using Fibonacci numbers or the golden section ratio, were followed by the efficient hybrid polynomial-interval methods of Brent.

Sequential Quadratic Programming (SQP) methods for constrained

minimization were then developed. The development of interior-point methods for linear programming started with the work of Karmarkar in 1984 (Papalambros

and Wilde 1988).

In the 1960s, side-by-side with developments in gradient-based

methods, there were developments in non-gradient methods, principally

Rosenbrock’s method of orthogonal directions, the pattern search method of

Hooke and Jeeves, Powell’s method of Conjugate directions, the simplex

method of Nelder and Mead, and the method of Box (Haug and Arora 1979).

Most recent among the direct methods are genetic algorithms (Holland 1975,

Goldberg 2002) and the simulated annealing algorithm, which originated with Metropolis. Special methods that exploit some particular structure of a

problem have also been developed (Rao 1996). Dynamic programming

originated from the work of Bellman, who stated the principle of optimal

policy for system optimization. Geometric programming originated from the

work of Duffin, Peterson and Zener. Pareto optimality was developed in the

context of multiobjective optimization.

In addition to these conventional methods, some innovative

approaches using analogies of physics and biology, such as Simulated

Annealing, Genetic Algorithm and Evolutionary Algorithms

(Papadrakakis et al 1998, Deb 2001), are also employed for the solution of

global optimization problems. These approaches are characterized by


gradient-free methods and utilize only function values. Generally, these

algorithms require a large number of function evaluations to achieve

convergence, and thus have limited use in applications involving complicated

designs.

Different methods of design optimization are widely used in the

design of engineering structures for the purpose of improving the performance

and reducing their costs. The use of design optimization techniques has

rapidly increased, mainly due to the development of sophisticated computing

techniques and the extensive applications of the finite element method

(Deb 2003). Moreover, recently, it is widely recognized that design

optimization methodologies should account for the stochastic nature of

engineering systems, and that concepts and methods of life-cycle engineering

should be used to obtain a cost-effective design during a specified time

horizon. To ensure high reliability and safety, uncertainties inherent to or

encountered by the product during the entire life cycle must be considered and

treated in the design process. The various types of uncertainty, the

mathematical models of uncertainty reported in literature, and the

optimization methodologies which include these uncertainties are described

below.

2.2 TYPES OF UNCERTAINTY

Uncertainty is an acknowledged phenomenon in the natural and

technological worlds. Engineers are continually faced with uncertainties in

their designs. However, there is no unique definition of uncertainty. A useful

functional definition of uncertainty is: the information/knowledge gap

between what is known and what needs to be known for optimal decisions,

with minimal risk. Uncertainties can be modeled or quantified using the

probability theory, convex models of uncertainty, fuzzy set theory (Dubois and Prade 1988, Fedrizzi et al 1994), and Dempster-Shafer theory

(Liu et al 2006).

The design uncertainties include variations in certain parameters

which are either controllable (e.g. dimension) or uncontrollable (e.g. material

properties), and model uncertainties and error associated with the simulation

based design. In general, a distinction can be made between aleatory

uncertainty (also referred to as stochastic uncertainty, irreducible uncertainty,

inherent uncertainty, and variability), epistemic uncertainty (also referred to

as reducible uncertainty, subjective uncertainty, model form uncertainty or

simply uncertainty), and numerical uncertainty (also known as error)

(Zissimos and Zhou 2006). Oberkampf et al (2004) have described various

methods for estimating the total uncertainty by identifying all possible sources

of variability, uncertainty, and error in mathematical models and simulation

tools.

2.2.1 Aleatory uncertainty

Aleatory uncertainty (originating from the Latin aleator or

aleatorius, meaning dice thrower) is used to describe the inherent spatial and

temporal variation associated with the physical system or the environment

under consideration, as well as the uncertainty associated with the measuring

device (Ben-Haim and Elishakoff 1990). Sources of aleatory uncertainty can

be represented as randomly distributed quantities. Aleatory uncertainty is

also referred to in the literature as variability, irreducible uncertainty,

inherent uncertainty, and stochastic uncertainty. Aleatory uncertainty can

occur in the form of manufacturing tolerances (Elishakoff et al 1994).

Probability distributions can be used to model such uncertainty.


2.2.2 Epistemic uncertainty

Epistemic uncertainty (originating from the Greek episteme,

meaning knowledge) is defined as any lack of knowledge or information in

any phase or activity of the modeling process. Examples of sources of

epistemic uncertainty are scarcity or unavailability of experimental data for

fixed (but unknown) physical parameters, limited understanding of complex

physical processes or interactions of processes in the engineering system, and

the occurrence of fault sequences or environment conditions not identified for

inclusion in the analysis of the system (Zissimos and Zhou 2006). Epistemic

uncertainty can be either parametric or model-based. Parametric uncertainty

is associated with the uncertain parameter for which the information available

is sparse or inadequate. A model form of the uncertainty, also known as tool

uncertainty, is associated with improper models of the system due to lack of

knowledge of the physics of the system (Mahadevan and Ramesh 2006).

Fuzzy sets are used to model such uncertainty.

2.2.3 Numerical Uncertainty

Error (numerical uncertainty) is commonly associated with the

numerical models used for simulations and modeling. In the convergence of a

coupled system analysis, round-off errors, truncation errors and errors

associated with the solution of Ordinary Differential Equations (ODE) and Partial Differential Equations (PDE) (Huibin et al 2006) are considered as numerical uncertainties.

2.3 MATHEMATICAL MODELS OF UNCERTAINTY

The formulation of a design optimization problem under

uncertainty is closely related to the modeling of the uncertainty. There exist


various mathematical models of uncertainty, when dealing with design

optimization problems. The existing models can be classified into

probabilistic models e.g. stochastic randomness, and non-probabilistic models

including interval set, convex modeling, fuzzy set and leveled noise factors. A

brief introduction to these uncertainty models is given below.

2.3.1 Randomness

The prevailing model for uncertainties in engineering design is

stochastic randomness (Doltsinis 1999, Schueller 2001). The Probability

Density Function (PDF) and Cumulative Distribution Function (CDF) are

used to define the occurrence properties of uncertain quantities which are

random in nature. Randomness accounts for most of the uncertainties in

engineering problems (Zou et al 2002). In computational engineering

problems, the model errors and the uncertainties that arise from incomplete

knowledge about the system, are often regarded as random uncertainties as

well. In practical design problems the randomness of the uncertain parameters

is often modeled as a set of discretized random variables (Youn and

Choi 2004). The statistical behaviour of a random variable X can be completely described by its cumulative distribution function F_X(x) or its probability density function (PDF) f_X(x), defined as

F_X(x) = P(X ≤ x) = ∫_{−∞}^{x} f_X(x) dx (2.1)

where P(.) is the probability that an event will occur. The probability

distribution of the random variable X can be also be characterized by its

statistical moments (Haldar and Mahadevan 2000). The most important

statistical moments are the first and second moment known as mean value


μ(X), also referred to as expected value and denoted by E(X), and variance

denoted by Var(X) or σ²(X), respectively, as given by

μ(X) = E(X) = ∫ x dF_X(x) = ∫ x f_X(x) dx, (2.2)

and

σ²(X) = ∫ (x − μ(X))² dF_X(x) = ∫ (x − μ(X))² f_X(x) dx (2.3)
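As a quick numerical check of Equations (2.1)-(2.3), the sketch below integrates an assumed normal density with SciPy and compares the results with the closed-form values; the distribution parameters are illustrative only.

    # Verify Eqs. (2.1)-(2.3) by numerical integration for X ~ N(10, 2^2).
    import numpy as np
    from scipy.stats import norm
    from scipy.integrate import quad

    mu, sigma = 10.0, 2.0
    f_X = lambda x: norm.pdf(x, mu, sigma)

    F_12 = quad(f_X, -np.inf, 12.0)[0]                      # Eq. (2.1) at x = 12
    mean = quad(lambda x: x * f_X(x), -np.inf, np.inf)[0]   # Eq. (2.2)
    var = quad(lambda x: (x - mean)**2 * f_X(x), -np.inf, np.inf)[0]  # Eq. (2.3)
    print(F_12, norm.cdf(12.0, mu, sigma))   # both ~0.841
    print(mean, var)                         # ~10.0 and ~4.0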

Probability distributions such as Normal, lognormal and Weibull

are the most commonly used distributions in design optimization problems.

Nevertheless, precise information on the probabilistic distribution of the

uncertainties is sometimes scarce or even absent. Moreover, some

uncertainties are not random in nature and cannot be defined in a probability

framework (Nikolaidis et al 2008). For these reasons, non-probabilistic

methods for modeling of uncertainties have been developed in recent years.

These methods do not require a priori assumptions on PDFs for the

description of uncertain variables.

2.3.2 Interval Set

An Interval set is used to model uncertain but non-random

parameters. These uncertainties are assumed to be bounded within a specified

interval and a small variation of the interval parameter is treated as a

perturbation around the midpoint of this interval, allowing the interval

perturbation method to be used for the analysis of the performance variation

(Rao and Berke 1997, Qiu and Elishakoff 1998). Using the so-called

anti-optimization techniques, the least favourable response can be determined

under the assumption of small variations. The term anti-optimization refers to the task of finding the worst-case scenario of a given problem


(Elishakoff et al 1994). The methods based on the interval set do not allow for

the distinction of the more or less probable occurrence of the variables

(Lombardi and Haftka 1998). Moreover, it is difficult to consistently define

bounded intervals for the uncertainties, without a confidence level.
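The following sketch (an illustration, not from the text) shows the simplest form of such a worst-case search over an interval set: the response is evaluated at every corner of the parameter box, which locates the extreme response when it occurs at a vertex (e.g. for monotonic responses).

    # Worst-case (anti-optimization) search over an interval set by corner
    # enumeration; the response g and the bounds are assumed for illustration.
    import itertools

    def worst_case(g, bounds):
        """Least favourable (largest) response over all interval corners."""
        return max(g(c) for c in itertools.product(*bounds))

    g = lambda p: p[0] * p[1] - 0.5 * p[2]          # hypothetical response
    bounds = [(0.9, 1.1), (1.8, 2.2), (0.4, 0.6)]   # uncertain-but-bounded parameters
    print(worst_case(g, bounds))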

2.3.3 Convex Modeling

To overcome the difficulties when data are insufficient to conduct a

reliability analysis using conventional probabilistic approaches, the

worst-case scenario analysis based on Convex Modeling can be formulated

(Ben-Haim and Elishakoff 1990). Convex Modeling is connected to

uncertain-but-bounded quantities. In this method, the uncertainty which has

bounded values is assumed to fall into a multi-dimensional ellipsoid or

hypercube. In some sense, the convex model can be regarded as a natural

extension of the interval set model. By virtue of the Convex Model theory,

the worst-case performance of the design is determined using the

anti-optimization technique (Yoshikawa et al 1998). This method has been

proved to be more advantageous than the traditional worst-case approach,

where all the possible combinations of the extreme values of the uncertain

parameters need to be examined, so that the worst-case scenario can be

determined (Pantelides and Ganzerli 1998).

2.3.4 Fuzzy Set

The fuzzy set theory has been developed as a mathematical tool for

quantitative modeling of the uncertainty associated with vagueness in

describing subjective judgments under uncertainty using linguistic

information (Rao 1987). In the fuzzy set method for engineering design

problems, the uncertainty is modeled as fuzzy numbers rather than random

values with certain distribution (Sakawa 1993). In other words, the fuzzy set


theory presents a possibility rather than a probability description of the

uncertainty. The fuzzy analysis method has been used to deal with certain

problems such as design analysis under uncertain loading conditions (Gerhard

and Haftka 1998). Venter and Haftka (1999) used response surface

approximation in fuzzy set based optimization. Liu et al (2006) proposed

possibility-based design optimization methods for design problems with fuzzy

data.

2.3.5 Leveled Noise Factors

In Taguchi’s robust design methodology (Tsui 1992), the system

uncertainties are modeled as leveled noise factors. Here, no a priori

assumptions on the statistics of the uncertainties are required. Lee et al (1996)

devised a robust design for unconstrained optimization problems using the

Taguchi method. Following the method of experimental design, the system

outputs are examined at planned combinations of the discrete levels of these

noise factors. Thus, the interactions between the system performance and

noise factors can be explored (Montgomery 2001). Wang and Kodiyalam

(2002) proposed an efficient method for a probabilistic and robust design with

non-normal distributions.

The various design optimization methods that incorporate

uncertainty are given in the following Section.

2.4 DESIGN OPTIMIZATION UNDER UNCERTAINTY

Conventional design procedures accounting for system

uncertainties are based on safety factors. This method often produces far too

conservative designs. Use of deterministic methods to reduce cost or weight

may result in systems that are vulnerable to variability and uncertainty,


because these methods operate on very tight margins. Moreover, in the design

of novel structures or products, little prior knowledge for determining an

appropriate safety factor is available. More sophisticated formulations

incorporating system uncertainty into design optimization have been reported

in the past on the basis of various mathematical models of uncertainty (Tonon

and Bernardini 1998).

2.4.1 Worst Case Scenario-based Design Optimization

In some non-deterministic design optimization problems, the design

against system failure is based on the worst case analysis. In practical

applications of this approach, the convex model or interval set can be used to

model the system uncertainties. Elishakoff et al (1994) applied this method to

the optimal design problem considering bounded uncertainty.

Yoshikawa et al (1998) presented a formulation to evaluate the worst case

scenario for a homology design, caused by uncertain fluctuation of loading

conditions using the convex model of uncertainty. In these approaches, the

anti-optimization strategy is adopted to determine the least favorable

combination of the parameter variations and the problem is then converted

into a deterministic Min-max optimization (Qiu and Elishakoff 1998). Here,

no probability density function of the input variables is required. The validity

of the proposed method is demonstrated by its application to the design of

simple truss structures (McWilliam 2000).

Lombardi and Haftka (1998) combined the worst-case scenario

technique of anti-optimization and the optimization techniques in the design

that considers uncertainty. The proposed method is suitable in particular for

uncertain loading conditions. Since a complete optimization routine needs to

be nested for the worst case analysis at each design optimization cycle, this

approach may become prohibitively expensive when many uncertain


parameters are present in the problem. Additionally, this design technique

often results in too conservative designs.

2.4.2 Robust Design Optimization

In robust design, the performance of the system is required to be

less sensitive to the random variations induced at different stages of the

product’s life cycle. Robust design is an engineering methodology for the

optimal design of products and process conditions that are less sensitive to

system variations (Ranganathan 1990). It has been recognized as an effective

design method to improve the quality of the product/process. Among the three

stages of engineering design, viz. conceptual, parameter and tolerance design,

robust design may be involved in the stages of parameter design and tolerance

design.

For design optimization problems, the performance function

defined by design objectives or constraints may be subject to large scatter at

different stages of the service life-cycle. It can be expected that this may be

more crucial for systems with nonlinearities (Rao 1992). Such scatters may

not only significantly worsen the product quality and cause deviations from

the desired performance, but may also add to the product’s life-cycle costs,

including inspection, repair and other maintenance costs (Lee et al 1996).

From an engineering perspective, well-designed structures minimize these

costs by performing consistently in the presence of uncontrollable variations

during the whole life-cycle. In other words, excessive variations of the system

performance indicate the poor quality of a product. This raises the need for

robust design (Montgomery 2001).

One possible way to decrease the scatter of the system

performance, is to reduce or even eliminate the scatter of the input


parameters, which may either be practically impossible or add much to the

total costs of the structure; another way is to find a design in which the system

performance is less sensitive to the variation of parameters without

eliminating the cause of parameter variations, as in robust design (Rao and

Berke 1997).

The robust design optimization approach not only shifts the

performance mean to the target value, but also reduces the product’s

performance variability, achieving Six-sigma level robustness on the key

product performance characteristics with respect to the quantified variation.

The Taguchi methodology (Montgomery 2001) indicates, that by conducting

planned experiments under some assumptions, uncontrollable or noise

variables can be precisely controlled, and thereby the designer can choose the

levels of controllable variables to accomplish a robust system that is

insensitive to inevitable changes of the noise variables (Huibin et al 2006).
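A minimal illustration of this idea (constructed for this review, with invented numbers): two designs with the same nominal performance are compared by the spread of their response under sampled noise, and the robust design is the one with the smaller standard deviation.

    # Compare a sensitive and a robust design under the same noise factors.
    import numpy as np

    rng = np.random.default_rng(0)
    noise = rng.normal(0.0, 0.1, size=100_000)   # uncontrollable variation

    perf_sensitive = lambda z: 1.0 + 5.0 * z     # performance of design A (assumed)
    perf_robust = lambda z: 1.0 + 0.5 * z        # performance of design B (assumed)

    for name, perf in [("A", perf_sensitive), ("B", perf_robust)]:
        y = perf(noise)
        print(name, y.mean(), y.std())           # equal means; B has far smaller sigma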

2.4.3 Fuzzy Set based Design Optimization

The fuzzy set theory has been initially used by Rao (1987) to

handle design optimization under uncertainties. The design problem involves

maximizing the safety level of a structure. The response surface methodology

is also used throughout the design process to reduce the computational effort

(Ranganathan 1990, Rao 1992). A random set approach has been proposed

by Tonon and Bernardini (1998) as an extension of the fuzzy set method for

the design optimization problem which is characterized by imprecise or

incomplete observations on the uncertain design parameters. Gerhard and

Haftka (1998) used the fuzzy set theory for modeling the uncertainty

associated with the design with future materials in the aircraft industry.


In the fuzzy set-based design optimization, the vague quantities

which cannot be clearly defined in a system, are characterized by membership

functions (Venter and Haftka 1999). In this context the possibility of system

failure is restricted to the optimal design. Since this method is featured as a

non-probabilistic description of system reliability, it can be regarded as a

possibility-based approach. In a similar way, as in Reliability Based Design

Optimization, this approach focuses exclusively on the issue of the system

safety with the purpose of avoiding system catastrophe in the presence of

parameter uncertainties.

2.4.4 Reliability Based Design Optimization

The term reliability-based design optimization is used, in a narrow sense, exclusively for optimal design in which the cost function of the problem is minimized subject to probabilistic constraints instead of conventional deterministic constraints (Rackwitz et al 1995). Until

recently, the RBDO has been the only way of taking account of uncertainty in

design optimization problems. When the occurrence of the catastrophic failure

of the system or component is crucial, the design optimization problem is

usually characterized as a problem of reliability-based design optimization. In

this framework, the probability of failure is involved in the constraint

conditions of the design optimization problems. The failure of a system or a

component is defined with limit state functions (Kuschel and Rackwitz 2000).

From the theoretical point of view, reliability-based design optimization has

been a well-established concept. Prior to the reliability analysis, the statistical

characteristics of the random quantities are first defined by suitable

probability distributions.

In RBDO the probability of failure is evaluated by numerical

procedures such as the Monte Carlo Simulation, the First Order Reliability


Method and Second Order Reliability Method (Rackwitz 2001). In the direct

Monte Carlo simulation or Importance Sampling method, the probability of

failure is derived from the test data of a large number of samples

(Mohsine et al 2006). In the First Order Reliability Method (FORM), and the

Second Order Reliability Method (SORM) or the Advanced Mean Value

method, an additional nonlinear constrained optimization procedure is

required for locating the design point or Most Probable Point of failure (MPP)

and thus the reliability based design optimization becomes a two-level

optimization process with lengthy calculations of sensitivity analysis in the

inner loop for locating the MPP (Nikolaidis et al 2008).

2.4.5 Differences between robust design and RBDO

Compared to the RBDO, robust design is a relatively new issue in

engineering design. As representative non-deterministic design optimization

formulations, both of them aim at incorporating random performance

variations into the optimal design process, and therefore they are sometimes

not clearly distinguished in the literature. However, the two approaches differ

in some fundamental aspects, despite the fact that the optimal solution of the

robust design often exhibits an increased reliability. First of all, the robustness

is assessed by the measure of the performance variability around the mean

value, most often by its standard deviation, whereas reliability is connected to

the probability of failure occurrence (Figure 2.2).

In general, RBDO is concerned more with satisfying the reliability

requirements under known probabilistic distributions of the input, and less

concerned with minimizing the variation of the performance function, while

the robust design aims to reduce the system variability to unexpected

variations (Kaymaz and McMahon 2004). In RBDO, the objective function is

to be minimized subject to probabilistic constraints. However,


in robust design optimization, the objective function usually involves the

performance variations, and the design constraints may be simply defined by

the variance (Anukal and Mahadevan 2005). Actually, RBDO is usually

accomplished by moving the mean of the performance as depicted in

Figure 2.3, whereas robust design is often implemented by diminishing the

performance variability, as shown in Figure 2.4.

Figure 2.2 Difference between robustness and reliability

Figure 2.3 RBDO strategy

[Figures 2.2 and 2.3 plot the probability density function P(f) of the performance f. Figure 2.2 marks the limit state function, the probability of failure, and the mean μf and standard deviation σf; Figure 2.3 shows the performance distribution shifted from a less reliable to a more reliable design.]


Figure 2.4 Robust design strategy [the probability density function P(f) of the performance f, contrasting a less robust and a more robust design via μf and σf]

Secondly, in RBDO particular care is paid to the issue of system

safety in extreme events, while in robust design more emphasis is put on the

behavior under everyday fluctuations of the system during the whole service

life (Zang et al 2002).

2.5 RBDO FORMULATION, METHODOLOGY AND

APPLICATIONS – A REVIEW

Reliability Based Design Optimization (RBDO) methodologies not only provide an improved design but also a confidence range for the simulation-based optimum design (Huibin et al 2006, Nikolaidis et al 2008). The basic idea in reliability based design optimization is to employ numerical optimization algorithms to obtain optimal designs ensuring reliability. There are two approaches to RBDO, namely the Reliability Index Approach (RIA) and the Performance Measure Approach (PMA) (McDonald and Mahadevan 2008).



2.5.1 Reliability Index Approach

The Reliability Index Approach (RIA) was first introduced by

defining a probabilistic constraint as reliability (Enevoldsen and Sorensen

1994, Frangopol and Moses 1994, Yu et al 1997, Tu and Choi 1999, Alan et al 2007). Many researchers (Enevoldsen and Sorensen 1994, Chandu and Grandhi 1995, Yu et al 1997, Wu and Wang 1998, Grandhi and Wang 1998, Choi et al

2004) have used the reliability index evaluated in the traditional reliability

analysis to prescribe the probabilistic constraint. For a reliability based

design, a performance function can be defined as G = R-S where R and S are

statistically independent and normally distributed random variables of the

resistance and load measurements of the structure. Typically, R can be the

yield stress and S the maximum von Mises stress. The G function is also

called the limit state function or failure function as shown in Figure 2.5. The

curve G = 0 divides the design space into two regions, the safe region when

G > 0 and the unsafe region when G < 0. Since R and S have a variation, G

will also exhibit variation.

Figure 2.5 Reliability Index

[Figure 2.5 sketches the limit state G(u) in the standard normal space (u1, u2), its FORM and SORM approximations, the MPP u*, and the reliability index β as the distance from the origin.]


The ratio (β) of the mean value of the G function (μG) to the standard deviation of the G function (σG) is defined as the safety index or reliability index. If Φ is the standard normal cumulative distribution function and G has a normal distribution, then:

β = −Φ⁻¹(1 − Reliability) = μG / σG (2.4)

Pf = Φ(−β) (2.5)

where β is the distance from the origin to the Most Probable Point of Failure

(MPP). In both the First Order Reliability Method (FORM) and the Second

Order Reliability Method (SORM), the original random variables, which are

generally non-normal and correlated, are first transformed into an equivalent

set of statistically independent normal variates. A general transformation for

this purpose is the Rosenblatt transformation (Tvedt 1990, Nikolaidis 2008).

During optimization, the corresponding MPP in X-space needs to be

calculated to evaluate the probabilistic performance functions. The MPP of

failure in X-space is found by mapping the standard normal MPP u* (located at the target reliability index β) back to the original space. If the random variables in X-space are independent and normally distributed, then the MPP in the original space is given by x* = μx − u*σx. If the variables have

a non-normal distribution, then the equivalent means and equivalent standard

deviations of an approximate normal distribution are computed and used in

the above expression to estimate the MPP in X-space (Rao 1992).
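For the linear limit state G = R − S with independent normal variables, Equations (2.4) and (2.5) can be evaluated directly, as in the short sketch below; the moments of R and S are assumed values for illustration.

    # Reliability index and failure probability for G = R - S (Eqs. 2.4-2.5).
    from scipy.stats import norm

    mu_R, sigma_R = 350.0, 25.0   # resistance, e.g. yield stress (assumed)
    mu_S, sigma_S = 250.0, 30.0   # load effect, e.g. von Mises stress (assumed)

    mu_G = mu_R - mu_S
    sigma_G = (sigma_R**2 + sigma_S**2) ** 0.5
    beta = mu_G / sigma_G          # reliability index
    Pf = norm.cdf(-beta)           # failure probability
    print(beta, Pf)                # beta ~ 2.56, Pf ~ 0.0052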

The RBDO problem can be formulated to maximize the reliability index β while constraining the weight to be less than the target value (Type-I), as given below.


Maximize β

Subject to: Weight < Target-Weight

DVmin < DV < DVmax

where DV = Design Variable

The RBDO problem can also be formulated to minimize the weight while requiring the reliability index β to exceed the target value (Type-II), as given below:

Minimize Weight

Subject to: β > Target β

DVmin < DV < DVmax

A typical target value β commonly used in the literature is 3, which

corresponds to a failure probability of 0.00135. It is observed from the

RBDO literature that the Reliability Index Approach exhibits very slow

convergence or even divergence for some problems (Tu and Choi 1999, Tu et

al 1999, Youn et al 2003, Choi and Youn 2002, Youn and Choi 2004).
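A hedged sketch of the Type-II formulation for a single bar in tension is given below; the closed-form β of the linear limit state G = σy − F/A is used in the reliability constraint, and all dimensions and statistics are assumptions made for the example.

    # Type-II RBDO sketch: minimize weight subject to beta >= 3.
    import numpy as np
    from scipy.optimize import minimize

    mu_y, s_y = 250e6, 20e6      # yield stress [Pa], normal (assumed)
    mu_F, s_F = 1.0e5, 1.5e4     # axial load [N], normal (assumed)
    rho, L = 7850.0, 1.0         # density [kg/m^3] and bar length [m] (assumed)

    def beta(A):                 # G = sigma_y - F/A is linear in the normals
        return (mu_y - mu_F / A) / np.hypot(s_y, s_F / A)

    res = minimize(lambda x: rho * L * x[0], x0=[1e-3],
                   constraints=[{"type": "ineq", "fun": lambda x: beta(x[0]) - 3.0}],
                   bounds=[(1e-5, 1e-2)], method="SLSQP")
    print(res.x[0], beta(res.x[0]))   # smallest area that still gives beta >= 3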

2.5.2 Performance Measure Approach (PMA)

To overcome the disadvantages of the Reliability Index Approach,

the Performance Measure Approach (PMA) is introduced by solving an

inverse problem (Palle and Michael 1982, Madsen et al 1987, Tu and Choi

1999, Youn et al 2003). A significant effort has been devoted to formulate

reliability based design optimization using the Performance Measure

Approach (Tu et al 1999).


The RBDO formulation for a Target Reliability of 95% using PMA

is given below.

Minimize Weight

Subject to:

P(Load > 1 kN) ≤ Pfailure = 5%

P(Displacement > 30 mm) ≤ Pfailure = 5%

Design Variable: Thickness ‘t’

ti,min ≤ ti ≤ ti,max
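The inverse problem that PMA solves can be sketched as follows: the performance function is minimized on the sphere of radius βt in the standard normal space, and the design is acceptable if the resulting performance measure is non-negative. The limit state below is a hypothetical example, not one from the cited works.

    # PMA sketch: minimum performance on the sphere ||u|| = beta_t.
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    beta_t = norm.ppf(0.95)                    # target index for 95% reliability
    G = lambda u: 3.0 - u[0] - 0.5 * u[1]      # hypothetical limit state in u-space

    res = minimize(G, x0=[1.0, 1.0],
                   constraints=[{"type": "eq",
                                 "fun": lambda u: float(np.dot(u, u)) - beta_t**2}],
                   method="SLSQP")
    print(res.fun)   # performance measure; >= 0 means the target reliability is met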

The fact that the PMA is inherently robust and more effective is not

surprising, since it is easier to minimize a complicated cost subject to a simple

constraint expressed as the known distance (i.e., reliability index), than to

minimize a simple cost subject to a complicated constraint (Choi and

Youn 2002, Youn et al 2003). In addition, the RIA introduces an undesirable

nonlinearity, whereas the PMA does not, which becomes more serious for

non-normal distributions (Choi and Youn 2002).

The following Section explains the possible formulations of RBDO.

2.6 RBDO FORMULATIONS

There are two important concepts in RBDO formulation. They are

efficiency and robustness. An efficient formulation is one in which the

solution can be obtained faster as compared to the other formulations.

Robustness, on the other hand, means that the RBDO formulation does not

depend on the starting point. In the last two decades, researchers have


proposed a variety of frameworks for efficiently performing reliability based

design optimization (Shan and Wang 2008). A careful survey of the literature

reveals that the various RBDO methods can be divided into three broad

categories.

2.6.1 Double Loop Methods

A straightforward approach to solve an RBDO problem is to

conduct a double-loop optimization process in which the outer loop iteratively

selects feasible designs that approach the minimum objective, while the inner

loop evaluates the reliability constraints for each selected design

(Tu et al 1999, Youn and Choi 2004, Yang et al 2005, Jun and Mourelatos

2008). However, for complicated G-functions and objective functions, the

repeated inner-loop reliability analysis can cause the Reliability Based Design

Optimization to be prohibitively time-consuming (Nikolaidis 2008).

2.6.2 Decoupled Method or Sequential Method

Chen et al (1997) proposed a sequential RBDO methodology for

normally distributed random variables. Wang and Kodiyalam (2002)

generalized this methodology for non-normal random variables and reported

enormous computational savings when compared to the nested RBDO

formulation. Chen and Du (2002) proposed a Sequential Optimization and

Reliability Assessment methodology (SORA).

Agarwal et al (2003) extended this methodology for

multidisciplinary systems. The basic concept behind the sequential RBDO

technique is to decouple the upper level optimization from the reliability

analysis to avoid a nested optimization problem (Du and Chen 2004,

Yang et al 2005). Zou and Mahadevan (2006) proposed a direct decoupling


approach for efficient reliability-based design optimization. In SORA, the

boundaries of the violated constraints (with low reliability) are shifted

towards the feasible direction, based on the reliability information obtained in

the previous iteration. Therefore, a consistent reliable design is almost

guaranteed to be obtained from this framework. However, a true local

optimum cannot be guaranteed. This is because the MPP of failure for the

hard constraints is obtained at the previous design point. A shift factor, Si,

from the mean values of the random variables is calculated and is used to

update the MPP of failure for probabilistic constraint evaluation during the

deterministic optimization phase in the next iteration, as this technique varies

the mean values of the random variables. This MPP update may be inaccurate because, as the optimizer varies the design variables, the MPP of failure (and hence the shift factor) also changes, and this is not addressed in SORA. This may lead to spurious optimal designs (Yuan et al 2007, Jun and Mourelatos 2008).
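The shift itself is simple to state; the sketch below (with invented numbers) shows how a probabilistic constraint is converted into a shifted deterministic one for the next SORA cycle.

    # SORA constraint shift: g(x) <= 0 becomes g(d - s) <= 0 in the next cycle,
    # where s = mu_x - x_MPP from the previous reliability analysis (all values
    # here are illustrative).
    import numpy as np

    mu_x = np.array([2.0, 3.0])       # current mean design
    x_mpp = np.array([1.7, 2.6])      # MPP of failure from the last reliability loop
    s = mu_x - x_mpp                  # shift vector S_i

    g = lambda x: x[0] + x[1] - 5.0   # hypothetical limit state (failure when g > 0)
    g_shifted = lambda d: g(d - s)    # deterministic constraint for the next cycle
    print(s, g_shifted(mu_x))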

2.6.3 Single Loop RBDO or Unilevel RBDO

As outlined before, RBDO is typically a nested optimization

problem, requiring a large number of system analysis evaluations. The major

concern in evaluating reliability constraints is the fact that the reliability

analysis methods are formulated as optimization problems. To overcome the

difficulty of SORA, a unilevel RBDO formulation was developed by Kuschel

and Rackwitz (2000). In this method, the lower level optimization is replaced

by the corresponding first order Karush-Kuhn-Tucker (KKT) optimality

conditions of the first order reliability problem. As mentioned earlier, the

direct FORM problem can be ill conditioned, and the same may be true for

the unilevel formulation given by Rackwitz (2001). The probabilistic hard

constraints may have a zero failure probability at a particular design setting,

and hence the solution may not converge due to the hard constraints (which


are posed as equality constraints) not being satisfied. Moreover, the condition

under which such a replacement is equivalent to the original bi-level

formulation is not detailed in Kuschel and Rackwitz (2001). During the last

few years, researchers in the area of multidisciplinary optimization have

continuously faced the challenge to develop more efficient techniques to solve

the RBDO problem (Yang et al 2005, Harish et al 2007). A new unilevel

method is being developed which enforces the constraint qualification of the

KKT conditions and avoids the singularities associated with zero probability

of failure (Alan et al 2007, Jinghong et al 2007, Yuan et al 2007, McDonald

and Mahadevan 2008).

2.7 RBDO- RELIABILITY ESTIMATION METHODS

The methodologies used to determine the failure probability in

RBDO problems for both the approaches may be classified into two

categories, viz. Analytical methods and Simulation-based methods.

2.7.1 Analytical Reliability Methods

In these methods, the accurate estimation of the failure probability

requires multidimensional integration, which is difficult and highly time

consuming. Hence, approximation of the failure surface is required. The First

Order Reliability Method (FORM) (Tu and Choi 1999, Anukal and

Mahadevan 2005) uses a linear approximation of the limit-state function at

the most probable point (MPP). Consequently, for the nonlinear limit state

equation, FORM will overestimate the probability of failure, since it considers

the contribution of the region between the real limit state and the

approximation in calculating the failure probability integral (Zhao and Ono

1999). The linear approximation errors may be too large (Mitteau 1996) and

so the “Second Order Reliability Method” (SORM) is introduced, which uses


the first order reliability index with a correction term (Rao 1996). Studies on

the applicable ranges, history, and potential improvements and extensions of

both FORM and SORM can be found in Rackwitz (2001). Both methods

require the calculation of the MPP, which is an auxiliary optimization process

or “loop” that adds to the computational cost. Several methods have been proposed to improve the efficiency of the “double loop” solution, namely, the design loop and the MPP loop.
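The MPP search itself is commonly carried out with the HL-RF recurrence; the sketch below implements it in the standard normal space for a hypothetical linear limit state, with the gradient taken by finite differences (an illustration, not the cited authors' code).

    # HL-RF iteration for the FORM MPP: u_{k+1} = ((dG.u_k - G(u_k)) / |dG|^2) dG.
    import numpy as np

    def grad(G, u, h=1e-6):
        return np.array([(G(u + h*e) - G(u - h*e)) / (2*h) for e in np.eye(len(u))])

    def hlrf(G, u0, tol=1e-8, itmax=100):
        u = np.asarray(u0, float)
        for _ in range(itmax):
            g, dg = G(u), grad(G, u)
            u_new = (dg @ u - g) / (dg @ dg) * dg
            if np.linalg.norm(u_new - u) < tol:
                break
            u = u_new
        return u_new

    G = lambda u: 3.0 - u[0] - 0.5 * u[1]     # hypothetical limit state
    u_star = hlrf(G, [0.0, 0.0])
    print(u_star, np.linalg.norm(u_star))     # MPP and reliability index beta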

Several Second-Order Reliability Methods (SORM) have also been

developed, which use a parabolic approximation (Koyluoglu and Nielsen 1988,

Cai and Elishakoff 1994). The Advanced Mean Value method (AMV), the

Conjugate Mean Value method (CMV) and the Hybrid Mean Value method

(HMV) are the advancements of the analytical methods used in the

probabilistic constraint assessment during the RBDO process (Cruse et al

1988, Wu et al 1990, Youn et al 2003, Youn and Choi 2004). In general, the

AMV method exhibits divergence or a slow rate of convergence in addressing

a concave performance function, although it is good for a convex performance

function. Therefore, a robust and efficient hybrid mean value (HMV) method

has been proposed for the numerical solution of the inverse PMA problem

(Youn et al 2003).

Even with the HMV method, the RBDO process may not be

efficient enough to affordably obtain a reliability based optimum design for

large-scale applications or applications where design sensitivity is

unavailable. A new RBDO methodology is developed to integrate the

proposed HMV method with many MPP search algorithms such as the

Modified HL-RF and AMVFO and general optimization algorithms such as

Sequential Linear Programming (SLP) (Kuei et al 2007, Yuan et al 2007).

Sequential Quadratic Programming (SQP) and the augmented Lagrangian


method can be used to find the MPP (Yu et al 1997, Grandhi and Wang 1998,

Huibin et al 2006, Kuei-Yuan et al 2007).

2.7.2 Simulation-Based Reliability Methods

A variety of approximation schemes are employed to compute these

probabilities, including sampling techniques based on the Monte Carlo

Simulation (MCS) procedure. The accuracy of MCS estimates increases with the sampling size, but low failure probability levels Pf and costly constraint functions make large sample sizes impractical. Many

sampling techniques have been proposed to maintain the advantage of MCS

with smaller samples (Kim and Diwekar 2002).
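As a baseline for comparison, a direct MCS estimate of the failure probability is only a few lines; the sketch below reuses the assumed G = R − S statistics from Section 2.5.1 and simply counts samples in the failure region.

    # Direct Monte Carlo estimate of Pf = P(G < 0) for G = R - S.
    import numpy as np

    rng = np.random.default_rng(1)
    N = 1_000_000
    R = rng.normal(350.0, 25.0, N)   # resistance samples (assumed statistics)
    S = rng.normal(250.0, 30.0, N)   # load samples (assumed statistics)
    print(np.mean(R - S < 0.0))      # close to the analytical Phi(-beta) ~ 0.0052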

Monte Carlo simulation methods do not require any

transformations of the random variables to an uncorrelated standard normal

space, like the FORM methods. A Monte Carlo simulation draws samples

directly from the probability distribution of the random variables and

generates the probability space of the output variables through integration.

One of these methods is Importance Sampling (IS) (Karamchandani et al

1989). The basic idea of importance sampling is to minimize the total number

of sampling points by concentrating on the sampling in the failure region

where the probability density is the greatest. However, in many cases, it is

difficult to know the shape of the failure region, in advance. To overcome this

difficulty, the concept of Adaptive Importance Sampling (AIS) has been

proposed (Bucher 1988, Melchers 1989).

The AIS is based on the idea that the importance-sampling density

function can be gradually refined to reflect the increasing state of knowledge

of the failure region. The sampling space is adaptively adjusted based on the

generated sampling points. Two versions of AIS have been developed. The


first version uses an adaptive surface to approximate the limit state. Based on

different adaptive surfaces, a radius-based method, a plane-based method, and

a curvature-based method have been developed (Wu 1998). The second

version of AIS is called multimodal adaptive importance sampling (Wu 1998,

Zou et al 2002). It uses a multimodal sampling density to emphasize all

important sample points in the failure domain, each in proportion to the true

probability density at the particular sampling point (Zou et al 2002). This

method is applied to the component and system reliability analyses of large

structures (Zou et al 2002, Mahadevan and Dey 1997, Mahadevan and Raghothamachar 2000) with very satisfactory results. To address the high

computational cost of the Monte Carlo method, several more-efficient

simulation-based methods have been developed (Haldar and Mahadevan 2000, Qu and Haftka 2004, McDonald and Mahadevan 2008). Zou et al

(2002) proposed a Reliability-based design method using simulation

techniques and an efficient optimization approach.

2.7.3 Response Surface Methodology

Response Surface Methodology (RSM) is developed to reduce the

computational burden of RBDO, by replacing the original failure function

g(X) with an equivalent function R(X), so that the computational procedure can be simplified while maintaining accuracy (Ranganathan 1990). While

developing the new model, it is important that it allows an easy and efficient

computation of the failure function under the loading/system condition but

still preserves the essential features of the system. This new mathematical

model representing the original limit state function is called the response

surface (Breitung 1996). The representation of the limit state function by the

response surface should be independent of the properties of the basic

variables involved. However, for improving the efficiency and accuracy of the

method including the subsequent reliability analysis, some prior knowledge of


the stochastic properties of the variables is to be used. The limit state surface

can be represented in a polynomial form as given below:

R(X) = a + \sum_{i=1}^{n} b_i X_i + \sum_{i=1}^{n} c_i X_i^2 \qquad (2.6)

where X_i, i = 1, 2, ..., n are the basic variables, and a, b_i, c_i, i = 1, 2, ..., n are the constants to be determined.
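For illustration, the constants in Equation (2.6) can be fitted by ordinary least squares from limit-state evaluations at a set of sampled design points. The following sketch assumes a two-variable problem and an inexpensive stand-in for the true g(X); a real study would evaluate the actual limit state and would normally pick the design points through a formal experimental design.

```python
import numpy as np

rng = np.random.default_rng(2)

def g(x):
    # Inexpensive stand-in for the true limit-state evaluation (assumed).
    return 1.0 + 0.5 * x[:, 0] - 0.2 * x[:, 1] ** 2

# Sampled design points (simple random sampling for the sketch).
n, nvar = 30, 2
x = rng.uniform(-2.0, 2.0, size=(n, nvar))
y = g(x)

# Basis of Equation (2.6): a + sum_i b_i X_i + sum_i c_i X_i^2 (no cross terms).
A = np.hstack([np.ones((n, 1)), x, x**2])
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
a, b, c = coeffs[0], coeffs[1:1 + nvar], coeffs[1 + nvar:]

def R(x):
    # Response surface used in place of g(X) in the reliability analysis.
    return a + x @ b + (x**2) @ c

print(R(np.array([[0.5, -1.0]])))
```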

Methods such as orthogonal designs or small composite designs

have been considered to fit second-order probabilistic responses in RBDO. It

has been found that RSMs developed for deterministic design optimization

are not suitable for reliability analysis or RBDO. Lancaster (1986) proposed a new RSM specifically suited for reliability analysis, based on the moving least squares method and design of experiments. The moving least squares method better approximates the implicit response by imposing a variable weight over a compact support. In the literature, most

RSMs have been developed utilizing only the response data, and little attempt

has been made to use both the response and sensitivity data to construct

approximate responses (Roux et al 1998, Zheng and Das 2000). An RSM that

utilizes only response data may be acceptable as far as the response values are

concerned; however, the approximate design sensitivity obtained from the

approximate response may contain sufficient error to cause difficulty in

RBDO. If accurate sensitivity information can be obtained efficiently, it can

be incorporated in the moving least squares method to substantially

improve the accuracy of the approximate responses, as well as the sensitivities

required for RBDO (Kharmanda et al 2002, Byeng and Choi 2004, Kaymaz

and McMohan 2004, Qu and Haftka 2004).
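To make the variable weight over a compact support concrete, a minimal one-dimensional moving least squares sketch is given below. The cubic spline weight function is one common choice in the meshfree literature and is assumed here for illustration; it is not claimed to be the specific formulation of the cited papers.

```python
import numpy as np

def mls_weight(r):
    # Compactly supported cubic spline weight: nonzero only for r <= 1,
    # so each evaluation point is influenced only by nearby samples.
    w = np.zeros_like(r)
    near = r <= 0.5
    mid = (r > 0.5) & (r <= 1.0)
    w[near] = 2.0/3.0 - 4.0*r[near]**2 + 4.0*r[near]**3
    w[mid] = 4.0/3.0 - 4.0*r[mid] + 4.0*r[mid]**2 - (4.0/3.0)*r[mid]**3
    return w

def mls_eval(x_eval, x_data, y_data, support):
    # Weighted least-squares fit of a local linear basis [1, x] around x_eval;
    # the weights move with x_eval, hence "moving" least squares. The support
    # radius must cover enough points for the normal equations to be solvable.
    r = np.abs(x_data - x_eval) / support
    w = mls_weight(r)
    A = np.column_stack([np.ones_like(x_data), x_data])
    W = np.diag(w)
    coeffs = np.linalg.solve(A.T @ W @ A, A.T @ W @ y_data)
    return coeffs[0] + coeffs[1] * x_eval

x_data = np.linspace(0.0, 1.0, 11)
y_data = np.sin(2.0 * np.pi * x_data)    # stand-in response data
print(mls_eval(0.37, x_data, y_data, support=0.3))
```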


2.7.4 Hybrid Method

A hybrid reliability method combines the best features of

FORM/SORM, MCS and response surface approaches to achieve both

accuracy and efficiency (Zou et al 2002). Youn et al (2003) presented the

hybrid mean value method to adaptively select either the AMV method or the

CMV method, once the performance function type is identified. In order to

reduce the high computational time of the nested problems, Kharmanda et al (2002) proposed a new formulation, the hybrid design space, by combining

deterministic and random spaces. Mohsine et al (2006) have proposed a

modification of the formulation of the hybrid method to improve the optimal

solutions. The proposed method is called the Improved Hybrid Method

(IHM).

2.8 RBDO SOFTWARE

Numerous computer programs have been developed by researchers

to implement the FORM/SORM procedures. NESSUS (Numerical Evaluation

of Stochastic Structures Under Stress), developed at the Southwest Research Institute, combines probabilistic analysis with a general-purpose finite

element/boundary element code. Design analysis is performed using the

displacement method, the mixed-iterative formulation or the boundary

element method, and the iterative perturbation is used for the sensitivity

analysis (Wu 1998). PROBAN (PROBability ANalysis) was developed at Det Norske Veritas (Hovik, Norway). It is designed to be a general probabilistic

analysis tool. PROBAN is capable of estimating the probability of failure

using the FORM or SORM. The approximate FORM/SORM results can be

updated through importance sampling. The probability of general events can

be computed by the Monte Carlo simulation and directional sampling.


CALREL (CAL-RELiability) is a general-purpose reliability

analysis program designed to compute probability integrals. It incorporates

four general techniques for computing the probability of failure, namely,

FORM, SORM, directional simulation with exact or approximate surfaces,

and Monte Carlo simulation. Khalessi et al (1993) developed FEBREL (Finite

Element-Based RELiability) as a general-purpose, probabilistic, finite

element computer program at Rockwell International Corporation's Space

System Division. This software uses the ANSYS general purpose finite

element computer program to provide the necessary computational framework

for analyzing complex structures, while the FEBREL reliability computer

program provides the basis for modeling, analysis of uncertainties, and

computation of probabilities.

Frangopol and Estes (1998) developed RELSYS (RELiability of

SYStems) to compute the system reliability of structures modeled as a

series-parallel combination of its components. A probabilistic fracture

mechanics code called DARWIN (Darwin’s user guide 2006) has been

developed to predict the risk of fracture associated with rotors and disks

containing material anomalies.

2.9 RBDO APPLICATIONS

Several applications of Reliability Based Design Optimization are

reported in the literature.

2.9.1 Civil Structures

RBDO methods have been applied to a wide range of design and

maintenance problems in civil and architectural engineering (Frangopol 1997,

Enright and Frangopol 1998). The methods have been successfully applied in several areas of civil engineering, such as buildings,

bridges, nuclear and off-shore structures (Davidson et al 1980, Feng and

Moses 1986). Enright and Frangopol (1999) studied the condition of

reinforced concrete girder bridges, using a time-variant system reliability

approach, in which both load and resistance are time-variant quantities.

Several system models are considered, including failure of any girder (series

system) and failure of a specified number of adjacent girders (series-parallel

system). Adaptive importance sampling is used to determine the

cumulative-time system failure probability. The influence of resistance

degradation and post-failure load redistribution is included. Pettit and Grandhi

(2000) addressed the use of a lightweight composite modular deck to replace

the existing deteriorated reinforced concrete deck.

2.9.2 Automotive Industry

Yang et al (2002) applied reliability-based optimal design to the

crashworthiness design of a full vehicle system in multi-crash scenarios. They

demonstrated that the weight could be reduced compared with a deterministic

(baseline) design, while satisfying the safety constraints. Zou et al (2002)

proposed an efficient method for the reliability analysis of a vehicle

body-door subsystem with respect to one of the important quality issues—the

door closing energy. The developed method combines the optimization-based

and simulation-based approaches and is particularly applicable for problems

with highly non-linear and implicit limit state functions. Byeng and Choi

(2004) proposed a new methodology for RBDO by integrating the RSM and

the HMV method of PMA, and applied it to a large-scale vehicle side-impact design application.


2.9.3 Aerospace Industry

Uncertainty is introduced primarily at the conceptual design level,

where reliability analysis methods are combined with system level

deterministic analyses (Rao 1986, Yang et al 1990). Yang and Nikolaidis

(1991) applied RBDO to a preliminary design of the wing of a small

commuter airplane subjected to gust loads that are modeled using

probabilistic distributions. Grandhi and Wang (1998) applied optimization to

minimize the weight of a twisted gas turbine blade subjected to a probabilistic

constraint on natural frequency with the consideration of uncertainties in the

material properties and thickness distributions. A general framework for stochastic multi-disciplinary aircraft design has been presented, accounting for various sources of uncertainty, such as modeling and economic variability,

and aiming for system affordability (Mavris and DeLaurentis 1998, Leverant

et al 2003). Qu et al (2000) applied RBDO to a hydrogen storage tank at cryogenic temperatures. Stroud et al (2002) proposed the probabilistic design

of a plate-like wing to meet flutter and strength requirements.

2.9.4 Composite Materials

Composite materials are being widely used in modern structures,

such as aircraft and space vehicles, because of their high performance, high

temperature resistance, tailoring facility, and light weight. Considerable

research has been carried out on the design and failure analysis of composite

structures (Thanedar and Chamis 1995, Chao 1996). Several applications of

RBDO methods to the design of composite thin walled structures have been

reported in the literature (Richard and Perreux 2000, Biagi and Medico 2008).

Yang and Ma (1989), Miki et al (1993) and Su et al (2002) have considered

probabilistic load conditions and material properties including manufacturing


uncertainties. Antonio et al (1996) considered composite structures with

degradation models and buckling instabilities.

Kogiso et al (1997) applied RBDO to a symmetric laminated plate. A reliability-based optimization procedure was developed and applied

to minimize the weight of eight fiber reinforced polymer composite bridge

deck panel configurations (Enright and Frangopol 1999). Mahadevan and

Liu (1998) proposed a probabilistic optimum design for composite laminates.

Qu et al (2000) applied RBDO to composites in a cryogenic environment. The results of experiments and research into composite

materials show large statistical variations in their mechanical properties (Lin

2000). Therefore, probabilistic analysis plays an important role in reliability

assessment (Nicholaidis et al 2008).

2.9.5 Other Applications

Pu et al (1997) applied RBDO to a typical frame of a

small-waterplane-area twin-hull ship subjected to system-reliability

constraints on failure criteria, considering uncertainties in the loads and

material strength. Yang et al (2005) solved an exhaust system problem using different RBDO methods and compared the results. Kharmanda and Olhoff (2004) applied RBDO to topology optimization. Alan et al (2007) focused on improving the design and reliability of robotic systems by addressing the uncertainties in the operating point.

The percentage contribution of the RBDO study in different

applications is shown in Figure 2.6.


Figure 2.6 RBDO in Different Applications

2.10 DESIGN OPTIMIZATION OF COMPOSITE LAMINATES

The scope of composite materials in engineering design and the use of composite material design in various real-life scenarios are extensively reviewed (Agarwal and Broutman 1990). Cheng et al (1980) analyzed, more generally, the buckling problems of non-homogeneous anisotropic cylindrical

shells under combined axial, radial and torsional loads with all four boundary

conditions at each end of the cylinder. Methods are proposed by Nshanian and

Pappast (1983) for the determination of the optimal ply angle variation through

the thickness of symmetric angle-ply shells of uniform thickness. Weeton et al

(1986) briefly described the application possibilities of composites in the automotive industry to manufacture composite elliptic springs, drive

shafts and leaf springs. Beard and Johnson (1986) have discussed the potential

for composites in automotive applications. Shell-theory-based critical speed analyses of drive shafts have been presented by Dos Reis et al (1987). Pollard (1989) studied the possibility of using polymer matrix

composites in driveline applications. Patricia (1990) investigated the dynamic

behavior of supercritical composite drive shafts for helicopter applications.

Faust et al (1990) described the considerable interest on the part of both the

helicopter and automobile industries in the development of lightweight drive

shafts. Haftka and Walsh (1992) discussed extensively the stacking-sequence


optimization for buckling of laminated plates by integer programming.

Kim et al (1992) minimized the weight of composite laminates with ply drop

under a strength constraint. Rajeev and Krishnamoorthy (1992) proposed a

method for converting a constrained optimization problem into an

unconstrained optimization problem.
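A generic sketch of such a penalty transformation is shown below; the multiplicative penalty form and the coefficient k are assumptions for illustration, not necessarily the exact formulation of Rajeev and Krishnamoorthy (1992).

```python
def penalized_objective(f, constraints, k=10.0):
    """Turn min f(x) s.t. g_j(x) <= 0 into an unconstrained problem.

    Violated constraints inflate the objective so that any unconstrained
    optimizer (e.g. a genetic algorithm) is steered back toward feasibility.
    The penalty form and coefficient k are assumed for this sketch.
    """
    def phi(x):
        violation = sum(max(0.0, gj(x)) for gj in constraints)
        return f(x) * (1.0 + k * violation)
    return phi

# Hypothetical usage: minimize weight subject to a stress limit.
weight = lambda x: x[0] * x[1]
stress = lambda x: 100.0 / (x[0] * x[1]) - 50.0   # <= 0 when feasible
phi = penalized_objective(weight, [stress])
print(phi([2.0, 1.5]))  # feasible point: penalty leaves the objective unchanged
```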

Serge (1994) examined the optimum design of laminated plates and

shells subjected to constraints of strength, stiffness, buckling loads, and

fundamental natural frequencies. Ganapathi and Varadan (1994) studied

extensively the nonlinear free flexural vibrations of laminated circular

cylindrical shells. A method of analysis involving Kirchhoff-Love's first approximation theory and Ritz's procedure is presented and used to study the influence of boundary conditions and fiber orientation on the natural frequencies of thin orthotropic laminated cylindrical shells (Lam and Toy 1995).

A first order theory is presented by Lee (1995) to determine the

natural frequencies of an orthotropic shell. A theoretical analysis is presented

for determining the buckling torque of a cylindrical hollow shaft with layers of

arbitrarily laminated composite materials by means of various thin-shell

theories (Bert and Kim 1995). Lien-Wen Chen et al (1998) analyzed the

stability behaviour of rotating composite shafts under axial compressive loads.

Bauchau et al (1998) measured the torsional buckling loads of graphite/epoxy

shafts, which are in good agreement with theoretical predictions based on a

general shell theory including elastic coupling effects and transverse shearing

deformations.

Riche and Haftka (1993) studied the use of the Genetic Algorithm to

optimize the stacking sequence of composite laminates for buckling load

maximization. Various genetic parameters including the population size, the

probability of mutation, and the probability of crossover are optimized by


numerical experiments. Park et al (2001) used Genetic Algorithms for the

optimal design of symmetric composite laminates, subject to various loading

and boundary conditions. Kim et al (2001) studied an adhesively bonded joint for a composite propeller shaft. The main features of Genetic Algorithms and

the several ways in which they can solve difficult design problems, such as the

design of composite materials, are discussed by Gabor and Ekart (2003). The

General Motors pickup trucks that adopted the composite drive shaft (a Spicer product) enjoyed a demand three times the projected sales in their first year (Lee et al 2004).
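A compact sketch of how a GA can encode and evolve a stacking sequence is given below. The discrete ply-angle alphabet, the surrogate fitness function and all GA parameter values are illustrative assumptions, not the settings of the studies cited above; a real application would evaluate buckling load or strength through a laminate analysis.

```python
import numpy as np

rng = np.random.default_rng(3)

ANGLES = np.array([0, 45, -45, 90])       # discrete ply-angle alphabet (assumed)
N_PLIES, POP, GENS, P_MUT = 8, 40, 100, 0.05

def fitness(seq):
    # Surrogate for a laminate performance evaluation; a real study would
    # call a buckling-load or strength analysis of the stacking sequence here.
    return -abs(np.sin(np.deg2rad(seq)).sum() - 2.0)

pop = rng.choice(ANGLES, size=(POP, N_PLIES))
for _ in range(GENS):
    fit = np.array([fitness(ind) for ind in pop])
    # Tournament selection: keep the fitter of two randomly chosen individuals.
    i, j = rng.integers(POP, size=(2, POP))
    parents = pop[np.where(fit[i] > fit[j], i, j)]
    # One-point crossover between consecutive parent pairs.
    cut = rng.integers(1, N_PLIES, size=POP)
    children = parents.copy()
    for k in range(0, POP - 1, 2):
        children[k, cut[k]:] = parents[k + 1, cut[k]:]
        children[k + 1, cut[k]:] = parents[k, cut[k]:]
    # Mutation: reassign a ply angle with small probability.
    mut = rng.random(children.shape) < P_MUT
    children[mut] = rng.choice(ANGLES, size=int(mut.sum()))
    pop = children

best = max(pop, key=fitness)
print("Best stacking sequence found:", best)
```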

2.11 CONCLUSION BASED ON REVIEW

The following observations are made from the literature review:

- Uncertainties are inherently present in all real-life engineering

systems. To ensure high reliability and safety, uncertainties

inherent to or encountered by the product during the entire life

must be considered in the design process (Oberkampf et al

2004). It is found from the literature that design optimization

methodologies should account for the stochastic nature of

engineering systems.

- In traditional deterministic design optimization methods, uncertainty is handled through a safety factor, which lacks a rigorous scientific basis and results in overly conservative designs (Qu and Haftka 2004). The literature

review clearly shows that there is a need to use new design

optimization methods such as the Robust Design Optimization

and Reliability Based Design Optimization (RBDO) that

incorporate uncertainty in the engineering design.


- In general, it is found that the Robust Design Optimization method yields designs that are insensitive to uncertainties

(Frangopol 2003). The literature review reveals that

Reliability Based Design Optimization is a more rational

approach that quantifies the reliability or risk of failure in

probabilistic terms and includes these terms directly in design

optimization as reliability constraints (Zou and Mahadevan

2006).

- Many researchers have used reliability estimation methods like the First Order Reliability Method (FORM) and the Second Order Reliability Method (SORM) in Reliability Based Design Optimization studies (Zhao and Ono 1999).

These approximation methods do not guarantee optimal

solutions for highly non-linear limit state functions. Monte Carlo Simulation-based methods are more suitable for solving

such problems (Mahadevan and Dey 1997).

- Composite materials find wide application in industries such as the aerospace and automotive industries, which seek ways to reduce weight, because these materials have higher

strength to weight and stiffness to weight ratios. These

composite materials exhibit large variations in their material and geometric properties (Philippids 2000). Several

researchers have reported the use of Reliability Based Design Optimization in civil structural and aerospace industry applications (Papadrakakis 1998, Stroud et al 2002). But the application of RBDO to composite material design has scarcely been examined.

- Reliability Based Design Optimization applications to determine the optimum stacking sequence of composite plates, cylindrical shells and pipes are reported in the literature

(Kogiso 1997, Enright and Frangopol 1999). The application

of RBDO to composite drive shaft design has seldom been investigated.

The composite drive shaft design optimization problem is

considered in this study. The design of composite laminates is a complex

combinatorial optimization problem. It is difficult to solve this problem using

traditional mathematical programming techniques. Hence an attempt is made

to solve the design optimization problem using search heuristics such as the

Genetic Algorithm, Particle Swarm Optimization and Evolutionary Programming.

2.12 SUMMARY

Reliability Based Design Optimization considers the uncertainties

to quantify the risk and reliability of the components or the system at the

design level. In the literature review on traditional design optimization, various types and models of design uncertainty and a review of design methodologies under uncertainty are reported. An overview of the

formulation, methodologies and applications of RBDO is also presented. The

literature review reveals increasing interest among researchers in exploring the use of RBDO in real-life applications. Though a great deal of research has been carried out in the area of RBDO, there appears to be a lack of attention to RBDO applications for composite materials. Composite materials

offer an excellent strength-to-weight ratio and find wide applications in the automotive and aerospace industries. The application of heuristics for the

reliability based design optimization of a composite drive shaft is carried out

in this study.