
ADAPTIVE GENERALIZED POLYNOMIAL CHAOS FOR NONLINEAR RANDOM OSCILLATORS∗

    D. LUCOR† AND G. E. KARNIADAKIS†

SIAM J. SCI. COMPUT. © 2004 Society for Industrial and Applied Mathematics, Vol. 26, No. 2, pp. 720–735

Abstract. The solution of nonlinear random oscillators subject to stochastic forcing is investigated numerically. In particular, solutions to the random Duffing oscillator with random Gaussian and non-Gaussian excitations are obtained by means of the generalized polynomial chaos (GPC). Adaptive procedures are proposed to lower the increased computational cost of the GPC approach in large-dimensional spaces. Adaptive schemes combined with the use of an enriched representation of the system improve the accuracy of the GPC approach by reordering the random modes according to their magnification by the system.

    Key words. uncertainty, Duffing oscillator, polynomial chaos, stochastic modeling

    AMS subject classifications. 65C20, 60H35, 65C30

    DOI. 10.1137/S1064827503427984

1. Introduction. Fully nonlinear oscillators subject to mild or extreme noisy forces are of great interest for multiple disciplinary engineering communities (e.g., ocean structures [1]). Many mechanical systems involving flow-structure interaction can be modeled by the Duffing oscillator equation; see [2, 3]. In the present work, we determine the response of nonlinear single-degree-of-freedom mechanical systems subject to random excitations (Gaussian or non-Gaussian). We are particularly interested in the determination of second-moment characteristics of the response of stochastic Duffing oscillators.

The method we adopt in this work is an extension of the classical polynomial chaos approach [4]. This representation is an infinite sum of multidimensional orthogonal polynomials of standard random variables with deterministic coefficients. Practically, only a finite number of terms in the expansion can be retained, as the sum has to be truncated. Consequently, the multidimensional random space has a finite number of dimensions n, and the highest order of the orthogonal polynomial is finite, denoted here by p. The Hermite-chaos expansion, which is the basis of the classical polynomial chaos, is effective in solving stochastic differential equations with Gaussian inputs as well as certain types of non-Gaussian inputs [5, 6, 7]. Its theoretical justification is based on the Cameron–Martin theorem [8]. However, it has been found that for general non-Gaussian random inputs, the optimal exponential convergence rate is not achieved, and in some cases the convergence rate is in fact severely deteriorated; see [9, 10]. Another issue with the polynomial chaos decomposition is the fast growth of the dimensionality of the problem with respect to the number of random dimensions and the highest order of the retained polynomial; see Table 1.1. This issue becomes critical if one deals with a very noisy input (white noise) or a strongly nonlinear problem or both. Indeed, an accurate representation of a noisy input requires using a large number of random dimensions, while strong nonlinear dynamics can only be captured accurately with the use of a high polynomial order.

∗Received by the editors May 15, 2003; accepted for publication (in revised form) March 22, 2004; published electronically December 22, 2004. This work was supported by ONR and NSF, and computations were performed at the DoD HPCM centers.

http://www.siam.org/journals/sisc/26-2/42798.html
†Division of Applied Mathematics, Brown University, Providence, RI 02912 ([email protected], [email protected]).



Table 1.1
Number of unknown deterministic coefficients in the polynomial chaos representation as a function of the number of random dimensions n and the highest polynomial order p.

          p = 3    p = 5     p = 7       p = 9
n = 2        10       21        36          55
n = 4        35      126       330         715
n = 8       165    1,287     6,435      24,310
n = 16      969   20,349   245,157   2,042,975
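The entries of Table 1.1 follow from the standard count for a total-degree truncation, P + 1 = (n + p)!/(n! p!). The short Python check below (our illustration, not part of the original text) reproduces the table and makes the growth explicit.

```python
from math import comb

# Number of GPC terms P + 1 = (n + p)! / (n! p!) for a total-degree truncation;
# reproduces Table 1.1.
for n in (2, 4, 8, 16):
    print(n, [comb(n + p, p) for p in (3, 5, 7, 9)])
# n = 16, p = 9 already requires 2,042,975 deterministic coefficients.
```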

In this paper, we consider the case of the random response of a Duffing oscillator subject to nonstationary additive noise, where the forcing is represented by a deterministic time-dependent periodic function multiplied by a random variable with different distributions. We also study the case of the random response of a Duffing oscillator subject to a stationary additive noise represented by a random process with different distributions. The objective is twofold: First, we investigate what type of stochastic solutions we obtain in comparison with the well-studied deterministic Duffing oscillator. Second, we obtain the stochastic solutions at reduced cost using adaptive procedures first pioneered by Li and Ghanem in [11].

The paper is organized as follows. In section 2 we give a brief overview of the generalized polynomial chaos, and subsequently we apply it to the stochastic Duffing oscillator. In section 3 we first consider the case of a periodic excitation with random amplitude and then focus on the adaptive procedure and present an extension to Ghanem's original proposal. We conclude in section 4 with a brief summary.

    2. Generalized polynomial chaos.

2.1. The Wiener–Askey representation. The Wiener–Askey polynomial chaos or generalized polynomial chaos (GPC) expansion is an extension of the original polynomial chaos. It is well suited to represent more general (Gaussian and non-Gaussian) random inputs. The expansion basis in this case consists of polynomial functionals from the Askey family [9, 12]. Since each type of polynomial from the Askey scheme forms a complete basis in the Hilbert space, each corresponding Wiener–Askey expansion converges to an L2 functional in the L2 sense in the appropriate Hilbert functional space; this is a generalized result of the Cameron–Martin theorem; see [8, 13].

A general second-order random process X(θ) is represented by

X(\theta) = a_0 I_0 + \sum_{i_1=1}^{\infty} c_{i_1} I_1(\zeta_{i_1}(\theta))
+ \sum_{i_1=1}^{\infty} \sum_{i_2=1}^{i_1} c_{i_1 i_2} I_2(\zeta_{i_1}(\theta), \zeta_{i_2}(\theta))
+ \sum_{i_1=1}^{\infty} \sum_{i_2=1}^{i_1} \sum_{i_3=1}^{i_2} c_{i_1 i_2 i_3} I_3(\zeta_{i_1}(\theta), \zeta_{i_2}(\theta), \zeta_{i_3}(\theta)) + \cdots,    (2.1)

where I_n(\zeta_{i_1}, \ldots, \zeta_{i_n}) denotes the GPC of order n in terms of the random vector \zeta = (\zeta_{i_1}, \ldots, \zeta_{i_n}).


For example, one possible choice for I_n is the Hermite polynomials, which correspond to the original Wiener–Hermite polynomial chaos H_n.

In the GPC expansion, the polynomials I_n are not restricted to Hermite polynomials but rather can be all types of the orthogonal polynomials from the Askey scheme. For example, the expression of the Jacobi polynomials P_n^{(\alpha,\beta)} is given by

I_n(\zeta_{i_1}, \ldots, \zeta_{i_n}) \equiv P_n^{(\alpha,\beta)}(\zeta_{i_1}, \ldots, \zeta_{i_n})
= \frac{(-1)^n (1-\zeta)^{-\alpha} (1+\zeta)^{-\beta}}{2^n n!} \, \frac{\partial^n}{\partial \zeta_{i_1} \cdots \partial \zeta_{i_n}} \left[ (1-\zeta)^{n+\alpha} (1+\zeta)^{n+\beta} \right],

where \zeta denotes the vector consisting of n Beta random variables (\zeta_{i_1}, \ldots, \zeta_{i_n}). For notational convenience, we rewrite (2.1) as

X(\theta) = \sum_{j=0}^{\infty} \hat{c}_j \, \Phi_j(\zeta),    (2.2)

where there is a one-to-one correspondence between the functions I_n(\zeta_{i_1}, \ldots, \zeta_{i_n}) and \Phi_j(\zeta).

The orthogonality relation of the GPC takes the form

\langle \Phi_i \Phi_j \rangle = \langle \Phi_i^2 \rangle \, \delta_{ij},    (2.3)

where \delta_{ij} is the Kronecker delta and \langle \cdot , \cdot \rangle denotes the ensemble average, which is the inner product in the Hilbert space of the variables \zeta. We also have

\langle f(\zeta) g(\zeta) \rangle = \int f(\zeta) \, g(\zeta) \, W(\zeta) \, d\zeta    (2.4)

or

\langle f(\zeta) g(\zeta) \rangle = \sum_{\zeta} f(\zeta) \, g(\zeta) \, W(\zeta)    (2.5)

in the discrete case. Here W(\zeta) is the weighting function corresponding to the GPC basis \{\Phi_i\}.

Most of the orthogonal polynomials from the Askey scheme have weighting functions that take the form of the probability function of certain types of random distributions. We then choose the type of independent variables \zeta in the polynomials \{\Phi_i(\zeta)\} according to the type of random distributions, as shown in Table 2.1. Legendre polynomials, which are a special case of the Jacobi polynomials with parameters \alpha = \beta = 0, correspond to an important distribution, the Uniform distribution.
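As a quick numerical illustration of (2.3)–(2.5) (our own sketch, not part of the original text), the Gram matrix of the first few Legendre-chaos polynomials under the Uniform density W(\zeta) = 1/2 on [-1, 1] can be checked with Gauss–Legendre quadrature:

```python
import numpy as np
from numpy.polynomial import legendre as Le

# Orthogonality check (2.3) for Legendre-chaos under the Uniform density on [-1, 1].
x, w = Le.leggauss(12)                     # quadrature exact up to degree 23
w = 0.5 * w                                # uniform probability density W = 1/2
Phi = np.array([Le.legval(x, np.eye(6)[m]) for m in range(6)])   # Phi[m, q] = P_m(x_q)
gram = np.einsum('iq,jq,q->ij', Phi, Phi, w)                     # <Phi_i Phi_j>
print(np.allclose(gram, np.diag(1.0 / (2.0 * np.arange(6) + 1.0))))  # True: <Phi_m^2> = 1/(2m+1)
```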

2.2. Duffing oscillator and its GPC representation. We consider the Duffing oscillator subject to external forcing, i.e.,

\ddot{x}(t,\theta) + c \, \dot{x}(t,\theta) + k \left[ x(t,\theta) + \epsilon \, x^3(t,\theta) \right] = f(t,\theta).    (2.6)

This equation has been normalized with respect to the mass, so the forcing f(t) has units of acceleration. The damping factor c and spring factor k are defined as follows:

c = 2 \zeta \omega_0 \quad \text{and} \quad k = \omega_0^2,    (2.7)


Table 2.1
Correspondence between the type of GPC and the type of random inputs (N ≥ 0 is a finite integer).

             Random inputs         GPC                 Support
Continuous   Gaussian              Hermite-chaos       (−∞, ∞)
             Gamma                 Laguerre-chaos      [0, ∞)
             Beta                  Jacobi-chaos        [a, b]
             Uniform               Legendre-chaos      [a, b]
Discrete     Poisson               Charlier-chaos      {0, 1, 2, . . .}
             Binomial              Krawtchouk-chaos    {0, 1, . . . , N}
             Negative binomial     Meixner-chaos       {0, 1, 2, . . .}
             Hypergeometric        Hahn-chaos          {0, 1, . . . , N}

where ζ and ω0 are, respectively, the damping ratio and the natural frequency of the system. This system can become stochastic if the external forcing or the input parameters or both are random quantities. Those random quantities can evolve in time (random process) or not (random variable).

Nonconservative restoring forces tend to correspond to hysteretic materials whose structural properties change in time when subjected to cyclic stresses. A popular restoring force model used in random vibration analysis consists of the superposition of a linear force \alpha x(t) and a hysteretic force (1-\alpha) Q(t) (see [14]), so that we have

\ddot{x}(t) + c \, \dot{x}(t) + k \left( \alpha x(t) + (1-\alpha) Q(t) \right) = f(t),    (2.8)

\dot{Q}(t) = \alpha \, \dot{x}(t) - \beta \, \dot{x}(t) |Q(t)|^n - \rho \, |\dot{x}(t)| \, Q(t) \, |Q(t)|^{n-1}.    (2.9)

The coefficients \alpha, \beta, \rho, and n control the shape of the hysteretic loop; some of these coefficients may vary in time.

Here, we focus on the case of the Duffing oscillator; other cases can be deduced from this one. Let us consider the stochastic differential equation (2.6), where the damping factor c and the spring constant k are random processes with unknown correlation functions and the external forcing is a random process with a given correlation function. We decompose the random process representing the forcing term in its truncated Karhunen–Loève expansion up to the nth random dimension; see [15]. We have

f(t,\theta) = \bar{f}(t) + \sigma_f \sum_{i=1}^{n} \sqrt{\lambda_i} \, \phi_i(t) \, \xi_i(\theta) = \sum_{i=0}^{n} f_i(t) \, \xi_i.    (2.10)

Assuming that the correlation functions for the coefficients c and k are not known, we can decompose the random input parameters in their GPC expansions [5, 6, 7] as follows:

c(t,\theta) = \sum_{j=0}^{P} c_j(t) \, \Phi_j(\xi(\theta)) \quad \text{and} \quad k(t,\theta) = \sum_{j=0}^{P} k_j(t) \, \Phi_j(\xi(\theta)).    (2.11)

Finally, the solution of the problem is sought in the form given by its truncated GPC expansion

x(t,\theta) = \sum_{i=0}^{P} x_i(t) \, \Phi_i(\xi(\theta)),    (2.12)


where n is the number of random dimensions and p is the highest polynomial order of the expansion.

By substituting all expansions in the governing equation (see (2.6)), we obtain

\sum_{i=0}^{P} \ddot{x}_i(t) \, \Phi_i + \sum_{j=0}^{P} c_j(t) \, \Phi_j \sum_{i=0}^{P} \dot{x}_i(t) \, \Phi_i
+ \sum_{j=0}^{P} k_j(t) \, \Phi_j \left( \sum_{i=0}^{P} x_i(t) \, \Phi_i + \epsilon \left( \sum_{i=0}^{P} x_i(t) \, \Phi_i \sum_{k=0}^{P} x_k(t) \, \Phi_k \sum_{l=0}^{P} x_l(t) \, \Phi_l \right) \right) = \sum_{i=0}^{n} f_i(t) \, \xi_i.    (2.13)

We project the above equation onto the random space spanned by the orthogonal polynomial basis \Phi_m; i.e., we take the inner product with each basis function and then use the orthogonality relation. We obtain a set of coupled deterministic nonlinear differential equations

\ddot{x}_m(t) + \frac{1}{\langle \Phi_m^2 \rangle} \sum_{i=0}^{P} \sum_{j=0}^{P} c_j(t) \, \dot{x}_i(t) \, e_{ijm}
= - \frac{1}{\langle \Phi_m^2 \rangle} \sum_{i=0}^{P} \sum_{j=0}^{P} k_j(t) \, x_i(t) \, e_{ijm}
- \frac{\epsilon}{\langle \Phi_m^2 \rangle} \left( \sum_{i=0}^{P} \sum_{j=0}^{P} \sum_{k=0}^{P} \sum_{l=0}^{P} k_j \, x_i(t) \, x_k(t) \, x_l(t) \, e_{ijklm} \right) + f_m(t),    (2.14)

where m = 0, 1, 2, \ldots, P, e_{ijm} = \langle \Phi_i \Phi_j \Phi_m \rangle, and e_{ijklm} = \langle \Phi_i \Phi_j \Phi_k \Phi_l \Phi_m \rangle; here \langle \cdot , \cdot \rangle denotes an ensemble average. These coefficients as well as \langle \Phi_m^2 \rangle can be determined analytically or numerically using multidimensional numerical quadratures. This system of equations consists of (P+1) nonlinear deterministic equations, with each equation corresponding to one random mode. Standard solvers can be employed to obtain the numerical solutions.
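For a one-dimensional Hermite-chaos basis, the Galerkin coefficients e_{ijm}, e_{ijklm}, and \langle \Phi_m^2 \rangle can be evaluated with a Gauss–Hermite rule; the short sketch below (our notation, not code from the paper) illustrates the quadrature route just mentioned.

```python
import numpy as np
from numpy.polynomial import hermite_e as He

# Galerkin coefficients for a 1-D Hermite-chaos basis by Gauss-Hermite quadrature.
p = 5                                      # highest polynomial order
x, w = He.hermegauss((5 * p) // 2 + 1)     # exact for polynomials of degree <= 5p
w = w / np.sqrt(2.0 * np.pi)               # weights of the standard Gaussian density
Phi = np.array([He.hermeval(x, np.eye(p + 1)[m]) for m in range(p + 1)])  # He_m(x_q)

norm2 = Phi**2 @ w                                                     # <Phi_m^2> = m!
e3 = np.einsum('iq,jq,mq,q->ijm', Phi, Phi, Phi, w)                    # e_ijm
e5 = np.einsum('iq,jq,kq,lq,mq,q->ijklm', Phi, Phi, Phi, Phi, Phi, w)  # e_ijklm

print(norm2)        # [1, 1, 2, 6, 24, 120]
print(e3[1, 1, 2])  # <He_1 He_1 He_2> = 2
```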

    3. Duffing oscillator.

3.1. Periodic excitation with random amplitude. We consider a viscously damped nonlinear Duffing oscillator subject to random external forcing excitations:

\ddot{x}(t,\theta) + c \, \dot{x}(t,\theta) + k \left[ x(t,\theta) + \epsilon \, x^3(t,\theta) \right] = f(t,\theta),
x(0,\theta) = x_0 \quad \text{and} \quad \dot{x}(0,\theta) = \dot{x}_0, \qquad t \in [0, T].    (3.1)

In this case, the random forcing is treated as a nonstationary random variable and has the form

f(t,\theta) = \bar{f}(t) + \sigma_f(t) \, \xi(\theta) + \gamma_f(t) \, \xi^2(\theta) + \delta_f(t) \, \xi^3(\theta),    (3.2)

where \xi is a random variable of known distribution and the coefficients are given by

\bar{f} = \bar{A} \left( \alpha + \bar{A}^2 \beta \right), \quad \sigma_f = \sigma_A \left( \alpha + 3 \bar{A}^2 \beta \right), \quad \gamma_f = 3 \bar{A} \, \sigma_A^2 \, \beta, \quad \delta_f = \sigma_A^3 \, \beta,

\alpha = \left( k - \omega^2 \right) \cos(\omega t) - c \, \omega \sin(\omega t), \quad \beta = k \epsilon \cos^3(\omega t).


An analytical solution can be obtained for this forcing, of the form

x(t) = \left( \bar{A} + \sigma_A \xi \right) \cos(\omega t + \phi)    (3.3)

with \phi = 0 and \bar{A}, \sigma_A, and \omega being some fixed constants. The random variable \xi can have different distributions. In this section, we focus on a Gaussian (Case I) and a Uniform distribution (Case II).

Case I. If \xi is a Gaussian random variable, the forcing can be represented exactly by the GPC basis using Hermite-chaos and has the following form:

f(t,\theta) = \left( \bar{f}(t) + \gamma_f(t) \right) + \left( \sigma_f(t) + 3 \delta_f(t) \right) \xi + \gamma_f(t) \left( \xi^2 - 1 \right) + \delta_f(t) \left( \xi^3 - 3 \xi \right).    (3.4)

Case II. If \xi is a random variable with a Uniform distribution (a particular case of a Beta distribution), the forcing can be represented exactly by the GPC basis using Legendre-chaos (a particular case of the Jacobi polynomials) and has the following form:

f(t,\theta) = \bar{f}(t) + \frac{\gamma_f(t)}{3} + \left[ \sigma_f(t) + \frac{3}{5} \delta_f(t) \right] \xi + \frac{2}{3} \gamma_f(t) \left[ \frac{1}{2} \left( 3 \xi^2 - 1 \right) \right] + \frac{2}{5} \delta_f(t) \left[ \frac{1}{2} \left( 5 \xi^3 - 3 \xi \right) \right].    (3.5)
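The chaos coefficients in (3.4) and (3.5) are simply the monomials 1, \xi, \xi^2, \xi^3 re-expanded in the Hermite and Legendre bases; a short check with NumPy's polynomial module (our illustration, with placeholder numerical values for the coefficients) confirms both conversions.

```python
import numpy as np
from numpy.polynomial import hermite_e as He   # probabilists' Hermite polynomials He_n
from numpy.polynomial import legendre as Le    # Legendre polynomials P_n

# Hypothetical values of fbar, sigma_f, gamma_f, delta_f at one instant t (see (3.2)).
fbar, sigma_f, gamma_f, delta_f = 0.4, 0.05, 0.02, 0.001
mono = [fbar, sigma_f, gamma_f, delta_f]       # coefficients of 1, xi, xi^2, xi^3

print(He.poly2herme(mono))  # Case I:  [fbar+gamma_f, sigma_f+3*delta_f, gamma_f, delta_f], cf. (3.4)
print(Le.poly2leg(mono))    # Case II: [fbar+gamma_f/3, sigma_f+3*delta_f/5, 2*gamma_f/3, 2*delta_f/5], cf. (3.5)
```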

We decompose the random forcing and the sought solution in their GPC expansions. After substituting in the equation and projecting onto the random space, we obtain a set of coupled equations similar to (2.14). This nonlinear system is simplified if we write it as a state equation. We obtain the following discrete system, which consists of a set of simultaneous nonlinear first-order differential equations:

\dot{X}^1_m(t) = X^2_m(t),

\dot{X}^2_m(t) + c \, X^2_m(t) = - k \, X^1_m(t) - \frac{k \epsilon}{\langle \Phi_m^2 \rangle} \left( \sum_{i=0}^{P} \sum_{k=0}^{P} \sum_{l=0}^{P} X^1_i(t) \, X^1_k(t) \, X^1_l(t) \, e_{iklm} \right) + f_m(t),

where e_{iklm} = \langle \Phi_i \Phi_k \Phi_l \Phi_m \rangle. These coefficients as well as \langle \Phi_m^2 \rangle can be determined analytically or numerically very efficiently using multidimensional Gauss–Legendre quadratures.

Obviously, when ε ≠ 0, we need at least a third-order GPC expansion to represent the forcing exactly. Because of the form of the solution, we expect the energy injected in the system through the forcing to concentrate mainly in the mean and the first mode of the solution. The energy present in the other random modes should be zero.

Since the resulting ODEs are deterministic, we use standard explicit schemes (Euler-forward and Runge–Kutta of second order and fourth order) to check the convergence rate of the solution in time. The following results are obtained using the standard fourth-order Runge–Kutta scheme. The structural parameters in the system (see (3.1) and (3.3)) and initial conditions are set to

c = 0.05, \quad k = 1.05, \quad (\bar{A}, \sigma_A) = (0.6, 0.06), \quad \omega = 1.05, \quad \phi = 0;
x_0(t=0) = \bar{A}, \quad x_1(t=0) = \sigma_A, \quad \dot{x}_0(t=0) = \dot{x}_1(t=0) = 0, \quad x_{i>1}(t=0) = \dot{x}_{i>1}(t=0) = 0.
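For concreteness, the sketch below assembles the modal system for Case II (one Uniform random dimension, Legendre-chaos) with the parameter values above and integrates it in time. It is our own minimal implementation, not the authors' code; SciPy's adaptive Runge–Kutta solver is used as a stand-in for the classical fourth-order scheme.

```python
import numpy as np
from numpy.polynomial import legendre as Le
from scipy.integrate import solve_ivp

p = 3                                    # chaos order (n = 1 random dimension, so P = p)
P = p
c, k, eps = 0.05, 1.05, 1.0
Abar, sigA, omega = 0.6, 0.06, 1.05

# Quadrature, basis values, <Phi_m^2>, and e_iklm for the Legendre-chaos basis.
xq, wq = Le.leggauss(2 * p + 2)
wq = 0.5 * wq                            # uniform density on [-1, 1]
Phi = np.array([Le.legval(xq, np.eye(P + 1)[m]) for m in range(P + 1)])
norm2 = Phi**2 @ wq
e4 = np.einsum('iq,kq,lq,mq,q->iklm', Phi, Phi, Phi, Phi, wq)

def forcing_modes(t):
    """Legendre-chaos modes of the forcing (3.2) at time t, cf. (3.5)."""
    alpha = (k - omega**2) * np.cos(omega * t) - c * omega * np.sin(omega * t)
    beta = k * eps * np.cos(omega * t)**3
    fm = np.zeros(P + 1)
    fm[:4] = Le.poly2leg([Abar * (alpha + Abar**2 * beta),
                          sigA * (alpha + 3 * Abar**2 * beta),
                          3 * Abar * sigA**2 * beta,
                          sigA**3 * beta])
    return fm

def rhs(t, z):
    x, v = z[:P + 1], z[P + 1:]
    cubic = np.einsum('i,k,l,iklm->m', x, x, x, e4) / norm2
    return np.concatenate([v, -c * v - k * x - k * eps * cubic + forcing_modes(t)])

z0 = np.zeros(2 * (P + 1)); z0[0], z0[1] = Abar, sigA    # x0(0) = Abar, x1(0) = sigA
sol = solve_ivp(rhs, (0.0, 150.0), z0, max_step=0.05)    # x_m(t) = sol.y[m]
```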


[Figure 3.1: time traces of the random modes; top panel: x0 (mean) and x1; bottom panel: x2–x5, of magnitude O(10^{-12}).]
Fig. 3.1. Time evolution of the random modes of the solution for Case I (Gaussian) using a GPC expansion of six terms (p = 5); ε = 1.0.

Figure 3.1 shows the time evolution of the random modes of the solution for Case I (Gaussian) with ε = 1.0. A fifth-order polynomial is used to solve the problem. The top plot shows the mean and the first mode of the solution. We notice that they have the proper amplitude and frequency that we imposed by assuming the form of the solution. The lower plot represents the higher modes, which should be identically zero. They are very small and completely controlled by the temporal discretization error. In this case, for fixed Δt, an increase of the polynomial order p does not improve the solution error. In fact, we obtain the same results with a cubic-order polynomial, as we know that it is enough to represent the forcing term exactly. We notice in the lower plot that there exists a transient state with a burst of energy in the high modes, which interact in a nonlinear manner. At longer times the amplitude of the high modes remains bounded and the system is stable. Similar observations and conclusions can be made for the case of the Uniform input (Case II) for the same values of the parameters.

Different values of the nonlinear parameter ε were investigated for fixed values of the other parameters. The magnitude and duration of the observed transient of the high modes mentioned above depend on the value of ε (and σA). As ε increases, the transient state takes place earlier in time with an increased magnitude. Next, we choose ε = 5 with the same set of parameters and an input with Uniform distribution (Case II). We perform a long-time integration for different values of the polynomial order (from p = 3 to p = 11). Figure 3.2 shows results for p = 3, presenting the time evolution of the four random modes (x0 (mean), x1, x2, and x3). In this case, we notice that both the mean and the first mode eventually deviate from the expected solution. Higher modes also deviate toward another solution, and their magnitude becomes nonnegligible. The temporal location of the onset of the bifurcation varies as a function of the temporal error introduced by the scheme. However, the bifurcation always exists, even if the temporal error introduced is only slightly above machine precision. Moreover, we observe very similar asymptotic behavior for higher values of p, even though the transient states are somewhat different.


[Figure 3.2: time traces of the random modes x0, x1, x2, and x3.]
Fig. 3.2. Time evolution of the random modes of the solution for Case II (Uniform) using a GPC expansion of four terms (p = 3); ε = 5.

The critical value of ε for Case II is around ε ≈ 4.8. No bifurcation of the solution is obtained for ε below this threshold value. The critical value of ε for Case I is around ε ≈ 3.7. Slightly above this value, a long-term instability develops that brings the initially regular (expected) solution to a chaotic state. For both distributions, for a fixed value of ε, a change in the standard deviation of the input noise can change the regularity of the solution and bring it to another state. For instance, for ε = 5 in Case II, the transition of the solution to another state takes place if σA/Ā > 4%.

Because of the way the forcing term is defined, increasing values of the nonlinear parameter can be seen as increasing forcing magnitudes in the equivalent normalized form of the Duffing equation [16]. Moreover, multiple frequencies are introduced in the forcing for ε within some critical range. For instance, for small values of ε, the forcing is very close to a perfectly single-frequency harmonic signal. However, in our case, the multifrequency forcing brings the oscillator's mean value to two limit cycles of different stability, which coexist for certain values of the control parameters; see plot (c-1) in Figure 3.3. For a limited parameter range, two stable closed orbits coexist. This kind of jump phenomenon is observed for the Duffing oscillator when the forcing frequency is changed slightly [16]. We verified that once the oscillator jumps to the new solution, it does not switch back to the original one. Concerning the first mode x1, a flip-like bifurcation occurs [16]: the initial limit cycle loses its stability, while another closed orbit appears whose period is half the period of the original cycle; see plot (c-2) in Figure 3.3.

One fundamental question is whether the bifurcation is intrinsic to the deterministic system or whether it is in fact triggered by the uncertainty of the random input. Deterministic computations for this case are done using the extreme values of the random input for the deterministic forcing. This investigates the response of the deterministic oscillator subject to deterministic forcing whose amplitude is evaluated at the boundary of the support of the probability density (here a Uniform distribution). This is equivalent to setting the parameters (Ā, σA) = (0.6 ± 0.06, 0.0).


[Figure 3.3: phase projections (ẏ versus y); panels (a) and (b): deterministic solutions; panel (c-1): stochastic solution, mean; panel (c-2): stochastic solution, first mode.]
Fig. 3.3. Phase projections of deterministic solutions and stochastic (Uniform distribution) solutions.

While one case gives a single limit cycle solution (see plot (a) in Figure 3.3), the other case ((Ā, σA) = (0.6 + 0.06, 0.0); see plot (b) in Figure 3.3) exhibits two limit cycles with different amplitudes but the same frequency. So it seems that in this case the bifurcation is intrinsic to the system.

In summary, this section shows complex and varied dynamics for the stochastic Duffing oscillator. A straightforward implementation of GPC is possible since, for the problem considered, the relatively simple forcing does not require a very high order in the GPC expansion. However, in the general case of arbitrary stochastic forcing, the computational complexity increases tremendously, as shown in Table 1.1. To this end, we need to implement adaptive procedures to lower the computational complexity of stochastic nonlinear oscillators.

3.2. Solutions via adaptive GPC. We consider a nonlinear Duffing oscillator subject to a random process excitation f(t,θ) applied over a time interval. The equation governing the motion is given by

\ddot{x}(t,\theta) + 2 \zeta \omega_0 \, \dot{x}(t,\theta) + \omega_0^2 \left( x(t,\theta) + \mu \, x^3(t,\theta) \right) = f(t,\theta), \qquad x(0) = \dot{x}(0) = 0.    (3.6)

We assume that the input process f(t,θ) is a weakly stationary random process, with zero mean and correlation function R_{ff}(t_1, t_2), given by

R_{ff}(t_1, t_2) = \sigma_f^2 \, e^{-\frac{|t_1 - t_2|}{A}}, \qquad A > 0,    (3.7)

where A is the correlation length and σf denotes the standard deviation of the process. If we normalize the equation using the nondimensional time τ = ω0 t and nondimensional displacement y = x/σx, where σx represents the standard deviation of the linear system (μ = 0) with a stationary excitation of infinite duration (T → ∞), we have

\ddot{y}(t,\theta) + 2 \zeta \, \dot{y}(t,\theta) + \left( y(t,\theta) + \epsilon \, y^3(t,\theta) \right) = \frac{f(t,\theta)}{\sigma_x \omega_0^2}, \qquad y(0) = \dot{y}(0) = 0,    (3.8)


where ε = μσx². Using the above nondimensional time, the autocorrelation function takes the form

R_{ff}(\Delta\tau) = \sigma_f^2 \, e^{-\frac{|\tau_1 - \tau_2|}{A \omega_0}}, \qquad A > 0,    (3.9)

where σx is given by

\sigma_x^2 = \left( \frac{2 \zeta \omega_0 + \frac{1}{A}}{2 \zeta \omega_0^3} \right) \left( \frac{\sigma_f^2}{\omega_0^2 + \left( \frac{1}{A} \right)^2 + \frac{2 \zeta \omega_0}{A}} \right).    (3.10)

We also use (2.10) to represent the stochastic forcing, i.e.,

f(t,\theta) = \sum_{i=0}^{M} f_i(t) \, \xi_i(\theta) = \bar{f} + \sigma_f \sum_{i=1}^{M} \sqrt{\lambda_i} \, \phi_i(t) \, \xi_i(\theta),    (3.11)

and represent the solution y(t,θ) of the problem by its GPC expansion

y(t,\theta) = \sum_{i=0}^{P} y_i(t) \, \Phi_i(\xi(\theta)).    (3.12)

The number (P + 1) of terms required in the expansion grows very rapidly as the number (M + 1) of terms in the expansion for the input process f(t,θ) increases; see Table 1.1. However, some of the terms in the expansion for y(t,θ) do not contribute significantly to its value. An adaptive procedure, first introduced by Li and Ghanem [11], can be designed in order to keep only the terms which have the greatest contribution to the solution.
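The KL modes \sqrt{\lambda_i}\,\phi_i(t) in (3.11) can be obtained numerically by discretizing the covariance (3.7) on the time grid and solving the resulting eigenvalue problem; the decay of the eigenvalues is what makes the truncation at M terms (and the further selection of K dominant terms below) reasonable. A minimal sketch of this construction, in our own notation:

```python
import numpy as np

# Discrete Karhunen-Loeve expansion of the forcing (3.11) for the exponential
# covariance (3.7), via eigendecomposition of the covariance matrix on a time grid.
A, T, nt = 1.0, 30.0, 600
t = np.linspace(0.0, T, nt)
dt = t[1] - t[0]
C = np.exp(-np.abs(t[:, None] - t[None, :]) / A)      # unit-variance covariance

lam, phi = np.linalg.eigh(C * dt)                     # discretized integral eigenproblem
idx = np.argsort(lam)[::-1]
lam, phi = lam[idx], phi[:, idx] / np.sqrt(dt)        # eigenfunctions with int phi^2 dt ~ 1

M = 20
print(lam[:M].sum() / lam.sum())                      # fraction of the input variance captured by M terms

# One realization of f(t, theta) for Case I (standard Gaussian xi_i), with sigma_f = 1:
xi = np.random.standard_normal(M)
f_sample = (phi[:, :M] * np.sqrt(lam[:M])) @ xi
```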

The expansion for the excitation (3.11) is decomposed into two summations:

f(t,\theta) = \bar{f} + \sigma_f \left( \sum_{i=1}^{K} \sqrt{\lambda_i} \, \phi_i(t) \, \xi_i(\theta) + \sum_{i=K+1}^{M} \sqrt{\lambda_i} \, \phi_i(t) \, \xi_i(\theta) \right).    (3.13)

The first summation contains the terms whose higher-order (nonlinear) contributions to the solution y(t,θ) will be kept at a given step of the iterative process. The second summation contains the terms whose higher-order (nonlinear) contributions will be neglected in the computation. Correspondingly, the expansion of the solution becomes

y(t,\theta) = \bar{y} + \sum_{i=1}^{K} y_i(t) \, \xi_i(\theta) + \sum_{i=K+1}^{M} y_i(t) \, \xi_i(\theta) + \sum_{j=M+1}^{N} y_j(t) \, \Psi_j\!\left( \xi_i(\theta) \,\big|_{i=1}^{K} \right) \quad \text{with } N < P.    (3.14)

The first two summations represent the linear contributions. The third summation represents higher-order terms, i.e., at least quadratic polynomials in the random variables \{\xi_i\}_{i=1}^{K}. Another way to understand the method is to consider that we enrich the space of random variables by adding L = (M - K) linear terms to the standard GPC expansion (see (2.12)). With the expansions for y(t,θ) and f(t,θ), we now solve the system for the random modes y_i(t) over the time domain. Once the current computation is completed, we then evaluate the L2 norm of each function y_i(t) over the time interval.


[Figure 3.4: second-order moment response versus ω0t. Curves: I: GPC (K=20, L=0, p=1), ε=0; II: MC (M=30; 500,000 events); III: GPC (K=20, L=0, p=1); IV: GPC (K=10, L=0, p=3); V: AGPC (K=10, L=10, p=3).]
Fig. 3.4. Case I (Gaussian): Comparison of second-order moment response obtained by adaptive GPC (AGPC) and Monte Carlo simulation (MC) (500,000 events). ω0 = 1.0; ζ = 0.1; A = 1.0; ε = 1.0.

The K linear components y_i(t), among i ≤ M, with the largest norm are sorted and reordered and then used to produce the higher-order components in the next iteration. The iterative process is repeated. Convergence is reached and the iterative process stops when the ordering of the K largest contributors to the solution does not change.
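In skeleton form, the reordering iteration just described looks as follows; this is our own pseudocode-level sketch, and solve_modes is a hypothetical stand-in for the GPC solve of (3.6) that returns the linear modal trajectories for the current ordering of the KL terms.

```python
import numpy as np

def adaptive_gpc(solve_modes, M, K, L, p, max_iter=10):
    """Iterative reordering of the M linear KL terms; the K leading ones get nonlinear terms."""
    order = np.arange(1, M + 1)                 # current ordering of the KL dimensions
    for _ in range(max_iter):
        lin = solve_modes(order, K, L, p)       # lin[j]: trajectory of the linear mode of dimension order[j]
        norms = np.linalg.norm(lin, axis=1)     # L2 norm of each linear mode over the time interval
        new_order = order[np.argsort(norms)[::-1]]
        if np.array_equal(new_order[:K], order[:K]):
            return new_order                    # converged: the K largest contributors are unchanged
        order = new_order
    return order
```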

We present numerical results for both Case I (Gaussian) and Case II (Uniform) for different values of the nonlinear parameter ε and different combinations of K, L, and polynomial order p. The values of the structural parameters are

\omega_0 = 1.0, \quad (\bar{f}, \sigma_f) = (0.0, 1.0), \quad A = 1.0.    (3.15)

The time domain extends over 30 nondimensional units (T = 30). Values of the damping coefficient ζ will be specified for the different cases, as we will see that it plays a key role in the efficiency of the adaptive method.

Because the mean forcing f̄ is zero, the mean of the solution tends asymptotically to zero, and only the random modes associated with polynomials of odd order are excited, due to the form of the nonlinearity. Therefore, we compare the second-order moment responses obtained by GPC with and without the use of the adaptive method and also by Monte Carlo simulation. The variance of the solution includes the (weighted) square of all random modes except mode zero, so we expect a truncated representation of the solution (without reordering) to always underpredict the exact variance of the solution.
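Concretely, the second-order moment plotted in the figures follows from the orthogonality relation (2.3) as Var[y](t) = \sum_{i \ge 1} y_i^2(t) \langle \Phi_i^2 \rangle, so dropping modes can only remove variance. A one-line helper (our sketch, reusing the names of the earlier snippets):

```python
import numpy as np

def gpc_variance(y_modes, norm2):
    """Variance of the GPC solution: y_modes is (P+1, nt), norm2 holds <Phi_i^2>."""
    return np.einsum('it,i->t', y_modes[1:]**2, norm2[1:])
```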

Figure 3.4 shows results for Case I (Gaussian) with ε = 1.0. The damping coefficient ζ = 0.1 is quite large, so the solution converges quickly within the imposed time domain. Because the problem has been normalized, the asymptotic value of the variance of the solution for the linear case (ε = 0.0) has to be one. The linear case is run first to estimate how many terms are needed to capture the scale associated with the correlation length A of f. We found that 20 terms (K = 20) are enough for the variance of the solution to reach its asymptotic value.


[Figure 3.5: second-order moment response versus ω0t. Curves: I: GPC (K=20, L=0, p=1), ε=0; II: MC (M=30; 500,000 events); III: GPC (K=20, L=0, p=1); IV: GPC (K=10, L=0, p=3); V: AGPC (K=10, L=10, p=3).]
Fig. 3.5. Case II: Comparison of second-order moment response obtained by adaptive GPC and Monte Carlo simulation (500,000 events). ω0 = 1.0; ζ = 0.1; A = 1.0; ε = 1.0.

Monte Carlo simulation (MC) with 500,000 realizations for the nonlinear case (ε = 1.0) was performed with 30 random dimensions to keep a safety margin. We notice that cubic-order (p = 3) GPC with only 10 random dimensions (K = 10) is far from converging to the Monte Carlo simulation. Linear chaos with 20 random dimensions (K = 20) still underestimates the Monte Carlo simulation. The adaptive GPC of cubic order with the addition of 10 more random dimensions (L = 10) shows a very clear improvement over the standard GPC, and it also improves the phasing of the solution, but it still underestimates the value of the variance. In this case, cubic polynomials are not enough to capture the strong nonlinear behavior of the oscillator. It is worth mentioning that the use of the incomplete, adaptive third-order GPC (p = 3, K = 10, L = 10) versus the complete standard GPC expansion (p = 3, K = 20, L = 0) lowers significantly the number of unknown random coefficients, from 1,771 to 296.
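The two counts quoted above can be reproduced directly; the bookkeeping below is ours (mean, all K + L linear terms, plus the higher-order terms built on the K active dimensions):

```python
from math import comb

# Complete cubic GPC in 20 dimensions:
print(comb(20 + 3, 3))      # 1771

# Adaptive cubic GPC with K = 10 active dimensions and L = 10 extra linear terms:
K, L, p = 10, 10, 3
n_modes = 1 + (K + L) + sum(comb(d + K - 1, K - 1) for d in range(2, p + 1))
print(n_modes)              # 296
```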

Figure 3.5 shows very similar results for Case II. Structural parameters, correlation length, and nonlinear parameter are set to the same values, and only the type of distribution of the input is changed to the Uniform distribution. Here again, the nonlinearity is too large for a cubic polynomial order, even with reordering of the modes.

Figures 3.6 and 3.7 show results for Cases I and II with ε = 0.1, smaller than in the previous cases. The damping coefficient ζ = 0.02 is kept low. Consequently, the solution does not converge to its asymptotic value within the imposed time domain. However, low damping implies sharper peaks in the energy spectrum of the oscillator. Therefore, a finite number of random dimensions is more likely to capture most of the energy in the system. Reordering in this case also helps by sorting out the most significant random modes, corresponding to the resonant frequencies, and keeping the associated nonlinear components.

Figure 3.6 shows the time evolution of the variance of the solution for Case I. We see that the adaptive GPC with reordering is very close to the Monte Carlo simulation. In Figure 3.7, we present the same type of result for Case II differently, by showing the pointwise error of the adaptive GPC solution against the Monte Carlo simulation.


[Figure 3.6: second-order moment response versus ω0t. Curves: I: GPC (K=20, L=0, p=1), ε=0.0; II: MC (K=40), ε=0.1; III: GPC (K=20, L=0, p=1), ε=0.1; IV: AGPC (K=10, L=10, p=3), ε=0.1, no reordering; V: AGPC (K=10, L=10, p=3), ε=0.1, with reordering.]
Fig. 3.6. Comparison of second-order moment response obtained by adaptive GPC and Monte Carlo simulation (1,000,000 events). ω0 = 1.0; ζ = 0.02; A = 1.0; ε = 0.1 (Case I: Gaussian).

[Figure 3.7: pointwise error versus ω0t. Curves: III: APC (K=10, L=0, p=1); IV: APC (K=10, L=10, p=3), no reordering; V: APC (K=10, L=10, p=3), with reordering.]
Fig. 3.7. Comparison of second-order moment response obtained by adaptive GPC and Monte Carlo simulation (1,000,000 events). ω0 = 1.0; ζ = 0.02; A = 1.0; ε = 0.1 (Case II: Uniform).

Figure 3.8 shows the energy distribution among the random modes for Case I before and after reordering. Region I in the figure represents the linear terms, i.e., the random modes associated with the linear polynomials. Similarly, region II represents the quadratic terms (which are zero, as explained previously). Finally, the cubic terms are all grouped in region III. The distribution of the linear terms before reordering clearly illustrates the concentration of energy in the system around the peak of resonance.


[Figure 3.8: ||y_i(t)|| versus mode index (log scale), with regions I (linear), II (quadratic), and III (cubic) marked; curves for no reordering and for the last reordering.]
Fig. 3.8. L2 norm of the random adaptive GPC modes with no reordering and with reordering. ω0 = 1.0; ζ = 0.02; A = 1.0; ε = 0.1. Relates to V: adaptive GPC (K = 10, M = 10, p = 3).

We also show the last reordering, after the iterative process has converged. We notice that the most energetic frequencies have been placed first and that the corresponding cubic terms have increased by as much as four orders of magnitude, and by about two orders of magnitude on average.

3.3. A new adaptive approach to GPC. The concept of truncated representation of the solution in the framework of the adaptive GPC method can be extended further. This time, the solution is again expanded as in (3.14), but with the distinction that not all of the nonlinear terms from the third summation based on the K first random dimensions are kept in the decomposition. In our case, we observe that the modal energy is always large for the nonlinear terms corresponding to cross products between random dimensions (see Figure 3.8). Accordingly, we keep only the coefficients corresponding to the nonlinear polynomials of the form

\Psi_j\!\left( \xi_i(\theta) \,\big|_{i=1}^{K} \right) = \prod_{\mathcal{C}} \left[ \xi_1(\theta) \cdots \xi_i(\theta) \cdots \xi_K(\theta) \right]_l \quad \text{with } l = 1, 2, \ldots, K,    (3.16)

where the operator \prod_{\mathcal{C}} represents the product of the combination of the K possible linear polynomials \xi_i(\theta) taken l at a time.

The case of Figure 3.4 is repeated using the aforementioned method with adaptive seventh-order GPC (p = 7, K = 7, L = 13), which represents a total of 141 random modes instead of 888,030 for a standard complete GPC expansion (p = 7, K = 20, L = 0). Results are shown in Figures 3.9 and 3.10. In this case, the adaptive GPC solution does not uniformly approach the Monte Carlo solution over the entire time domain, but it is locally very accurate. The error is very small in some places, and this is an example of local nonuniform convergence of the method. A finite number of modes might be enough to capture the behavior of the oscillator at some instants of time but insufficient at others.
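Enumerating the retained basis of (3.16) (mean, all M = K + L linear terms, and cross products of distinct ξ_i among the K active dimensions) reproduces both counts quoted above; the enumeration below is our own illustration:

```python
from math import comb
from itertools import combinations

def reduced_basis(K, M, p):
    """Multi-index sets kept by the cross-product selection of section 3.3 (our bookkeeping)."""
    basis = [()]                                        # mean
    basis += [(i,) for i in range(1, M + 1)]            # all M linear terms
    for l in range(2, min(p, K) + 1):                   # cross products of l distinct xi_i, i <= K
        basis += list(combinations(range(1, K + 1), l))
    return basis

print(len(reduced_basis(K=7, M=20, p=7)))   # 141 retained random modes
print(comb(20 + 7, 7))                      # 888,030 modes for the complete expansion
```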


[Figure 3.9: second-order moment response versus ω0t. Curves: I: MC (K=40), ε=0.1; II: AGPC (K=7, L=13, p=7), ε=0.1, no reordering; III: AGPC (K=7, L=13, p=7), ε=0.1, with reordering.]
Fig. 3.9. Comparison of second-order moment response obtained by adaptive GPC and Monte Carlo simulation (1,000,000 events). ω0 = 1.0; ζ = 0.02; A = 1.0; ε = 0.1 (Case I: Gaussian).

[Figure 3.10: absolute pointwise error versus ω0t. Curves: II: AGPC (K=7, L=13, p=7), ε=0.1, no reordering; III: AGPC (K=7, L=13, p=7), ε=0.1, with reordering.]
Fig. 3.10. Absolute value of second-order moment pointwise error obtained by adaptive GPC and Monte Carlo simulation (1,000,000 events). ω0 = 1.0; ζ = 0.02; A = 1.0; ε = 0.1 (Case II: Uniform).

4. Summary. High-order polynomial chaos solutions are prohibitively expensive for strongly nonlinear systems when the number of dimensions of the stochastic input is large. Progress can be made, however, by careful adaptive procedures and by selectively incorporating the nonlinear expansion terms. In this paper, we demonstrated such a procedure, proposed previously by Li and Ghanem [11], in the context of the stochastic Duffing oscillator. The adaptive scheme improves the accuracy of this method by reordering the random modes according to their magnification by the system. An extension of the originally proposed adaptive procedure was presented that uses primarily contributions corresponding to cross products between random dimensions.

REFERENCES

[1] M. F. Shlesinger and T. Swean, Stochastically Excited Nonlinear Ocean Structures, World Scientific, River Edge, NJ, 1998.
[2] M. Ding, E. Ott, and C. Grebogi, Controlling chaos in a temporally irregular environment, Phys. D, 74 (1994), pp. 386–394.
[3] P. Sekar and S. Narayanan, Periodic and chaotic motions of a square prism in cross-flow, J. Sound Vibration, 170 (1994), pp. 1–24.
[4] N. Wiener, The homogeneous chaos, Amer. J. Math., 60 (1938), pp. 897–936.
[5] R. G. Ghanem and P. Spanos, Stochastic Finite Elements: A Spectral Approach, Springer-Verlag, New York, 1991.
[6] R. G. Ghanem, Stochastic finite elements for heterogeneous media with multiple random non-Gaussian properties, ASCE J. Engrg. Mech., 125 (1999), pp. 26–40.
[7] R. G. Ghanem, Ingredients for a general purpose stochastic finite element formulation, Comput. Methods Appl. Mech. Engrg., 168 (1999), pp. 19–34.
[8] R. H. Cameron and W. T. Martin, The orthogonal development of nonlinear functionals in series of Fourier–Hermite functionals, Ann. of Math. (2), 48 (1947), pp. 385–392.
[9] D. Xiu and G. E. Karniadakis, The Wiener–Askey polynomial chaos for stochastic differential equations, SIAM J. Sci. Comput., 24 (2002), pp. 619–644.
[10] R. V. Field and M. Grigoriu, A new perspective on polynomial chaos, in Computational Stochastic Mechanics, CSM4, P. D. Spanos and G. Deodatis, eds., Millpress, Rotterdam, The Netherlands, 2003, pp. 199–205.
[11] R. Li and R. Ghanem, Adaptive polynomial chaos expansions applied to statistics of extremes in nonlinear random vibration, Prob. Engrg. Mech., 13 (1998), pp. 125–136.
[12] W. Schoutens, Stochastic Processes and Orthogonal Polynomials, Springer-Verlag, New York, 2000.
[13] H. Ogura, Orthogonal functionals of the Poisson process, IEEE Trans. Inform. Theory, 18 (1972), pp. 473–481.
[14] T. T. Soong and M. Grigoriu, Random Vibration of Mechanical and Structural Systems, Prentice–Hall, Englewood Cliffs, NJ, 1993.
[15] M. Loève, Probability Theory, 4th ed., Springer-Verlag, New York, 1977.
[16] J. M. T. Thompson and H. B. Stewart, Nonlinear Dynamics and Chaos, John Wiley and Sons, Chichester, UK, 1986.