
Model Reduction of Large Elastic Systems: A Comparison Study on the Elastic Piston Rod

P. Koutsovasilis∗, M. Beitelschmidt†

Professur für Fahrzeugmodellierung und -simulation
TU Dresden, Germany

∗E-mail: [email protected]
†E-mail: [email protected]

Abstract: Various model reduction techniques (Guyan, Dynamic, IRS, SEREP, CMS, Krylov) are applied to the elastic piston rod. Their results are verified by comparing eigenfrequencies and eigenvectors, respectively, using modal correlation criteria [10]: Frequency Comparison, modified Modal Assurance Criterion, Normalized Modal Difference, Mass Normalized Vector Difference and Stiffness Normalized Vector Difference. A numeric approach is proposed for the iterative Preconditioned Conjugate Gradient (PCG) solution [12] of the linearized system Ax = b in the case of an ill-conditioned A. User intervention is discussed for all methods.

Keywords: Model reduction methods, modal correlation criteria, preconditioned conjugate gradient.

I. Introduction

Model reduction is a key issue for the analysis of mechanical systems. Given the constant demand to work with increasingly large models while aiming to control and possibly reduce storage and simulation time, the application of the right technique constitutes an important decision.

A common spatial discretization method for mechanical multibody systems (MBS) is the Finite Element Method (FEM). The Partial Differential Equation (PDE), which describes the behavior of the elastic body, is transformed into a second-order Ordinary Differential Equation (ODE) of the form

M \ddot{x}(t) + C \dot{x}(t) + K x(t) = B u(t)    (1)

where M, C, K ∈ R^{n×n} are the system matrices (mass, damping and stiffness matrix, respectively), Bu(t) ∈ R^{n×1} is the load vector and x ∈ R^{n×1} the unknown vector with n Degrees of Freedom (DOF). In many cases n ∈ (10^4, 6·10^5), which leads to large system matrices and thus to vast storage and simulation time needs.

The general concept of model reduction is to find a low-dimensional subspace, spanned by the columns of T ∈ R^{n×m} with m ≪ n, in order to approximate the state vector x = T x_R + ε. By projecting (1) onto this subspace a lower-dimensional second-order ODE is obtained

M_R \ddot{x}_R(t) + C_R \dot{x}_R(t) + K_R x_R(t) = b_R    (2)

with M_R = T^T M T, C_R = T^T C T, K_R = T^T K T being the reduced system matrices and b_R = T^T B the reduced load vector. The reduction effectiveness and reliability depend on the size of ε. Based on the choice of T, various techniques have been developed over the last decades.
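As a point of reference for the methods that follow, the projection (2) amounts to a few lines of code. The sketch below uses NumPy with dense matrices and illustrative names (reduce_system and its arguments are not from the paper); it only shows the common projection step, not the construction of T.

```python
import numpy as np

def reduce_system(M, C, K, B, T):
    """Project the full-order matrices onto the subspace spanned by the
    columns of T, cf. Eq. (2): M_R = T^T M T, C_R = T^T C T, ..."""
    MR = T.T @ M @ T
    CR = T.T @ C @ T
    KR = T.T @ K @ T
    bR = T.T @ B
    return MR, CR, KR, bR
```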

II. Reduction Techniques

 A. Guyan Reduction

The oldest reduction method, introduced by Guyan [4], is based on the notion of master/external and slave/internal DOFs [13]. Suppose we have the following undamped system:

M \ddot{x}(t) + K x(t) = f    (3)

The m-set of master DOFs is defined as the set of the total DOFs that remain in (3). Analogously, the s-set contains all DOFs that will be eliminated from (3):

m ∪ s = n,  n = DOF_total,  m ∩ s = ∅    (4)

By partitioning the system matrices of (3) into block matrices that depend explicitly on the m-set {mm}, the s-set {ss} or a combination of them {ms, sm}, the following re-ordered system is obtained:

\begin{bmatrix} M_{mm} & M_{ms} \\ M_{sm} & M_{ss} \end{bmatrix}
\begin{bmatrix} \ddot{x}_m \\ \ddot{x}_s \end{bmatrix} +
\begin{bmatrix} K_{mm} & K_{ms} \\ K_{sm} & K_{ss} \end{bmatrix}
\begin{bmatrix} x_m \\ x_s \end{bmatrix} =
\begin{bmatrix} f_m \\ f_s \end{bmatrix}    (5)

We solve the second equation of (5) for x_s and assume that no force is applied on the internal (slave) DOFs, i.e. f_s = 0. By omitting the equivalent inertia terms ('static'), the transformation matrix for the static reduction is obtained according to (2), where the low-dimensional subspace is in this case represented by T_static:

\begin{bmatrix} x_m \\ x_s \end{bmatrix} =
\begin{bmatrix} I \\ -K_{ss}^{-1} K_{sm} \end{bmatrix} x_m = T_{static} x_m    (6)

Generally, Guyan reduction is a good approximation for the lower eigenfrequencies and the corresponding eigenvectors. For high-frequency motion the effect of the inertia terms is significant, so the method becomes inaccurate.
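A minimal NumPy sketch of (6) is given below; the function and variable names (guyan_transformation, master, slave) are illustrative and the matrices are assumed dense. Section III discusses how the solve with K_ss is carried out iteratively for large sparse models.

```python
import numpy as np

def guyan_transformation(K, master, slave):
    """Static (Guyan) transformation T_static of Eq. (6)."""
    Kss = K[np.ix_(slave, slave)]
    Ksm = K[np.ix_(slave, master)]
    # x_s = -K_ss^{-1} K_sm x_m; solve instead of forming the inverse explicitly
    Xs = -np.linalg.solve(Kss, Ksm)
    n, m = K.shape[0], len(master)
    T = np.zeros((n, m))
    T[master, :] = np.eye(m)   # master DOFs are kept as they are
    T[slave, :] = Xs           # slave DOFs follow statically
    return T
```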


 B. Dynamic Reduction

This technique [3] is an extension of the static reduction. Applying a Laplace transformation to (3) we get the equivalent system

(-M ω^2 + K) X(ω) = B(ω) X(ω) = F(ω)    (7)

which is then re-ordered into the block-partitioned master/slave DOFs as defined previously, giving the transformation matrix for the dynamic reduction:

T_{dynamic} = \begin{bmatrix} I \\ -B(ω)_{ss}^{-1} B(ω)_{sm} \end{bmatrix}    (8)

B(ω)_{i,j} := -M_{i,j} ω^2 + K_{i,j},  i, j ∈ {s, m}

Dynamic reduction approximates high-frequency motion better than Guyan reduction. Still, the dependence of T_dynamic on ω requires the choice of an appropriate initial frequency, which is not a trivial task.
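The sketch below mirrors the static case but assembles B(ω) for a chosen angular frequency (Eq. (8)); names are again illustrative and the matrices dense. Setting omega = 0 recovers the Guyan transformation.

```python
import numpy as np

def dynamic_transformation(M, K, master, slave, omega):
    """Dynamic transformation T_dynamic of Eq. (8) at angular frequency omega."""
    B = K - (omega ** 2) * M                 # B(omega) = -M*omega^2 + K
    Bss = B[np.ix_(slave, slave)]
    Bsm = B[np.ix_(slave, master)]
    n, m = K.shape[0], len(master)
    T = np.zeros((n, m))
    T[master, :] = np.eye(m)
    T[slave, :] = -np.linalg.solve(Bss, Bsm)
    return T
```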

C. Improved Reduction System Method (IRS)

IRS perturbs the static transformation by taking into account the inertia terms as pseudo-static forces [1]. By using the free vibration of the reduced system equivalent to (3) and the basic equations of Guyan reduction, the transformation matrix for IRS is obtained:

x_s = \left( -K_{ss}^{-1} K_{sm} + K_{ss}^{-1} S M_R^{-1} K_R \right) x_m

S = M_{sm} - M_{ss} K_{ss}^{-1} K_{sm}

\begin{bmatrix} x_m \\ x_s \end{bmatrix} = T_{IRS} x_m,
\quad P = \begin{bmatrix} 0 & 0 \\ 0 & K_{ss}^{-1} \end{bmatrix}

T_{IRS} = T_{static} + P M T_{static} M_R^{-1} K_R    (9)

T_IRS depends on the reduced mass and stiffness matrices obtained by the static reduction. In order to minimize the error produced by this scheme, IRS can be extended to the iterated IRS method [8], where the improved estimates M_R, K_R are used in the definition of T_IRS in subsequent iterations:

T_{IRS,i+1} = T_{static} + P M T_{IRS,i} M_{R,i}^{-1} K_{R,i}    (10)

The subscript i denotes the i-th iteration. In (10), T_{IRS,i} is the current IRS transformation and M_{R,i}, K_{R,i} are the associated reduced system matrices. A new transformation T_{IRS,i+1} is obtained, which then becomes the current IRS transformation for the next step.

The algorithm converges to yield the eigenvalues and eigenvectors of the full system. However, the reduced IRS stiffness matrix is stiffer than the analogous Guyan or dynamic reduced matrix, producing small deviations in orthogonality checks.
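A compact sketch of (9) and (10) is given below, assuming dense NumPy matrices and illustrative names. With n_iter = 0 it returns the one-step IRS transformation of (9); larger values perform the iterated IRS update of (10), recomputing M_R and K_R from the current transformation in each pass.

```python
import numpy as np

def irs_transformation(M, K, master, slave, n_iter=0):
    """IRS transformation of Eqs. (9)/(10)."""
    n, m = K.shape[0], len(master)
    Kss = K[np.ix_(slave, slave)]
    Ksm = K[np.ix_(slave, master)]
    Tstatic = np.zeros((n, m))
    Tstatic[master, :] = np.eye(m)
    Tstatic[slave, :] = -np.linalg.solve(Kss, Ksm)
    # P carries K_ss^{-1} in the slave/slave block and zeros elsewhere
    P = np.zeros((n, n))
    P[np.ix_(slave, slave)] = np.linalg.inv(Kss)
    T = Tstatic
    for _ in range(n_iter + 1):
        MR = T.T @ M @ T                     # reduced matrices of the current step
        KR = T.T @ K @ T
        T = Tstatic + P @ M @ T @ np.linalg.solve(MR, KR)
    return T
```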

D. Component Mode Synthesis Method (CMS/Craig-Bampton)

CMS uses the same sub-structuring of internal and external DOFs as previously. For the external DOFs the Craig-Bampton set [7] is introduced; it consists of the lower eigenmodes of the internal/slave structure, which are calculated with the external/master DOFs blocked, i.e.:

x_s = Φ sin(ωt)  ⇒  (K_{ss} - ω^2 M_{ss}) Φ = 0    (11)

The displacement of the slave coordinates is then given by a superposition of the master DOFs and the Craig-Bampton modes:

x_s = -K_{ss}^{-1} K_{sm} x_m + \sum_{k=1}^{l} φ_k y_k = Γ x_m + Φ y,  l ≤ n - m    (12)

Thus, the transformation matrix for the CMS method is obtained:

x = \begin{bmatrix} I & 0 \\ Γ & Φ \end{bmatrix}
\begin{bmatrix} x_m \\ y \end{bmatrix} = T_{cms} \begin{bmatrix} x_m \\ y \end{bmatrix}    (13)

CMS, like IRS, delivers good approximation results for the reduced structure, with the drawback of having to define which kind of lower eigenvectors [5] (rigid or non-rigid body) and how many of them are to be introduced for the Craig-Bampton set.
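A sketch of (11)-(13) with SciPy follows; the fixed-interface modes are taken as the n_modes lowest eigenmodes of the slave structure, the names are illustrative, and a recent SciPy (eigh with subset_by_index) is assumed.

```python
import numpy as np
from scipy.linalg import eigh

def craig_bampton_transformation(M, K, master, slave, n_modes):
    """Craig-Bampton transformation T_cms of Eq. (13)."""
    Mss = M[np.ix_(slave, slave)]
    Kss = K[np.ix_(slave, slave)]
    Ksm = K[np.ix_(slave, master)]
    # static constraint modes: Gamma = -K_ss^{-1} K_sm
    Gamma = -np.linalg.solve(Kss, Ksm)
    # fixed-interface eigenmodes of the slave structure, Eq. (11)
    _, Phi = eigh(Kss, Mss, subset_by_index=[0, n_modes - 1])
    n, m = K.shape[0], len(master)
    T = np.zeros((n, m + n_modes))
    T[master, :m] = np.eye(m)
    T[slave, :m] = Gamma
    T[slave, m:] = Phi
    return T
```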

E. System Equivalent Reduction Expansion Process (SEREP)

In SEREP [3] the eigenmodes and eigenfrequencies of the original full model are calculated. Thus x = Φq, where Φ is the modal matrix and q the vector of modal coordinates. By partitioning the displacement x and the modal matrix into the active (master) and omitted (slave) part we have

\begin{bmatrix} x_m \\ x_s \end{bmatrix} =
\begin{bmatrix} Φ_m \\ Φ_s \end{bmatrix} Φ_m^{+} x_m = T_{SEREP} x_m    (14)

where Φ_m^{+} := (Φ_m^T Φ_m)^{-1} Φ_m^T is the pseudo-inverse of Φ_m.

SEREP approximates high-frequency motion (eigenfrequencies and eigenvectors) perfectly, up to the predefined limit.
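A sketch of (14) is shown below, assuming dense matrices and illustrative names; the modal matrix is truncated to the lowest n_modes eigenvectors, and n_modes should not exceed the number of master DOFs so that Φ_m has full column rank.

```python
import numpy as np
from scipy.linalg import eigh

def serep_transformation(M, K, master, n_modes):
    """SEREP transformation of Eq. (14) from the lowest n_modes eigenvectors."""
    _, Phi = eigh(K, M, subset_by_index=[0, n_modes - 1])  # truncated modal matrix
    Phi_m = Phi[master, :]                                 # active (master) partition
    # T_SEREP = Phi * Phi_m^+, with Phi_m^+ = (Phi_m^T Phi_m)^{-1} Phi_m^T
    return Phi @ np.linalg.pinv(Phi_m)
```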

F. Krylov Subspace Method 

Suppose a constant matrix A ∈ R^{n×n}, a start vector b ∈ R^{n×1} and q ∈ Z^{+}. The Krylov subspace is defined as follows:

K_q(A, b) := span{ b, Ab, \ldots, A^{q-1}b },

a subspace spanned by the q column vectors b, Ab, ..., A^{q-1}b. In the case of undamped systems of the form (3), it is proved that A ≡ K^{-1}M and b ≡ K^{-1}f, i.e.

M \ddot{x}(t) + K x(t) = f  ⇒  T_{krylov} ∈ K_q(K^{-1}M, K^{-1}f)


T_{krylov} = span{ K^{-1}f, (K^{-1}M)K^{-1}f, \ldots, (K^{-1}M)^{q-1}K^{-1}f }    (15)

(3) can always be written as an input-output system if we write the outputs as a linear combination of the states:

y = C^T x    (16)

Evidence of the efficiency of the Krylov reduction of {(3), (16)} is the equality of some input-output behavior parameters for both the full and the reduced system {(3), (16)}, under the assumption that A_R is regular, where A_R = T_{krylov}^T (K^{-1}M) T_{krylov}; this is the so-called moment matching [2]. Moments m_i are defined as the Taylor coefficients of the transfer function G of the Laplace transformation of {(3), (16)}:

G(s) = C^T (s^2 M + K)^{-1} f    (17)

It is proven that the first q moments of the full and the reduced system {(3), (16)} agree. For the first moment m_0^R (reduced model) and m_0 (full model) the proof is given according to [11]:

m_0^R = C_R^T K_R^{-1} f_R = C_R^T (T_{krylov}^T K T_{krylov})^{-1} T_{krylov}^T f
      = C_R^T (T_{krylov}^T K T_{krylov})^{-1} T_{krylov}^T K T_{krylov} r_0 = C_R^T r_0
      = C^T T_{krylov} r_0 = C^T K^{-1} f = m_0

The Krylov subspace method is implemented using the classical Arnoldi algorithm and a modified Gram-Schmidt orthogonalization in order to obtain the q orthonormal basis vectors.

This method ends up with well-approximated eigenmodes and eigenfrequencies. There is no need to define a partitioned master/slave DOF set, an advantage that minimizes user intervention.
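A sketch of the Arnoldi process with modified Gram-Schmidt is given below; it factors K once and reuses the factorization for every application of A = K^{-1}M. Names are illustrative and dense matrices are assumed (a large sparse K would instead be handled as in Section III).

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def krylov_basis(M, K, f, q):
    """Orthonormal basis of K_q(K^{-1}M, K^{-1}f), cf. Eq. (15)."""
    lu = lu_factor(K)                       # factor K once, reuse for all solves
    V = np.zeros((K.shape[0], q))
    v = lu_solve(lu, f)                     # start vector K^{-1} f
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(1, q):
        w = lu_solve(lu, M @ V[:, j - 1])   # apply A = K^{-1} M to the last basis vector
        for i in range(j):                  # modified Gram-Schmidt orthogonalization
            w -= (V[:, i] @ w) * V[:, i]
        V[:, j] = w / np.linalg.norm(w)
    return V
```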

III. Solution Methods

All the above-mentioned reduction methods (except SEREP) require the matrix inverses K_{ss}^{-1} or K^{-1} in order to calculate the corresponding transformation matrices ((6), (8), (9), (10), (13), (15)).

Due to dim(s) ≈ dim(n) ∈ (10^4, 6·10^5), a direct calculation of the inverses using a decomposition method (LU, LDL^T, Cholesky) could lead to memory capacity problems. For that reason iterative methods [12] are preferred, in this case the Preconditioned Conjugate Gradient (PCG) method. Taking for instance the static method, (6) can be rewritten as

\begin{bmatrix} x_m \\ x_s \end{bmatrix} =
\begin{bmatrix} I \\ \bar{T} \end{bmatrix} x_m = T_{static} x_m,
\quad \bar{T} = -K_{ss}^{-1} K_{sm}

⇒  K_{ss} \bar{T} = -K_{sm}    (18)

⇔  A x = b,  A := K_{ss},  x := \bar{T},  b := -K_{sm}

where the last linearized equation system can be solved iteratively for as many right-hand sides as defined by m, with dim(m) ∈ (10, 5·10^2) ≪ dim(n). Then \bar{T} is known, i.e. T_static is known. This procedure is also applied to the other reduction methods.

By choosing a suitable preconditioner depending on the structure of the matrix (incomplete LU or Cholesky factorization, block Jacobi, incomplete band-diagonal in the case of band-diagonal matrices), faster convergence is achieved. There is always the case, though, in which the matrix A is ill-conditioned (large condition number) and the selection of a preconditioner does not accelerate the convergence, resulting in a large number of iteration steps, i.e. an increase of the simulation time, as shown below:

C(A) = λ_max / λ_min,  C: condition number,  λ: eigenvalue

N_CG ≈ √C,  N: number of CG iteration steps

Different kinds of techniques [9] have been developed (e.g. deflation) in order to reduce the condition number of the ill-posed matrix. Here a different approach is proposed. Instead of solving the original linearized system Ax = b, we solve the perturbed system shown below:

(A + α A_d) x = b    (19)

α := 10^{-(n+k)},
n = max_{j∈N} { f(j), ∀i ∈ (1, dim(A)) },  k ∈ Z,
f(j) := 10^{±j} · a_{ii} ∈ A (floating-point number form),
k ≥ min(j) - max(j),
A_d := diag(diag(A)), the diagonal matrix of A.

By this small perturbation of the diagonal elements of A the eigenvalues are affected in a way that reduces the condition number and consequently the number of iteration steps. The method produces the following numeric error:

|(α A_d) x|_2 = 10^{-(n+k)} |A_d x|_2.

Simulation results for the piston rod with the Krylov method have shown that a choice of k ≥ 0 ends up with almost identical solutions for x. For the results shown in the following chapter, k = 2 is chosen.
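The sketch below illustrates this procedure for the static case of (18)-(19) with SciPy's sparse PCG and an incomplete-LU preconditioner. It is only one possible reading of the construction of α: here n is taken as the largest decimal exponent occurring on the diagonal of A, and k = 2 as in the paper; all function and variable names are illustrative.

```python
import numpy as np
from scipy.sparse import csc_matrix, diags
from scipy.sparse.linalg import cg, spilu, LinearOperator

def solve_static_columns_pcg(Kss, Ksm, k=2):
    """Solve K_ss * X = -K_sm (Eq. (18)) column by column with PCG,
    applied to the perturbed system (A + alpha*A_d) x = b of Eq. (19)."""
    A = csc_matrix(Kss)
    d = A.diagonal()
    n_exp = int(np.floor(np.log10(np.abs(d)).max()))   # largest decimal exponent on the diagonal
    alpha = 10.0 ** (-(n_exp + k))                     # alpha = 10^-(n+k)
    Ap = (A + alpha * diags(d)).tocsc()                # A + alpha * diag(diag(A))
    ilu = spilu(Ap)                                    # incomplete-LU preconditioner
    Mprec = LinearOperator(Ap.shape, matvec=ilu.solve)
    B = -np.asarray(csc_matrix(Ksm).todense(), dtype=float)
    X = np.zeros_like(B)
    for j in range(B.shape[1]):                        # one PCG solve per master DOF
        X[:, j], info = cg(Ap, B[:, j], M=Mprec)
        if info != 0:
            raise RuntimeError("PCG did not converge for column %d" % j)
    return X
```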

IV. Modal Correlation Criteria (MCC)

This chapter refers to the MCC [10] that are applied to the piston rod model in order to check the correlation of the original (full) and the reduced model during a modal analysis.

The original model was discretized with FE in ANSYS. The number of elements (tetrahedron SOLID95) is n_elem = 13868 and the number of nodes is n_node = 23835. Each node was assigned 3 DOFs (UX, UY, UZ; unconstrained model). The produced system matrices (M, K) have a dimension of dim(M) = dim(K) = (3·n_node, 3·n_node) = (71505, 71505). For the first five previously introduced methods the master nodes, respectively the master DOF set, are selected as shown in Fig. 1. The m-set is selected according to standard criteria [13] made for this purpose. The number of master nodes, though, is restricted (m = 10); this is done in order to show that a possibly inappropriate master node selection vastly affects the results of certain reduction methods.

Fig. 1. Master Nodes Selection - Piston Rod

The re-ordering into block matrices according to (5) produces a different sparsity pattern (Fig. 2) for the system matrices, because of the wavefront solver implemented in ANSYS for the solution of the linearized system. The sparsity pattern of the stiffness matrix K is the same, but with notably more non-zero elements.

Fig. 2. Mass matrix produced by ANSYS (right) - Re-Ordered Mass Matrix (left)

 A. Eigenfrequency Comparison

Theory implies that the eigenfrequencies of the reduced model are higher than the eigenfrequencies of the original (full) model. Thus, in Fig. 3 the difference eig_dif := eig_sub - eig_full and the normalized relative difference reig_dif := |eig_sub - eig_full|_2 / |eig_sub|_2 are presented for the first 14 non-rigid-body eigenfrequencies.

Fig. 3. eig_dif and reig_dif - Piston Rod (panels: frequency difference and normalized relative difference for all methods, plus zoomed relative eig_dif for IRS/SEREP/Krylov and SEREP/Krylov)

 B. Modified Modal Assurance Criterion (modMAC)

modMAC gives information concerning the angle between eigenvectors; by this criterion (in comparison to MAC) the eigenvectors are mass-normalized:

modMAC_{k,l} = (Φ_k^T M Ψ_l)^2 / ((Φ_k^T M Φ_k)(Ψ_l^T M Ψ_l))

Φ_k: k-th eigenvector of the full model
Ψ_l: l-th expanded eigenvector

The dimension of the eigenvectors must be the same; either the reduced eigenvectors are expanded to the dimension of the original model via the transformation matrix, or the opposite. A value modMAC = 1 means absolute correlation; the smaller this value becomes, the worse the eigenvector correlation is, as shown in Fig. 4. The correlation of the first six eigenvectors is unimportant, since it concerns the rigid-body-motion eigenvectors, which are generally of no interest. Thus the x-axes of Fig. 4, Fig. 5 and Fig. 6 depict the 14 eigenvectors starting from the 7th up to the 20th.

Fig. 4. modMAC - Piston Rod (panels: all methods; best methods IRS, SEREP, Krylov)
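A sketch of the criterion is given below; Phi_full holds the full-model eigenvectors column-wise, Psi_exp the reduced eigenvectors expanded to full dimension (e.g. Psi_exp = T @ Psi_R), and the names are illustrative.

```python
import numpy as np

def mod_mac(Phi_full, Psi_exp, M):
    """Mass-normalized (modified) MAC matrix between eigenvector sets."""
    A = Phi_full.T @ M @ Psi_exp              # cross terms Phi_k^T M Psi_l
    dP = np.diag(Phi_full.T @ M @ Phi_full)   # modal masses of the full model
    dQ = np.diag(Psi_exp.T @ M @ Psi_exp)     # modal masses of the expanded model
    return A ** 2 / np.outer(dP, dQ)
```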

C. Mass Normalized Vector Difference (MNVD)

This criterion gives the relative vector difference of mass-normalized eigenvectors:

MNVD_{k,l} = |M_sub - M_full|_2 / |M_sub|_2

M_full: modal mass of the original model
M_sub: modal mass of the reduced model

By this criterion all methods are identical, except for CMS, which shows minor deviations in comparison to the others due to the numeric implementation.

Fig. 5. MNVD - Piston Rod (all methods)

 D. Stiffness Normalized Vector Difference (SNVD)

Analogously to the MNVD criterion, SNVD gives information about the relative vector difference of stiffness-normalized eigenvectors:

SNVD_{k,l} = |K_sub - K_full|_2 / |K_sub|_2

K_full: modal stiffness of the original model
K_sub: modal stiffness of the reduced model

In this case the results are sensitive to small deviations of the modal stiffness of both the original and the reduced model, as shown in Fig. 6.

Fig. 6. SNVD - Piston Rod (panels: all methods; best methods CMS, SEREP, Krylov)
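Reading the modal masses and stiffnesses as per-mode scalars Φ_k^T M Φ_k and Φ_k^T K Φ_k (one interpretation of the definitions above), both criteria can be sketched together as follows; names are illustrative.

```python
import numpy as np

def mnvd_snvd(Phi_full, Psi_exp, M, K):
    """Relative differences of modal masses (MNVD) and modal stiffnesses (SNVD)."""
    m_full = np.einsum('ik,ij,jk->k', Phi_full, M, Phi_full)  # Phi_k^T M Phi_k
    m_sub  = np.einsum('ik,ij,jk->k', Psi_exp,  M, Psi_exp)
    k_full = np.einsum('ik,ij,jk->k', Phi_full, K, Phi_full)  # Phi_k^T K Phi_k
    k_sub  = np.einsum('ik,ij,jk->k', Psi_exp,  K, Psi_exp)
    mnvd = np.abs(m_sub - m_full) / np.abs(m_sub)
    snvd = np.abs(k_sub - k_full) / np.abs(k_sub)
    return mnvd, snvd
```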

 E. Normalized Modal Difference (NMD)

NMD is a criterion that delivers important information concerning the deviation of single coordinates (DOFs) of eigenvector pairs. The fact that this criterion is normalized makes it an important tool for modal correlation. The calculation is based on the Modal Scale Factor (MSF), which is a scale factor according to the principle of the least-square error:

NMD_{k,r} = |Ψ_k(r) - MSF · Φ_k(r)|_2 / Ψ_k(r)

MSF_{i,j} = (Ψ_i^T Φ_j) / (Ψ_i^T Ψ_i)

Φ_k(r): r-th coordinate of the k-th full eigenvector
Ψ_k(r): r-th coordinate of the k-th expanded eigenvector

Eigenvectors no. 7, 8 and 18 have been randomly chosen in order to illustrate the results of this criterion, shown in the figures below. SEREP seems to deliver the best results concerning NMD, followed by Krylov and an interchange between IRS and CMS. All these results are discussed in the following chapter.
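A per-coordinate sketch of NMD and MSF is given below (illustrative names; each column of Phi_full / Psi_exp is one eigenvector pair, and coordinates of Psi close to zero would need special handling).

```python
import numpy as np

def nmd(Phi_full, Psi_exp):
    """Normalized modal difference per coordinate, based on the Modal Scale Factor."""
    out = np.zeros_like(Psi_exp, dtype=float)
    for k in range(Phi_full.shape[1]):
        phi, psi = Phi_full[:, k], Psi_exp[:, k]
        msf = (psi @ phi) / (psi @ psi)            # least-squares scale factor
        out[:, k] = np.abs(psi - msf * phi) / np.abs(psi)
    return out
```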

Fig. 7. NMD Eigenvector 7 - Piston Rod (panels: Static, Dynamic, CMS, IRS, SEREP, Krylov)

Fig. 8. NMD Eigenvector 8 - Piston Rod (panels: Static, Dynamic, CMS, IRS, SEREP, Krylov)


Fig. 9. NMD Eigenvector 18 - Piston Rod (panels: Static, Dynamic, CMS, IRS, SEREP, Krylov)

V. Results and Discussion

The MCC lead to the conclusion that SEREP and Krylov deliver the best eigenfrequency/eigenvector results.

Guyan reduction is by far the least reliable method for approximating high-frequency motion due to its static nature. IRS, as a perturbed Guyan method, ends up with good correlation results for the lower as well as for a great number of the higher-frequency motions. The IRS transformation matrix can always be improved by additional iteration steps [8], reducing the modeling error and thus giving good approximation results.

CMS and Dynamic reduction are interchangeable, with CMS yielding qualitatively much better results. The algorithms of both methods promise good correlation for high-frequency motion, but in many cases this is not feasible for the following reasons: the error of the dynamic reduction depends on the right choice of the initial frequency (8), the finding of which is not a trivial task; CMS, on the other hand, depends on the definition of a sufficient number of eigenmodes (Craig-Bampton modes) for the internal structure. Especially for the case of the piston rod, different kinds of Craig-Bampton modes were applied, obtaining in the end various results (for the results depicted in the figures, 5 CB-eigenmodes were calculated). A selective choice of Craig-Bampton modes belonging to the whole eigenmode spectrum (some of the lower, middle and higher eigenmodes) seems to radically improve the end results.

All the above results could have been improved if a different set of master DOFs had been chosen. This fact establishes Krylov as a promising reduction method for mechanical MBS, since there is no such dependence. The user only has to define the maximum dimension of the reduced system, without having to select dominant eigenmodes (SEREP) or master DOFs. Thus, user intervention is minimized.

Commercial MBS program packages contain implemented interfaces (e.g. FEMBS in SIMPACK) for two of the above-mentioned reduction methods (Guyan, CMS). The interface implementation for the Krylov subspace method as well as for other combined methods is a matter of current research, since the obtained results make these techniques competitive with the already standardized CMS method.

References

[1] J. C. O'Callahan. A procedure for an improved reduced system (IRS) model. Proceedings of the 7th International Modal Analysis Conference, Las Vegas, 1989.

[2] Michael Lehner, Peter Eberhard. Modellreduktion in elastischen Mehrkörpersystemen. Automatisierungstechnik, 54, 4/2006.

[3] Gerrit Gloth. Vergleich zwischen gemessenen und berechneten modalen Parametern. Carl-Cranz-Gesellschaft e.V., Oberpfaffenhofen, 2001.

[4] R. J. Guyan. Reduction of stiffness and mass matrices. AIAA Journal, 3, 1965.

[5] Roy R. Craig, Jr. Coupling of substructures for dynamic analyses: an overview. AIAA-2000-1573.

[6] E. B. Rudnyi, J. Lienemann, A. Greiner, J. G. Korvink. mor4ansys: Generating Compact Models Directly From ANSYS Models. Nanotech - MEMS Modeling, 2:279-282, 2004.

[7] R. Craig, M. Bampton. Coupling of substructures in dynamic analysis. AIAA Journal, 6, 1968.

[8] M. I. Friswell, S. D. Garvey, J. E. T. Penny. Model reduction using dynamic and iterated IRS techniques. Journal of Sound and Vibration, 186:311-323, 1995.

[9] K. Burrage, J. Erhel, B. Pohl. A deflation technique for linear systems of equations. Technical Report 94-02, Eidgenössische Technische Hochschule Zürich, 1994.

[10] M. Reichelt. Anwendung neuer Methoden zum Vergleich der Ergebnisse aus rechnerischen und experimentellen Modalanalyseuntersuchungen. VDI Berichte, 1550:481-495, 2000.

[11] Boris Lohmann, Behnam Salimbahrami. Ordnungsreduktion mittels Krylov-Unterraummethoden. Automatisierungstechnik, 52, 1/2004.

[12] Sami A. Kilic, Faisal Saied, Ahmed Sameh. Efficient iterative solvers for structural dynamic problems. Computers and Structures, 82:2363-2375, 2004.

[13] Manuela Waltz. Dynamisches Verhalten von gummigefederten Eisenbahnrädern. PhD thesis, Technische Hochschule Aachen, Fakultät für Maschinenwesen, 2005.