Introduction to Model Order Reduction
II.2 The Projection Framework Methods

Luca Daniel
Massachusetts Institute of Technology

with contributions from:
Alessandra Nardi, Joel Phillips, Jacob White
Projection Framework: Non-invertible Change of Coordinates

$x \approx U_q \hat{x}$

where $x \in \mathbb{R}^N$ is the original state and $\hat{x} \in \mathbb{R}^q$ is the reduced state.

Note: $q \ll N$
Projection Framework

• Original system:

$\frac{dx}{dt} = A x(t) + b u(t), \qquad y(t) = c^T x(t)$

• Substitute $x(t) = U_q \hat{x}(t)$:

$U_q \frac{d\hat{x}}{dt} = A U_q \hat{x}(t) + b u(t), \qquad y(t) = c^T U_q \hat{x}(t)$

• Note: now there are few variables ($q \ll N$) in the state, but still thousands of equations ($N$).
Projection Framework (cont.)

Reduction of the number of equations: test by multiplying by $V_q^T$:

$V_q^T U_q \frac{d\hat{x}}{dt} = V_q^T A U_q \hat{x}(t) + V_q^T b u(t), \qquad y(t) = c^T U_q \hat{x}(t)$

• If $V_q^T$ and $U_q$ are chosen biorthogonal, $V_q^T U_q = I$, then

$\frac{d\hat{x}}{dt} = \hat{A} \hat{x}(t) + \hat{b} u(t), \qquad y(t) = \hat{c}^T \hat{x}(t)$

with $\hat{A} = V_q^T A U_q$, $\hat{b} = V_q^T b$, $\hat{c}^T = c^T U_q$.
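The two projection steps above can be sketched in a few lines of numpy. This is an illustrative example only: the matrices are random stand-ins, not a real circuit model, and the orthonormal basis is arbitrary rather than a Krylov basis.

```python
import numpy as np

rng = np.random.default_rng(0)
N, q = 8, 3

# Original system: dx/dt = A x + b u,  y = c^T x
A = rng.standard_normal((N, N))
b = rng.standard_normal(N)
c = rng.standard_normal(N)

# Projection basis (random orthonormal columns, purely for illustration)
U, _ = np.linalg.qr(rng.standard_normal((N, q)))
V = U                      # choosing V = U makes V^T U = I automatically

# Reduced system: dx_hat/dt = A_hat x_hat + b_hat u,  y = c_hat^T x_hat
A_hat = V.T @ A @ U        # q x q instead of N x N
b_hat = V.T @ b
c_hat = U.T @ c

assert np.allclose(V.T @ U, np.eye(q))   # biorthogonality V^T U = I
```

The reduced state equation now has only q unknowns and q equations.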
Projection Framework (graphically)

$\underbrace{V_q^T}_{q \times n}\, \underbrace{U_q}_{n \times q}\, \frac{d\hat{x}}{dt} = \underbrace{V_q^T}_{q \times n}\, \underbrace{A}_{n \times n}\, \underbrace{U_q}_{n \times q}\, \hat{x} + \underbrace{V_q^T}_{q \times n}\, b u \qquad\Longrightarrow\qquad \frac{d\hat{x}}{dt} = \underbrace{\hat{A}}_{q \times q} \hat{x}(t) + \hat{b} u(t)$
Projection Framework

• Non-invertible change of coordinates (projection): $x = U_q \hat{x}$, i.e. $x$ is constrained to the $U_q$ space.
• Equation testing (projection): the equations are tested against the $V_q$ space.

$\frac{dx}{dt} = A x(t) + b u(t) \;\longrightarrow\; \frac{d\hat{x}}{dt} = \hat{A} \hat{x}(t) + \hat{b} u(t), \qquad \hat{A} = V_q^T A U_q$
7
• Use Eigenvectors of the system matrix (modal analysis)Use Eigenvectors of the system matrix (modal analysis)
• Use Frequency Domain DataUse Frequency Domain Data– ComputeCompute
– Use the SVD to pick Use the SVD to pick q < kq < k important vectors important vectors
• Use Time Series DataUse Time Series Data– ComputeCompute
– Use the SVD to pick Use the SVD to pick q < kq < k important vectors important vectors1 2( ), ( ), , ( )kx t x t x t
1 2( ), ( ), , ( )kx s x s x s
Approaches for picking V and UApproaches for picking V and U
II.2.b POD Principal Component Analysisor SVD Singular Value Decompositionor KLD Karhunen-Lo`eve Decompositionor PCA Principal Component Analysis
Point Matching
Approaches for picking V and U

• Use eigenvectors of the system matrix
• POD or SVD or KLD or PCA
• Use Krylov subspace vectors (moment matching)
• Use singular vectors of the system Gramians product (Truncated Balanced Realizations)
A canonical form for model order reduction

Assuming A is non-singular, we can cast the dynamical linear system into a canonical form for moment-matching model order reduction:

$s x = A x + b u, \quad y = c^T x \qquad\longrightarrow\qquad s E x = x + b u, \quad y = c^T x$

where $E = A^{-1}$ and b is replaced by $A^{-1} b$.

Note: this step is not necessary; it just simplifies the notation for educational purposes.
Intuitive view of Krylov subspace choice for change-of-base projection matrix

Taylor series expansion of $H(s)$:

$s E x = x + b u \;\Longrightarrow\; x = (I - sE)^{-1} b u = \sum_{k=0}^{\infty} s^k E^k b\, u$

$x \in \mathrm{span}\{b, Eb, E^2 b, \ldots\}$

• Change base and use only the first few vectors of the Taylor series expansion: equivalent to matching the first derivatives around the expansion point.
Aside on Krylov Subspaces - Definition

The order-k Krylov subspace generated from the matrix E and the vector b is defined as

$\mathcal{K}_k(E, b) = \mathrm{span}\{b, Eb, E^2 b, \ldots, E^{k-1} b\}$
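The definition translates directly into code. A minimal numpy sketch (the matrix and vector here are arbitrary illustrations):

```python
import numpy as np

def krylov_basis(E, b, k):
    """Return the n-by-k matrix whose columns are b, Eb, E^2 b, ..., E^{k-1} b."""
    vectors = [b]
    for _ in range(k - 1):
        vectors.append(E @ vectors[-1])   # multiply the last vector by E
    return np.column_stack(vectors)

rng = np.random.default_rng(1)
E = rng.standard_normal((6, 6))
b = rng.standard_normal(6)

K = krylov_basis(E, b, 4)   # spans K_4(E, b)
assert np.allclose(K[:, 2], E @ E @ b)
```

In practice these raw vectors are never used directly as a basis (see the orthonormalization discussion later); this only illustrates the subspace itself.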
Moment matching around non-zero frequencies

Instead of expanding around only $s = 0$, we can expand around other points $s_1, s_2, \ldots, s_J$:

$s x = A x + b u, \quad y = c^T x \;\Longrightarrow\; (s - s_h) x = (A - s_h I) x + b u, \quad y = c^T x$

For each expansion point the problem can then be put again in the canonical form

$\tilde{s}\, E_h x = x + b_h u, \quad y = c^T x$

where $\tilde{s} = s - s_h$, $E_h = (A - s_h I)^{-1}$, and $b_h = (A - s_h I)^{-1} b$.
Projection Framework: Moment Matching Theorem (E. Grimme 97)

If

$\mathrm{Range}(U_q) = \mathrm{span}\Big\{ \bigcup_{h=1}^{J} \mathcal{K}_{k_h^b}(E_h, b_h) \Big\}$

and

$\mathrm{Range}(V_q) = \mathrm{span}\Big\{ \bigcup_{h=1}^{J} \mathcal{K}_{k_h^c}(E_h^T, c_h) \Big\}$

then

$\frac{d^l \hat{H}}{ds^l}(s_h) = \frac{d^l H}{ds^l}(s_h) \quad \text{for } l = 0, \ldots, k_h^b + k_h^c - 1, \quad h = 1, \ldots, J$

A total of 2q moments of the transfer function will match.
Combine point and moment matching: multipoint moment matching

• Multiple expansion points give a larger frequency band.
• Moment (derivative) matching gives more accurate behavior in between expansion points.
Compare Padé Approximations and Krylov Subspace Projection Framework

Padé approximations:
• moment matching at a single DC point
• numerically very ill-conditioned!!!

Krylov Subspace Projection Framework:
• multipoint moment matching
• AND numerically very stable!!!
Approaches for picking V and U

• Use eigenvectors of the system matrix
• POD or SVD or KLD or PCA
• Use Krylov subspace vectors (moment matching)
  – general Krylov subspace methods
  – case 1: Arnoldi
  – case 2: PVL
  – case 3: multipoint moment matching
  – moment matching preserving passivity: PRIMA
• Use singular vectors of the system Gramians product (Truncated Balanced Realizations)
Special simple case #1: expansion at s=0, V=U, orthonormal U ($U^T U = I$)

If U and V are such that:

$U = V = [u_1, \ldots, u_q], \qquad U^T U = I$

$\mathrm{span}\{u_1, \ldots, u_q\} = \mathcal{K}_q(E, b) = \mathrm{span}\{b, Eb, \ldots, E^{q-1} b\}$

then the first q moments (derivatives) of the reduced system match:

$\hat{c}^T \hat{E}^k \hat{b} = c^T E^k b, \qquad \frac{d^k \hat{H}}{ds^k}(0) = \frac{d^k H}{ds^k}(0) \quad \text{for } k = 0, \ldots, q-1$
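The theorem can be checked numerically. The sketch below (random illustrative matrices, raw Krylov vectors orthonormalized by QR, which is fine at this small size) verifies that the first q moments $c^T E^k b$ of the full and reduced systems coincide:

```python
import numpy as np

rng = np.random.default_rng(2)
n, q = 10, 4
E = 0.3 * rng.standard_normal((n, n))
b = rng.standard_normal(n)
c = rng.standard_normal(n)

# Orthonormal basis U of the Krylov subspace K_q(E, b)
K = np.column_stack([np.linalg.matrix_power(E, k) @ b for k in range(q)])
U, _ = np.linalg.qr(K)

# Reduced quantities with V = U
E_hat, b_hat, c_hat = U.T @ E @ U, U.T @ b, U.T @ c

# The first q moments of the transfer function match
for k in range(q):
    m_full = c @ np.linalg.matrix_power(E, k) @ b
    m_red = c_hat @ np.linalg.matrix_power(E_hat, k) @ b_hat
    assert np.isclose(m_full, m_red)
```

Moments of order q and higher generally do not match for this one-sided choice.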
Algebraic proof of case #1: expansion at s=0, V=U, orthonormal U ($U^T U = I$)

$H(s) = c^T (I - sE)^{-1} b = \sum_{k=0}^{\infty} c^T E^k b\, s^k$

$\hat{H}(s) = \hat{c}^T (I - s\hat{E})^{-1} \hat{b} = \sum_{k=0}^{\infty} \hat{c}^T \hat{E}^k \hat{b}\, s^k$

$\hat{c}^T \hat{E}^k \hat{b} = c^T U (U^T E U)^k U^T b = c^T U U^T E U U^T E U \cdots E U U^T b$

Applying k times the lemma in the next slide:

$= c^T E E \cdots E b = c^T E^k b$
Lemma: $U_q U_q^T b = b$ if $b \in \mathrm{span}\{u_1, \ldots, u_q\}$

Substitute: $b \in \mathrm{span}\{u_1, \ldots, u_q\} \;\Rightarrow\; \exists g \text{ s.t. } b = U_q g$

$U_q U_q^T b = U_q \underbrace{U_q^T U_q}_{I_q,\ \text{since } U_q \text{ is orthonormal}} g = U_q g = b$

Note in general: $U_q U_q^T \neq I_n$, BUT on vectors already in the span it acts as the identity.
Need for Orthonormalization of U

$b,\; Eb,\; E^2 b,\; E^3 b,\; \ldots$

Vectors will quickly line up with the dominant eigenspace! Hence the vectors $\{b, Eb, \ldots, E^{k-1} b\}$ cannot be computed directly.
Need for Orthonormalization of U (cont.)

• In the "change of base" matrix U transforming to the new reduced state space, we can use ANY columns that span the reduced state space.
• In particular, we can ORTHONORMALIZE the Krylov subspace vectors:

$x \approx U_q \hat{x}, \qquad U_q = [u_1, \ldots, u_q], \qquad u_i^T u_j = 0 \text{ for } i \neq j$
Orthonormalization of U: The Arnoldi Algorithm

    u_1 = b / ||b||                                normalize first vector          O(n)
    For i = 1 to q
        u_{i+1} = E u_i                            generate new Krylov
                                                   subspace vector                 sparse: O(n), dense: O(n^2)
        For j = 1 to i
            u_{i+1} = u_{i+1} - (u_{i+1}^T u_j) u_j   orthogonalize new vector     O(q^2 n) total
        u_{i+1} = u_{i+1} / ||u_{i+1}||            normalize new vector            O(n)

Computational Complexity: dominated by the orthogonalizations, O(q^2 n), plus the q matrix-vector products.
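The loop above translates almost line for line into numpy. This is a plain modified Gram-Schmidt Arnoldi sketch without breakdown handling (a production code would check for a zero norm before dividing); the test matrix is an arbitrary illustration:

```python
import numpy as np

def arnoldi(E, b, q):
    """Build an orthonormal basis U of the Krylov subspace K_q(E, b)."""
    n = b.shape[0]
    U = np.zeros((n, q))
    U[:, 0] = b / np.linalg.norm(b)            # normalize first vector
    for i in range(q - 1):
        w = E @ U[:, i]                        # new Krylov subspace vector
        for j in range(i + 1):                 # orthogonalize against u_1..u_{i+1}
            w -= (w @ U[:, j]) * U[:, j]
        U[:, i + 1] = w / np.linalg.norm(w)    # normalize new vector
    return U

rng = np.random.default_rng(3)
E = rng.standard_normal((8, 8))
b = rng.standard_normal(8)

U = arnoldi(E, b, 4)
assert np.allclose(U.T @ U, np.eye(4))   # columns are orthonormal
```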
Generating vectors for the Krylov subspace

Most of the computation cost is spent in calculating $u_{i+1} = E u_i = A^{-1} u_i$:

set up and solve the linear system $A u_{i+1} = u_i$ using GCR.

If we have a good preconditioner and a fast matrix-vector product, each new vector is calculated in O(n). The total complexity for calculating the projection matrix $U_q$ is then O(qn).
What about computing the reduced matrix?

$E U_q = E [u_1, \ldots, u_q] = [E u_1, \ldots, E u_q]$

But $E u_i$ is exactly the new vector produced at step i of the Arnoldi algorithm, and the coefficients used to orthonormalize it against the columns of $U_q$ are exactly the entries of the i-th column of $\hat{E} = U_q^T E U_q$. So we don't need to compute the reduced matrix: we have it already from the orthonormalization.
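This "for free" property can be demonstrated by recording the Gram-Schmidt coefficients in a Hessenberg matrix H during Arnoldi and comparing it with an explicit $U^T E U$. A minimal sketch with an arbitrary test matrix:

```python
import numpy as np

def arnoldi_with_h(E, b, q):
    """Arnoldi that also records the orthonormalization coefficients h_ij.
    The q-by-q matrix H equals the reduced matrix U^T E U."""
    n = b.shape[0]
    U = np.zeros((n, q))
    H = np.zeros((q, q))
    U[:, 0] = b / np.linalg.norm(b)
    for i in range(q - 1):
        w = E @ U[:, i]
        for j in range(i + 1):
            H[j, i] = w @ U[:, j]          # coefficient used to orthogonalize
            w -= H[j, i] * U[:, j]
        H[i + 1, i] = np.linalg.norm(w)    # normalization constant
        U[:, i + 1] = w / H[i + 1, i]
    w = E @ U[:, q - 1]                    # last column of H: projections only
    for j in range(q):
        H[j, q - 1] = w @ U[:, j]
    return U, H

rng = np.random.default_rng(4)
E = rng.standard_normal((8, 8))
b = rng.standard_normal(8)

U, H = arnoldi_with_h(E, b, 4)
assert np.allclose(H, U.T @ E @ U)   # reduced matrix obtained for free
```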
Approaches for picking V and U

• Use eigenvectors of the system matrix
• POD or SVD or KLD or PCA
• Use Krylov subspace vectors (moment matching)
  – general Krylov subspace methods
  – case 1: Arnoldi
  – case 2: PVL
  – case 3: multipoint moment matching
  – moment matching preserving passivity: PRIMA
• Use singular vectors of the system Gramians product (Truncated Balanced Realizations)
Special case #2: expansion at s=0, biorthogonal $V^T U = I$

If U and V are such that:

$\mathrm{span}\{u_1, \ldots, u_q\} = \mathcal{K}_q(E, b) = \mathrm{span}\{b, Eb, \ldots, E^{q-1} b\}$

$\mathrm{span}\{v_1, \ldots, v_q\} = \mathcal{K}_q(E^T, c) = \mathrm{span}\{c, E^T c, \ldots, (E^T)^{q-1} c\}$

$V^T U = I$

then the first 2q moments of the reduced system match:

$\hat{c}^T \hat{E}^k \hat{b} = c^T E^k b, \qquad \frac{d^k \hat{H}}{ds^k}(0) = \frac{d^k H}{ds^k}(0) \quad \text{for } k = 0, \ldots, 2q-1$
Proof of special case #2: expansion at s=0, biorthogonal $V^T U = U^T V = I_q$ (cont.)

$H(s) = c^T (I - sE)^{-1} b = \sum_{k=0}^{\infty} c^T E^k b\, s^k$, and every moment up to order $2q-1$ can be split as $c^T E^{k_1 + k_2} b = (E^{k_1 T} c)^T (E^{k_2} b)$ with $k_1, k_2 \le q-1$. Likewise for the reduced system:

$\hat{c}^T \hat{E}^{k_1 + k_2} \hat{b} = (\hat{E}^{k_1 T} \hat{c})^T (\hat{E}^{k_2} \hat{b})$

Expanding $\hat{E} = V^T E U$, $\hat{b} = V^T b$, $\hat{c} = U^T c$ and applying at each step the lemma in the next slide:

$\hat{E}^{k_2} \hat{b} = (V^T E U)^{k_2} V^T b = V^T E^{k_2} b, \qquad \hat{E}^{k_1 T} \hat{c} = (U^T E^T V)^{k_1} U^T c = U^T E^{k_1 T} c$

and therefore

$(\hat{E}^{k_1 T} \hat{c})^T (\hat{E}^{k_2} \hat{b}) = c^T E^{k_1} U V^T E^{k_2} b = c^T E^{k_1 + k_2} b$
Lemma: $U_q V_q^T b = b$ and $V_q U_q^T c = c$

Substitute: $b \in \mathrm{span}\{u_1, \ldots, u_q\} \;\Rightarrow\; \exists g \text{ s.t. } b = U_q g$

$U_q V_q^T b = U_q \underbrace{V_q^T U_q}_{I_q,\ \text{biorthonormality}} g = U_q g = b$

Substitute: $c \in \mathrm{span}\{v_1, \ldots, v_q\} \;\Rightarrow\; \exists f \text{ s.t. } c = V_q f$

$V_q U_q^T c = V_q \underbrace{U_q^T V_q}_{I_q,\ \text{biorthonormality}} f = V_q f = c$
PVL: Padé Via Lanczos [P. Feldmann, R. W. Freund, TCAD 95]

• PVL is an implementation of the biorthogonal case #2:

$\mathrm{span}\{u_1, \ldots, u_q\} = \mathcal{K}_q(E, b), \qquad \mathrm{span}\{v_1, \ldots, v_q\} = \mathcal{K}_q(E^T, c), \qquad V^T U = I$

• Use the Lanczos process to biorthonormalize the columns of U and V: it gives very good numerical stability.
Example: Simulation of voltage gain of a filter with PVL (Padé Via Lanczos)

Compare to Padé via AWE (Asymptotic Waveform Evaluation)
Approaches for picking V and U

• Use eigenvectors of the system matrix
• POD or SVD or KLD or PCA
• Use Krylov subspace vectors (moment matching)
  – general Krylov subspace methods
  – case 1: Arnoldi
  – case 2: PVL
  – case 3: multipoint moment matching
  – moment matching preserving passivity: PRIMA
• Use singular vectors of the system Gramians product (Truncated Balanced Realizations)
Case #3: Intuitive view of subspace choice for general expansion points

Instead of expanding around only $s = 0$, we can expand around other points $s_1, s_2, \ldots, s_J$:

$s x = A x + b u, \quad y = c^T x \;\Longrightarrow\; (s - s_h) x = (A - s_h I) x + b u, \quad y = c^T x$

For each expansion point the problem can then be put again in the canonical form

$\tilde{s}\, E_h x = x + b_h u, \quad y = c^T x$

where $\tilde{s} = s - s_h$, $E_h = (A - s_h I)^{-1}$, and $b_h = (A - s_h I)^{-1} b$.
Case #3: Intuitive view of Krylov subspace choice for general expansion points (cont.)

$\tilde{s}\, (A - s_h I)^{-1} x = x + (A - s_h I)^{-1} b u, \quad y = c^T x$

Hence choosing the Krylov subspace

$\mathrm{Range}(U_q) = \mathrm{span}\Big\{ \bigcup_{h=1}^{J} \mathcal{K}_{k_h}\big((A - s_h I)^{-1},\, (A - s_h I)^{-1} b\big) \Big\}$

matches the first $k_j$ moments of the transfer function around each expansion point $s_j$ (e.g. $s_1 = 0$, $s_2$, $s_3$).
Generating vectors for the Krylov subspace

Most of the computation cost is spent in calculating $u_{i+1} = E_h u_i = (A - s_h I)^{-1} u_i$:

set up and solve the linear system $(A - s_h I) u_{i+1} = u_i$ using GCR.

If we have a good preconditioner and a fast matrix-vector product, each new vector is calculated in O(n). The total complexity for calculating the projection matrix $U_q$ is then O(qn).
Approaches for picking V and U

• Use eigenvectors of the system matrix
• POD or SVD or KLD or PCA
• Use Krylov subspace vectors (moment matching)
  – general Krylov subspace methods
  – case 1: Arnoldi
  – case 2: PVL
  – case 3: multipoint moment matching
  – moment matching preserving passivity: PRIMA
• Use singular vectors of the system Gramians product (Truncated Balanced Realizations)
Sufficient conditions for passivity

$s x = A x + B u, \qquad y = C x$

Sufficient conditions for passivity:
1) $C = B^T$
2) $x^T A x \le 0$ for all $x$, i.e. A is negative semidefinite

Note that these are NOT necessary conditions (a common misconception).
Example: Finite Difference system from the Poisson Equation (heat problem)

$\frac{dx(t)}{dt} = A x(t) + b u(t), \qquad y(t) = c^T x(t)$

with $A \in \mathbb{R}^{N \times N}$, $x \in \mathbb{R}^N$, scalar input $u(t)$ (heat in at one end, insulated at the other end), scalar output $y(t)$. The matrix A is the tridiagonal finite-difference Laplacian with $-2$ (and $-1$ at the insulated boundary) on the diagonal and $1$ on the off-diagonals, and $b = c = [1, 0, \ldots, 0]^T$.

We already know that the finite-difference matrix $-A$ is positive semidefinite. Hence A, and $E = A^{-1}$, are negative semidefinite.
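The semidefiniteness claim is easy to check numerically. A small sketch with the standard tridiagonal(1, -2, 1) Laplacian (boundary treatment simplified to a constant diagonal, for illustration only):

```python
import numpy as np

# 1-D finite-difference Laplacian (heat problem) of size N:
# -2 on the diagonal, 1 on the first off-diagonals
N = 6
A = -2.0 * np.eye(N) + np.eye(N, k=1) + np.eye(N, k=-1)

# A is symmetric negative definite: all eigenvalues are < 0, so
# x^T A x <= 0 for every x, one of the passivity conditions
eigenvalues = np.linalg.eigvalsh(A)
assert np.all(eigenvalues < 0)
```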
Sufficient conditions for passivity

$s E x = x + B u, \qquad y = C x$

Sufficient conditions for passivity:
1) $C = B^T$
2) $x^T E x \le 0$ for all $x$, i.e. E is negative semidefinite

Note that these are NOT necessary conditions (a common misconception).
Congruence Transformations Preserve Negative (or Positive) Semidefiniteness

• Def. congruence transformation: $\hat{E} = U^T E U$ (the same matrix U on both sides).
• Note: case #1 in the projection framework, V = U, produces congruence transformations.
• Lemma: a congruence transformation preserves the negative (or positive) semidefiniteness of the matrix.
• Proof: just rename $y = U x$; if $y^T E y \le 0$ for all $y$, then $x^T \hat{E} x = x^T U^T E U x = (Ux)^T E (Ux) \le 0$ for all $x$.
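The lemma also holds for tall rectangular U, which is exactly the projection case. A minimal numeric check (the matrices are random stand-ins):

```python
import numpy as np

rng = np.random.default_rng(5)
n, q = 7, 3

# A negative semidefinite E, built as -M M^T
M = rng.standard_normal((n, n))
E = -M @ M.T

# Congruence transformation with an arbitrary tall U
U = rng.standard_normal((n, q))
E_hat = U.T @ E @ U

# E_hat inherits negative semidefiniteness: x^T E_hat x = (Ux)^T E (Ux) <= 0
assert np.all(np.linalg.eigvalsh(E_hat) <= 1e-10)
```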
Congruence Transformation Preserves Negative Definiteness of E (hence passivity and stability)

If we use $V_q^T = U_q^T$:

$\underbrace{U_q^T}_{q \times n}\, \underbrace{E}_{n \times n}\, \underbrace{U_q}_{n \times q}\, s \hat{x} = \underbrace{U_q^T U_q}_{I_q}\, \hat{x} + U_q^T b u$

• Then we lose half of the degrees of freedom, i.e. we match only q moments instead of 2q.
• But if the original matrix E is negative semidefinite, so is the reduced one; hence the reduced system is passive and stable.
Sufficient conditions for passivity

$s E x = A x + B u, \qquad y = C x$

Sufficient conditions for passivity:
1) $C = B^T$
2) $x^T E x \ge 0$ for all $x$, i.e. E is positive semidefinite
3) $x^T A x \le 0$ for all $x$, i.e. A is negative semidefinite

Note that these are NOT necessary conditions (a common misconception).
Example: State-Space Model from MNA of R, L, C circuits

For an RLC two-port (current inputs $I_{in1}$, $I_{in2}$; voltage outputs $v_{out1}$, $v_{out2}$), MNA produces a descriptor system in the state $x(t) = [v_1(t),\, v_2(t),\, v_3(t),\, i_L(t)]^T$:

$E \frac{dx}{dt} = A x + B u, \qquad y = C x$

where E collects the capacitances $C_1, C_2$ and the inductance L on its diagonal, and A collects the conductance terms $\pm 1/R$ together with the skew-symmetric incidence entries $\pm 1$ that couple the inductor current to the node voltages.

For immittance systems in MNA form:
• E is positive semidefinite
• A is negative semidefinite
• $C = B^T$

Lemma: A is negative semidefinite if and only if $x^T (A + A^T) x \le 0$ for all $x$.
PRIMA (for preserving passivity) (Odabasioglu, Celik, Pileggi, TCAD 98)

A different implementation of case #1 (V = U, $U^T U = I$, Arnoldi) of the Krylov projection framework:

$\mathrm{span}\{u_1, \ldots, u_q\} = \mathcal{K}_q(A^{-1}E,\, A^{-1}b) = \mathrm{span}\{A^{-1}b,\, A^{-1}E A^{-1}b,\, \ldots,\, (A^{-1}E)^{q-1} A^{-1}b\}$

$U = V = [u_1, \ldots, u_q], \qquad U^T U = I$

The projection is applied directly to the descriptor form:

$s E x = A x + b u, \quad y = b^T x \;\longrightarrow\; s\, U_q^T E U_q\, \hat{x} = U_q^T A U_q\, \hat{x} + U_q^T b u, \quad y = b^T U_q \hat{x}$

Use Arnoldi: numerically very stable.
PRIMA preserves passivity

• The main difference between case #1 and PRIMA:
  – case #1 applies the projection framework to $s A^{-1} E x = x + A^{-1} B u, \quad y = B^T x$
  – PRIMA applies the projection framework to $s E x = A x + B u, \quad y = B^T x$
• PRIMA preserves passivity because:
  – it uses Arnoldi, so that U = V and the projection becomes a congruence transformation
  – E and -A produced by electromagnetic analysis are typically positive semidefinite
  – the input matrix must be equal to the output matrix
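The passivity-preservation argument can be exercised numerically. The sketch below builds an artificial passive-style descriptor system (random symmetric stand-ins, not a real circuit), forms the PRIMA basis for $\mathcal{K}_q(A^{-1}E, A^{-1}b)$ with repeated linear solves, and checks that the congruence-transformed reduced matrices keep their definiteness:

```python
import numpy as np

rng = np.random.default_rng(6)
n, q = 8, 3

# Passive-style descriptor system: E positive semidefinite, A negative definite
ME = rng.standard_normal((n, n))
E = ME @ ME.T                  # E >= 0
MA = rng.standard_normal((n, n))
A = -(MA @ MA.T) - np.eye(n)   # A < 0 (and invertible)
b = rng.standard_normal(n)

# Orthonormal basis of K_q(A^{-1}E, A^{-1}b), built by solving with A
v = np.linalg.solve(A, b)      # A^{-1} b
cols = [v]
for _ in range(q - 1):
    v = np.linalg.solve(A, E @ v)   # next vector (A^{-1}E) v
    cols.append(v)
U, _ = np.linalg.qr(np.column_stack(cols))

# Congruence-transformed reduced matrices keep their definiteness,
# so the reduced model is again passive and stable
E_hat, A_hat = U.T @ E @ U, U.T @ A @ U
assert np.all(np.linalg.eigvalsh(E_hat) >= -1e-10)
assert np.all(np.linalg.eigvalsh(A_hat) <= 1e-10)
```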
Algebraic proof of moment matching for PRIMA: expansion at s=0, V=U, orthonormal U ($U^T U = I$)

$H(s) = b^T (sE - A)^{-1} b = -b^T (I - s A^{-1}E)^{-1} A^{-1} b = -\sum_{k=0}^{\infty} b^T (A^{-1}E)^k A^{-1} b\, s^k$

$\hat{H}(s) = \hat{b}^T (s\hat{E} - \hat{A})^{-1} \hat{b} = -\hat{b}^T (I - s \hat{A}^{-1}\hat{E})^{-1} \hat{A}^{-1} \hat{b} = -\sum_{k=0}^{\infty} \hat{b}^T (\hat{A}^{-1}\hat{E})^k \hat{A}^{-1} \hat{b}\, s^k$

$\hat{b}^T (\hat{A}^{-1}\hat{E})^k \hat{A}^{-1} \hat{b} = b^T U (U^T A U)^{-1} U^T E U \cdots (U^T A U)^{-1} U^T b = b^T (A^{-1}E)^k A^{-1} b$

applying repeatedly the lemma: if U is orthonormal ($U^T U = I$) and $A^{-1} b \in \mathrm{colspan}(U)$, then

$U (U^T A U)^{-1} U^T b = A^{-1} b$
Proof of lemma

$A^{-1} b \in \mathrm{colspan}(U) \;\Rightarrow\; U (U^T A U)^{-1} U^T b = A^{-1} b$

Proof: $A^{-1} b \in \mathrm{colspan}(U) \;\Rightarrow\; \exists g \text{ s.t. } A^{-1} b = U g$. Then

$U (U^T A U)^{-1} U^T b = U (U^T A U)^{-1} U^T A A^{-1} b = U (U^T A U)^{-1} U^T A U g = U g = A^{-1} b$
Compare methods

• case #1 (Arnoldi, V=U, $U^T U = I$ on $s A^{-1} E x = x + B u$):
  – moments matched by a model of order q: q
  – preserves passivity: no
• PRIMA (Arnoldi, V=U, $U^T U = I$ on $s E x = A x + B u$):
  – moments matched by a model of order q: q
  – preserves passivity: yes (necessary when the model is used in a time-domain simulator)
• case #2 (PVL, Lanczos, V≠U, $V^T U = I$ on $s A^{-1} E x = x + B u$):
  – moments matched by a model of order q: 2q (more efficient)
  – preserves passivity: no (good only if the model is used in the frequency domain)
Conclusions

• Reduction via eigenmodes
  – expensive and inefficient
• Reduction via rational function fitting (point matching)
  – inaccurate in between points, numerically ill-conditioned
• Reduction via Quasi-Convex Optimization
  – quite efficient and accurate
• Reduction via moment matching: Padé approximations
  – better behavior, but covers a small frequency band
  – numerically very ill-conditioned
• Reduction via moment matching: Krylov Subspace Projection Framework
  – allows multipoint-expansion moment matching (wider frequency band)
  – numerically very robust and computationally very efficient
  – use PVL (more efficient) for a model used in the frequency domain
  – use PRIMA to preserve passivity if the model is for a time-domain simulator
Case study: Passive Reduced Models from an Electromagnetic Field Solver

Importance of including dielectrics: a simple transmission line example (a long coplanar T-line over a dielectric layer, shorted on the other side).

[Figure: admittance [S] (10^-4 to 10^0) vs. frequency [Hz] (1 to 6 x 10^8); solid: with dielectrics, dashed: without dielectrics.]

Can guarantee passivity.
Techniques for including dielectrics

• Finite Element Method
• Green's Functions for dielectric bodies
• Surface formulations using the Equivalence Theorem
  – (substitute dielectrics with equivalent surface currents and use free-space Green's functions)
• Volume formulations using polarization currents
Volume Integral Formulation including Dielectrics

Polarization currents account for the dielectrics:

$\nabla \times H = J_c + j\omega \varepsilon_0 E + \underbrace{j\omega \varepsilon_0 (\varepsilon_r - 1) E}_{J_p}$

Both the conductor currents $J_c$ (over the conductor volumes $V_c$) and the polarization currents $J_p$ (over the dielectric volumes $V_d$) then enter the mixed-potential volume integral equation with the free-space kernel $K(r, r')$: vector-potential terms $\frac{j\omega\mu_0}{4\pi}\int_{V_c} J_c(r') K(r, r')\, dr'$ and $\frac{j\omega\mu_0}{4\pi}\int_{V_d} J_p(r') K(r, r')\, dr'$, plus scalar-potential terms $\frac{1}{4\pi\varepsilon_0}\int_{S_c} \rho_s(r') K(r, r')\, ds'$ and $\frac{1}{4\pi\varepsilon_0}\int_{S_d} \rho_s(r') K(r, r')\, ds'$ over the conductor and dielectric surfaces, together with current and charge conservation:

$\nabla \cdot J(r) = 0, \qquad \hat{n} \cdot J(r) = j\omega \rho_s(r)$
Frequency-independent kernel approximation

• Note: in this work we used a classical frequency-independent approximation for the integration kernel:

$K(r, r') = \frac{1}{|r - r'|}$
Reducing to algebraic form

• Surface and volume discretization, both for conductors and dielectrics, plus Galerkin testing gives the branch equations: a block system relating the conductor and dielectric branch currents and charges to the branch voltages, built from the resistance matrix R, the partial-inductance blocks $L_{cc}, L_{cd}, L_{dc}, L_{dd}$, the potential-coefficient blocks $P_{cc}, P_{cd}, P_{dc}, P_{dd}$, and the polarization term (Pols) of the dielectrics.

A mesh formulation for both conductors and dielectrics

• Applying mesh analysis with a mesh matrix M that covers both the conductor volumes and the dielectric volumes (including the dielectric layer) eliminates the branch unknowns and yields a system in the mesh currents $I_m$ alone.
Mesh analysis guarantees passivity

$\mathcal{L} \frac{dx}{dt} + \mathcal{R} x(t) = B u(t), \qquad y(t) = C x(t)$

where $\mathcal{L}$ and $\mathcal{R}$ are assembled through the mesh matrices $M_{fc}, M_{fd}$ from the partial-inductance blocks $L_{cc}, L_{cd}, L_{dc}, L_{dd}$, the potential-coefficient blocks $P_{cc}, P_{cd}, P_{dc}, P_{dd}$, the conductor resistances, and the polarization term Pol.

Can prove that:
1) $C = B^T$
2) $x^T (\mathcal{L} + \mathcal{L}^T) x \ge 0$ for all $x$
3) $x^T (\mathcal{R} + \mathcal{R}^T) x \ge 0$ for all $x$
Mesh analysis guarantees passivity (cont.)

$\mathcal{L} \frac{dx}{dt} + \mathcal{R} x(t) = B u(t), \qquad y(t) = C x(t)$

Can prove that:
1) $C = B^T$
2) $x^T (\mathcal{L} + \mathcal{L}^T) x \ge 0$ for all $x$
3) $x^T (\mathcal{R} + \mathcal{R}^T) x \ge 0$ for all $x$

Proof of 2): $\mathcal{L}$ is a congruence transformation, with the mesh matrix $[M_{fc}\; M_{fd}]$ on both sides, of a block-diagonal matrix containing the inductance blocks

$\begin{bmatrix} L_{cc} & L_{cd} \\ L_{dc} & L_{dd} \end{bmatrix}$

(positive definite when using Galerkin) and the Pol block (diagonal with positive coefficients). The middle matrix is block diagonal with positive (semi)definite blocks, hence positive semidefinite, and a congruence transformation preserves positive definiteness; so $\mathcal{L}$, and therefore $\mathcal{L} + \mathcal{L}^T$, is positive semidefinite.
Proof of 3), $x^T (\mathcal{R} + \mathcal{R}^T) x \ge 0$ for all $x$:

In the symmetric part the potential-coefficient and Pol contributions drop out, leaving

$\mathcal{R} + \mathcal{R}^T = \begin{bmatrix} 2 M_{fc} R_c M_{fc}^T & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}$

$M_{fc} R_c M_{fc}^T$ is a congruence transformation of $R_c$, which is diagonal with positive coefficients, and congruence preserves positive definiteness. $\mathcal{R} + \mathcal{R}^T$ is therefore block diagonal with positive semidefinite blocks, hence itself positive semidefinite.
Example 1: frequency response of the coplanar transmission line

[Figure: admittance [S] (10^-4 to 10^0) vs. frequency [Hz] (1 to 6 x 10^8); solid: with dielectrics, reduced model (order 16); circles: with dielectrics, full system (order 700).]

Example 2: frequency response of the line with opposite strips

[Figure: admittance [S] vs. frequency [Hz]; solid: with dielectrics, reduced model (order 16); circles: with dielectrics, full system (order 700).]

Example 2: current distributions

Note: NOT TO SCALE! Reduced filament widths for visualization purposes.

Example 3: current distributions for two bus wires on an MCM

Frequency response for the reduced model of the MCM bus

[Figure: admittance [S] vs. frequency [Hz]; solid: with dielectrics, reduced model (order 12); circles: with dielectrics, full system (order 600); dashed: without dielectrics.]
Conclusions: Electromagnetic Example

• The volume formulation with full mesh analysis (both conductors and dielectrics) produces
  – well-conditioned
  – and positive semidefinite matrices
• Hence guaranteed passive models are generated when using the congruence transformation.
Approaches for picking V and U

• Use eigenvectors of the system matrix
• POD or SVD or KLD or PCA
• Use Krylov subspace vectors (moment matching)
• Use singular vectors of the system Gramians product (Truncated Balanced Realizations)
Observability Gramian

Energy of the output y(t) starting from state x with no input:

$\int_0^{\infty} y^T(t)\, y(t)\, dt = \|y(t)\|^2 = \int_0^{\infty} x^T e^{A^T t} C^T C e^{A t} x\, dt = x^T W_o x$

Observability Gramian:

$W_o = \int_0^{\infty} e^{A^T t} C^T C e^{A t}\, dt$

Note: it is also the solution of the Lyapunov equation $A^T W_o + W_o A + C^T C = 0$.

Note: if $x = x_i$, the i-th eigenvector of $W_o$:

$\|y(t)\|^2 = x_i^T W_o x_i = \lambda_{o,i}\, x_i^T x_i$

Hence: eigenvectors of $W_o$ corresponding to small eigenvalues do NOT produce much energy at the output (i.e. they are not very observable). Idea: let's get rid of them!
Controllability Gramian

Minimum amount of input energy required to drive the system to a specific state x:

$\min \int_0^{\infty} u^T(t)\, u(t)\, dt = \min \|u(t)\|^2 = x^T W_c^{-1} x$

Inverse of the Controllability Gramian, with

$W_c = \int_0^{\infty} e^{A t} B B^T e^{A^T t}\, dt$

It is also the solution of the Lyapunov equation $A W_c + W_c A^T + B B^T = 0$.

Note: if $x = x_i$, the i-th eigenvector of $W_c$:

$\min \|u(t)\|^2 = x_i^T W_c^{-1} x_i = \lambda_{c,i}^{-1}\, x_i^T x_i$

Hence: eigenvectors of $W_c$ corresponding to small eigenvalues require a lot of input energy in order to be reached (i.e. they are not very controllable). Idea: let's get rid of them!
Naïve Controllability/Observability MOR

• Suppose I could compute a basis for the strongly observable and/or strongly controllable spaces. Projection-based MOR can then give a reduced model that deletes weakly observable and/or weakly controllable modes.
• Problems:
  – What if the same mode is strongly controllable, but weakly observable?
  – Are the eigenvalues of the respective Gramians even unique?
Changing coordinate system
• Consider an invertible change of coordinates: $x(t) = U\,\tilde{x}(t)$
• We know that the input/output relationship will be unchanged.
• But what about the Gramians and their eigenvalues?
• Gramians and their eigenvalues change:
$$\tilde{W}_o = U^T W_o U, \qquad \tilde{W}_c = U^{-1} W_c U^{-T}$$
Hence the relative degrees of observability and controllability are properties of the coordinate system.
• A bad choice of coordinates will lead to bad reduced models if we look at controllability and observability separately.
• What coordinate system should we use then?
73
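This is quick to see numerically. A hypothetical NumPy/SciPy sketch (random test system, illustrative names): a random invertible change of coordinates changes each Gramian's eigenvalues, while the eigenvalues of the product are invariant, since $\tilde{W}_c \tilde{W}_o = U^{-1} W_c W_o U$ is a similarity transformation:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# hypothetical small stable test system
rng = np.random.default_rng(1)
n = 5
A = -np.diag(rng.uniform(1.0, 10.0, n)) + 0.1 * rng.standard_normal((n, n))
B = rng.standard_normal((n, 1))
C = rng.standard_normal((1, n))

Wc = solve_continuous_lyapunov(A, -B @ B.T)
Wo = solve_continuous_lyapunov(A.T, -C.T @ C)

U = rng.standard_normal((n, n))          # invertible change of coordinates x = U x~
Ui = np.linalg.inv(U)
Wc_t = Ui @ Wc @ Ui.T                    # transformed controllability Gramian
Wo_t = U.T @ Wo @ U                      # transformed observability Gramian

spec = lambda M: np.sort(np.linalg.eigvals(M).real)
gramians_change = not np.allclose(spec(Wc), spec(Wc_t))
product_invariant = np.allclose(spec(Wc @ Wo), spec(Wc_t @ Wo_t))
print(gramians_change, product_invariant)
```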
Balancing
Fortunately the eigenvalues of the product $W_c W_o$ (Hankel singular values) do not change when changing coordinates. Writing the eigendecomposition
$$W_c W_o = S\,\Sigma^2\,S^{-1}$$
($\Sigma^2$: diagonal matrix with the eigenvalues of the product), under the change of coordinates $x(t) = U\tilde{x}(t)$:
$$\tilde{W}_c \tilde{W}_o = U^{-1} W_c U^{-T}\, U^T W_o U = U^{-1} W_c W_o U = (U^{-1}S)\,\Sigma^2\,(U^{-1}S)^{-1}$$
The eigenvectors change, but not the eigenvalues.
And since $W_c$ and $W_o$ are symmetric, a change of coordinate matrix $U$ can be found that diagonalizes both:
$$U^{-1} W_c U^{-T} = U^T W_o U = \Sigma$$
In balanced coordinates the Gramians are equal and diagonal.
74
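Such a $U$ can be computed with the standard "square root" algorithm: factor each Gramian, take an SVD of the cross product of the factors, and assemble the transformation. A minimal NumPy/SciPy sketch (random illustrative system; variable names are mine, not from the slides):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

# hypothetical small stable test system
rng = np.random.default_rng(2)
n = 6
A = -np.diag(rng.uniform(1.0, 5.0, n)) + 0.1 * rng.standard_normal((n, n))
B = rng.standard_normal((n, 1))
C = rng.standard_normal((1, n))

Wc = solve_continuous_lyapunov(A, -B @ B.T)
Wo = solve_continuous_lyapunov(A.T, -C.T @ C)

# square-root balancing: Wc = Lc Lc^T, Wo = Lo Lo^T, then SVD of Lo^T Lc
Lc = cholesky((Wc + Wc.T) / 2, lower=True)
Lo = cholesky((Wo + Wo.T) / 2, lower=True)
Z, s, Yt = svd(Lo.T @ Lc)            # s: Hankel singular values
U = Lc @ Yt.T / np.sqrt(s)           # balancing transformation, x = U x~
Ui = (Lo @ Z / np.sqrt(s)).T         # its inverse, S^{-1/2} Z^T Lo^T

# in the new coordinates both Gramians equal diag(s)
Wc_b = Ui @ Wc @ Ui.T
Wo_b = U.T @ Wo @ U
print(np.allclose(Wc_b, np.diag(s), atol=1e-6),
      np.allclose(Wo_b, np.diag(s), atol=1e-6))
```

The square-root form avoids ever forming the (possibly ill-conditioned) product $W_c W_o$ explicitly.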
Selection of vectors for the columns of the reduced order projection matrix
In balanced coordinates the Gramian product is diagonal:
$$\tilde{W}_c \tilde{W}_o = U^{-1} W_c U^{-T}\, U^T W_o U = \Sigma^2$$
In balanced coordinates it is easy to select the best vectors for the reduced model: we want the subspace of vectors that are at the same time most controllable and most observable, in other words the ones corresponding to the largest eigenvalues of the controllability and observability Gramians product. Simply pick the eigenvectors corresponding to the largest entries on the diagonal (Hankel singular values).
75
Truncated Balanced Realization Summary
• The good news:
  – we even have bounds for the error:
$$\|H(j\omega) - \hat{H}(j\omega)\|_\infty \le 2\,(\sigma_{q+1} + \cdots + \sigma_N)$$
  – can do even a bit better with the optimal Hankel reduction
• The bad news:
  – it is expensive:
    • need to compute the Gramians (solve the Lyapunov equations, e.g. $A^T W_o + W_o A + C^T C = 0$)
    • need to compute eigenvalues of the product: $O(N^3)$
• The bottom line:
  – If the size of your system allows you $O(N^3)$ computation, Truncated Balanced Realization is a much better choice than any other reduction method.
  – But if you cannot afford $O(N^3)$ computation (e.g. dense matrix with N > 5000) then PRIMA or PVL or Quasi-Convex-Optimization are better choices.
76
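Putting the pieces together, here is a compact TBR sketch (Python/SciPy; the random stable system and all names are hypothetical) that also spot-checks the error bound at a few sample frequencies:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

rng = np.random.default_rng(3)
n, q = 10, 4
A = -np.diag(rng.uniform(2.0, 20.0, n)) + 0.1 * rng.standard_normal((n, n))
B = rng.standard_normal((n, 1))
C = rng.standard_normal((1, n))

Wc = solve_continuous_lyapunov(A, -B @ B.T)
Wo = solve_continuous_lyapunov(A.T, -C.T @ C)
Lc = cholesky((Wc + Wc.T) / 2, lower=True)
Lo = cholesky((Wo + Wo.T) / 2, lower=True)
Z, hsv, Yt = svd(Lo.T @ Lc)                    # Hankel singular values

# keep the q dominant directions; Vq and Uq are biorthogonal: Vq Uq = I
Uq = Lc @ Yt[:q].T / np.sqrt(hsv[:q])
Vq = (Lo @ Z[:, :q] / np.sqrt(hsv[:q])).T
Ar, Br, Cr = Vq @ A @ Uq, Vq @ B, C @ Uq

# spot-check |H(jw) - Hr(jw)| <= 2 (sigma_{q+1} + ... + sigma_N)
bound = 2.0 * hsv[q:].sum()
errs = []
for w in (0.0, 1.0, 10.0, 100.0):
    H = (C @ np.linalg.solve(1j * w * np.eye(n) - A, B)).item()
    Hr = (Cr @ np.linalg.solve(1j * w * np.eye(q) - Ar, Br)).item()
    errs.append(abs(H - Hr))
print(max(errs) <= bound)
```

Note the $O(N^3)$ steps in plain sight: two Lyapunov solves, two Cholesky factorizations, and one SVD.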
Approaches for picking V and U
• Use Eigenvectors of the system matrix
• POD or SVD or KLD or PCA
• Use Krylov Subspace Vectors (Moment Matching)
• Use Singular Vectors of System Gramians Product
  – Truncated Balanced Realization (TBR)
  – Guaranteed Passive TBR
77
TBR: Passivity Preserving?
• TBR does not generally preserve passivity
‑ Not guaranteed PR-preserving
‑ Not guaranteed BR-preserving
• A special case: “symmetrizable” models
‑ Suppose the system is transformable to symmetric and internally PR
‑ TBR will generate PR models! (via congruence!)
‑ Stronger property than for PRIMA: TBR is coordinate-invariant
$B = C^T$ and $(sE - A)$ is s.p.d.
78
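For intuition on the symmetrizable case, here is a hypothetical numerical check (mine, not from the slides): build a system with $A$ symmetric negative definite, $B = C^T$, $D = 0$, reduce it with a congruence projection, and verify positive-realness by sampling the Hermitian part of $H(j\omega)$, which must be positive semidefinite for a PR system:

```python
import numpy as np

# hypothetical symmetric, internally PR test system: A = A^T < 0, C = B^T
rng = np.random.default_rng(4)
n, q = 8, 3
M = rng.standard_normal((n, n))
A = -(M @ M.T + np.eye(n))        # symmetric negative definite
B = rng.standard_normal((n, 2))
C = B.T

V, _ = np.linalg.qr(rng.standard_normal((n, q)))   # any orthonormal basis
Ar, Br, Cr = V.T @ A @ V, V.T @ B, B.T @ V         # congruence projection

def min_real_part(A_, B_, C_, w):
    # smallest eigenvalue of the Hermitian part of H(jw)
    H = C_ @ np.linalg.solve(1j * w * np.eye(A_.shape[0]) - A_, B_)
    return np.linalg.eigvalsh(H + H.conj().T).min()

# the congruence keeps Ar symmetric negative definite and Cr = Br^T,
# so the reduced model stays PR at every sampled frequency
ok = all(min_real_part(Ar, Br, Cr, w) >= -1e-10 for w in (0.1, 1.0, 10.0))
print(ok)
```

This uses a generic congruence rather than a TBR basis, but the structural point is the same: preserving the symmetry $B = C^T$, $A = A^T \prec 0$ preserves positive-realness.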
Positive-Real Lemma
• Lur'e equations:
$$A^T X + X A = -Q Q^T$$
$$X B - C^T = -Q W$$
$$W^T W = D + D^T$$
• The system is positive-real if and only if a positive semidefinite solution $X$ exists.
• A dual set of equations can be written for $Y$ with $(A, B, C, D) \rightarrow (A^T, C^T, B^T, D^T)$.
79
PR Preserving TBR
• Lur'e equations for "Gramians": Lyapunov + constraints
$$A^T X + X A = -Q Q^T$$
$$X B - C^T = -Q W$$
$$W^T W = D + D^T$$
• Insight from the PR lemma: the solutions $X, Y$ can be used in a TBR procedure
  ‑ "Balance" the Lur'e equations, then truncate
• By a similar partitioning argument, the truncated (reduced) system will be PR/BR (passive) iff the original is!
80
Physical Interpretation
• Consider a Y-parameter model
  ‑ Inputs: voltages. Outputs: currents.
  ‑ Dissipated energy: $\int y^T u\,dt = \int I^T V\,dt$
• Lur'e equation for PR-"Controllability" Gramian
  ‑ Singular values represent: gains from dissipated energy to state
  ‑ Minimum energy dissipation to reach a given state $x_0$ (well defined iff $X$ is s.p.d.!):
$$\min_u \int_0^\infty y(t)^T u(t)\,dt = x_0^T X^{-1} x_0$$
• Lur'e equation for PR-"Observability" Gramian
  ‑ Singular values represent: gains from state to output
  ‑ Energy dissipated, given initial state $x_0$:
$$\int_0^\infty y(t)^T u(t)\,dt = x_0^T Y x_0$$
81
Computational Procedure
• Put the system into standard form:
$$H(s) = D + C(sE - A)^{-1}B = \tilde{D} + s\tilde{K} + \tilde{C}(sI - \tilde{A})^{-1}\tilde{B}$$
  ‑ If $E$ is singular, this requires an eigendecomposition
• Solve the PR/BR Lur'e equations
  ‑ Solve a generalized eigenproblem of 2X size
  ‑ Special treatment for singular $D$
• Balance & truncate as in standard TBR
82
Alternate Hybrid Procedure
• Perform standard TBR
• Use Positive-Real Lemma to check passivity of models generated
• If model is not acceptable, proceed to PR-TBR
• Why?
‑ Usually costs less
‑ May get better models
83
Example : RLC Model
TBR Model Not Positive Real
84
Example : Integrated Spiral Inductor
Order 60 PRIMA
Order 5 PR-TBR
Two Complementary Approaches
• Moment Matching Approaches
  ‑ Accurate over a narrow band: matching function values and derivatives.
  ‑ Cheap: O(qn)
  ‑ Use it as a FIRST STAGE REDUCTION
• Truncated Balanced Realization and Hankel Reduction
  ‑ Optimal (best accuracy for a given size q) and a priori error bound
  ‑ Expensive: O(n3)
  ‑ Use it as a SECOND STAGE REDUCTION
Combined Krylov-TBR algorithm
Initial model: (A, B, C), size n
  ↓ Krylov reduction (Wi, Vi):
    Ai = WiT A Vi,  Bi = WiT B,  Ci = C Vi
Intermediate model: (Ai, Bi, Ci), size ni
  ↓ TBR reduction (Wt, Vt):
    Ar = WtT Ai Vt,  Br = WtT Bi,  Cr = Ci Vt
Reduced model: (Ar, Br, Cr), size q
87
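A hypothetical end-to-end sketch of the two-stage flow (NumPy/SciPy; random stable test system, illustrative names): Arnoldi on $A^{-1}B$ produces the intermediate model cheaply, then TBR on the small intermediate model squeezes it down to $q$ states:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

rng = np.random.default_rng(5)
n, ni, q = 50, 6, 3
A = -np.diag(rng.uniform(2.0, 50.0, n)) + 0.1 * rng.standard_normal((n, n))
B = rng.standard_normal((n, 1))
C = rng.standard_normal((1, n))

# stage 1: Krylov (Arnoldi) reduction, matching moments at s = 0,
# i.e. an orthonormal basis for span{A^-1 B, A^-2 B, ...}
V = np.zeros((n, ni))
v = np.linalg.solve(A, B).ravel()
V[:, 0] = v / np.linalg.norm(v)
for k in range(1, ni):
    w = np.linalg.solve(A, V[:, k - 1])
    w -= V[:, :k] @ (V[:, :k].T @ w)   # Gram-Schmidt
    w -= V[:, :k] @ (V[:, :k].T @ w)   # reorthogonalize for stability
    V[:, k] = w / np.linalg.norm(w)
Ai, Bi, Ci = V.T @ A @ V, V.T @ B, C @ V       # intermediate model, size ni

# stage 2: TBR on the cheap (ni x ni) intermediate model
Wc = solve_continuous_lyapunov(Ai, -Bi @ Bi.T)
Wo = solve_continuous_lyapunov(Ai.T, -Ci.T @ Ci)
Lc = cholesky((Wc + Wc.T) / 2, lower=True)
Lo = cholesky((Wo + Wo.T) / 2, lower=True)
Z, hsv, Yt = svd(Lo.T @ Lc)
Ut = Lc @ Yt[:q].T / np.sqrt(hsv[:q])
Vt = (Lo @ Z[:, :q] / np.sqrt(hsv[:q])).T
Ar, Br, Cr = Vt @ Ai @ Ut, Vt @ Bi, Ci @ Ut    # final model, size q
print(Ar.shape)
```

The expensive $O(n_i^3)$ TBR steps only touch the intermediate model, which is the whole point of the combination.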
Conclusions
• Moment Matching Projection Methods
‑ e.g. PVL, PRIMA, Arnoldi
‑ are suitable for application to VERY large systems O(qn)
‑ but do not generate optimal models
• PR/BR-TBR
‑ Independent of system structure
‑ Guarantee passive models
‑ but computationally O(n3): usable only on model sizes < 3000
• Combination of projection methods and new TBR technique provides near-optimal compression and guaranteed passive models -- in reasonable time
• Quasi-Convex Optimization Reduction is also a good alternative, especially when building models from measurements
88
Course Outline
• Numerical Simulation
  ‑ Quick intro to PDE Solvers
  ‑ Quick intro to ODE Solvers
• Model Order Reduction
  ‑ Linear systems
    • Common engineering practice
    • Optimal techniques in terms of model accuracy
    • Efficient techniques in terms of time and memory
  ‑ Non-Linear Systems
• Parameterized Model Order Reduction
  ‑ Linear Systems
  ‑ Non-Linear Systems
Monday
Yesterday
Friday
Tomorrow
Today