G2ELAB
Model Order Reduction Partial Report
Mateus Antunes Oliveira Leite
24/04/2015
Partial report containing the theory of model order reduction
1. Topics in Linear Algebra
In order to introduce the concepts of model order reduction in a self-contained manner, some
important topics in linear algebra are presented.
Vector Basis
Any vector in ℝ^n can be represented as a linear combination of n linearly independent basis vectors. In Equation (1), b_i represents a basis vector of ℝ^n with index i and v is the vector being composed.

v = α_1 b_1 + α_2 b_2 + ⋯ + α_n b_n (1)
In matrix notation, this relation can be expressed as in Equation (2). The columns of the matrix B are the basis vectors and the vector α contains the coefficients of each of the basis vectors. The vector v is said to be in the column space of B.

v = B α (2)

It must be pointed out that vectors are geometric objects and thus are independent of the basis one chooses to represent them. The vectors v and α can be regarded as the same geometric vector expressed in different bases. Therefore, Equation (2) can be regarded as a change of basis. Since the columns of B are linearly independent, the matrix is nonsingular and its inverse exists. This permits expressing the inverse mapping as in Equation (3).

α = B^{-1} v (3)
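The change of basis in Equations (2) and (3) can be checked numerically; a minimal numpy sketch with a hypothetical 2-dimensional basis:

```python
import numpy as np

# Hypothetical basis for R^2: the columns of B are the base vectors b1, b2.
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])
alpha = np.array([2.0, 3.0])        # coefficients of v in the basis B

v = B @ alpha                       # Equation (2): compose v from the basis
alpha_back = np.linalg.solve(B, v)  # Equation (3): recover the coefficients

assert np.allclose(alpha_back, alpha)
```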
Projections
In this section, a simple 3-dimensional example of projection is presented. However, the derived
results are general and can be applied directly to an arbitrary number of dimensions.
Figure 1 - Projection into a 2-dimensional space (the subspace Ω with basis vectors b1 and b2; v is projected onto p with error e)
Figure 1 represents the projection of the vector v into the subspace Ω. The resulting, or projected, vector is p, and e is the difference between these two quantities. The matrix B representing the subspace can be constructed by arranging the basis vectors b_1 and b_2 as its columns. Thus, any vector lying in this subspace can be represented by a relation similar to Equation (2). Therefore, the relation among the vectors v, p and e is given by Equation (4).

v = p + e (4)

The fact that p is in the column span of B allows writing Equation (4) as in Equation (5).

v = B α + e (5)
To determine the vector α, one should choose a subspace Ψ orthogonal to e (that is, Ψ^T e = 0), obtaining an explicit expression for α, as shown in Equation (6).

α = (Ψ^T B)^{-1} Ψ^T v (6)

Therefore, the projected vector can be obtained by Equation (7).

p = B (Ψ^T B)^{-1} Ψ^T v (7)
The linear operator representing the oblique projection of the components of v that are orthogonal to Ψ into Ω can be extracted directly from this relation, as expressed in Equation (8).

P_{Ω,Ψ} = B (Ψ^T B)^{-1} Ψ^T (8)
If both subspaces Ω and Ψ are the same, this projection is called an orthogonal projection. This kind of projection has a very important relation to the following optimization problem: find a vector p lying in the subspace Ω that best approximates the vector v. In the current context, the best approximation is obtained when the square of the length of the error is minimized. Using the law of cosines, the error vector represented in Figure 1 satisfies Equation (9).

|e|^2 = v^T v − 2 v^T p + p^T p (9)

Using the fact that p is in the column space of B, the above relation can be written as in Equation (10).

|e|^2 = v^T v − 2 v^T B α + (B α)^T B α (10)
Taking the gradient of the right-hand side with respect to α and imposing the condition that it be zero, one can isolate the vector α. This process is shown in (11). See the appendix for the gradient property used.

∇(v^T v − 2 v^T B α + (B α)^T B α) = 0
∇(−2 v^T B α) + ∇[(B α)^T B α] = 0
−2 B^T v + 2 B^T B α = 0
B^T v = B^T B α
α = (B^T B)^{-1} B^T v
(11)

Therefore, the vector that minimizes the square of the modulus of the error is given by Equation (12).

p = B (B^T B)^{-1} B^T v (12)
Comparing Equations (8) and (12), one can conclude that the optimal approximation vector is simply an orthogonal projection into the subspace Ω.
A very important application of this fact, which will be used extensively in model order reduction, is the approximate solution of overdetermined systems. As an example, imagine that one wants to solve the linear system of Equation (13) with A ∈ ℝ^{n,m}, x ∈ ℝ^m and b ∈ ℝ^n, with n > m.

A x = b (13)

This problem is overdetermined and can be interpreted geometrically as trying to find the representation of the vector b using the columns of A as basis vectors. It is very unlikely that this is possible, since the number of basis vectors needed to span the totality of ℝ^n is n, but only m basis vectors are available. Therefore, the available basis represents a subspace of dimension m embedded in a higher n-dimensional space. One approach to solving this problem approximately is to project the vector b into the column span of A. The result is shown in Equation (14).

A x = A (A^T A)^{-1} A^T b (14)
One can multiply both sides of the above equation by A^T to obtain Equation (15).

A^T A x = A^T b (15)

Therefore, it is possible to conclude that, to obtain the closest approximation to Equation (13) in the least-squares sense, it is only necessary to multiply the system of equations by the transpose of the coefficient matrix.
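The normal-equations recipe of Equation (15) can be sketched in numpy (the matrix sizes and data below are arbitrary illustrations):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 3))   # n = 6 equations, m = 3 unknowns (n > m)
b = rng.standard_normal(6)

# Equation (15): multiply the overdetermined system by A^T and solve.
x = np.linalg.solve(A.T @ A, A.T @ b)

# The residual e = b - A x is orthogonal to the column span of A.
residual = b - A @ x
assert np.allclose(A.T @ residual, 0.0)

# Same answer as the library least-squares routine.
x_ref, *_ = np.linalg.lstsq(A, b, rcond=None)
assert np.allclose(x, x_ref)
```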
Similarity Transformation
Matrices may be used to represent linear operators; an example of this is the projection operator. If two bases for the same linear space are related by a change of coordinates as shown in Equation (16), it is natural to ask how to transform the operator between coordinate systems. One may start with Equation (17), which represents the application of a transformation A to a vector x.

x = T y (16)

A x = x̃ (17)

Substituting Equation (16) into (17), the relation for y and the transformed operator B may be obtained. This is shown in Equation (18).

B y = T^{-1} A T y = ỹ (18)

This type of relation has a very important property: the eigenvalues of A are the same as those of B. This can be easily shown by the development in (19).

|T^{-1} A T − λ I| = |T^{-1} A T − λ T^{-1} T| = |T^{-1}| |A − λ I| |T| = |A − λ I| (19)
In a similar fashion, one may wonder what the relation is between the eigenvalues of A and those of a transformation of the kind indicated in Equation (20).

A_r = V^T A V (20)

For this analysis the focus is not to show that the eigenvalues are the same, which they are not, but to demonstrate that their signs are preserved. This is very important because, as will be explained later in this document, the sign of the eigenvalues determines whether the system is stable or not.

If A is symmetric, and the system is stable, then it can be decomposed as written in Equation (21).

A = −B^T B (21)

This allows one to prove that the reduced matrix is negative semidefinite, as shown in (22). For a general A matrix, there is no guarantee that the resulting reduced system will be stable.

x^T A_r x = x^T V^T A V x = −x^T V^T B^T B V x = −(B V x)^T (B V x) = −‖B V x‖^2 ≤ 0 (22)
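The sign-preservation argument of (21) and (22) can be checked numerically; a small numpy sketch with a hypothetical negative semidefinite A and a random reduction basis:

```python
import numpy as np

rng = np.random.default_rng(1)
Bf = rng.standard_normal((5, 5))
A = -Bf.T @ Bf                    # Equation (21): A = -B^T B, negative semidefinite
V = rng.standard_normal((5, 2))   # reduction basis with full column rank

Ar = V.T @ A @ V                  # congruence transformation, Equation (20)
eigs = np.linalg.eigvalsh(Ar)
assert np.all(eigs <= 1e-12)      # the reduced eigenvalues keep the negative sign
```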
Singular Value Decomposition
Any matrix A ∈ ℝ^{m,n} may be decomposed as the product of two orthonormal matrices and a diagonal one, as shown in Equation (23).

A = U Σ V^* (23)

To determine these matrices, one may write the product of Equation (24).

A A^* = U Σ V^* V Σ^* U^* = U Σ Σ^* U^* (24)

This leads to the conclusion that the matrix U is built from the eigenvectors of the matrix AA^*, and each diagonal entry of Σ is the square root of one of the nonzero eigenvalues of AA^*. A similar argument shows that the matrix V is composed of the eigenvectors of the matrix A^*A. The columns of U associated with the nonzero singular values form an orthonormal basis for the range of A.
This decomposition allows writing any matrix as a sum of rank-1 products. This is written in Equation (25). It is possible to show that truncating this series at position r leads to the best rank-r approximation of the matrix A.

A = σ_1 u_1 v_1^* + σ_2 u_2 v_2^* + ⋯ + σ_r u_r v_r^* (25)
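The truncation of Equation (25) can be illustrated with numpy; the best-approximation property is checked in the spectral norm, where the error of the rank-r truncation equals the first discarded singular value (matrix and rank below are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((8, 6))
U, s, Vh = np.linalg.svd(A, full_matrices=False)

r = 3
# Truncate Equation (25) after r terms: a sum of r rank-1 products.
A_r = sum(s[i] * np.outer(U[:, i], Vh[i, :]) for i in range(r))

# The spectral-norm error of the best rank-r approximation is sigma_{r+1}.
err = np.linalg.norm(A - A_r, 2)
assert np.isclose(err, s[r])
```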
2. Model Order Reduction by Projection
State Space Representation
The representation of time-invariant linear systems used in this document is given by Equations (26) and (27). In these equations: E ∈ ℝ^{n,n}, A ∈ ℝ^{n,n}, B ∈ ℝ^{n,m}, C ∈ ℝ^{s,n}, D ∈ ℝ^{s,m}, x ∈ ℝ^n, u ∈ ℝ^m, y ∈ ℝ^s. The vector x is the state vector, y is the output vector and u is the input vector.

E ẋ = A x + B u (26)

y = C x + D u (27)

It must be pointed out that Equations (26) and (27) are not coupled: the output equation (27) does not feed back into the state equation (26).
For now, the matrix D is assumed to be zero. If this is not the case in a particular application, the reasoning presented below can easily be adapted.

If the matrix E is not singular, it is possible to write the system in a simpler form, as shown in (28).

ẋ = E^{-1} A x + E^{-1} B u = A′ x + B′ u (28)

Many textbooks present this formulation as the standard form of the state-space system. However, problems arise when E is singular. This indicates the presence of algebraic states that have to be treated explicitly. The first step is to write Equation (26) as in Equation (29).
[E_11 0; 0 0] [ẏ_1; ẏ_2] = [A_11 A_12; A_21 A_22] [y_1; y_2] + [B_1; B_2] u (29)
To obtain this form, permutations of the rows and columns of the E matrix should be followed by equivalent permutations of A and B. To achieve this, one may use a permutation matrix P to create the transformation of Equation (30).

x = P y (30)

Accordingly, the transformed system is given by Equation (31).

P^T E P ẏ = P^T A P y + P^T B u (31)
At this stage it is possible to apply a Kron reduction to Equation (29) to obtain (32).

E_11 ẏ_1 = (A_11 − A_12 A_22^{-1} A_21) y_1 + (B_1 − A_12 A_22^{-1} B_2) u (32)

This representation is a reduced state-space equation that allows direct reduction to standard form. Thus, writing the system as in Equation (26) or as in Equation (28) is interchangeable, and either formulation may be used when it is more convenient.
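The Kron reduction of Equation (32) can be verified numerically: for a point satisfying the algebraic row of the partitioned system (29), the reduced right-hand side must coincide with the full one (all matrices below are arbitrary illustrations):

```python
import numpy as np

rng = np.random.default_rng(3)
n1, n2, m = 3, 2, 1
E11 = rng.standard_normal((n1, n1))
A11 = rng.standard_normal((n1, n1)); A12 = rng.standard_normal((n1, n2))
A21 = rng.standard_normal((n2, n1)); A22 = rng.standard_normal((n2, n2))
B1 = rng.standard_normal((n1, m));   B2 = rng.standard_normal((n2, m))

y1 = rng.standard_normal(n1); u = rng.standard_normal(m)

# Algebraic row of Equation (29): 0 = A21 y1 + A22 y2 + B2 u  =>  solve for y2.
y2 = np.linalg.solve(A22, -(A21 @ y1 + B2 @ u))

# Differential row, full form: E11 dy1/dt = A11 y1 + A12 y2 + B1 u.
rhs_full = A11 @ y1 + A12 @ y2 + B1 @ u

# Kron-reduced form, Equation (32).
rhs_kron = (A11 - A12 @ np.linalg.solve(A22, A21)) @ y1 \
         + (B1 - A12 @ np.linalg.solve(A22, B2)) @ u
assert np.allclose(rhs_full, rhs_kron)
```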
Observability and Reachability
There are two quantities that are very important to characterize the system. For a system written as in Equation (28), the response to an arbitrary input signal is given by Equation (33).

x(t) = e^{A(t−t_0)} x(t_0) + ∫_{t_0}^{t} e^{A(t−τ)} B u(τ) dτ (33)
If at the beginning of the simulation the system is found with zero initial conditions, it is natural to ask which input signal u(t) may be used to drive the system to a given state. There may be an infinite number of signals that can accomplish this task. To reduce the number of choices, one may require that this signal be optimal in the sense that it drives the system to the desired state using the minimum amount of energy. For simplicity, but without any loss of generality, the initial time is set to zero. The equation that describes the state evolution of this system is given by (34).

x(t) = ∫_{0}^{t} e^{A(t−τ)} B u(τ) dτ (34)
If one chooses the input function as in Equation (35), the above identity is satisfied with the least amount of energy [1]. The matrix P, which is called the controllability Gramian, is defined by Equation (36). Some authors use t = ∞ to define this quantity, calling it the infinite Gramian.

u(τ) = B^* e^{A^*(t−τ)} P^{-1}(t) x(t) (35)

P = ∫_{0}^{t} e^{A(t−τ)} B B^* e^{A^*(t−τ)} dτ = ∫_{0}^{t} e^{Aτ} B B^* e^{A^*τ} dτ (36)
The energy in this signal can be written as in (37). Its physical interpretation leads to the conclusion that the controllability Gramian measures how hard it is to reach a certain state. In the framework of model order reduction, states that are difficult to reach are good candidates for truncation.

E_c = ∫_{0}^{t} u(τ)^* u(τ) dτ = x_0^* P^{-1} x_0 (37)
The above discussion involved solely the states of the system and ignored its output. A dual concept called observability can be derived if one analyzes the amount of energy that a certain state delivers to the output when no input signal is present. The system response in this scenario is given by Equation (38).

y(t) = C e^{At} x_0 (38)

The energy of this signal can be calculated with the aid of Equation (37), and the result is given by Equation (39), which is also the definition of the observability Gramian Q.

E_o = x_0^* ∫_{0}^{t} e^{A^*τ} C^* C e^{Aτ} dτ x_0 = x_0^* Q x_0 (39)
Controllability and observability are dual concepts, and their relation is shown in Table 1.

Table 1 - Duality relation between controllability and observability

Controllability    Observability
A                  A^*
B                  C^*
Reduction by Projection
If it is assumed that n is a large number, the burden of solving Equation (26) is very high or even prohibitive. Therefore, a reduction of the system may be used to allow a faster solution with an acceptable loss of accuracy. As presented in Equation (2), any two bases for ℝ^n can be related by a transformation matrix. The transformation matrix can be partitioned into two smaller matrices, allowing one to obtain Equation (40). V contains q basis vectors and U the remaining n − q.

x = V x_r + U x_e (40)

Then, Equation (26) may be written as in (41).

E V ẋ_r = A V x_r + B u + A U x_e − E U ẋ_e (41)

One may choose the basis in such a way that the vectors of V are more important in representing the dynamics of the system than the vectors of U. Thus, the last two terms of the right-hand side can be interpreted as an error term ε.

E V ẋ_r = A V x_r + B u + ε (42)
The above system in x_r is overdetermined. The same process used in Equations (13), (14) and (15) can be applied, with a test basis W, to allow the solution of this equation. This is shown in (43).

W^T E V ẋ_r = W^T A V x_r + W^T B u + W^T ε (43)

Up to this point, the relation is exact and does not lead to any computational improvement. However, if the last term of the right-hand side of (43) is neglected, one can write the equations for the reduced system. This is shown in Equations (44) and (45).

W^T E V ẋ_r = W^T A V x_r + W^T B u (44)

y = C V x_r (45)

In the light of projection matrices, the above equations represent the projection of the system into a subspace spanned by V and orthogonal to W. The above set of equations allows the solution of an approximation to the original system, but with a reduced order.
For convenience, a relation between the original and reduced system matrices can be established. This allows writing the reduced system in standard form. This relation is presented in Table 2.

Table 2 - Reduced System Coefficient Matrices

Original System    Reduced System
E                  W^T E V
A                  W^T A V
B                  W^T B
C                  C V
The reduced system can be written as in Equations (46) and (47).

E_r ẋ_r = A_r x_r + B_r u (46)

y = C_r x_r (47)
To deal with initial conditions, one could use the projection of the initial condition vector into the
span of the reduced basis.
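A minimal numpy sketch of Table 2, assuming a hypothetical full system and the simple Galerkin choice W = V:

```python
import numpy as np

rng = np.random.default_rng(4)
n, q = 6, 2
E = np.eye(n)
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))   # hypothetical full system
B = rng.standard_normal((n, 1))
C = rng.standard_normal((1, n))

V = rng.standard_normal((n, q))   # trial basis with q basis vectors
W = V                             # Galerkin choice: test basis equal to V

# Table 2: reduced coefficient matrices.
Er, Ar = W.T @ E @ V, W.T @ A @ V
Br, Cr = W.T @ B, C @ V
assert Er.shape == (q, q) and Br.shape == (q, 1)
```

The quality of the approximation depends entirely on how well the columns of V capture the dynamics; the following sections discuss systematic ways of choosing them.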
Invariance of the Transfer Function
In the frequency domain, the transfer function of the system is given by Equation (48). This expression is valid for the full and the reduced system alike, if one keeps in mind the relations of Table 2.

H = C (sE − A)^{-1} B (48)
If, instead of using the matrices V and W for the projection, one utilizes different matrices with the same column span, the resulting transfer function is unchanged. If the matrices K and L are nonsingular, the matrices V′ and W′, defined in Equations (49) and (50), have the same column span as V and W, respectively.

V′ = V K (49)

W′ = W L (50)

Substituting these matrices into Equation (48), Equation (51) is obtained.

H = C V′ K^{-1} (s (W′ L^{-1})^T E V′ K^{-1} − (W′ L^{-1})^T A V′ K^{-1})^{-1} (W′ L^{-1})^T B (51)

Developing the above expression, the transfer function of the transformed reduced system is found to be identical to that of the original reduced system. Therefore, it may be concluded that the transfer function is invariant under a change of basis: the important aspects of the matrices V and W are their column spans.
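This invariance is easy to verify numerically; the sketch below evaluates the reduced transfer function at one arbitrarily chosen real frequency point, with random nonsingular K and L:

```python
import numpy as np

rng = np.random.default_rng(5)
n, q = 6, 3
E = np.eye(n)
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))
B = rng.standard_normal((n, 1))
C = rng.standard_normal((1, n))
V = rng.standard_normal((n, q))
W = rng.standard_normal((n, q))
K = rng.standard_normal((q, q))   # nonsingular with probability 1
L = rng.standard_normal((q, q))

def reduced_tf(Vb, Wb, s):
    # Reduced transfer function C_r (s E_r - A_r)^{-1} B_r from Table 2.
    Er, Ar = Wb.T @ E @ Vb, Wb.T @ A @ Vb
    Br, Cr = Wb.T @ B, C @ Vb
    return (Cr @ np.linalg.solve(s * Er - Ar, Br)).item()

h1 = reduced_tf(V, W, 2.0)
h2 = reduced_tf(V @ K, W @ L, 2.0)   # Equations (49) and (50)
assert np.isclose(h1, h2)
```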
Balanced Truncation
The observability and reachability of each state depend on the system realization. If one applies a basis change represented by the matrix T, the transformed Gramians are given by (52).

P̃ = T^{-1} P T^{-*}    Q̃ = T^* Q T (52)

It is possible to choose the matrix T such that both Gramians are equal and diagonal. This can be achieved by a singular value decomposition of the product, as in (53).

P Q = T Σ^2 T^{-1} (53)

In order to reduce the system, the states that have small associated diagonal values are truncated. The problem of this method is the burden of calculating the Gramians, which is of the order of n^3 [2]. The standard way to do this is to solve the Lyapunov equations given by (54). The verification of this fact can be made by direct substitution of the definition of the Gramians.

A P + P A^* + B B^* = 0    A^* Q + Q A + C^* C = 0 (54)
Proper Orthogonal Decomposition
The development of this section is largely inspired by [3]. The interested reader may look at this work for further discussion.

The proper orthogonal decomposition aims at finding a projection operator Π of fixed rank that minimizes the quadratic error incurred by the simulation of a lower-order system. For the case of continuous time, the quadratic error is given by Equation (55). We already know that one may truncate Equation (25) to solve this problem directly. However, the following development offers much insight into the structure of the problem.

E = ∫_{0}^{T} ‖x(t) − Π x(t)‖^2 dt (55)
The integrand of (55) is the norm of the component of x orthogonal to the projection subspace. As any vector can be written as the sum of its projection into a subspace and its projection into the orthogonal complement, minimizing the error is equivalent to maximizing the energy of the projected component. This is shown in Equation (56).

E = ∫_{0}^{T} ‖Π x(t)‖^2 dt (56)
Using Equation (12) and imposing the orthonormality of the basis (so that Π = V V^T), the development in (57) is possible. In this equation, tr represents the trace.

E = ∫_{0}^{T} ‖Π x(t)‖^2 dt = ∫_{0}^{T} ‖V^T x(t)‖^2 dt = tr ( V^T ∫_{0}^{T} x(t) x(t)^T dt V ) (57)

This motivates the definition of the POD kernel, given by Equation (58).

K = ∫_{0}^{T} x(t) x(t)^T dt (58)
This development can be cast into an optimization problem whose Lagrangian is given by (59), with one multiplier per orthonormality constraint.

L(V, α) = tr(V^T K V) − Σ_{i≤j} α_{ij} (v_i^T v_j − δ_{ij}) (59)

It is possible to cast the orthogonality restriction as in Equation (60) if one defines the symmetric matrix A as in (61).

L(V, α) = tr(V^T K V) − tr(A^T V^T V) + tr(A) (60)

A_{ii} = α_{ii},    A_{ij} = A_{ji} = ½ α_{ij} for i ≠ j (61)
Using the properties in the appendix, one may calculate the first-order optimality condition with respect to V to obtain Equation (62).

K V = V A (62)

This condition implies that V must span a subspace that is invariant under the application of the linear operator K. The first-order optimality condition for the α values leads to the obvious restriction given by Equation (63).

V^T V = I (63)
Both of these conditions are satisfied if V is built from the eigenvectors of K. To calculate K on a digital computer, a discretization is necessary. Equation (64) shows a way to approximate the kernel; the snapshot matrix S is defined in (65).

K = ∫_{0}^{T} x(t) x(t)^T dt ≈ Σ_i x(t_i) x(t_i)^T Δt_i = S S^T (64)

S = [x_1 √Δt_1   x_2 √Δt_2   ⋯   x_{nt} √Δt_{nt}] (65)
As the kernel is an n-by-n matrix, the burden of computing its eigenvectors may be very high. However, it is possible to avoid this through the singular value decomposition of S. This is shown in Equation (66).

S S^T = (U Σ V^T)(U Σ V^T)^T = U Σ Σ^T U^T (66)

Therefore, the eigenvectors that we are looking for are the columns of the U matrix of the singular value decomposition of S. To compute U in an efficient way, the "economic" version of the SVD may be used. Mathematically, this is equivalent to computing V by the eigenvalue decomposition of the product S^T S, whose size is much smaller than that of the kernel, and then using Equation (67) to obtain only some of the eigenvectors. This does not alter the problem, since the column span of the product is unchanged by the multiplication by Σ.

U Σ = S V (67)
It must be pointed out that this is only the mathematical description of the solution. In a computer
implementation this procedure may be replaced by more efficient ones.
In this method, only the system states are taken into account; no information is obtained from the system output. This can be addressed by using a second system that satisfies the relations in Table 1, called the dual system. Using information from these two systems allows one to approximate the balanced truncation method in a less expensive way.
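A minimal POD sketch in numpy, assuming uniformly spaced snapshots (so the √Δt weights of (65) only rescale S and do not change U); the snapshot data is a hypothetical rank-2 trajectory:

```python
import numpy as np

rng = np.random.default_rng(7)
n, nt = 50, 200
t = np.linspace(0.0, 1.0, nt)

# Hypothetical snapshot matrix: a trajectory dominated by two spatial modes.
m1, m2 = rng.standard_normal(n), rng.standard_normal(n)
S = np.outer(m1, np.sin(2 * np.pi * t)) + 0.1 * np.outer(m2, np.cos(4 * np.pi * t))

# "Economic" SVD of the snapshots, Equation (66): the POD basis is in U.
U, sv, Vh = np.linalg.svd(S, full_matrices=False)
V_pod = U[:, :2]   # keep the two dominant modes

# The orthogonal projection V V^T reproduces this rank-2 trajectory exactly.
err = np.linalg.norm(S - V_pod @ (V_pod.T @ S)) / np.linalg.norm(S)
assert err < 1e-10
```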
Moment Matching
The transfer function in Equation (48) can be rewritten as in Equation (68).

H = −C (I − s A^{-1} E)^{-1} A^{-1} B (68)

The central term can be expanded as a power series to obtain a series representation of the transfer function. This is shown in (69).

H = −Σ_{j=0}^{∞} C (A^{-1} E)^j A^{-1} B s^j (69)
The negatives of the coefficients of the powers of s are called the moments of the transfer function. These moments are centered at the zero frequency. To obtain the moments for other frequencies, the transfer function can be written as in Equation (70).

H = C ((s − ω) E − (A − ω E))^{-1} B (70)

Direct comparison of Equations (48) and (70) allows determining the equivalences pointed out in Table 3.

Table 3 - Equivalence for decentered moments

Centered at Zero    Centered at ω
E                   E
A                   A − ω E
Using these relations, one can directly write the moments for any frequency. This is done in Equation (71). This relation can be used to extend results obtained for the zero-centered expansion to an arbitrarily placed frequency.

H = −Σ_{j=0}^{∞} C [(A − ω E)^{-1} E]^j (A − ω E)^{-1} B (s − ω)^j (71)
The moment-matching technique aims at choosing the columns of V and W in such a way that some of the moments of the transfer function are exactly matched. The zeroth moment of the reduced model is given by Equation (72).

m_0 = C V (W^T A V)^{-1} W^T B (72)

If A^{-1} B is in the column span of V, there exists a vector r_0 as shown in Equation (73).

V r_0 = A^{-1} B (73)

Equation (72) can then be written as in (74). Therefore, the zeroth moment of the reduced system is equal to the zeroth moment of the full system.

m_0 = C V (W^T A V)^{-1} W^T A V r_0 = C V r_0 = C A^{-1} B (74)
The first moment of the reduced system is given by Equation (75).

m_1 = C V (W^T A V)^{-1} W^T E V (W^T A V)^{-1} W^T B (75)

Using (73), Equation (75) can be reduced to (76).

m_1 = C V (W^T A V)^{-1} W^T E A^{-1} B (76)

If A^{-1} E A^{-1} B is in the column span of V, there exists a vector r_1 as shown in Equation (77).

V r_1 = A^{-1} E A^{-1} B (77)

This relation can be used to write (76) as in Equation (78).

m_1 = C V (W^T A V)^{-1} W^T A V r_1 = C A^{-1} E A^{-1} B (78)
Therefore, if A^{-1}B and A^{-1}EA^{-1}B are in the column span of V, the zeroth and the first moments match. This process can be continued to match successive moments. To express this result in a simple way, one may introduce the Krylov matrix defined in Equation (79).

K(M, v, q) = [v   M v   M^2 v   ⋯   M^{q−1} v] (79)

Using this definition, q moments of the transfer function of the full system are matched if the matrix V is given by Equation (80).

V = K(A^{-1} E, A^{-1} B, q) (80)
It must be pointed out that nothing was imposed on the matrix W. A simple choice is to make it equal to V. However, a very similar argument shows that if W is chosen in accordance with Equation (81), another q moments of the full system are matched.

W = K(A^{-T} E^T, A^{-T} C^T, q) (81)

In some special systems, e.g. in impedance probing, both subspaces are the same. If A and E are symmetric and C^T = B, there is no need to calculate both subspaces.
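A numpy sketch of the moment-matching argument above, using q = 2 Krylov vectors and the simple choice W = V (all system matrices are arbitrary illustrations):

```python
import numpy as np

rng = np.random.default_rng(8)
n = 8
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))
E = np.eye(n) + 0.05 * rng.standard_normal((n, n))
B = rng.standard_normal((n, 1))
C = rng.standard_normal((1, n))

# Krylov vectors of Equation (80): A^-1 B and A^-1 E A^-1 B.
k0 = np.linalg.solve(A, B)
k1 = np.linalg.solve(A, E @ k0)
V, _ = np.linalg.qr(np.hstack([k0, k1]))   # orthonormal basis, same span
W = V

Ar, Er = W.T @ A @ V, W.T @ E @ V
Br, Cr = W.T @ B, C @ V

# Zeroth and first moments (Equation (69)) of the full and reduced systems.
m0_full = C @ np.linalg.solve(A, B)
m0_red  = Cr @ np.linalg.solve(Ar, Br)
m1_full = C @ np.linalg.solve(A, E @ np.linalg.solve(A, B))
m1_red  = Cr @ np.linalg.solve(Ar, Er @ np.linalg.solve(Ar, Br))
assert np.allclose(m0_full, m0_red)
assert np.allclose(m1_full, m1_red)
```

Note that the QR factorization is legitimate here: by the invariance result above, only the column span of V matters.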
3. Circuit Simulation
This section is largely related to the work in [4].

Before introducing the linear equations for circuit simulation, some notation and important concepts are introduced. One way to model a circuit in the computer is by an abstraction called a graph. A graph, denoted by G(V, E), is a pair of sets: the set of vertices (V) and the set of edges (E), the latter containing tuples of elements of V.

For the purpose of circuit representation, an edge can be thought of as a circuit component (resistor, capacitor, current source, etc.) and the nodes are the connection points. In this particular analysis, only resistors, capacitors, inductors and current sources will be considered. These components and their orientations are illustrated in Figure 2.
Figure 2 - Circuit elements with voltage and current orientations
Suppose the existence of an oriented graph representing the circuit. In order to translate this structure into matrix notation, one utilizes an edge-node incidence matrix M ∈ ℝ^{|V|,|E|} with its elements satisfying M_ij ∈ {−1, 0, 1}. Each column of this matrix is directly associated with an edge of the underlying graph and each row with a node. To build the matrix, each column must contain exactly one entry 1, one entry −1 and |V| − 2 zero entries. The nonzero elements must be placed in accordance with the incidence of the edges on the nodes, and the graph orientation determines the sign of each entry.
From this matrix, it is possible to derive a reduced matrix by excluding the row representing the node of known potential, usually the ground node. This new matrix is denoted by M̃. If the columns of this matrix are ordered so that components of the same kind are side by side, the submatrices shown in (82) can be identified.

M̃ = [M_s  M_r  M_c  M_l] (82)
Using these submatrices, it is possible to write Kirchhoff's current law for the circuit, as shown in Equation (83). The vectors i_r, i_c, i_l and i_s contain the currents of each of the different types of circuit elements.

M_r i_r + M_c i_c + M_l i_l = M_s i_s (83)

Direct application of the terminal relations of the resistors and capacitors leads to Equation (84).

M_r R^{-1} v_r + M_c C (dv_c/dt) + M_l i_l = M_s i_s (84)
The relation between the voltage across each element and the potentials of the nodes of the circuit is given by Equation (85). The index i is a placeholder and can be replaced by r, c, l or s.

M_i^T v_n = v_i (85)

Using this relation, Equation (84) can be rewritten as in Equation (86).

M_r R^{-1} M_r^T v_n + M_c C M_c^T (dv_n/dt) + M_l i_l = M_s i_s (86)
Writing the inductor terminal voltages and using Equation (85), it is possible to write Equation (87). This allows obtaining a unified equation with the node voltages and the inductor currents as unknowns, presented in Equation (88).

L (di_l/dt) = M_l^T v_n (87)

[M_c C M_c^T  0; 0  −L] (d/dt) [v_n; i_l] = −[M_r R^{-1} M_r^T  M_l; M_l^T  0] [v_n; i_l] + [M_s; 0] i_s (88)
If formulated in this fashion, the right and left Krylov subspaces are equal. The same phenomenon happens with the original and dual systems in the proper orthogonal decomposition. This allows calculating only one subspace while retaining the precision achieved by two subspaces in the general case.
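As an illustration, the blocks of Equation (88) can be assembled for a hypothetical two-node circuit (element values chosen arbitrarily); the assertions check the symmetry that makes a single Krylov subspace sufficient:

```python
import numpy as np

# Hypothetical 2-node example: current source and capacitor C1 at node 1,
# resistor R from node 1 to node 2, capacitor C2 and inductor at node 2.
# Rows: nodes 1 and 2 (ground row dropped); one column per element.
Ms = np.array([[1.0], [0.0]])            # current source into node 1
Mr = np.array([[1.0], [-1.0]])           # resistor, oriented 1 -> 2
Mc = np.array([[1.0, 0.0], [0.0, 1.0]])  # two capacitors to ground
Ml = np.array([[0.0], [1.0]])            # inductor at node 2

R = np.array([[1.0]])                    # 1 ohm
Cap = np.diag([1e-12, 1e-12])            # 1 pF each
L = np.array([[1e-9]])                   # 1 nH

# Blocks of Equation (88).
E = np.block([[Mc @ Cap @ Mc.T, np.zeros((2, 1))],
              [np.zeros((1, 2)), -L]])
A = -np.block([[Mr @ np.linalg.inv(R) @ Mr.T, Ml],
               [Ml.T, np.zeros((1, 1))]])
B = np.vstack([Ms, np.zeros((1, 1))])

assert np.allclose(A, A.T)   # symmetric: one Krylov subspace suffices
assert np.allclose(E, E.T)
```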
4. Application Example
The formulation developed in the last section is used to build an electric circuit as shown in Figure 3. This circuit consists basically of a current source in parallel with a capacitor that feeds a ladder of adjustable size containing a pattern of inductances, resistances and capacitances. The output is an ordinary resistor whose value is chosen independently of the others.
Figure 3 - RLC ladder
For this experiment a pattern as indicated in Table 4 was chosen. The Bode diagram for the
impedance measured at the terminals of the current source is plotted in Figure 4.
Table 4 - Value patterns for the RLC ladder

Parameter            Value
Input capacitor      1 pF
Output resistor      1 Ω
Resistor pattern     1 Ω
Inductor pattern     1 nH, 10 nH, 1 μH
Capacitor pattern    1 pF, 1 nF, 10 nF
Figure 4 โ Bode diagram of the system
The resulting reduced-order model of order 20 is shown for the POD method in Figure 5. The sampling time was one nanosecond. Figure 6 shows the result for the Krylov method with 12 logarithmically distributed expansion points for the same system. Finally, Figure 7 shows the result for the balanced truncation method.
Figure 5 โ POD Approximation
Figure 6 โ Krylov Approximation
Figure 7 โ Balanced Truncation
5. Appendix โ Mathematical development
Gradient property
f = Σ_i u_i v_i (89)

∂f/∂x_j = Σ_i ( (∂u_i/∂x_j) v_i + u_i (∂v_i/∂x_j) ) (90)

∇_x (u^T v) = (∇_x u^T) v + (∇_x v^T) u (91)
First gradient of the trace
tr(V^T K V) = Σ_j Σ_{i,u} V_{ij} K_{iu} V_{uj} (92)

∂ tr(V^T K V)/∂V_{ab} = Σ_j Σ_{i,u} ( (∂V_{ij}/∂V_{ab}) K_{iu} V_{uj} + V_{ij} K_{iu} (∂V_{uj}/∂V_{ab}) ) (93)

∂ tr(V^T K V)/∂V_{ab} = Σ_u K_{au} V_{ub} + Σ_i (K^T)_{ai} V_{ib} (94)

∇_V tr(V^T K V) = K V + K^T V (95)
Second gradient of the trace
tr(A^T V^T V) = Σ_{j,u} A_{ju} Σ_i V_{ij} V_{iu} (96)

∂ tr(A^T V^T V)/∂V_{ab} = Σ_{j,u} A_{ju} Σ_i ( (∂V_{ij}/∂V_{ab}) V_{iu} + V_{ij} (∂V_{iu}/∂V_{ab}) ) (97)

∂ tr(A^T V^T V)/∂V_{ab} = Σ_u V_{au} (A^T)_{ub} + Σ_j V_{aj} A_{jb} (98)

∇_V tr(A^T V^T V) = V A^T + V A (99)
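The two gradient identities (95) and (99) can be checked against central finite differences (sizes and data below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(9)
n, q = 4, 2
K = rng.standard_normal((n, n))
V = rng.standard_normal((n, q))
Amat = rng.standard_normal((q, q))

def num_grad(f, V, h=1e-6):
    # Entry-wise central finite difference of a scalar function of V.
    G = np.zeros_like(V)
    for i in range(V.shape[0]):
        for j in range(V.shape[1]):
            Vp = V.copy(); Vp[i, j] += h
            Vm = V.copy(); Vm[i, j] -= h
            G[i, j] = (f(Vp) - f(Vm)) / (2 * h)
    return G

g1 = num_grad(lambda V: np.trace(V.T @ K @ V), V)
assert np.allclose(g1, K @ V + K.T @ V, atol=1e-5)   # Equation (95)

g2 = num_grad(lambda V: np.trace(Amat.T @ V.T @ V), V)
assert np.allclose(g2, V @ Amat.T + V @ Amat, atol=1e-5)   # Equation (99)
```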