NONLINEAR IDENTIFICATION AND ADAPTIVE CONTROL OF COMBUSTION ENGINES
Rolf Isermann and Norbert Müller
Darmstadt University of Technology, Institute of Automatic Control
Laboratory of Control Systems and Process Automation, Landgraf-Georg-Str. 4, D-64283 Darmstadt, Germany
Tel: +49/6151-162114, Fax: +49/6151-293445, Email: {RIsermann, NMueller}[email protected]
Abstract: Advanced engine control systems require accurate models of the thermodynamic-mechanical process, which are substantially nonlinear and often time-variant. After briefly introducing the identification of nonlinear processes with grid-based look-up tables and a special local linear radial basis function network (LOLIMOT), a comparison is made with regard to computational effort, storage requirements and convergence speed. A new training algorithm for the on-line adaptation of look-up tables is introduced which reduces the convergence time considerably. Application examples and experimental results are shown for a multi-dimensional nonlinear model of the NO_X emissions of a Diesel engine, and for the adaptive feedforward control of the ignition angle of a SI engine. Copyright © 2001 IFAC
Keywords: Engine control, Learning systems, Identification, Neural networks, Feedforward control
1. INTRODUCTION
The last two decades in the automotive industry have seen an ever increasing use of electronics. In the mid-1970s car manufacturers introduced microprocessor-based engine control systems to meet the conflicting state regulations and customer demands of high fuel economy, low emissions, best possible engine performance and ride comfort. Especially the American regulations (Clean Air Act 1963, on-board diagnosis (OBD I 1988, OBD II 1994)) and the corresponding European regulations EURO 1 (1992), EURO 2 (1996) and EURO 3 (2000) had a considerable effect on the development of new engine control strategies. New sensors with electrical output and new actuators had to be developed. Auxiliary units such as electro-mechanically controlled carburetors and low-pressure injection systems for SI engines, and high-pressure injection systems for Diesel engines, have also developed from purely mechanical to electro-mechanical devices with electronic control. New actuators were added, for example for exhaust gas recirculation, camshaft positioning and variable-geometry turbochargers. Today's combustion engines are completely microcomputer controlled, with many actuators (e.g. electrical, electro-mechanical, electro-hydraulic or electro-pneumatic actuators, influencing spark timing, fuel-injector pulse widths and exhaust gas recirculation valves) and several measured output variables (e.g. pressures, temperatures, engine rotational speed, air mass flow, camshaft position, oxygen concentration of the exhaust gas), taking into account different operating phases (e.g. start-up, warming-up, idling, normal operation, overrun, shut-down). The microprocessor-based control has grown into a rather complicated control unit with 50-120 look-up tables, relating about 15 measured inputs to about 30 manipulated variables as outputs. Because many output variables such as torque and emission concentrations are mostly not available as measurements (sensors are too costly or have a short lifetime), the majority of control functions are feedforward.
In the future, new electronically controlled actuators and new sensors will entail additional control functions for new engine technologies (variable turbine geometry (VTG) turbochargers, swirl control, dynamic manifold pressure, variable valve timing (VVT) of inlet valves). Increasing computational capabilities using floating-point processors will allow advanced estimation techniques for non-measurable quantities like engine torque or exhaust gas properties, and precise feedforward and feedback control over large ranges and with small tolerances.
2. CONTROL STRUCTURE FOR INTERNAL COMBUSTION ENGINES: STATE OF THE ART
[Figure: block diagram with feedforward and feedback control acting on the combustion process, supercharging, cooling and catalytic converter; signals include α_ped, n, p2, λ, θ_ign, m_inj, α_thr, U_EGR, U_cam, U_WG, U_man, U_cool, ϑ_H2O, and vehicle emissions]
Fig. 1. Simplified control structure of a SI engine
Modern IC engines contain increasingly more actuating elements. SI engines have, in addition to the classical inputs like injection mass m_inj, ignition angle θ_ign and injection angle θ_inj, a controlled air/fuel ratio, exhaust gas recirculation and variable valve timing, see Fig. 1. Diesel engines were until 1987 manipulated only by injection mass and injection angle. Now they have additional manipulated inputs like exhaust gas recirculation, variable geometry turbochargers, common-rail pressure and modulated injection, see Fig. 2. The engine control system therefore has to be designed for 5-10 main manipulated variables and 5-8 main output variables, leading to a complex nonlinear multiple-input multiple-output system. Because some of the design requirements contradict each other, suitable compromises have to be made. Since the majority of control functions are realized by feedforward structures, precise models are required. Feedback control is used in the case of SI engines, for example, for the λ-control (keeping a stoichiometric air/fuel ratio λ = 1 for the catalyst), for
[Figure: block diagram with feedforward and feedback control acting on the combustion process, supercharging and cooling; signals include α_ped, n, p2, m_inj, θ_inj, U_VTG, U_EGR, U_cool, ϑ_H2O, and vehicle emissions]
Fig. 2. Simplified control structure of a Diesel engine with turbocharger
the electronic throttle control, and for the ignition angle in the case of knock. Diesel engines with turbochargers possess a charging pressure control with waste gate or variable vanes, and speed overrun break-away. Both engine types have idle speed control and coolant water temperature control. Additionally, there are several auxiliary closed-loop controls like fuel pressure control and oil pressure control. All control functions have to be defined for all possible operating phases. Most feedforward control functions are implemented as grid-based two-dimensional (i.e. two input signals) look-up tables or as one-dimensional characteristic curves. This is because of the strongly nonlinear static and dynamic behavior of IC engines and the direct programming and fast processing in microprocessors with fixed-point arithmetic. Some of the functions are based on physical models with correction factors, but many control functions can only be obtained by systematic experiments on dynamometers and with vehicles.
3. IDENTIFICATION OF NONLINEAR SYSTEMS: LOOK-UP TABLES
Due to the extremely nonlinear dynamics of internal combustion engines, and due to the fact that measured data is often the only available information on the connection between cause and effect, engine control units rely heavily upon the calibration of black-box models. In spite of their limitation to problems with one- and two-dimensional input spaces, grid-based look-up tables are by far the most common type of nonlinear static model used in practice. The reason for this lies in their simplicity and their extremely low computational evaluation demand. Modern engine control units of spark ignition engines contain more than 100 look-up tables (Adler, 2000).
In the following, the basics of grid-based look-up tables are given. The off-line estimation of the heights of the interpolation nodes is explained, followed by the presentation of an on-line adaptation algorithm using the NLMS approach and a newly developed, modified RLS algorithm.
3.1 Structure of two-dimensional look-up tables
Grid-based look-up tables can also be interpreted as a 2nd order B-spline network (Brown and Harris, 1994). They consist of a set of data points or interpolation nodes positioned on a multi-dimensional grid. Each node comprises two components: its position on the grid and its height. The scalar data point heights are estimates of the approximated nonlinear function at the corresponding data point positions. All nodes are stored, e.g., in the ROM of the control unit. For model generation, usually all data point positions are fixed a priori.
[Figure: grid of interpolation nodes θ_{k,l} at the grid-line positions g_{1,1} … g_{1,M1} (axis u1) and g_{2,1} … g_{2,M2} (axis u2)]
Fig. 3. Grid-based placement of the interpolation nodes of a two-dimensional look-up table.
In the following we consider a two-dimensional look-up table of size M1 × M2, see Fig. 3. It consists of interpolation nodes located on the grid lines g_{1,1}, …, g_{1,M1} and g_{2,1}, …, g_{2,M2}. The model output ŷ(u1, u2) for a given input (u1, u2) is calculated by considering the closest nodes to the bottom left, bottom right, top left, and top right of the model input, see Fig. 4. The heights of the interpolation nodes θ_{k,l}, θ_{k+1,l}, θ_{k,l+1}, and θ_{k+1,l+1} are weighted with the corresponding opposite areas:

$$\hat{y} = \theta_{k,l}\,\frac{A_{k+1,l+1}}{A} + \theta_{k+1,l}\,\frac{A_{k,l+1}}{A} + \theta_{k,l+1}\,\frac{A_{k+1,l}}{A} + \theta_{k+1,l+1}\,\frac{A_{k,l}}{A} \qquad (1)$$

with the areas

$$\begin{aligned}
A_{k+1,l+1} &= (g_{1,k+1} - u_1)\,(g_{2,l+1} - u_2)\\
A_{k,l+1} &= (u_1 - g_{1,k})\,(g_{2,l+1} - u_2)\\
A_{k+1,l} &= (g_{1,k+1} - u_1)\,(u_2 - g_{2,l})\\
A_{k,l} &= (u_1 - g_{1,k})\,(u_2 - g_{2,l})
\end{aligned}$$

The total area A is calculated as

$$A = (g_{1,k+1} - g_{1,k})\,(g_{2,l+1} - g_{2,l}),$$
[Figure: input point (u1, u2) inside a grid cell, with the nodes θ_{k,l}, θ_{k+1,l}, θ_{k,l+1}, θ_{k+1,l+1} and the four sub-areas A_{k,l}, A_{k+1,l}, A_{k,l+1}, A_{k+1,l+1}]
Fig. 4. Areas for interpolation within a two-dimensional look-up table.
with k and l denoting the closest interpolation node smaller than u1 and u2, respectively. Equation (1) can be written in a basis function formulation

$$\hat{y} = \sum_{i=1}^{M} w_i\,\Phi_i(u, c) \qquad (2)$$

with the M-dimensional weighting vector

$$w = [\theta_{1,1}\; \ldots\; \theta_{M_1,M_2}]^T = [w_1\; w_2\; \ldots\; w_M]^T \qquad (3)$$

which represents the heights of the interpolation nodes. With regard to (1), for each input vector u = [u1 u2]^T in general only 4 basis functions Φ_i are non-zero. The basis functions depend on the positions c of the interpolation nodes and on the input vector u.
3.2 Off-line estimation of the look-up table heights
For generating a look-up table, the most widely applied method to obtain the heights of the interpolation nodes is to position measurement data points directly on the grid points. Then an additional approximation step can be avoided. However, if the available measurement data points do not correspond to the desired grid positions, or if the grid locations have to be changed due to memory restrictions, estimation techniques have to be applied in order to derive the heights of the interpolation nodes. This allows an arbitrary positioning of the grid lines, irrespective of the location of the measurement data points. In the following, an approach for the off-line estimation of the heights of the interpolation nodes is explained.
The optimization of the grid location represents a nonlinear optimization problem and is not addressed in this paper; in principle, any nonlinear optimization technique can be applied (Nelles, 2000).
The off-line estimation of the heights of the interpolation nodes can be performed by the least squares method (Isermann, 1992), in order to fit measurement data that does not lie on the pre-specified grid (Nelles and Fink, 2000). The loss function $\sum_{i=1}^{N} e^2(i)$ is minimized, where $e(i) = y(i) - \hat{y}(i)$ represents the model error for the data sample {u(i), y(i)}. For a given data set $\{u(i), y(i)\}_{i=1}^{N}$, N > M, the regression matrix X is

$$X = \begin{bmatrix}
\Phi_1(u(1), c) & \Phi_2(u(1), c) & \cdots & \Phi_M(u(1), c)\\
\Phi_1(u(2), c) & \Phi_2(u(2), c) & \cdots & \Phi_M(u(2), c)\\
\vdots & \vdots & & \vdots\\
\Phi_1(u(N), c) & \Phi_2(u(N), c) & \cdots & \Phi_M(u(N), c)
\end{bmatrix}$$

where N is the number of data samples of the input data $\{u(i)\}_{i=1}^{N}$, and c contains the coordinates of the interpolation nodes.

The parameter vector w, which contains the heights of the interpolation nodes, see (3), is given as

$$w = (X^T X)^{-1} X^T y, \qquad (4)$$

with $y = [y(1)\; y(2)\; \ldots\; y(N)]^T$.
The regression matrix X is typically sparse. In the two-dimensional case, each row of X contains only 4 non-zero elements. If the measurement data does not cover the whole input space, some basis functions will not be activated at all; X will then contain columns of zeros and may become singular. In order to make the least squares problem solvable, the corresponding regressors and their associated heights have to be removed from X and w (Nelles, 2000).
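The off-line estimation (4), including the removal of never-activated columns, can be sketched as follows. This is an illustrative sketch with our own helper names and an assumed column ordering of w; it is not the authors' code.

```python
import numpy as np

def basis_row(u1, u2, g1, g2):
    """Row of the regression matrix X: the 4 non-zero bilinear basis
    functions Phi_i for input (u1, u2). Illustrative helper."""
    M1, M2 = len(g1), len(g2)
    k = min(max(np.searchsorted(g1, u1, side="right") - 1, 0), M1 - 2)
    l = min(max(np.searchsorted(g2, u2, side="right") - 1, 0), M2 - 2)
    A = (g1[k+1] - g1[k]) * (g2[l+1] - g2[l])
    row = np.zeros(M1 * M2)
    # weights are the opposite areas normalized by A, cf. eq. (1);
    # node (k, l) is mapped to column k + l*M1 (our convention)
    row[k   + l     * M1] = (g1[k+1] - u1) * (g2[l+1] - u2) / A
    row[k+1 + l     * M1] = (u1 - g1[k])   * (g2[l+1] - u2) / A
    row[k   + (l+1) * M1] = (g1[k+1] - u1) * (u2 - g2[l])   / A
    row[k+1 + (l+1) * M1] = (u1 - g1[k])   * (u2 - g2[l])   / A
    return row

def estimate_heights(U, y, g1, g2):
    """Least-squares estimate (4); zero columns of X are dropped first."""
    X = np.array([basis_row(u1, u2, g1, g2) for u1, u2 in U])
    active = np.any(X != 0.0, axis=0)          # never-activated regressors
    w = np.zeros(X.shape[1])
    w[active] = np.linalg.lstsq(X[:, active], y, rcond=None)[0]
    return w
```

Heights belonging to removed columns stay at zero here; in practice they would simply be flagged as not estimated.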
3.3 On-line adaptation of look-up table heights using the NLMS algorithm
On-line, or instantaneous, learning means that information about the desired input/output behavior is given only by isolated samples (u(t), y(t)). On-line adaptation is of special importance for time-variant systems, e.g. due to changing ambient conditions or aging. The adaptation method with the lowest computational requirements is the normalized least mean squares (NLMS) algorithm (Brown and Harris, 1994). It can be applied as follows for the adaptation of the heights of the interpolation nodes:

$$w_i(t+1) = w_i(t) + \eta \, e(t)\,\frac{\Phi_i(u(t), c)}{\sum_{j=1}^{M} \Phi_j^2(u(t), c)},$$

where e(t) denotes the error between the correct value y(t) and the old network output ŷ(u(t)). The learning rate η must be within the range 0 < η < 2; appropriate values vary between 1 for fast learning and η ≪ 1 for robustness against measurement noise. In order to further improve robustness against noise and to avoid over-fitting, a dead zone with a minimal error value for parameter adaptation can be set (Heiss, 1996).
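One NLMS step, including the optional dead zone, can be sketched as below. A minimal sketch with our own names; the dead-zone threshold is a free design parameter.

```python
import numpy as np

def nlms_update(w, phi, y, eta=0.5, dead_zone=0.0):
    """NLMS adaptation of the look-up table heights w.

    phi : basis-function vector Phi(u(t), c); only 4 entries are
          non-zero in the two-dimensional case
    y   : measured output at time t
    """
    e = y - w @ phi                 # model error e(t)
    if abs(e) <= dead_zone:         # dead zone against noise (Heiss, 1996)
        return w
    return w + eta * e * phi / (phi @ phi)
```

With η = 1 and a single active basis function the update jumps straight to the measured value; smaller η averages out measurement noise.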
[Flow chart: initialize w, v, θ̃, P → collect measurement data (u1, u2, y) → determine surrounding nodes → if nodes unchanged: RLS update; if nodes changed: update and save w and v, re-initialize θ̃ and P]
Fig. 5. Flow chart of the modified RLS algorithm
3.4 On-line adaptation of look-up table heights using the RLS algorithm
Compared to the usually used NLMS algorithm, the convergence time during on-line adaptation of the heights of the interpolation nodes can be considerably reduced by applying the recursive least squares (RLS) algorithm (Isermann, 1992). The look-up table consists of M1 × M2 interpolation nodes; however, numerical stability can only be obtained if the problem is reduced to the estimation of fewer parameters. The algorithm proposed in the following estimates only the heights of the interpolation nodes surrounding a query point, i.e. instead of the M parameters in w only 4 parameters θ̃ are estimated at each time instant. During adaptation, the algorithm requires for each height w_i the storage of a variance value σ_i². Without prior knowledge the variances $v = [\sigma_1^2 \ldots \sigma_M^2]^T$ can be initialized as σ_i² = 100…1000.

We consider the estimation of the parameter vector

$$\tilde{\theta} = [\theta_{k,l}\; \theta_{k+1,l}\; \theta_{k,l+1}\; \theta_{k+1,l+1}]^T = [w_j\; \ldots\; w_{j+3}]^T,$$

see Fig. 4. The matrix P is initialized as

$$P = \begin{pmatrix}
\sigma_j^2 & 0 & 0 & 0\\
0 & \sigma_{j+1}^2 & 0 & 0\\
0 & 0 & \sigma_{j+2}^2 & 0\\
0 & 0 & 0 & \sigma_{j+3}^2
\end{pmatrix}; \qquad (5)$$
standard RLS estimation can then be applied:

$$\gamma(t+1) = \frac{P(t)\,\psi(t+1)}{\lambda + \psi^T(t+1)\,P(t)\,\psi(t+1)}$$

$$\tilde{\theta}(t+1) = \tilde{\theta}(t) + \gamma(t+1)\left(y(t+1) - \psi^T(t+1)\,\tilde{\theta}(t)\right)$$

$$P(t+1) = \frac{1}{\lambda}\left(I - \gamma(t+1)\,\psi^T(t+1)\right)P(t)$$

with

$$\psi(t) = [\Phi_j(u(t), c)\; \ldots\; \Phi_{j+3}(u(t), c)]^T.$$
λ is a forgetting factor that allows exponential forgetting of the old measurements and thereby a trade-off between tracking performance and noise suppression. The Φ_j are the weighting functions as in (2).
Before each RLS update, the algorithm verifies that the surrounding interpolation nodes did not change as a result of a changing operating condition. If the surrounding interpolation nodes have changed since the last update, the diagonal elements of P are retrieved and saved in the variance vector v; P is re-initialized as in (5), with the variances of the new interpolation nodes used as the new diagonal elements. Fig. 5 shows the flow chart of the proposed RLS algorithm for updating look-up tables.
In order to avoid deterioration of the already estimated model parameters, a supervisory level freezes the identification algorithm if the excitation is insufficient, e.g. if the deviations between model output and desired output are only incremental.
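The scheme of Section 3.4, estimating only the 4 surrounding heights and swapping their variances in and out of P whenever the operating point moves to a new grid cell, can be sketched as follows. Class and variable names are ours; this is an illustrative sketch, not the authors' implementation.

```python
import numpy as np

class LocalRLSTable:
    """RLS adaptation of a look-up table: only the heights of the 4
    surrounding interpolation nodes are estimated at each step."""

    def __init__(self, n_nodes, var0=500.0, lam=0.98):
        self.w = np.zeros(n_nodes)          # node heights
        self.v = np.full(n_nodes, var0)     # stored variances, sigma_i^2
        self.lam = lam                      # forgetting factor lambda
        self.idx = None                     # current surrounding nodes
        self.P = None

    def update(self, idx, psi, y):
        """idx: 4 surrounding-node indices; psi: their basis weights."""
        if self.idx is None or list(idx) != list(self.idx):
            if self.idx is not None:        # save variances of old nodes
                self.v[self.idx] = np.diag(self.P)
            self.idx = list(idx)
            self.P = np.diag(self.v[idx])   # re-initialize P as in (5)
        theta = self.w[idx]
        # standard RLS step on the 4 local parameters
        gamma = self.P @ psi / (self.lam + psi @ self.P @ psi)
        theta = theta + gamma * (y - psi @ theta)
        self.P = (np.eye(4) - np.outer(gamma, psi)) @ self.P / self.lam
        self.w[idx] = theta
```

A supervisory check (freezing the update on insufficient excitation) would wrap the call to `update`.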
4. FAST LOCAL LINEAR NEURAL NETWORKS
Look-up tables are restricted to one- and two-dimensional applications, since memory and computational requirements grow exponentially with the number of input signals. For high-dimensional problems, neural networks have been applied successfully. Well-known neural networks for the identification of nonlinear processes are multilayer perceptrons (MLP) and radial basis function (RBF) networks (Isermann et al., 1997). In the following, local linear model networks are considered, which have several advantages for engine measurements. The local linear model tree (LOLIMOT) is an extended radial basis function network: the output layer weights are replaced by a linear function of the network inputs (Nelles, 1999). Thus, each neuron represents a local linear model with its corresponding validity function. Furthermore, the radial basis function network is normalized, i.e. the validity functions for a specific input combination sum up to one. The Gaussian validity functions determine the regions of the input space where each neuron is active. The input space of the net is divided into M hyper-rectangles, each represented by a linear function.
The output ŷ of a LOLIMOT network with n inputs x1, …, xn is calculated by summing up the contributions of all M local linear models

$$\hat{y} = \sum_{i=1}^{M} (w_{i0} + w_{i1}x_1 + \ldots + w_{in}x_n)\,\Phi_i(x) \qquad (6)$$

Fig. 6. First four iterations of the LOLIMOT training algorithm

where the w_ij are the parameters of the i-th linear regression model and the x_i are the model inputs. The validity functions Φ_i are typically chosen as normalized Gaussian weighting functions

$$\Phi_i(x) = \frac{\mu_i}{\sum_{j=1}^{M} \mu_j} \qquad (7)$$

with

$$\mu_i = \exp\!\left(-\frac{1}{2}\,\frac{(x_1 - c_{i1})^2}{\sigma_{i1}^2} - \ldots - \frac{1}{2}\,\frac{(x_n - c_{in})^2}{\sigma_{in}^2}\right)$$

with center coordinates c_ij and dimension-individual standard deviations σ_ij. LOLIMOT is trained as follows. In an outer loop the network structure is optimized. It is determined by the number of neurons and the partitioning of the input space, which is defined by the centers and standard deviations of the validity functions. An inner loop estimates the parameters and possibly the structure of the local linear models. The network structure is optimized by a tree construction algorithm that determines the centers and standard deviations of the validity functions (Fig. 6).
The LOLIMOT algorithm partitions the input space into hyper-rectangles. In the center of each hyper-rectangle, the validity function of the corresponding local linear model is placed. The standard deviations are chosen proportional to the size of the hyper-rectangle. This makes the size of the validity region of a local linear model proportional to its hyper-rectangle extension. Thus, a model may be valid over a wide operating range of one input variable but only in a small area of another one. At each iteration of the outer loop, one additional neuron is placed in a new hyper-rectangle, which is derived from the local linear model with the worst local error measure

$$J_{local} = \sum_{j=1}^{N} \Phi_i(x(j))\,\big(y(j) - \hat{y}(j)\big)^2.$$

In other words, the local error is the sum of the squared errors, weighted with the corresponding validity function Φ_i, over all N data samples. The new hyper-rectangle is
Fig. 7. LOLIMOT net with external dynamics (LLM: local linear models). The filters are chosen as simple time delays q⁻¹.
then chosen by testing the possible cuts in all dimensions and taking the one with the highest performance improvement.
In an inner loop, the parameters of the local linear models are estimated by a local weighted least squares technique. The prediction errors are weighted with the corresponding validity function. Each local linear model is estimated separately, i.e. the overlap between neighboring local models is neglected. This approach is very fast and robust. It is especially important to note that, due to the local estimation approach, the computational demand increases only linearly with the number of local models.
In addition to approximating stationary relations of nonlinear processes, LOLIMOT is also capable of simulating the dynamic behavior of processes. In order to model multivariate, nonlinear, dynamic processes, the following time-delay neural network approach can be taken:
$$\hat{y}(t) = f(x(t))$$

$$x(t) = [u_1(t-1), \ldots, u_1(t-m), \ldots, u_p(t-1), \ldots, u_p(t-m),\; y(t-1), \ldots, y(t-m)]^T$$

where m is the dynamic order of the model, u1(t), …, up(t) are the p model inputs and y(t) is the model output at time t, see Fig. 7.
Some of the main advantages of the LOLIMOT approach are its fast training time of some 10-30 s, compared to many minutes to hours with other neural networks, its applicability to adaptive problems (Fischer et al., 1998), and the interpretability of the net structure and parameters in a physical sense.
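The forward pass (6)-(7) of a trained LOLIMOT network is a short computation. The sketch below evaluates the normalized Gaussian validity functions and the weighted sum of local linear models; the array names are ours, not from the paper.

```python
import numpy as np

def lolimot_output(x, W, C, S):
    """Output (6) of a LOLIMOT network with M local linear models.

    x : input vector, shape (n,)
    W : local model parameters, shape (M, n+1); W[i] = [w_i0, w_i1..w_in]
    C : centers c_ij, shape (M, n)
    S : standard deviations sigma_ij, shape (M, n)
    """
    # Gaussian membership mu_i and normalized validity Phi_i, eq. (7)
    mu = np.exp(-0.5 * np.sum(((x - C) / S) ** 2, axis=1))
    phi = mu / np.sum(mu)
    # local linear models y_i = w_i0 + w_i1 x_1 + ... + w_in x_n
    y_loc = W[:, 0] + W[:, 1:] @ x
    return phi @ y_loc
```

With M = 1 the normalization makes Φ₁ = 1 everywhere, so the net degenerates to a single global linear model, which is a convenient sanity check.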
4.1 On-line Adaptation of the LOLIMOT Parameters
For on-line adaptation of the LOLIMOT network, the validity functions Φ_i are kept fixed and only the parameters of the local linear models are adapted.
[Figure: surface plot of output y over inputs u1 and u2, both in the range [0, 1]]
Fig. 8. Nonlinear test function with two input signals.
The following recursive weighted least squares (RWLS) algorithm with exponential forgetting is utilized to estimate the parameters of each local linear model. For the j-th local linear model, it computes a new parameter estimate w_j(t) at the time instant t as follows, see (Isermann et al., 1992):

$$\gamma(t+1) = \frac{P_j(t)\,\psi_j(t+1)}{\dfrac{\lambda}{\Phi_j(x(t+1))} + \psi_j^T(t+1)\,P_j(t)\,\psi_j(t+1)}$$

$$w_j(t+1) = w_j(t) + \gamma(t+1)\left(y(t+1) - \psi_j^T(t+1)\,w_j(t)\right)$$

$$P_j(t+1) = \frac{1}{\lambda}\left(I - \gamma(t+1)\,\psi_j^T(t+1)\right)P_j(t)$$

λ is a forgetting factor that implements exponential forgetting of the old measurements, and Φ_j is the weighting of the actual data with the validity function value. The role of the forgetting factor, as well as supervisory tasks for the on-line adaptation, are discussed in (Fink et al., 1999).
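One RWLS step can be sketched as follows. This is a minimal sketch under the assumption that the validity value Φ_j enters the gain denominator as λ/Φ_j; the function and variable names are ours.

```python
import numpy as np

def rwls_step(w, P, psi, y, phi, lam=0.98):
    """One weighted-RLS step for the j-th local linear model.

    psi : regressor [1, x_1, ..., x_n] of the local model
    phi : validity value Phi_j(x) weighting the current sample
    lam : forgetting factor lambda
    """
    gamma = P @ psi / (lam / max(phi, 1e-12) + psi @ P @ psi)
    w_new = w + gamma * (y - psi @ w)
    P_new = (np.eye(len(w)) - np.outer(gamma, psi)) @ P / lam
    return w_new, P_new
```

Samples far from the model's validity region (small Φ_j) inflate the denominator and thus barely move that model's parameters, which is exactly the local-estimation behavior described above.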
5. COMPARISON OF LOOK-UP TABLE AND LOCAL LINEAR NEURAL NETWORK FOR THE TWO-DIMENSIONAL CASE
For the comparison of the on-line adaptation algorithms of the look-up table and the local linear neural network, a nonlinear test function with two input signals was considered, see Fig. 8. The testing structure shown in Fig. 9 was used to train the look-up table using the NLMS and RLS adaptation algorithms, and the LOLIMOT neural network using the RWLS adaptation algorithm. u1 and u2 were random numbers within [0, 1]; before training, the network parameters of the look-up table and the LOLIMOT neural network were initialized to zero.
Fig. 10 shows the comparison of memory requirements and computation time, as well as the time of convergence for the on-line training. In the first and second diagrams,
[Figure: block diagram in which the inputs u1, u2 feed both the nonlinear test function and the on-line adapted look-up table / neural network; the outputs y and ŷ are compared to form the error e]
Fig. 9. Testing structure used for on-line adaptation of look-up tables and neural networks.
values in black refer to generalization, values in white refer to the case that adaptation is also activated. All data is normalized to the look-up table values without adaptation. The first column shows the values of the look-up table using the NLMS adaptation algorithm. Memory and computational requirements are small; however, the time of convergence is large. The additional memory and computational requirements of the NLMS adaptation are fractional and not visible in the diagram. If the look-up table is adapted using the proposed RLS algorithm, the memory expense doubles, since in addition to the heights their variances have to be stored. However, the time of convergence can be considerably reduced. Memory requirements are large for the LOLIMOT neural network, as are the computational requirements, which is due to the evaluation of the exponential functions. However, due to the fast RWLS algorithm, the time of convergence during adaptation is very short.
For the two-dimensional case the conventional look-up table outperforms the LOLIMOT neural network; if fast convergence is required, the application of the RLS adaptation is advantageous.
[Bar charts comparing look-up table (NLMS), look-up table (RLS) and LOLIMOT: relative memory requirements, relative computation time, relative convergence time]
Fig. 10. Comparison of memory and computational requirements, as well as time of convergence, for the on-line training of a two-dimensional look-up table and a two-dimensional neural network.
[Training data plots over time: engine speed n_eng [rpm], injected fuel mass m_f [mg], start of injection Θ_si [°CA], and measured NO_X [ppm]]
Fig. 11. Training data set for a dynamic NO_X model
Notice, however, that for the high-dimensional case the memory and computational requirements of the look-up table grow exponentially with the number of input signals, which is not the case for the LOLIMOT algorithm (Nelles, 1999). Therefore, in the high-dimensional case the LOLIMOT algorithm would be the better choice.
6. NONLINEAR IDENTIFICATION OF EXHAUST GAS COMPONENTS
Theoretical models of exhaust gas concentrations are too complex and labor-intensive for control design. The reason is that complex thermodynamic and chemical equations, and side-effects like swirl, tumble, quenching and local temperatures, have to be taken into account (Heywood, 1988). Therefore, exhaust gas models are mostly experimental.
In contrast to two-dimensional look-up tables, neural nets can easily be applied to higher-dimensional problems in order to represent all relevant variables influencing the emissions. Depending on the complexity of the engine, more than 5-7 inputs might be necessary to consider the most relevant variables concerning the exhaust gas formation. Among these are, e.g., the engine speed, load, injection angle and pressure, EGR, and the position of the guide blades of turbochargers with variable turbine geometry (VTG). Another advantage of dynamic neural nets is the ability to measure the engine's behavior dynamically and to calculate its static maps by means of the dynamic neural network. Consequently, there is no need to measure the whole look-up table using measurement points on equidistant grids, which helps to save expensive test stand time. As an example for engine emission models, a dynamic NO_X model of a 1.9 l direct injection Diesel engine is described. When recording the training data for dynamic models, one should always keep in mind that the measured data has to contain the whole range of amplitudes and dynamics of the process in
[Generalization data plots over time: engine speed n_eng [rpm], injected fuel mass m_f [mg], start of injection Θ_si [°CA], and NO_X [ppm], measured and simulated]
Fig. 12. Generalization of the NO_X model trained with the data in Fig. 11
order to get satisfying results. APRBS signals (amplitude-modulated pseudo-random binary signals) are often very suitable as process inputs because they excite the process over a wide range of amplitudes and frequencies. In this example, step responses of differing amplitudes have proven to be suitable for the training of the NO_X models. Fig. 11 shows the dynamically measured NO_X and the three net inputs m_f, n_eng and Θ_si of the training data set (EGR and VTG were disabled during this test).
After building the neural model according to (8), the model can be applied to a new (unknown) data set and simulates the NO_X according to the respective input data (on-line in the vehicle, if necessary).
$$\widehat{NO}_X(k) = f_{LOLIMOT}\big(m_f(k-1),\; n_{eng}(k-1),\; \Theta_{si}(k-1),\; \widehat{NO}_X(k-1)\big) \qquad (8)$$
Fig. 12 illustrates the good generalization performance of a first-order dynamic net with 15 neurons for the NO_X emissions. Although the excitations of the net inputs
Fig. 13. NO_X map, calculated from the dynamic neural net
in Figs. 11 and 12 were of quite different quality, the simulated NO_X is able to follow the measured values with an error of just a few percent, which is quite satisfactory.
Another important feature of these dynamic neural models is the possibility of deriving stationary maps from dynamic models. The data in Fig. 11 led to a dynamic model according to (8). This model was then used to calculate the stationary two-dimensional look-up table shown in Fig. 13. Thus, it is possible to get a good visual overview of the engine's characteristics from maps derived from dynamic measurements. Even more importantly, this feature allows a fast dynamic measurement of the engine's characteristics and a calculation of the static engine maps by means of the dynamic neural model.
The obtained dynamic model for the NO_X emissions can be used for on-line optimization tasks in modern engine control units (Hafner et al., 2000). If sensors for the NO_X emissions become available, as they are used in the latest direct injection gasoline engines (Glück et al., 2000), the RWLS adaptation method, see Section 4.1, could be used for on-line adaptation. This could be used to compensate for varying fuel qualities or manufacturing tolerances.
7. ADAPTIVE FEEDFORWARD CONTROL OF IGNITION ANGLE
Cylinder pressure signals contain valuable information for closed-loop engine control. To use this information, low-cost combustion pressure sensors with high long-term stability have been developed (Anastasia and Pestana, 1987; Herden and Küsell, 1994) and are already installed in certain production engines (Inoue et al., 1993).
The objective of ignition control is to achieve optimum engine efficiency for each combustion event. Factors that influence the optimal ignition angle are, in general, engine specifications like the configuration of the combustion chamber; operating conditions like engine speed, load, temperature and exhaust gas recirculation flow rate; as well as ambient conditions such as air temperature, air pressure and humidity.
Standard ignition control systems are based on feedforward control and therefore rely heavily upon the calibration of look-up tables. The database values are initially calibrated from an analysis of a nominal engine under fixed environmental conditions. However, aging effects, manufacturing tolerances or a changing environment usually change the engine characteristics and lead to deteriorating performance.
Based on cylinder pressure sensors, Learning Feedforward Control (LFFC) can be used in order to optimize the point of ignition of each cylinder. The LFFC contains a function approximator, here represented by a two-dimensional look-up table. The adaptive look-up table is trained on-line using the RLS training algorithm. The learning process is performed during normal operation of the engine, without special excitation signals; the learning system simply collects and processes input-output samples at the individual operating points.

[Figure: p-V diagram of the ideal cycle with states 1-4, volumes V23 and V14, pressures p2 and p3, and the released energy QB]
Fig. 14. Pressure-volume diagram of an ideal constant-volume combustion cycle
7.1 Cylinder Pressure Evaluation
Not only thermodynamic analysis but also extensive experimental results suggest that the optimum efficiency of each combustion event is achieved if 50 % of the energy conversion occurs at 8° crankshaft angle (CA) after Top Dead Center (TDC), irrespective of the operating point (Bargende, 1995; Matekunas, 1986). Therefore, the crank angle location of 50 % energy conversion is to be calculated and controlled by modifying the point of ignition accordingly.
The energy conversion during a combustion cycle can be described by the mass fraction burned (MFB) x_MFB(φ) of the cylinder charge at a specific crank angle φ. The MFB, in turn, can be approximated by

$$x_{MFB}(\varphi) = \frac{p(\varphi)\,V(\varphi)^{\kappa} - p_i\,V_i^{\kappa}}{p_{eoc}\,V_{eoc}^{\kappa} - p_i\,V_i^{\kappa}} \qquad (9)$$

where p(φ) and V(φ) denote the measured cylinder pressure and the cylinder volume at a certain crank angle, respectively. The indices eoc and i denote specific crank angle locations at the end of combustion and before ignition, respectively; κ stands for the polytropic index.
This approximation can be derived by considering the pressure-volume diagram of an ideal constant-volume combustion cycle, see Fig. 14. For the compression stroke (1 → 2), as well as for the power stroke (3 → 4), polytropic behavior can be assumed, i.e.

$$p(\varphi)\,V(\varphi)^{\kappa} = C_1 \qquad (10)$$
Fig. 15. Approximation of MFB in comparison to thermodynamic analysis. The location of 50% MFB is to be controlled by appropriate setting of the point of ignition.
for the compression stroke and
$$p(\theta) \cdot V(\theta)^{\kappa} = C_2 \qquad (11)$$
for the power stroke. Therefore the amount of energy $Q_B$ released between ignition (point 2) and the end of combustion (point 3) is
$$Q_B \sim C_2 - C_1 = p_{eoc}\,V_{eoc}^{\kappa} - p_i\,V_i^{\kappa}. \qquad (12)$$
During combustion the energy released is proportional to $p(\theta)\,V(\theta)^{\kappa}$, normalized to the values between end of combustion and ignition, which leads to equation (9).
In Fig. 15, for an arbitrary cycle, the approximation of MFB according to (9) is compared to the MFB calculated by thermodynamic analysis of the cylinder pressure. In the presented control system the crank angle location of 50% MFB is calculated by means of the approximation described above.
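The MFB approximation of equation (9) and the extraction of the 50% location can be sketched as follows. The function names, the way the 0.5 crossing is refined by linear interpolation, and any numeric value of the polytropic index are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def mass_fraction_burned(p, V, i_ign, i_eoc, kappa):
    """Approximate MFB per Eq. (9): p*V^kappa normalized between its
    values before ignition (index i_ign) and at end of combustion
    (index i_eoc).  p and V are sampled over crank angle."""
    pv = p * V ** kappa
    return (pv - pv[i_ign]) / (pv[i_eoc] - pv[i_ign])

def crank_angle_of_50_percent_mfb(theta, x_mfb):
    """Crank angle where the MFB trace first crosses 0.5, refined by
    linear interpolation between the two neighbouring samples."""
    j = int(np.argmax(x_mfb >= 0.5))
    f = (0.5 - x_mfb[j - 1]) / (x_mfb[j] - x_mfb[j - 1])
    return float(theta[j - 1] + f * (theta[j] - theta[j - 1]))
```

The division by the end-of-combustion difference makes the trace run from 0 before ignition to 1 at the end of combustion, mirroring the normalization in (9).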
7.2 Control Structure
Closed-loop ignition control with, for example, standard PI controllers cannot provide acceptable control performance under fast-changing operating conditions, since the controllers cannot be tuned for high control effort. This is due to the fact that, after ignition of the air-fuel mixture, the combustion cannot be influenced any further. Therefore the ignition angle can only be computed for the next cycle, based on measurements from the present engine cycle, so a dead time of one cycle is inherent. Moreover, as there exist significant cycle-to-cycle fluctuations even under steady operating conditions, the results of the cylinder pressure evaluation
Fig. 16. Upper diagram: measured cylinder pressure signals and reconstructed polytrope; lower diagram: calculated crank angle locations of 50% MFB. n_mot = 3000 rpm, 50% load
of several engine cycles have to be averaged (e.g. a moving average over 10 cycles is used). Because the system error usually differs from one engine operating condition to another, during engine transients it takes a certain amount of time for the PI controller to "integrate" to a new ignition angle. The stochastic nature of the combustion events can be seen in Fig. 16. The upper diagram shows the measured cylinder pressure signals of the compression and power strokes of 100 consecutive cycles of an arbitrary cylinder. The dotted lines represent the reconstructed polytrope, i.e. the component of the cylinder pressure that is due to the piston motion (towed or motored pressure). The lower diagram depicts the corresponding crank angle locations of 50% MFB, which are to be controlled to 8° CA. The stochastic nature of the combustion events is clearly visible.
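The cycle averaging mentioned above can be sketched as a simple moving average over the last n cycle results; the class name is an illustrative assumption, while n = 10 follows the value given in the text.

```python
from collections import deque

class CycleAverager:
    """Moving average of the 50% MFB location over the last n engine
    cycles, smoothing cycle-to-cycle combustion fluctuations."""

    def __init__(self, n=10):
        self.buf = deque(maxlen=n)  # oldest cycle drops out automatically

    def update(self, theta_50):
        """Add one cycle's 50% MFB location, return the current mean."""
        self.buf.append(theta_50)
        return sum(self.buf) / len(self.buf)
```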
This motivates the use of Learning Feedforward Control (LFFC), whose general structure is shown in Fig. 17. The linear controller C is used to compensate random disturbances and to provide a reference signal for the learning feedforward controller. It does not need to have high performance and can be designed such that it provides robust stability. Since the learning feedforward controller acts instead of a controller's integral term, the linear controller is preferably a simple proportional gain, which can be deactivated for noise suppression.
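The interplay of the linear controller C and the feedforward map in one sampling step can be sketched as follows. Taking C as a pure proportional gain follows the text; the gain value and signal names are illustrative assumptions.

```python
def lffc_step(r, y, u_map, kp=0.5):
    """One sampling step of an LFFC structure as in Fig. 17 (sketch).

    The linear controller C (proportional gain kp) compensates the
    residual error; its output u_lin also serves as the teaching
    signal for adapting the feedforward map, so that u_lin tends to
    zero once the map has learned the operating point.
    """
    e = r - y
    u_lin = kp * e        # output of the linear controller C
    u = u_map + u_lin     # total actuation applied to the process
    return u, u_lin
```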
For the ignition control system the function approximator is divided into the conventional, fixed ignition look-up table and the adaptive offset map, see Fig. 18. The operating condition is determined by the engine load (a normalized value calculated from the intake manifold pressure signal) and the engine's rotational speed.
Fig. 17. General structure of a LFFC (Learning Feedforward Controller). The controller C is basically used for the adaptation of the feedforward map.
Fig. 18. Structure of the adaptive feedforward ignition control system
The conventional look-up table determines an approximate value of the ignition angle and is valid for all cylinders; it is depicted in Fig. 20. For each cylinder the location of 50% MFB is calculated and compared to the reference location of 8° CA. Correction values are calculated and memorized by the adaptive offset look-up table of each cylinder at the corresponding operating condition. In order to reduce noise in the controller output signal (the ignition angle), no linear proportional controller C is used in parallel to the function approximator, cf. Fig. 17.
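A cylinder-individual adaptive offset map over (engine speed, load) can be sketched as below: bilinear interpolation between grid nodes, with each ignition-angle correction distributed to the four surrounding nodes according to their interpolation weights. Grids, the adaptation gain and all names are illustrative assumptions, not the authors' calibration.

```python
import numpy as np

class AdaptiveOffsetMap:
    """2-D adaptive offset map added to a fixed ignition map
    (structure as in Fig. 18); illustrative sketch."""

    def __init__(self, speed_grid, load_grid, gain=0.2):
        self.sg = np.asarray(speed_grid, dtype=float)
        self.lg = np.asarray(load_grid, dtype=float)
        self.h = np.zeros((len(self.sg), len(self.lg)))  # node offsets
        self.gain = gain

    def _cell(self, n, l):
        # Grid cell containing (n, l) and its 2x2 bilinear weights.
        i = int(np.clip(np.searchsorted(self.sg, n) - 1, 0, len(self.sg) - 2))
        j = int(np.clip(np.searchsorted(self.lg, l) - 1, 0, len(self.lg) - 2))
        a = (n - self.sg[i]) / (self.sg[i + 1] - self.sg[i])
        b = (l - self.lg[j]) / (self.lg[j + 1] - self.lg[j])
        w = np.array([[(1 - a) * (1 - b), (1 - a) * b],
                      [a * (1 - b), a * b]])
        return i, j, w

    def offset(self, n, l):
        """Interpolated ignition-angle offset at the operating point."""
        i, j, w = self._cell(n, l)
        return float(np.sum(w * self.h[i:i + 2, j:j + 2]))

    def correct(self, n, l, error):
        """Distribute a correction to the 4 surrounding nodes."""
        i, j, w = self._cell(n, l)
        self.h[i:i + 2, j:j + 2] += self.gain * error * w
```

Because only the four nodes around the current operating point are modified, corrections learned at one operating condition do not disturb distant regions of the map.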
7.3 Measurement Results
In Fig. 19, at a constant engine speed of 2500 rpm, a certain load change sequence is repeated three times, see the third diagram. The first diagram shows the crank angle locations of 50% MFB for two cylinders, which are to be controlled to 8° CA after TDC. The second diagram depicts the correction values of the adaptive offset maps for the ignition points of the corresponding cylinders. For the first 50 sec only the conventional feedforward control is active. The considerable deviations of the crank angle locations of 50% MFB from the optimal value for best engine efficiency at 8° CA after TDC are visible, see the upper diagram. Between 50 and 168 sec the adaptive feedforward controller is active. Since the adaptive input-output map of each cylinder had been initialized to
Fig. 19. Control performance of the ignition control system during three equivalent load changes. The adaptive look-up table is initialized with zero; after 50 sec the adaptive feedforward control is activated.
zero, it first has to be adapted for both operating conditions. The control performance of the already adapted feedforward control is shown between 130 and 160 sec. Then it is switched back to the conventional, fixed feedforward control. Fig. 20 shows the conventional, fixed ignition angle look-up table (a) and the adaptive offset map of the second cylinder after a training sequence (b).
In spite of the considerable stochastic nature of the combustion events, the adaptive feedforward control is able to maintain the mean crank angle location of 50% MFB around its optimal value of about 8° CA after TDC.
8. REAL-TIME COMPUTER SYSTEM AND EXPERIMENT CONTROL
As a matter of course, the presented cylinder pressure evaluation and the adaptive control algorithms require considerable computational effort. State-of-the-art engine
Fig. 20. Conventional ignition map (a) and adaptive offset map of the second cylinder after a training sequence (b).
Fig. 21. Rapid control prototyping system configuration
control units (ECUs) would not be able to compute the algorithms in an appropriate time. Assuming that the growth in computing power continues at the high pace of the past, future ECUs should make sufficient computation time available within only a few years.
However, special real-time computer systems already allow implementation and testing of the proposed algorithms in vehicles or on engine test stands. In order to operate the control algorithms under realistic conditions, an indicating system was implemented on the basis of a PowerPC (Motorola MPC 750, 480 MHz) and coupled with a rapid control prototyping (RCP) system of the company dSPACE, which is also based on a PowerPC. Both systems operate in parallel to the production car's ECU. The goal of the RCP system is to enable very fast and easy implementation and testing of new control concepts on real-time hardware. The user can code newly developed algorithms from block diagrams (e.g. MATLAB/Simulink) and download the code to the real-time hardware by means of automatic code generation software. A complete design iteration can thus be accomplished within a few minutes (Hanselmann, 1996).
The indicating system evaluates the cylinder pressure in real time with a resolution of 1° crankshaft angle for four cylinders. Thermodynamic and signal-based characteristic cylinder pressure features are transmitted to the RCP system. Based on this information, appropriate correction values for the injection duration, the ignition angle and the exhaust gas recirculation rate are calculated and sent to the production car's ECU via a CAN bus interface, see Fig. 21. The described real-time hardware enables very fast and easy implementation and testing of complex algorithms, including extensive preprocessing of the cylinder pressure data, even under the harsh real-time conditions of combustion engines. Fig. 22 shows the dynamometer, consisting of a 160 kW asynchronous motor coupled to the combustion engine.
Fig. 22. Asynchronous motor coupled to the combustion engine
9. CONCLUSIONS
The demands for improved engine emissions, performance and efficiency require the application of advanced identification and control strategies. The proposed adaptation algorithms for grid-based look-up tables allow the online training of the heights of the interpolation nodes with improved convergence time. Fast neural networks with local linear models can approximate multi-dimensional problems within small training times of mostly less than a minute. The latter has been shown for the dynamic modelling of the NOx emissions of a direct injection Diesel engine.
Recent computational capacities and sensor technologies allow the online evaluation of cylinder pressure signals for ignition control. Using cylinder-selective adaptive feedforward controllers, the 50% point of energy conversion can be kept near the optimal point. The proposed control system is capable of compensating for manufacturing tolerances, fuel quality variations and long-term effects such as aging or wear of the engine. Moreover, the complexity and cost of engine calibration can thus be reduced.
10. REFERENCES
Adler, Ulrich (Ed.) (2000). Automotive Handbook. 5th ed. Robert Bosch GmbH.

Anastasia, Ch. M. and G. W. Pestana (1987). A cylinder pressure sensor for closed loop engine control. In: SAE Technical Paper Series, no. 870288.

Bargende, M. (1995). Most optimal location of 50% mass fraction burned and automatic knock detection components for automatic optimization of SI-engine calibrations. MTZ Worldwide.

Brown, M. and C. Harris (1994). Neurofuzzy Adaptive Modelling and Control. Prentice Hall, New York.

Fink, F., M. Fischer and O. Nelles (1999). Supervision of nonlinear adaptive controllers based on fuzzy models. In: 14th IFAC World Congress, Beijing, China.

Fischer, M., O. Nelles and R. Isermann (1998). Adaptive predictive control of a heat exchanger based on a fuzzy model. Control Engineering Practice 6(2), 259–269.

Glück, K.-H., U. Göbel, H. Hahn, J. Höhne, R. Krebs, T. Kreuzer and E. Pott (2000). Die Abgasreinigung der FSI-Motoren von Volkswagen. MTZ Motortechnische Zeitschrift 61(6), 402–412.

Hafner, M., M. Schüler and R. Isermann (2000). Model-based optimization of IC engines by means of fast neural networks - part 2: Static and dynamic optimization of fuel consumption versus emissions. MTZ Worldwide.

Hanselmann, H. (1996). Automotive control: From concept to experiment to product. In: IEEE International Symposium on Computer-Aided Control System Design.

Heiss, M. (1996). Error-minimizing dead-zone for basis function networks. IEEE Transactions on Neural Networks, pp. 1503–1506.

Herden, W. and M. Küsell (1994). A new combustion pressure sensor for advanced engine management. In: SAE Technical Paper Series, no. 940379.

Heywood, J. B. (1988). Internal Combustion Engine Fundamentals. McGraw-Hill.

Inoue, T., S. Matsushita, K. Nakanishi and H. Okano (1993). Toyota lean combustion system - the third generation system. In: SAE Technical Paper Series, no. 930873.

Isermann, R. (1992). Identifikation dynamischer Systeme. Vol. 1. 2nd ed. Springer Verlag, Berlin.

Isermann, R., K.-H. Lachmann and D. Matko (1992). Adaptive Control Systems. Prentice Hall, Englewood Cliffs.

Isermann, R., S. Ernst and O. Nelles (1997). Identification with dynamic neural networks - architectures, comparisons, applications. In: IFAC Symposium on System Identification, Fukuoka, Japan.

Matekunas, F. A. (1986). Engine combustion control with ignition timing by pressure ratio management. U.S. Patent.

Nelles, O. (1999). Nonlinear System Identification with Local Linear Neuro-Fuzzy Models. PhD thesis, Darmstadt University of Technology, Darmstadt, Germany.

Nelles, O. (2000). Nonlinear System Identification. Springer Verlag, Berlin, Heidelberg.

Nelles, O. and A. Fink (2000). Tool zur Optimierung von Rasterkennfeldern. atp Automatisierungstechnische Praxis 42(5), 58–66.