System Identification Using MATLAB
Post on 30-Nov-2015
Introduction
An adaptive filter is a filter that self-adjusts its transfer function according to an
optimization algorithm driven by an error signal. Because of the complexity of the
optimization algorithms, most adaptive filters are digital filters. By way of contrast, a non-
adaptive filter has a static transfer function. Adaptive filters are required for some
applications because some parameters of the desired processing operation (for instance, the
locations of reflective surfaces in a reverberant space) are not known in advance. The
adaptive filter uses feedback in the form of an error signal to refine its transfer function to
match the changing parameters.
Generally speaking, the adaptive process involves the use of a cost function, which is a
criterion for optimum performance of the filter, to feed an algorithm, which determines
how to modify filter transfer function to minimize the cost on the next iteration.
As the power of digital signal processors has increased, adaptive filters have become much
more common and are now routinely used in devices such as mobile phones and other
communication devices, camcorders and digital cameras, and medical monitoring
equipment.
What is an Adaptive Filter?
An adaptive filter is a computational device that attempts to model the relationship between
two signals in real time in an iterative manner. Adaptive filters are often realized either as a
set of program instructions running on an arithmetical processing device such as a
microprocessor or DSP chip, or as a set of logic operations implemented in a field-
programmable gate array (FPGA) or in a semicustom or custom VLSI integrated circuit.
However, ignoring any errors introduced by numerical precision effects in these
implementations, the fundamental operation of an adaptive filter can be characterized
independently of the specific physical realization that it takes.
Adaptive filter Aspects:
An adaptive filter is defined by four aspects:
1. The signals being processed by the filter
2. The structure that defines how the output signal of the filter is computed from its input
signal
3. The parameters within this structure that can be iteratively changed to alter the filter’s
input-output relationship.
4. The adaptive algorithm that describes how the parameters are adjusted from one time
instant to the next.
By choosing a particular adaptive filter structure, one specifies the number and type of
parameters that can be adjusted. The adaptive algorithm used to update the parameter
values of the system can take on a myriad of forms and is often derived as a form of
optimization procedure that minimizes an error criterion that is useful for the task at hand.
In this section, we present the general adaptive filtering problem and introduce the
mathematical notation for representing the form and operation of the adaptive filter. We
then discuss several different structures that have been proven to be useful in practical
applications. We provide an overview of the many and varied applications in which
adaptive filters have been successfully used. Finally, we give a simple derivation of the
least-mean-square (LMS) algorithm, which is perhaps the most popular method for
adjusting the coefficients of an adaptive filter, and we discuss some of this algorithm’s
properties.
The Adaptive Filtering Problem:
Figure 1 shows a block diagram in which a sample from a digital input signal x(n) is fed into a device, called an adaptive filter, that computes a corresponding output signal sample y(n) at time n. For the moment, the structure of the adaptive filter is not important, except for the fact that it contains adjustable parameters whose values affect how y(n) is computed. The output signal is compared to a second signal d(n), called the desired response signal, by subtracting the two samples at time n. This difference signal is given by:

e(n) = d(n) − y(n)    (1.1)
e(n) is known as the error signal. The error signal is fed into a procedure which alters or adapts the parameters of the filter from time n to time (n+1) in a well-defined manner. This process of adaptation is represented by the oblique arrow that pierces the adaptive filter block in the figure. As the time index n is incremented, it is hoped that the output of the adaptive filter becomes a better and better match to the desired response signal through this adaptation process, such that the magnitude of e(n) decreases over time. In this context, what is meant by “better” is specified by the form of the adaptive algorithm used to adjust the parameters of the adaptive filter.
In the adaptive filtering task, adaptation refers to the method by which the parameters of the system are changed from time index n to time index (n+1). The number and types of parameters within this system depend on the computational structure chosen for the system.
We now discuss different filter structures that have been proven useful for adaptive
filtering tasks.
Figure 1: The adaptive filtering problem
The Task of an Adaptive Filter:
When considering the adaptive filter problem as illustrated in Fig. 1 for the first time, a reader is likely to ask, “If we already have the desired response signal, what is the point of trying to match it using an adaptive filter?” In fact, the concept of “matching” y(n) to d(n) with some system obscures the subtlety of the adaptive filtering task. Consider the following issues that pertain to many adaptive filtering problems:
In practice, the quantity of interest is not always d(n). Our desire may be to represent in y(n) a certain component of d(n) that is contained in x(n), or it may be to isolate a component of d(n) within the error e(n) that is not contained in x(n). Alternatively, we may be solely interested in the values of the parameters in W(n) and have no concern about x(n), y(n), or d(n) themselves.
There are situations in which d(n) is not available at all times. In such situations, adaptation typically occurs only when d(n) is available. When d(n) is unavailable, we typically use our most-recent parameter estimates to compute y(n) in an attempt to estimate the desired response signal d(n).
There are real-world situations in which d(n) is never available. In such cases, one can use additional information about the characteristics of a “hypothetical” d(n), such as its predicted statistical behavior or amplitude characteristics, to form suitable estimates of d(n) from the signals available to the adaptive filter. Such
methods are collectively called blind adaptation algorithms. The fact that such
schemes even work is a tribute both to the ingenuity of the developers of the
algorithms and to the technological maturity of the adaptive filtering field.
It should also be recognized that the relationship between d(n) and e(n) can vary with time. In such situations, the adaptive filter attempts to alter its parameter values to follow the changes in this relationship as “encoded” by the two sequences x(n) and d(n). This behavior is commonly referred to as tracking.
Applications of Adaptive Filters:
Perhaps the most important driving forces behind the developments in adaptive filters
throughout their history have been the wide range of applications in which such systems
can be used. We now discuss the forms of these applications in terms of more-general problem classes that describe the assumed relationship between d(n) and x(n). Our discussion illustrates the key issues in selecting an adaptive filter for a particular task.
System Identification:
Consider Fig. 2, which shows the general problem of system identification. In this diagram,
the system enclosed by dashed lines is a “black box,” meaning that the quantities inside are
not observable from the outside. Inside this box is (1) an unknown system which represents
a general input output relationship and (2) the signal ŋ(n) called the observation noise
signal because it corrupts the observations of the signal at the output of the unknown
system.
Figure 2: System identification
Let d̂(n) represent the output of the unknown system with x(n) as its input. Then, the desired response signal in this model is:

d(n) = d̂(n) + ŋ(n)

Here, the task of the adaptive filter is to accurately represent the signal d̂(n) at its output. If

y(n) = d̂(n)

then the adaptive filter has accurately modeled or identified the portion of the unknown system that is driven by x(n). Since the model typically chosen for the adaptive filter is a
linear filter, the practical goal of the adaptive filter is to determine the best linear model that describes the input-output relationship of the unknown system. Such a procedure makes the most sense when the unknown system is also a linear model of the same structure as the adaptive filter, as it is then possible that

y(n) = d̂(n)

for some set of adaptive filter parameters. For ease of discussion, let the unknown system and the adaptive filter both be FIR filters, such that

d(n) = WoptT(n) x(n) + ŋ(n)

where Wopt(n) is an optimum set of filter coefficients for the unknown system at time n. In this problem formulation, the ideal adaptation procedure would adjust w(n) such that

w(n) = Wopt(n)

as n → ∞. In practice, the adaptive filter can only adjust w(n) such that y(n) closely approximates d̂(n) over time. The system identification task is at the heart of numerous adaptive filtering applications.
Introduction
Digital signal processing (DSP) has been a major player in the current technical
advancements such as noise filtering, system identification, and voice prediction. Standard
DSP techniques, however, are not enough to solve these problems quickly and obtain
acceptable results. Adaptive filtering techniques must be implemented to promote accurate
solutions and a timely convergence to that solution.
Adaptive Filtering System Configurations:
There are four major types of adaptive filtering configurations: adaptive system identification, adaptive noise cancellation, adaptive linear prediction, and adaptive inverse system. All of the above systems are similar in the implementation of the algorithm, but different in system configuration. All four systems have the same general parts: an input x(n), a desired result d(n), an output y(n), an adaptive transfer function w(n), and an error signal e(n), which is the difference between the desired output d(n) and the actual output y(n). In addition to these parts, the system identification and the inverse system configurations have an unknown linear system u(n) that can receive an input and give a linear output to the given input.
Adaptive System Identification Configuration:
The adaptive system identification is primarily responsible for determining a discrete
estimation of the transfer function for an unknown digital or analog system. The same input
x(n) is applied to both the adaptive filter and the unknown system from which the outputs
are compared (see figure 3). The output of the adaptive filter y(n) is subtracted from the output of the unknown system, which serves as the desired signal d(n). The resulting difference is an error signal e(n) used to manipulate the filter coefficients of the adaptive system, trending towards an error signal of zero.
Figure 3: Adaptive filter identification configuration
After a number of iterations of this process are performed, and if the system is designed
correctly, the adaptive filter’s transfer function will converge to, or near to, the unknown
system’s transfer function. For this configuration, the error signal does not have to go to
zero, although convergence to zero is the ideal situation, to closely approximate the given
system. There will, however, be a difference between adaptive filter transfer function and
the unknown system transfer function if the error is nonzero and the magnitude of that
difference will be directly related to the magnitude of the error signal. Additionally the
order of the adaptive system will affect the smallest error that the system can obtain. If
there are insufficient coefficients in the adaptive system to model the unknown system, it is
said to be under specified. This condition may cause the error to converge to a nonzero
constant instead of zero. In contrast, if the adaptive filter is over specified, meaning that
there are more coefficients than needed to model the unknown system, the error will
converge to zero, but it will increase the time it takes for the filter to converge.
Adaptive Noise Cancellation Configuration:
The second configuration is the adaptive noise cancellation configuration as shown in figure 4. In this configuration the input x(n), a noise source N1(n), is compared with a desired signal d(n), which consists of a signal s(n) corrupted by another noise N0(n). The adaptive filter coefficients adapt to cause the error signal to be a noiseless version of the signal s(n).
Figure 4: Adaptive filter noise cancellation
Both of the noise signals for this configuration need to be uncorrelated to the signal s(n). In
addition, the noise sources must be correlated to each other in some way, preferably equal,
to get the best results.
Due to the nature of the error signal, the error signal will never become zero. The error signal should converge to the signal s(n), but not to the exact signal. In other words, the difference between the signal s(n) and the error signal e(n) will always be greater than zero. The only option is to minimize the difference between those two signals.
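The configuration can be sketched as follows in Python/NumPy. For simplicity this sketch takes the best case mentioned above, N0(n) = N1(n); the sinusoidal s(n), the single-tap filter, and the step size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 4000
s = np.sin(2 * np.pi * 0.01 * np.arange(N))   # clean signal s(n)
noise = rng.standard_normal(N)                # common noise source
d = s + noise       # desired signal: s(n) corrupted by N0(n) = noise
x = noise.copy()    # reference input x(n) = N1(n), here equal to N0(n)

w = 0.0             # single adaptive coefficient (assumed structure)
mu = 0.01
e = np.zeros(N)
for n in range(N):
    y = w * x[n]            # adaptive estimate of the corrupting noise
    e[n] = d[n] - y         # error output: the cleaned-up signal
    w += mu * e[n] * x[n]

# after convergence, e(n) approximates s(n) but never matches it exactly
```

Note that the error output converges toward s(n), not toward zero, exactly as described above.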
Adaptive Linear Prediction Configuration:
Adaptive linear prediction is the third type of adaptive configuration (see figure 5). This
configuration essentially performs two operations. The first operation, if the output is taken
from the error signal e(n), is linear prediction. The adaptive filter coefficients are being
trained to predict, from the statistics of the input signal x(n), what the next input signal will
be. The second operation, if the output is taken from y(n), is a noise filter similar to the
adaptive noise cancellation outlined in the previous section.
As in the previous section, neither the linear prediction output nor the noise cancellation
output will converge to an error of zero. This is true for the linear prediction output because
if the error signal did converge to zero, this would mean that the input signal x(n) is entirely
deterministic, in which case we would not need to transmit any information at all.
Figure 5: Linear Prediction Configuration
In the case of the noise filtering output, as outlined in the previous section, y(n) will
converge to the noiseless version of the input signal.
Adaptive Inverse System Configuration:
The final filter configuration is the adaptive inverse system configuration as shown in figure 6. The goal of the adaptive filter here is to model the inverse of the unknown system u(n). This is particularly useful in adaptive equalization, where the goal of the filter is to
eliminate any spectral changes that are caused by a prior system or transmission line. The
way this filter works is as follows. The input x(n) is sent through the unknown filter u(n)
and then through the adaptive filter resulting in an output y(n). The input is also sent
through a delay to attain d(n). As the error signal is converging to zero, the adaptive filter
coefficients w(n) are converging to the inverse of the unknown system u(n).
Figure 6: Adaptive Inverse Configuration
For this configuration, as for the system identification configuration, the error can
theoretically go to zero. This will only be true, however, if the unknown system consists only of a finite number of poles or the adaptive filter is an IIR filter. If neither of these conditions is true, the system will converge only to a constant due to the limited number of zeros available in an FIR system.
Performance Measures in Adaptive Systems:
Six performance measures will be discussed in the following sections; convergence rate,
minimum mean square error, computational complexity, stability, robustness, and filter
length.
Convergence Rate:
The convergence rate determines the rate at which the filter converges to its resultant state. Usually a faster convergence rate is a desired characteristic of an adaptive system.
Convergence rate is not, however, independent of the other performance characteristics. There is a tradeoff: improving the convergence rate costs other performance criteria, and improving other performance criteria reduces the convergence rate. For example, if the convergence rate is increased, the stability
characteristics will decrease, making the system more likely to diverge instead of converge
to the proper solution. Likewise, a decrease in convergence rate can cause the system to
become more stable. This shows that the convergence rate can only be considered in relation to the other performance metrics, not by itself with no regard to the rest of the system.
Minimum Mean Square Error:
The minimum mean square error (MSE) is a metric indicating how well a system can adapt
to a given solution. A small minimum MSE is an indication that the adaptive system has
accurately modeled, predicted, adapted and/or converged to a solution for the system. A
very large MSE usually indicates that the adaptive filter cannot accurately model the given
system or the initial state of the adaptive filter is an inadequate starting point to cause the
adaptive filter to converge. There are a number of factors which help to determine the minimum MSE, including, but not limited to: quantization noise, order of the adaptive system, measurement noise, and error of the gradient due to the finite step size.
Computational Complexity:
Computational complexity is particularly important in real time adaptive filter applications.
When a real time system is being implemented, there are hardware limitations that may
affect the performance of the system. A highly complex algorithm will require much
greater hardware resources than a simplistic algorithm.
Stability:
Stability is probably the most important performance measure for the adaptive system. By
the nature of the adaptive system, there are very few completely asymptotically stable
systems that can be realized. In most cases the systems that are implemented are marginally
stable, with the stability determined by the initial conditions, transfer function of the system
and the step size of the input.
Robustness:
The robustness of a system is directly related to the stability of a system. Robustness is a
measure of how well the system can resist both input and quantization noise.
Filter Length:
The filter length of the adaptive system is inherently tied to many of the other performance
measures. The length of the filter specifies how accurately a given system can be modeled
by the adaptive filter. In addition, the filter length affects the convergence rate by increasing or decreasing computation time; it can affect the stability of the system at certain step sizes; and it affects the minimum MSE. If the filter length of the system is increased, the number of computations will increase, decreasing the maximum convergence rate. Conversely, if the filter length is decreased, the number of computations will decrease, increasing the maximum convergence rate. For stability, increasing the length of the filter for a given system may add poles or zeros that are smaller than those that already exist. In this case the maximum step size, or maximum convergence rate, will have to be decreased to maintain stability. Finally, if the system is under specified, meaning there are not enough poles and/or zeros to model the system, the mean square error will converge to a nonzero constant. If the system is over specified, meaning it has too many poles and/or zeros for the system model, it will have the potential to converge to zero, but the increased calculations will affect the maximum convergence rate possible.
Filter Algorithms:
A number of filter algorithms will be discussed in this section: the finite impulse response (FIR) least mean squares (LMS) gradient approximation method will be discussed in detail, characteristics of infinite impulse response (IIR) adaptive filters will be briefly discussed, and the transform domain adaptive filter (TDAF) and numerous other algorithms will be mentioned for completeness.
Finite Impulse Response (FIR) Algorithms:
Least Mean Squares Gradient Approximation Method
Given an adaptive filter with an input x(n), an impulse response w(n), and an output y(n), the input-output relation of the system is

y(n) = wT(n)x(n)

and

x(n) = [x(n), x(n-1), x(n-2), ... , x(n-(N-1))]

where wT(n) = [w0(n), w1(n), w2(n), ... , wN-1(n)] are the time domain coefficients for an Nth-order FIR filter.
Note that in the above equation and throughout, a boldface letter represents a vector and the superscript T represents the transpose of a real-valued vector or matrix.
From the ideal cost function, the following steepest-descent update can be derived:

w(n+1) = w(n) - μ∇E[e²](n)

In the above equation, w(n+1) represents the new coefficient values for the next time interval, μ is a scaling factor, and ∇E[e²](n) is the gradient of the ideal cost function with respect to the vector w(n). Replacing this gradient with its instantaneous estimate gives the LMS update

w(n+1) = w(n) + μe(n)x(n)

where

e(n) = d(n) - y(n) and
y(n) = xT(n)w(n).

In the above update, μ is sometimes multiplied by 2, but here we will assume that the 2 is absorbed by the μ factor.
In summary, in the Least Mean Squares Gradient Approximation Method, often referred to
as the Method of Steepest Descent, a guess based on the current filter coefficients is made,
and the gradient vector, the derivative of the MSE with respect to the filter coefficients, is
calculated from the guess. Then a second guess is made at the tap-weight vector by making
a change in the present guess in a direction opposite to the gradient vector. This process is
repeated until the derivative of the MSE is zero.
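The steepest-descent procedure above can be written out concretely. The following Python/NumPy sketch implements the N-tap FIR LMS update w(n+1) = w(n) + μe(n)x(n); the 3-tap "unknown" system h and the step size are hypothetical values used only to exercise the code.

```python
import numpy as np

def lms_fir(x, d, N, mu):
    """N-tap FIR LMS: w(n+1) = w(n) + mu * e(n) * x(n)."""
    w = np.zeros(N)
    e = np.zeros(len(x))
    for n in range(N, len(x)):
        xn = x[n - N + 1:n + 1][::-1]  # tap vector [x(n), x(n-1), ..., x(n-N+1)]
        y = w @ xn                     # y(n) = wT(n) x(n)
        e[n] = d[n] - y                # e(n) = d(n) - y(n)
        w = w + mu * e[n] * xn         # move opposite the gradient estimate
    return w, e

rng = np.random.default_rng(2)
x = rng.standard_normal(3000)
h = np.array([0.7, -0.3, 0.2])         # hypothetical unknown FIR system
d = np.convolve(x, h)[:len(x)]         # its (noise-free) output
w, e = lms_fir(x, d, 3, 0.05)
# w converges to h and the late error samples approach zero
```

With a white input and no observation noise, the tap-weight vector converges to the unknown system's coefficients, illustrating the repeated "guess, measure gradient, step opposite the gradient" cycle described above.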
Convergence of the LMS Adaptive Filter:
The convergence characteristics of the LMS adaptive filter are related to the autocorrelation of the input process as defined by:
Rx = E[x(n)xT(n)]
There are two conditions that must be satisfied in order for the system to converge. These
conditions include:
• The autocorrelation matrix, Rx, must be positive definite.
• 0 < μ < 1/λmax, where λmax is the largest eigenvalue of Rx.
In addition, the rate of convergence is related to the eigenvalue spread. This is defined
using the condition number of Rx, defined as κ = λmax/λmin, where λmin is the minimum
eigenvalue of Rx. The fastest convergence of this system occurs when κ = 1, corresponding
to white noise. This states that the fastest way to train a LMS adaptive system is to use
white noise as the training input. As the noise becomes more and more colored, the speed
of the training will decrease.
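Both convergence conditions can be checked numerically. The sketch below (Python/NumPy; the white-noise input and filter length of 4 are assumed values) estimates Rx from tap-delay vectors and derives the step-size bound and the condition number κ.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 4                              # filter length (assumed)
x = rng.standard_normal(10000)     # white training input

# Estimate Rx = E[x(n) xT(n)] from the tap-delay vectors
X = np.array([x[n - N + 1:n + 1][::-1] for n in range(N, len(x))])
Rx = X.T @ X / len(X)

eigs = np.linalg.eigvalsh(Rx)
lam_min, lam_max = eigs.min(), eigs.max()

mu_bound = 1.0 / lam_max           # step size must satisfy 0 < mu < 1/lam_max
kappa = lam_max / lam_min          # condition number; equals 1 for ideal white noise
```

For white noise the sample eigenvalues cluster near the input power, so κ stays close to 1, the fastest-training case; a colored input would spread the eigenvalues and raise κ.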
Transform Domain Adaptive Filter (TDAF)
The TDAF refers to the application of an adaptive filter after a transform of the input
system to an orthogonal basis, such as the discrete Fourier Transform (DFT), the discrete
cosine transform (DCT), and the Walsh Hadamard transform (WHT). The primary
motivation for the use of the TDAF is that for colored noise, the TDAF can exhibit faster
convergence rates. This will be briefly explained in the next section.
The TDAF, in simple terms, is a transform of the system, using one of the above
techniques, followed by an application of an adaptive filter algorithm, such as the LMS
adaptive filter.
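One common transform-domain variant is the DFT-LMS filter, sketched below in Python/NumPy: the tap-delay vector is rotated by an orthonormal DFT, and each transformed coefficient is updated with a step size normalized by a running estimate of that bin's power. The mildly colored input, the 8-tap unknown system, and the smoothing constants are illustrative assumptions, not details from the text.

```python
import numpy as np

def dft_lms(x, d, N, mu, eps=1e-6):
    """LMS in the DFT domain with per-bin power normalization."""
    W = np.zeros(N, dtype=complex)       # transform-domain weights
    p = np.full(N, eps)                  # running per-bin power estimates
    e = np.zeros(len(x))
    for n in range(N, len(x)):
        xn = x[n - N + 1:n + 1][::-1]    # tap-delay vector
        u = np.fft.fft(xn) / np.sqrt(N)  # orthonormal DFT of the taps
        y = np.real(np.conj(W) @ u)      # filter output
        e[n] = d[n] - y
        p = 0.95 * p + 0.05 * np.abs(u) ** 2
        W = W + (mu / p) * e[n] * u      # power-normalized LMS update
    return e

rng = np.random.default_rng(5)
white = rng.standard_normal(6000)
x = np.convolve(white, [1.0, 0.5], 'same')  # mildly colored training input
h = rng.standard_normal(8) * 0.5            # hypothetical unknown FIR system
d = np.convolve(x, h)[:len(x)]
e = dft_lms(x, d, 8, 0.05)
```

The per-bin normalization is what equalizes the modes of a colored input and yields the faster convergence claimed above.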
Convergence of the TDAF
It can be shown that the optimum MSE surface is a hypersphere. When white noise is used
in training, the error surface will approach a hypersphere. If, however, colored noise is
being used as the training input, the shape of the MSE surface will be a hyperellipse. This
causes an increase in the convergence time.
When the TDAF is applied to the system with a colored noise input, it causes a rotation and a scaling of the axes, causing the points of the MSE surface to intersect the axes at equal distances. This will cause a decrease in the convergence time of the adaptive filter.
Quasi-Newton Adaptive Algorithms:
The quasi-Newton adaptive algorithm uses second order statistics to increase the convergence rate of an adaptive filter, via the Gauss-Newton method. Probably the best
known quasi-Newton algorithm is the recursive least squares (RLS) algorithm. It is
important to note that even with the increase in convergence rate, the RLS algorithm
requires great amounts of processing power, which can make it difficult to implement on
real-time systems.
There are a number of other quasi-Newton algorithms that have fast convergence rates, and
that are also feasible alternatives for real time processing.
Adaptive Lattice Algorithms
The primary reason for the use of a lattice structure is to reduce the quantization noise introduced by filter coefficients in finite word length systems. As the name suggests, adaptive lattice algorithms are designed with the goal of reducing coefficient quantization error, and thus possibly decreasing the word length of the system with comparable performance.
Infinite Impulse Response (IIR) Adaptive Filters:
The primary advantage of IIR filters is that, to produce a frequency response equivalent to that of an FIR filter, they can have fewer coefficients. This in theory should reduce the number of adds, multiplies, and shifts needed to perform a filtering operation.
This theory of using IIR filters to reduce the computational burden is the primary
motivation for the use of IIR adaptive filters. There are, however, a number of problems
that are introduced with the use of IIR adaptive filters.
The fundamental concern with IIR adaptive filters is the potential for instability due to
poles moving outside the unit circle during the training process. Even if the system is
initially stable and the final system is stable, there is still the possibility of the system going
unstable during the convergence process. Some suggestion has been made to limit the poles
to within the unit circle, however, this method requires that the step sizes be small, which
considerably reduces the convergence rate.
Due to the interplay between the movement of the poles and zeros, the convergence of IIR systems tends to be slow. The result is that even though IIR filters have fewer coefficients, and therefore fewer calculations per iteration, the number of iterations may increase, causing a net loss in processing time to convergence. This, however, is not a problem with all-pole filters.
• In an IIR system, the MSE surface may contain local minima that can cause convergence of the system to a local minimum instead of the absolute minimum. More care needs to be taken with the initial conditions in IIR adaptive filters than in FIR adaptive filters.
• IIR filters are more susceptible to coefficient quantization error than FIR, due to
the feedback.
There have been a number of studies done on the use of IIR adaptive filters, but due to the
problems stated above, they are still not widely used in industry today.
MATLAB:
MATLAB (matrix laboratory) is a numerical computing environment and fourth-generation programming language. Developed by MathWorks, MATLAB allows matrix manipulations, plotting of functions and data, implementation of algorithms, creation of user interfaces, and interfacing with programs written in other languages, including C, C++, Java, and Fortran.
Although MATLAB is intended primarily for numerical computing, an optional toolbox
uses the MuPAD symbolic engine, allowing access to symbolic computing capabilities. An
additional package, Simulink, adds graphical multi-domain simulation and Model-Based
Design for dynamic and embedded systems.
In 2004, MATLAB had around one million users across industry and academia. MATLAB users come from various backgrounds of engineering, science, and economics. MATLAB is widely used in academic and research institutions as well as industrial enterprises.
The Scenario:
Figure 7 presents a block diagram of a system identification application using adaptive filtering. The objective is to change (adapt) the coefficients of a filter W (which can be an FIR or an IIR one) to match as closely as possible the response of an unknown system H. The unknown system and the adapting filter process the same input signal x[n] and have as outputs d[n] (also referred to as the desired signal) and y[n].
The filter W is adapted using the least mean-square algorithm, which is the most widely used adaptive filtering algorithm. First the error signal, e[n], computed as in eq. 1, measures the difference between the output of the adaptive filter and the output of the unknown system. On the basis of this measure, the adaptive filter will change its coefficients c[n] in an attempt to reduce the error. The coefficient update relation is a function of the error signal squared and is given by the following equations:

e[n] = d[n] - y[n]
c[n+1] = c[n] + μ e[n] x[n]
Figure 7: The scenario block diagram
The convergence time of the LMS algorithm depends on the step size μ. If μ is small, then it may take a long convergence time, and this may defeat the purpose of using an LMS filter. However, if μ is too large, the algorithm may never converge.
The LMS reference design has the following two main functional blocks: the FIR (or IIR) filter and the LMS algorithm. The FIR filter is implemented serially using a multiplier and an adder with feedback, as shown in the high level schematic in Figure 8. The FIR result is normalized to minimize saturation.
Figure 8: Using Adaptive filter to identify the Unknown system
The MATLAB Code:
Code Explanation:
1. The user first inputs the number of samples of the input signal. The number of samples is held in the variable N.
2. The step size is also requested from the user. The MU factor will later affect the convergence rhythm of the algorithm.
3. The input signal is added to noise signal and the output signal of this addition will act
as a new input signal.
4. The output signal from 3 above is introduced to an unknown system (the function is actually known in advance, just to check to what extent the LMS algorithm is working). The output of the system is the desired signal.
5. Adaptive filter initialization through the LMS. The Adaptive filter inputs are: desired
signal (output from the unknown system) + input signal (after adding noise) + the LMS
factor (Ha). The output of the adaptive filter is the y (which should be close to desired
signal) + the error.
6. Last step is to show the marginal differences between the adapted signal and the actual
one. Also comparing the errors through time.
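The six steps can be mirrored in a short Python/NumPy sketch (this is not the MATLAB code itself; the 3-tap system h, the noise level, and the fixed sample count and step size are stand-in values for what the user would enter):

```python
import numpy as np

rng = np.random.default_rng(4)

# 1-2. number of samples N and step size mu (user inputs in the MATLAB code)
N, mu = 2000, 0.05

# 3. the input signal is added to a noise signal; the sum is the new input
signal = rng.standard_normal(N)
x = signal + 0.1 * rng.standard_normal(N)

# 4. the new input drives the "unknown" system (known here, to check the LMS);
#    its output is the desired signal
h = np.array([0.9, 0.4, -0.2])       # hypothetical system coefficients
d = np.convolve(x, h)[:N]

# 5. LMS adaptive filter: inputs are d, x, and the step size; outputs are y and e
taps = len(h)
w = np.zeros(taps)
y = np.zeros(N)
e = np.zeros(N)
for n in range(taps, N):
    xn = x[n - taps + 1:n + 1][::-1]  # [x(n), x(n-1), x(n-2)]
    y[n] = w @ xn
    e[n] = d[n] - y[n]
    w += mu * e[n] * xn

# 6. the adapted coefficients w approximate h, and the late errors are near zero
```

Plotting y against d and the error over time would reproduce step 6's comparison of the adapted signal with the actual one.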
Functions used in code:
Filter:
The function "filter" is actually a digital filter. It can be low pass or high pass based on the coefficients.
Output of the filter = filter(b, a, input signal).
b is the denominator of the filter function.
a is the nominator of the filter function.
Example:
You can use filter to find a running average without using a for loop. This example finds the running average of a 16-element vector, using a window size of 5.
data = [1:0.2:4]';
windowSize = 5;
filter(ones(1,windowSize)/windowSize,1,data)
ans =
0.2000
0.4400
0.7200
1.0400
1.4000
1.6000
1.8000
2.0000
2.2000
2.4000
2.6000
2.8000
3.0000
3.2000
3.4000
3.6000
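For comparison, the same running average can be reproduced outside MATLAB; the Python/NumPy sketch below mimics filter(b, 1, data) with a causal convolution truncated to the input length.

```python
import numpy as np

data = np.linspace(1, 4, 16)          # MATLAB's [1:0.2:4]'
windowSize = 5
b = np.ones(windowSize) / windowSize  # moving-average numerator coefficients

# filter(b, 1, data) is a causal FIR filter with zero initial conditions,
# equivalent to a full convolution truncated to len(data) samples
out = np.convolve(data, b)[:len(data)]
# out begins 0.2, 0.44, 0.72, ... and ends at 3.6, matching the MATLAB output
```

The startup transient (the first windowSize-1 samples) comes from the implicit zero initial conditions, which is why the first outputs are smaller than a centered average would give.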
Algorithm:
The filter function is implemented as a direct form II transposed structure, which handles both FIR and IIR filters [1]. The operation of filter at sample n is given by the time domain difference equation:

a(1)y(n) = b(1)x(n) + b(2)x(n-1) + ... + b(nb+1)x(n-nb)
         - a(2)y(n-1) - ... - a(na+1)y(n-na)

where max(na, nb) is the filter order. The input-output description of this filtering operation in the z-transform domain is a rational transfer function:

Y(z) = [ (b(1) + b(2)z^-1 + ... + b(nb+1)z^-nb) / (a(1) + a(2)z^-1 + ... + a(na+1)z^-na) ] X(z)
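A minimal Python version of this structure is sketched below; it is an illustration of the direct form II transposed recursion (normalize by a(1), tap the first state register for the output, then shift the states), not MathWorks code.

```python
import numpy as np

def df2t_filter(b, a, x):
    """Direct form II transposed realization of y = filter(b, a, x)."""
    b = np.asarray(b, dtype=float)
    a = np.asarray(a, dtype=float)
    b, a = b / a[0], a / a[0]           # normalize so that a(1) = 1
    L = max(len(b), len(a))             # L - 1 is the filter order
    b = np.pad(b, (0, L - len(b)))
    a = np.pad(a, (0, L - len(a)))
    x = np.asarray(x, dtype=float)
    if L == 1:                          # pure gain, no state needed
        return b[0] * x
    z = np.zeros(L - 1)                 # delay-line state registers
    y = np.zeros(len(x))
    for n, xn in enumerate(x):
        yn = b[0] * xn + z[0]           # output taps the first state
        for k in range(L - 2):          # shift states, reading old values
            z[k] = z[k + 1] + b[k + 1] * xn - a[k + 1] * yn
        z[L - 2] = b[L - 1] * xn - a[L - 1] * yn
        y[n] = yn
    return y

# IIR example: y(n) = x(n) + 0.5*y(n-1); impulse response 1, 0.5, 0.25, ...
imp = df2t_filter([1.0], [1.0, -0.5], [1.0, 0.0, 0.0, 0.0])
```

Because the feedback coefficients a(2)..a(na+1) enter the state update, the same loop covers the FIR case (a = [1]) and the IIR case without modification.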
Randn:
Normally distributed random numbers
Syntax:
Y = randn
Y = randn(n)
Y = randn(m,n)
Y = randn([m n])
Y = randn(m,n,p,...)
Y = randn([m n p...])
Y = randn(size(A))
randn(method,s)
s = randn(method)
Description
Y = randn returns a pseudorandom, scalar value drawn from a normal distribution with
mean 0 and standard deviation 1.
Y = randn(n) returns an n-by-n matrix of such values.
Y = randn(m,n) or Y = randn([m n]) returns an m-by-n matrix of such values.
Y = randn(m,n,p,...) or Y = randn([m n p...]) generates an m-by-n-by-p-by-... array of such
values.
Example
R = randn(3,4) might produce
R =
1.1650 0.3516 0.0591 0.8717
0.6268 -0.6965 1.7971 -1.4462
0.0751 1.6961 0.2641 -0.7012
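For readers following along outside MATLAB, a rough pure-Python analogue of randn(m,n) can be built from the standard library's random.gauss (the function name and seed argument here are our own, not part of MATLAB):

```python
import random

def randn(m, n, seed=None):
    """Return an m-by-n list of lists of pseudorandom values drawn
    from a normal distribution with mean 0 and standard deviation 1."""
    rng = random.Random(seed)
    return [[rng.gauss(0.0, 1.0) for _ in range(n)] for _ in range(m)]

R = randn(3, 4, seed=0)   # shape mirrors MATLAB's randn(3,4)
```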
Fircband:
Perform constrained-band equiripple FIR filter design
Syntax:
b = fircband(n,f,a,w,c)
b = fircband(n,f,a,s)
b = fircband(...,'1')
b = fircband(...,'minphase')
b = fircband(..., 'check')
b = fircband(...,{lgrid})
[b,err] = fircband(...)
[b,err,res] = fircband(...)
Description:
fircband is a minimax filter design algorithm that you use to design the following types of
real FIR filters:
Types 1-4 linear phase: Type 1 (even order, symmetric), Type 2 (odd order, symmetric),
Type 3 (even order, antisymmetric), Type 4 (odd order, antisymmetric)
Minimum phase
Maximum phase
Minimum order (even or odd), extra ripple
Maximal ripple
Constrained ripple
Single-point band (notching and peaking)
Forced gain
Arbitrary-shape frequency response curve filters
b = fircband(n,f,a,w,c) designs filters having constrained error magnitudes (ripples). c is a
cell array of strings of the same length as w. The entries of c must be either 'c' to indicate
that the corresponding element in w is a constraint (the ripple for that band cannot exceed
that value) or 'w' indicating that the corresponding entry in w is a weight. There must be at
least one unconstrained band--c must contain at least one w entry. For instance, Example 1
below uses a weight of one in the passband, and constrains the stopband ripple not to
exceed 0.2 (about 14 dB).
A hint about using constrained values: if your constrained filter does not touch the
constraints, increase the error weighting you apply to the unconstrained bands.
Notice that, when you work with constrained stopbands, you enter the stopband constraint
according to the standard conversion formula for power: the resulting filter attenuation or
constraint equals 20*log10(constraint), where constraint is the value you enter in the
function. For example, to set 20 dB of attenuation, use a constraint value of 0.1. This
applies to constrained stopbands only.
b = fircband(n,f,a,s) is used to design filters with special properties at certain frequency
points. s is a cell array of strings and must be the same length as f and a. Entries of s must
be one of:
'n'--normal frequency point.
's'--single-point band. The frequency band is given by a single point. You must specify the
corresponding gain at this frequency point in a.
'f'--forced frequency point. Forces the gain at the specified frequency band to be the
specified value.
'i'--indeterminate frequency point. Use this argument when bands abut one another (no
transition region).
MATLAB CODE (M-file):
1. Clear all the MATLAB environment variables and main screen
clc;
clear all;
2. The user first inputs the number of samples of the input signal. The number of samples is held in the variable N.
N= input('No of Samples :'); %No of Samples
n=(1:N)';
x= input ('enter signal:');
3. The step size is also requested from the user. This factor, mu, later determines the convergence rate of the algorithm.
step = input ('please enter the mu (step size) :');
4. A noise signal is added to the input signal, and the result of this addition acts as the new input signal.
v = 0.8*randn(N,1); % random noise
ar = [1,1/2]; % denominator coefficients used to colour the noise
v1 = filter(1,ar,v); % coloured noise signal (applies a 1-D digital filter)
s = x + v1; % input signal with noise added
5. Plotting the input signal
subplot(4,1,1);
plot(s);
title('Received Signal S(t)');
xlabel('Time');
ylabel('Amplitude'); % received signal
6. Designing the Unknown system
[b,err,res] = fircband(30,[0 0.4 0.9 1],[1 1 0 1],[1 0.2],{'w','c'}); % designs the
unknown system
7. Calculating the desired signal: the output signal from step 4 is fed into the unknown system (whose function is actually known, so that we can check how well the LMS algorithm works). The output of this system is the desired signal.
d=filter(b,1,s); % desired signal
subplot(4,1,2);
plot(d);
title('DESIRED SIGNAL d(t)');
xlabel('Time');
ylabel('Amplitude');
8. Adaptive filter initialization through the LMS. The adaptive filter's inputs are the
desired signal (output of the unknown system), the input signal (after noise is added), and the LMS filter object (ha). The outputs of the adaptive filter are y (which should be close to the desired signal) and the error signal.
ha=adaptfilt.lms(31,step); %ADAPTIVE ALGORITHM
[y,e]=filter(ha,s,d);
subplot(4,1,3);
plot(y);
title('Adaptive filter output y(t)');
xlabel('Time'); ylabel('Amplitude');
subplot(4,1,4);
plot(e);
title('Error signal e(t)');
xlabel('Time');
ylabel('Amplitude');
9. The last step is to display the marginal differences between the adapted system and the
actual one, and to compare the error over time.
% Displaying the marginal difference between adapted system and actual
% system
disp ('Ideal System Coefficients');
disp ([b]);
disp ('Derived system (adaptive filter)');
disp ([ha.coefficients]);
disp( 'Difference between two systems');
disp ([ha.coefficients-b]);
%%%%%%%%%% Calculation of the point of convergence %%%%%%%%%%
E_db=20*log10(abs(e));
a = 0; % flag: set to 1 once convergence is detected
for n = 16 : N
    if (E_db(n) <= -100 && a == 0) % error has dropped below -100 dB
        a = 1;
        break;
    end;
end;
Conv_Sample = n % convergence sample index (no semicolon, so it is displayed)
figure(2);
plot(1:N,[d,y,e]);
title('System Identification');
xlabel('Time');
ylabel('Amplitude');
legend('DESIRED','OUTPUT','ERROR');
figure(3);subplot(2,1,1);
stem(b);
title('ACTUAL SYSTEM');
xlabel('Step');
ylabel('h(k)');
subplot(2,1,2);
stem([ha.coefficients]);
title('IDENTIFIED SYSTEM');
xlabel('Step');
ylabel('ha(k)');
figure(4); subplot(2,1,1);
plot((abs(e)));
title('Error');
ylabel('e(n)');
xlabel('iteration number');
subplot(2,1,2);
plot(20*log10(abs(e)));
title('Error(dB)');
ylabel('20 log e(n)');
xlabel('iteration number');
Results and discussion:
Figure 9: the input signal to the unknown system (received signal), the desired signal, the
adaptive filter output and the error signal.
As can be clearly seen, the adaptive filter has successfully reconstructed the system
function. The error signal amplitude was high at the beginning of the identification
process, but with further iterations it was reduced significantly. It can also be seen that
most of the error reduction occurred at the very beginning, which demonstrates one of the
LMS algorithm's properties: fast initial error reduction.
Figure 10: the desired, output and error signals.
The diagram above plots three important signals: the desired signal, the actual system
output signal and the error signal. The error signal here is the difference between the
desired and actual output signals. As can be clearly seen, the desired signal and the actual
output signal match completely (the blue desired curve is in fact hidden behind the green
actual output). This happens as soon as the error diminishes.
Figure 11: the frequency domain representation of the actual system and the identified
system.
It can be clearly seen that there is a high degree of similarity between the two graphs.
Conclusion:
The convergence time of the LMS algorithm depends on the step size μ. If μ is small,
convergence may take a long time, which can defeat the purpose of using an LMS filter.
However, if μ is too large, the algorithm may never converge.
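This trade-off is easy to demonstrate with a tiny self-contained experiment (hypothetical values, names our own; a one-tap LMS filter identifying a constant gain g, with a stability bound of roughly μ < 2/E[x²] for unit-power input):

```python
import random

def final_error(mu, steps=200, seed=1):
    """Run a one-tap LMS filter identifying the gain g = 2.0;
    return |g - w| after the given number of steps."""
    rng = random.Random(seed)
    g, w = 2.0, 0.0                 # true gain, adaptive weight
    for _ in range(steps):
        x = rng.gauss(0.0, 1.0)     # input sample (unit power)
        e = g * x - w * x           # error against the desired output
        w += mu * e * x             # LMS update
    return abs(g - w)

small = final_error(mu=0.001)   # converges, but very slowly
good  = final_error(mu=0.1)     # converges quickly
huge  = final_error(mu=3.0)     # beyond the stability bound: diverges
```

With the tiny step size the weight is still far from g after 200 iterations, while the oversized step size makes the weight error grow without bound.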
The LMS reference design has the following two main functional blocks: the FIR (or IIR)
filter and the LMS algorithm. The FIR filter is implemented serially using a multiplier and
an adder with feedback, as shown in the high-level schematic in Figure 11. The FIR result
is normalized to minimize saturation.
There are two main reasons why the LMS adaptive filter is so popular:
1. It is relatively easy to implement in software and hardware due to its computational
simplicity and efficient use of memory;
2. It performs robustly in the presence of numerical errors caused by finite-precision
arithmetic.
For the system identification application, the first step was to create a Simulink model
which provided the input signals for the hardware design. Concerning the hardware
implementation, the LMS adaptive filter core was coded in Verilog-HDL, a dedicated
hardware description language, following a previously created block scheme. All the test
files and the filter core were instantiated into a top-level file and simulated synchronously.
The results prove that, over time, the coefficients adapt and the error converges.