# Digital Signal Processing Using MATLAB for Students and Researchers (Leis): Temporal and Spatial Signal Processing


<ul><li><p>

## CHAPTER 6: TEMPORAL AND SPATIAL SIGNAL PROCESSING

*Digital Signal Processing Using MATLAB for Students and Researchers*, First Edition. John W. Leis. © 2011 John Wiley & Sons, Inc.

### 6.1 CHAPTER OBJECTIVES

On completion of this chapter, the reader should be able to:

1. explain and use correlation as a signal processing tool;
2. derive the correlation equations for some mathematically defined signals;
3. derive the operating equations for system identification in terms of correlation;
4. derive the equations for signal extraction when the signal is contaminated by noise, and explain the underlying assumptions behind the theory;
5. derive the equations for linear prediction and optimal filtering;
6. write down the basic equations used in tomography for reconstruction of an object's interior from cross-sectional measurements.

### 6.2 INTRODUCTION

Signals, by definition, are varying quantities. They may vary over time (temporal), over an x-y plane (spatial), over three dimensions, or perhaps over both time and space (e.g., a video sequence). Understanding how signals change over time (or space) helps us in several key application areas. Examples include extracting a desired signal when it is mixed with noise, identifying (or at least, estimating) the coefficients of a system, and reconstructing a signal from indirect measurements (tomography).

### 6.3 CORRELATION

The term correlation means, in general, the similarity between two sets of data. As we will see, in signal processing it has a more precise meaning. Correlation in
signal processing has a variety of applications, including removal of random noise from a periodic signal, pattern recognition, and system parameter identification.

To introduce the computation of correlation, consider first a simpler problem involving a set of measured data points (x_n, y_n). We wish to quantify the similarity between them, or more precisely, to determine whether they are correlated (whether one depends in some way on the other). Note that this does not necessarily mean a causal relationship (that one causes the other). These data points may represent samples from the same signal at different times, or samples from two different but related signals.

We may picture the two sets of samples as shown on the axes of Figure 6.1, on the assumption that there is a linear relationship between the two sets of measurements. Because the measurements are not precise, some variation or noise will cause the points to scatter from the ideal.

Mathematically, we wish to choose the value of b describing the line y = bx such that the error between calculated (estimated) and measured points is minimized over all data points. The error is the difference between the measured data point y_d and the straight-line estimate y_l. So how do we determine b?

We can employ the tools of calculus to help derive a solution. Since the error may be positive or negative, we take the square of the error:

$$\varepsilon = y_d - y_l, \qquad \varepsilon^2 = (y_d - y_l)^2 = (y_d - bx)^2 \qquad (6.1)$$

The average squared (mean square) error over N points is:

$$E(\varepsilon^2) = \frac{1}{N}\sum_{N}(y_d - bx)^2. \qquad (6.2)$$

FIGURE 6.1 Correlation interpreted as a line of best fit.
The measured points are shown as discrete points in the (x, y) plane, and the line shows an approximation to them. The error ε is shown between one particular measured point y_d and the corresponding line value y_l; the goal is to adjust the slope of the line until the average squared error over all measurements is minimized.

For simplicity of notation, replace y_d with y. This is a minimization problem, and hence the principles of function minimization are useful. The derivative with respect to the parameter b is:

$$\frac{\partial E(\varepsilon^2)}{\partial b} = \frac{1}{N}\sum_{N} 2\,(y - bx)(-x). \qquad (6.3)$$

Setting this gradient to zero,

$$\begin{aligned}
\frac{1}{N}\sum_{N}(-2)\left(xy - bx^2\right) &= 0\\
\frac{1}{N}\sum_{N}xy &= b\,\frac{1}{N}\sum_{N}x^2\\
b &= \frac{\sum_N xy}{\sum_N x^2}.
\end{aligned} \qquad (6.4)$$

If the points x_n and y_n represent samples of signals x(n) and y(n), then b is simply a scalar quantity describing the similarity of signal x(n) with signal y(n). If b = 0, there is no similarity; if b = 1, there is a strong correlation between one and the other.

### 6.3.1 Calculating Correlation

The autocorrelation of a sampled signal is defined as the product

$$R_{xx}(k) = \frac{1}{N}\sum_{n=0}^{N-1} x(n)\,x(n-k), \qquad (6.5)$$

where k is the lag or delay. The correlation may be normalized by the mean square of the signal R_xx(0), giving:

$$\rho_{xx}(k) = \frac{R_{xx}(k)}{R_{xx}(0)}. \qquad (6.6)$$

The subscript xx denotes the fact that the signal is multiplied by a delayed version of itself. For a number of values of k, the result is a set of correlation values: an autocorrelation vector.

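As a quick numerical check of the least-squares slope of Equation (6.4), the following sketch fits a line to synthetic data. The underlying slope of 2.5 and the noise level are invented purely for illustration; they do not come from the text.

```matlab
% sketch: estimating the best-fit slope b = sum(x.*y)/sum(x.^2)
% (illustrative data, not from the text)
N = 100;
x = (1:N)';
y = 2.5*x + randn(N, 1);        % underlying slope 2.5, plus unit-variance noise
b = sum(x .* y) / sum(x .^ 2);  % Equation (6.4)
disp(b)                         % close to 2.5
```

Because the noise has zero mean, the estimate converges on the true slope as N grows.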
If we limit ourselves to one-way relative delays in the lag parameter (as in most practical problems), we have a vector

$$\mathbf{r} = \begin{pmatrix} R(0)\\ R(1)\\ \vdots\\ R(N-1) \end{pmatrix} \qquad (6.7)$$

Note that the term correlation is used, and not covariance. These terms are used differently in different fields. In signal processing, correlation is normally used as we have defined it here. However, it is often necessary to remove the mean of the sample block before the correlation calculation. Sometimes, the correlation is normalized by a constant factor, as will be described shortly. This is simply a constant scaling factor, and does not affect the shape of the resulting correlation plot.

To illustrate the calculation of autocorrelation, suppose we have an N = 4 sample sequence. For k = 0, we compute the sum of products of corresponding points x(0)x(0) + x(1)x(1) + x(2)x(2) + x(3)x(3), with the two copies of the sequence aligned as shown below:

    x(0) x(1) x(2) x(3)
    x(0) x(1) x(2) x(3)

For k = +1, one copy is shifted by one sample, and we compute the sum of products of the overlapping terms, x(1)x(0) + x(2)x(1) + x(3)x(2):

    x(0) x(1) x(2) x(3)
         x(0) x(1) x(2) x(3)

For k = −1, the shift is in the opposite direction, and we compute a similar sum of products of aligned terms, x(0)x(1) + x(1)x(2) + x(2)x(3):

         x(0) x(1) x(2) x(3)
    x(0) x(1) x(2) x(3)

and so forth, for k = ±2, ±3, . . . . Extrapolating the above diagrams, it may be seen that the lag may be positive or negative, and may in theory range over

$$k = -N+1,\ -N+2,\ \ldots,\ -1,\ 0,\ 1,\ \ldots,\ N-2,\ N-1.$$

The continuous-time equivalent of autocorrelation is the integral

$$R_{xx}(\tau) = \frac{1}{T}\int_{T} x(t)\,x(t-\tau)\,dt, \qquad (6.8)$$

where τ represents the signal lag. The integration is taken over some suitable interval T; in the case of periodic signals, it would be over one (or preferably, several) cycles of the waveform.

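To make Equation (6.5) concrete, here is a small worked example with a hypothetical four-point sequence x = {1, 2, 3, 4} (values chosen for illustration, not taken from the text):

$$R_{xx}(0) = \tfrac{1}{4}\left(1^2 + 2^2 + 3^2 + 4^2\right) = \tfrac{30}{4} = 7.5$$

$$R_{xx}(1) = \tfrac{1}{4}\bigl(x(1)x(0) + x(2)x(1) + x(3)x(2)\bigr) = \tfrac{2 + 6 + 12}{4} = 5$$

Repeating the computation for k = −1 pairs the same samples in the opposite order and therefore gives the same value, R_xx(−1) = 5.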
Autocorrelation is a measure of the similarity of a signal with a delayed version of the *same* signal. Cross-correlation is a measure of the similarity of a signal with a delayed version of *another* signal. Discrete cross-correlation is defined as

$$R_{xy}(k) = \frac{1}{N}\sum_{n=0}^{N-1} x(n)\,y(n-k), \qquad (6.9)$$

where, again, k is the positive or negative lag. The normalized correlation in this case is:

$$\rho_{xy}(k) = \frac{R_{xy}(k)}{\sqrt{R_{xx}(0)\,R_{yy}(0)}}. \qquad (6.10)$$

The cross-correlation is computed in exactly the same way as autocorrelation, using a different second signal. For a simple N = 4 point example, at a lag of k = 0, we have x(0)y(0) + x(1)y(1) + x(2)y(2) + x(3)y(3), as shown:

    x(0) x(1) x(2) x(3)
    y(0) y(1) y(2) y(3)

At k = ±1, we have:

    x(0) x(1) x(2) x(3)
         y(0) y(1) y(2) y(3)

         x(0) x(1) x(2) x(3)
    y(0) y(1) y(2) y(3)

The process may be continued for k = ±2, ±3, . . . . Multiplying out and comparing the terms above, it may be seen that autocorrelation is symmetrical, whereas in general cross-correlation is not.

The continuous-time equivalent of cross-correlation is the integral

$$R_{xy}(\tau) = \frac{1}{T}\int_{T} x(t)\,y(t-\tau)\,dt, \qquad (6.11)$$

where τ represents the signal lag. The integration is taken over some suitable interval; in the case of periodic signals, over one cycle of the waveform (or, preferably, many times the period).

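The symmetry claim is easy to confirm numerically. The following minimal sketch computes the unnormalized correlations by convolving one sequence with a time-reversed copy of the other; the data values are illustrative only.

```matlab
% sketch: autocorrelation is symmetric in the lag; cross-correlation is not
x = [1 2 3 4]';
y = [6 7 8 9]';
rxx = conv(x, flipud(x));   % unnormalized autocorrelation
rxy = conv(x, flipud(y));   % unnormalized cross-correlation
disp(rxx')                  % 4 11 20 30 20 11 4  (palindromic)
disp(rxy')                  % 9 26 50 80 65 46 24 (not symmetric)
```

The autocorrelation reads the same forward and backward about the zero-lag term, while the cross-correlation does not.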
The cross-correlation (without normalization) between two equal-length vectors x and y may be computed as follows:

```matlab
% illustrating one- and two-sided correlation
% set up data sequences
N = 4;
x = [1:4]';
y = [6:9]';

% two-sided correlation
ccr2 = zeros(2*N - 1, 1);
for lag = -N + 1 : N - 1
    cc = 0;
    for idx = 1 : N
        lagidx = idx - lag;
        if ((lagidx >= 1) && (lagidx <= N))
            cc = cc + x(idx) * y(lagidx);
        end
    end
    ccr2(lag + N) = cc;
end
disp(ccr2)

% one-sided correlation
ccr1 = zeros(N, 1);
for lag = 0 : N - 1
    cc = 0;
    for idx = lag + 1 : N
        lagidx = idx - lag;
        cc = cc + x(idx) * y(lagidx);
    end
    ccr1(lag + 1) = cc;
end
disp(ccr1)
```

Note that we need nested loops: the outer loop to iterate over the range of correlation lags we require, and an inner loop to calculate the summation over all terms in each vector. Also, no attempt has been made to optimize these loops: they are coded exactly as per the equations. It would be more efficient in practice to start the indexing of the inner for loop at the very first nonzero product term, rather than utilize the if test within the for loop as shown for the two-sided correlation.

The difference in computation time (and/or processor loading) may be negligible for small examples such as this, but for very large signal vectors, such attention to efficiency pays off in terms of reduced execution time.

This slide-and-sum operation should remind us of convolution, which was introduced in Section 5.9. The key difference is that convolution reverses one of the sequences at the start. Since MATLAB's conv() function (implicitly) performs this reversal, and correlation does not require it, we can cancel the reversal by time-reversing one of the sequences ourselves before calling conv(), and thus obtain our correlation.
This is illustrated in the following, where we use the flip upside-down function flipud() to time-reverse the vector:

```matlab
% using convolution to calculate correlation
x = [1:4]';
y = [6:9]';
cc = conv(x, flipud(y));
disp(cc)
```

Note that we need to take particular care to use flipud() or fliplr() (flip left-right) as appropriate here, depending upon whether the input vectors are column vectors or row vectors.

FIGURE 6.2 Visualizing correlation as applied to waveforms. The top waveform is fixed, the middle is the moving waveform, and the bottom panel shows the resulting correlation of the waveforms. In panel a, we see the initial situation, where there is no overlap. In panel b, as the second waveform is moved further on, the correlation is increasing.

FIGURE 6.3 In panel a, the correlation has reached a peak. In panel b, the correlation is just starting to decline.

### 6.3.2 Extending Correlation to Signals

The calculation examples in the preceding section are for very short vectors.
When considering signal processing applications, the vectors are generally quite long (perhaps thousands of samples), and so it helps to visualize matters not as discrete points, but as waveforms. Now that we have the calculation tools at our disposal, we can extend our understanding to longer vectors computed automatically.

Figures 6.2 through 6.4 show how the correlation product is computed for sinusoidal vectors as signals. One important practical aspect is the fact that the data window is finite. This gives rise to a decreasing correlation function, as illustrated.

FIGURE 6.4 In panel a, the correlation is declining further. In panel b, the correlation is almost back to zero.

### 6.3.3 Autocorrelation for Noise Removal

Suppose we have a sine wave, and wish to compute the autocorrelation. In MATLAB, this could be done using the code shown previously for correlation, but with a sine wave substituted for the input sample vector:

```matlab
t = 0 : pi/100 : 2*4*pi;
x = sin(t);
```

Note that the result has the form of a cosine waveform, tapered by a triangular envelope (as shown in Fig. 6.4). This is due to the fact that, in practice, we only have a finite data record to work with.

Mathematically, we can examine this result as follows. Using the definition of continuous autocorrelation,

$$R_{xx}(\tau) = \frac{1}{T}\int_{T} x(t)\,x(t-\tau)\,dt. \qquad (6.12)$$

If the sine wave is

$$x(t) = A\sin(\omega t + \varphi), \qquad (6.13)$$

then

$$x(t-\tau) = A\sin\bigl(\omega(t-\tau) + \varphi\bigr). \qquad (6.14)$$

So,

$$R_{xx}(\tau) = \frac{1}{T}\int_{0}^{T} A^2 \sin(\omega t + \varphi)\,\sin\bigl(\omega(t-\tau) + \varphi\bigr)\,dt. \qquad (6.15)$$

Noting that the integration interval T is one period of the waveform, and that ωT = 2π, this simplifies to:

$$R_{xx}(\tau) = \frac{A^2}{2}\cos\omega\tau. \qquad (6.16)$$

This result can be generalized to other periodic signals. The autocorrelation will always retain the same period as the original waveform. What about additive...</p></li></ul>
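The closed-form result R_xx(τ) = (A²/2) cos ωτ of Equation (6.16) can be checked numerically. The sketch below uses a circular (periodic) correlation over exactly one period, which avoids the triangular taper of the finite-window estimate; the sample count and the test lag are arbitrary choices for illustration.

```matlab
% sketch: autocorrelation of A*sin(w*t) approaches (A^2/2)*cos(w*tau)
N = 1000;                        % samples per period (arbitrary)
t = 2*pi*(0:N-1)'/N;             % one full period, uniformly sampled
x = sin(t);                      % A = 1, omega = 1
R0 = mean(x .* x);               % zero-lag value
disp(R0)                         % approximately A^2/2 = 0.5
k = N/4;                         % lag of a quarter period (tau = pi/2)
Rk = mean(x .* circshift(x, k)); % circular correlation at lag k
disp(Rk)                         % approximately (1/2)*cos(pi/2) = 0
```

Since the wrap-around shift keeps the signal periodic, the estimate matches the theoretical cosine at every lag, with no triangular envelope.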
