Exam #3 Review

Chapter 4: Expectation and Moments
  Mean, Moments, Variance, Expected Value Operator

Chapter 5: Random Processes
  Random Processes, Wide-Sense Stationary

Chapter 8: Random Sequences
  8.1 Basic Concepts 442 (Infinite-length Bernoulli Trials 447, Continuity of Probability Measure 452, Statistical Specification of a Random Sequence 454)
  8.2 Basic Principles of Discrete-Time Linear Systems 471
  8.3 Random Sequences and Linear Systems 477
  8.4 WSS Random Sequences 486 (Power Spectral Density 489, Interpretation of the psd 490, Synthesis of Random Sequences and Discrete-Time Simulation 493, Decimation 496, Interpolation 497)
  8.5 Markov Random Sequences 500
  8.6 Vector Random Sequences and State Equations 511
  8.7 Convergence of Random Sequences 513
  8.8 Laws of Large Numbers 521

Chapter 9: Random Processes
  9.1 Basic Definitions 544
  9.2 Some Important Random Processes 548
  9.3 Continuous-Time Linear Systems with Random Inputs 572 (White Noise 577)
  9.4 Some Useful Classifications of Random Processes 578 (Stationarity 579)
  9.5 Wide-Sense Stationary Processes and LSI Systems 581 (Wide-Sense Stationary Case 582, Power Spectral Density 584, An Interpretation of the Power Spectral Density 586, More on White Noise 590, Stationary Processes and Differential Equations 596)
  9.6 Periodic and Cyclostationary Processes 600
  9.7 Vector Processes and State Equations 606 (State Equations 608)
Homework problems
Previous homework problem solutions as examples – Dr. Severance’s Skill Examples
Skills #6 … 21.3, 24.2, 24.6, 25.1
Skills #7 … 27.4, 28.2, 28.3, 30.1, 31.4

This exam is likely to be four problems similar in nature to the 2016 Spring exam:

1. You will be given a random sequence or process. Determine the autocorrelation. Determine the power spectral density. Perform a cross-correlation.
2. Filtering of a random sequence or random process. Determine the input autocorrelation. Determine the output autocorrelation. Determine the input power spectral density. Determine the output power spectral density.
3. Given a power spectral density, determine the random process or sequence mean, 2nd moment (total power), and variance. Determine the power in a frequency band.

And now for a quick chapter review … the important information without the rest!
Auto- and Cross-Correlation Function Basics

The Autocorrelation Function

For a sample function defined by samples in time of a random process, how alike are the different samples?

Define X_1 = X(t_1) and X_2 = X(t_2). The autocorrelation is defined as:

R_{XX}(t_1, t_2) = E[X_1 X_2] = \iint x_1 x_2 \, f(x_1, x_2) \, dx_1 \, dx_2

The above function is valid for all processes, stationary and non-stationary. For WSS processes:

R_{XX}(t_1, t_2) = E[X(t) X(t+\tau)] = R_{XX}(\tau)

If the process is ergodic, the time average is equivalent to the probabilistic expectation, or

\langle x(t) x(t+\tau) \rangle = \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} x(t) \, x(t+\tau) \, dt

and

\langle x(t) x(t+\tau) \rangle = R_{XX}(\tau)

For discrete-time sequences, define x_k = X[k] and x_l = X[l]:

R_{XX}[k, l] = E[X[k] X^*[l]] = \sum_{x_k} \sum_{x_l} x_k x_l^* \, pmf(x_k, x_l; k, l)

For WSS:

R_{XX}[k] = E[X[n+k] X^*[n]]

If the process is ergodic, the sample average is equivalent to the probabilistic expectation, or

R_{XX}[k] = \lim_{N \to \infty} \frac{1}{2N+1} \sum_{n=-N}^{N} X[n+k] \, X^*[n]

As a note for things you've been computing, the "zeroth lag of the autocorrelation" is

R_{XX}(t_1, t_1) = R_{XX}(0) = E[X_1 X_1] = E[X^2] = \int x^2 f(x) \, dx

R_{XX}(0) = \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} x(t)^2 \, dt
Properties of Autocorrelation Functions

1) R_{XX}(0) = E[X^2] = \overline{X^2}
The mean squared value of the random process can be obtained by observing the zeroth lag of the autocorrelation function.

2) R_{XX}(-\tau) = R_{XX}(\tau) \quad \text{or} \quad R_{XX}[-k] = R_{XX}[k]
The autocorrelation function is an even function in time. Only positive (or negative) lags need to be computed for an ergodic WSS random process.

3) |R_{XX}(\tau)| \le R_{XX}(0) \quad \text{or} \quad |R_{XX}[k]| \le R_{XX}[0]
The autocorrelation function is a maximum at 0. For periodic functions, other values may equal the zeroth lag, but never be larger.

4) If X has a DC component, then R_XX has a constant factor:
X(t) = \bar{X} + N(t) \quad \Rightarrow \quad R_{XX}(\tau) = \bar{X}^2 + R_{NN}(\tau)
Note that the mean value can be computed from the autocorrelation function constants!

5) If X has a periodic component, then R_XX will also have a periodic component of the same period. Think of:
X(t) = A \cos(w t + \theta), \quad \theta \sim U[0, 2\pi)
where A and w are known constants and theta is a uniform random variable. Then
R_{XX}(\tau) = E[X(t) X(t+\tau)] = \frac{A^2}{2} \cos(w \tau)

5b) For signals that are the sum of independent random variables, the autocorrelation is the sum of the individual autocorrelation functions:
W(t) = X(t) + Y(t) \quad \Rightarrow \quad R_{WW}(\tau) = R_{XX}(\tau) + R_{YY}(\tau) + 2 \bar{X} \bar{Y}
For non-zero mean functions (let w, x, y be zero mean and W, X, Y have a mean):
R_{WW}(\tau) = R_{ww}(\tau) + \bar{W}^2 = R_{xx}(\tau) + \bar{X}^2 + R_{yy}(\tau) + \bar{Y}^2 + 2 \bar{X}\bar{Y}
R_{ww}(\tau) + \bar{W}^2 = R_{xx}(\tau) + R_{yy}(\tau) + (\bar{X} + \bar{Y})^2
Then we have
R_{ww}(\tau) = R_{xx}(\tau) + R_{yy}(\tau) \quad \text{and} \quad \bar{W} = \bar{X} + \bar{Y}

6) If X is ergodic and zero mean and has no periodic component, then we expect
\lim_{|\tau| \to \infty} R_{XX}(\tau) = 0

7) Autocorrelation functions cannot have an arbitrary shape. One way of specifying the permissible shapes is in terms of the Fourier transform of the autocorrelation function. That is, if
S_{XX}(w) = \int R_{XX}(\tau) \exp(-j w \tau) \, d\tau
then the restriction states that S_{XX}(w) \ge 0 for all w.

Additional concept: X(t) = a + N(t)
R_{XX}(\tau) = a^2 + E[N(t) N(t+\tau)] = a^2 + R_{NN}(\tau)
The Cross-Correlation Function

For two sample functions defined by samples in time of two random processes, how alike are the different samples?

Define X_1 = X(t_1) and Y_2 = Y(t_2). The cross-correlation is defined as:

R_{XY}(t_1, t_2) = E[X_1 Y_2] = \iint x_1 y_2 \, f(x_1, y_2) \, dx_1 \, dy_2
R_{YX}(t_1, t_2) = E[Y_1 X_2] = \iint y_1 x_2 \, f(y_1, x_2) \, dy_1 \, dx_2

The above functions are valid for all processes, jointly stationary and non-stationary. For jointly WSS processes:

R_{XY}(t_1, t_2) = E[X(t) Y(t+\tau)] = R_{XY}(\tau)
R_{YX}(t_1, t_2) = E[Y(t) X(t+\tau)] = R_{YX}(\tau)

Note: the order of the subscripts is important for cross-correlation!

If the processes are jointly ergodic, the time average is equivalent to the probabilistic expectation, or

\langle x(t) y(t+\tau) \rangle = \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} x(t) \, y(t+\tau) \, dt
\langle y(t) x(t+\tau) \rangle = \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} y(t) \, x(t+\tau) \, dt

and

\langle x(t) y(t+\tau) \rangle = R_{XY}(\tau), \quad \langle y(t) x(t+\tau) \rangle = R_{YX}(\tau)
Properties of Cross-Correlation Functions

1) The properties of the zeroth lag have no particular significance and do not represent mean-square values. It is true that the "ordered" cross-correlations must be equal at 0:
R_{XY}(0) = R_{YX}(0)

2) Cross-correlation functions are not generally even functions. However, there is an antisymmetry to the ordered cross-correlations:
R_{XY}(-\tau) = R_{YX}(\tau)
For
R_{XY}(\tau) = \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} x(t) \, y(t+\tau) \, dt
substitute u = t + \tau:
R_{XY}(\tau) = \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} x(u-\tau) \, y(u) \, du = R_{YX}(-\tau)

3) The cross-correlation does not necessarily have its maximum at the zeroth lag. This makes sense if you are correlating a signal with a time-delayed version of itself. The cross-correlation should be a maximum when the lag equals the time delay!
It can be shown, however, that
|R_{XY}(\tau)| \le \sqrt{R_{XX}(0) \, R_{YY}(0)}
As a note, the cross-correlation may not achieve the maximum anywhere …

4) If X and Y are statistically independent, then the ordering is not important:
R_{XY}(\tau) = E[X(t) Y(t+\tau)] = E[X(t)] \, E[Y(t+\tau)] = \bar{X} \bar{Y}
and
R_{XY}(\tau) = \bar{X} \bar{Y} = R_{YX}(\tau)
5) If X is a stationary random process and is differentiable with respect to time, the cross-correlation of the signal and its derivative is given by

R_{X\dot{X}}(\tau) = \frac{d R_{XX}(\tau)}{d\tau}

Defining the derivative as a limit:

\dot{X}(t) = \lim_{\epsilon \to 0} \frac{X(t+\epsilon) - X(t)}{\epsilon}

and the cross-correlation

R_{X\dot{X}}(\tau) = E\left[ X(t) \lim_{\epsilon \to 0} \frac{X(t+\tau+\epsilon) - X(t+\tau)}{\epsilon} \right]

R_{X\dot{X}}(\tau) = \lim_{\epsilon \to 0} \frac{E[X(t) X(t+\tau+\epsilon)] - E[X(t) X(t+\tau)]}{\epsilon}

R_{X\dot{X}}(\tau) = \lim_{\epsilon \to 0} \frac{R_{XX}(\tau+\epsilon) - R_{XX}(\tau)}{\epsilon} = \frac{d R_{XX}(\tau)}{d\tau}

Similarly,

R_{\dot{X}\dot{X}}(\tau) = -\frac{d^2 R_{XX}(\tau)}{d\tau^2}
Measurement of the Autocorrelation Function (Ergodic, WSS)

We love to use time averages for everything. For wide-sense stationary, ergodic random processes, time averages are equivalent to statistical or probability-based values:

\langle x(t) x(t+\tau) \rangle = \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} x(t) \, x(t+\tau) \, dt

Using this fact, how can we use short-term time averages to generate auto- or cross-correlation functions? An estimate of the autocorrelation is defined as:

\hat{R}_{XX}(\tau) = \frac{1}{T - \tau} \int_{0}^{T - \tau} x(t) \, x(t+\tau) \, dt

Note that the time average is performed across as much of the signal as is available after the time shift by tau. For tau based on the available time step (\tau = k \Delta t), with N \Delta t equating to the available time interval, we have:

\hat{R}_{XX}(k \Delta t) = \frac{1}{N - k + 1} \sum_{i=0}^{N-k} x(i \Delta t) \, x(i \Delta t + k \Delta t)

\hat{R}_{XX}(k \Delta t) = \hat{R}_{XX}[k] = \frac{1}{N - k + 1} \sum_{i=0}^{N-k} x[i] \, x[i+k]

In computing this autocorrelation, the initial weighting term approaches 1 when k = N. At this point the entire summation consists of one point and is therefore a poor estimate of the autocorrelation. For useful results, k \ll N.
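A short numerical sketch of this estimator (assuming a real, ergodic, WSS sample record; the function and variable names are illustrative, not from the text):

import numpy as np

def autocorr_estimate(x, max_lag):
    """Short-record estimate R_hat[k] = (1/(N-k)) * sum_i x[i] x[i+k], for k << N."""
    N = len(x)
    return np.array([np.dot(x[:N - k], x[k:]) / (N - k) for k in range(max_lag + 1)])

# Example: white noise with variance 4 should give R_hat[0] ~ 4 and R_hat[k != 0] ~ 0.
rng = np.random.default_rng(0)
x = rng.normal(0.0, 2.0, 100_000)
print(autocorr_estimate(x, 5))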
Relation of Spectral Density to the Autocorrelation Function

For WSS random processes, the autocorrelation function is time based and, for ergodic processes, describes all sample functions in the ensemble! In these cases the Wiener-Khinchine relation is valid, which allows us to perform the following:

S_{XX}(w) = \mathcal{F}\{R_{XX}(\tau)\} = \int E[X(t) X(t+\tau)] \exp(-j w \tau) \, d\tau

For an ergodic process, we can use time-based processing to arrive at an equivalent result …

\langle x(t) x(t+\tau) \rangle = \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} x(t) \, x(t+\tau) \, dt

For X_T(w), the Fourier transform of the 2T-long segment of x(t):

\mathcal{F}\{\langle x(t) x(t+\tau) \rangle\} = \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} x(t) \exp(j w t) \, X_T(w) \, dt = \lim_{T \to \infty} \frac{1}{2T} X_T(w) X_T^*(w) = \lim_{T \to \infty} \frac{|X_T(w)|^2}{2T}

We can define a power spectral density for the ensemble as:

S_{XX}(w) = \mathcal{F}\{R_{XX}(\tau)\} = \int R_{XX}(\tau) \exp(-j w \tau) \, d\tau

Based on this definition, we also have

R_{XX}(\tau) = \mathcal{F}^{-1}\{S_{XX}(w)\} = \frac{1}{2\pi} \int S_{XX}(w) \exp(j w \tau) \, dw

Properties of the Power Spectral Density

The power spectral density as a function is always real, positive, and an even function in w.

As an even function, the PSD may be expected to have a polynomial form in w^2 (a ratio of even polynomials in w).

Finite property in frequency: the power spectral density must also approach zero as w approaches infinity.
Relation of Spectral Density to the Autocorrelation Function

The power spectral density as a function is always real, positive, and an even function in w (or f).

You can convert between the domains using:

The Fourier Transform in w:
S_{XX}(w) = \int R_{XX}(\tau) \exp(-j w \tau) \, d\tau
R_{XX}(\tau) = \frac{1}{2\pi} \int S_{XX}(w) \exp(j w \tau) \, dw

The Fourier Transform in f:
S_{XX}(f) = \int R_{XX}(\tau) \exp(-j 2\pi f \tau) \, d\tau
R_{XX}(\tau) = \int S_{XX}(f) \exp(j 2\pi f \tau) \, df

The 2-sided Laplace Transform:
S_{XX}(s) = \int R_{XX}(\tau) \exp(-s \tau) \, d\tau
R_{XX}(\tau) = \frac{1}{2\pi j} \int_{-j\infty}^{j\infty} S_{XX}(s) \exp(s \tau) \, ds

Deriving the Mean-Square Value from the Power Spectral Density

The mean squared value of a random process is equal to the 0th lag of the autocorrelation:

E[X^2] = R_{XX}(0) = \frac{1}{2\pi} \int S_{XX}(w) \exp(j w \cdot 0) \, dw = \frac{1}{2\pi} \int S_{XX}(w) \, dw

E[X^2] = R_{XX}(0) = \int S_{XX}(f) \exp(j 2\pi f \cdot 0) \, df = \int S_{XX}(f) \, df

Therefore, to find the second moment, integrate the PSD over all frequencies.
The Cross-Spectral Density

The Fourier Transform in w:

S_{XY}(w) = \int R_{XY}(\tau) \exp(-j w \tau) \, d\tau \quad \text{and} \quad S_{YX}(w) = \int R_{YX}(\tau) \exp(-j w \tau) \, d\tau

R_{XY}(\tau) = \frac{1}{2\pi} \int S_{XY}(w) \exp(j w \tau) \, dw \quad \text{and} \quad R_{YX}(\tau) = \frac{1}{2\pi} \int S_{YX}(w) \exp(j w \tau) \, dw

Properties of the functions:

S_{XY}(w) = \mathrm{conj}(S_{YX}(w))

Since the cross-correlation is real, the real portion of the spectrum is even and the imaginary portion of the spectrum is odd.
Generic Example of a Discrete Spectral Density

X(t) = A + B \sin(2\pi f_1 t + \theta_1) + C \cos(2\pi f_2 t + \theta_2)

where the phase angles are uniformly distributed R.V.s from 0 to 2π.

R_{XX}(\tau) = E[X(t) X(t+\tau)]
= E[(A + B \sin(2\pi f_1 t + \theta_1) + C \cos(2\pi f_2 t + \theta_2)) (A + B \sin(2\pi f_1 (t+\tau) + \theta_1) + C \cos(2\pi f_2 (t+\tau) + \theta_2))]

Expanding, every cross term contains \theta_1, \theta_2, or both and averages to zero, leaving

R_{XX}(\tau) = A^2 + B^2 E[\sin(2\pi f_1 t + \theta_1)\sin(2\pi f_1 (t+\tau) + \theta_1)] + C^2 E[\cos(2\pi f_2 t + \theta_2)\cos(2\pi f_2 (t+\tau) + \theta_2)]

With practice, we can see that the above math becomes

R_{XX}(\tau) = A^2 + \frac{B^2}{2} E[\cos(2\pi f_1 \tau) - \cos(2\pi f_1 (2t+\tau) + 2\theta_1)] + \frac{C^2}{2} E[\cos(2\pi f_2 \tau) + \cos(2\pi f_2 (2t+\tau) + 2\theta_2)]

which leads to

R_{XX}(\tau) = A^2 + \frac{B^2}{2} \cos(2\pi f_1 \tau) + \frac{C^2}{2} \cos(2\pi f_2 \tau)

Forming the PSD by taking the Fourier transform:

S_{XX}(f) = A^2 \delta(f) + \frac{B^2}{4} [\delta(f - f_1) + \delta(f + f_1)] + \frac{C^2}{4} [\delta(f - f_2) + \delta(f + f_2)]
We also know from before

\overline{X^2} = \frac{1}{2\pi} \int S_{XX}(w) \, dw = \int S_{XX}(f) \, df

Therefore, the 2nd moment can be immediately computed as

\overline{X^2} = \int \left( A^2 \delta(f) + \frac{B^2}{4} [\delta(f - f_1) + \delta(f + f_1)] + \frac{C^2}{4} [\delta(f - f_2) + \delta(f + f_2)] \right) df

\overline{X^2} = A^2 + 2 \cdot \frac{B^2}{4} + 2 \cdot \frac{C^2}{4} = A^2 + \frac{B^2}{2} + \frac{C^2}{2}

We can also see that the mean value becomes

\bar{X} = E[A + B \sin(2\pi f_1 t + \theta_1) + C \cos(2\pi f_2 t + \theta_2)] = A

So, the variance is

\sigma^2 = \overline{X^2} - \bar{X}^2 = \frac{B^2}{2} + \frac{C^2}{2}
A is a “DC” term whereas B and C are “AC” terms as would be expected from X(t).
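As a quick check, a time average over one long realization reproduces this autocorrelation (a minimal sketch; the amplitudes and frequencies below are arbitrary test values):

import numpy as np

rng = np.random.default_rng(1)
A, B, C, f1, f2 = 1.0, 2.0, 3.0, 5.0, 8.0
th1, th2 = rng.uniform(0, 2*np.pi, 2)

fs = 1000.0                       # sample rate
t = np.arange(0, 200.0, 1/fs)     # long record for a good time average
x = A + B*np.sin(2*np.pi*f1*t + th1) + C*np.cos(2*np.pi*f2*t + th2)

for tau in (0.0, 0.05, 0.1):
    k = int(round(tau*fs))
    R_time = np.mean(x[:len(x)-k] * x[k:])
    R_theory = A**2 + (B**2/2)*np.cos(2*np.pi*f1*tau) + (C**2/2)*np.cos(2*np.pi*f2*tau)
    print(tau, R_time, R_theory)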
Chapter 8 Random Sequences
8.1 Basic Concepts
Random Stochastic Sequence
Definition 8.1-1. Let (\Omega, \mathcal{F}, P) be a probability space. Let X[n, \zeta] be a mapping of the sample space \Omega into a space of complex-valued sequences on some index set Z. If, for each fixed integer n \in Z, X[n, \zeta] is a random variable, then X[n, \zeta] is a random (stochastic) sequence. The index set Z is all integers, -\infty < n < \infty, padded with zeros if necessary.
Example sets of random sequences.
Figure 8.1-1 Illustration of the concept of random sequence X(n,ζ), where the ζ domain (i.e., the sample space Ω) consists of just ten values. (Samples connected only for plot.)
The sequences can be thought of as “realizations” of the random sequence or sample sequences.
The absolute sequence is the realization of individual random variables in time.
Once the realization exists, it becomes statistical data related to one instantiation of the random sequence.
Prior to collecting a realization, the Random Sequence can be defined probabilistically.
Statistical Specification of a Random Sequence

In general, we are looking at developing properties for random processes where:

(1) The statistical specification for the random sequence matches the probabilistic (or axiomatic) specification for the random variables used to generate the sequence.

(2) We will be interested in stationary sequences, where the statistics do not change in time. We will be defining a wide-sense stationary random process definition where only the mean and the variance need to be constant in time.

A random sequence X[n] is said to be statistically specified by knowing its Nth-order CDFs for all integers N >= 1. That states that we know …

F_X(x_1, x_2, \ldots, x_N; n_1, n_2, \ldots, n_N) = P[X[n_1] \le x_1, X[n_2] \le x_2, \ldots, X[n_N] \le x_N]

If we specify all these finite-order joint distributions at all finite times, then using continuity of the probability measure, we can calculate the probabilities of events involving an infinite number of random variables via limiting operations involving the finite-order CDFs.

Consistency can be guaranteed by construction … constructing models of stochastic sequences and processes.

Moments play an important role and, for ergodic sequences, they can be estimated from a single sample sequence of the infinite number that may be possible.

Therefore,

\mu_X[n] = E[X[n]] = \int x \, f_X(x; n) \, dx

and for discrete-valued random sequences

\mu_X[n] = E[X[n]] = \sum_k x_k \, P[X[n] = x_k]
The Autocorrelation Function

The expected value of a random sequence evaluated at offset times can be determined:

R_{XX}[k, l] = E[X[k] X^*[l]] = \iint x_k x_l^* \, f_X(x_k, x_l) \, dx_k \, dx_l

For sequences of finite average power, E[|X[k]|^2] < \infty, the correlation function will exist. We can also describe the "centered" autocorrelation sequence as the autocovariance:

K_{XX}[k, l] = E[(X[k] - \mu_X[k])(X[l] - \mu_X[l])^*]

Note that

K_{XX}[k, l] = E[X[k] X^*[l] - \mu_X[k] X^*[l] - X[k] \mu_X^*[l] + \mu_X[k] \mu_X^*[l]]
K_{XX}[k, l] = R_{XX}[k, l] - \mu_X[k] \mu_X^*[l]

Basic properties of the functions:

Hermitian symmetry: R_{XX}[k, l] = R_{XX}^*[l, k]
Hermitian symmetry: K_{XX}[k, l] = K_{XX}^*[l, k]
Deriving other functions: R_{XX}[k, k] = E[|X[k]|^2]
K_{XX}[k, k] = \sigma_X^2[k]
Example 8.1-1 & 8.1-10: functions consisting of an R.V. and a deterministic sequence

Let X[n, \zeta] = X(\zeta) \, f[n]

where X is a random variable and f is a deterministic function of sample time n.

Note then,

\mu_X[n] = E[X[n]] = E[X] \, f[n] = \mu_X \, f[n]

The autocorrelation function becomes

R_{XX}[k, l] = E[X[k] X^*[l]] = E[X f[k] \, (X f[l])^*] = f[k] f^*[l] \, E[X X^*]

If X is a real R.V.:

R_{XX}[k, l] = f[k] f^*[l] \, E[X^2]

Similarly,

K_{XX}[k, l] = f[k] f^*[l] \, E[(X - \mu_X)(X - \mu_X)^*] = f[k] f^*[l] \, \sigma_X^2
Example 8.1-11: Waiting times in a line, creating a random sequence

Consider the random sequence of IID "exponential random variable" waiting times in a line. Assume that each of the waiting times per individual, t(k), is based on the exponential:

f(t; n=1) = \begin{cases} 0, & t < 0 \\ \lambda \exp(-\lambda t), & t \ge 0 \end{cases}

The waiting time is then described as

T(n) = \sum_{k=1}^{n} t(k)

where T(1) = t(1), T(2) = t(1) + t(2), … , T(n) = t(1) + t(2) + \cdots + t(n).

T(n) is the random sequence!

This calls for a summation of random variables, where the new pdf for each new sum is the convolution of the exponential pdf with the previous pdf, or

f(t; 2) = f(t) * f(t)
f(t; 3) = f(t; 2) * f(t) = f(t) * f(t) * f(t)

Exam #1 derived the first convolution:

f(t; 2) = \int_0^t \lambda \exp(-\lambda \tau) \, \lambda \exp(-\lambda (t - \tau)) \, d\tau = \lambda^2 \exp(-\lambda t) \int_0^t d\tau = \lambda^2 t \exp(-\lambda t)

Repeating:

f(t; 3) = \int_0^t \lambda^2 \tau \exp(-\lambda \tau) \, \lambda \exp(-\lambda (t - \tau)) \, d\tau = \frac{\lambda^3 t^2}{2} \exp(-\lambda t)

If you see the pattern … we can jump to the nth summation, where

f(t; n) = \frac{\lambda^n t^{n-1}}{(n-1)!} \exp(-\lambda t), \quad t \ge 0

This is called the Erlang probability density function …

It is used to determine waiting times in lines and software queues … how long until your internet request can be processed!

The mean and variance of the Erlang pdf used to define the random sequence T(n) are

\mu_T(n) = \frac{n}{\lambda} \quad \text{and} \quad \sigma_T^2(n) = \frac{n}{\lambda^2}

Note that for every element of the sequence both the mean and variance are dependent upon the sample number (definitely not stationary or even WSS).
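A Monte Carlo sketch confirming the Erlang mean and variance of T(n) (lam and n below are arbitrary choices):

import numpy as np

rng = np.random.default_rng(2)
lam, n, trials = 2.0, 5, 200_000
# T(n) = sum of n iid Exponential(lam) waiting times, drawn per trial.
T = rng.exponential(1/lam, size=(trials, n)).sum(axis=1)
print(T.mean(), n/lam)        # ~ 2.5
print(T.var(),  n/lam**2)     # ~ 1.25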
A random sequence based on the Gaussian

Assume iid Gaussian R.V.s with zero mean and a variance, letting W[n] \sim N(0, \sigma_W^2).

Then \mu_W = E[W[n]] = 0 and E[W[n]^2] = \sigma_W^2.

What about the autocorrelation?

R_{WW}[k, l] = E[W[k] W^*[l]] = E[W[k]^2] \, \delta[k - l] = \sigma_W^2 \, \delta[k - l]

or R_{WW}[k, l] = \sigma_W^2 \, \delta[k - l]

or, recognizing a WSS random sequence, R_{WW}[k] = \sigma_W^2 \, \delta[k]
A random sequence based on the sum of two Gaussians

Assume iid Gaussian R.V.s with zero mean and a variance. For

X[n] = W[n] + W[n-1]

Then, E[X[n]] = 2\mu_W = 0

and

E[X[n]^2] = E[(W[n] + W[n-1])^2] = E[W[n]^2] + 2 E[W[n] W[n-1]] + E[W[n-1]^2] = \sigma_W^2 + 0 + \sigma_W^2 = 2\sigma_W^2

Also,

R_{XX}[k, l] = E[(W[k] + W[k-1])(W[l] + W[l-1])^*]
= E[W[k] W^*[l]] + E[W[k] W^*[l-1]] + E[W[k-1] W^*[l]] + E[W[k-1] W^*[l-1]]

But then

R_{XX}[k, l] = \sigma_W^2 \delta[k-l] + \sigma_W^2 \delta[k-l+1] + \sigma_W^2 \delta[k-l-1] + \sigma_W^2 \delta[k-l]
R_{XX}[k, l] = \sigma_W^2 \left( 2\delta[k-l] + \delta[k-l+1] + \delta[k-l-1] \right)

and recognizing WSS

R_{XX}[k] = \sigma_W^2 \left( 2\delta[k] + \delta[k+1] + \delta[k-1] \right)
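A quick simulation check of R_XX[0] = 2 sigma_W^2 and R_XX[±1] = sigma_W^2 (a sketch; sigma is an arbitrary choice):

import numpy as np

rng = np.random.default_rng(3)
sigma = 1.5
W = rng.normal(0, sigma, 1_000_000)
X = W[1:] + W[:-1]                 # X[n] = W[n] + W[n-1]

def R(k):
    return np.mean(X[:len(X)-k] * X[k:])

print(R(0), 2*sigma**2)            # ~ 4.5
print(R(1), sigma**2)              # ~ 2.25
print(R(2), 0.0)                   # ~ 0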
Stationary vs. Nonstationary Random Sequences and Processes
The probability density functions for random variables in time have been discussed, but what is the dependence of the density function on the value of time, t or n, when it is taken?
If all marginal and joint density functions of a process do not depend upon the choice of the time origin, the process is said to be stationary (that is it doesn’t change with time). All the mean values and moments are constants and not functions of time!
For nonstationary processes, the probability density functions change based on the time origin or in time. For these processes, the mean values and moments are functions of time.
In general, we always attempt to deal with stationary processes … or approximate stationary by assuming that the process probability distribution, means and moments do not change significantly during the period of interest.
The requirement that all marginal and joint density functions be independent of the choice of time origin is frequently more stringent (tighter) than is necessary for system analysis. A more relaxed requirement is called stationary in the wide sense: the mean value of any random variable is independent of the choice of time, t, and the correlation of two random variables depends only upon the time difference between them. That is,

E[X(t)] = \bar{X} = \mu_X

and

E[X(t_1) X(t_2)] = E[X(t) X(t+\tau)] = R_{XX}(\tau) \quad \text{for } \tau = t_2 - t_1

You will typically deal with wide-sense stationary signals.
Stationary Systems Properties

Mean Value

\mu_X[n] = E[X[n]] = \int x \, f_X(x; n) \, dx = \int x \, f_X(x; 0) \, dx = \mu_X[0] = \mu_X

The mean value is not dependent upon the sample in time.

Autocorrelation

R_{XX}[k, l] = E[X[k] X^*[l]] = \iint x_k x_l^* \, f_X(x_k, x_l; k, l) \, dx_k \, dx_l
= \iint x_{k+n} x_{l+n}^* \, f_X(x_{k+n}, x_{l+n}; k+n, l+n) \, dx_{k+n} \, dx_{l+n} = E[X[k+n] X^*[l+n]] = R_{XX}[k+n, l+n]

And in particular for WSS: R_{XX}[k, l] = R_{XX}[k-l, 0] = R_{XX}[k-l]

Autocovariance

K_{XX}[k, l] = E[(X[k] - \mu_X[k])(X[l] - \mu_X[l])^*] = E[(X[k+n] - \mu_X[k+n])(X[l+n] - \mu_X[l+n])^*] = K_{XX}[k+n, l+n]

And in particular for WSS: K_{XX}[k, l] = K_{XX}[k-l, 0] = K_{XX}[k-l]

The autocorrelation and autocovariance are functions of the time difference and not the absolute time. Also,

K_{XX}[k, l] = E[(X[k] - \mu_X)(X[l] - \mu_X)^*]
= E[X[k] X^*[l]] - \mu_X E[X^*[l]] - E[X[k]] \mu_X^* + \mu_X \mu_X^*
= R_{XX}[k, l] - \mu_X \mu_X^* = R_{XX}[k, l] - |\mu_X|^2

And for WSS: K_{XX}[k-l] = R_{XX}[k-l] - |\mu_X|^2
8.2 Basic Principles of Discrete-Time Linear Systems

We get to do convolutions some more … in the discrete time domain!

Note: if you are in ECE 3710, this should be normal; otherwise, ECE 3100 probably talked about linear systems being a convolution.

For a "causal" discrete finite impulse response linear system we will have

y[n] = \sum_{k=0}^{\infty} h[k] \, x[n-k] = \sum_{m=-\infty}^{n} h[n-m] \, x[m]

For a "non-causal" discrete linear system we will have

y[n] = \sum_{k=-\infty}^{\infty} h[k] \, x[n-k] = \sum_{m=-\infty}^{\infty} h[n-m] \, x[m]

For a linear system, superposition applies:

L\{a_1 x_1[n] + a_2 x_2[n]\} = \sum_k h[k] \left( a_1 x_1[n-k] + a_2 x_2[n-k] \right)
= a_1 \sum_k h[k] x_1[n-k] + a_2 \sum_k h[k] x_2[n-k] = a_1 y_1[n] + a_2 y_2[n]

For a filter with poles and zeros … the filter may be autoregressive as well!

y[n] = \sum_{k=0}^{\infty} b[k] \, x[n-k] - \sum_{k=1}^{\infty} a[k] \, y[n-k]

This is called a linear constant coefficient difference equation (LCCDE) in the text. (A minimal numerical sketch follows.)
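A direct-form evaluation of the LCCDE above (a sketch, taking a[0] = 1; equivalent to scipy.signal.lfilter(b, [1, *a], x)):

import numpy as np

def lccde(b, a, x):
    """y[n] = sum_{k>=0} b[k] x[n-k] - sum_{k>=1} a[k] y[n-k]."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        acc = sum(b[k]*x[n-k] for k in range(len(b)) if n - k >= 0)
        acc -= sum(a[k-1]*y[n-k] for k in range(1, len(a)+1) if n - k >= 0)
        y[n] = acc
    return y

# Example: y[n] = x[n] + 0.9 y[n-1], a one-pole autoregressive filter.
print(lccde([1.0], [-0.9], np.array([1.0, 0, 0, 0])))   # impulse response 0.9^n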
Linear time invariant and linear shift invariant

y[n-k] = L\{x[n-k]\} \quad \text{for all } n

A time offset in the input will not change the response at the output! This is key to the convolution theory!

System Impulse Response

The response to a unit impulse is the impulse response:

y[n] = L\{\delta[n]\} = h[n]

8.3 Random Sequences and Linear Systems

For a "non-causal" discrete linear system we will have (FIR filter):

E[y[n]] = E\left[ \sum_k h[k] \, x[n-k] \right] = \sum_k h[k] \, E[x[n-k]] = \sum_k h[k] \, \mu_X[n-k]

If WSS:

\mu_Y = E[y[n]] = \mu_X \sum_k h[k]

The mean times the coherent gain of the filter.

For a filter with poles and zeros … the filter may be autoregressive as well!

E[y[n]] = \sum_{k=0}^{\infty} b[k] \, E[x[n-k]] - \sum_{k=1}^{\infty} a[k] \, E[y[n-k]]

If WSS:

\mu_Y = \mu_X \sum_{k=0}^{\infty} b[k] - \mu_Y \sum_{k=1}^{\infty} a[k]

\mu_Y \left( 1 + \sum_{k=1}^{\infty} a[k] \right) = \mu_X \sum_{k=0}^{\infty} b[k]
\quad\Rightarrow\quad
\mu_Y = \mu_X \, \frac{\sum_{k=0}^{\infty} b[k]}{1 + \sum_{k=1}^{\infty} a[k]}
Auto- and Cross-Correlation

For a "causal" discrete finite impulse response linear system we will have (impulse response based):

y[n] = \sum_{k=0}^{\infty} h[k] \, x[n-k]

Performing a cross-correlation (assuming real R.V.s and processing):

E[x[n_1] y[n_2]] = E\left[ x[n_1] \sum_{k=0}^{\infty} h[k] \, x[n_2 - k] \right] = \sum_{k=0}^{\infty} h[k] \, E[x[n_1] x[n_2 - k]] = \sum_{k=0}^{\infty} h[k] \, R_{XX}[n_1, n_2 - k]

For x[n] WSS (with m = n_2 - n_1):

E[x[n] y[n+m]] = R_{XY}[m] = \sum_{k=0}^{\infty} h[k] \, R_{XX}[m - k]

R_{XY}[m] = R_{XX}[m] * h[m]

What about the other way … YX instead of XY?

E[y[n] x[n+m]] = E\left[ \sum_{k=0}^{\infty} h[k] \, x[n-k] \; x[n+m] \right] = \sum_{k=0}^{\infty} h[k] \, R_{XX}[m + k]

Performing a change of variable from k to -l (assuming h is real; see the text for the complex case):

R_{YX}[m] = \sum_{l=-\infty}^{0} h[-l] \, R_{XX}[m - l]

Therefore

R_{YX}[m] = R_{XX}[m] * h[-m]

What about the auto-correlation of y[n]?

Performing an auto-correlation (assuming real R.V.s and processing):

E[y[n_1] y[n_2]] = E\left[ \sum_{k_1=0}^{\infty} h[k_1] \, x[n_1 - k_1] \sum_{k_2=0}^{\infty} h[k_2] \, x[n_2 - k_2] \right]
= \sum_{k_1=0}^{\infty} \sum_{k_2=0}^{\infty} h[k_1] h[k_2] \, R_{XX}[n_1 - k_1, n_2 - k_2]

For x[n] WSS:

R_{YY}[m] = \sum_{k_1=0}^{\infty} \sum_{k_2=0}^{\infty} h[k_1] h[k_2] \, R_{XX}[m + k_1 - k_2]

Summary: for x[n] WSS and a real filter,

R_{XY}[m] = E[x[n] y[n+m]] = R_{XX}[m] * h[m]
R_{YX}[m] = E[y[n] x[n+m]] = R_{XX}[m] * h[-m]
R_{YY}[m] = E[y[n] y[n+m]] = R_{XX}[m] * h[m] * h[-m]
Example: White Noise Inputs to a causal filter
Let R_{XX}[n] = \frac{N_0}{2} \, \delta[n]

E[y[n]^2] = R_{YY}[0] = \sum_{k_1=0}^{\infty} \sum_{k_2=0}^{\infty} h[k_1] h[k_2] \, R_{XX}[k_1 - k_2]

E[y[n]^2] = R_{YY}[0] = \frac{N_0}{2} \sum_{k_1=0}^{\infty} \sum_{k_2=0}^{\infty} h[k_1] h[k_2] \, \delta[k_1 - k_2]

E[y[n]^2] = R_{YY}[0] = \frac{N_0}{2} \sum_{k=0}^{\infty} h[k]^2
For a white noise process, the mean squared (or 2nd moment) is proportional to the filter power.
Typically, there are similar derivations for sampled systems and continuous systems.
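A quick numerical check of E[y^2] = (N0/2) * sum_k h[k]^2 (a sketch with an arbitrary short FIR h):

import numpy as np

rng = np.random.default_rng(4)
N0 = 2.0                                   # so the input variance is N0/2 = 1
h = np.array([1.0, 0.5, 0.25, -0.3])
x = rng.normal(0, np.sqrt(N0/2), 1_000_000)
y = np.convolve(x, h)[:len(x)]             # causal FIR filtering
print(np.mean(y**2), (N0/2)*np.sum(h**2))  # both ~ 1.4025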
The Power Spectral Density Output of Linear Systems

The discrete power spectral density is defined as:

S_{XX}(w) = \sum_{n=-\infty}^{\infty} R_{XX}[n] \exp(-j w n)

The inverse transform is defined as

R_{XX}[n] = \frac{1}{2\pi} \int_{-\pi}^{\pi} S_{XX}(w) \exp(j w n) \, dw

Properties:
1. S_XX(w) is purely real, as R_XX[n] is conjugate symmetric.
2. If X[n] is a real-valued WSS process, then S_XX(w) is an even function, as R_XX[n] is real and even.
3. S_XX(w) >= 0 for all w.
4. R_XX[m] = 0 for all m > N for some finite integer N. This is a condition for the Fourier transform to exist … finite energy.

Cross-Spectral Density

Since we have already shown the convolution formula, we can progress to the cross-spectral density functions:

S_{XY}(w) = \sum_{n=-\infty}^{\infty} R_{XY}[n] \exp(-j w n) = \sum_{n=-\infty}^{\infty} (h[n] * R_{XX}[n]) \exp(-j w n)

Then S_{XY}(w) = H(w) \, S_{XX}(w)

And for the other cross-spectral density:

S_{YX}(w) = \sum_{n=-\infty}^{\infty} R_{YX}[n] \exp(-j w n) = \sum_{n=-\infty}^{\infty} (h[-n] * R_{XX}[n]) \exp(-j w n)

Then for a real filter, S_{YX}(w) = S_{XX}(w) \, H(-w) = S_{XX}(w) \, H^*(w)
The output power spectral density becomes

S_{YY}(w) = \sum_{n=-\infty}^{\infty} R_{YY}[n] \exp(-j w n) = \sum_{n=-\infty}^{\infty} (R_{XX}[n] * h[n] * h^*[-n]) \exp(-j w n)

For all systems:

S_{YY}(w) = S_{XX}(w) \, H(w) \, H^*(w) = S_{XX}(w) \, |H(w)|^2
Relationships from textbook: see the summary table of these input–output relationships in the text.
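A numerical illustration of S_YY(w) = |H(w)|^2 S_XX(w) (a sketch: the one-pole/one-zero filter below is an arbitrary choice, and welch/freqz are called with fs = 2*pi so frequencies are in rad/sample):

import numpy as np
from scipy import signal

rng = np.random.default_rng(5)
b, a = [1.0, 0.4], [1.0, -0.6]            # arbitrary test filter H(z)
x = rng.normal(0, 1.0, 500_000)           # unit-variance white-noise input
y = signal.lfilter(b, a, x)

w, Syy_est = signal.welch(y, fs=2*np.pi, nperseg=4096, return_onesided=False)
_, H = signal.freqz(b, a, worN=w, fs=2*np.pi)
# The two-sided density of the unit-variance input is 1/(2*pi) per rad/sample,
# so the measured output PSD should track |H|^2 / (2*pi).
ratio = Syy_est / (np.abs(H)**2 / (2*np.pi))
print(ratio.mean(), ratio.std())          # ~ 1.0 with a small spread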
Synthesis of Random Sequences and Discrete-Time Simulations

We can generate a transfer function to provide a random sequence with a specified psd or correlation function. Starting with a digital filter

y[n] = \sum_{k=0}^{\infty} b[k] \, x[n-k] - \sum_{k=1}^{\infty} a[k] \, y[n-k]

The Fourier transform is

H(w) = \frac{Y(w)}{X(w)} = \frac{B(w)}{A(w)}

where

B(w) = \sum_{n=0}^{\infty} b[n] \exp(-j w n) \quad \text{and} \quad A(w) = 1 + \sum_{n=1}^{\infty} a[n] \exp(-j w n)

The signal input to the filter is white noise, producing a constant-magnitude frequency response. Therefore,

S_{YY}(w) = \frac{N_0}{2} \, H(w) H^*(w) = \frac{N_0}{2} \, |H(w)|^2

For real causal coefficients, this is also equivalent to

S_{YY}(w) = \frac{N_0}{2} \, H(w) H(-w)

Or, using z-transform notation with z = \exp(j w) and z^{-1} = \exp(-j w), this can be written as

S_{YY}(z) = \frac{N_0}{2} \, H(z) H(z^{-1})

In the z-domain, the unit circle is a key component: this implies that there is a mirror image about the unit circle for poles and zeros in the z-domain. As an added point of interest, minimum-phase, stable filters will have all their poles and zeros inside the unit circle. The mirror-image elements form the "inverse filter".
Example 8.4-5: Filter generation

If a desired psd can be stated as (for some constant \rho, |\rho| < 1)

S_{XX}(w) = \frac{\sigma_N^2}{1 + \rho^2 - 2\rho \cos w}

For 2\cos w = \exp(jw) + \exp(-jw) and z = \exp(jw), the equivalent z-transform representation uses 2\cos w \to z + z^{-1}. Then

S_{XX}(z) = \frac{\sigma_N^2}{1 + \rho^2 - \rho (z + z^{-1})} = \frac{\sigma_N^2}{(1 - \rho z^{-1})(1 - \rho z)} = \sigma_N^2 \, H(z) H(z^{-1})

The desired filter is then

H(z) = \frac{1}{1 - \rho z^{-1}}

The inverse transform becomes

h[n] = \rho^n \, u[n]

or

y[n] = x[n] + \rho \, y[n-1]

Note: this should look similar to Example 8.4-6 …
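A simulation sketch of this synthesis (rho = 0.8 is an arbitrary choice; for a white, unit-variance input the resulting AR(1) autocorrelation should be rho^|k| / (1 - rho^2)):

import numpy as np
from scipy import signal

rng = np.random.default_rng(6)
rho = 0.8
x = rng.normal(0, 1.0, 200_000)
y = signal.lfilter([1.0], [1.0, -rho], x)   # y[n] = x[n] + rho*y[n-1]

for k in range(4):
    R_est = np.mean(y[:len(y)-k] * y[k:])
    print(k, R_est, rho**k / (1 - rho**2))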
Chapter 9 Random Processes

9.1 Basic Definitions

Random Stochastic Processes

Definition 9.1-1. Let (\Omega, \mathcal{F}, P) be a probability space. Then define a mapping X from the sample space to a space of continuous-time functions. The elements in this space will be called sample functions. This mapping is called a random process if at each fixed time the mapping is a random variable, that is, X(t, \zeta) for each fixed t on the real line, -\infty < t < \infty.

Figure 9.1-1 A random process for a continuous sample space Ω = [0,10].

For the autocorrelation defined as:

R_{XX}(t_1, t_2) = E[X_1 X_2] = \iint x_1 x_2 \, f(x_1, x_2) \, dx_1 \, dx_2

For WSS processes:

R_{XX}(t_1, t_2) = E[X(t) X(t+\tau)] = R_{XX}(\tau)

If the process is ergodic, the time average is equivalent to the probabilistic expectation, or

\langle x(t) x(t+\tau) \rangle = \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} x(t) \, x(t+\tau) \, dt

and

\langle x(t) x(t+\tau) \rangle = R_{XX}(\tau)
The Application of the Expected Value Operator

Moments play an important role and, for ergodic processes, they can be estimated from a single process in time of the infinite number that may be possible.

Therefore, \mu_X(t) = E[X(t)]

and the correlation functions (auto- and cross-correlation):

R_{XX}(t_1, t_2) = E[X(t_1) X^*(t_2)] \qquad R_{XY}(t_1, t_2) = E[X(t_1) Y^*(t_2)]

and the covariance functions (auto- and cross-covariance):

K_{XX}(t_1, t_2) = E[(X(t_1) - \mu_X(t_1))(X(t_2) - \mu_X(t_2))^*]
K_{XY}(t_1, t_2) = E[(X(t_1) - \mu_X(t_1))(Y(t_2) - \mu_Y(t_2))^*]

with

K_{XX}(t_1, t_2) = R_{XX}(t_1, t_2) - \mu_X(t_1) \mu_X^*(t_2)

Note that the variance can be computed from the auto-covariance as

\sigma_X^2(t) = K_{XX}(t, t) = E[|X(t) - \mu_X(t)|^2]

and the "power" function can be computed from the auto-correlation:

R_{XX}(t, t) = E[X(t) X^*(t)] = E[|X(t)|^2]

For real X(t): R_{XX}(t, t) = E[X(t)^2] = \sigma_X^2(t) + \mu_X(t)^2
Example: x(t) = A \sin(2\pi f t) for A a uniformly distributed random variable, A \sim U[-2, 2]

R_{XX}(t_1, t_2) = E[X(t_1) X(t_2)] = E[A^2 \sin(2\pi f t_1) \sin(2\pi f t_2)]

R_{XX}(t_1, t_2) = E[A^2] \cdot \frac{1}{2} \left( \cos(2\pi f (t_2 - t_1)) - \cos(2\pi f (t_2 + t_1)) \right)

For E[A^2] = \frac{(2 - (-2))^2}{12} = \frac{16}{12} = \frac{4}{3} and \tau = t_2 - t_1:

R_{XX}(t_1, t_2) = \frac{2}{3} \left( \cos(2\pi f \tau) - \cos(2\pi f (t_2 + t_1)) \right)

A non-stationary process.
Example 9.1-5: Auto-correlation of a sinusoid with random phase

Think of: X(t) = A \sin(w t + \Theta)

where A and w are known constants, and Theta is a uniform pdf covering the unit circle.

The mean is computed as

\mu_X(t) = E[X(t)] = E[A \sin(w t + \Theta)] = A \, E[\sin(w t + \Theta)]

\mu_X(t) = A \int_0^{2\pi} \frac{1}{2\pi} \sin(w t + \theta) \, d\theta = \frac{A}{2\pi} \left[ -\cos(w t + \theta) \right]_0^{2\pi} = \frac{A}{2\pi} \left( \cos(w t) - \cos(w t) \right) = 0

The auto-correlation is computed as

R_{XX}(t_1, t_2) = E[X(t_1) X^*(t_2)] = A^2 \, E[\sin(w t_1 + \Theta) \sin(w t_2 + \Theta)]

R_{XX}(t_1, t_2) = \frac{A^2}{2} \, E[\cos(w (t_2 - t_1)) - \cos(w (t_1 + t_2) + 2\Theta)]

R_{XX}(t_1, t_2) = \frac{A^2}{2} \cos(w (t_2 - t_1)) - 0

R_{XX}(\tau) = \frac{A^2}{2} \cos(w \tau) = \frac{A^2}{2} \cos(2\pi f \tau)

Note that if A were a random variable (independent of the phase), we would have

R_{XX}(t_1, t_2) = \frac{E[A^2]}{2} \cos(w (t_2 - t_1)) = R_{XX}(\tau) = \frac{E[A^2]}{2} \cos(w \tau)

and

\mu_X(t) = E[X(t)] = E[A] \cdot 0 = 0

Note: this random process is wide-sense stationary (mean and variance are not functions of time).
Example: x(t) = A \sin(2\pi f t + \theta) for \theta a uniformly distributed random variable, U[0, 2\pi)

The time-based formulation:

\langle x(t) x(t+\tau) \rangle = \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} x(t) \, x(t+\tau) \, dt

\langle x(t) x(t+\tau) \rangle = \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} A \sin(2\pi f t + \theta) \, A \sin(2\pi f (t+\tau) + \theta) \, dt

\langle x(t) x(t+\tau) \rangle = \lim_{T \to \infty} \frac{A^2}{2} \cdot \frac{1}{2T} \int_{-T}^{T} \left( \cos(2\pi f \tau) - \cos(2\pi f (2t+\tau) + 2\theta) \right) dt

\langle x(t) x(t+\tau) \rangle = \frac{A^2}{2} \cos(2\pi f \tau)

It is also ergodic if A is a constant! If A is an R.V., it may not be ergodic (based on the R.V.).
Example: x(t) = B \, rect\left(\frac{t - t_0}{T}\right) for B = ±A with probability p and (1-p), and t_0 a uniformly distributed random variable, t_0 \sim U[-T/2, T/2]. Assume B and t_0 are independent.

R_{XX}(t_1, t_2) = E[X(t_1) X(t_2)] = E\left[ B \, rect\left(\frac{t_1 - t_0}{T}\right) \, B \, rect\left(\frac{t_2 - t_0}{T}\right) \right]

As the R.V.s are independent:

R_{XX}(t_1, t_2) = E[B^2] \, E\left[ rect\left(\frac{t_1 - t_0}{T}\right) rect\left(\frac{t_2 - t_0}{T}\right) \right]

R_{XX}(t_1, t_2) = \left( p A^2 + (1-p) A^2 \right) \cdot \frac{1}{T} \int_{-T/2}^{T/2} rect\left(\frac{t_1 - t_0}{T}\right) rect\left(\frac{t_2 - t_0}{T}\right) dt_0

For \tau = t_2 - t_1:

R_{XX}(\tau) = A^2 \cdot \frac{1}{T} \int_{-T/2}^{T/2} rect\left(\frac{t_0}{T}\right) rect\left(\frac{t_0 - \tau}{T}\right) dt_0

The integral can be recognized as being a triangle, extending from -T to T and zero everywhere else:

R_{XX}(\tau) = A^2 \, tri\left(\frac{\tau}{T}\right) = \begin{cases} A^2 \left( 1 - \frac{\tau}{T} \right), & 0 \le \tau \le T \\ A^2 \left( 1 + \frac{\tau}{T} \right), & -T \le \tau \le 0 \\ 0, & |\tau| > T \end{cases}
Some Important Random Processes

Asynchronous Binary Signaling

The pulse values are independent, identically distributed with probability p that the amplitude is a and q = 1-p that the amplitude is -a. The start of the "zeroth" pulse is uniformly distributed from -T/2 to T/2:

pdf(D) = \frac{1}{T}, \quad -\frac{T}{2} \le D \le \frac{T}{2}

Determine the autocorrelation of the bipolar binary sequence, assuming p = 0.5.

X(t) = \sum_k X_k \, rect\left( \frac{t - kT - D}{T} \right)

Note: the rect function is defined as

rect\left(\frac{t}{T}\right) = \begin{cases} 1, & -\frac{T}{2} \le t \le \frac{T}{2} \\ 0, & \text{else} \end{cases}

Determine the autocorrelation R_{XX}(t_1, t_2) = E[X(t_1) X(t_2)]:

R_{XX}(t_1, t_2) = E\left[ \sum_n \sum_k X_n X_k \, rect\left(\frac{t_1 - nT - D}{T}\right) rect\left(\frac{t_2 - kT - D}{T}\right) \right]

R_{XX}(t_1, t_2) = \sum_n \sum_k E[X_n X_k] \, E\left[ rect\left(\frac{t_1 - nT - D}{T}\right) rect\left(\frac{t_2 - kT - D}{T}\right) \right]

For distinct bits (k \ne j) we must consider

E[X_k X_j] = a^2 p^2 - a^2 p (1-p) - a^2 (1-p) p + a^2 (1-p)^2 = a^2 (4p^2 - 4p + 1) = a^2 (2p - 1)^2

For p = 0.5: E[X_k X_j] = a^2 (4p^2 - 4p + 1) = 0

For the same bit:

E[X_k X_k] = E[X_k^2] = p a^2 + (1-p) a^2 = a^2

For samples within one period, |t_2 - t_1| \le T, there are two regions to consider: the overlapping sample bit and the area of the adjacent bit. With p = 0.5 the adjacent-bit term contributes nothing, so only the overlapping-bit region remains:

R_{XX}(t_1, t_2) = a^2 \, E\left[ rect\left(\frac{t_1 - kT - D}{T}\right) rect\left(\frac{t_2 - kT - D}{T}\right) \right]

The overlapping area (the expectation over D of the product of offset rects) is triangular. Therefore

R_{XX}(\tau) = \begin{cases} a^2 \left( 1 - \frac{|\tau|}{T} \right), & |\tau| \le T \\ 0, & |\tau| > T \end{cases}

This is simply a triangular function with maximum a^2, extending for a full bit period in both time directions.

For unequal bit probability:

R_{XX}(\tau) = \begin{cases} a^2 (4p^2 - 4p + 1) + a^2 (4p - 4p^2) \left( 1 - \frac{|\tau|}{T} \right), & |\tau| \le T \\ a^2 (4p^2 - 4p + 1), & |\tau| > T \end{cases}

As there are more of one bit than the other, there is always a positive correlation between bits (the curve is a minimum for p = 0.5), and it peaks at a^2 at \tau = 0.

Note that if the amplitude is a random variable, the expected values of the bits must be further evaluated, such as

E[X_k X_k] = E[X_k^2] \quad \text{and} \quad E[X_k X_{k \pm 1}] = E[X_k] E[X_{k \pm 1}]

In general, the autocorrelation of communications signal waveforms is important, particularly when we discuss the power spectral density later in the textbook.

If the signal takes on two levels a and b vs. a and -a, the result would be

E[X_k X_j] = a^2 p^2 + a b p (1-p) + b a (1-p) p + b^2 (1-p)^2 = (a p + b (1-p))^2

For p = 1/2:

E[X_k X_j] = \frac{1}{4} a^2 + \frac{1}{2} a b + \frac{1}{4} b^2 = \left( \frac{a+b}{2} \right)^2

And

E[X_k X_k] = E[X_k^2] = p a^2 + (1-p) b^2

For p = 1/2:

E[X_k X_k] = \frac{a^2 + b^2}{2}

Therefore,

R_{XX}(\tau) = \begin{cases} \left( \frac{a+b}{2} \right)^2 + \left( \frac{a-b}{2} \right)^2 \left( 1 - \frac{|\tau|}{T} \right), & |\tau| \le T \\ \left( \frac{a+b}{2} \right)^2, & |\tau| > T \end{cases}

For a = 1, b = 0, and T = 1, we have

R_{XX}(\tau) = \begin{cases} \frac{1}{4} + \frac{1}{4} (1 - |\tau|), & |\tau| \le T \\ \frac{1}{4}, & |\tau| > T \end{cases}
Figure 9.2-2 Autocorrelation function of ABS random process for a = 1, b = 0 and T = 1.
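A time-average simulation of the bipolar case (a sketch: L samples per bit stand in for the bit period T, and the random offset D is irrelevant to a single-record time average):

import numpy as np

rng = np.random.default_rng(7)
a, L, nbits = 2.0, 50, 20_000
bits = rng.choice([-a, a], size=nbits)     # equiprobable ±a, p = 0.5
x = np.repeat(bits, L)                     # rectangular pulses of width L samples

for k in (0, 10, 25, 50, 75):
    R_time = np.mean(x[:len(x)-k] * x[k:])
    R_theory = a**2 * max(0.0, 1 - k/L)    # a^2 (1 - |tau|/T) inside one bit
    print(k, R_time, R_theory)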
Exercise 6-3.1 – Cooper and McGillem

a) An ergodic random process has an autocorrelation function of the form

R_{XX}(\tau) = 9 \exp(-4|\tau|) \cos(10\tau) + 16

Find the mean-square value, the mean value, and the variance of the process.

The mean-square (2nd moment) is

E[X^2] = R_{XX}(0) = 9 + 16 = 25

The constant portion of the autocorrelation represents the square of the mean. Therefore

\bar{X}^2 = 16 \quad \text{and} \quad \bar{X} = \pm 4

Finally, the variance can be computed as

\sigma^2 = E[X^2] - \bar{X}^2 = 25 - 16 = 9

b) An ergodic random process has an autocorrelation function of the form

R_{XX}(\tau) = \frac{4\tau^2 + 6}{\tau^2 + 1}

Find the mean-square value, the mean value, and the variance of the process.

The mean-square (2nd moment) is

E[X^2] = R_{XX}(0) = \frac{6}{1} = 6

The constant portion of the autocorrelation represents the square of the mean. Therefore

\bar{X}^2 = \lim_{\tau \to \infty} \frac{4\tau^2 + 6}{\tau^2 + 1} = 4 \quad \text{and} \quad \bar{X} = \pm 2

Finally, the variance can be computed as

\sigma^2 = E[X^2] - \bar{X}^2 = 6 - 4 = 2
Exercise 6-6.1

Find the cross-correlation of the two functions …

X(t) = 2\cos(2\pi f t + \theta) \quad \text{and} \quad Y(t) = 10\sin(2\pi f t + \theta)

Using the time-average functions:

R_{XY}(\tau) = \langle x(t) y(t+\tau) \rangle = \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} x(t) \, y(t+\tau) \, dt

R_{XY}(\tau) = \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} 2\cos(2\pi f t + \theta) \cdot 10\sin(2\pi f (t+\tau) + \theta) \, dt

Using 2\cos(A)\sin(B) = \sin(B+A) + \sin(B-A), the integrand becomes 10[\sin(2\pi f (2t+\tau) + 2\theta) + \sin(2\pi f \tau)]; the first term time-averages to zero over a period, leaving

R_{XY}(\tau) = 10 \sin(2\pi f \tau)

Using the probabilistic functions:

R_{XY}(\tau) = E[X(t) Y(t+\tau)] = E[2\cos(2\pi f t + \theta) \cdot 10\sin(2\pi f (t+\tau) + \theta)]

R_{XY}(\tau) = 10 \sin(2\pi f \tau) + 10 \, E[\sin(2\pi f (2t+\tau) + 2\theta)]

From prior understanding of the uniform random phase, the expectation term is zero …

R_{XY}(\tau) = 10 \sin(2\pi f \tau)
Section 9.4 Classifications of Random Processes

Definition 9.4-1: Let X and Y be random processes.

(a) They are uncorrelated if

R_{XY}(t_1, t_2) = E[X(t_1) Y^*(t_2)] = \mu_X(t_1) \mu_Y^*(t_2) \quad \text{for all } t_1 \text{ and } t_2

(b) They are orthogonal if

R_{XY}(t_1, t_2) = E[X(t_1) Y^*(t_2)] = 0 \quad \text{for all } t_1 \text{ and } t_2

(c) They are independent if, for all positive integers n, the nth-order CDF of X and Y factors. That is,

F_{XY}(x_1, y_1, \ldots, x_n, y_n; t_1, \ldots, t_n) = F_X(x_1, \ldots, x_n; t_1, \ldots, t_n) \, F_Y(y_1, \ldots, y_n; t_1, \ldots, t_n)

Note that if two processes are uncorrelated and one of the means is zero, they are orthogonal as well!

Stationarity

A random process is stationary when its statistics do not change with the continuous time parameter:

F_X(x_1, \ldots, x_n; t_1, \ldots, t_n) = F_X(x_1, \ldots, x_n; t_1 + T, \ldots, t_n + T)

Overall, the CDF and pdf do not change with absolute time. They may have time characteristics, as long as the elements are based on time differences and not absolute time:

F_X(x_1, x_2; t_1, t_2) = F_X(x_1, x_2; 0, t_2 - t_1)
f_X(x_1, x_2; t_1, t_2) = f_X(x_1, x_2; 0, t_2 - t_1)

This implies that

R_{XX}(t_1, t_2) = E[X(t_1) X^*(t_2)] = R_{XX}(0, t_2 - t_1) = R_{XX}(\tau)

Definition 9.4-3: Wide-Sense Stationary

A random process is wide-sense stationary (WSS) when its mean and variance statistics do not change with the continuous time parameter. We also include the autocorrelation being a function of one variable:

E[X(t) X^*(t+\tau)] = R_{XX}(\tau), \quad \text{independent of } t
Power Spectral Density

Definition: PSD

Let R_XX(\tau) be the autocorrelation function for a WSS random process. The power spectral density is defined as the Fourier transform of the autocorrelation function:

S_{XX}(w) = \mathcal{F}\{R_{XX}(\tau)\} = \int R_{XX}(\tau) \exp(-j w \tau) \, d\tau

The inverse exists in the form of the inverse transform:

R_{XX}(\tau) = \frac{1}{2\pi} \int S_{XX}(w) \exp(j w \tau) \, dw

Properties:
1. S_XX(w) is purely real, as R_XX(\tau) is conjugate symmetric.
2. If X(t) is a real-valued WSS process, then S_XX(w) is an even function, as R_XX(\tau) is real and even.
3. S_XX(w) >= 0 for all w.

Wiener–Khinchin Theorem: For WSS random processes, the autocorrelation function is time based and has a spectral decomposition given by the power spectral density.
Also see:
http://en.wikipedia.org/wiki/Wiener%E2%80%93Khinchin_theorem
Why this is very important … the Fourier Transform of a “single instantiation” of a random process may be meaningless or even impossible to generate. But if the random process can be described in terms of the autocorrelation function (all ergodic, WSS processes), then the power spectral density can be defined.
I can then know what the expected frequency spectrum output looks like, and I can design a system to keep the required frequencies and filter out the unneeded frequencies (e.g., noise and interference).
Relation of Spectral Density to the Autocorrelation Function

For "the right" random processes, the power spectral density is the Fourier transform of the autocorrelation:

S_{XX}(w) = \mathcal{F}\{R_{XX}(\tau)\} = \int E[X(t) X(t+\tau)] \exp(-j w \tau) \, d\tau

For an ergodic process, we can use time-based processing to arrive at an equivalent result …

E[X(t) X(t+\tau)] = \langle x(t) x(t+\tau) \rangle = \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} x(t) \, x(t+\tau) \, dt

\mathcal{F}\{E[X(t) X(t+\tau)]\} = \int \left( \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} x(t) \, x(t+\tau) \, dt \right) \exp(-j w \tau) \, d\tau

= \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} x(t) \left( \int x(t+\tau) \exp(-j w (t+\tau)) \, d\tau \right) \exp(j w t) \, dt

If there exists X_T(w) = \int_{-T}^{T} x(t) \exp(-j w t) \, dt, then

\mathcal{F}\{E[X(t) X(t+\tau)]\} = \lim_{T \to \infty} \frac{1}{2T} X_T(w) \int_{-T}^{T} x(t) \exp(j w t) \, dt = \lim_{T \to \infty} \frac{1}{2T} X_T(w) X_T^*(w)

S_{XX}(w) = \lim_{T \to \infty} \frac{|X_T(w)|^2}{2T}
Property:

Since R_XX is symmetric, we must have that

R_{XX}(-\tau) = R_{XX}(\tau) \quad \text{and} \quad E_X(w) + j O_X(w) = E_X(-w) + j O_X(-w)

(writing the transform in terms of its even part E_X and odd part O_X). For this to be true, j O_X(w) = -j O_X(w), which can only occur if the odd portion of the Fourier transform is zero: O_X(w) = 0.

This provides information about the power spectral density:

S_{XX}(w) = \mathcal{F}\{R_{XX}(\tau)\} = E_X(w), \quad S_{XX}(w) \ge 0

The power spectral density necessarily contains no phase information!
Example 9.5-3

Find the psd of the following autocorrelation function … of the random telegraph:

R_{XX}(\tau) = \exp(-\alpha |\tau|), \quad \alpha > 0

Find a good Fourier transform table … otherwise:

S_{XX}(w) = \int R_{XX}(\tau) \exp(-j w \tau) \, d\tau = \int \exp(-\alpha |\tau|) \exp(-j w \tau) \, d\tau

S_{XX}(w) = \int_{-\infty}^{0} \exp(\alpha \tau) \exp(-j w \tau) \, d\tau + \int_{0}^{\infty} \exp(-\alpha \tau) \exp(-j w \tau) \, d\tau

S_{XX}(w) = \left[ \frac{\exp((\alpha - j w)\tau)}{\alpha - j w} \right]_{-\infty}^{0} + \left[ \frac{\exp(-(\alpha + j w)\tau)}{-(\alpha + j w)} \right]_{0}^{\infty}

S_{XX}(w) = \frac{1}{\alpha - j w} + \frac{1}{\alpha + j w} = \frac{(\alpha + j w) + (\alpha - j w)}{\alpha^2 + w^2}

S_{XX}(w) = \frac{2\alpha}{\alpha^2 + w^2}

For \alpha = 3:
Figure 9.5-2 Plot of psd for exponential autocorrelation function.
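A numerical transform check of this pair (a sketch; the FFT approximates the Fourier integral on a fine grid):

import numpy as np

alpha, dt = 3.0, 0.001
tau = np.arange(-50, 50, dt)                   # wide enough that exp(-alpha*50) ~ 0
R = np.exp(-alpha*np.abs(tau))

S = np.fft.fft(np.fft.ifftshift(R)).real * dt  # Riemann sum for the Fourier integral
w = 2*np.pi*np.fft.fftfreq(len(tau), dt)

for target in (0.0, 1.0, 3.0, 10.0):
    i = np.argmin(np.abs(w - target))
    print(w[i], S[i], 2*alpha/(alpha**2 + w[i]**2))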
Example 9.5-4

Find the psd of the triangle autocorrelation function … the autocorrelation of a rect:

R_{XX}(\tau) = tri\left(\frac{\tau}{T}\right) = 1 - \frac{|\tau|}{T}, \quad |\tau| \le T

S_{XX}(w) = \int_{-T}^{T} \left( 1 - \frac{|\tau|}{T} \right) \exp(-j w \tau) \, d\tau

S_{XX}(w) = \int_{-T}^{0} \left( 1 + \frac{\tau}{T} \right) \exp(-j w \tau) \, d\tau + \int_{0}^{T} \left( 1 - \frac{\tau}{T} \right) \exp(-j w \tau) \, d\tau

Carrying out the integration by parts and collecting terms:

S_{XX}(w) = \frac{2}{w^2 T} \left( 1 - \cos(w T) \right)

Using 1 - \cos(w T) = 2 \sin^2(w T / 2):

S_{XX}(w) = T \left( \frac{\sin(w T / 2)}{w T / 2} \right)^2
Deriving the Mean-Square Value from the Power Spectral Density

Using the Fourier transform relation between the autocorrelation and PSD:

S_{XX}(w) = \int R_{XX}(\tau) \exp(-j w \tau) \, d\tau
R_{XX}(\tau) = \frac{1}{2\pi} \int S_{XX}(w) \exp(j w \tau) \, dw

The mean squared value of a random process is equal to the 0th lag of the autocorrelation:

E[X^2] = R_{XX}(0) = \frac{1}{2\pi} \int S_{XX}(w) \, dw = \int S_{XX}(f) \, df

Therefore, to find the second moment, integrate the PSD over all frequencies.

As a note, since the PSD is real and symmetric, the integral can be performed as

E[X^2] = R_{XX}(0) = 2 \cdot \frac{1}{2\pi} \int_{0}^{\infty} S_{XX}(w) \, dw = 2 \int_{0}^{\infty} S_{XX}(f) \, df
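A numerical sketch, integrating the psd from Example 9.5-3 (where R_XX(0) = exp(0) = 1):

import numpy as np

alpha = 3.0
w = np.linspace(-2000, 2000, 400_001)     # wide grid; the 1/w^2 tails are negligible
S = 2*alpha/(alpha**2 + w**2)
dw = w[1] - w[0]
print(S.sum()*dw/(2*np.pi))               # ~ 1.0 = R_XX(0)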
Converting between Autocorrelation and Power Spectral Density

Using the properties of the functions, we can actually use different variations of transforms!

The power spectral density as a function is always real, positive, and an even function in w (or f).

You can convert between the domains using any of the following …

The Fourier Transform in w:
S_{XX}(w) = \int R_{XX}(\tau) \exp(-j w \tau) \, d\tau
R_{XX}(\tau) = \frac{1}{2\pi} \int S_{XX}(w) \exp(j w \tau) \, dw

The Fourier Transform in f:
S_{XX}(f) = \int R_{XX}(\tau) \exp(-j 2\pi f \tau) \, d\tau
R_{XX}(\tau) = \int S_{XX}(f) \exp(j 2\pi f \tau) \, df

The 2-sided Laplace Transform (the jw axis of the s-plane):
S_{XX}(s) = \int R_{XX}(\tau) \exp(-s \tau) \, d\tau
R_{XX}(\tau) = \frac{1}{2\pi j} \int_{-j\infty}^{j\infty} S_{XX}(s) \exp(s \tau) \, ds
Example: Inverse Laplace Transform

S_{XX}(w) = \frac{A}{\beta^2 + w^2}

Substitute s = j w (so that w^2 = -s^2):

S_{XX}(s) = \frac{A}{\beta^2 - s^2} = \frac{-A}{(s - \beta)(s + \beta)}

Partial fraction expansion:

S_{XX}(s) = \frac{k_1}{s + \beta} + \frac{k_2}{s - \beta} = \frac{A/2\beta}{s + \beta} - \frac{A/2\beta}{s - \beta}

Taking the LHP pole as the Laplace transform for \tau > 0, and the RHP pole with -s and then -\tau for \tau < 0:

L^{-1}\left\{ \frac{A/2\beta}{s + \beta} \right\} = \frac{A}{2\beta} \exp(-\beta \tau), \quad \tau \ge 0

Combining, we have

R_{XX}(\tau) = \frac{A}{2\beta} \exp(-\beta |\tau|)
7-6.3 A stationary random process has a spectral density of

S_{XX}(w) = \begin{cases} 5, & 10 \le |w| \le 20 \\ 0, & \text{else} \end{cases}

(a) Find the mean-square value of the process.

E[X^2] = R_{XX}(0) = \frac{1}{2\pi} \int S_{XX}(w) \, dw = \frac{2}{2\pi} \int_{10}^{20} 5 \, dw

R_{XX}(0) = \frac{5}{\pi} (20 - 10) = \frac{50}{\pi}

(b) Find the auto-correlation function of the process.

R_{XX}(t) = \frac{1}{2\pi} \int S_{XX}(w) \exp(j w t) \, dw = \frac{5}{2\pi} \left( \int_{-20}^{-10} \exp(j w t) \, dw + \int_{10}^{20} \exp(j w t) \, dw \right)

R_{XX}(t) = \frac{5}{2\pi} \left( \frac{\exp(-j 10 t) - \exp(-j 20 t)}{j t} + \frac{\exp(j 20 t) - \exp(j 10 t)}{j t} \right)

R_{XX}(t) = \frac{5}{2\pi} \cdot \frac{2 \sin(20 t) - 2 \sin(10 t)}{t} = \frac{5}{\pi t} \left( \sin(20 t) - \sin(10 t) \right)

Using \sin(20 t) - \sin(10 t) = 2 \cos(15 t) \sin(5 t):

R_{XX}(t) = \frac{10}{\pi t} \cos(15 t) \sin(5 t) = \frac{50}{\pi} \, \frac{\sin(5 t)}{5 t} \cos(15 t) = \frac{50}{\pi} \, \mathrm{sinc}(5 t) \cos(15 t)

(with sinc(x) = sin(x)/x).

(c) Find the value of the auto-correlation function at t = 0.

R_{XX}(0) = \frac{50}{\pi} \, \mathrm{sinc}(0) \cos(0) = \frac{50}{\pi} \cdot 1 \cdot 1 = \frac{50}{\pi}
It must produce the same result!
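A one-line numerical confirmation that parts (a) and (c) agree (a sketch):

import numpy as np

w = np.linspace(-25, 25, 500_001)
S = np.where((np.abs(w) >= 10) & (np.abs(w) <= 20), 5.0, 0.0)
dw = w[1] - w[0]
print(S.sum()*dw/(2*np.pi), 50/np.pi)     # both ~ 15.92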
White Noise

Noise is inherently defined as a random process. You may be familiar with "thermal" noise, based on the energy of an atom and the mean free path that it can travel.

As a random process, whenever "white noise" is measured, the values are uncorrelated with each other, no matter how close together the samples are taken in time.

Further, we envision "white noise" as containing all spectral content, with no explicit peaks or valleys in the power spectral density.

As a result, we define "white noise" as

R_{XX}(\tau) = S_0 \, \delta(\tau) \quad \text{and} \quad S_{XX}(w) = S_0 = \frac{N_0}{2}

Band-Limited White Noise

S_{XX}(f) = \begin{cases} S_0 = \frac{N_0}{2}, & |f| \le W \\ 0, & |f| > W \end{cases}

The equivalent noise power is then:

E[X^2] = R_{XX}(0) = \int_{-W}^{W} S_0 \, df = 2 W S_0 = N_0 W

But what about the autocorrelation?

R_{XX}(t) = \int_{-W}^{W} S_0 \exp(j 2\pi f t) \, df

R_{XX}(t) = S_0 \, \frac{\exp(j 2\pi W t) - \exp(-j 2\pi W t)}{j 2\pi t} = S_0 \, \frac{2 \sin(2\pi W t)}{2\pi t}

For sinc(x) = sin(x)/x:

R_{XX}(t) = 2 W S_0 \, \mathrm{sinc}(2\pi W t) = N_0 W \, \mathrm{sinc}(2\pi W t)
The Cross-Spectral Density

Why not form the power spectral response of the cross-correlation function?

The Fourier Transform in w:

S_{XY}(w) = \int R_{XY}(\tau) \exp(-j w \tau) \, d\tau \quad \text{and} \quad S_{YX}(w) = \int R_{YX}(\tau) \exp(-j w \tau) \, d\tau

R_{XY}(\tau) = \frac{1}{2\pi} \int S_{XY}(w) \exp(j w \tau) \, dw \quad \text{and} \quad R_{YX}(\tau) = \frac{1}{2\pi} \int S_{YX}(w) \exp(j w \tau) \, dw

Properties of the functions:

S_{XY}(w) = \mathrm{conj}(S_{YX}(w))

Since the cross-correlation is real, the real portion of the spectrum is even and the imaginary portion of the spectrum is odd.

There are no other important (assumed) properties to describe.
Section 9.3 Continuous-Time Linear Systems with Random Inputs

Linear system requirements:

Definition 9.3-1: Let x1(t) and x2(t) be two deterministic time functions and let a1 and a2 be two scalar constants. Let the linear system be described by the operator equation

y(t) = L\{x(t)\}

Then the system is linear if "linear superposition holds":

L\{a_1 x_1(t) + a_2 x_2(t)\} = a_1 L\{x_1(t)\} + a_2 L\{x_2(t)\}

for all admissible functions x1 and x2 and all scalars a1 and a2.

For x(t) a random process, y(t) will also be a random process.

Linear transformation of signals: convolution in the time domain

y(t) = h(t) * x(t)

Linear transformation of signals: multiplication in the Laplace domain

Y(s) = H(s) X(s)

The convolution integrals (applying a causal filter):

y(t) = \int_{0}^{\infty} h(\lambda) \, x(t - \lambda) \, d\lambda \quad \text{or} \quad y(t) = \int_{-\infty}^{t} h(t - \lambda) \, x(\lambda) \, d\lambda

where for physical realizability, causality, and stability constraints we require

h(t) = 0 \text{ for } t < 0 \quad \text{and} \quad \int |h(t)| \, dt < \infty
Example: Applying a linear filter to a random process

h(t) = 5 \exp(-3t), \quad t \ge 0

X(t) = 4 + M \cos(2\pi t + \Theta)

where M and \Theta are independent random variables, uniformly distributed on [0, 2] and [0, 2\pi) respectively.

We can perform the filter function since an explicit formula for the random process is known:

y(t) = \int_{-\infty}^{t} h(t - \lambda) \, x(\lambda) \, d\lambda

y(t) = \int_{-\infty}^{t} 5 \exp(-3(t - \lambda)) \left( 4 + M \cos(2\pi \lambda + \Theta) \right) d\lambda

The DC term integrates to 4 \cdot \frac{5}{3} = \frac{20}{3}. Writing the cosine as complex exponentials and integrating:

y(t) = \frac{20}{3} + \frac{5M}{9 + 4\pi^2} \left( 3 \cos(2\pi t + \Theta) + 2\pi \sin(2\pi t + \Theta) \right)

y(t) = \frac{20}{3} + \frac{5M}{\sqrt{9 + 4\pi^2}} \cos\left( 2\pi t + \Theta - \tan^{-1}\frac{2\pi}{3} \right)

Linear filtering will change the magnitude and phase of sinusoidal signals (DC too!):

X(t) = 4 + M \cos(2\pi t + \Theta) \quad \rightarrow \quad y(t) = \frac{5}{3} \cdot 4 + \frac{5M}{\sqrt{9 + 4\pi^2}} \cos\left( 2\pi t + \Theta - \tan^{-1}\frac{2\pi}{3} \right)
Expected value operator with linear systems
For a causal linear system we would have
y(t) = \int_{0}^{\infty} h(\lambda) \, x(t - \lambda) \, d\lambda

and taking the expected value

E[y(t)] = E\left[ \int_{0}^{\infty} h(\lambda) \, x(t - \lambda) \, d\lambda \right] = \int_{0}^{\infty} h(\lambda) \, E[x(t - \lambda)] \, d\lambda = \int_{0}^{\infty} h(\lambda) \, \mu_X(t - \lambda) \, d\lambda

For x(t) WSS:

E[y(t)] = \mu_X \int_{0}^{\infty} h(\lambda) \, d\lambda

Notice the condition for a physically realizable system!

The coherent gain of a filter is defined as:

h_{gain} = \int_{0}^{\infty} h(t) \, dt = H(0)

Therefore, E[Y(t)] = E[X] \cdot h_{gain} = E[X] \cdot H(0)

Note that:

H(f) = \int h(t) \exp(-j 2\pi f t) \, dt

For a causal filter:

H(f) = \int_{0}^{\infty} h(t) \exp(-j 2\pi f t) \, dt

At f = 0:

H(0) = \int_{0}^{\infty} h(t) \, dt

And E[y(t)] = \mu_X \, H(0).

What about a cross-correlation? (Converting an auto-correlation to a cross-correlation.)

For a linear system we would have
y(t) = \int h(\lambda) \, x(t - \lambda) \, d\lambda

Performing a cross-correlation (assuming real R.V.s and processing):

E[x(t_1) y(t_2)] = E\left[ x(t_1) \int h(\lambda) \, x(t_2 - \lambda) \, d\lambda \right] = \int h(\lambda) \, E[x(t_1) x(t_2 - \lambda)] \, d\lambda = \int h(\lambda) \, R_{XX}(t_1, t_2 - \lambda) \, d\lambda

For x(t) WSS:

E[x(t) y(t+\tau)] = R_{XY}(\tau) = \int h(\lambda) \, R_{XX}(\tau - \lambda) \, d\lambda

R_{XY}(\tau) = R_{XX}(\tau) * h(\tau)

What about the other way … YX instead of XY?

E[y(t) x(t+\tau)] = E\left[ \int h(\lambda) \, x(t - \lambda) \, d\lambda \; x(t+\tau) \right] = \int h(\lambda) \, R_{XX}(\tau + \lambda) \, d\lambda

Performing a change of variable from lambda to "-kappa" (assuming h(t) is real; see the text for the complex case):

R_{YX}(\tau) = \int h(-\kappa) \, R_{XX}(\tau - \kappa) \, d\kappa

Therefore

R_{YX}(\tau) = R_{XX}(\tau) * h(-\tau)

What about the auto-correlation of y(t)?

Performing an auto-correlation (assuming real R.V.s and processing):

E[y(t_1) y(t_2)] = R_{YY}(t_1, t_2) = E\left[ \int h(\lambda_1) \, x(t_1 - \lambda_1) \, d\lambda_1 \int h(\lambda_2) \, x(t_2 - \lambda_2) \, d\lambda_2 \right]

R_{YY}(t_1, t_2) = \iint h(\lambda_1) h(\lambda_2) \, R_{XX}(t_1 - \lambda_1, t_2 - \lambda_2) \, d\lambda_1 \, d\lambda_2

For x(t) WSS:

R_{YY}(\tau) = \iint h(\lambda_1) h(\lambda_2) \, R_{XX}(\tau + \lambda_1 - \lambda_2) \, d\lambda_1 \, d\lambda_2

R_{YY}(\tau) = R_{XX}(\tau) * h(\tau) * h(-\tau)
Example: White Noise Inputs to a causal filter

Let R_{XX}(\tau) = \frac{N_0}{2} \, \delta(\tau)

E[Y(t)^2] = R_{YY}(0) = \int_{0}^{\infty} \int_{0}^{\infty} h(\lambda_1) h(\lambda_2) \, R_{XX}(\lambda_1 - \lambda_2) \, d\lambda_1 \, d\lambda_2

E[Y(t)^2] = R_{YY}(0) = \frac{N_0}{2} \int_{0}^{\infty} \int_{0}^{\infty} h(\lambda_1) h(\lambda_2) \, \delta(\lambda_1 - \lambda_2) \, d\lambda_1 \, d\lambda_2

E[Y(t)^2] = R_{YY}(0) = \frac{N_0}{2} \int_{0}^{\infty} h(\lambda_1)^2 \, d\lambda_1
For a white noise process, the mean squared (or 2nd moment) is proportional to the filter power.
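For the filter h(t) = 5 exp(-3t) used in the earlier example, integral h^2 = 25/6, so a quick numerical check (a sketch) gives:

import numpy as np

N0, dt = 2.0, 1e-4
t = np.arange(0, 10, dt)                      # exp(-6*10) is negligible beyond this
h = 5*np.exp(-3*t)
print((N0/2)*np.sum(h**2)*dt, (N0/2)*25/6)    # both ~ 4.1667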
The Power Spectral Density Output of Linear Systems

The first cross-spectral density, R_{XY}(\tau) = R_{XX}(\tau) * h(\tau):

S_{XY}(w) = \int R_{XY}(\tau) \exp(-j w \tau) \, d\tau = \int \left( R_{XX}(\tau) * h(\tau) \right) \exp(-j w \tau) \, d\tau

Using convolution identities of the Fourier transform (if you want the proof, it isn't bad, just tedious):

S_{XY}(w) = S_{XX}(w) \, H(w)

The second cross-spectral density, R_{YX}(\tau) = R_{XX}(\tau) * h(-\tau):

S_{YX}(w) = \int R_{YX}(\tau) \exp(-j w \tau) \, d\tau = \int \left( R_{XX}(\tau) * h(-\tau) \right) \exp(-j w \tau) \, d\tau

Using convolution identities of the Fourier transform:

S_{YX}(w) = S_{XX}(w) \, H^*(w)

The output power spectral density becomes, from R_{YY}(\tau) = R_{XX}(\tau) * h(\tau) * h(-\tau):

S_{YY}(w) = \int R_{YY}(\tau) \exp(-j w \tau) \, d\tau = \int \left( R_{XX}(\tau) * h(\tau) * h(-\tau) \right) \exp(-j w \tau) \, d\tau

Using convolution identities of the Fourier transform:

S_{YY}(w) = S_{XX}(w) \, H(w) \, H^*(w) = S_{XX}(w) \, |H(w)|^2

This is a very significant result that provides a similar advantage for the power spectral density computation as the Fourier transform does for the convolution.