My Signals Notes
Dr. Mahmoud M. Al-Husari

Signals and Systems Lecture Notes

This set of lecture notes is never to be considered a substitute for the textbook recommended by the lecturer.



Contents

1 Introduction
  1.1 Signals and Systems Defined
  1.2 Types of Signals and Systems
    1.2.1 Signals
    1.2.2 Systems

2 Mathematical Description of Signals
  2.1 Classification of CT and DT Signals
    2.1.1 Periodic and non-periodic Signals
    2.1.2 Deterministic and Random Signals
    2.1.3 Signal Energy and Power
    2.1.4 Even and Odd Functions
  2.2 Useful Signal Operations
    2.2.1 Time Shifting
    2.2.2 Time Scaling
    2.2.3 Time Reflection
    2.2.4 Multiple Transformations
  2.3 Useful Signal Functions
    2.3.1 Complex Exponentials and Sinusoids
    2.3.2 The Unit Step Function
    2.3.3 The Signum Function
    2.3.4 The Unit Ramp Function
    2.3.5 The Rectangle Function
    2.3.6 The Unit Impulse Function
    2.3.7 Some Properties of the Unit Impulse
    2.3.8 The Unit Sinc Function

3 Description of Systems
  3.1 Introduction
  3.2 Systems Characteristics
    3.2.1 Memory
    3.2.2 Invertibility
    3.2.3 Causality
    3.2.4 Stability
    3.2.5 Time Invariance
    3.2.6 Linearity and Superposition
  3.3 Linear Time-invariant Systems
    3.3.1 Time-Domain Analysis of LTI Systems
    3.3.2 The Convolution Sum
  3.4 The Convolution Integral
  3.5 Properties of LTI Systems

4 The Fourier Series
  4.1 Orthogonal Representations of Signals
    4.1.1 Orthogonal Vector Space
    4.1.2 Orthogonal Signal Space
  4.2 Exponential Fourier Series
    4.2.1 The Frequency Spectra (Exponential)
  4.3 Trigonometric Fourier Series
    4.3.1 Compact (Combined) Trigonometric Fourier Series
    4.3.2 The Frequency Spectrum (Trigonometric)
  4.4 Convergence of the Fourier Series
    4.4.1 Dirichlet Conditions
  4.5 Properties of Fourier Series
    4.5.1 Linearity
    4.5.2 Time Shifting
    4.5.3 Frequency Shifting
    4.5.4 Time Reflection
    4.5.5 Time Scaling
    4.5.6 Time Differentiation
    4.5.7 Time Integration
    4.5.8 Multiplication
    4.5.9 Convolution
    4.5.10 Effects of Symmetry
    4.5.11 Parseval's Theorem
  4.6 System Response to Periodic Inputs

5 The Fourier Transform
  5.1 Development of the Fourier Transform
  5.2 Examples of The Fourier Transform
  5.3 Fourier Transform of Periodic Signals
  5.4 Properties of the Fourier Transform
    5.4.1 Linearity
    5.4.2 Time Shifting
    5.4.3 Frequency Shifting (Modulation)
    5.4.4 Time Scaling and Frequency Scaling
    5.4.5 Time Reflection
    5.4.6 Time Differentiation
    5.4.7 Frequency Differentiation
    5.4.8 Time Integration
    5.4.9 Time Convolution
    5.4.10 Frequency Convolution (Multiplication)
    5.4.11 Symmetry - Real and Imaginary Signals
    5.4.12 Symmetry - Even and Odd Signals
    5.4.13 Duality
    5.4.14 Energy of Non-periodic Signals
  5.5 Energy and Power Spectral Density
    5.5.1 The Spectral Density
    5.5.2 Energy Spectral Density
    5.5.3 Power Spectral Density
  5.6 Correlation Functions
    5.6.1 Energy Signals
    5.6.2 Power Signals
    5.6.3 Convolution and Correlation
    5.6.4 Autocorrelation
  5.7 Correlation and the Fourier Transform
    5.7.1 Autocorrelation and The Energy Spectrum
    5.7.2 Autocorrelation and the Power Spectrum

6 Applications of The Fourier Transform
  6.1 Signal Filtering
    6.1.1 Frequency Response
    6.1.2 Ideal Filters
    6.1.3 Bandwidth
    6.1.4 Practical Filters
  6.2 Amplitude Modulation
  6.3 Sampling
    6.3.1 The Sampling Theorem


Chapter 1

Introduction

1.1 Signals and Systems Defined

Course Objective: a mathematical study of signals and systems. Why study signals and systems? The tools taught here are fundamental to all engineering disciplines. Electrical engineering topics such as, but not limited to,

• Speech processing

• Image processing

• Communication

• Control systems

• Advanced Circuit Analysis

use the tools taught in this course extensively.

What is a signal? Vague definition: a signal is something that contains information. Many of the examples of signals shown in Figure 1.1 provide us with information in one way or another.

(a) Traffic Signals (b) Human Signals (c) ECG Graph

Figure 1.1: Examples of Signals

Formal Definition: A signal is defined as a function of one or more variables which conveys information on the nature of a physical phenomenon. In other words, any time-varying physical phenomenon which is intended to convey information is a signal.


Signals are processed or operated on by systems. What is a system? Formal Definition: A system is defined as an entity that manipulates one or more signals to accomplish a function, thereby yielding new signals. When one or more excitation signals are applied at one or more system inputs, the system produces one or more response signals at its outputs. (Throughout these lecture notes I will simply use the terms input and output.)

Figure 1.2: Block diagram of a simple system.

Systems with more than one input and more than one output are called MIMO (Multi-Input Multi-Output) systems. Figures 1.3 through 1.5 show examples of systems.

Figure 1.3: Communication between two people involving signals and signal processing by systems.

(a) Stock Market (b) Signals & System Course

Figure 1.4: Examples of systems

1.2 Types of Signals and Systems

Signals and systems are classified into two basic types:

• Continuous Time.

• Discrete Time.


Figure 1.5: Electric Circuit

1.2.1 Signals

A continuous-time (CT) signal is one which is defined at every instant of time over some time interval. CT signals are functions of a continuous time variable. We often refer to a CT signal as x(t). The independent variable is time t and can have any real value; the function x(t) is called a CT function because it is defined on a continuum of points in time.

Figure 1.6: Example of CT signal

It is very important to observe here that Figure 1.6(b) illustrates a discontinuous function. At a discontinuity, the limit of the function value as we approach the discontinuity from above is not the same as the limit as we approach the same point from below. Stated mathematically, if the time t = t0 is a point of discontinuity of a function g(t), then

lim_{ε→0} g(t0 + ε) ≠ lim_{ε→0} g(t0 − ε)

However, the two functions shown in Figure 1.6 are continuous-time functions because their values are defined on a continuum of times t (t ∈ R), where R is the set of all real values. Therefore, the terms continuous and continuous time mean different things. A CT function is defined on a continuum of times, but is not necessarily continuous at every point in time.


A discrete-time (DT) signal is one which is defined only at discrete points in time and not between them. The independent variable takes only a discrete set of values. We often refer to a DT signal as x[n], where n belongs to the set of all integers Z (n ∈ Z), i.e. n = 0, ±1, ±2, ... (Figure 1.7).

Figure 1.7: Example of DT function

1.2.2 Systems

A CT system transforms a continuous-time input signal into a CT output signal, as shown in Figure 1.8.

Figure 1.8: Continuous time system

Similarly, a DT system transforms a discrete-time input signal into a DT output signal.

In Engineering disciplines, problems that often arise are of the form

• Analysis problems

• Synthesis problems

In analysis problems one is usually presented with a specific system and is interested in characterizing it in detail to understand how it will respond to various inputs. On the other hand, synthesis problems require designing systems to process signals in a particular way to achieve desired outputs. Our main focus in this course is on analysis problems.


Chapter 2

Mathematical Description of Signals

2.1 Classification of CT and DT Signals

2.1.1 Periodic and non-periodic Signals

A periodic function is one which has been repeating an exact pattern for an infinite period of time and will continue to repeat that exact pattern for an infinite time. That is, a periodic function x(t) is one for which

x(t) = x(t + nT)  (2.1)

for any integer value of n, where T > 0 is the period of the function and −∞ < t < ∞. The signal repeats itself every T sec. Of course, it also repeats every 2T, 3T, and nT. Therefore, 2T, 3T, and nT are all periods of the function, because the function repeats over any of those intervals. The minimum positive interval over which a function repeats itself is called the fundamental period T0 (Figure 2.1). T0 is the smallest value that satisfies the condition

x(t) = x(t+ T0) (2.2)

Figure 2.1: Example of periodic CT function with fundamental period

The fundamental frequency f0 of a periodic function is the reciprocal of the fundamental period, f0 = 1/T0. It is measured in Hertz (Hz) and is the number of cycles (periods) per second. The fundamental angular frequency ω0, measured in radians per second, is

ω0 = 2π/T0 = 2πf0.  (2.3)

Example 2.1  With respect to the signal shown in Figure 2.2, determine the fundamental frequency and the fundamental angular frequency.

Figure 2.2: Signal of Example 2.1

Solution  It is clear that the fundamental period T0 = 0.2 sec. Thus,

f0 = 1/0.2 = 5 Hz
ω0 = 2πf0 = 10π rad/sec.

The signal repeats itself 5 times in one second, which can be clearly seen in Figure 2.2. ∎

Example 2.2  A real-valued sinusoidal signal x(t) can be expressed mathematically by

x(t) = A sin(ω0t + φ)  (2.4)

Show that x(t) is periodic.

Solution  For x(t) to be periodic it must satisfy the condition x(t) = x(t + T0), thus

x(t + T0) = A sin(ω0(t + T0) + φ)
          = A sin(ω0t + φ + ω0T0)

Recall that sin(α + β) = sin α cos β + cos α sin β, therefore

x(t + T0) = A[sin(ω0t + φ) cos ω0T0 + cos(ω0t + φ) sin ω0T0]  (2.5)

Substituting the fundamental period T0 = 2π/ω0 in (2.5) yields

x(t + T0) = A[sin(ω0t + φ) cos 2π + cos(ω0t + φ) sin 2π]
          = A sin(ω0t + φ)
          = x(t)  ∎

An important question for signal analysis is whether or not the sum of two periodic signals is periodic. Suppose that x1(t) and x2(t) are periodic signals with fundamental periods T1 and T2, respectively. Then is the sum x1(t) + x2(t) periodic; that is, is there a positive number T such that

x1(t + T) + x2(t + T) = x1(t) + x2(t) for all t?  (2.6)

It turns out that (2.6) is satisfied if and only if the ratio T1/T2 can be written as the ratio k/l of two integers k and l. This can be shown by noting that if T1/T2 = k/l, then lT1 = kT2, and since k and l are integers, x1(t) + x2(t) is periodic with period lT1. Thus expression (2.6) follows with T = lT1. In addition, if k and l are co-prime (i.e., k and l have no common integer factors other than 1), then T = lT1 is the fundamental period of the sum x1(t) + x2(t).

Example 2.3  Let x1(t) = cos(πt/2) and x2(t) = cos(πt/3). Determine if x1(t) + x2(t) is periodic.

Solution  x1(t) and x2(t) are periodic with the fundamental periods T1 = 4 (since ω1 = π/2 = 2π/T1 ⟹ T1 = 4) and similarly T2 = 6. Now

T1/T2 = 4/6 = 2/3

then with k = 2 and l = 3, it follows that the sum x1(t) + x2(t) is periodic with fundamental period T = lT1 = (3)(4) = 12 sec. ∎
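The k/l test above can be turned into a small numeric check. The following sketch (my own, not from the notes) uses exact rational arithmetic so that the ratio T1/T2 is reduced to lowest terms automatically:

```python
from fractions import Fraction

def sum_period(T1, T2):
    """Fundamental period of x1(t) + x2(t), given fundamental periods
    T1 and T2 whose ratio T1/T2 = k/l is rational (k, l co-prime)."""
    ratio = Fraction(T1) / Fraction(T2)  # reduced automatically to k/l
    l = ratio.denominator
    return l * Fraction(T1)              # T = l*T1 (equivalently k*T2)

# Example 2.3: T1 = 4, T2 = 6 gives T1/T2 = 2/3, so T = 3*4 = 12 sec
print(sum_period(4, 6))  # 12
```

If T1/T2 is irrational, no such k and l exist and the sum is not periodic; this sketch assumes the periods are supplied as exact rationals.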

2.1.2 Deterministic and Random Signals

Deterministic signals are signals that are completely defined for any instant of time; there is no uncertainty with respect to their value at any point of time. They can also be described mathematically, at least approximately. Let a signal x(t) be defined as (Figure 2.3)

x(t) = { 1 − |t|,  −1 < t < 1
       { 0,        otherwise

It is clear that this function is well defined mathematically (Figure 2.3).

Figure 2.3: Example of deterministic signal.

A random signal is one whose values cannot be predicted exactly and cannot be described by any mathematical function. A common name for random signals is noise (Figure 2.4).

Figure 2.4: Examples of Noise


2.1.3 Signal Energy and Power

Size of a Signal

The size of any entity is a number that indicates the largeness or strength of that entity. Generally speaking, the signal amplitude varies with time. How can a signal such as the one shown in Figure 2.5, which exists over a certain time interval with varying amplitude, be measured by one number that indicates the signal size or signal strength? One must consider not only the signal amplitude but also its duration.

Figure 2.5: What is the size of a signal?

If, for instance, one wants to measure the size of a human by a single number, one must consider not only his height but also his width. If we make the simplifying assumption that the shape of a person is a cylinder of variable radius r (which varies with height h), then a reasonable measure of the size of a human of height H is his volume, given by

V = π ∫_0^H r²(h) dh

Arguing in this manner, we may consider the area under a signal as a possible measure of its size, because it takes account of not only the amplitude but also the duration. However, this would be a defective measure, because for a large signal the positive and negative areas could cancel each other, indicating a signal of small size. This difficulty can be corrected by defining the signal size as the area under the square of the signal, which is always positive. We call this measure the signal energy E∞, defined for a real signal x(t) as

E∞ = ∫_{−∞}^{∞} x²(t) dt  (2.7)

This can be generalized to a complex-valued signal as

E∞ = ∫_{−∞}^{∞} |x(t)|² dt  (2.8)

(Note that for complex signals |x(t)|² = x(t)x*(t), where x*(t) is the complex conjugate of x(t).) Signal energy for a DT signal is defined in an analogous way as

E∞ = Σ_{n=−∞}^{∞} |x[n]|²  (2.9)

Example 2.4  Find the signal energy of

x(t) = { A,  |t| < T1/2
       { 0,  otherwise

Solution  From the definition in (2.7),

E∞ = ∫_{−∞}^{∞} x²(t) dt = ∫_{−T1/2}^{T1/2} A² dt = [A²t]_{−T1/2}^{T1/2} = A²T1.  ∎

(Plotting x(t) is helpful, as sometimes you do not need to evaluate the integral; you can find the area under the square of the signal from the graph instead.)


For many signals encountered in signal and system analysis, neither the integral in

E∞ = ∫_{−∞}^{∞} |x(t)|² dt

nor the summation

E∞ = Σ_{n=−∞}^{∞} |x[n]|²

converges, because the signal energy is infinite. The signal energy must be finite for it to be a meaningful measure of the signal size. This usually occurs because the signal is not time-limited (time-limited means the signal is nonzero over only a finite time). An example of a CT signal with infinite energy is the sinusoidal signal

x(t) = A cos(2πf0t).

For signals of this type, it is usually more convenient to deal with the average signal power instead of the signal energy. The average signal power of a CT signal is defined by

P∞ = lim_{T→∞} (1/T) ∫_{−T/2}^{T/2} |x(t)|² dt  (2.10)

Some references use the definition

P∞ = lim_{T→∞} (1/2T) ∫_{−T}^{T} |x(t)|² dt  (2.11)

Note that the integral in (2.10) is the signal energy of the signal over a time T, which is then divided by T, yielding the average signal power over time T. Then, as T approaches infinity, this average signal power becomes the average signal power over all time. Observe also that the signal power P∞ is the time average (mean) of the squared signal amplitude, that is, the mean-squared value of x(t). Indeed, the square root of P∞ is the familiar rms (root mean square) value of x(t): rms = √P∞.

For DT signals the definition of signal power is

P∞ = lim_{N→∞} (1/2N) Σ_{n=−N}^{N−1} |x[n]|²  (2.12)

which is the average signal power over all discrete time.

For periodic signals, the average signal power calculation may be simpler. The average value of any periodic function is the average over any period. Therefore, since the square of a periodic function is also periodic, for periodic CT signals

P∞ = (1/T) ∫_T |x(t)|² dt  (2.13)

where the notation ∫_T means integration over one period (T can be any period, but one usually chooses the fundamental period).


Example 2.5  Find the signal power of

x(t) = A cos(ω0t + φ)

Solution  From the definition of signal power for a periodic signal in (2.13),

P∞ = (1/T) ∫_T |A cos(ω0t + φ)|² dt = (A²/T0) ∫_{−T0/2}^{T0/2} cos²((2π/T0)t + φ) dt  (2.14)

Using the trigonometric identity

cos(α) cos(β) = (1/2)[cos(α − β) + cos(α + β)]

in (2.14), we get

P∞ = (A²/2T0) ∫_{−T0/2}^{T0/2} [1 + cos((4π/T0)t + 2φ)] dt  (2.15)

   = (A²/2T0) ∫_{−T0/2}^{T0/2} dt + (A²/2T0) ∫_{−T0/2}^{T0/2} cos((4π/T0)t + 2φ) dt  (2.16)

The second integral on the right-hand side of (2.16) is zero because it is the integral of a sinusoid over exactly two fundamental periods. Therefore, the power is P∞ = A²/2. Notice that this result is independent of the phase φ and the angular frequency ω0; it depends only on the amplitude A. ∎

Example 2.6  Find the power of the signal shown in Figure 2.6.

Figure 2.6: A periodic square pulse

Solution  From the definition of signal power for a periodic signal,

P∞ = (1/T) ∫_T |x(t)|² dt = (1/0.5)[∫_0^{0.25} 1² dt + ∫_{0.25}^{0.5} (−1)² dt] = 1  ∎


Comment  The signal energy as defined in (2.7) or (2.8) does not indicate the actual energy of the signal, because the signal energy depends not only on the signal but also on the load. To make this point clearer, assume we have a voltage signal v(t) across a resistor R; the actual energy delivered to the resistor by the voltage signal would be

Energy = ∫_{−∞}^{∞} |v(t)|²/R dt = (1/R) ∫_{−∞}^{∞} |v(t)|² dt = E∞/R

The signal energy is proportional to the actual physical energy delivered by the signal, and the proportionality constant, in this case, is R. However, one can always interpret signal energy as the energy dissipated in a normalized load of a 1 Ω resistor. Furthermore, the units of the signal energy depend on the units of the signal. For a voltage signal whose unit is the volt (V), the signal energy is expressed in V²·s. Parallel observations apply to the signal power defined in (2.11).

Signals which have finite signal energy are referred to as energy signals, and signals which have infinite signal energy but finite average signal power are referred to as power signals. Observe that power is the time average of energy. Since the averaging is over an infinitely large interval, a signal with finite energy has zero power, and a signal with finite power has infinite energy. Therefore, a signal cannot be both an energy signal and a power signal. On the other hand, there are signals that are neither energy nor power signals; the ramp signal is one such example. Figure 2.7 shows examples of CT and DT energy and power signals.

Figure 2.7: Examples of CT and DT energy and power signals

2.1.4 Even and Odd Functions

A function g(t) is said to be an even function of t if

g(t) = g(−t)


and a function g(t) is said to be an odd function of t if

g(t) = −g(−t)

An even function has the same value at the instants t and −t for all values of t. Clearly, g(t) in this case is symmetrical about the vertical axis (the vertical axis acts as a mirror), as shown in Figure 2.8. On the other hand, the value of an odd function at the instant t is the negative of its value at the instant −t. Therefore, g(t) in this case is anti-symmetrical about the vertical axis, as depicted in Figure 2.8.

Any function x(t) can be expressed as a sum of its even and odd components:

x(t) = xe(t) + xo(t).

Figure 2.8: An even and an odd function of t

The even and odd parts of a function x(t) are

xe(t) = [x(t) + x(−t)]/2,    xo(t) = [x(t) − x(−t)]/2  (2.17)

The most important even and odd functions in signal analysis are cosines and sines. Cosines are even, and sines are odd.
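Equation (2.17) translates directly into code. The sketch below (my own, with an arbitrary test signal) samples x(t) on a grid symmetric about t = 0, so that reversing the sample array gives x(−t):

```python
import numpy as np

t = np.linspace(-3, 3, 601)        # symmetric grid, so x[::-1] samples x(-t)
x = np.exp(-t) * (t > 0)           # an arbitrary test signal (neither even nor odd)
xe = (x + x[::-1]) / 2             # even part,  xe(t) = [x(t) + x(-t)] / 2
xo = (x - x[::-1]) / 2             # odd part,   xo(t) = [x(t) - x(-t)] / 2

assert np.allclose(xe + xo, x)     # the two parts reconstruct x
assert np.allclose(xe, xe[::-1])   # xe is even
assert np.allclose(xo, -xo[::-1])  # xo is odd
```

The three assertions mirror the definitions: the decomposition is exact, and each part has the claimed symmetry.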

Some properties of Even and Odd Functions

• Even function × odd function = odd function.

• Odd function × odd function = even function.

• Even function × even function = even function.

• For even functions, ∫_{−a}^{a} x(t) dt = 2 ∫_0^a x(t) dt (Figure 2.9).

• For odd functions, ∫_{−a}^{a} x(t) dt = 0 (Figure 2.9).

Figure 2.9: Integrals of an even and an odd function


• If the odd part of a function is zero, the function is even.

• If the even part of a function is zero, the function is odd.

Figure 2.10 shows some examples of products of even and odd CT functions.

(a) Product of two even functions (b) Product of even and odd functions

(c) Product of even and odd functions (d) Product of two odd functions

Figure 2.10: Examples of products of even and odd CT functions

Example 2.7  Determine the even and odd components of the signal shown in Figure 2.11.

Figure 2.11: Finding even and odd components of x(t)

Solution  Using (2.17), the even and odd components of x(t) are found; they are shown in Figure 2.12. ∎

Figure 2.12: Even and odd components of x(t)


2.2 Useful Signal Operations

Three useful signal operations are discussed here: shifting (also called time translation), scaling, and reflection.

2.2.1 Time Shifting

Time shifting is a transformation of the independent variable. Consider a signal f(t) as shown in Figure 2.13(a), and the same signal delayed by t0 seconds, shown in Figure 2.13(b), which we shall denote φ(t).

Figure 2.13: Time shifting a signal

Whatever happens in f(t) at some instant t also happens in φ(t) but t0 seconds later, at the instant t + t0. Therefore,

φ(t + t0) = f(t)

and

φ(t) = f(t − t0).

Therefore, to time shift a signal by t0, we replace t with t − t0. Thus f(t − t0) represents f(t) time shifted by t0 seconds. If t0 is positive, the shift is to the right (delay). If t0 is negative, the shift is to the left (advance).

Example 2.8  Let a signal be defined as follows:

x(t) = { 1,        −1 < t < 0
       { 1 − t/2,   0 < t < 2
       { 0,         otherwise

Sketch x(t − 1).


Solution  First plot the signal x(t) to visualize the signal and the important points in time (Figure 2.14).

Figure 2.14: A signal x(t)

We can begin to understand how to make this transformation by computing the values of x(t − 1) at a few selected points, as shown in Table 2.1. Next, plot x(t − 1) as a function of t (Figure 2.15). It should be apparent that replacing t by t − 1 has the effect of shifting the function one unit to the right. ∎

Table 2.1

  t    t − 1   x(t − 1)
 −2     −3       0
 −1     −2       0
  0     −1       1
  1      0       1
  2      1       0.5
  3      2       0
  4      3       0
  5      4       0

Figure 2.15: Selected values of x(t − 1)

2.2.2 Time Scaling

The compression or expansion of a signal in time is known as time scaling. Consider the signal x(t) of Figure 2.14. If x(t) is to be compressed in time by a factor a (a > 1), the resulting signal φ(t) is given by

φ(t) = x(at)

Assume a = 2; then φ(t) = x(2t). Construct a table similar to Table 2.1, paying particular attention to the turning points of the original signal, as shown in Table 2.2. Next, plot x(2t) as a function of t (Figure 2.16). On the other hand,

Table 2.2

   t     2t   x(2t)
 −2      −4     0
 −1.5    −3     0
 −1      −2     0
 −0.5    −1     1
  0       0     1
  0.5     1     0.5
  1       2     0
  1.5     3     0
  2       4     0

Figure 2.16: Selected values of x(2t)

if x(t) is to be expanded (stretched) in time by a factor a (a > 1), the resulting signal φ(t) is given by

φ(t) = x(t/a)


Assume a = 2; then φ(t) = x(t/2), and following the same procedure as earlier, the expansion can be seen clearly in Figure 2.17.

Figure 2.17: Stretched version of x(t).

In summary, to time scale a signal by a factor a, we replace t with at. If a > 1, the scaling results in compression, and if a < 1, the scaling results in expansion.

2.2.3 Time Reflection

Also called time reversal; the reflected signal is φ(t) = x(−t). Observe that whatever happens at the time instant t also happens at the instant −t. The mirror image of x(t) about the vertical axis is x(−t). Recall that the mirror image of x(t) about the horizontal axis is −x(t). Figure 2.18 shows a discrete-time example of time reflection.

Figure 2.18: A function g[n] and its reflected version g[−n].

2.2.4 Multiple Transformations

All three transformations (time shifting, time scaling, and time reflection) can be applied simultaneously; for example,

φ(t) = x((t − t0)/a)  (2.18)

To understand the overall effect, it is usually best to break down a transformation like (2.18) into successive simple transformations:

x(t)  --(t → t/a)-->  x(t/a)  --(t → t − t0)-->  x((t − t0)/a)  (2.19)

Observe here that the order of the transformations is important. For example, if we exchange the order of the time-scaling and time-shifting operations in (2.19), we get

x(t)  --(t → t − t0)-->  x(t − t0)  --(t → t/a)-->  x(t/a − t0)  ≠  x((t − t0)/a)

The result of this sequence of transformations is different from the preceding result. We could have obtained the same preceding result if we first observe that

x((t − t0)/a) = x(t/a − t0/a).

Then we could time-shift first and time-scale second, yielding

x(t)  --(t → t − t0/a)-->  x(t − t0/a)  --(t → t/a)-->  x(t/a − t0/a) = x((t − t0)/a).


For a different transformation, a different sequence may be better; for example,

x(bt − t0)

In this case, the sequence of time shifting and then time scaling is the simplest path to the correct transformation:

x(t)  --(t → t − t0)-->  x(t − t0)  --(t → bt)-->  x(bt − t0)

Example 2.9  Let a signal be defined as in Example 2.8. Determine and plot x(−3t − 4).

Solution  Method 1: Construct a table to compute the values of x(−3t − 4) at a few selected points, as shown in Table 2.3. Next, plot x(−3t − 4) as a function of t (Figure 2.19).

Table 2.3

   t     t′ = −3t − 4   x(−3t − 4)
 −2/3        −2            0
 −1          −1            1
 −4/3         0            1
 −5/3         1            0.5
 −2           2            0
 −7/3         3            0

Figure 2.19: Selected values of x(−3t − 4)

Comment: When solving using the method of constructing a table like Table 2.3, it is much easier to start from the second column, i.e. the time-transformation argument of the function. The time-transformation argument in this example is −3t − 4, which can be labeled t′. Start with a few selected points of t′, find the corresponding t points, and fill in the column corresponding to t. This can be done easily by writing an expression for t in terms of t′: t = −(t′ + 4)/3. Finally, plot x(−3t − 4) as a function of t.

Method 2: We do the transformation graphically, paying particular attention to the correct sequence of transformations. We can consider the following sequence:

x(t)  --(t → −3t)-->  x(−3t)  --(t → t + 4/3)-->  x(−3(t + 4/3)) = x(−3t − 4)

as shown in Figure 2.20(a). Alternatively,

x(t)  --(t → t − 4)-->  x(t − 4)  --(t → −3t)-->  x(−3t − 4)

as shown in Figure 2.20(b).


(a) First reflect and scale, then shift. (b) Alternative sequence of transformations.

Figure 2.20: Method 2 solutions to Example 2.9.

2.3 Useful Signal Functions

2.3.1 Complex Exponentials and Sinusoids

Some of the most commonly used mathematical functions for describing signals should already be familiar: the CT sinusoids

x(t) = A cos(2πt/T0 + φ) = A cos(ω0t + φ) = A cos(2πf0t + φ)

where

• A = amplitude of sinusoid or exponential

• T0 = real fundamental period of sinusoid

• f0 = real fundamental frequency of sinusoid, Hz

• ω0 = 2πf0 = real fundamental angular frequency of sinusoid, radians per second (rad/s)

Another important function in the area of signals and systems is the exponential signal e^{st}, where s is in general complex. An exponential function in its most general form is written as

x(t) = Ce^{st}

where both C and s can be real or complex numbers.


Reminder: It is very useful here to remember the following Euler identities:

e^{jφ} = cos(φ) + j sin(φ)    (2.20)

e^{−jφ} = cos(φ) − j sin(φ)    (2.21)

cos(φ) = (e^{jφ} + e^{−jφ})/2    (2.22)

sin(φ) = (e^{jφ} − e^{−jφ})/(2j)    (2.23)

The exponential signal x(t) = Ce^{st} is studied in more detail by writing

C = Ae^{jφ}    (polar form)
s = σ + jω    (rectangular form)

Therefore,

x(t) = Ae^{jφ}e^{(σ+jω)t}
     = Ae^{σt}e^{jφ}e^{jωt}
     = A       · e^{σt}    · e^{j(ωt+φ)}
      (Term I)  (Term II)   (Term III)

Where

• Term I, A = the amplitude of the complex exponential

• Term II, eσt = real exponential function (Figure 2.21)

Figure 2.21: The real exponential signal e^{σt} for σ < 0, σ = 0, and σ > 0.

• Term III, e^{j(ωt+φ)} = complex exponential function. Using Euler identity (2.20),

e^{j(ωt+φ)} = cos(ωt + φ) + j sin(ωt + φ)

Figure 2.22 shows different examples of the real part of the function x(t) = Ae^{jφ}e^{(σ+jω)t}.


Figure 2.22: The real part of x(t) = Ae^{σt}e^{j(ωt+φ)}, with a growing envelope e^{σt} for σ > 0 and a decaying envelope for σ < 0.
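The three-term decomposition is easy to confirm numerically; a minimal sketch with arbitrary illustrative parameter values (not taken from the notes):

```python
import numpy as np

# Arbitrary illustrative parameters: amplitude, phase, decay rate, frequency.
A, phi, sigma, omega = 2.0, np.pi / 4, -0.5, 2 * np.pi

t = np.linspace(0.0, 3.0, 601)
x = A * np.exp(1j * phi) * np.exp((sigma + 1j * omega) * t)

# Term II is the real envelope A e^{sigma t}; Term III gives the oscillation.
envelope = A * np.exp(sigma * t)
assert np.allclose(x.real, envelope * np.cos(omega * t + phi))
assert np.allclose(np.abs(x), envelope)
```

The real part oscillates at frequency ω inside the exponential envelope, exactly as Figure 2.22 depicts.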

Some properties of Complex Exponential Functions

The complex exponential function e^{j(ω0t+φ)} has a number of important properties.

• It is periodic with fundamental period T0 = 2π/|ω0|, since

e^{j(ω0t+φ)} = e^{j(ω0(t + 2πk/ω0) + φ)}

for any k ∈ Z. Note that e^{j2πk} = 1 for any k ∈ Z.

• Re{e^{j(ω0t+φ)}} = cos(ω0t + φ) and Im{e^{j(ω0t+φ)}} = sin(ω0t + φ); these terms are sinusoids of frequency ω0.

• The term φ is often called the phase. Note that we can write

e^{j(ω0t+φ)} = e^{jω0(t + φ/ω0)}

which implies that the phase has the effect of time shifting the signal.

• Since complex exponential functions are periodic, they have infinite total energy but finite power:

P∞ = (1/T) ∫_T |e^{jω0t}|² dt = (1/T) ∫_t^{t+T} 1 dτ = 1

• A set of periodic exponentials with fundamental frequencies that are multiples of a single positive frequency ω0 is said to be a set of harmonically related complex exponentials:

Θk(t) = e^{jkω0t}  for k = 0, ±1, ±2, ⋯    (2.24)

  – k = 0 ⇒ Θk(t) is a constant.


  – k ≠ 0 ⇒ Θk(t) is periodic with fundamental frequency |k|ω0 and fundamental period 2π/(|k|ω0) = T0/|k|. Note that each exponential in (2.24) is also periodic with period T0.

  – Θk(t) is called the kth harmonic. Harmonic (from music): tones resulting from variations in acoustic pressure that are integer multiples of a fundamental frequency.
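These properties can be spot-checked numerically; a minimal sketch (ω0 = 3 rad/s is an arbitrary choice):

```python
import numpy as np

omega0 = 3.0                          # arbitrary positive fundamental frequency
T0 = 2 * np.pi / abs(omega0)          # fundamental period T0 = 2*pi/|omega0|

t = np.linspace(0.0, 2 * T0, 4001)
x = np.exp(1j * omega0 * t)

# Periodicity: x(t + T0) = x(t).
assert np.allclose(np.exp(1j * omega0 * (t + T0)), x)

# Average power is 1, since |e^{j w0 t}|^2 = 1 everywhere.
assert abs(np.mean(np.abs(x) ** 2) - 1.0) < 1e-12

# The k-th harmonic e^{j k w0 t} repeats with the shorter period T0/|k|.
k = 3
assert np.allclose(np.exp(1j * k * omega0 * (t + T0 / k)),
                   np.exp(1j * k * omega0 * t))
```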

2.3.2 The Unit Step Function

Figure 2.23: The CT unit step function

A CT unit step function is defined as (Figure 2.23)

u(t) = { 1, t > 0
         0, t < 0    (2.25)

This function is called the unit step because the height of the step change in function value is one unit in the system of units used to describe the signal. The function is discontinuous at t = 0, since the function changes instantaneously from 0 to 1 at t = 0. It will be shown later that u(0) = 1/2.

Some authors define the unit step by

u(t) = { 1, t ≥ 0        or    u(t) = { 1, t > 0
         0, t < 0                       0, t ≤ 0

For most analysis purposes these definitions are all equivalent.

The unit step is defined and used in signal and system analysis because it can mathematically represent a very common action in real physical systems: fast switching from one state to another. For example, in the circuit shown in Figure 2.24 the switch moves from one position to the other at time t = 0. The voltage applied to the RC circuit can be described mathematically by v0(t) = vs(t)u(t).

Figure 2.24: Simple RC circuit

The unit step function is very useful in specifying a function with different mathematical descriptions over different intervals. Consider, for example, the rectangular pulse depicted in Figure 2.25(a). We can express such a pulse in terms of the unit step function by observing that it can be expressed as the sum of two delayed unit step functions, as shown in Figure 2.25(b). The unit step function delayed by t0 seconds is u(t − t0). From Figure 2.25(b), it is clear that one way of expressing the pulse of Figure 2.25(a) is u(t − 2) − u(t − 4).

The DT Unit Step Function

The DT counterpart of the CT unit step function u(t) is u[n], also called the unit sequence (Figure 2.26), defined by

Figure 2.26: The DT unit step function

u[n] = { 1, n ≥ 0
         0, n < 0    (2.26)

For this function there is no disagreement or ambiguity about its value at n = 0: it is one.


Figure 2.25: (a) A rectangular pulse; (b) its representation by step functions: the unit step delayed by 2 units, u(t − 2), and the unit step delayed by 4 units, u(t − 4).
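A hedged numerical sketch of this pulse construction, adopting the convention u(0) = 1/2 mentioned above (any of the step conventions would give the same pulse away from the edges):

```python
import numpy as np

def u(t):
    """CT unit step sampled on a grid: 1 for t > 0, 0 for t < 0, u(0) = 1/2."""
    t = np.asarray(t, dtype=float)
    return np.where(t > 0, 1.0, np.where(t < 0, 0.0, 0.5))

t = np.linspace(0.0, 6.0, 601)
pulse = u(t - 2) - u(t - 4)       # the rectangular pulse of Figure 2.25(a)

assert float(u(3.0)) == 1.0 and float(u(-3.0)) == 0.0
assert float(pulse[300]) == 1.0   # t = 3 s: inside the pulse
assert float(pulse[500]) == 0.0   # t = 5 s: outside the pulse
```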

2.3.3 The Signum Function

The signum function (Figure 2.27) is closely related to the unit step function. It is sometimes called the sign function, but the name signum is more common, so as not to confuse the sounds of the two words sign and sine! The signum function is defined as

Figure 2.27: The CT signum function

sgn(t) = {  1, t > 0
           −1, t < 0    (2.27)

Note that sgn(t) = 2u(t) − 1 for t ≠ 0.

2.3.4 The Unit Ramp Function

Another type of signal that occurs in systems is one which is switched on at some time and changes linearly after that time, or one which changes linearly before some time and is switched off at that time. Figure 2.28 illustrates some examples.

Figure 2.28: Functions that change linearly before or after some time

Figure 2.29: The CT unit ramp function

Signals of this kind can be described with the use of the ramp function. TheCT unit ramp function (Figure 2.29) is the integral of the unit step function. It


is called the unit ramp function because, for positive t, its slope is one

ramp(t) = { t, t > 0
            0, t ≤ 0   = ∫_{−∞}^{t} u(λ) dλ = t u(t)    (2.28)

The integral relationship in (2.28) between the CT unit step and CT unit ramp functions is shown below in Figure 2.30.

Figure 2.30: Illustration of the integral relationship between the CT unit step and the CT unit ramp.

Example 2.10  Describe the signal shown in Figure 2.31 in terms of unit step functions.

Solution  The signal x(t) illustrated in Figure 2.31 can be conveniently handled by breaking it up into the two components x1(t) and x2(t) shown in Figure 2.32. Here, x1(t) can be obtained by multiplying the ramp t by the pulse u(t) − u(t − 2), as shown in Figure 2.32. Therefore,

Figure 2.31: A signal x(t).

x1(t) = t[u(t)− u(t− 2)].

Similarly,

x2(t) = −2(t − 3)[u(t − 2) − u(t − 3)]

and

x(t) = x1(t) + x2(t)
     = t[u(t) − u(t − 2)] − 2(t − 3)[u(t − 2) − u(t − 3)]
     = t u(t) − 3(t − 2)u(t − 2) + 2(t − 3)u(t − 3). ∎
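The algebraic collection in the last line of the solution can be verified numerically; a sketch (the step convention u(0) = 1 is an arbitrary choice here and does not affect the identity):

```python
import numpy as np

def u(t):
    t = np.asarray(t, dtype=float)
    return np.where(t >= 0, 1.0, 0.0)   # u(0) = 1 chosen arbitrarily

t = np.linspace(-2.0, 6.0, 801)

# Piecewise construction of Example 2.10:
x1 = t * (u(t) - u(t - 2))
x2 = -2 * (t - 3) * (u(t - 2) - u(t - 3))
x = x1 + x2

# Collected unit-step form from the worked solution:
x_steps = t * u(t) - 3 * (t - 2) * u(t - 2) + 2 * (t - 3) * u(t - 3)

assert np.allclose(x, x_steps)   # the two descriptions agree everywhere
```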

2.3.5 The Rectangle Function

A very common type of signal occurring in systems is one in which a signal is switched on at some time and then back off at a later time. The rectangle function (Figure 2.33) is defined as

Figure 2.33: The CT rectangle function.

rect(t/τ) = { 1, |t| < τ/2
              0, otherwise    (2.29)


Figure 2.32: Description of the signal of Example 2.10 in terms of unit step functions: the ramp t and the pulse u(t) − u(t − 2) form x1(t); the ramp −2(t − 3) and the pulse u(t − 2) − u(t − 3) form x2(t).

The notation used here is convenient: τ represents the width of the rectangle function, while the rectangle centre is at zero; therefore any time transformation can easily be applied to the notation in (2.29) (Figure 2.34).

Figure 2.34: A shifted rect function, 2 rect((t + 1)/4), with width four seconds and centre at −1.

A special case of the rectangle function defined in (2.29) is when τ = 1; it is called the unit rectangle function, rect(t) (also called the square pulse). It is a unit rectangle function because its width, height, and area are all one. Note that one can always perform time transformations on the unit rectangle function to arrive at the same notation used in (2.29). The signal in Figure 2.34 could be obtained by scaling and shifting the unit rectangle function; however, the notation used in (2.29) is much easier when handling rectangle functions.
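A minimal sketch of the rect notation in (2.29), reproducing the shifted, scaled pulse of Figure 2.34 (edge values at |t| = τ/2 are set to 0 as an implementation choice):

```python
import numpy as np

def rect(t, tau=1.0):
    """rect(t/tau): 1 for |t| < tau/2, 0 otherwise (edges set to 0)."""
    t = np.asarray(t, dtype=float)
    return np.where(np.abs(t) < tau / 2, 1.0, 0.0)

t = np.linspace(-5.0, 5.0, 1001)

# The pulse of Figure 2.34: 2 rect((t + 1)/4) -- width 4 s, centre -1, height 2.
y = 2 * rect(t + 1, tau=4.0)

assert float(rect(0.0)) == 1.0                  # unit rectangle at its centre
assert float(rect(0.75)) == 0.0                 # outside |t| < 1/2
assert float(y[t.searchsorted(-1.0)]) == 2.0    # centre of the shifted pulse
```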

2.3.6 The Unit Impulse Function

The unit impulse function δ(t), also called the delta function, is one of the most important functions in the study of signals and systems, and yet the strangest. It was first defined by P.A.M. Dirac (the function is sometimes called, after him, the Dirac distribution) as

δ(t) = 0,  t ≠ 0    (2.30)

∫_{−∞}^{∞} δ(t) dt = 1    (2.31)

Try to visualise this function: a signal of unit area that equals zero everywhere except at t = 0, where it is undefined! To be able to understand the definition of the delta function, let us consider a unit-area rectangular pulse defined by the function (Figure 2.35)

δa(t) = { 1/a, |t| < a/2
          0,   |t| > a/2    (2.32)


Figure 2.35: A unit-area rectangular pulse of width a

Now imagine taking the limit of the function δa(t) as a approaches zero. Try to visualise what will happen: the width of the rectangular pulse becomes infinitesimally small, its height becomes infinitely large, and the overall area is maintained at unity. This approach is used to approximate the unit impulse, which is now defined as

δ(t) = lim_{a→0} δa(t)    (2.33)

Other pulses, such as a triangular pulse, may also be used in impulse approximations (Figure 2.36). The area under an impulse is called its strength, or

Figure 2.36: A unit-area triangular pulse

sometimes its weight. An impulse with a strength of one is called a unit impulse. The impulse cannot be graphed in the same way as other functions because its amplitude is undefined at t = 0. For this reason a unit impulse is represented by a vertical arrow, a spear-like symbol. Sometimes the strength of the impulse is written beside it in parentheses, and sometimes the height of the arrow indicates the strength of the impulse. Figure 2.37 illustrates some ways of representing impulses graphically.
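The limiting construction in (2.33) can be watched numerically; a sketch of the unit-area pulse δa(t) of (2.32) as a shrinks (the grid resolution is an arbitrary choice):

```python
import numpy as np

def delta_a(t, a):
    """The unit-area rectangular pulse of (2.32): height 1/a over |t| < a/2."""
    t = np.asarray(t, dtype=float)
    return np.where(np.abs(t) < a / 2, 1.0 / a, 0.0)

t, dt = np.linspace(-1.0, 1.0, 200001, retstep=True)
for a in (0.5, 0.1, 0.01):
    area = np.sum(delta_a(t, a)) * dt       # numerical area under the pulse
    print(f"a = {a:5.2f}   height = {1 / a:6.1f}   area = {area:.3f}")
```

The height grows without bound while the printed area stays at unity, which is exactly the picture behind (2.33).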

2.3.7 Some Properties of the Unit Impulse

Multiplication of a Function by an Impulse

A common mathematical operation that occurs in signals and systems analysis is the multiplication of an impulse by another function g(t) that is known to be continuous and finite at t = 0 (i.e. g(0) exists). We obtain

g(t)δ(t) = g(0)δ(t)    (2.34)


Figure 2.37: Graphical representation of impulses

since the impulse exists only at t = 0. It is useful here to visualise the above product of the two functions. The unit impulse δ(t) is the limit of the pulse δa(t) defined in (2.32). The product is then a pulse whose height at t = 0 is g(0)/a and whose width is a. In the limit as a approaches zero, the pulse becomes an impulse with strength g(0) (Figure 2.38). Similarly, if a function g(t) is multiplied by an impulse δ(t − t0) (an impulse located at t = t0), then

g(t)δ(t − t0) = g(t0)δ(t − t0)    (2.35)

provided g(t) is finite and continuous at t = t0.

The Sampling or Sifting Property

Another important property that follows naturally from the multiplication property is the so-called sampling or sifting property. (The word sifting is spelled correctly; it is not to be confused with the word shifting.) Before we state this property, let us first explore an important idea. Consider the unit-area rectangular function δa(t) defined in (2.32). Let this function multiply another function g(t), which is finite and continuous at t = 0, and find the area under the product of the two functions,

A = ∫_{−∞}^{∞} δa(t)g(t) dt

(Figure 2.38). Using the definition of δa(t) we can rewrite the integral as

Figure 2.38: Multiplication of a unit-area rectangular pulse centered at t = 0 and a function g(t) which is continuous and finite at t = 0.

A = (1/a) ∫_{−a/2}^{a/2} g(t) dt    (2.36)


Now imagine taking the limit of this integral as a approaches zero. In the limit, the two limits of integration approach zero from above and below. Since g(t) is finite and continuous at t = 0, as a approaches zero the value of g(t) becomes the constant g(0) and can be taken out of the integral. Then

lim_{a→0} A = g(0) lim_{a→0} (1/a) ∫_{−a/2}^{a/2} dt = g(0) lim_{a→0} (1/a)(a) = g(0)    (2.37)

So in the limit as a approaches zero, the function δa(t) has the interesting property of extracting (hence the name sifting) the value of any continuous, finite function g(t) at time t = 0, when the product of δa(t) and g(t) is integrated between any two limits which include time t = 0. In other words,

∫_{−∞}^{∞} g(t)δ(t) dt = lim_{a→0} ∫_{−∞}^{∞} g(t)δa(t) dt = g(0)    (2.38)

The above result follows naturally from (2.34),

∫_{−∞}^{∞} g(t)δ(t) dt = g(0) ∫_{−∞}^{∞} δ(t) dt = g(0)    (2.39)

since the remaining integral of δ(t) equals one.

This result means that the area under the product of a function with an impulse δ(t) is equal to the value of that function at the instant where the unit impulse is located. From (2.35) it follows that

∫_{−∞}^{∞} g(t)δ(t − t0) dt = g(t0) ∫_{−∞}^{∞} δ(t − t0) dt = g(t0)    (2.40)

where the integral of the shifted impulse again equals one.

The unit impulse δ(t) can thus be defined by the sampling property: when it is multiplied by any function x(t), which is finite and continuous at t = 0, and the product is integrated between limits which include t = 0, the result is

x(0) = ∫_{−∞}^{∞} x(t)δ(t) dt

One can argue here: what if one can find a function other than the impulse that satisfies the sampling property? The answer is that it must be equivalent to the impulse δ(t). The next property explores this argument.
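The limiting argument of (2.38) can be reproduced numerically; a sketch using g(t) = cos(t), an arbitrary function that is finite and continuous at t = 0:

```python
import numpy as np

# Approximate the sifting integral of g(t) * delta_a(t) dt, which tends
# to g(0) as a -> 0; here g(t) = cos(t), an arbitrary choice with g(0) = 1.
g = np.cos

t, dt = np.linspace(-1.0, 1.0, 400001, retstep=True)
for a in (0.5, 0.1, 0.001):
    delta_a = np.where(np.abs(t) < a / 2, 1.0 / a, 0.0)
    area = np.sum(g(t) * delta_a) * dt
    print(f"a = {a:6.3f}   integral = {area:.4f}")   # tends to g(0) = 1
```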

The Unit Impulse is the Derivative of the Unit Step

Let us evaluate the integral of (du/dt)x(t), using integration by parts:

∫_{−∞}^{∞} (du(t)/dt) x(t) dt = u(t)x(t) |_{−∞}^{∞} − ∫_{−∞}^{∞} u(t) (dx(t)/dt) dt
                              = x(∞) − 0 − ∫_{0}^{∞} (dx(t)/dt) dt
                              = x(∞) − x(t) |_{0}^{∞}
                              = x(0)


The result shows that du/dt satisfies the sampling property of δ(t). Therefore

du/dt = δ(t)    (2.41)

Consequently,

∫_{−∞}^{t} δ(τ) dτ = u(t)    (2.42)

The result in (2.42) can be obtained graphically by observing that the area from −∞ to t is zero if t < 0 and unity if t > 0:

∫_{−∞}^{t} δ(τ) dτ = { 0, t < 0
                       1, t > 0   = u(t)

The same result could have been obtained by considering a function g(t) and its derivative g′(t), as in Figure 2.39. In the limit as a approaches zero, the function g(t) approaches the unit step function. In that same limit, the width of g′(t) approaches zero but it maintains unit area, which is the same as the initial definition of δa(t). The limit as a approaches zero of g′(t) is called the generalised derivative of u(t).

Figure 2.39: Functions which approach the unit step and unit impulse

Time Transformations applied to the Unit Impulse

The important feature of the unit impulse function is not its shape but the fact that its width approaches zero while its area remains unity. Therefore, when time transformations are applied to δ(t), in particular scaling, it is the strength that matters and not the shape of δ(t) (Figure 2.40). It is helpful to note that

∫_{−∞}^{∞} δ(αt) dt = ∫_{−∞}^{∞} δ(λ) dλ/|α| = 1/|α|    (2.43)

and so

δ(αt) = δ(t)/|α|    (2.44)

Page 35: My Signals Notes

2.3. USEFUL SIGNAL FUNCTIONS 35

Figure 2.40: Effect of scaling on unit impulse

Summary of Properties of δ(t)

1. δ(t) = 0, t ≠ 0.

2. ∫_{−∞}^{∞} δ(t) dt = 1.

3. δ(t) is an even function, i.e. δ(t) = δ(−t).

4. ∫_{−∞}^{t} δ(τ) dτ = u(t).

5. ∫_{0}^{∞} δ(t − τ) dt = u(τ).

6. du/dt = δ(t).

7. δ(αt) = δ(t)/|α|. More generally, δ(α(t − τ)) = (1/|α|) δ(t − τ).

8. x(t)δ(t) = x(0)δ(t).

9. x(t)δ(t − τ) = x(τ)δ(t − τ).

10. ∫_{−∞}^{∞} x(t)δ(t) dt = x(0).

11. ∫_{−∞}^{∞} x(t)δ(t − τ) dt = x(τ).

The DT Unit Impulse Function

The DT unit impulse function δ[n], sometimes referred to as the Kronecker delta function (Figure 2.41), is defined by

Figure 2.41: The DT unit impulse function.

δ[n] = { 1, n = 0
         0, n ≠ 0    (2.45)

The DT delta function δ[n] is referred to as the unit sample that occurs at n = 0, and the shifted function δ[n − k] as the unit sample that occurs at n = k:

δ[n − k] = { 1, n = k
             0, n ≠ k

Some properties of δ[n]

1. δ[n] = 0 for n ≠ 0.

2. Σ_{m=−∞}^{n} δ[m] = u[n]; this can easily be seen by considering two cases for n, namely n < 0 and n ≥ 0 (Figure 2.42).


• Case 1: Σ_{m=−∞}^{n} δ[m] = 0 for n < 0. This is true since δ[m] has a value of one only when m = 0 and zero everywhere else; the upper limit of the summation is less than zero, so the m = 0 term is not included in the summation.

Figure 2.42: A DT unit impulse function

• Case 2: On the other hand, if n ≥ 0, the m = 0 term is included in the summation; therefore Σ_{m=−∞}^{n} δ[m] = 1.

In summary,

Σ_{m=−∞}^{n} δ[m] = { 1, n ≥ 0
                      0, n < 0   = u[n]

3. u[n] − u[n − 1] = δ[n]; this can clearly be seen in Figure 2.43 by subtracting the two signals from each other.

Figure 2.43: δ[n] = u[n]− u[n− 1]

4. Σ_{k=0}^{∞} δ[n − k] = u[n].

5. x[n]δ[n] = x[0]δ[n].

6. x[n]δ[n− k] = x[k]δ[n− k].

7. The DT unit impulse is not affected by scaling, i.e. δ[αn] = δ[n].

8. I will leave some other important properties, in particular the sifting property of the DT unit impulse, to a later stage of this course.
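Several of these properties can be checked directly on a finite index grid; a minimal sketch:

```python
import numpy as np

n = np.arange(-10, 11)
delta = np.where(n == 0, 1, 0)              # the DT unit impulse delta[n]
u = np.where(n >= 0, 1, 0)                  # the DT unit step u[n]

# Property 2: the running sum of delta[m] up to n equals u[n].
assert np.array_equal(np.cumsum(delta), u)

# Property 3: the first difference u[n] - u[n-1] equals delta[n].
u_prev = np.where(n - 1 >= 0, 1, 0)         # u[n-1]
assert np.array_equal(u - u_prev, delta)

# Properties 5 and 6: multiplying by an impulse samples the signal.
x = 2 * n + 3                               # arbitrary signal: x[0] = 3, x[3] = 9
assert np.array_equal(x * delta, 3 * delta)
delta_3 = np.where(n == 3, 1, 0)
assert np.array_equal(x * delta_3, 9 * delta_3)
```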

2.3.8 The Unit Sinc Function

The unit sinc function (Figure 2.44) is called a unit function because its height and area are both one. It is defined as

sinc(t) = sin(πt)/(πt)    (2.46)


Figure 2.44: The CT unit sinc function

Some authors define the sinc function as

sinc(t) = sin(t)/t

One can use either definition as long as it is used consistently.

It is also known as the filtering or interpolating function; some authors name it the sampling function Sa(πt). Since the denominator is an increasing function of t and the numerator is bounded (|sin(πt)| ≤ 1), the sinc function is simply a damped sine wave. Figure 2.44 shows that the sinc function is an even function of t, having its peak at t = 0. What is sinc(0)? To determine the value of sinc(0), simply apply L'Hôpital's rule to the definition in (2.46). Then

lim_{t→0} sinc(t) = lim_{t→0} sin(πt)/(πt) = lim_{t→0} π cos(πt)/π = 1.
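NumPy's np.sinc uses the same normalised definition as (2.46), so these facts can be spot-checked directly:

```python
import numpy as np

# np.sinc follows (2.46): sin(pi t)/(pi t), taking the limit value 1 at t = 0.
assert np.sinc(0.0) == 1.0

# Zero crossings at every nonzero integer.
for t0 in (1.0, 2.0, -3.0):
    assert abs(np.sinc(t0)) < 1e-15

# Even symmetry, and damping: |sinc(t)| <= 1/(pi |t|) for t != 0.
t = np.linspace(0.5, 10.0, 500)
assert np.allclose(np.sinc(t), np.sinc(-t))
assert np.all(np.abs(np.sinc(t)) <= 1.0 / (np.pi * t) + 1e-12)
```

Note that the unnormalised definition sin(t)/t mentioned above corresponds to np.sinc(t/np.pi).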


Chapter 3

Description of Systems

3.1 Introduction

The words signal and system were defined very generally in Chapter 1. A system can be viewed as any process or interaction of operations that transforms an input signal into an output signal with properties different from those of the input signal. A system may consist of physical components (a hardware realization) or of an algorithm that computes the output signal from the input signal (a software realization).

Figure 3.1: CT and DT system block diagrams

One way to define a system is anything that performs a function: it operates on something to produce something else. It can be thought of as a mathematical operator. A CT system operates on a CT input signal to produce a CT output, i.e. y(t) = H{x(t)}. H is the operator denoting the action of a system; it specifies the operation to be performed and also identifies the system. On the other hand, a DT system operates on a DT signal to produce a DT output (Figure 3.1). I will sometimes use the following notation to describe a system:

x(t) --H--> y(t)

which simply means the input x to system H produces the output y.

By knowing how to mathematically describe and characterize all the components in a system and how the components interact with each other, an engineer can predict, using mathematics, how a system will work without actually building it. Systems may be interconnected in different configurations, mainly in series, parallel, and feedback (Figure 3.2).

3.2 Systems Characteristics

Systems may be classified in the following categories:

1. Memoryless (instantaneous) and dynamic (with memory) systems.

2. Invertible and non-invertible systems.

3. Causal and non-causal systems.


Figure 3.2: A system composed of four interconnected components, with twoinputs and two outputs.

4. Stable and non-stable systems.

5. Time-invariant and time-varying systems.

6. Linear and non-linear systems.

3.2.1 Memory

A system's output or response at any instant t generally depends upon the entire past input. However, there are systems for which the output at any instant t depends only on the input at that instant and not on the input at any other time. Such systems are said to have no memory and are called memoryless. The only input contributing to the output of the system occurs at the same time as the output; the system has no stored information about any past inputs, thus the term memoryless. Such systems are also called static or instantaneous systems. Otherwise, the system is said to be dynamic (or a system with memory). Instantaneous systems are a special case of dynamic systems. Here are some examples:

• y(t) = 2x(t), this is a memoryless system.

• y(t) = x(2t), system with memory.

• y(t) = (1/C) ∫_{−∞}^{t} x(τ) dτ, a system with memory.

• A voltage divider circuit is a memoryless system (Figure 3.3).

Figure 3.3: A voltage divider

3.2.2 Invertibility

A system H performs certain operations on input signals. If we can obtain the input x(t) back from the output y(t) by some operation, the system H is said to be invertible. Thus, an inverse system H⁻¹ can be created so that when the output signal is fed into it, the input signal can be recovered (Figure 3.4). For a non-invertible system, different inputs can result in the same output, and it is impossible to determine the input for a given output. Therefore, for an invertible system it is essential that distinct inputs applied to the system produce distinct outputs, so that there is a one-to-one mapping between an input and the corresponding output. An example of a system that is not invertible is one that performs the operation of squaring the input signal, y(t) = x²(t).


Figure 3.4: The inverse system

For any given input x(t) it is possible to determine the value of the output y(t). However, if we attempt to find the input, given the output, by rearranging the relationship into x(t) = ±√y(t), we face a problem: the square root has multiple values, for example √4 = ±2. Therefore, there is no one-to-one mapping between an input and the corresponding output; in other words, we have the same output for different inputs. For a system that is invertible, consider an inductor whose input-output relationship is described by

y(t) = (1/L) ∫_{−∞}^{t} x(τ) dτ

The operation representing the inverse system is simply L d/dt.

3.2.3 Causality

A causal system is one for which the output at any instant t0 depends only on the value of the input x(t) for t ≤ t0. In other words, the value of the current output depends only on current and past inputs. This should seem obvious: how could a system respond to an input signal that has not yet been applied? Simply put, the output cannot start before the input is applied. A system that violates the condition of causality is called a noncausal system. A noncausal system is also called anticipative, which means the system knows the future input and acts on this knowledge before the input is applied. Noncausal systems do not exist in reality, as we do not yet know how to build a system that can respond to inputs not yet applied. As an example, consider the system specified by y(t) = x(t + 1). If we apply an input starting at t = 0, the output would begin at t = −1, as seen in Figure 3.5; hence it is a noncausal system.

On the other hand, a system described by the equation

y(t) = ∫_{−∞}^{t} x(τ) dτ

is clearly a causal system, since the output y(t) depends only on inputs occurring from −∞ up to time t (the upper limit of the integral). If the upper limit were t + 1, the system would be noncausal.

3.2.4 Stability

A system is stable if a bounded input signal yields a bounded output signal. A signal is said to be bounded if its absolute value is less than some finite value for all time,

|x(t)| <∞, −∞ < t <∞.


Figure 3.5: A noncausal system: an input x(t) applied starting at t = 0 yields x(t + 1), which begins at t = −1.

A system for which the output signal is bounded whenever the input signal is bounded is called a bounded-input bounded-output (BIBO) stable system.

3.2.5 Time Invariance

A system is time-invariant if its input-output properties do not change with time. For such a system, if the input is delayed by t0 seconds, the output is the same as before but delayed by t0 seconds. In other words, a time shift in the input signal causes the same time shift in the output signal without changing the functional form of the output signal. If input x(t) yields output y(t), then input x(t − t0) yields output y(t − t0) for all t0 ∈ R, i.e.

x(t) --H--> y(t)   ⟹   x(t − t0) --H--> y(t − t0).

A very simple example of a system which is not time-invariant (i.e. time-varying)would be one described by

y[n] = x[2n].

Let x1[n] = g[n] and x2[n] = g[n − 1], where g[n] is shown in Figure 3.6, and let the output signal corresponding to x1[n] be y1[n] and the output corresponding to x2[n] be y2[n]. For this system to be time-invariant, the output y2[n] must be the same as y1[n] delayed by one unit, but it is not, as shown in Figure 3.7.

Figure 3.6: A DT input signal.

Figure 3.7: Outputs of the system described by y[n] = x[2n] to two different inputs.
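The failure of time invariance for y[n] = x[2n] can be demonstrated numerically; a sketch using a triangular test signal g[n] (an arbitrary choice standing in for the g[n] of Figure 3.6):

```python
import numpy as np

# A triangular test signal g[n]: nonzero only for 0 <= n <= 4.
def g(n):
    return np.where((n >= 0) & (n <= 4), 5 - np.abs(n), 0)

def system(x, n):
    """The time-varying system y[n] = x[2n]; x is given as a function of n."""
    return x(2 * n)

n = np.arange(-10, 11)

y1 = system(g, n)                      # response to x1[n] = g[n]
y2 = system(lambda m: g(m - 1), n)     # response to x2[n] = g[n - 1]
y1_delayed = system(g, n - 1)          # y1[n - 1], i.e. y1 delayed by one unit

# Time invariance would require y2[n] == y1[n - 1]; here it fails.
assert not np.array_equal(y2, y1_delayed)
```

The shifted input produces an output with a different shape from the shifted output, which is exactly what Figure 3.7 shows.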

Page 43: My Signals Notes

3.2. SYSTEMS CHARACTERISTICS 43

3.2.6 Linearity and Superposition

Homogeneity (Scaling) Property

A system is said to be homogeneous if, for an arbitrary real or complex number α, increasing the input signal α-fold increases the output signal α-fold as well. Thus, if

x(t) --H--> y(t)

then for all real or complex α,

αx(t) --H--> αy(t)

Additivity Property

The additivity property of a system implies that if several inputs are acting on the system, then the total output of the system can be determined by considering each input separately while assuming all the other inputs to be zero. The total output is then the sum of all the component outputs. This property may be expressed as follows: if an input x1(t) acting alone produces an output y1(t), and if another input x2(t), also acting alone, produces an output y2(t), then, with both inputs acting together on the system, the total output will be y1(t) + y2(t). Thus, if

x1(t) --H--> y1(t)   and   x2(t) --H--> y2(t)

then

x1(t) + x2(t) --H--> y1(t) + y2(t).

A system is linear if both the homogeneity and the additivity properties are satisfied. The two properties can be combined into one property (superposition), which can be expressed as follows: if

x1(t) --H--> y1(t)   and   x2(t) --H--> y2(t)

then for all real or complex α and β,

αx1(t) + βx2(t) --H--> αy1(t) + βy2(t).

Example 3.2  Determine whether the system described by the equation

ay(t) + by²(t) = x(t)

is linear or nonlinear.

Solution  Consider two individual inputs x1(t) and x2(t). The equations describing the system for the two inputs acting alone are

ay1(t) + by1²(t) = x1(t)   and   ay2(t) + by2²(t) = x2(t)

The sum of the two equations is

a[y1(t) + y2(t)] + b[y1²(t) + y2²(t)] = x1(t) + x2(t)


which is not equal to

a[y1(t) + y2(t)] + b[y1(t) + y2(t)]² = x1(t) + x2(t).

Therefore superposition does not hold in this system; hence the system is nonlinear. ∎

Remark  For a system to be linear, a zero input signal must imply a zero output. Consider for example the system

y[n] = 2x[n] + x0

where x0 might be some initial condition. If x[n] = 0 it is clear that y[n] ≠ 0, so the system is not linear unless x0 is zero.
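Both the zero-in/zero-out condition and homogeneity fail for this system, which is easy to confirm (x0 = 5 is an arbitrary nonzero value):

```python
import numpy as np

x0 = 5.0                            # an arbitrary nonzero "initial condition"

def system(x):
    """y[n] = 2 x[n] + x0."""
    return 2 * x + x0

# Zero input does NOT give zero output, so the system cannot be linear.
assert np.any(system(np.zeros(8)) != 0)

# Homogeneity also fails: tripling the input does not triple the output.
x = np.ones(8)
assert not np.allclose(system(3 * x), 3 * system(x))
```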

3.3 Linear Time-invariant Systems

From now on we will focus on systems that are linear and time-invariant (LTI). Many engineering systems are well approximated by LTI models, and the analysis of such systems is simple and elegant. We consider two methods of analysis of LTI systems: the time-domain method and the frequency-domain method. The frequency-domain methods are addressed at a later stage in the course.

3.3.1 Time-Domain Analysis of LTI systems

Analysis of DT systems will be introduced first, as it is easier; it will then be extended to CT systems. Recall that by analysis we mean determining the response y[n] of an LTI system to an arbitrary input x[n].

Unit Impulse Response h[n]

The unit impulse function δ[n] is used extensively in determining the response of a DT LTI system. When the input signal to the system is δ[n], the output is called the impulse response h[n]:

δ[n] --H--> h[n]

If we know the system response to an impulse input, and if an arbitrary input x[n] can be expressed as a sum of impulse components, then the system response can be obtained by summing the system responses to the various impulse components. Figure 3.8 shows how a signal x[n] can be expressed as a sum of impulse components.

The component of x[n] at n = k is x[k]δ[n − k], and x[n] is the sum of all these components summed from k = −∞ to ∞. Therefore

x[n] = x[0]δ[n] + x[1]δ[n − 1] + x[2]δ[n − 2] + ⋯
       + x[−1]δ[n + 1] + x[−2]δ[n + 2] + ⋯

     = Σ_{k=−∞}^{∞} x[k]δ[n − k]    (3.1)


Figure 3.8: Representation of an arbitrary signal x[n] in terms of the impulse components x[−2]δ[n + 2], x[−1]δ[n + 1], x[0]δ[n], x[1]δ[n − 1], and x[2]δ[n − 2].

The expression in (3.1) is the DT version of the sifting property: x[n] is written as a weighted sum of unit impulses.

Example 3.3  Express the signal shown in Figure 3.9 as a weighted sum of impulse components.

Figure 3.9: A DT signal x[n]

Solution  This can easily be shown as in Figure 3.10. Therefore,

x[n] = 2δ[n + 1] + 2δ[n] − δ[n − 1] + δ[n − 2]. ∎

3.3.2 The Convolution Sum

We are interested in finding the system output y[n] for an arbitrary input x[n], knowing the impulse response h[n] of a DT LTI system. There is a systematic way of finding how the output responds to an input signal; it is called convolution. The convolution technique is based on a very simple idea: no matter how complicated your input signal is, one can always express it in terms of weighted impulse components. For LTI systems we can find the response of the system to one impulse component at a time and then add all those responses to form


Figure 3.10: x[n] expressed as a sum of the individual weighted unit impulses 2δ[n + 1], 2δ[n], −δ[n − 1], and δ[n − 2]

the total system response. Let h[n] be the system response (output) to the impulse input δ[n]. Thus if

δ[n] --H--> h[n]

then, because the system is time-invariant,

δ[n − k] --H--> h[n − k]

and because of linearity, if the input is multiplied by a weight or constant, the output is multiplied by the same weight, thus

x[k]δ[n − k] --H--> x[k]h[n − k]

and again because of linearity

∑_{k=−∞}^{∞} x[k]δ[n − k]  --H-->  ∑_{k=−∞}^{∞} x[k]h[n − k]

Page 47: My Signals Notes

3.3. LINEAR TIME-INVARIANT SYSTEMS 47

The left-hand side is simply x[n] [see equation (3.1)], and the right-hand side is the system response y[n] to the input x[n]. Therefore

y[n] = ∑_{k=−∞}^{∞} x[k]h[n − k]    (3.2)

The summation on the RHS is known as the convolution sum and is denoted by y[n] = x[n] ∗ h[n]. Now, in order to construct the response of a DT LTI system to any input x[n], all we need to know is the system's impulse response h[n].
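As a quick sanity check of (3.2), the convolution sum can be evaluated directly with two nested loops and compared against NumPy's built-in np.convolve (an illustrative sketch, not part of the notes; both sequences are assumed to start at n = 0):

```python
import numpy as np

def conv_sum(x, h):
    """Direct evaluation of y[n] = sum_k x[k] h[n-k] for finite sequences."""
    y = np.zeros(len(x) + len(h) - 1)
    for k, xk in enumerate(x):       # each input sample x[k] ...
        for m, hm in enumerate(h):   # ... excites a scaled, shifted copy of h
            y[k + m] += xk * hm
    return y

x = [1.0, 2.0, 3.0]
h = [1.0, -1.0]
assert np.allclose(conv_sum(x, h), np.convolve(x, h))
```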

Properties of the Convolution Sum

1. The Commutative Property

x[n] ∗ h[n] = h[n] ∗ x[n] (3.3)

This property can be easily proven by starting with the definition of convolution

y[n] = ∑_{k=−∞}^{∞} x[k]h[n − k]

and letting q = n− k. Then we have

x[n] ∗ h[n] = ∑_{q=−∞}^{∞} x[n − q]h[q] = ∑_{q=−∞}^{∞} h[q]x[n − q] = h[n] ∗ x[n]

2. The Distributive Property

x[n] ∗ (h[n] + z[n]) = x[n] ∗ h[n] + x[n] ∗ z[n] (3.4)

If we convolve x[n] with the sum of h[n] and z[n], we get

x[n] ∗ (h[n] + z[n]) = ∑_{k=−∞}^{∞} x[k] (h[n − k] + z[n − k])
                     = ∑_{k=−∞}^{∞} x[k]h[n − k] + ∑_{k=−∞}^{∞} x[k]z[n − k]
                     = x[n] ∗ h[n] + x[n] ∗ z[n]

3. The Associative Property

x[n] ∗ (h[n] ∗ z[n]) = (x[n] ∗ h[n]) ∗ z[n] (3.5)

The proof to this property is left as an exercise to the reader.

4. The Shifting Property

x[n−m] ∗ h[n− q] = y[n−m− q] (3.6)

In words: the input x is delayed by m samples and the signal h is delayed by q samples, so the result of the convolution of both signals introduces a total delay in the output signal of m + q samples.


5. Convolution with an Impulse

x[n] ∗ δ[n] = x[n] (3.7)

This property can be easily seen from the definition of convolution

x[n] ∗ δ[n] = ∑_{k=−∞}^{∞} x[k]δ[n − k]    (3.8)

and the RHS in (3.8) is simply x[n] from (3.1).

6. The Width Property

If x[n] and h[n] have lengths of m and n elements respectively, then the length of y[n] is m + n − 1 elements. In some special cases this property could be violated. One should be careful to count samples with zero amplitudes that lie in between the samples. Furthermore, the first sample in the output appears at the sum of the locations of the first appearing samples of each function.
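The width property is easy to confirm numerically; a minimal sketch (NumPy assumed):

```python
import numpy as np

# Width property: sequences of m and k elements convolve to m + k - 1 elements.
rng = np.random.default_rng(0)
x = rng.standard_normal(4)   # m = 4 elements
h = rng.standard_normal(3)   # k = 3 elements
y = np.convolve(x, h)
assert len(y) == 4 + 3 - 1   # six elements
```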

We shall evaluate the convolution sum first by an analytical method and later with graphical aid.

Example 3.4  Determine y[n] = x[n] ∗ h[n] for x[n] and h[n] as shown in Figure 3.11.

Figure 3.11: Two DT signals x[n] and h[n]

Solution  Method 1: Express x[n] as a weighted sum of impulse components

x[n] = δ[n+ 1]− δ[n] + δ[n− 1] + δ[n− 2]

Since the system is an LTI one, the output is simply (Figure 3.12)

y[n] = h[n+ 1]− h[n] + h[n− 1] + h[n− 2] (3.9)

Remark  It would have been easier to determine y[n] = h[n] ∗ x[n]; try to verify this yourself. The answer would have been the same because of the commutative property. Note also that the output width is 3 + 4 − 1 = 6 elements, and the first sample in the output appears at n = 0 + (−1) = −1, which follows from the width property.
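Reading the sample values off Figure 3.11 (they also appear on the tapes of the sliding-tape example later: x = {1, −1, 1, 1} starting at n = −1, h = {2, 1, 0.5} starting at n = 0), the result of Example 3.4 can be verified numerically; a sketch assuming NumPy:

```python
import numpy as np

x = [1, -1, 1, 1]   # x[n] for n = -1, 0, 1, 2 (Figure 3.11)
h = [2, 1, 0.5]     # h[n] for n = 0, 1, 2 (Figure 3.11)

y = np.convolve(x, h)         # first sample lands at n = 0 + (-1) = -1
assert len(y) == 3 + 4 - 1    # width property: six elements
# matches the values found graphically in Examples 3.5 and 3.6
assert np.allclose(y, [2, -1, 1.5, 2.5, 1.5, 0.5])
```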


Figure 3.12: Impulse responses to the individual components of x[n] and their sum y[n] = h[n + 1] − h[n] + h[n − 1] + h[n − 2]

Method 2: By direct evaluation of the convolution sum

y[n] = ∑_{k=−∞}^{∞} x[k]h[n − k]

which can be written as

y[n] = ∙ ∙ ∙ + x[−2]h[n + 2] + x[−1]h[n + 1] + x[0]h[n] + x[1]h[n − 1] + x[2]h[n − 2] + ∙ ∙ ∙

and for x[n] in Figure 3.11 we have

y[n] = x[−1]h[n + 1] + x[0]h[n] + x[1]h[n − 1] + x[2]h[n − 2]
     = (1)h[n + 1] − (1)h[n] + (1)h[n − 1] + (1)h[n − 2]    (3.10)

which is the same as equation (3.9).


Graphical Procedure for the Convolution Sum

The direct analytical methods for evaluating the convolution sum are simple and convenient to use as long as the number of samples is small. It is helpful to explore some graphical concepts that help in performing the convolution of more complicated signals. If y[n] is the convolution of x[n] and h[n], then

y[n] = ∑_{k=−∞}^{∞} x[k]h[n − k]    (3.11)

It is crucial to note that the summation index in (3.11) is k, so that n is just like a constant. With this in mind, h[n − k] should be considered a function of k for purposes of performing the summation in (3.11). This consideration is also important when we sketch the graphical representations of the functions x[k] and h[n − k]. Both of these functions should be sketched as functions of k, not of n. To understand what the function h[n − k] looks like, let us start with the function h[k] and perform the following transformations

h[k] --(k → −k)--> h[−k] --(k → k − n)--> h[−(k − n)] = h[n − k]

The first transformation is a time-reflected version of h[k], and the second transformation shifts the already reflected function n units to the right for positive n; for negative n, the shift is to the left. The convolution operation can be performed as follows:

1. Reflect h[k] about the vertical axis (k = 0) to obtain h[−k].

2. Time shift h[−k] by n units to obtain h[n − k]. For n > 0, the shift is to the right; for n < 0, the shift is to the left.

3. Multiply x[k] by h[n − k] and add all the products to obtain y[n]. The procedure is repeated for each value of n over the range −∞ to ∞.
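The three steps above translate directly into code. The sketch below (NumPy assumed; both sequences stored on a common index grid, a choice made for illustration) computes one output sample y[n] by reflecting, shifting, multiplying and summing:

```python
import numpy as np

def conv_at(n, k, x, h):
    """y[n] = sum_k x[k] h[n-k]; x and h are arrays on the index grid k."""
    kmin = k[0]
    # Build h[n - k] as a function of k: reflect h[k], then shift by n.
    h_nk = np.array([h[n - ki - kmin] if kmin <= n - ki <= k[-1] else 0.0
                     for ki in k])
    return float(np.sum(x * h_nk))

k = np.arange(-1, 3)               # common grid k = -1..2
x = np.array([1., -1., 1., 1.])    # x of Figure 3.11
h = np.array([0., 2., 1., 0.5])    # h of Figure 3.11 (h[0] = 2)

assert conv_at(-1, k, x, h) == 2.0
assert conv_at(0, k, x, h) == -1.0
```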

Example 3.5  Determine y[n] = x[n] ∗ h[n] graphically, where x[n] and h[n] are defined as in Figure 3.11.

Solution  Before starting with the graphical procedure it is a good idea to determine where the first sample in the output will appear (this was found earlier to be at n = −1). Furthermore, the width property implies that the number of elements in y[n] is six. Thus, y[n] = 0 for −∞ < n ≤ −2 and y[n] = 0 for n ≥ 5, hence the only interesting range for n is −1 ≤ n ≤ 4. Now for n = −1

y[−1] = ∑_{k=−∞}^{∞} x[k]h[−1 − k]

and a negative n (n = −1) implies a time shift to the left for the function h[−1 − k]. Next, multiply x[k] by h[−1 − k] and add all the products to obtain y[−1] = 2. We keep repeating the procedure, incrementing n by one every time; note that incrementing n by one means shifting h[n − k] to the right by one sample. Figures 3.13 and 3.14 illustrate the procedure for n = −1, 0, 1 and 2. Continuing in this manner for n = 3 and 4, we obtain y[n] as illustrated in Figure 3.15.


Figure 3.13: y[n] for n = −1 and 0 (y[−1] = 2, y[0] = −1)

Figure 3.14: y[n] for n = 1 and 2 (y[1] = 3/2, y[2] = 5/2)

Alternative Form of Graphical Procedure

The alternative procedure is basically the same; the only difference is that instead of presenting the data as graphical plots, we display it as sequences of numbers on tapes. The following example demonstrates the idea.


Figure 3.15: Graph of y[n]

Example 3.6  Determine y[n] = x[n] ∗ h[n] using the sliding tape method, where x[n] and h[n] are defined as in Figure 3.11.

Solution  In this procedure we write the sequences x[n] and h[n] in the slots of two tapes. Now leave the x tape fixed to correspond to x[k]. The h[−k] tape is obtained by time inverting the h[k] tape about the origin (n = 0), Figure 3.16.

x tape →  1  −1  1  1
h tape →  2   1  0.5

x[k]:   1  −1  1  1
h[−k]:  0.5  1  2     (the h tape time-inverted about n = 0)

Figure 3.16: Sliding tape procedure for DT convolution

Before going any further we have to align the slots such that the first element in the stationary x[k] tape corresponds to the first element of the already inverted h[−k] tape, as illustrated in Figure 3.17. We now shift the inverted tape by n slots, multiply values on the two tapes in adjacent slots, and add all the products to find y[n]. Figure 3.17 shows the cases for n = −1, 0 and 1. For the case of n = 1, for example,

y[1] = (1× 0.5) + (−1× 1) + (1× 2) = 1.5

Continuing in the same fashion for all n, we obtain the same answer as in Figure 3.15.

n = −1:  x[k]: 1 −1 1 1   against  h[−1 − k]: 0.5 1 2  →  y[−1] = 2
n = 0:   x[k]: 1 −1 1 1   against  h[−k]:     0.5 1 2  →  y[0] = −1
n = 1:   x[k]: 1 −1 1 1   against  h[1 − k]:  0.5 1 2  →  y[1] = 1.5

Figure 3.17: y[n] for n = −1, 0 and 1 using the sliding tape procedure


3.4 The Convolution Integral

Let us turn our attention now to CT LTI systems; we shall use the principle of superposition to derive the system's response to some arbitrary input x(t). In this approach, we express x(t) in terms of impulses. Suppose the CT signal x(t) in Figure 3.18 is an arbitrary input to some CT system.

Figure 3.18: An arbitrary input

We begin by approximating x(t) with narrow rectangular pulses as depicted in Figure 3.19. This procedure gives us a staircase approximation of x(t) that improves as the pulse width is reduced.

Figure 3.19: Staircase approximation to an arbitrary input

In the limit as the pulse width approaches zero, this representation becomes exact, and the rectangular pulses become impulses delayed by various amounts. The system response to the input x(t) is then given by the sum of the system's responses to each impulse component of x(t). Figure 3.19 shows x(t) as a sum of rectangular pulses, each of width Δτ. In the limit as Δτ → 0, each pulse approaches an impulse having a strength equal to the area under that pulse. For example, the pulse located at t = nΔτ can be expressed as

x(nΔτ) rect((t − nΔτ)/Δτ)

and will approach an impulse at the same location with strength x(nΔτ)Δτ, which can be represented by

[x(nΔτ)Δτ] δ(t − nΔτ)    (3.12)

where the bracketed factor is the strength of the impulse.

If we know the impulse response h(t) of the system, the response to the impulse in (3.12) will simply be [x(nΔτ)Δτ]h(t − nΔτ), since

δ(t) --H--> h(t)

δ(t − nΔτ) --H--> h(t − nΔτ)

[x(nΔτ)Δτ]δ(t − nΔτ) --H--> [x(nΔτ)Δτ]h(t − nΔτ)    (3.13)

The response in (3.13) represents the response to only one of the impulse components of x(t). The total response y(t) is obtained by summing all such components (with Δτ → 0):

lim_{Δτ→0} ∑_{n=−∞}^{∞} x(nΔτ)Δτ δ(t − nΔτ)  --H-->  lim_{Δτ→0} ∑_{n=−∞}^{∞} x(nΔτ)Δτ h(t − nΔτ)

where the left-hand side is the input x(t) and the right-hand side is the output y(t),

Page 54: My Signals Notes

54 CHAPTER 3. DESCRIPTION OF SYSTEMS

and both sides are, by definition, integrals:

∫_{−∞}^{∞} x(τ)δ(t − τ)dτ  --H-->  ∫_{−∞}^{∞} x(τ)h(t − τ)dτ

with the left-hand side equal to x(t) and the right-hand side equal to y(t).

In summary, the response y(t) to the input x(t) is given by

y(t) = ∫_{−∞}^{∞} x(τ)h(t − τ)dτ    (3.14)

The integral in (3.14) is known as the convolution integral and is denoted by y(t) = x(t) ∗ h(t).
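Numerically, the convolution integral can be approximated exactly as in the staircase construction above: sample both signals with spacing Δτ, form the convolution sum, and multiply by Δτ to supply each impulse strength. A sketch (NumPy assumed), convolving two unit-height pulses of width 2:

```python
import numpy as np

dt = 0.001                       # the pulse width Δτ of the staircase
t = np.arange(0, 6, dt)
x = np.where(t < 2, 1.0, 0.0)    # unit pulse on [0, 2)
h = np.where(t < 2, 1.0, 0.0)

# Riemann-sum convolution: sum_n x(nΔτ) h(t - nΔτ) Δτ
y = np.convolve(x, h)[:len(t)] * dt

# Two equal rectangles convolve to a triangle: y(t) = t on 0 <= t < 2.
assert abs(y[int(1.0 / dt)] - 1.0) < 1e-2
```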

Properties of the Convolution Integral

The properties of the convolution integral are the same as those of the convolution sum and are stated here for completeness.

1. The Commutative Property

x(t) ∗ h(t) = h(t) ∗ x(t) (3.15)

This property can be easily proven by starting with the definition of convolution

y(t) = ∫_{−∞}^{∞} x(τ)h(t − τ)dτ

and letting λ = t − τ, so that τ = t − λ and dτ = −dλ, we obtain

x(t) ∗ h(t) = ∫_{−∞}^{∞} x(t − λ)h(λ)dλ = ∫_{−∞}^{∞} h(λ)x(t − λ)dλ = h(t) ∗ x(t)

2. The Distributive Property

x(t) ∗ (h(t) + z(t)) = x(t) ∗ h(t) + x(t) ∗ z(t) (3.16)

3. The Associative Property

x(t) ∗ (h(t) ∗ z(t)) = (x(t) ∗ h(t)) ∗ z(t) (3.17)

4. The Shifting Property

x(t− T1) ∗ h(t− T2) = y(t− T1 − T2) (3.18)

5. Convolution with an Impulse

x(t) ∗ δ(t) = x(t) (3.19)

By definition of convolution

x(t) ∗ δ(t) = ∫_{−∞}^{∞} x(τ)δ(t − τ)dτ    (3.20)

Because δ(t − τ) is an impulse located at τ = t, according to the sampling property of the impulse the integral in (3.20) is the value of x(τ) at τ = t, that is, x(t).


6. The Width Property

If x(t) has a duration of T1 and h(t) has a duration of T2, then the duration of y(t) is T1 + T2. Furthermore, the output first appears at the sum of the times at which the two functions first appear.

7. The Scaling Property

If y(t) = x(t) ∗ h(t) then y(at) = |a|x(at) ∗ h(at)

This property of the convolution integral has no counterpart for the convolution sum.

The Graphical Procedure

The steps in evaluating the convolution integral are parallel to those followed in evaluating the convolution sum. If y(t) is the convolution of x(t) and h(t), then

y(t) = ∫_{−∞}^{∞} x(τ)h(t − τ)dτ    (3.21)

One of the crucial points to remember here is that the integration is performed with respect to τ, so that t is just like a constant. This consideration is also important when we sketch the graphical representations of the functions x(τ) and h(t − τ). Both of these functions should be sketched as functions of τ, not of t. The convolution operation can be performed as follows:

1. Keep the function x(τ) fixed.

2. Reflect h(τ) about the vertical axis (τ = 0) to obtain h(−τ).

3. Time shift h(−τ) by t0 seconds to obtain h(t0 − τ). For t0 > 0, the shift is to the right; for t0 < 0, the shift is to the left.

4. Find the area under the product of x(τ) and h(t0 − τ) to obtain y(t0), the value of the convolution at t = t0.

5. The procedure is repeated for each value of t over the range −∞ to ∞.

Example 3.7  Determine y(t) = x(t) ∗ h(t) for x(t) and h(t) as shown in Figure 3.20.

Figure 3.20: CT signals to be convolved


Solution  Figure 3.21(a) shows x(τ) and h(−τ) as functions of τ. The function h(t − τ) is now obtained by shifting h(−τ) by t. If t is positive, the shift is to the right; if t is negative, the shift is to the left. Figure 3.21(a) shows that for negative t, h(t − τ) does not overlap x(τ), and the product x(τ)h(t − τ) = 0, so that y(t) = 0 for t < 0. Figure 3.21(b) shows the situation for 0 ≤ t < 2; here x(τ) and h(t − τ) do overlap and the product is nonzero only over the interval 0 ≤ τ ≤ t (shaded area). Therefore,

y(t) = ∫_0^t x(τ)h(t − τ)dτ,    0 ≤ t < 2

All we need to do now is substitute the correct expressions for x(τ) and h(t − τ) in this integral:

y(t) = ∫_0^t (1)(1)dτ = t,    0 ≤ t < 2

As we keep right-shifting h(−τ) to obtain h(t − τ), the next interesting range for t is 2 ≤ t ≤ 4 (Figure 3.21(c)). Clearly, x(τ) overlaps with h(t − τ) over the shaded interval t − 2 ≤ τ ≤ 2. Therefore,

y(t) = ∫_{t−2}^{2} x(τ)h(t − τ)dτ = ∫_{t−2}^{2} (1)(1)dτ = 4 − t,    2 ≤ t ≤ 4

Figure 3.21: Convolution of x(t) and h(t): (a) t < 0, (b) 0 ≤ t < 2, (c) t ≥ 2


It is clear that for t > 4, x(τ) does not overlap h(t − τ), which implies y(t) = 0 for t > 4. Therefore the result of the convolution is (Figure 3.22)

y(t) = { 0,      t < 0
       { t,      0 ≤ t < 2
       { 4 − t,  2 ≤ t ≤ 4
       { 0,      t > 4

Figure 3.22: Convolution of x(t) and h(t)

Hint  To check your answer, the convolution has the property that the area under the convolution integral is equal to the product of the areas of the two signals entering into the convolution. The area can be computed by integrating equation (3.14) over the interval −∞ < t < ∞, giving

∫_{−∞}^{∞} y(t)dt = ∫_{−∞}^{∞} ∫_{−∞}^{∞} x(τ)h(t − τ)dτ dt
                  = ∫_{−∞}^{∞} x(τ) [∫_{−∞}^{∞} h(t − τ)dt] dτ
                  = ∫_{−∞}^{∞} x(τ) [area under h(t)] dτ
                  = area under x(t) × area under h(t)

This check also applies to DT convolution,

∑_{n=−∞}^{∞} y[n] = ∑_{m=−∞}^{∞} x[m] ∑_{n=−∞}^{∞} h[n − m]
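The DT version of this area check is a one-liner (sketch, NumPy assumed):

```python
import numpy as np

# Sum of the convolution equals the product of the sums of the two sequences.
rng = np.random.default_rng(42)
x = rng.standard_normal(50)
h = rng.standard_normal(30)
assert np.isclose(np.convolve(x, h).sum(), x.sum() * h.sum())
```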

Example 3.8  Determine graphically y(t) = x(t) ∗ h(t) for x(t) = e^{−t}u(t) and h(t) = e^{−2t}u(t).

Solution  Figure 3.23 shows x(t) and h(t), and Figure 3.24(a) shows x(τ) and h(−τ) as functions of τ. The function h(t − τ) is obtained by shifting h(−τ) by t. Clearly, h(t − τ) does not overlap x(τ) for t < 0, and the product x(τ)h(t − τ) = 0, so that y(t) = 0 for t < 0. Figure 3.24(b) shows the situation for t ≥ 0. Here x(τ) and h(t − τ) do overlap over the shaded interval (0, t). Therefore,

y(t) = ∫_0^t e^{−τ}e^{−2(t−τ)}dτ = e^{−2t} ∫_0^t e^{τ}dτ = e^{−t} − e^{−2t},    t ≥ 0

Figure 3.23: Signals x(t) and h(t)


Figure 3.24: Convolution of x(t) and h(t): (a) t < 0, (b) t > 0

Therefore the output y(t) is (Figure 3.25),

y(t) = (e^{−t} − e^{−2t})u(t)

Figure 3.25: y(t)
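The closed form of Example 3.8 can be compared against a Riemann-sum evaluation of the convolution integral (sketch, NumPy assumed):

```python
import numpy as np

dt = 0.001
t = np.arange(0, 10, dt)
y_num = np.convolve(np.exp(-t), np.exp(-2 * t))[:len(t)] * dt
y_exact = np.exp(-t) - np.exp(-2 * t)   # result of Example 3.8
assert np.max(np.abs(y_num - y_exact)) < 1e-2
```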

Example 3.9  Compute the convolution x(t) ∗ h(t), where x(t) and h(t) are as in Figure 3.26.

Figure 3.26: Signals x(t) and h(t)

Here, x(t) has a simpler mathematical expression than that of h(t), so it is preferable to invert x(t). Hence, we shall determine h(t) ∗ x(t) rather than x(t) ∗ h(t). According to Figure 3.26, x(t) and h(t) are

x(t) = { 1,  −1 ≤ t < 0
       { 0,  otherwise

and

h(t) = { t + 1,  −1 ≤ t < 0
       { 1,      0 ≤ t < 1
       { 0,      otherwise


Figure 3.27 demonstrates the overlapping of the two signals h(τ) and x(t − τ). We can see that for t < −2, the product h(τ)x(t − τ) is always zero. For −2 ≤ t < −1,

y(t) = ∫_{−1}^{t+1} (τ + 1)(1)dτ = (τ²/2 + τ)|_{−1}^{t+1} = (1/2)(t + 2)²,    −2 ≤ t < −1

Figure 3.27: Graphical interpretation of h(t) ∗ x(t) for t < −2 and −2 ≤ t < −1

For −1 ≤ t < 0, the product is shown in Figure 3.28(a) and

y(t) = ∫_t^0 (τ + 1)(1)dτ + ∫_0^{t+1} (1)(1)dτ = 1 − t²/2,    −1 ≤ t < 0

For 0 ≤ t < 1, the product is shown in Figure 3.28(b) and

y(t) = ∫_t^1 (1)(1)dτ = 1 − t,    0 ≤ t < 1


For t > 1 the product is always zero. Summarizing, we have

y(t) = { 0,            t < −2
       { (t + 2)²/2,   −2 ≤ t < −1
       { 1 − t²/2,     −1 ≤ t < 0
       { 1 − t,        0 ≤ t < 1
       { 0,            t ≥ 1

Figure 3.28: Graphical representation of h(t) ∗ x(t): (a) −1 ≤ t < 0, (b) 0 ≤ t < 1

3.5 Properties of LTI Systems

The impulse response of an LTI system represents a complete description of the characteristics of the system.

Memoryless LTI Systems

In section 3.2.1 we defined a system to be memoryless if its output at any instant in time depends only on the value of the input at the same instant in time. There we saw that the input-output relation of a memoryless LTI system is

y(t) = Kx(t) (3.22)

for some constant K. By setting x(t) = δ(t) in (3.22), we see that this system has the impulse response

h(t) = Kδ(t)


In other words, the only memoryless LTI systems are what we call constant gain systems.

Invertible LTI Systems

Recall that a system is invertible only if there exists an inverse system which enables the reconstruction of the input given the output. If hinv(t) represents the impulse response of the inverse system, then in terms of the convolution integral we must have

y(t) = x(t) ∗ h(t) ∗ hinv(t) = x(t)

this is only possible if

h(t) ∗ hinv(t) = hinv(t) ∗ h(t) = δ(t)

Causal LTI Systems

Using the convolution integral,

y(t) = ∫_{−∞}^{∞} x(τ)h(t − τ)dτ

for a CT system to be causal, y(t) must not depend on x(τ) for τ > t. We can see that this will be so if

h(t− τ) = 0 for τ > t

Letting λ = t − τ, this implies

h(λ) = 0 for λ < 0

In this case the convolution integral becomes

y(t) = ∫_{−∞}^{t} x(τ)h(t − τ)dτ = ∫_0^{∞} h(τ)x(t − τ)dτ

Stable LTI Systems

A CT system is stable if and only if every bounded input produces a bounded output. Consider a bounded input x(t) such that |x(t)| < B for all t. Suppose that this input is applied to an LTI system with impulse response h(t). Then

|y(t)| = |∫_{−∞}^{∞} h(τ)x(t − τ)dτ| ≤ ∫_{−∞}^{∞} |h(τ)| |x(t − τ)| dτ ≤ B ∫_{−∞}^{∞} |h(τ)| dτ

Therefore, the system is stable if

∫_{−∞}^{∞} |h(τ)| dτ < ∞
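The absolute-integrability test can be probed numerically on a growing truncation window: h(t) = e^{−t}u(t) saturates near area 1, while h(t) = u(t) grows without bound. A rough numerical illustration, not a proof (NumPy assumed):

```python
import numpy as np

dt = 0.001
for T in (10.0, 20.0, 40.0):
    t = np.arange(0, T, dt)
    stable_area = np.sum(np.abs(np.exp(-t))) * dt         # h(t) = e^{-t} u(t)
    unstable_area = np.sum(np.abs(np.ones_like(t))) * dt  # h(t) = u(t)

assert abs(stable_area - 1.0) < 1e-2   # converges: BIBO stable
assert unstable_area >= 39.9           # keeps growing with the window
```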


Chapter 4

The Fourier Series

In Chapter 3 we saw how to obtain the response of a linear time-invariant system to an arbitrary input represented in terms of the impulse function. The response was obtained in the form of the convolution integral. In this chapter we explore other ways of expressing an input signal in terms of other signals. In particular, we are interested in representing periodic signals in terms of complex exponentials or, equivalently, in terms of sine and cosine waveforms. This representation of signals leads to the Fourier series, named after the French physicist Jean Baptiste Fourier. Fourier was the first to suggest that periodic signals could be represented by a sum of sinusoids. The concept is really simple: consider a periodic signal with fundamental period T0 and fundamental frequency ω0 = 2π/T0; this periodic signal can be expressed as a linear combination of harmonically related sinusoids, as shown in Figure 4.1.

Figure 4.1: The concept of representing a periodic signal as a linear combinationof sinusoids


In the Fourier series representation of a signal, the higher frequency components of sinusoids have frequencies that are integer multiples of the fundamental frequency. This integer is called the harmonic number; for example, the function cos(2πkft) is the kth harmonic cosine, and its frequency is kf. The idea of the Fourier series demonstrated in Figure 4.1 uses a constant, sines and cosines to represent the original function, and is thus called the trigonometric Fourier series. Another form of the Fourier series is the complex form; here the original periodic function is represented as a combination of harmonically related complex exponentials. A set of harmonically related complex exponentials forms an orthogonal basis by which periodic signals can be represented, a concept explored next.

4.1 Orthogonal Representations of Signals

In this section we show a way of representing a signal as a sum of orthogonal signals; such a representation simplifies calculations involving signals. We can visualize the signal as a vector in an orthogonal coordinate system, with the orthogonal waveforms being the unit coordinates.

4.1.1 Orthogonal Vector Space

Vectors, functions and matrices can be expressed in more than one set of coordinates, which we usually call a vector space. The three-dimensional Cartesian coordinate system of Figure 4.2 is an example of a vector space, denoted by R3. The three axes of three-dimensional Cartesian coordinates, conventionally denoted the x, y, and z axes, form the basis of the vector space R3. A very natural and simple basis

Figure 4.2: 3D Cartesian coordinate system

is simply the vectors e1 = (1, 0, 0), e2 = (0, 1, 0) and e3 = (0, 0, 1). Any vector in R3 can be written in terms of this basis. A vector v = (a, b, c) in R3, for example, can be written uniquely as the linear combination v = a e1 + b e2 + c e3, where a, b, c are simply called the coefficients. We can obtain the coefficients with respect to the basis using the inner product; for vectors this is simply the dot product

<v, e1> = vᵀe1 = [a b c][1 0 0]ᵀ = a
<v, e2> = vᵀe2 = [a b c][0 1 0]ᵀ = b
<v, e3> = vᵀe3 = [a b c][0 0 1]ᵀ = c

We say the basis is orthogonal if the inner product of any two different vectors of the basis set is zero; to visualize, they are simply at right angles. It is clear that the vectors e1, e2 and e3 form an orthogonal basis since

<e1, e2> = e1ᵀe2 = 0
<e2, e3> = e2ᵀe3 = 0
<e3, e1> = e3ᵀe1 = 0
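These dot-product computations can be reproduced in a few lines (sketch, NumPy assumed):

```python
import numpy as np

e1, e2, e3 = np.eye(3)            # the standard basis of R^3
v = np.array([2.0, -1.0, 5.0])    # a sample vector v = (a, b, c)

# Mutual orthogonality of the basis
assert e1 @ e2 == 0 and e2 @ e3 == 0 and e3 @ e1 == 0

# Inner products recover the coefficients a, b, c ...
coeffs = np.array([v @ e1, v @ e2, v @ e3])
assert np.allclose(coeffs, v)

# ... and the coefficients rebuild v
assert np.allclose(coeffs[0]*e1 + coeffs[1]*e2 + coeffs[2]*e3, v)
```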


4.1.2 Orthogonal Signal Space

In the previous section we saw that a vector can be represented as a sum of orthogonal vectors, which form the coordinate system of a vector space. The problem for signals is analogous; we start by generalizing the concept of the inner product to signals. The inner product of two real-valued functions f(t) and g(t) over an interval (a, b) is defined as

<f, g> = ∫_a^b f(t)g(t)dt    (4.1)

and for complex-valued functions the inner product is defined as

<f, g> = ∫_a^b f(t)g*(t)dt    (4.2)

where g*(t) stands for the complex conjugate of the signal. For any basis set to be orthogonal, the inner product of every single element of the set with every other element must be zero. A set of signals Φi, i = 0, ±1, ±2, ∙ ∙ ∙ , is said to be orthogonal over an interval (a, b) if

∫_a^b Φm(t)Φn*(t)dt = { En,  m = n
                      { 0,   m ≠ n        (4.3)

where En is simply the signal energy over the interval (a, b). If the energies En = 1 for all n, the Φn(t) are said to be orthonormal signals. An orthogonal set can always be normalized by dividing each signal in the set by √En.

Example 4.1  Show that the signals Φm(t) = sin mt, m = 1, 2, 3, ∙ ∙ ∙ , form an orthogonal set on the interval −π < t < π.

Solution  We start by showing that the inner product of each single element of the set with every other element is zero:

∫_{−π}^{π} Φm(t)Φn*(t)dt = ∫_{−π}^{π} (sin mt)(sin nt)dt
                         = (1/2) ∫_{−π}^{π} cos(m − n)t dt − (1/2) ∫_{−π}^{π} cos(m + n)t dt
                         = { π,  m = n
                           { 0,  m ≠ n

Since the energy in each signal equals π, the following set of signals forms an orthonormal set over the interval −π < t < π:

sin t/√π,  sin 2t/√π,  sin 3t/√π,  ∙ ∙ ∙
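The orthogonality relation of Example 4.1 can be checked with a simple Riemann sum (sketch, NumPy assumed):

```python
import numpy as np

t = np.linspace(-np.pi, np.pi, 200001)
dt = t[1] - t[0]

def inner(m, n):
    """Approximate <sin mt, sin nt> = integral of sin(mt) sin(nt) over (-pi, pi)."""
    return np.sum(np.sin(m * t) * np.sin(n * t)) * dt

assert abs(inner(2, 2) - np.pi) < 1e-3   # energy E_n = pi for m = n
assert abs(inner(2, 3)) < 1e-3           # zero for m != n
```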

Example 4.2  Show that the signals Φk(t) = e^{jkω0t}, k = 0, ±1, ±2, ∙ ∙ ∙ , form an orthogonal set on the interval (0, T), where T = 2π/ω0.


Solution  It is easy to show that

∫_0^T Φk(t)Φl*(t)dt = ∫_0^T e^{jkω0t}e^{−jlω0t}dt = ∫_0^T e^{j(k−l)ω0t}dt = { T,  k = l
                                                                            { 0,  k ≠ l

and hence the signals (1/√T)e^{jkω0t} form an orthonormal set over the interval (0, T).

Evaluating the integral for the case k ≠ l is not trivial and is shown below:

∫_0^T e^{j(k−l)ω0t}dt = (1/(j(k − l)ω0)) e^{j(k−l)ω0t} |_0^T = (1/(j(k − l)ω0)) [e^{j(k−l)ω0T} − 1] = 0

since ω0 = 2π/T and e^{j2π(k−l)} = 1 for k ≠ l.

Now, consider expressing a signal x(t) with finite energy over the interval (a, b) by an orthonormal set of signals Φi(t) over the same interval as

x(t) = ∑_{i=−∞}^{∞} ci Φi(t)    (4.4)

The series representation in (4.4) is called a generalized Fourier series of x(t). We can visualize this as similar to expressing a vector as a linear combination of the orthogonal basis. In an analogous fashion, here we are expressing a signal as a linear combination of an orthogonal or orthonormal set of signals. The question that remains is how to find the coefficients ci for such a linear combination. It turns out that computing ci is really simple: just multiply (4.4) by Φk*(t) and integrate over the range of definition of x(t). Therefore,

∫_a^b x(t)Φk*(t)dt = ∫_a^b ∑_{i=−∞}^{∞} ci Φi(t)Φk*(t)dt = ∑_{i=−∞}^{∞} ci ∫_a^b Φi(t)Φk*(t)dt    (4.5)

From (4.3), and since Φi(t) is an orthonormal set, (4.5) can be simplified to

ck = ∫_a^b x(t)Φk*(t)dt,    k = 0, ±1, ±2, ∙ ∙ ∙    (4.6)

noting that the summation in (4.5) has a value only when i = k and is zero otherwise. Note also that (4.6) is the inner product of the signal with the orthonormal set. If the set Φi(t) is an orthogonal set, then the coefficients in (4.6) become

ck = (1/Ek) ∫_a^b x(t)Φk*(t)dt,    k = 0, ±1, ±2, ∙ ∙ ∙    (4.7)


Example 4.3  Consider the signal x(t) defined over the interval (0, 3) as shown in Figure 4.3.

Figure 4.3: A signal x(t) defined over the interval (0, 3)

It is possible to represent this signal in terms of an orthogonal set of basis signals defined over the interval (0, 3). Figure 4.4 shows a set of three orthogonal signals used to represent the signal x(t).

Figure 4.4: A set of three orthogonal signals

The coefficients that represent the signal x(t), obtained using equation (4.6), are given by

c1 = ∫_0^3 x(t)Φ1*(t)dt = 2
c2 = ∫_0^3 x(t)Φ2*(t)dt = 0
c3 = ∫_0^3 x(t)Φ3*(t)dt = 1

The signal x(t) can be represented in terms of the Φi by

x(t) = ∑_{i=1}^{3} ci Φi(t) = c1Φ1 + c2Φ2 + c3Φ3 = 2Φ1 + Φ3

It is worth emphasizing here that the choice of the basis is not unique and many other possibilities exist.


4.2 Exponential Fourier Series

It was shown in Example 4.2 that the set of exponentials e^{jkω0t} (k = 0, ±1, ±2, ∙ ∙ ∙ ) is orthogonal over an interval (0, T), where T = 2π/ω0. If we select such a set as basis functions, then according to (4.4)

x(t) = ∑_{n=−∞}^{∞} cn e^{jnω0t}    (4.8)

where, from (4.7), the cn are complex constants given by

cn = (1/T) ∫_0^T x(t)e^{−jnω0t}dt,    n = 0, ±1, ±2, ∙ ∙ ∙    (4.9)

Each term in the series has period T; hence if the series converges, its sum is periodic with period T. Such a series is called the complex exponential Fourier series, and the cn are called the Fourier coefficients or spectral coefficients. Furthermore, the interval of integration can be replaced by any other interval of length T. Recall that we denote integration over an interval of length T by ∫_T.

Example 4.4  Find the exponential Fourier series for the signal x(t) in Figure 4.5.

Figure 4.5: A periodic signal x(t)

Solution  In this case T = 2 and ω0 = 2π/T = π; the Fourier coefficients are

cn = (1/2) ∫_{−1}^{1} x(t)e^{−jnπt}dt
   = (1/2) ∫_{−1}^{0} −K e^{−jnπt}dt + (1/2) ∫_{0}^{1} K e^{−jnπt}dt
   = (K/2) [(1 − e^{jnπ})/(jnπ) + (e^{−jnπ} − 1)/(−jnπ)]
   = (K/jnπ) [1 − (1/2)e^{jnπ} − (1/2)e^{−jnπ}]
   = { 2K/(jnπ),  n odd
     { 0,          n even
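The coefficients just derived can be verified numerically for K = 1 (sketch, NumPy assumed; x(t) = −K on (−1, 0) and +K on (0, 1), as in Figure 4.5):

```python
import numpy as np

K = 1.0
t = np.linspace(-1, 1, 400001)
dt = t[1] - t[0]
x = np.where(t >= 0, K, -K)          # one period of the square wave

def c(n):
    """c_n = (1/T) * integral of x(t) e^{-j n w0 t} over one period, T = 2, w0 = pi."""
    return 0.5 * np.sum(x * np.exp(-1j * n * np.pi * t)) * dt

assert abs(c(2)) < 1e-3                          # even harmonics vanish
assert abs(c(1) - 2 * K / (1j * np.pi)) < 1e-3   # 2K/(jnpi) for n odd
assert abs(c(3) - 2 * K / (3j * np.pi)) < 1e-3
```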


4.2.1 The Frequency Spectra (Exponential)

Also called the line spectra, these are separate plots of the magnitude of cn versus ω (or harmonic number), the magnitude spectrum, and the phase of cn versus ω (or harmonic number), the phase spectrum. In special cases we allow the magnitude spectrum to take on negative values as well as positive; the spectrum is then called an amplitude spectrum. These two plots together are the frequency or line spectra (since amplitudes and phases are indicated by vertical lines) of the periodic signal x(t). The Fourier coefficients cn are complex in general; this requires that the coefficients cn be expressed in polar form as |cn| e^{j∠cn}. Consider the signal of Example 4.4; it was found that the Fourier coefficients are given by

cn = { 2K/(jnπ),  n odd
     { 0,          n even

The magnitude spectrum is

|cn| =

{2K|n|π n odd

0 n even

paying particular attention to the case n = 0, for which it can be shown using l'Hopital's rule that c0 = 0. The phase spectrum of x(t) is given by

∠cn =

−π2 n = (2m− 1), m = 1, 2, ∙ ∙ ∙

0 n = 2m, m = 0, 1, 2, ∙ ∙ ∙π2 n = −(2m− 1), m = 1, 2, ∙ ∙ ∙

The line spectra of x(t) are shown in Figure 4.6. Note that the magnitude spectrum has even symmetry while the phase spectrum has odd symmetry. The spectrum exists only at n = 0, ±1, ±2, ±3, …, corresponding to ω = 0, ±ω_0, ±2ω_0, ±3ω_0, …; i.e., only at discrete values of ω. It is therefore a discrete spectrum, and it is

[Magnitude spectrum: lines of height 2K/π, 2K/(3π), 2K/(5π), … at n = ±1, ±3, ±5, …; phase spectrum: lines at ∓π/2; horizontal axis n]

Figure 4.6: Line Spectra for the periodic signal x(t) of Example 4.4


very common to see the spectrum written in terms of the discrete unit impulse function. The magnitude spectrum c_n in Figure 4.6 could be expressed as

c_n = \cdots + \frac{2K}{3\pi}\,\delta[n+3] + \frac{2K}{\pi}\,\delta[n+1] + \frac{2K}{\pi}\,\delta[n-1] + \frac{2K}{3\pi}\,\delta[n-3] + \cdots

Example 4.5  Find the exponential Fourier series of the impulse train \delta_T(t) shown in Figure 4.7.

Figure 4.7: Impulse train

Solution:  From (4.9), the Fourier coefficients are

c_n = \frac{1}{T}\int_{-T/2}^{T/2} \delta_T(t)\, e^{-jn\omega_0 t}\,dt = \frac{1}{T}\int_{-T/2}^{T/2} \delta(t)\, e^{-jn\omega_0 t}\,dt = \frac{1}{T}\, e^{-jn\omega_0 t}\Big|_{t=0} = \frac{1}{T}

The result is based on the sifting property of the impulse function

\int_{-\infty}^{\infty} x(t)\,\delta(t-\tau)\,dt = x(\tau)

The exponential form of the Fourier series is given by

\delta_T(t) = \sum_{n=-\infty}^{\infty} \frac{1}{T}\, e^{jn\omega_0 t}

The amplitude spectrum for this function is given in Figure 4.8. Note that since the Fourier coefficients are real, there is no need for a phase spectrum. ■

Figure 4.8: Amplitude spectrum for an impulse train


Figure 4.9: A periodic signal x(t)

Example 4.6  Find the exponential Fourier series for the square signal x(t) in Figure 4.9.

Solution:  In this case \omega_0 = \frac{2\pi}{T}, and the Fourier coefficient for the d.c. component is

c_0 = \frac{1}{T}\int_T x(t)\,dt = \frac{1}{T}\int_{-T/2}^{T/2} x(t)\,dt = \frac{1}{T}\int_{-T_1}^{T_1} 1\,dt = \frac{2T_1}{T}

The Fourier coefficients are

c_n = \frac{1}{T}\int_{-T/2}^{T/2} x(t)\, e^{-jn\omega_0 t}\,dt = \frac{1}{T}\int_{-T_1}^{T_1} e^{-jn\omega_0 t}\,dt = \frac{1}{jTn\omega_0}\left(e^{jn\omega_0 T_1} - e^{-jn\omega_0 T_1}\right)

= \frac{2}{Tn\omega_0}\sin(n\omega_0 T_1) = \frac{2}{Tn(2\pi/T)}\sin\!\left(n\,\frac{2\pi}{T}\,T_1\right)

= \frac{2T_1}{T}\,\mathrm{sinc}\!\left(\frac{2nT_1}{T}\right)    (4.10)  ■
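Result (4.10) is easy to cross-check numerically. The hedged Python sketch below (function names are ours) compares the sinc form against the sin form it was derived from, using the normalized sinc(x) = sin(πx)/(πx).

```python
import math

def cn_sinc(n, T1, T):
    """Closed form (4.10): c_n = (2*T1/T) * sinc(2*n*T1/T),
    with sinc(x) = sin(pi*x)/(pi*x)."""
    x = 2 * n * T1 / T
    if x == 0:
        return 2 * T1 / T                 # sinc(0) = 1
    return (2 * T1 / T) * math.sin(math.pi * x) / (math.pi * x)

def cn_sin(n, T1, T):
    """Intermediate form: c_n = 2*sin(n*w0*T1) / (T*n*w0), with w0 = 2*pi/T."""
    w0 = 2 * math.pi / T
    if n == 0:
        return 2 * T1 / T
    return 2 * math.sin(n * w0 * T1) / (T * n * w0)
```

For the special case T_1 = π/2 and T = 2π treated next, both forms give c_1 = (1/2) sinc(1/2) = 1/π.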

Consider the special case of T_1 = \frac{\pi}{2} and T = 2\pi in Example 4.6. Hence,

c_n = \frac{1}{2}\,\mathrm{sinc}\!\left(\frac{n}{2}\right)

The line spectra for this particular case are shown in Figure 4.10. The frequency spectra provide an alternative description of the signal x(t), namely the frequency-domain description. The time-domain description of a signal x(t) is the one depicted in Figure 4.9; the frequency-domain description of x(t) is as shown in Figure 4.10. One can say the signal has a dual identity: the time-domain identity x(t) and the frequency-domain identity (the frequency spectra). Together, the two identities provide a better understanding of the signal. We notice some interesting features of these spectra. First, the spectra exist for positive as well as negative values of ω (the frequency). Second, the magnitude spectrum is an even function of ω and the phase spectrum is an odd function of ω, a property that we explain in detail later.


Figure 4.10: The magnitude and phase spectra for x(t) for T1 =π2 and T = 2π

What is Negative Frequency?

The existence of the spectrum at negative frequencies (i.e., a double-sided spectrum) is somewhat disturbing because, by definition, frequency is a positive quantity. How do we interpret a negative frequency? And how do we then interpret the spectral plots for negative values of ω? First, we have to accept that this situation arises only when dealing with complex exponentials. The frequency spectra are a graphical representation of the coefficients c_n as a function of ω. The existence of the spectrum at ω = −nω_0 is simply an indication that an exponential component e^{-jn\omega_0 t} exists in the series.

4.3 Trigonometric Fourier Series

In the previous section we represented signals using a set of complex exponentials which is orthogonal over an interval (0, T). A question that may well be asked at this point: if we know that a given function x(t) is real-valued, isn't there an equivalent way of expressing a Fourier series representation of x(t) using a set of real-valued orthogonal functions? In this section we show that an orthogonal trigonometric signal set can be selected as basis functions to express a signal x(t) over any interval of duration T. Consider the signal set

\Phi_i(t) = \{1, \cos\omega_0 t, \cos 2\omega_0 t, \ldots, \cos n\omega_0 t, \ldots;\; \sin\omega_0 t, \sin 2\omega_0 t, \ldots, \sin n\omega_0 t, \ldots\}    (4.11)


It can easily be shown that this set is orthogonal over any interval T = 2\pi/\omega_0, which is the fundamental period. Therefore,

\int_T \cos n\omega_0 t\,\cos m\omega_0 t\,dt = \begin{cases} 0 & m \neq n \\ \frac{T}{2} & m = n \neq 0 \end{cases}    (4.12)

and

\int_T \sin n\omega_0 t\,\sin m\omega_0 t\,dt = \begin{cases} 0 & m \neq n \\ \frac{T}{2} & m = n \neq 0 \end{cases}    (4.13)

and

\int_T \sin n\omega_0 t\,\cos m\omega_0 t\,dt = 0 \quad \text{for all } n \text{ and } m    (4.14)

If we select \Phi_i(t) in (4.11) as basis functions, then we can express a signal x(t) by a trigonometric Fourier series over any interval of duration T as

x(t) = a_0 + a_1\cos\omega_0 t + a_2\cos 2\omega_0 t + \cdots + b_1\sin\omega_0 t + b_2\sin 2\omega_0 t + \cdots    (4.15)

or

x(t) = a_0 + \sum_{n=1}^{\infty}\left[a_n\cos n\omega_0 t + b_n\sin n\omega_0 t\right]    (4.16)

Using

c_n = \frac{1}{E_n}\int_T x(t)\,\Phi_n(t)\,dt

we can determine the coefficients a_0, a_n and b_n; thus

a_0 = c_0 = \frac{1}{T}\int_T x(t)\,dt \qquad \text{(the average value of } x(t) \text{ over one period)}

a_n = \frac{2}{T}\int_T x(t)\cos n\omega_0 t\,dt, \qquad n = 1, 2, 3, \ldots    (4.17)

b_n = \frac{2}{T}\int_T x(t)\sin n\omega_0 t\,dt, \qquad n = 1, 2, 3, \ldots    (4.18)
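Formulas (4.17) and (4.18) can be evaluated numerically for any waveform. The sketch below (Python, illustrative; the helper name is ours) uses a midpoint Riemann sum and tests it on the square wave used later in Example 4.7 (value 1 for |t| < π/2 per period 2π, 0 elsewhere), for which a_0 = 1/2, a_1 = 2/π and b_1 = 0.

```python
import math

def trig_coefficients(x, n, T, num_steps=20000):
    """Approximate a_n and b_n of (4.17)-(4.18) with a midpoint Riemann sum.
    For n = 0 this returns (a_0, 0), where a_0 uses the 1/T normalization."""
    w0 = 2 * math.pi / T
    dt = T / num_steps
    a = b = 0.0
    for k in range(num_steps):
        t = (k + 0.5) * dt
        a += x(t) * math.cos(n * w0 * t) * dt
        b += x(t) * math.sin(n * w0 * t) * dt
    if n == 0:
        return a / T, 0.0
    return 2 * a / T, 2 * b / T

# Square wave: 1 where cos(t) >= 0 (i.e. |t| <= pi/2 modulo 2*pi), else 0
square = lambda t: 1.0 if math.cos(t) >= 0 else 0.0
a0, _ = trig_coefficients(square, 0, 2 * math.pi)
a1, b1 = trig_coefficients(square, 1, 2 * math.pi)
```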

In order to see the close connection of the trigonometric Fourier series with the exponential Fourier series, we shall re-derive the trigonometric Fourier series from the exponential Fourier series. For a real-valued signal x(t), the complex conjugate of c_n is given by

c_n^* = \left[\frac{1}{T}\int_0^T x(t)\, e^{-jn\omega_0 t}\,dt\right]^* = \frac{1}{T}\int_0^T x(t)\, e^{jn\omega_0 t}\,dt = c_{-n}

(recall that x(t) = x^*(t) for a real signal). Hence,

|c_{-n}| = |c_n| \quad \text{and} \quad \angle c_{-n} = -\angle c_n    (4.19)


and this explains why the amplitude spectrum has even symmetry while the phase spectrum has odd symmetry. This property can be clearly seen in the examples of section 4.2.1. Furthermore, this allows us to regroup the exponential series in (4.8) as follows:

x(t) = c_0 + \sum_{n=-\infty}^{-1} c_n e^{jn\omega_0 t} + \sum_{n=1}^{\infty} c_n e^{jn\omega_0 t}

= c_0 + \sum_{n=1}^{\infty} c_{-n} e^{-jn\omega_0 t} + \sum_{n=1}^{\infty} c_n e^{jn\omega_0 t}

= c_0 + \sum_{n=1}^{\infty}\left[c_{-n} e^{-jn\omega_0 t} + c_n e^{jn\omega_0 t}\right]    (4.20)

Since c_n in general is complex, we can express c_n in rectangular form as c_n = \alpha + j\beta, so that by (4.19) c_{-n} = c_n^* = \alpha - j\beta, and substitute into (4.20). Hence,

x(t) = c_0 + \sum_{n=1}^{\infty}\left[(\alpha - j\beta)\,e^{-jn\omega_0 t} + (\alpha + j\beta)\,e^{jn\omega_0 t}\right]

= c_0 + \sum_{n=1}^{\infty}\left[\alpha\left(e^{jn\omega_0 t} + e^{-jn\omega_0 t}\right) + j\beta\left(e^{jn\omega_0 t} - e^{-jn\omega_0 t}\right)\right]

Using Euler's identities we have

x(t) = c_0 + \sum_{n=1}^{\infty}\left[\alpha\,(2\cos n\omega_0 t) + j\beta\,(2j\sin n\omega_0 t)\right]

= c_0 + \sum_{n=1}^{\infty}\left[2\alpha\cos n\omega_0 t - 2\beta\sin n\omega_0 t\right]    (4.21)

Equation (4.21) can be written as

x(t) = a_0 + \sum_{n=1}^{\infty}\left[a_n\cos n\omega_0 t + b_n\sin n\omega_0 t\right]

where the coefficients a_0, a_n and b_n are given by

a_0 = c_0 = \frac{1}{T}\int_T x(t)\,dt

a_n = 2\alpha = 2\,\mathrm{Re}\{c_n\} = \frac{2}{T}\int_T x(t)\cos n\omega_0 t\,dt

b_n = -2\beta = -2\,\mathrm{Im}\{c_n\} = \frac{2}{T}\int_T x(t)\sin n\omega_0 t\,dt

which is clearly as derived before. The result above becomes clearer when we substitute e^{-jn\omega_0 t} = \cos n\omega_0 t - j\sin n\omega_0 t in

c_n = \frac{1}{T}\int_0^T x(t)\, e^{-jn\omega_0 t}\,dt


to obtain

c_n = \frac{1}{T}\int_T x(t)\,(\cos n\omega_0 t - j\sin n\omega_0 t)\,dt

= \frac{1}{T}\int_T x(t)\cos n\omega_0 t\,dt - j\,\frac{1}{T}\int_T x(t)\sin n\omega_0 t\,dt

= \alpha + j\beta = \frac{1}{2}a_n - j\,\frac{1}{2}b_n    (4.22)

4.3.1 Compact (Combined) Trigonometric Fourier Series

The trigonometric Fourier series in (4.16) can be written in a compact form by combining the sine and cosine terms to obtain a single sinusoid. Since the sinusoidal terms are of the same frequency, we can use the trigonometric identity

a_n\cos n\omega_0 t + b_n\sin n\omega_0 t = A_n\cos(n\omega_0 t + \theta_n)

where

A_n = \sqrt{a_n^2 + b_n^2}    (4.23)

\theta_n = \tan^{-1}\!\left(\frac{-b_n}{a_n}\right)    (4.24)

The trigonometric Fourier series in (4.16) can now be expressed in the compact form of the trigonometric Fourier series as

x(t) = a_0 + \sum_{n=1}^{\infty} A_n\cos(n\omega_0 t + \theta_n)    (4.25)

where A_n and \theta_n are computed via (4.23) and (4.24) from the coefficients a_n and b_n of (4.17) and (4.18).
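Converting a cosine/sine pair into the single-sinusoid form of (4.23)–(4.24) is a two-line computation. The sketch below (Python, arbitrary test values a_1 = 3, b_1 = 4) uses atan2 so that θ_n lands in the correct quadrant even when a_n < 0.

```python
import math

def compact_form(a_n, b_n):
    """A_n and theta_n of (4.23)-(4.24):
    a_n cos(w t) + b_n sin(w t) = A_n cos(w t + theta_n)."""
    A_n = math.hypot(a_n, b_n)                  # sqrt(a_n^2 + b_n^2)
    theta_n = math.atan2(-b_n, a_n)             # quadrant-aware arctangent
    return A_n, theta_n

A, theta = compact_form(3.0, 4.0)               # A = 5
# Verify the identity at an arbitrary point
t = 0.7
lhs = 3.0 * math.cos(t) + 4.0 * math.sin(t)
rhs = A * math.cos(t + theta)
```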

Example 4.7  Find the compact trigonometric Fourier series for the periodic square wave x(t) illustrated in Figure 4.9, where T_1 = \pi/2 and T = 2\pi.

Solution:  The period is T = 2\pi and \omega_0 = 2\pi/T = 1. Therefore

x(t) = a_0 + \sum_{n=1}^{\infty}\left[a_n\cos nt + b_n\sin nt\right]

where

a_0 = \frac{1}{T}\int_T x(t)\,dt = \frac{1}{2\pi}\int_{-\pi/2}^{\pi/2} dt = \frac{1}{2}

Note that a_0 could easily have been deduced by inspecting x(t) in Figure 4.9; it is the average value of x(t) over one period. Next,

a_n = \frac{2}{2\pi}\int_{-\pi/2}^{\pi/2} \cos nt\,dt = \frac{2}{n\pi}\sin\!\left(\frac{n\pi}{2}\right) = \begin{cases} 0 & n \text{ even} \\ \frac{2}{\pi n} & n = 1, 5, 9, 13, \ldots \\ -\frac{2}{\pi n} & n = 3, 7, 11, 15, \ldots \end{cases}


and

b_n = \frac{2}{2\pi}\int_{-\pi/2}^{\pi/2} \sin nt\,dt = 0

Therefore

x(t) = \frac{1}{2} + \sum_{\substack{n=1 \\ n \text{ odd}}}^{\infty} a_n\cos nt = \frac{1}{2} + \frac{2}{\pi}\left(\cos t - \frac{1}{3}\cos 3t + \frac{1}{5}\cos 5t - \frac{1}{7}\cos 7t + \cdots\right)    (4.26)

Only the cosine terms appear in the trigonometric series. The series is therefore already in compact form, except that the amplitudes of alternating harmonics are negative. Now, by definition, the amplitudes A_n are positive. The negative sign can be expressed instead by introducing a phase of π radians, since

-\cos\alpha = \cos(\alpha \pm \pi)

Using this fact, we can express the series in (4.26) as

x(t) = \frac{1}{2} + \frac{2}{\pi}\left(\cos t + \frac{1}{3}\cos(3t - \pi) + \frac{1}{5}\cos 5t + \frac{1}{7}\cos(7t - \pi) + \cdots\right)

This is now the Fourier series in compact form, where

a_0 = \frac{1}{2}

A_n = \begin{cases} 0 & n \text{ even} \\ \frac{2}{\pi n} & n \text{ odd} \end{cases}

\theta_n = \begin{cases} 0 & \text{for all } n \neq 3, 7, 11, 15, \ldots \\ -\pi & n = 3, 7, 11, 15, \ldots \end{cases}  ■

4.3.2 The Frequency Spectrum (Trigonometric)

The compact trigonometric Fourier series in (4.25) indicates that a periodic signal x(t) can be expressed as a sum of sinusoids of frequencies 0 (dc), ω_0, 2ω_0, …, nω_0, …, whose amplitudes are A_0, A_1, A_2, …, A_n, …, and whose phases are 0, θ_1, θ_2, …, θ_n, …, respectively. We can plot the amplitude A_n vs. ω (the amplitude spectrum) and θ_n vs. ω (the phase spectrum). To see the close connection between the trigonometric spectra (A_n and θ_n) and the exponential spectra (|c_n| and ∠c_n), express c_n in (4.22) in polar form c_n = |c_n| e^{j\angle c_n} as follows:

c_n = \sqrt{\left(\frac{a_n}{2}\right)^2 + \left(\frac{b_n}{2}\right)^2}\; e^{j\tan^{-1}(-b_n/a_n)} = \frac{1}{2}\sqrt{a_n^2 + b_n^2}\; e^{j\tan^{-1}(-b_n/a_n)}

Hence,

|c_n| = \frac{1}{2}\sqrt{a_n^2 + b_n^2} \qquad \text{and} \qquad \angle c_n = \tan^{-1}\!\left(\frac{-b_n}{a_n}\right)    (4.27)


and when compared with (4.23) and (4.24) this implies |c_n| = \frac{1}{2}A_n for n ≥ 1 and ∠c_n = θ_n for positive n. From (4.19), ∠c_n = −θ_n for negative n. In conclusion, the connection between the trigonometric spectra and the exponential spectra can be summarized as follows. The dc components c_0 and a_0 are identical in both spectra. Moreover, the exponential amplitude spectrum |c_n| is half of the trigonometric amplitude spectrum A_n for n ≥ 1. The exponential angle spectrum ∠c_n is identical to the trigonometric phase spectrum θ_n for positive n and equal to −θ_n for negative n. We can therefore produce the exponential spectra merely by inspection of the trigonometric spectra, and vice versa. The following example demonstrates this feature.

Example 4.8  The trigonometric Fourier spectra of a certain periodic signal x(t) are shown in Figure 4.11(a). By inspecting these spectra, sketch the corresponding exponential Fourier spectra.

[(a) Trigonometric spectra: A_n with lines 16, 12, 8, 4 at n = 0, 3, 6, 9, and θ_n with values −π/4, −π/2, −π/4 at n = 3, 6, 9. (b) Exponential spectra: |c_n| and ∠c_n over n = −12, …, 12]

Figure 4.11: Spectra for Example 4.8

Solution:  The trigonometric frequency components exist at frequencies 0, 3, 6, and 9. The exponential frequency components exist at 0, ±3, ±6, ±9. Consider the amplitude spectrum first: the dc component remains the same, i.e. c_0 = a_0 = 16. Now |c_n| is an even function of ω and |c_n| = A_n/2. Thus, the remaining spectrum |c_n| for positive n is half the trigonometric amplitude spectrum A_n, and for negative n it is a reflection about the vertical axis of the spectrum for positive n, as shown in Figure 4.11(b). Note that the trigonometric Fourier series for x(t) is written as

x(t) = 16 + 12\cos\!\left(3t - \frac{\pi}{4}\right) + 8\cos\!\left(6t - \frac{\pi}{2}\right) + 4\cos\!\left(9t - \frac{\pi}{4}\right)

The exponential Fourier series is

x(t) = 16 + \left[6e^{-j\pi/4}e^{j3t} + 6e^{j\pi/4}e^{-j3t}\right] + \left[4e^{-j\pi/2}e^{j6t} + 4e^{j\pi/2}e^{-j6t}\right] + \left[2e^{-j\pi/4}e^{j9t} + 2e^{j\pi/4}e^{-j9t}\right]

= 16 + 6\left[e^{j(3t-\pi/4)} + e^{-j(3t-\pi/4)}\right] + 4\left[e^{j(6t-\pi/2)} + e^{-j(6t-\pi/2)}\right] + 2\left[e^{j(9t-\pi/4)} + e^{-j(9t-\pi/4)}\right]

= 16 + 12\cos\!\left(3t - \frac{\pi}{4}\right) + 8\cos\!\left(6t - \frac{\pi}{2}\right) + 4\cos\!\left(9t - \frac{\pi}{4}\right)

Clearly both sets of spectra represent the same periodic signal. ■


Example 4.9  For the train of impulses in Example 4.5, sketch the trigonometric spectrum and write the trigonometric Fourier series.

Solution:  By inspecting the double-sided spectrum in Figure 4.8 we conclude

A_0 = c_0 = \frac{1}{T}

A_n = 2|c_n| = \frac{2}{T}, \qquad n = 1, 2, 3, \ldots

\theta_n = 0

Figure 4.12 shows the trigonometric Fourier spectrum. From this spectrum we can express the impulse train \delta_T(t) as

\delta_T(t) = \frac{1}{T}\left[1 + 2\left(\cos\omega_0 t + \cos 2\omega_0 t + \cos 3\omega_0 t + \cdots\right)\right], \qquad \omega_0 = \frac{2\pi}{T}  ■

Figure 4.12: Trigonometric spectrum of the impulse train

Example 4.10  Find the compact trigonometric Fourier series for the triangular periodic signal x(t) illustrated in Figure 4.13 and sketch the amplitude and phase spectra.

[Plot: triangular wave x(t) oscillating between −1 and 1, shown for t from −4 to 3]

Figure 4.13: A triangular periodic signal

Solution:  In this case the period is T = 2. Hence, \omega_0 = \pi and

x(t) = a_0 + \sum_{n=1}^{\infty}\left[a_n\cos n\pi t + b_n\sin n\pi t\right]

where

x(t) = \begin{cases} 2t & |t| \leq \frac{1}{2} \\ 2(1-t) & \frac{1}{2} < t \leq \frac{3}{2} \end{cases}


Note that it will be easier to choose the interval of integration from −1/2 to 3/2 rather than 0 to 2. Inspection of Figure 4.13 shows that the average value (dc) of x(t) is zero, so a_0 = 0. Also

a_n = \frac{2}{2}\int_{-1/2}^{3/2} x(t)\cos n\pi t\,dt = \int_{-1/2}^{1/2} 2t\cos n\pi t\,dt + \int_{1/2}^{3/2} 2(1-t)\cos n\pi t\,dt = 0

Therefore a_n = 0, and

b_n = \int_{-1/2}^{1/2} 2t\sin n\pi t\,dt + \int_{1/2}^{3/2} 2(1-t)\sin n\pi t\,dt = \frac{8}{n^2\pi^2}\sin\!\left(\frac{n\pi}{2}\right) = \begin{cases} 0 & n \text{ even} \\ \frac{8}{n^2\pi^2} & n = 1, 5, 9, 13, \ldots \\ -\frac{8}{n^2\pi^2} & n = 3, 7, 11, 15, \ldots \end{cases}

Therefore

x(t) = \frac{8}{\pi^2}\left[\sin\pi t - \frac{1}{9}\sin 3\pi t + \frac{1}{25}\sin 5\pi t - \frac{1}{49}\sin 7\pi t + \cdots\right]

In order to plot the Fourier spectra, the series must be converted into compact trigonometric form as in Equation (4.25). This can be done using the identity

\pm\sin kt = \cos\!\left(kt \mp \frac{\pi}{2}\right)

Hence, x(t) can be expressed as

x(t) = \frac{8}{\pi^2}\left[\cos\!\left(\pi t - \frac{\pi}{2}\right) + \frac{1}{9}\cos\!\left(3\pi t + \frac{\pi}{2}\right) + \frac{1}{25}\cos\!\left(5\pi t - \frac{\pi}{2}\right) + \frac{1}{49}\cos\!\left(7\pi t + \frac{\pi}{2}\right) + \cdots\right]

Note in this series how all the even harmonics are missing. The phases of the odd harmonics alternate between −π/2 and π/2. Figure 4.14 shows the amplitude and phase spectra for x(t). ■

Comment  The summation of sinusoids that are harmonically related results in a signal that is periodic: the ratio of any two frequencies is a rational number. The largest positive number of which all the frequencies are integral (integer) multiples is the fundamental frequency. Consider the signal x(t)

x(t) = 2 + 7\cos\!\left(\tfrac{1}{2}t + \theta_1\right) + 3\cos\!\left(\tfrac{2}{3}t + \theta_2\right) + 5\cos\!\left(\tfrac{7}{6}t + \theta_3\right)

The frequencies in the spectrum of x(t) are 1/2, 2/3, and 7/6 (we do not consider dc). The ratios of the successive frequencies are 3/4 and 4/7, respectively. Because both these numbers are rational, all three frequencies in the spectrum are harmonically related and the signal x(t) is periodic. The largest number of which 1/2, 2/3, and 7/6 are integral multiples is 1/6. Moreover, 3(1/6) = 1/2, 4(1/6) = 2/3, and 7(1/6) = 7/6. Therefore the fundamental frequency is 1/6. The three frequencies in the spectrum are the third, fourth, and seventh harmonics. The fundamental frequency component itself is absent in this example.
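The procedure in the comment, finding the largest number of which all frequencies are integer multiples, is just a gcd of fractions: the gcd of the numerators over the lcm of the denominators, once each fraction is in lowest terms. A small Python sketch of this bookkeeping (the function name is ours):

```python
from fractions import Fraction
from math import gcd

def fundamental_frequency(freqs):
    """Largest w0 such that every listed rational frequency is an integer
    multiple of w0: gcd of numerators over lcm of denominators."""
    num, den = 0, 1
    for f in map(Fraction, freqs):               # Fraction reduces to lowest terms
        num = gcd(num, f.numerator)
        den = den * f.denominator // gcd(den, f.denominator)   # running lcm
    return Fraction(num, den)

w0 = fundamental_frequency(['1/2', '2/3', '7/6'])     # frequencies from the comment
harmonics = [Fraction(f) / w0 for f in ['1/2', '2/3', '7/6']]
```

For the comment's example this yields w0 = 1/6 with the components appearing as the 3rd, 4th and 7th harmonics.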


[Magnitude spectrum: lines of height 8/π², 8/(9π²), 8/(25π²), … at n = 1, 3, 5, …; phase spectrum alternating between −π/2 and π/2; axis n from 0 to 9]

Figure 4.14: Line Spectra for the triangular periodic signal x(t) of Example 4.10

4.4 Convergence of the Fourier Series

For the Fourier series to make sense, the Fourier coefficients c_n should be finite for all values of n, and

\sum_{n=-N}^{N} c_n e^{jn\omega_0 t} \;\xrightarrow{\;N\to\infty\;}\; x(t),

i.e., the summation must converge to the signal x(t). Fourier believed that any periodic signal could be expressed as a sum of sinusoids. However, this turned out not to be the case.

4.4.1 Dirichlet Conditions

For the Fourier series to converge, the signal x(t) must possess, over any period, the following properties, which are known as the Dirichlet conditions.

1. x(t) is absolutely integrable, that is,

\int_T |x(t)|\,dt < \infty

2. x(t) has at most a finite number of discontinuities.

3. x(t) must have at most a finite number of maxima and minima.

These conditions are sufficient but not necessary. If a signal x(t) satisfies the Dirichlet conditions, then the corresponding Fourier series is convergent. Its sum is x(t), except at any point t_0 at which x(t) is discontinuous. At the points of discontinuity, the sum of the series is the average of the left- and right-hand limits of x(t) at t_0, that is,

x(t_0) = \frac{1}{2}\left[x(t_0^-) + x(t_0^+)\right]

Example 4.11  Consider the periodic signal in Example 4.7; x(t) is written as

x(t) = \frac{1}{2} + \frac{2}{\pi}\left(\cos t - \frac{1}{3}\cos 3t + \frac{1}{5}\cos 5t - \frac{1}{7}\cos 7t + \cdots\right)    (4.28)

Page 81: My Signals Notes

4.5. PROPERTIES OF FOURIER SERIES 81

We notice that at t = π/2, which is a point of discontinuity of x(t), the sum in Equation (4.28) has the value 1/2, which is equal to the arithmetic mean of the values one (x(π/2⁻)) and zero (x(π/2⁺)) of our signal x(t). Since x(t) satisfies the Dirichlet conditions, the sum in Equation (4.28) converges. Setting t = 0 in Equation (4.28), we obtain

x(0) = \frac{1}{2} + \frac{2}{\pi}\left(1 - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \cdots\right) = \frac{1}{2} + \frac{2}{\pi}\left(\frac{\pi}{4}\right) = 1  ■

since the sum of this infinite series must be π/4.
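Both observations are easy to reproduce numerically. In the sketch below (Python, illustrative), the truncated series of (4.28) evaluates to 1/2 at the discontinuity t = π/2 (every cos(nπ/2) with odd n vanishes) and approaches 1 at the point of continuity t = 0.

```python
import math

def partial_sum(t, N):
    """Truncated series (4.28): 1/2 + (2/pi)(cos t - cos 3t / 3 + cos 5t / 5 - ...)."""
    s = 0.5
    sign = 1.0
    for n in range(1, N + 1, 2):                 # odd harmonics, alternating sign
        s += (2 / math.pi) * sign * math.cos(n * t) / n
        sign = -sign
    return s

at_jump = partial_sum(math.pi / 2, 20001)        # midpoint of the jump: 1/2
at_zero = partial_sum(0.0, 20001)                # converges toward x(0) = 1
```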

4.5 Properties of Fourier Series

In this section several properties of the Fourier series are stated without proofs. The student can refer to many references for the proofs of these properties. However, proofs of the corresponding Fourier transform properties will be provided.

4.5.1 Linearity

Suppose that x(t) and y(t) are periodic, both having the same period. Let their Fourier series expansions be given by

x(t) = \sum_{n=-\infty}^{\infty} \beta_n e^{jn\omega_0 t}, \qquad y(t) = \sum_{n=-\infty}^{\infty} \gamma_n e^{jn\omega_0 t}

and let

z(t) = k_1 x(t) + k_2 y(t)

where k_1 and k_2 are arbitrary constants. If

x(t) \;\overset{FS}{\longleftrightarrow}\; \beta_n, \qquad y(t) \;\overset{FS}{\longleftrightarrow}\; \gamma_n

then the Fourier coefficients of z(t) are

k_1 x(t) + k_2 y(t) \;\overset{FS}{\longleftrightarrow}\; k_1\beta_n + k_2\gamma_n

4.5.2 Time Shifting

If x(t) has the Fourier series coefficients c_n,

x(t) \;\overset{FS}{\longleftrightarrow}\; c_n

then the signal x(t − τ) has coefficients d_n,

x(t-\tau) \;\overset{FS}{\longleftrightarrow}\; d_n

Page 82: My Signals Notes

82 CHAPTER 4. THE FOURIER SERIES

where

d_n = \frac{1}{T}\int_T x(t-\tau)\, e^{-jn\omega_0 t}\,dt = e^{-jn\omega_0\tau}\,\frac{1}{T}\int_T x(\sigma)\, e^{-jn\omega_0\sigma}\,d\sigma = c_n e^{-jn\omega_0\tau}

(using the substitution σ = t − τ). Therefore,

x(t-\tau) \;\overset{FS}{\longleftrightarrow}\; c_n e^{-jn\omega_0\tau}
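The property can be verified numerically for a simple test signal. The sketch below (Python, illustrative) computes c_1 of cos t and of cos(t − τ) by numerical integration and checks that d_1 = c_1 e^{−jω₀τ}.

```python
import cmath
import math

T = 2 * math.pi
w0 = 1.0                                         # 2*pi/T
tau = 0.7                                        # arbitrary shift

def coeff(x, n, num_steps=4096):
    """c_n of a T-periodic signal via a midpoint Riemann sum of (4.9)."""
    dt = T / num_steps
    total = 0j
    for k in range(num_steps):
        t = (k + 0.5) * dt
        total += x(t) * cmath.exp(-1j * n * w0 * t) * dt
    return total / T

c1 = coeff(lambda t: math.cos(t), 1)             # exact value: 1/2
d1 = coeff(lambda t: math.cos(t - tau), 1)       # coefficient of the shifted signal
predicted = c1 * cmath.exp(-1j * 1 * w0 * tau)   # time-shift property
```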

4.5.3 Frequency Shifting

Let z(t) = e^{jm\omega_0 t} x(t), where m is an integer. If x(t) has the Fourier series coefficients c_n,

x(t) \;\overset{FS}{\longleftrightarrow}\; c_n

then the signal z(t) has coefficients d_n,

e^{jm\omega_0 t} x(t) \;\overset{FS}{\longleftrightarrow}\; d_n

where

d_n = \frac{1}{T}\int_T e^{jm\omega_0 t}\, x(t)\, e^{-jn\omega_0 t}\,dt = \frac{1}{T}\int_T x(t)\, e^{-j(n-m)\omega_0 t}\,dt = c_{n-m}

Notice the duality between the time-shifting and frequency-shifting properties: shifting in one domain corresponds to multiplication by a complex exponential in the other domain.

4.5.4 Time Reflection

If x(t) has the Fourier series coefficients c_n,

x(t) \;\overset{FS}{\longleftrightarrow}\; c_n

then the signal x(−t) has coefficients c_{−n},

x(-t) \;\overset{FS}{\longleftrightarrow}\; c_{-n}

4.5.5 Time Scaling

Let z(t) = x(at), with a > 0, and let the fundamental period of x(t) be T (Figure 4.15). The first thing to realize is that if x(t) is periodic with period T, then z(t) is also periodic, with fundamental period T/a and fundamental frequency aω_0. The Fourier series coefficients will be

d_n = \frac{a}{T}\int_{T/a} z(t)\, e^{-jn(a\omega_0)t}\,dt = \frac{a}{T}\int_{T/a} x(at)\, e^{-jn(a\omega_0)t}\,dt

Page 83: My Signals Notes

4.5. PROPERTIES OF FOURIER SERIES 83

Figure 4.15: A signal x(t) and a time scaled version z(t) of that signal

We can make the change of variable λ = at. Therefore,

d_n = \frac{a}{T}\cdot\frac{1}{a}\int_T x(\lambda)\, e^{-jn(a\omega_0)(\lambda/a)}\,d\lambda = \frac{1}{T}\int_T x(\lambda)\, e^{-jn\omega_0\lambda}\,d\lambda = c_n

We notice that the Fourier series coefficients remain the same. Describing z(t) over its fundamental period T/a is the same as describing x(t) over its fundamental period T. The only difference is that the two have different fundamental frequencies. The representations are

x(t) = \sum_{n=-\infty}^{\infty} c_n e^{jn\omega_0 t} \quad \text{and} \quad x(at) = \sum_{n=-\infty}^{\infty} c_n e^{jn(a\omega_0)t}

The scaling operation simply changes the harmonic spacing from ω_0 to aω_0, keeping the coefficients identical.

4.5.6 Time Differentiation

The Fourier series representation of x(t) is

x(t) = \sum_{n=-\infty}^{\infty} c_n e^{jn\omega_0 t}    (4.29)

Differentiating both sides of this equation gives

\frac{d}{dt}x(t) = \sum_{n=-\infty}^{\infty} jn\omega_0\, c_n e^{jn\omega_0 t}

Thus

\frac{d}{dt}x(t) \;\overset{FS}{\longleftrightarrow}\; jn\omega_0 c_n

This operation accentuates the high-frequency components of the signal. Note that differentiation forces the average value (dc component) of the differentiated signal to be zero; hence the Fourier series coefficient for n = 0 is zero. So differentiation of a time function corresponds to multiplication of its Fourier series coefficients by an imaginary number whose value is a linear function of the harmonic number.

Page 84: My Signals Notes

84 CHAPTER 4. THE FOURIER SERIES

4.5.7 Time Integration

If a periodic signal contains a nonzero average value (c_0 ≠ 0), then the integration of this signal produces a component that increases linearly with time and, therefore, the resultant signal is non-periodic. However, if c_0 = 0, then the integrated signal is periodic, as shown in Figure 4.16. Integrating both sides of Equation (4.29),

\int_{-\infty}^{t} x(\lambda)\,d\lambda = \int_{-\infty}^{t} \sum_{n=-\infty}^{\infty} c_n e^{jn\omega_0\lambda}\,d\lambda = \sum_{n=-\infty}^{\infty} c_n \int_{-\infty}^{t} e^{jn\omega_0\lambda}\,d\lambda = \sum_{n=-\infty}^{\infty} \frac{c_n}{jn\omega_0}\, e^{jn\omega_0 t}

Therefore,

\int_{-\infty}^{t} x(\lambda)\,d\lambda \;\overset{FS}{\longleftrightarrow}\; \frac{c_n}{jn\omega_0}, \qquad n \neq 0

Note that integration attenuates the magnitude of the high-frequency components of the signal. Integration may be viewed as an averaging operation and thus tends to smooth signals in time; it is sometimes called a smoothing operation.

Figure 4.16: The effect of a nonzero average value on the integral of a periodicsignal x(t).

4.5.8 Multiplication

If x(t) and y(t) are periodic signals with the same period, their product is z(t) = x(t)y(t). Let

x(t) \;\overset{FS}{\longleftrightarrow}\; \alpha_n, \qquad y(t) \;\overset{FS}{\longleftrightarrow}\; \beta_n, \qquad z(t) \;\overset{FS}{\longleftrightarrow}\; \gamma_n    (4.30)

Then

\gamma_n = \frac{1}{T}\int_T z(t)\, e^{-jn\omega_0 t}\,dt = \frac{1}{T}\int_T x(t)y(t)\, e^{-jn\omega_0 t}\,dt    (4.31)

Page 85: My Signals Notes

4.5. PROPERTIES OF FOURIER SERIES 85

Then, using

y(t) = \sum_{n=-\infty}^{\infty} \beta_n e^{jn\omega_0 t} = \sum_{m=-\infty}^{\infty} \beta_m e^{jm\omega_0 t}

in (4.31), we have

\gamma_n = \frac{1}{T}\int_T x(t)\left(\sum_{m=-\infty}^{\infty} \beta_m e^{jm\omega_0 t}\right) e^{-jn\omega_0 t}\,dt

Reversing the order of integration and summation,

\gamma_n = \sum_{m=-\infty}^{\infty} \beta_m \underbrace{\frac{1}{T}\int_T x(t)\, e^{-j(n-m)\omega_0 t}\,dt}_{=\,\alpha_{n-m}}

Then

\gamma_n = \sum_{m=-\infty}^{\infty} \beta_m \alpha_{n-m}

Therefore,

x(t)y(t) \;\overset{FS}{\longleftrightarrow}\; \sum_{m=-\infty}^{\infty} \beta_m \alpha_{n-m} = \alpha_n * \beta_n

This result is a convolution sum of the two sequences α_n and β_n. So multiplying CT periodic signals with the same period corresponds to convolving their Fourier series coefficients.
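The coefficient convolution can be tried on a case where everything is known in closed form: for x(t) = y(t) = cos ω₀t, the coefficient sequence is {1/2 at n = ±1}, and convolving it with itself must give the coefficients of cos² ω₀t = 1/2 + (1/2)cos 2ω₀t. A short Python sketch (the function name is ours):

```python
def convolve_coeffs(alpha, beta):
    """gamma_n = sum_m beta_m * alpha_{n-m} for sparse coefficient dicts {n: c_n}."""
    gamma = {}
    for m, bm in beta.items():
        for k, ak in alpha.items():
            gamma[m + k] = gamma.get(m + k, 0.0) + bm * ak
    return gamma

alpha = {1: 0.5, -1: 0.5}                # coefficients of cos(w0*t)
gamma = convolve_coeffs(alpha, alpha)    # coefficients of cos^2(w0*t)
# Expected: gamma_0 = 1/2 and gamma_{+-2} = 1/4, all other entries absent
```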

4.5.9 Convolution

For periodic signals with the same period, a special form of convolution, known as periodic or circular convolution, is defined by the integral

z(t) = x(t) \circledast y(t) = \frac{1}{T}\int_T x(\tau)\,y(t-\tau)\,d\tau

where the integral is taken over one period. It can be shown that

x(t) \circledast y(t) \;\overset{FS}{\longleftrightarrow}\; \alpha_n\beta_n

where α_n and β_n are the Fourier series coefficients of x(t) and y(t), respectively. So convolution of two CT periodic signals with the same period corresponds to multiplication of their Fourier series coefficients.

Example 4.12  Evaluate the periodic convolution of the sinusoidal signal

y(t) = 2\cos(2\pi t) + \sin(4\pi t)

with the periodic square wave x(t) depicted in Figure 4.9 of Example 4.6, with T_1 = \frac{1}{4} and T = 1.

Page 86: My Signals Notes

86 CHAPTER 4. THE FOURIER SERIES

Solution:  Both x(t) and y(t) have fundamental period T = 1. The Fourier series coefficients for x(t) may be obtained from (4.10) as

\alpha_n = \frac{2T_1}{T}\,\mathrm{sinc}\!\left(\frac{2nT_1}{T}\right) = \frac{1}{2}\,\mathrm{sinc}\!\left(\frac{n}{2}\right)

The Fourier series coefficients of y(t) are

\beta_n = \begin{cases} 1 & n = \pm 1 \\ \frac{1}{2j} & n = 2 \\ -\frac{1}{2j} & n = -2 \\ 0 & \text{otherwise} \end{cases}

Let z(t) = x(t) \circledast y(t). The convolution property indicates x(t) \circledast y(t) \;\overset{FS}{\longleftrightarrow}\; \alpha_n\beta_n. Hence the Fourier coefficients for z(t) are

\alpha_n\beta_n = \begin{cases} \frac{1}{\pi} & n = \pm 1 \\ 0 & \text{otherwise} \end{cases}

which implies that

z(t) = \frac{2}{\pi}\cos(2\pi t)  ■

4.5.10 Effects of Symmetry

The Fourier series for the periodic signal in Figure 4.9 (Example 4.6) consists of cosine terms only, and the series for the signal x(t) in Figure 4.13 (Example 4.10) consists of sine terms only. This is not coincidental; we can show that the Fourier series of any even periodic function x(t) consists of cosine terms only, and the series for any odd periodic function x(t) consists of sine terms only. Knowing the type of symmetry a signal possesses avoids unnecessary work in determining the Fourier coefficients, thus simplifying their computation. The important types of symmetry are

1. even symmetry, x(t) = x(−t),

2. odd symmetry, x(t) = −x(−t),

3. half-wave symmetry, x(t − T/2) = −x(t).

By knowing the type of symmetry and the signal over half a period, the Fourier coefficients can be computed by integrating over only half the period rather than a complete period. To prove this result, recall that

a_0 = \frac{1}{T}\int_T x(t)\,dt    (4.32)

a_n = \frac{2}{T}\int_T x(t)\cos n\omega_0 t\,dt    (4.33)

b_n = \frac{2}{T}\int_T x(t)\sin n\omega_0 t\,dt    (4.34)

Page 87: My Signals Notes

4.5. PROPERTIES OF FOURIER SERIES 87

Recall also from section 2.1.4 that \cos n\omega_0 t is an even function and \sin n\omega_0 t is an odd function. If x(t) is an even function, then x(t)\cos n\omega_0 t is also an even function and x(t)\sin n\omega_0 t is an odd function. The Fourier series of an even signal x(t) having period T is

x(t) = a_0 + \sum_{n=1}^{\infty} a_n\cos n\omega_0 t

with coefficients

a_0 = \frac{1}{T}\int_T x(t)\,dt

a_n = \frac{2}{T}\int_T \underbrace{x(t)\cos n\omega_0 t}_{\text{even}}\,dt = \frac{4}{T}\int_0^{T/2} x(t)\cos n\omega_0 t\,dt    (4.35)

since

\int_{-T/2}^{T/2} \underbrace{x(t)}_{\text{even}}\,dt = 2\int_0^{T/2} x(t)\,dt

and

b_n = \frac{2}{T}\int_T \underbrace{x(t)\sin n\omega_0 t}_{\text{odd}}\,dt = 0

since

\int_{-T/2}^{T/2} \underbrace{x(t)}_{\text{odd}}\,dt = 0

Similarly, if x(t) is an odd function, then x(t)\cos n\omega_0 t is an odd function and x(t)\sin n\omega_0 t is an even function. Therefore

a_0 = a_n = 0

b_n = \frac{4}{T}\int_0^{T/2} x(t)\sin n\omega_0 t\,dt    (4.36)

Observe that, because of symmetry, the integration required to compute the coefficients need be performed over only half the period. If the two halves of one period of a periodic signal are of identical shape except that one is the negative of the other, the periodic signal is said to have half-wave symmetry. The signal in Figure 4.13 is a clear example of such symmetry. If a periodic signal x(t) with period T satisfies the half-wave symmetry condition, then

x\!\left(t - \frac{T}{2}\right) = -x(t)

In this case all even-numbered harmonics vanish (note also a_0 = 0), and the odd-numbered harmonic coefficients are given by

a_n = \frac{4}{T}\int_0^{T/2} x(t)\cos n\omega_0 t\,dt

b_n = \frac{4}{T}\int_0^{T/2} x(t)\sin n\omega_0 t\,dt

The consequences of these symmetries are summarized in Table 4.1.

Page 88: My Signals Notes

88 CHAPTER 4. THE FOURIER SERIES

Symmetry   | a_0      | a_n                        | b_n                        | Remarks
Even       | a_0 ≠ 0  | a_n ≠ 0                    | b_n = 0                    | Integrate over T/2 only and multiply the coefficients by 2
Odd        | a_0 = 0  | a_n = 0                    | b_n ≠ 0                    | Integrate over T/2 only and multiply the coefficients by 2
Half-wave  | a_0 = 0  | a_{2n} = 0, a_{2n+1} ≠ 0   | b_{2n} = 0, b_{2n+1} ≠ 0   | Integrate over T/2 only and multiply the coefficients by 2

Table 4.1: Effects of Symmetry

Effect of Symmetry in Exponential Fourier Series

When x(t) has even symmetry, b_n = 0, and from Equation (4.27), c_n = \frac{a_n}{2}, which is real (positive or negative). Hence ∠c_n can only be 0 or ±π. Moreover, we may compute c_n = \frac{a_n}{2} using Equation (4.35). Similarly, when x(t) has odd symmetry, a_n = 0, and c_n = -j\frac{b_n}{2} is imaginary (positive or negative). Hence, ∠c_n can only be 0 or ±π/2. We can compute c_n = -j\frac{b_n}{2} using Equation (4.36).

4.5.11 Parseval’s Theorem

In Chapter 1, it was shown that the average power of a periodic signal x(t) is given by

P = \frac{1}{T}\int_T |x(t)|^2\,dt    (4.37)

For example, the complex exponential signal x(t) = c_n e^{jn\omega_0 t} has an average power of

P = \frac{1}{T}\int_T c_n e^{jn\omega_0 t}\, c_n^* e^{-jn\omega_0 t}\,dt = \frac{1}{T}\int_T |c_n|^2\,dt = |c_n|^2

A question that may well be asked at this point: if x(t) = c_n e^{jn\omega_0 t} has an average power |c_n|^2, will the signal x(t) = \sum_{n=-\infty}^{\infty} c_n e^{jn\omega_0 t} have an average power \sum_{n=-\infty}^{\infty} |c_n|^2? One can further ask: what is the relationship between the average power of the signal x(t) and its harmonics?

Using the exponential Fourier series and substituting in Equation (4.37),

P = \frac{1}{T}\int_T \left(\sum_{m=-\infty}^{\infty} c_m e^{jm\omega_0 t}\right)\left(\sum_{n=-\infty}^{\infty} c_n e^{jn\omega_0 t}\right)^{\!*} dt

Reversing the order of integration and summation,

P = \sum_{m=-\infty}^{\infty} c_m \sum_{n=-\infty}^{\infty} c_n^*\,\frac{1}{T}\underbrace{\int_T e^{j(m-n)\omega_0 t}\,dt}_{=\,\begin{cases} 0 & m \neq n \\ T & m = n \end{cases}}    (4.38)

The integral in (4.38) is zero except for the special case m = n. For this specific condition the double summation reduces to a single summation, and we have a new relationship for the average power in terms of the magnitudes of the coefficients:

P = \sum_{n=-\infty}^{\infty} c_n c_n^* = \sum_{n=-\infty}^{\infty} |c_n|^2    (4.39)

Combining Equations (4.37) and (4.39), we obtain a relationship known as Parseval's theorem for periodic signals:

P = \frac{1}{T}\int_T |x(t)|^2\,dt = \sum_{n=-\infty}^{\infty} |c_n|^2    (4.40)

The result indicates that the total average power of a periodic signal x(t) is equal to the sum of the powers of its Fourier coefficients. Therefore, if we know the function x(t), we can find the average power; alternatively, if we know the Fourier coefficients, we can find the average power. Physically, this simply means that writing a signal as a Fourier series does not change its power. A graph of |c_n|^2 versus ω can be plotted; such a graph is called the power spectrum. We can apply the same argument to the trigonometric Fourier series. It can

be shown that

P = a_0^2 + \frac{1}{2}\sum_{n=1}^{\infty} A_n^2
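Parseval's theorem can be checked on the square wave of Example 4.7 (x = 1 on |t| < π/2 per period 2π): the time-domain power is 1/2, and with c_0 = 1/2 and |c_n| = 1/(π|n|) for odd n the spectral sum converges to the same value. A minimal Python sketch:

```python
import math

# Time-domain power: x^2 = x, so P = (1/T) * integral = pi / (2*pi) = 1/2
P_time = 0.5

# Spectral power: |c_0|^2 plus 2*|c_n|^2 for positive odd n (factor 2 covers -n)
P_spec = 0.25                                    # |c_0|^2 = (1/2)^2
for n in range(1, 200001, 2):
    P_spec += 2.0 / (math.pi * n) ** 2           # 2 / (pi^2 n^2) per odd n
```

The partial sums approach 1/2 slowly (the tail decays like 1/N), which is typical for a discontinuous signal.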

4.6 System Response to Periodic Inputs

Consider a linear time-invariant CT system with impulse response h(t). From Chapter 3, we know that the response y(t) resulting from an input x(t) is given by

y(t) = \int_{-\infty}^{\infty} h(\tau)\,x(t-\tau)\,d\tau

For complex exponential inputs of the form x(t) = e^{j\omega t}, the output of the system is

y(t) = \int_{-\infty}^{\infty} h(\tau)\, e^{j\omega(t-\tau)}\,d\tau = e^{j\omega t}\int_{-\infty}^{\infty} h(\tau)\, e^{-j\omega\tau}\,d\tau

By defining

H(\omega) = \int_{-\infty}^{\infty} h(\tau)\, e^{-j\omega\tau}\,d\tau


we can write

y(t) = H(\omega)\, e^{j\omega t}    (4.41)

H(ω) is called the system frequency response and is a constant for fixed ω. Equation (4.41) tells us that the system response to a complex exponential is also a complex exponential, with the same frequency ω, scaled by the quantity H(ω). The magnitude |H(ω)| is called the magnitude function of the system, and ∠H(ω) is known as the phase function of the system. In summary, the response y(t) of a CT LTI system to an input sinusoid of period T is also a sinusoid of period T. Knowing H(ω), we can determine whether the system changes the amplitude of the input and how much phase shift the system adds to the sinusoidal input. To determine the response y(t) of an LTI system to a periodic input x(t), we saw earlier that

e^{j\omega t}\;(\text{input}) \;\longrightarrow\; H(\omega)\, e^{j\omega t}\;(\text{output})

From the linearity property,

\sum_{n=-\infty}^{\infty} c_n e^{jn\omega_0 t}\;(\text{input}) \;\longrightarrow\; \sum_{n=-\infty}^{\infty} c_n H(n\omega_0)\, e^{jn\omega_0 t}\;(\text{output})

The response y(t) of an LTI system to a periodic input with period T is periodic with the same period.

Example 4.13  Find the output voltage y(t) of the system shown in Figure 4.17 if the input voltage is the periodic signal x(t) = 4\cos t - 2\cos 2t; assume R = 1 Ω and L = 1 H.

Figure 4.17: System for Example 4.13

Solution:  Applying Kirchhoff's voltage law to the circuit yields

\frac{d}{dt}y(t) + \frac{R}{L}y(t) = \frac{R}{L}x(t)

If we set x(t) = e^{j\omega t} in this equation, the output voltage is y(t) = H(\omega)\, e^{j\omega t}. Using the system differential equation, we obtain

j\omega H(\omega)\, e^{j\omega t} + \frac{R}{L}H(\omega)\, e^{j\omega t} = \frac{R}{L}\, e^{j\omega t}

Solving for H(ω) yields

H(\omega) = \frac{R/L}{R/L + j\omega}


At any frequency ω = nω_0, the frequency response is

H(n\omega_0) = \frac{R/L}{R/L + jn\omega_0}

For our case, \omega_0 = 1 and \frac{R}{L} = 1; hence

H(n) = \frac{1}{1 + jn}

Using Euler's identity, the input signal is expressed as

x(t) = 2e^{jt} + 2e^{-jt} - e^{j2t} - e^{-j2t}

The output signal is

y(t) = \sum_{n=-\infty}^{\infty} c_n H(n)\, e^{jnt}

= c_{-2}H(-2)\, e^{-j2t} + c_{-1}H(-1)\, e^{-jt} + c_1 H(1)\, e^{jt} + c_2 H(2)\, e^{j2t}

= (-1)\frac{1}{1-j2}\, e^{-j2t} + (2)\frac{1}{1-j}\, e^{-jt} + (2)\frac{1}{1+j}\, e^{jt} + (-1)\frac{1}{1+j2}\, e^{j2t}

= 2\sqrt{2}\cos(t - 45^\circ) - \frac{2}{\sqrt{5}}\cos(2t - 63^\circ)  ■
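The whole computation of Example 4.13 can be mirrored in a few lines. The sketch below (Python, illustrative) scales each exponential component of the input by H(n) = 1/(1+jn) and compares the synthesized output with the closed form (using the exact phase atan 2 ≈ 63.4° for the second harmonic).

```python
import cmath
import math

def H(w):
    """Frequency response of the RL circuit with R/L = 1: H(w) = 1/(1 + j*w)."""
    return 1 / (1 + 1j * w)

# x(t) = 4 cos t - 2 cos 2t as exponentials: c_{+-1} = 2, c_{+-2} = -1
coeffs = {1: 2, -1: 2, 2: -1, -2: -1}

def y(t):
    """Output: every component c_n e^{jnt} is scaled by H(n)."""
    return sum(c * H(n) * cmath.exp(1j * n * t) for n, c in coeffs.items()).real

def y_closed(t):
    """Closed form from the example, with phases in radians."""
    return (2 * math.sqrt(2) * math.cos(t - math.pi / 4)
            - (2 / math.sqrt(5)) * math.cos(2 * t - math.atan(2)))
```

The two agree at every t, confirming that filtering a periodic input amounts to reweighting its Fourier coefficients by H(nω₀).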


Chapter 5

The Fourier Transform

In Chapter 4 we saw how to represent a periodic signal as a sum of complex exponentials or as a trigonometric series. The Fourier series is a powerful analysis tool, but unfortunately it is limited. It can describe any signal over a finite time, and any periodic signal over all time, as a linear combination of sinusoids. But it cannot describe a non-periodic signal for all time. In this chapter we extend this representation to non-periodic signals by developing the Fourier transform. We will see that the Fourier series is just a special case of the Fourier transform.

5.1 Development of the Fourier Transform

The difference between a periodic signal and a non-periodic signal is that a periodic signal repeats in a finite time T, called the fundamental period. It has been repeating with that fundamental period forever and will continue to do so forever. A non-periodic signal, on the other hand, does not have a period: it may repeat a pattern many times within some finite time, but not over all time. In making the transition from the Fourier series to the Fourier transform, our approach will be to consider a periodic signal, represent it in terms of complex exponentials, and then let the fundamental period approach infinity. If the fundamental period goes to infinity, the signal cannot repeat in a finite time and therefore is no longer periodic. Before proceeding any further, let us investigate the effect of changing T on the frequency spectrum of a periodic signal x(t) such as the one shown in Figure 5.1. We saw earlier that the Fourier

Figure 5.1: Rectangular pulse train x(t).


series coefficients are

c_n = (2T₁/T) sinc(2nT₁/T)

For fixed T₁, increasing T reduces the amplitude of each harmonic as well as the fundamental frequency and, hence, the spacing between harmonics. However, the relative shape of the spectrum does not change as T increases, except for the amplitude factor (Figure 5.2).

Figure 5.2: Line spectra for x(t). (a) Magnitude spectrum for T = 10T₁, (b) magnitude spectrum for T = 20T₁, (c) magnitude spectrum for T = 40T₁.

We conclude that as the period increases, the amplitude becomes smaller and the spectrum becomes denser. In the limit as T → ∞, ω₀ → 0 and c_n → 0. This is a fascinating behaviour: the spectrum is so dense that the spectral components are spaced at zero (infinitesimal) intervals, and at the same time the amplitude of each component is zero (infinitesimal)! We have nothing of everything, yet we have something.

Suppose we are given a non-periodic signal x(t) such as the one shown in Figure 5.3. To represent this signal as a sum of exponential functions over all time, we construct a new periodic signal x_T(t) with a large enough period T so that x(t) = x_T(t) for t ∈ (−T/2, T/2), as illustrated in Figure 5.3.

Figure 5.3: Construction of a periodic signal by periodic extension of x(t).

The periodic signal x_T(t) can be represented by an exponential Fourier series. The non-periodic signal x(t) can be obtained back by letting T → ∞, that is,

lim_{T→∞} x_T(t) = x(t)

Taking the limit as the period approaches infinity, the pulses in the periodic signal repeat after an infinite interval; in other words, we have moved all the pulses to infinity except the desired pulse located at the origin. The new function x_T(t) is a periodic signal and can be represented by an exponential Fourier series given by

x_T(t) = ∑_{n=−∞}^{∞} c_n e^{jnω₀t}    (5.1)

where

c_n = (1/T) ∫_{−T/2}^{T/2} x_T(t) e^{−jnω₀t} dt    (5.2)

and

ω₀ = 2π/T

Before taking any limiting operation, we need to make some adjustments so that the magnitude components of the c_n do not all go to zero as the period is increased. We define

X(nω₀) ≜ T c_n    (5.3)

When we use this definition, (5.1) and (5.2) become

x_T(t) = ∑_{n=−∞}^{∞} (1/T) X(nω₀) e^{jnω₀t}    (5.4)

X(nω₀) = ∫_{−T/2}^{T/2} x_T(t) e^{−jnω₀t} dt    (5.5)

If we were to multiply c_n by T before plotting it, the amplitude in Figure 5.2 would not go to zero as T approached infinity but would stay where it is. As T → ∞, ω₀ → 0, i.e., the adjacent lines in the line spectrum are spaced at zero (infinitesimal) intervals. Hence, we shall replace ω₀, the line spacing of the spectrum of x_T(t), by the more appropriate notation Δω, that is,

Δω = 2π/T

Using this relation for T in (5.4), we get

x_T(t) = ∑_{n=−∞}^{∞} X(nΔω) e^{jnΔωt} (Δω/2π)    (5.6)

In the limit as T → ∞, the discrete lines in the spectrum of x_T merge and the frequency spectrum becomes continuous, i.e., Δω → 0; furthermore, x_T(t) → x(t). Therefore,

x(t) = lim_{T→∞} x_T(t) = lim_{Δω→0} (1/2π) ∑_{n=−∞}^{∞} X(nΔω) e^{jnΔωt} Δω    (5.7)


In the limit as Δω → 0, the sum in (5.7) becomes an integral, and we obtain

x(t) = (1/2π) ∫_{−∞}^{∞} X(ω) e^{jωt} dω    (5.8)

In a similar manner, (5.5) becomes

X(ω) = ∫_{−∞}^{∞} x(t) e^{−jωt} dt    (5.9)

Equations (5.8) and (5.9) are commonly referred to as the Fourier transform pair. Equation (5.9) is known as the direct Fourier transform of x(t) (more commonly just the Fourier transform), and Equation (5.8) is known as the inverse Fourier transform. Symbolically, we use the following operator notation:

X(ω) = F[x(t)] = ∫_{−∞}^{∞} x(t) e^{−jωt} dt

x(t) = F⁻¹[X(ω)] = (1/2π) ∫_{−∞}^{∞} X(ω) e^{jωt} dω

It is also useful to note that the complex exponential Fourier series coefficients can be evaluated in terms of the Fourier transform by using (5.3) to give

c_n = (1/T) X(ω)|_{ω=nω₀}    (5.10)

This means that the Fourier coefficients c_n are 1/T times the samples of X(ω) uniformly spaced at intervals of ω₀.

Example 5.1. Find the Fourier transform of x(t) = e^{−at}u(t).

Solution. By definition [equation (5.9)],

X(ω) = ∫_{−∞}^{∞} e^{−at}u(t) e^{−jωt} dt = ∫_{0}^{∞} e^{−(a+jω)t} dt = −(1/(a + jω)) e^{−(a+jω)t} |₀^∞

But |e^{−jωt}| = 1. Therefore, as t → ∞, e^{−(a+jω)t} = e^{−at}e^{−jωt} → 0 if a > 0. Therefore

X(ω) = 1/(a + jω),   a > 0

Since X(ω) is complex, expressing a + jω in polar form as √(a² + ω²) e^{j tan⁻¹(ω/a)}, we obtain

X(ω) = (1/√(a² + ω²)) e^{−j tan⁻¹(ω/a)}

Therefore

|X(ω)| = 1/√(a² + ω²)   and   ∠X(ω) = −tan⁻¹(ω/a)

The amplitude spectrum |X(ω)| and the phase spectrum ∠X(ω) are depicted in Figure 5.4. Observe that |X(ω)| is an even function of ω and ∠X(ω) is an odd function of ω, as expected.  ∎
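A numerical sketch of Example 5.1 (my own check, not from the notes): integrate x(t) = e^{−at}u(t) against e^{−jωt} on a fine grid and compare with the closed form 1/(a + jω). The small `trapezoid` helper is my own.

```python
import numpy as np

def trapezoid(y, x):
    # simple composite trapezoidal rule
    dx = np.diff(x)
    return np.sum(dx * (y[1:] + y[:-1]) / 2)

a = 1.0
t = np.linspace(0.0, 40.0, 400001)   # u(t) restricts the integral to t >= 0
x = np.exp(-a * t)

for w in (0.0, 0.5, 2.0):
    X_num = trapezoid(x * np.exp(-1j * w * t), t)
    X_exact = 1.0 / (a + 1j * w)
    assert abs(X_num - X_exact) < 1e-5
    # polar-form checks: |X| = 1/sqrt(a^2 + w^2), angle = -atan(w/a)
    assert abs(abs(X_exact) - 1 / np.sqrt(a**2 + w**2)) < 1e-12
    assert abs(np.angle(X_exact) + np.arctan(w / a)) < 1e-12
```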


Figure 5.4: Fourier spectra for x(t) = e^{−at}u(t), a = 1. (a) Amplitude spectrum |X(ω)|, (b) phase spectrum ∠X(ω).

Existence of the Fourier Transform

In Example 5.1 we observed that when a < 0, the Fourier transform of e^{−at}u(t) does not exist. Therefore, not all signals have a Fourier transform. The existence of the Fourier transform is assured for any x(t) satisfying the Dirichlet conditions mentioned in Section 4.4.1. The first of these conditions is

∫_{−∞}^{∞} |x(t)| dt < ∞    (5.11)

Because |e^{−jωt}| = 1, from equation (5.9) we obtain

|X(ω)| ≤ ∫_{−∞}^{∞} |x(t)| dt < ∞

This inequality shows that the existence of the Fourier transform is assured if condition (5.11) is satisfied. Although this condition is sufficient, it is not necessary for the existence of the Fourier transform of a signal. We shall see later that many signals violate condition (5.11) but do have a Fourier transform.

5.2 Examples of The Fourier Transform

In this section, we compute the transform of some useful time signals.

Example 5.2. Find the Fourier transform of the unit impulse δ(t).

Solution. Using the sifting property of the impulse,

F[δ(t)] = ∫_{−∞}^{∞} δ(t) e^{−jωt} dt = e^{−jωt}|_{t=0} = 1

or

δ(t) ⟷ 1

If the impulse is time shifted, we have

F[δ(t − τ)] = ∫_{−∞}^{∞} δ(t − τ) e^{−jωt} dt = e^{−jωt}|_{t=τ} = e^{−jωτ}  ∎

Example 5.3. Find the Fourier transform of the rectangular pulse x(t) = rect(t/τ) (Figure 5.5a).

Solution. The Fourier transform is

X(ω) = ∫_{−∞}^{∞} rect(t/τ) e^{−jωt} dt = ∫_{−τ/2}^{τ/2} e^{−jωt} dt
     = −(1/jω)(e^{−jωτ/2} − e^{jωτ/2})
     = 2 sin(ωτ/2)/ω
     = τ · sin(ωτ/2)/(ωτ/2) = τ sinc(ωτ/2π)

Alternatively,

X(ω) = ∫_{−∞}^{∞} rect(t/τ) e^{−jωt} dt = ∫_{−τ/2}^{τ/2} [cos ωt − j sin ωt] dt

Since cos ωt is even and sin ωt is odd over the symmetric interval of integration,

X(ω) = 2 ∫_{0}^{τ/2} cos ωt dt = (2/ω) sin(ωτ/2) = τ · sin(ωτ/2)/(ωτ/2) = τ sinc(ωτ/2π)

Therefore,

rect(t/τ) ⟷ τ sinc(ωτ/2π)    (5.12)

Recall that sinc(x) = 0 when x = ±n. Hence, sinc(ωτ/2π) = 0 when ωτ/2π = ±n, that is, when ω = ±2πn/τ (n = 1, 2, 3, …), as depicted in Figure 5.5b. The Fourier transform X(ω) shown in Figure 5.5b is the amplitude spectrum, since it exhibits positive and negative values. Since X(ω) is a real-valued function, its phase is zero for all ω. If the magnitude spectrum is required, the negative amplitudes can be considered as positive amplitudes with a phase of −π or π. The magnitude spectrum |X(ω)| and the phase spectrum ∠X(ω) are shown in Figure 5.5c and d respectively. Note that the phase spectrum, which is required to be an odd function of ω since x(t) is real, may be drawn in several other ways, because a negative sign can be accounted for by a phase of ±nπ where n is any odd integer.
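A numerical check of (5.12) (my own sketch): note that `np.sinc` is the normalized sinc, sin(πx)/(πx), which matches the sinc convention of these notes; the `trapezoid` helper is mine.

```python
import numpy as np

def trapezoid(y, x):
    dx = np.diff(x)
    return np.sum(dx * (y[1:] + y[:-1]) / 2)

tau = 2.0
t = np.linspace(-tau / 2, tau / 2, 400001)   # support of rect(t/tau)

for w in (0.0, 1.0, 2 * np.pi / tau):        # the last w is a zero crossing
    X_num = trapezoid(np.exp(-1j * w * t), t)
    X_exact = tau * np.sinc(w * tau / (2 * np.pi))
    assert abs(X_num - X_exact) < 1e-8
```

At ω = 2π/τ the numerical integral vanishes, confirming the zero crossings at ω = ±2πn/τ.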


Figure 5.5: A rect function x(t), its Fourier spectrum X(ω), magnitude spectrum |X(ω)|, and phase spectrum ∠X(ω).

Example 5.4. Find the Fourier transform of a dc or constant signal x(t) = 1, −∞ < t < ∞.

Solution. Clearly this is an example of a problematic case for the Fourier transform. Formally, the transform is

X(ω) = ∫_{−∞}^{∞} (1) e^{−jωt} dt

The integral does not converge. Therefore, strictly speaking, the Fourier transform does not exist. But we can avoid this problem by means of the following trick:

X(ω) = lim_{τ→∞} ∫_{−τ/2}^{τ/2} (1) e^{−jωt} dt = lim_{τ→∞} −(1/jω)(e^{−jωτ/2} − e^{jωτ/2})

This can be simplified to

X(ω) = lim_{τ→∞} (2/ω) sin(ωτ/2) = lim_{τ→∞} τ sinc(ωτ/2π)    (5.13)

In the limit as τ → ∞, X(ω) in Equation (5.13) approaches the impulse function δ(ω) with strength 2π, since the area under the sinc function is 2π from Figure 5.6 (in our case A = τ and B = τ/2). Therefore,

1 ⟷ 2πδ(ω)

The frequency function 2πδ(ω) is called the generalized Fourier transform of the signal x(t). This result shows that the spectrum of a constant signal x(t) = 1 is an impulse 2πδ(ω). This situation can be thought of as a dc signal which has a single frequency ω = 0 (dc).  ∎


Figure 5.6: Area under a sinc function: ∫_{−∞}^{∞} (A sin Bt)/(Bt) dt = πA/B.

It is interesting to see the result of Example 5.4 using the inverse Fourier transform, as shown in the next example.

Example 5.5. Find the inverse Fourier transform of δ(ω).

Solution. On the basis of Equation (5.8) and the sifting property of the impulse function,

F⁻¹[δ(ω)] = (1/2π) ∫_{−∞}^{∞} δ(ω) e^{jωt} dω = 1/2π

Therefore,

(1/2π) ⟷ δ(ω)   or   1 ⟷ 2πδ(ω)  ∎

If an impulse at ω = 0 is the spectrum of a dc signal, what does an impulse at ω = ω₀ represent? Let us investigate this question in the next example.

Example 5.6. Find the inverse Fourier transform of δ(ω − ω₀).

Solution. Using the sifting property of the impulse function, we obtain

F⁻¹[δ(ω − ω₀)] = (1/2π) ∫_{−∞}^{∞} δ(ω − ω₀) e^{jωt} dω = (1/2π) e^{jω₀t}

Therefore,

(1/2π) e^{jω₀t} ⟷ δ(ω − ω₀)   or   e^{jω₀t} ⟷ 2πδ(ω − ω₀)

It follows that

e^{−jω₀t} ⟷ 2πδ(ω + ω₀)  ∎


5.3 Fourier Transform of Periodic Signals

Periodic signals are power signals, and we anticipate that their Fourier transforms contain impulses. In Chapter 4, we examined the spectrum of periodic signals by computing the Fourier series coefficients. We found that the spectrum consists of a set of lines at ±nω₀. Next we find the Fourier transform of periodic signals and show that their spectra consist of trains of impulses.

First consider the exponential signal x(t) = e^{jω₀t}. The Fourier transform of this signal (see Example 5.6) is

e^{jω₀t} ⟷ 2πδ(ω − ω₀)

A periodic signal x(t) of period T (thus ω₀ = 2π/T) has the Fourier series representation

x(t) = ∑_{n=−∞}^{∞} c_n e^{jnω₀t}

Taking the Fourier transform of both sides,

X(ω) = ∑_{n=−∞}^{∞} c_n F[e^{jnω₀t}] = ∑_{n=−∞}^{∞} 2πc_n δ(ω − nω₀)    (5.14)

Thus the Fourier transform of a periodic signal is simply an impulse train, with impulses located at ω = nω₀, each of which has a strength of 2πc_n, and all impulses separated from each other by ω₀.

A periodic function of considerable importance is the periodic sequence of unit impulse functions shown in Figure 5.7. For convenience, we write such a sequence with period T as δ_T(t) = ∑_{n=−∞}^{∞} δ(t − nT). Because this is a periodic function, we can express it in terms of a Fourier series by choosing ω₀ = 2π/T (see Example 4.5):

δ_T(t) = ∑_{n=−∞}^{∞} c_n e^{jnω₀t}

where c_n = 1/T, so that

δ_T(t) = (1/T) ∑_{n=−∞}^{∞} e^{jnω₀t}

Figure 5.7: A train of impulses


Using (5.14), the Fourier transform of δ_T(t) is

F[δ_T(t)] = (2π/T) ∑_{n=−∞}^{∞} δ(ω − nω₀) = ω₀ ∑_{n=−∞}^{∞} δ(ω − nω₀)    (5.15)

The Fourier transform of a periodic impulse train in the time domain is an impulse train that is periodic in the frequency domain. The frequency spectrum of δ_T(t) is shown in Figure 5.8.

Figure 5.8: The frequency spectrum of a train of impulses δ_T(t).

Example 5.7. Find the Fourier transform of the periodic rectangular waveform x(t) shown in Figure 5.9.

Figure 5.9: A periodic rectangular waveform x(t)

Solution. From (4.10),

c_n = (2T₁/T) sinc(2nT₁/T)

Substituting this into (5.14) yields the Fourier transform of the periodic rectangular pulses:

X(ω) = ∑_{n=−∞}^{∞} 2T₁ω₀ sinc(2nT₁/T) δ(ω − nω₀) = ∑_{n=−∞}^{∞} 2T₁ω₀ sinc(nω₀T₁/π) δ(ω − nω₀)

The frequency spectrum is sketched in Figure 5.10. The dashed curve indicates the weights of the impulse functions. Note that for the distribution of impulses in frequency, Figure 5.10 shows the particular case T = 8T₁.  ∎

Figure 5.10: Frequency spectrum of the periodic rectangular waveform x(t)

In the previous example it is interesting to note how the Fourier series coefficients of the periodic rectangular waveform x(t) are related to the Fourier transform of the truncated signal x_T(t) shown in Figure 5.9. From (5.10),

c_n = (1/T) X_T(ω)|_{ω=nω₀}

The Fourier transform of the truncated signal is X_T(ω) = 2T₁ sinc(ωT₁/π) (see Example 5.3); therefore,

c_n = (2T₁/T) sinc(ωT₁/π)|_{ω=nω₀}

In words, the Fourier series coefficients of a periodic signal can be obtained from samples of the Fourier transform of the truncated signal divided by the period T, provided the periodic signal and the truncated one are equal in one period. Note that in general the Fourier transform of a periodic function is not periodic.
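A numerical sketch of this sampling relation (my own check, with T = 8T₁ as in Figure 5.10): compare c_n obtained by integrating (5.2) over one period against samples of the truncated-pulse transform divided by T. The `trapezoid` helper is mine.

```python
import numpy as np

def trapezoid(y, x):
    dx = np.diff(x)
    return np.sum(dx * (y[1:] + y[:-1]) / 2)

T1, T = 1.0, 8.0
w0 = 2 * np.pi / T
t = np.linspace(-T / 2, T / 2, 800001)
x = (np.abs(t) <= T1).astype(float)          # one period of the pulse train

for n in range(-5, 6):
    cn_direct = trapezoid(x * np.exp(-1j * n * w0 * t), t) / T
    cn_sampled = (2 * T1 / T) * np.sinc(n * w0 * T1 / np.pi)
    assert abs(cn_direct - cn_sampled) < 1e-4
```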

5.4 Properties of the Fourier Transform

In this section we consider a number of properties of the Fourier transform. These properties allow some problems to be solved merely by inspection.


5.4.1 Linearity

The Fourier transform is a linear operation based on the properties of integration. Thus if

x₁(t) ⟷ X₁(ω)   and   x₂(t) ⟷ X₂(ω)

then

αx₁(t) + βx₂(t) ⟷ αX₁(ω) + βX₂(ω)

where α and β are arbitrary constants. The property is the direct result of the linear operation of integration.

Proof. Let z(t) = αx₁(t) + βx₂(t); the proof is trivial and follows as

Z(ω) = ∫_{−∞}^{∞} z(t) e^{−jωt} dt = ∫_{−∞}^{∞} [αx₁(t) + βx₂(t)] e^{−jωt} dt
     = α ∫_{−∞}^{∞} x₁(t) e^{−jωt} dt + β ∫_{−∞}^{∞} x₂(t) e^{−jωt} dt
     = αX₁(ω) + βX₂(ω)  ∎

Example 5.8. Find the Fourier transform of cos(ω₀t).

Solution. Using the Euler identities we can write

cos(ω₀t) = (1/2)e^{jω₀t} + (1/2)e^{−jω₀t}

We saw earlier that

e^{jω₀t} ⟷ 2πδ(ω − ω₀)   and   e^{−jω₀t} ⟷ 2πδ(ω + ω₀)

Because of linearity,

(1/2)e^{jω₀t} + (1/2)e^{−jω₀t} ⟷ πδ(ω − ω₀) + πδ(ω + ω₀)

The Fourier spectrum of cos ω₀t consists of only two components, at frequencies ω₀ and −ω₀. Therefore the spectrum has two impulses, at ω₀ and −ω₀.  ∎

5.4.2 Time Shifting

If

x(t) ⟷ X(ω)

then

x(t − t₀) ⟷ e^{−jωt₀} X(ω)    (5.16)


Proof. By definition,

F[x(t − t₀)] = ∫_{−∞}^{∞} x(t − t₀) e^{−jωt} dt

Letting u = t − t₀, we have

F[x(t − t₀)] = ∫_{−∞}^{∞} x(u) e^{−jω(u+t₀)} du = e^{−jωt₀} ∫_{−∞}^{∞} x(u) e^{−jωu} du = e^{−jωt₀} X(ω)  ∎

This result shows that if the signal is delayed in time by t₀, its magnitude spectrum remains unchanged. The phase spectrum, however, is changed: a phase of −ωt₀, which is a linear function of ω, is added to each frequency component. The slope of this linear phase term is equal to the time shift t₀.

Example 5.9. Find the Fourier transform of the rectangular pulse x(t) = rect((t − τ/2)/τ).

Solution. The pulse x(t) is the rectangular pulse rect(t/τ) of Figure 5.5a delayed by τ/2 seconds. Hence, according to (5.16), its Fourier transform is the Fourier transform of rect(t/τ) multiplied by e^{−jωτ/2}. Therefore

X(ω) = τ sinc(ωτ/2π) e^{−jωτ/2}

The amplitude spectrum |X(ω)| is the same as that indicated in Figure 5.5. But the phase spectrum has an added linear term −ωτ/2, as shown in Figure 5.11.
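A numerical sketch of Example 5.9 (my own check): the transform of the delayed pulse should equal the undelayed sinc spectrum times the linear-phase factor e^{−jωτ/2}. The `trapezoid` helper is mine.

```python
import numpy as np

def trapezoid(y, x):
    dx = np.diff(x)
    return np.sum(dx * (y[1:] + y[:-1]) / 2)

tau = 2.0
t = np.linspace(0.0, tau, 400001)            # support of the delayed pulse

for w in (0.5, 1.0, 3.0):
    X_num = trapezoid(np.exp(-1j * w * t), t)
    X_exact = tau * np.sinc(w * tau / (2 * np.pi)) * np.exp(-1j * w * tau / 2)
    assert abs(X_num - X_exact) < 1e-8
```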

Figure 5.11: The Phase spectrum ∠X(ω)

5.4.3 Frequency Shifting (Modulation)

The dual of the time shifting property is the frequency shifting property. If

x(t) ⟷ X(ω)

then

x(t) e^{jω₀t} ⟷ X(ω − ω₀)    (5.17)


Proof. By definition,

F[x(t)e^{jω₀t}] = ∫_{−∞}^{∞} x(t) e^{jω₀t} e^{−jωt} dt = ∫_{−∞}^{∞} x(t) e^{−j(ω−ω₀)t} dt = X(ω − ω₀)  ∎

Therefore, multiplying a signal by the factor e^{jω₀t} shifts the spectrum of that signal by ω₀.

Example 5.10. Find the Fourier transform of the complex sinusoidal pulse x(t) defined as

x(t) = { e^{j10t}, |t| ≤ π;  0, otherwise }

Solution. We may express x(t) as the product of a complex sinusoid, e^{j10t}, and a rectangular pulse:

x(t) = e^{j10t} rect(t/2π)

Using (5.12) we know that

rect(t/2π) ⟷ 2π sinc(ω)

Therefore

e^{j10t} rect(t/2π) ⟷ 2π sinc(ω − 10)  ∎

Changing ω₀ to −ω₀ in Equation (5.17) yields

x(t) e^{−jω₀t} ⟷ X(ω + ω₀)

For real-valued x(t), it is now easy to find the Fourier transform of x(t) multiplied by a sinusoid, since, for example, x(t) cos ω₀t can be expressed as

x(t) cos ω₀t = (1/2)[x(t)e^{jω₀t} + x(t)e^{−jω₀t}]

It follows from the frequency shifting property that

x(t) cos ω₀t ⟷ (1/2)[X(ω − ω₀) + X(ω + ω₀)]    (5.18)

Multiplying a signal by a sinusoid cos ω₀t amounts to modulating the sinusoid's amplitude. Modulating means changing the amplitude of one signal by multiplying it by the other. This type of modulation is thus known as amplitude modulation. The sinusoid cos ω₀t is called the carrier, the signal x(t) is the modulating signal, and the signal x(t) cos ω₀t is the modulated signal. Further discussion of modulation and demodulation will appear in Chapter 6.


Example 5.11. Find and sketch the Fourier transform of rect(t/4) cos 10t.

Solution. From (5.12) we find rect(t/4) ⟷ 4 sinc(2ω/π), which is depicted in Figure 5.12(a). From (5.18) it follows that

x(t) cos 10t ⟷ (1/2)[X(ω + 10) + X(ω − 10)]

In this case X(ω) = 4 sinc(2ω/π). Therefore

x(t) cos 10t ⟷ 2 sinc(2(ω + 10)/π) + 2 sinc(2(ω − 10)/π)

The spectrum of x(t) cos 10t is obtained by shifting X(ω) to the left by 10 and also to the right by 10, and then multiplying it by one-half, as depicted in Figure 5.12.  ∎

Figure 5.12: Frequency shifting by amplitude modulation.
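A numerical sketch of Example 5.11 (my own check): the transform of the modulated pulse should equal half the sum of the sinc spectra shifted to ±10 rad/s. The `trapezoid` helper is mine.

```python
import numpy as np

def trapezoid(y, x):
    dx = np.diff(x)
    return np.sum(dx * (y[1:] + y[:-1]) / 2)

t = np.linspace(-2.0, 2.0, 400001)           # rect(t/4) equals 1 on |t| <= 2

for w in (0.0, 5.0, 10.0, 12.5):
    X_num = trapezoid(np.cos(10 * t) * np.exp(-1j * w * t), t)
    X_exact = 2 * np.sinc(2 * (w + 10) / np.pi) + 2 * np.sinc(2 * (w - 10) / np.pi)
    assert abs(X_num - X_exact) < 1e-6
```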

5.4.4 Time Scaling and Frequency Scaling

If

x(t) ⟷ X(ω)

then, for any real-valued scaling constant a,

x(at) ⟷ (1/|a|) X(ω/a)

and

(1/|a|) x(t/a) ⟷ X(aω)

Proof. For a positive real constant a, changing the variable of integration to u = at gives

F[x(at)] = (1/a) ∫_{−∞}^{∞} x(u) e^{−jωu/a} du = (1/a) X(ω/a)   for a > 0

However, if a < 0, the limits on the integral are reversed when the variable of integration is changed, so that

F[x(at)] = (1/a) ∫_{∞}^{−∞} x(u) e^{−jωu/a} du = −(1/a) ∫_{−∞}^{∞} x(u) e^{−jωu/a} du = −(1/a) X(ω/a)   for a < 0

We can write the two cases as one, because the factor −1/a is always positive when a < 0; i.e.,

F[x(at)] = (1/|a|) X(ω/a)  ∎    (5.19)

The frequency scaling property can be proven in a similar manner; the result is

(1/|a|) x(t/a) ⟷ X(aω)

If a is positive and greater than unity, x(at) is a compressed version of x(t), and clearly the function X(ω/a) represents the function X(ω) expanded in frequency by the same factor a. The scaling property states that time compression of a signal results in its spectral expansion, and time expansion of the signal results in its spectral compression. Figure 5.13 shows two cases where the pulse length differs by a factor of two. Notice that the longer pulse in Figure 5.13a has a narrower transform, shown in Figure 5.13b.

Figure 5.13: The Fourier transform time and frequency scaling property.

Example 5.12. What is the Fourier transform of rect(2t/τ)?

Solution. The Fourier transform of rect(t/τ) is, by Example 5.3,

rect(t/τ) ⟷ τ sinc(ωτ/2π)

By (5.19) the Fourier transform of rect(2t/τ) is

rect(2t/τ) ⟷ (τ/2) sinc(ωτ/4π)  ∎
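A numerical sketch of Example 5.12 (my own check): compressing rect(t/τ) by 2 halves the spectral amplitude and doubles the spectral width. The `trapezoid` helper is mine.

```python
import numpy as np

def trapezoid(y, x):
    dx = np.diff(x)
    return np.sum(dx * (y[1:] + y[:-1]) / 2)

tau = 2.0
t = np.linspace(-tau / 4, tau / 4, 400001)   # rect(2t/tau) is 1 on |t| <= tau/4

for w in (0.0, 1.0, 4.0):
    X_num = trapezoid(np.exp(-1j * w * t), t)
    X_exact = (tau / 2) * np.sinc(w * tau / (4 * np.pi))
    assert abs(X_num - X_exact) < 1e-8
```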

5.4.5 Time Reflection

If

x(t) ⟷ X(ω)

then

x(−t) ⟷ X(−ω)

Proof. By letting a = −1 in (5.19) we get

F[x(−t)] = (1/|−1|) X(ω/(−1)) = X(−ω)  ∎

5.4.6 Time Differentiation

If

x(t) ⟷ X(ω)

then

dx(t)/dt ⟷ jωX(ω)

Proof. Differentiation of both sides of (5.8) yields

(d/dt) x(t) = (d/dt) [(1/2π) ∫_{−∞}^{∞} X(ω) e^{jωt} dω] = (1/2π) ∫_{−∞}^{∞} jωX(ω) e^{jωt} dω  ∎

This result shows that

dx(t)/dt ⟷ jωX(ω)

The differentiation property can be extended to yield the following:

dⁿx(t)/dtⁿ ⟷ (jω)ⁿ X(ω)    (5.20)

Example 5.13. Using the time differentiation property, find the Fourier transform of the triangle pulse x(t) = Δ(t/τ) illustrated in Figure 5.14a and defined as

Δ(t/τ) = { 1 − 2|t|/τ, |t| ≤ τ/2;  0, otherwise }

Solution. To find the Fourier transform of this pulse we differentiate it successively, as illustrated in Figure 5.14b and c. From the time differentiation property (5.20),

d²x(t)/dt² ⟷ (jω)²X(ω) = −ω²X(ω)

The second derivative d²x/dt² consists of a sequence of impulses, as depicted in Figure 5.14c; that is,

d²x(t)/dt² = (2/τ)[δ(t + τ/2) − 2δ(t) + δ(t − τ/2)]


Figure 5.14: The Fourier transform using the time differentiation property.

Taking the Fourier transform,

−ω²X(ω) = F[(2/τ)[δ(t + τ/2) − 2δ(t) + δ(t − τ/2)]]

we obtain

−ω²X(ω) = (2/τ)[e^{jωτ/2} − 2 + e^{−jωτ/2}] = (4/τ)(cos(ωτ/2) − 1) = −(8/τ) sin²(ωτ/4)

and

X(ω) = (8/ω²τ) sin²(ωτ/4) = (τ/2)[sin(ωτ/4)/(ωτ/4)]² = (τ/2) sinc²(ωτ/4π)  ∎
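A numerical sketch of Example 5.13 (my own check): integrate the triangle pulse directly and compare with (τ/2) sinc²(ωτ/4π). The `trapezoid` helper is mine.

```python
import numpy as np

def trapezoid(y, x):
    dx = np.diff(x)
    return np.sum(dx * (y[1:] + y[:-1]) / 2)

tau = 2.0
t = np.linspace(-tau / 2, tau / 2, 400001)
x = 1 - 2 * np.abs(t) / tau                  # the triangle pulse on its support

for w in (0.0, 1.0, 3.0):
    X_num = trapezoid(x * np.exp(-1j * w * t), t)
    X_exact = (tau / 2) * np.sinc(w * tau / (4 * np.pi)) ** 2
    assert abs(X_num - X_exact) < 1e-8
```

At ω = 0 this also confirms X(0) = τ/2, the area under the triangle.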

We must be careful when using the differentiation property. Note that since

F[dx(t)/dt] = jωX(ω)

the differentiation property would suggest

X(ω) = F[dx(t)/dt] / (jω)    (5.21)

This relationship is indeterminate at ω = 0. Note that differentiating x(t) destroys any dc component of x(t), and consequently the Fourier transform of the differentiated signal at ω = 0 is zero. Hence (5.21) applies only to signals with zero average value, that is, X(0) = 0. Using the differentiation property to find the Fourier transform of the unit step, for example, will yield a wrong answer. The unit step is known to have an average value, and differentiating the unit step destroys this average value. The derivative of the unit step is

du(t)/dt = δ(t)

Taking the Fourier transform of both sides yields

jωU(ω) = 1

We might be tempted at this stage to claim that the Fourier transform of u(t) is

U(ω) = 1/(jω)

This is not true, since U(0) ≠ 0 and the above result is indeterminate at ω = 0. At this point the signum function sgn(t) becomes handy, since this signal, being an odd function, must have an average value of zero. Furthermore, we can then find the Fourier transform of the unit step, since we can always express the unit step in terms of the signum function.

Example 5.14. Find the Fourier transform of the signum function x(t) = sgn(t).

Solution. First express sgn(t) in terms of the unit step function as

x(t) = sgn(t) = 2u(t) − 1

The time derivative of sgn(t) is given by

dx(t)/dt = 2δ(t)

Using the differentiation property and taking the Fourier transform of both sides,

jωX(ω) = 2   or   jω F[sgn(t)] = 2

Hence

sgn(t) ⟷ 2/(jω)

We know that X(0) = 0, because sgn(t) is an odd function and thus has zero average value. This knowledge removes the indeterminacy at ω = 0 associated with the differentiation property.  ∎

Example 5.15. Find the Fourier transform of the unit step function u(t).

Solution. The unit step function can be written as

u(t) = 1/2 + (1/2) sgn(t)

By linearity of the Fourier transform, we obtain

u(t) ⟷ πδ(ω) + 1/(jω)

Therefore, the Fourier transform of the unit step function contains an impulse at ω = 0 corresponding to the average value of 1/2.  ∎

5.4.7 Frequency Differentiation

If

x(t) ⟷ X(ω)

then

−jt x(t) ⟷ (d/dω) X(ω)

Proof. Differentiation of both sides of (5.9) yields

(d/dω) X(ω) = (d/dω) [∫_{−∞}^{∞} x(t) e^{−jωt} dt] = ∫_{−∞}^{∞} −jt x(t) e^{−jωt} dt  ∎

This result shows that

−jt x(t) ⟷ (d/dω) X(ω)

Note that this result can also be written as

t x(t) ⟷ j (d/dω) X(ω)

The frequency differentiation property can be extended to yield the following:

tⁿ x(t) ⟷ jⁿ (dⁿ/dωⁿ) X(ω)    (5.22)

Example 5.16. Find the Fourier transform of z(t) = t e^{−at}u(t).

Solution. Using (5.22) and letting x(t) = e^{−at}u(t),

F[t x(t)] = j (d/dω) (1/(a + jω)) = j · (−j)/(a + jω)² = 1/(a + jω)²

Hence,

t e^{−at}u(t) ⟷ 1/(a + jω)²  ∎
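A numerical sketch of Example 5.16 (my own check): integrate z(t) = t e^{−at}u(t) directly and compare with 1/(a + jω)². The `trapezoid` helper is mine.

```python
import numpy as np

def trapezoid(y, x):
    dx = np.diff(x)
    return np.sum(dx * (y[1:] + y[:-1]) / 2)

a = 1.0
t = np.linspace(0.0, 40.0, 400001)
z = t * np.exp(-a * t)

for w in (0.0, 1.0, 2.0):
    Z_num = trapezoid(z * np.exp(-1j * w * t), t)
    Z_exact = 1.0 / (a + 1j * w) ** 2
    assert abs(Z_num - Z_exact) < 1e-5
```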


5.4.8 Time Integration

If

x(t) ⟷ X(ω)

then

∫_{−∞}^{t} x(τ) dτ ⟷ (1/jω) X(ω) + πX(0)δ(ω)    (5.23)

where

X(0) = X(ω)|_{ω=0} = ∫_{−∞}^{∞} x(t) dt

If x(t) has a nonzero average (dc) value, then X(0) ≠ 0.

Proof. By definition,

F[∫_{−∞}^{t} x(τ) dτ] = ∫_{−∞}^{∞} [∫_{−∞}^{t} x(τ) dτ] e^{−jωt} dt = ∫_{−∞}^{∞} [∫_{−∞}^{∞} x(τ)u(t − τ) dτ] e^{−jωt} dt

Interchanging the order of integration and noting that x(τ) does not depend on t, we have

F[∫_{−∞}^{t} x(τ) dτ] = ∫_{−∞}^{∞} x(τ) [∫_{−∞}^{∞} u(t − τ) e^{−jωt} dt] dτ

The inner integral is the Fourier transform of the shifted u(t), so

F[∫_{−∞}^{t} x(τ) dτ] = ∫_{−∞}^{∞} x(τ) [U(ω) e^{−jωτ}] dτ = U(ω) ∫_{−∞}^{∞} x(τ) e^{−jωτ} dτ

where the remaining integral is simply X(ω). The Fourier transform of u(t) from Example 5.15 is

U(ω) = πδ(ω) + 1/(jω)

Therefore,

F[∫_{−∞}^{t} x(τ) dτ] = (πδ(ω) + 1/(jω)) X(ω)

which can be written as

F[∫_{−∞}^{t} x(τ) dτ] = (1/jω) X(ω) + πX(0)δ(ω)  ∎

The factor X(0) in the second term on the right follows from the sifting property of the impulse function. This second term is needed to account for the average value of x(τ). Recall that a dc component shows up as an impulse at ω = 0. If x(τ) has no dc component to consider, the time integration property is simply

∫_{−∞}^{t} x(τ) dτ ⟷ (1/jω) X(ω)

Example 5.17. Using the integration property, derive the Fourier transform of the unit step function u(t).

Solution. The unit step may be expressed as the integral of the impulse function:

u(t) = ∫_{−∞}^{t} δ(τ) dτ

Since δ(t) ⟷ 1, using (5.23) gives

u(t) = ∫_{−∞}^{t} δ(τ) dτ ⟷ 1/(jω) + πδ(ω)

as found earlier in Example 5.15.  ∎

5.4.9 Time Convolution

If

x(t) ⟷ X(ω)   and   h(t) ⟷ H(ω)

then

x(t) ∗ h(t) ⟷ X(ω)H(ω)

Proof. The proof follows from the definition of convolution:

F[x(t) ∗ h(t)] = ∫_{−∞}^{∞} [∫_{−∞}^{∞} x(τ)h(t − τ) dτ] e^{−jωt} dt = ∫_{−∞}^{∞} x(τ) [∫_{−∞}^{∞} h(t − τ) e^{−jωt} dt] dτ

The inner integral is the Fourier transform of the shifted h(t), so

F[x(t) ∗ h(t)] = ∫_{−∞}^{∞} x(τ) [H(ω) e^{−jωτ}] dτ = H(ω) ∫_{−∞}^{∞} x(τ) e^{−jωτ} dτ = H(ω)X(ω) = Y(ω)  ∎

Thus convolution in the time domain is equivalent to multiplication in the frequency domain. The use of the convolution property for LTI systems is demonstrated in Figure 5.15. The amplitude and phase spectra of the output y(t) are related to those of the input x(t) and impulse response h(t) in the following manner:

|Y(ω)| = |X(ω)| |H(ω)|
∠Y(ω) = ∠X(ω) + ∠H(ω)


Thus the amplitude spectrum of the input is modified by |H(ω)| to produce the amplitude spectrum of the output, and the phase spectrum of the input is changed by ∠H(ω) to produce the phase spectrum of the output. H(ω), the Fourier transform of the system impulse response, is generally referred to as the frequency response of the system.

Figure 5.15: Convolution property of LTI system response.

Example 5.18. Using the time convolution property, prove the integration property.

Solution. Consider the convolution of x(t) with a unit step function; it follows that

x(t) ∗ u(t) = ∫_{−∞}^{∞} x(τ) u(t − τ) dτ

The unit step function u(t − τ) has a value of zero for t < τ and a value of 1 for t > τ; therefore,

x(t) ∗ u(t) = ∫_{−∞}^{t} x(τ) dτ

Now from the time convolution property, it follows that

x(t) ∗ u(t) = ∫_{−∞}^{t} x(τ) dτ ⟷ X(ω)[1/(jω) + πδ(ω)] = (1/jω) X(ω) + πX(0)δ(ω)  ∎
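A numerical sketch of the time convolution property (my own check): convolve rect(t/2) with itself, which gives a triangle, and verify that the transform of the result equals the product of the two individual sinc spectra. The `trapezoid` helper is mine.

```python
import numpy as np

def trapezoid(y, x):
    dx = np.diff(x)
    return np.sum(dx * (y[1:] + y[:-1]) / 2)

t = np.linspace(-1.0, 1.0, 20001)            # rect(t/2) sampled on its support
dt = t[1] - t[0]
x = np.ones_like(t)
y = np.convolve(x, x) * dt                   # x(t) * x(t), supported on [-2, 2]
ty = np.linspace(-2.0, 2.0, y.size)

for w in (0.5, 1.0, 2.0):
    Y_num = trapezoid(y * np.exp(-1j * w * ty), ty)
    XH = (2 * np.sinc(w / np.pi)) ** 2       # X(w)*H(w), each equals 2 sinc(w/pi)
    assert abs(Y_num - XH) < 1e-2
```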

5.4.10 Frequency Convolution (Multiplication)

It should not be surprising that, since convolution in the time domain corresponds to multiplication of the Fourier transforms, multiplication in the time domain corresponds to convolution of the Fourier transforms. Therefore, if

x(t) ⟷ X(ω)   and   p(t) ⟷ P(ω)

then

x(t)p(t) ⟷ (1/2π)[X(ω) ∗ P(ω)]

Proof. By definition,

F[x(t)p(t)] = ∫_{−∞}^{∞} x(t)p(t) e^{−jωt} dt    (5.24)

We substitute for p(t) in (5.24) by

p(t) = (1/2π) ∫_{−∞}^{∞} P(σ) e^{jσt} dσ


using a different dummy variable, since the variable ω is already used in (5.24); therefore,

F[x(t)p(t)] = ∫_{−∞}^{∞} x(t) [(1/2π) ∫_{−∞}^{∞} P(σ) e^{jσt} dσ] e^{−jωt} dt

Interchanging the order of integration, we have

F[x(t)p(t)] = (1/2π) ∫_{−∞}^{∞} P(σ) [∫_{−∞}^{∞} x(t) e^{−j(ω−σ)t} dt] dσ

The inner integral is X(ω − σ) by the frequency shifting property, so

F[x(t)p(t)] = (1/2π) ∫_{−∞}^{∞} P(σ) X(ω − σ) dσ = (1/2π) X(ω) ∗ P(ω)  ∎

Figure 5.16 depicts a block diagram representation of the multiplication property. The importance of this property is that the spectrum of a signal such as x(t) cos ω₀t can be easily computed. These types of signals arise in many communication systems, such as amplitude modulators, as we shall see later. Since

cos ω₀t = (1/2)e^{jω₀t} + (1/2)e^{−jω₀t}

then

F[x(t) cos ω₀t] = (1/2π) X(ω) ∗ [πδ(ω − ω₀) + πδ(ω + ω₀)] = (1/2) X(ω − ω₀) + (1/2) X(ω + ω₀)

a result we have seen earlier (see the frequency shifting property). Some authors refer to the multiplication property as the modulation property.

Figure 5.16: Block diagram representation of the multiplication property.

5.4.11 Symmetry - Real and Imaginary Signals

First, suppose x(t) is real. This implies that x(t) = x*(t); then

X(−ω) = X*(ω)

where * denotes the complex conjugate. This is referred to as conjugate symmetry.

Proof. The property follows by taking the conjugate of both sides of (5.9):

X*(ω) = [∫_{−∞}^{∞} x(t) e^{−jωt} dt]* = ∫_{−∞}^{∞} x*(t) e^{jωt} dt

Using the fact that x(t) is real, we obtain

X*(ω) = ∫_{−∞}^{∞} x(t) e^{jωt} dt = X(−ω)

Now, if we express X(ω) in polar form, we have

X(ω) = |X(ω)| e^{j∠X(ω)}    (5.25)

Conjugating both sides of (5.25) yields

X*(ω) = |X(ω)| e^{−j∠X(ω)}

Replacing each ω by −ω in (5.25) results in

X(−ω) = |X(−ω)| e^{j∠X(−ω)}

Since X*(ω) = X(−ω), the last two equations are equal. It then follows that

|X(ω)| = |X(−ω)|    (5.26)

∠X(ω) = −∠X(−ω)    (5.27)

showing that the magnitude spectrum is an even function of frequency and the phase spectrum is an odd function of frequency.

Now suppose x(t) is purely imaginary, so that x(t) = −x*(t). In this case, we may write

X*(ω) = [∫_{−∞}^{∞} x(t) e^{−jωt} dt]* = ∫_{−∞}^{∞} x*(t) e^{jωt} dt = −∫_{−∞}^{∞} x(t) e^{jωt} dt = −X(−ω)

It then follows that

|X(ω)| = |X(−ω)|

∠X(ω) = ±π − ∠X(−ω)

i.e., the transform is conjugate antisymmetric: the magnitude spectrum is still an even function of frequency, while the phase spectrum differs from an odd function by ±π.

Example 5.19. Show that if x(t) is real, the expression of the inverse Fourier transform in (5.8) can be changed to an expression involving real cosinusoidal signals.

Solution. For real x(t),

x(t) = (1/2π) ∫_{−∞}^{∞} X(ω) e^{jωt} dω = (1/2π) ∫_{−∞}^{0} X(ω) e^{jωt} dω + (1/2π) ∫_{0}^{∞} X(ω) e^{jωt} dω

Expressing X(ω) in polar form as in (5.25) and replacing ω by −ω in the first integral term above yields (paying particular attention to how the limits of integration change)

x(t) = (1/2π) ∫_{0}^{∞} |X(−ω)| e^{j∠X(−ω)} e^{−jωt} dω + (1/2π) ∫_{0}^{∞} |X(ω)| e^{j∠X(ω)} e^{jωt} dω

Using (5.26) and (5.27) we obtain

x(t) = (1/2π) ∫_{0}^{∞} |X(ω)| e^{−j(ωt + ∠X(ω))} dω + (1/2π) ∫_{0}^{∞} |X(ω)| e^{j(ωt + ∠X(ω))} dω
     = (1/2π) ∫_{0}^{∞} |X(ω)| [e^{j(ωt + ∠X(ω))} + e^{−j(ωt + ∠X(ω))}] dω
     = (1/2π) ∫_{0}^{∞} 2|X(ω)| cos[ωt + ∠X(ω)] dω  ∎

5.4.12 Symmetry - Even and Odd Signals

Assume x(t) is real valued and has even symmetry. These conditions imply x∗(t) = x(t) and x(−t) = x(t). Using these relationships we may write

X∗(ω) = ∫_{-∞}^{∞} x∗(t) e^{jωt} dt = ∫_{-∞}^{∞} x(t) e^{jωt} dt = ∫_{-∞}^{∞} x(−t) e^{jωt} dt

Now perform a change of variable τ = −t to obtain

X∗(ω) = ∫_{-∞}^{∞} x(τ) e^{−jωτ} dτ = X(ω) �

Therefore, the Fourier transform of an even and real valued time signal is an even and real valued function in the frequency domain. Similarly, we may show that if x(t) is real and odd, then X∗(ω) = −X(ω), i.e., the Fourier transform of an odd and real valued time signal is an odd and imaginary function in the frequency domain. Table 5.1 summarizes the four types of combined symmetries.


x(t)                 X(ω)
Real and Even        Real and Even
Real and Odd         Imaginary and Odd
Imaginary and Even   Imaginary and Even
Imaginary and Odd    Real and Odd

Table 5.1: Symmetries of the Fourier transform.

5.4.13 Duality

A duality exists between the time domain and the frequency domain. Equations (5.8) and (5.9) show an interesting fact: the direct and the inverse transform operations are remarkably similar. The property states that if

x(t) F←→ X(ω)   (5.28)

then

X(t) F←→ 2πx(−ω)   (5.29)

This property states that if x(t) has the Fourier transform X(ω), and we form the function of time X(t) = X(ω)|_{ω=t}, then F[X(t)] = 2πx(−ω), where x(−ω) = x(t)|_{t=−ω}.

Proof According to (5.8)

x(t) = (1/2π) ∫_{-∞}^{∞} X(σ) e^{jσt} dσ

Hence

2πx(−t) = ∫_{-∞}^{∞} X(σ) e^{−jσt} dσ

Changing t to ω yields

2πx(−ω) = ∫_{-∞}^{∞} X(σ) e^{−jσω} dσ

Furthermore, changing σ to t yields

2πx(−ω) = ∫_{-∞}^{∞} X(t) e^{−jωt} dt �

The duality relationship described by Equations (5.28) and (5.29) is illustrated in Figure 5.17.


Figure 5.17: The Fourier transform duality property.

Example 5.20 Use duality to find the Fourier transform of

x(t) = 2/(1 + t²)

� Solution First recognize that

f(t) = e^{−|t|} F←→ F(ω) = 2/(1 + ω²)

Using duality

F(t) = 2/(1 + t²) F←→ 2πf(−ω)

which indicates that

X(ω) = 2πf(−ω) = 2πe^{−|ω|} �

Example 5.21 Use duality to find the inverse Fourier transform of X(ω) = sgn(ω).

� Solution We recognize that

f(t) = sgn(t) F←→ F(ω) = 2/(jω)

Using duality

F(t) = 2/(jt) F←→ 2πsgn(−ω)

Using the fact that sgn(ω) = −sgn(−ω) we obtain

x(t) = j/(πt) �

5.4.14 Energy of Non-periodic Signals

In Section 4.5.11, we related the total average power of a periodic signal to the average power of each frequency component in its Fourier series. We did this through Parseval's theorem. We would like to find an analogous relationship for non-periodic signals. The non-periodic signals considered here are energy signals; next we show that the energy of these signals can be computed from their transform X(ω). The signal energy is defined as

E∞ = ∫_{-∞}^{∞} |x(t)|² dt = ∫_{-∞}^{∞} x(t) x∗(t) dt

Using (5.8) in the above equation

E∞ = ∫_{-∞}^{∞} x(t) [(1/2π) ∫_{-∞}^{∞} X∗(ω) e^{−jωt} dω] dt

Interchanging the order of integration gives

E∞ = (1/2π) ∫_{-∞}^{∞} X∗(ω) [∫_{-∞}^{∞} x(t) e^{−jωt} dt] dω
   = (1/2π) ∫_{-∞}^{∞} X∗(ω) X(ω) dω
   = (1/2π) ∫_{-∞}^{∞} |X(ω)|² dω

Consequently,

E∞ = ∫_{-∞}^{∞} |x(t)|² dt = (1/2π) ∫_{-∞}^{∞} |X(ω)|² dω   (5.30)

Equation (5.30) is known as Parseval’s theorem for energy signals.
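A quick numerical sketch (not from the notes) of the discrete analogue of (5.30): for the DFT, the energy computed in the time domain equals the energy computed in the frequency domain divided by N, mirroring the 1/2π factor in the continuous case.

```python
import numpy as np

# Discrete analogue of Parseval's theorem (5.30):
# sum |x[n]|^2  =  (1/N) sum |X[k]|^2
rng = np.random.default_rng(1)
x = rng.standard_normal(128)
X = np.fft.fft(x)

energy_time = np.sum(np.abs(x)**2)
energy_freq = np.sum(np.abs(X)**2) / len(x)
assert np.allclose(energy_time, energy_freq)
```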

Example 5.22 Evaluate the following integral

∫_{-∞}^{∞} 2/|jω + 2|² dω


� Solution Let

X(ω) = 1/(jω + 2)

Now the inverse Fourier transform of X(ω) is x(t) = e^{−2t}u(t). Using (5.30)

(1/2π) ∫_{-∞}^{∞} 2/|jω + 2|² dω = 2 ∫_{-∞}^{∞} |x(t)|² dt

so that

∫_{-∞}^{∞} 2/|jω + 2|² dω = 4π ∫_{0}^{∞} e^{−4t} dt = π �

For convenience, a summary of the basic Fourier transform properties is listed in Table 5.2.

Property Name              x(t)                      X(ω)
Linearity                  αx₁(t) + βx₂(t)           αX₁(ω) + βX₂(ω)
Time Shifting              x(t − t₀)                 X(ω) e^{−jωt₀}
Frequency Shifting         x(t) e^{jω₀t}             X(ω − ω₀)
Time Scaling               x(at)                     (1/|a|) X(ω/a)
Time Reflection            x(−t)                     X(−ω)
Conjugation                x∗(t)                     X∗(−ω)
Time Differentiation       dⁿx(t)/dtⁿ                (jω)ⁿ X(ω)
Frequency Differentiation  (−jt)ⁿ x(t)               dⁿX(ω)/dωⁿ
Time Integration           ∫_{-∞}^{t} x(τ) dτ        (1/jω) X(ω) + πX(0)δ(ω)
Time Convolution           x(t) ∗ h(t)               X(ω)H(ω)
Multiplication             x(t)p(t)                  (1/2π) X(ω) ∗ P(ω)
Duality                    X(t)                      2πx(−ω)
Parseval's Theorem         ∫_{-∞}^{∞} |x(t)|² dt     (1/2π) ∫_{-∞}^{∞} |X(ω)|² dω

Table 5.2: Basic Fourier transform properties.


5.5 Energy and Power Spectral Density

5.5.1 The Spectral Density

The Fourier spectrum X(ω) of a signal indicates the relative amplitudes and phases of the sinusoids that are required to synthesize that signal. The Fourier spectrum of a periodic signal has finite amplitudes and exists at discrete frequencies (ω₀ and its multiples). Such a spectrum is easy to visualize, but the spectrum of a non-periodic signal is not, because it is a continuous spectrum. A continuous spectrum exists for every value of ω, but the amplitude of each individual component in the spectrum is zero (see Section 6.1.1). The meaningful measure here is not the amplitude of a component of some frequency but the spectral density per unit bandwidth. Equation (5.8) represents x(t) as a continuous sum of exponential functions with frequencies lying in the interval (−∞, ∞). If the signal x(t) represents a voltage, X(ω) has the dimensions of voltage multiplied by time. Because frequency has the dimensions of inverse time, we can consider X(ω) as a voltage-density spectrum, known more generally as the spectral density. In other words, it is the area under the spectral density function X(ω) that contributes, and not each point on the X(ω) curve. From (5.7) it is clear that x(t) is synthesized by adding exponentials of the form e^{jnΔωt}, in which the contribution by any one exponential component is zero. But the contribution by the exponentials in an infinitesimal band Δω located at ω = nΔω is (1/2π)X(nΔω)Δω, and the addition of all these components yields x(t) in the integral form:

x(t) = lim_{Δω→0} (1/2π) Σ_{n=−∞}^{∞} X(nΔω) e^{jnΔωt} Δω = (1/2π) ∫_{-∞}^{∞} X(ω) e^{jωt} dω

The contribution by components within a band dω is (1/2π)X(ω) dω = X(ω) dF, where dF = dω/2π is the bandwidth in hertz. Clearly, X(ω) is the spectral density per unit bandwidth (in hertz). It also follows that even if the amplitude of any one component is zero, the relative amount of the component of frequency ω is X(ω). Although X(ω) is a spectral density, in practice it is often called the spectrum of x(t) rather than the spectral density of x(t). More commonly, X(ω) is called the Fourier spectrum (or Fourier transform) of x(t).

5.5.2 Energy Spectral Density

We turn our attention now to investigate how any given frequency band contributes to the energy of the signal. Equation (5.30) can be interpreted to mean that the energy of a signal x(t) results from energies contributed by all the spectral components of the signal x(t). The total signal energy is the area under |X(ω)|² (divided by 2π). If we consider an infinitesimally small band Δω (Δω → 0) as illustrated in Figure 5.18, the energy ΔE∞ of the spectral components in this band is the area of |X(ω)|² under this band (divided by 2π):

ΔE∞ = (1/2π) |X(ω)|² Δω = |X(ω)|² ΔF

Therefore, the energy contributed by the components in this band of ΔF (inhertz) is |X(ω)|2ΔF . The total signal energy is the sum of energies of all


Figure 5.18: Interpretation of the energy spectral density of a signal.

such bands and is indicated by the area under |X(ω)|² as in (5.30). Therefore, |X(ω)|² is called the energy spectral density or simply the energy spectrum, and it is given the symbol Ψ, that is, Ψ(ω) = |X(ω)|². Thus the energy spectrum is the function

• that describes the relative amount of energy of a given signal x(t) versusfrequency.

• whose total area under Ψ(ω) is the energy of the signal.

Note that the quantity Ψ(ω) describes only the relative amount of energy at various frequencies. For continuous Ψ(ω), the energy at any given frequency is zero; it is the area under Ψ(ω) that contributes energy. The energy contained within a band ω₁ ≤ ω ≤ ω₂ is

ΔE∞ = (1/2π) ∫_{ω₁}^{ω₂} |X(ω)|² dω   (5.31)

For real valued time signals, Ψ(ω) is an even function and (5.30) can be reduced to

E∞ = (1/π) ∫_{0}^{∞} |X(ω)|² dω   (5.32)

Note that the energy spectrum of a signal depends on the magnitude of the spectrum and not on the phase.

Example 5.23 Find the energy of the signal x(t) = e^{−t}u(t) in the frequency band −4 < ω < 4.

� Solution The total energy of x(t) is

E∞ = ∫_{0}^{∞} e^{−2t} dt = 1/2

The energy in the frequency band −4 < ω < 4 is

ΔE∞ = (1/π) ∫_{0}^{4} 1/(1 + ω²) dω = (1/π) tan⁻¹ω |₀⁴ = 0.422 �

Thus, approximately 84% of the total energy content of the signal lies in the frequency band −4 < ω < 4, a result that could not be obtained in the time domain.
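The numbers above can be reproduced directly from the closed forms used in the example, since ∫ dω/(1+ω²) = tan⁻¹ω. A minimal check:

```python
import math

# Check of Example 5.23: x(t) = e^{-t}u(t) has |X(w)|^2 = 1/(1+w^2),
# total energy 1/2, and the energy in -4 < w < 4 is (1/pi) arctan(4).
E_total = 0.5
E_band = math.atan(4) / math.pi

print(round(E_band, 3))            # → 0.422
print(round(E_band / E_total, 3))  # → 0.844, i.e. about 84% of the energy
```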


5.5.3 Power Spectral Density

Not all signals of interest have finite energy; some signals have infinite energy but finite average power. A function that describes the distribution of the average power of the signal as a function of frequency is called the power spectral density or simply the power spectrum. In the following, we develop an expression for the power spectrum of power signals. Let x(t) be a power signal (not necessarily periodic), shown in Figure 5.19a, and define x_T(t), a truncated version of this power signal shown in Figure 5.19c, as

x_T(t) = { x(t), |t| < T
         { 0,    otherwise
       = x(t) rect(t/2T)

We also assume that

x_T(t) F←→ X_T(ω)

The average power in signal x(t) is

P∞ = lim_{T→∞} [(1/2T) ∫_{−T}^{T} |x(t)|² dt] = lim_{T→∞} [(1/2T) ∫_{-∞}^{∞} |x_T(t)|² dt]   (5.33)

Figure 5.19: The time truncation of a power signal.


Using Parseval's relation, (5.33) can be written as

P∞ = (1/2π) lim_{T→∞} [(1/2T) ∫_{-∞}^{∞} |X_T(ω)|² dω]
   = (1/2π) ∫_{-∞}^{∞} lim_{T→∞} [|X_T(ω)|²/(2T)] dω
   = (1/2π) ∫_{-∞}^{∞} S(ω) dω

where

S(ω) = lim_{T→∞} [|X_T(ω)|²/(2T)]

S(ω) is referred to as the power spectrum of signal x(t) and describes the distribution of the power of the signal versus frequency.

The above discussion holds for any general power signal. Next we show how to compute the power spectrum of a periodic signal. Assume x(t) is periodic and that it is represented by the exponential Fourier series

x(t) = Σ_{n=−∞}^{∞} c_n e^{jnω₀t}

Define the truncated signal x_T(t) as the product x(t) rect(t/2T). By using the modulation property

x_T(t) = x(t) rect(t/2T) F←→ X_T(ω) = (1/2π) [X(ω) ∗ 2T sinc(ωT/π)]
       = (1/2π) ∫_{-∞}^{∞} 2T sinc(λT/π) X(ω − λ) dλ

Substituting (5.14) for X(ω) we have

X_T(ω) = (1/2π) ∫_{-∞}^{∞} 2T sinc(λT/π) [Σ_{n=−∞}^{∞} 2πc_n δ(ω − λ − nω₀)] dλ

Interchanging the operations of integration and summation yields

X_T(ω) = Σ_{n=−∞}^{∞} 2T c_n [∫_{-∞}^{∞} sinc(λT/π) δ(ω − nω₀ − λ) dλ]

By sifting, the bracketed integral equals sinc(λT/π)|_{λ=ω−nω₀}, so

X_T(ω) = Σ_{n=−∞}^{∞} 2T c_n sinc((ω − nω₀)T/π)

Next we form the function |X_T(ω)|²/(2T) to obtain

|X_T(ω)|²/(2T) = Σ_{n=−∞}^{∞} Σ_{m=−∞}^{∞} 2T c_n c∗_m sinc((ω − nω₀)T/π) sinc((ω − mω₀)T/π)

The power spectrum of the periodic signal x(t) is obtained by taking the limit of the last expression as T → ∞. It has been observed earlier (see Example 5.4) that as T → ∞ the sinc function approaches an impulse function. Therefore, we anticipate that the two sinc functions in the previous expression approach δ(ω − kω₀), k = m and n, each with strength π/T. Also observe that

δ(ω − nω₀) δ(ω − mω₀) = { δ(ω − nω₀), m = n
                        { 0,           otherwise

Then the power spectrum of the periodic signal is

S(ω) = lim_{T→∞} |X_T(ω)|²/(2T) = 2π Σ_{n=−∞}^{∞} |c_n|² δ(ω − nω₀)   (5.34)

Now that we have obtained our result, note that to convert any line power spectrum (i.e., the set of lines |c_n|²) to a power spectral density, simply change the lines to impulses. The weights (areas) of these impulses are equal to the squared magnitudes of the line heights multiplied by the factor 2π. Integrating the power spectrum S(ω) in (5.34) over all frequencies yields

P∞ = (1/2π) ∫_{-∞}^{∞} S(ω) dω = ∫_{-∞}^{∞} Σ_{n=−∞}^{∞} |c_n|² δ(ω − nω₀) dω
   = Σ_{n=−∞}^{∞} |c_n|² [∫_{-∞}^{∞} δ(ω − nω₀) dω]
   = Σ_{n=−∞}^{∞} |c_n|²

where the bracketed integral equals 1 by the unit area of the impulse.

Example 5.23 Find the power spectral density of the periodic signal x(t) = A cos(ω₀t + θ).

� Solution Using Euler's identity

x(t) = (A/2) e^{jθ} e^{jω₀t} + (A/2) e^{−jθ} e^{−jω₀t}

Writing an exponential Fourier series for x(t), we find

c₋₁ = (A/2) e^{−jθ}  and  c₁ = (A/2) e^{jθ}

Using (5.34), we have

S(ω) = 2π [(A²/4) δ(ω + ω₀) + (A²/4) δ(ω − ω₀)]
     = (1/2) πA² δ(ω + ω₀) + (1/2) πA² δ(ω − ω₀)


The power can be found by integrating the power spectrum

P∞ = (1/2π) ∫_{-∞}^{∞} S(ω) dω
   = (1/2π) ∫_{-∞}^{∞} [(1/2) πA² δ(ω + ω₀) + (1/2) πA² δ(ω − ω₀)] dω
   = (1/4) A² + (1/4) A² = A²/2

This result can be checked easily in the time domain. Also note that the result could be obtained easily using Parseval's relation

Σ_{n=−∞}^{∞} |c_n|² = c₋₁c∗₋₁ + c₁c∗₁ = A²/4 + A²/4 = A²/2 �
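Both routes to A²/2 can be verified numerically. A minimal sketch (the sampling grid and amplitude values are illustrative, not from the notes): average the squared cosine over one period and compare with the Fourier coefficient powers.

```python
import numpy as np

# Check: for x(t) = A cos(w0 t + theta) the average power is A^2/2, and the
# exponential Fourier coefficients c_{±1} each carry A^2/4 of the power.
A, theta = 3.0, 0.7
t = np.linspace(0, 2*np.pi, 10_000, endpoint=False)  # one period (w0 = 1)
x = A*np.cos(t + theta)

P_time = np.mean(x**2)                 # average power over one period
assert np.isclose(P_time, A**2/2)

c1 = np.mean(x*np.exp(-1j*t))          # c_1 = (1/T) ∫_T x e^{-j w0 t} dt
assert np.isclose(abs(c1)**2, A**2/4)
assert np.isclose(2*abs(c1)**2, P_time)  # Parseval: sum |c_n|^2 = A^2/2
```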

5.6 Correlation Functions

The word correlation in general refers to the degree by which things are related; we say, for example, that drugs and crime are correlated. In the study of signal and system analysis the characteristics of individual signals are important, but often the relationships between signals are just as important. Correlation functions mathematically define the similarity between two signals in both the time domain and the frequency domain. The mathematical definition of a correlation function depends on the type of signals being analyzed.

5.6.1 Energy Signals

For two real energy signals x(t) and y(t) the correlation function R_xy is defined by (please note different authors use different definitions of correlation; however, they establish the same results)

R_xy(t) = ∫_{-∞}^{∞} x(τ) y(τ − t) dτ = ∫_{-∞}^{∞} x(t + τ) y(τ) dτ

For complex signals we define

R_xy(t) = ∫_{-∞}^{∞} x(τ) y∗(τ − t) dτ = ∫_{-∞}^{∞} x(t + τ) y∗(τ) dτ

5.6.2 Power Signals

The correlation between two power signals x(t) and y(t) is defined by

R_xy(t) = lim_{T→∞} (1/T) ∫_T x(τ) y(τ − t) dτ = lim_{T→∞} (1/T) ∫_T x(t + τ) y(τ) dτ

where ∫_T denotes integration over an interval of length T. For complex signals we define

R_xy(t) = lim_{T→∞} (1/T) ∫_T x(τ) y∗(τ − t) dτ = lim_{T→∞} (1/T) ∫_T x(t + τ) y∗(τ) dτ


5.6.3 Convolution and Correlation

Notice the similarity between the correlation function for two energy signals and the convolution of two signals presented earlier. The convolution of two signals x(t) and y(t) is

x(t) ∗ y(t) = ∫_{-∞}^{∞} x(τ) y(t − τ) dτ

We notice there is a simple mathematical relationship between correlation and convolution:

R_xy(t) = x(t) ∗ y(−t)

Using the convolution property of the Fourier transform and the fact that for a real valued signal y(−t) F←→ Y∗(ω), we have

R_xy(t) F←→ X(ω)Y∗(ω)
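The relation R_xy(t) = x(t) ∗ y(−t) carries over directly to finite discrete signals: correlating x with y equals convolving x with the time-reversed y. A minimal sketch with arbitrary sample values:

```python
import numpy as np

# Correlation equals convolution with one signal time-reversed.
x = np.array([1.0, 2.0, 3.0, 0.0, -1.0])
y = np.array([2.0, -1.0, 0.5])

R_xy = np.correlate(x, y, mode="full")
conv = np.convolve(x, y[::-1], mode="full")
assert np.allclose(R_xy, conv)
```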

5.6.4 Autocorrelation

A very important special case of the correlation function is the correlation of a function with itself. This type of correlation is called the autocorrelation function. If x(t) is an energy signal and real valued, its autocorrelation is

R_xx(t) = ∫_{-∞}^{∞} x(τ) x(τ − t) dτ = ∫_{-∞}^{∞} x(t + τ) x(τ) dτ

At a shift of zero, i.e. t = 0, this becomes

R_xx(0) = ∫_{-∞}^{∞} x²(τ) dτ

which is the total energy of the signal. On the other hand, if x(t) is a power signal, the autocorrelation is

R_xx(t) = lim_{T→∞} (1/T) ∫_T x(τ) x(τ − t) dτ = lim_{T→∞} (1/T) ∫_T x(t + τ) x(τ) dτ

If the shift is zero we have

R_xx(0) = lim_{T→∞} (1/T) ∫_T x²(τ) dτ

which is the average power of the signal. Note that if x(t) is periodic, the limiting operation in the determination of R_xx(t) can be replaced by a computation over one period. The subscript (xx) of the autocorrelation function R_xx is often written as R_x.

Example 5.24 Determine and sketch the autocorrelation function of the periodic square wave x(t) shown in Figure 5.20.

� Solution Because x(t) is real valued and periodic, the autocorrelation function is given by

R_x(t) = (1/T) ∫_T x(τ) x(τ − t) dτ


Figure 5.20: A periodic square wave signal

For −T/2 < t < 0 (see Figure 5.21),

R_x(t) = (1/T) ∫_{−T/4}^{t+T/4} A² dτ = A² (1/2 + t/T)

For 0 < t < T/2,

R_x(t) = (1/T) ∫_{t−T/4}^{T/4} A² dτ = A² (1/2 − t/T)

Figure 5.21: A periodic square wave signal

A graph of R_x(t) is shown in Figure 5.22. Note that since x(t) is periodic, all calculations repeat over every period. It follows that the autocorrelation function of a periodic waveform is periodic. It is interesting to notice that R_x(0) = A²/2, the average power of the signal, a result we should all be familiar with by now! �

Figure 5.22: Autocorrelation function of periodic square wave signal
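The triangular autocorrelation of Example 5.24 can be reproduced numerically. A sketch, assuming (as Figure 5.20 suggests) a square wave that equals A on half of each period and 0 on the other half:

```python
import numpy as np

# Periodic autocorrelation of a 0/A square wave with 50% duty cycle.
A, N = 2.0, 1000                       # N samples per period (illustrative)
x = np.zeros(N)
x[:N//2] = A                           # one period of the square wave

# R_x(k) = (1/N) sum_n x[n] x[(n+k) mod N]  (circular shift over one period)
R = np.array([np.mean(x * np.roll(x, -k)) for k in range(N)])

assert np.isclose(R[0], A**2/2)        # R_x(0) = A^2/2, the average power
assert np.isclose(R[N//2], 0.0)        # zero at a half-period shift
assert np.isclose(R[N//4], A**2/4)     # triangular: halfway down at T/4
```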


5.7 Correlation and the Fourier Transform

In Section 5.5 we have seen how signals are handled using the power spectral density function S(ω) and the energy spectral density Ψ(ω). Spectral density functions give us great insight into which frequency band contains more energy or more power. The question now naturally arises: Is there some operation in the time domain which is equivalent to finding the power spectrum or the energy spectrum in frequency?

5.7.1 Autocorrelation and The Energy Spectrum

In Section 5.6.4 we have seen how autocorrelation functions are related to signal energy and signal power. A relation between the autocorrelation function and the energy spectrum can also be established. The autocorrelation function R_x(t) for energy signals is

R_x(t) = ∫_{-∞}^{∞} x(τ) x∗(τ − t) dτ = ∫_{-∞}^{∞} x(t + τ) x∗(τ) dτ   (5.35)

The Fourier transform of Equation (5.35) gives

F[R_x(t)] = ∫_{-∞}^{∞} [∫_{-∞}^{∞} x(t + τ) x∗(τ) dτ] e^{−jωt} dt

Interchanging the order of integration, we have

F[R_x(t)] = ∫_{-∞}^{∞} [∫_{-∞}^{∞} x(t + τ) e^{−jωt} dt] x∗(τ) dτ
          = ∫_{-∞}^{∞} [X(ω) e^{jωτ}] x∗(τ) dτ = |X(ω)|²

Therefore,

R_x(t) F←→ Ψ(ω)

and we conclude that the energy spectral density Ψ(ω) is the Fourier transform of the autocorrelation function of energy signals. It is clear that R_x(t) provides spectral information about x(t) directly. It is interesting to note that for a real valued energy signal x(t), we can write

R_x(t) F←→ X(ω)X(−ω)

Recognizing that multiplication in the frequency domain is equivalent to convolution in the time domain,

R_x(t) = x(t) ∗ x(−t) = ∫_{-∞}^{∞} x(τ) x(τ − t) dτ   (5.36)

which is exactly the definition of the autocorrelation function. From (5.36) it is clear that

R_x(−t) = x(−t) ∗ x(t) = R_x(t)

Therefore, for real x(t), the autocorrelation function R_x(t) is an even function of t.
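The pair R_x(t) F←→ Ψ(ω) has an exact discrete counterpart: the DFT of the circular autocorrelation of a real sequence equals |X[k]|². A minimal numerical sketch:

```python
import numpy as np

# Wiener-Khinchin check for the DFT: FFT(autocorrelation) = |X|^2.
rng = np.random.default_rng(2)
x = rng.standard_normal(32)
X = np.fft.fft(x)

N = len(x)
# Circular autocorrelation R[k] = sum_n x[n] x[(n-k) mod N]
R = np.array([np.sum(x * np.roll(x, k)) for k in range(N)])

assert np.allclose(np.fft.fft(R), np.abs(X)**2)
assert np.allclose(R, R[(-np.arange(N)) % N])   # R is even, as shown above
```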


5.7.2 Autocorrelation and the Power Spectrum

The autocorrelation function has the same relation to the power spectral density as it had to the energy spectral density. In particular, we show that the power spectral density S(ω) is the Fourier transform of the autocorrelation function of power signals, i.e.,

R_x(t) F←→ S(ω)

The autocorrelation function R_x(t) for power signals is

R_x(t) = lim_{T→∞} (1/2T) ∫_{−T}^{T} x(τ) x∗(τ − t) dτ = lim_{T→∞} (1/2T) ∫_{−T}^{T} x(t + τ) x∗(τ) dτ

We begin by taking the inverse Fourier transform of S(ω)

F⁻¹[S(ω)] = (1/2π) ∫_{-∞}^{∞} [lim_{T→∞} |X_T(ω)|²/(2T)] e^{jωt} dω

Interchanging the order of operations yields

F⁻¹[S(ω)] = lim_{T→∞} (1/2T) ∫_{-∞}^{∞} (1/2π) X_T(ω) X∗_T(ω) e^{jωt} dω
          = lim_{T→∞} (1/2T) ∫_{-∞}^{∞} (1/2π) [∫_{−T}^{T} x(τ) e^{−jωτ} dτ] [∫_{−T}^{T} x∗(μ) e^{jωμ} dμ] e^{jωt} dω
          = lim_{T→∞} (1/2T) ∫_{−T}^{T} x(τ) ∫_{−T}^{T} x∗(μ) [(1/2π) ∫_{-∞}^{∞} e^{jω(t−τ+μ)} dω] dμ dτ

The bracketed term equals δ(t − τ + μ); by sifting, the μ-integral yields x∗(μ)|_{μ=τ−t}, so

F⁻¹[S(ω)] = lim_{T→∞} (1/2T) ∫_{−T}^{T} x(τ) x∗(τ − t) dτ = R_x(t)

We now have another method to find the power spectrum: first determine the autocorrelation function and then take its Fourier transform.


Chapter 6

Applications of The Fourier Transform

The continuous time Fourier transform is a very important tool that has numerous applications in communication systems, signal processing, control systems, and many other engineering disciplines. In this chapter, we discuss some of these applications, including linear filtering, modulation, and sampling.

6.1 Signal Filtering

Filtering is the process by which the essential and useful part of a signal is separated from undesirable components. The idea of filtering using LTI systems is based on the convolution property of the Fourier transform. We will analyze systems called filters which are designed to have a certain frequency response, and we will define the term ideal filter. Since frequency response is so important in the analysis of systems, we discuss it in more detail next.

6.1.1 Frequency Response

Every LTI system has an impulse response h(t) and, through the Fourier transform, also a frequency response H(ω). Since LTI systems are completely defined by convolution, the convolution property is essential for understanding LTI systems, as well as for simplifying their analysis. Figure 6.1 depicts the time domain and frequency domain representation of an LTI system. A cascade connection of two LTI systems can be combined and simplified into a single equivalent system as shown in Figure 6.2. The equivalent frequency response of a cascade of LTI systems is H(ω) = H₁(ω)H₂(ω). In a parallel connection of LTI systems the

Figure 6.1: LTI system depicted in both the time and frequency domain.


Figure 6.2: Frequency domain representation of cascade connection of linear time invariant systems.

equivalent frequency response is H(ω) = H1(ω)+H2(ω) as shown in Figure 6.3.

Figure 6.3: Frequency domain representation of parallel connection of linear time invariant systems.

Example 6.1 The output of an LTI system is given by

y(t) = ∫_{0}^{∞} e^{−τ} x(t − τ) dτ

Find the inverse system.

� Solution First recognize that the impulse response of the system is h(t) = e^{−t}u(t). Therefore,

H(ω) = 1/(1 + jω)

and

H⁻¹(ω) = 1 + jω

In other words

X(ω) = (1 + jω) Y(ω)

or

x(t) = y(t) + (d/dt) y(t) �
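The inverse system can be checked numerically: convolve a smooth test input with h(t) = e^{−t}u(t), then apply y + dy/dt and confirm the input is recovered up to discretization error. A sketch (the Gaussian test input, grid, and tolerance are illustrative assumptions, not from the notes):

```python
import numpy as np

# Simulate y = h * x with h(t) = e^{-t}u(t), then invert via x = y + dy/dt.
dt = 1e-3
t = np.arange(0.0, 10.0, dt)
x = np.exp(-((t - 4.0)/0.5)**2)        # smooth, hypothetical test input
h = np.exp(-t)                          # impulse response e^{-t}u(t)

y = np.convolve(x, h)[:len(t)] * dt     # causal part of the convolution
x_rec = y + np.gradient(y, dt)          # inverse system: x = y + dy/dt

# Recovery is exact up to the discretization error of the Riemann sum.
assert np.max(np.abs(x_rec - x)) < 1e-2
```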


Example 6.2 Find the frequency response of the LTI system shown in Figure 6.4.

Figure 6.4: Example of cascade/parallel system and delay.

� Solution First, the impulse response of the parallel part is h₁(t) = δ(t) − δ(t − T). The frequency response of the parallel part is

H₁(ω) = 1 − e^{−jωT}

and the frequency response of the integrator is

H₂(ω) = πδ(ω) + 1/(jω)

Therefore, the frequency response of the overall system in Figure 6.4 is the product

H(ω) = H₁(ω)H₂(ω)
     = (1 − e^{−jωT})(πδ(ω) + 1/(jω))
     = π(1 − e^{−jωT})δ(ω) + (1 − e^{−jωT})(1/(jω))

The first term is zero, since (1 − e^{−jωT}) vanishes at ω = 0. Hence

H(ω) = [(e^{jωT/2} − e^{−jωT/2})/(jω)] e^{−jωT/2}
     = [sin(ωT/2)/(ω/2)] e^{−jωT/2} �

6.1.2 Ideal Filters

An ideal filter is one that passes certain frequencies without any change and stops the rest. The range of frequencies that pass through is called the passband of the filter, whereas the range of frequencies that do not pass is referred to as the stopband. In the ideal case, |H(ω)| = 1 in the passband, while |H(ω)| = 0 in the stopband. The most common types of filters are the following:

1. A lowpass filter is one that has its passband in the range 0 < |ω| < ωc, where ωc is called the cutoff frequency of the lowpass filter, Figure 6.5a.

2. A highpass filter is one that has its stopband in the range 0 < |ω| < ωc and a passband that extends from ω = ωc to infinity, Figure 6.5b.


3. A bandpass filter has its passband in the range 0 < ω₁ < |ω| < ω₂ < ∞ and stops all other frequencies, Figure 6.5c.

4. A bandstop filter stops frequencies in the range 0 < ω₁ < |ω| < ω₂ < ∞ and passes all other frequencies, Figure 6.5d.

Figure 6.5: Frequency responses of most common type of ideal filters.

The class of filters described previously is referred to as ideal filters. It is important to note that ideal filters are impossible to construct physically. Consider the ideal low pass filter, for instance: its impulse response corresponds to the inverse Fourier transform of the frequency response shown in Figure 6.5a and is given by

h(t) = (ωc/π) sinc(ωc t/π)

Clearly the impulse response of this ideal filter is not zero for t < 0, as shown in Figure 6.6. Systems such as this are noncausal and hence not physically realizable.


Figure 6.6: The impulse response of an ideal low pass filter.
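The noncausality is easy to see numerically. A sketch evaluating h(t) = (ωc/π) sinc(ωc t/π) on a grid (the cutoff value is an arbitrary illustration; note NumPy's `np.sinc(u)` is sin(πu)/(πu), matching the sinc used here):

```python
import numpy as np

# The ideal low pass filter's impulse response is nonzero for t < 0.
wc = 2*np.pi                          # arbitrary cutoff frequency
t = np.linspace(-5, 5, 1001)
h = (wc/np.pi) * np.sinc(wc*t/np.pi)  # h(t) = sin(wc t)/(pi t)

mid = len(t) // 2                     # index of t = 0
assert np.isclose(h[mid], wc/np.pi)   # peak value wc/pi at t = 0
assert np.max(np.abs(h[t < 0])) > 0.1 # significant response before t = 0
```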

Example 6.3 For the system shown in Figure 6.7, often used to generate communication signals, design an ideal filter assuming that a certain application requires y(t) = 3 cos 1200πt.

Figure 6.7: Generation of communication signals

� Solution The signals x1(t) and x2(t) are multiplied together to give

x3(t) = x1(t)x2(t) = 10 cos 200πt cos 1000πt

Using Euler's identity

x₃(t) = 10 cos 200πt · (e^{j1000πt} + e^{−j1000πt})/2
      = 5 cos 200πt e^{j1000πt} + 5 cos 200πt e^{−j1000πt}

We can use the frequency shifting property together with the Fourier transform of cos ω₀t to find the frequency spectrum of x₃(t) as

X3(ω) = 5π[δ(ω − 200π − 1000π) + δ(ω + 200π − 1000π)]

+ 5π[δ(ω − 200π + 1000π) + δ(ω + 200π + 1000π)]

= 5π[δ(ω − 1200π) + δ(ω − 800π) + δ(ω + 800π) + δ(ω + 1200π)]

The frequency spectra of x₁(t), x₂(t), and x₃(t) are shown in Figure 6.8. The Fourier transform of the required signal y(t) is

Y(ω) = 3π[δ(ω − 1200π) + δ(ω + 1200π)]

It can be seen that this can be obtained from X₃(ω) by a high pass filter whose frequency response H(ω) is shown in Figure 6.8; the filtering process can be written as

Y (ω) = X3(ω)H(ω)

where

H(ω) = 0.6 [1 − rect(ω/2ωc)],  800π < ωc < 1200π �


Figure 6.8: The frequency spectrum of 10 cos 200πt cos 1000πt

Example 6.4 Consider the periodic square wave given in Example 5.7 as the input signal to an ideal low pass filter; sketch the output signal in the time domain if the cutoff frequency is ωc = 8ω₀.

� Solution Recall that in Example 5.7 we showed that the periodic square wave has the Fourier transform depicted in Figure 6.9, which consists of impulses at integer multiples of ω₀. If this signal is the input to an ideal low pass filter with cutoff frequency ωc = 8ω₀, then the Fourier transform of the output contains only the impulses lying within the filter's bandwidth. The frequency response of the low pass filter is drawn on top of the Fourier transform X(ω). The output signal is plotted at the right side of Figure 6.9. �

It is appropriate here to define a word used in the previous example, namely the term bandwidth, a word that is very commonly used in signal analysis.

6.1.3 Bandwidth

The term bandwidth is applied to both signals and filters. It generally means a range of frequencies. This could be the range of frequencies present in a signal or the range of frequencies a filter allows to pass. Usually, only the positive frequencies are used to describe the range of frequencies. For example, the ideal


Figure 6.9: Example of ideal low pass filter.

low pass filter in the previous example with cutoff frequencies of ±8ω₀ is said to have a bandwidth of 8ω₀, even though the width of the filter is 16ω₀. The ideal bandpass filter in Figure 6.5c has a bandwidth of ω₂ − ω₁, which is the width of the region in positive frequency in which the filter passes a signal. There are many different kinds of bandwidths, including the absolute bandwidth, the 3-dB bandwidth or half-power bandwidth, and the null-to-null bandwidth or zero crossing bandwidth.

Absolute Bandwidth

This definition is used in conjunction with band-limited signals. A signal x(t) is called band-limited if its Fourier transform satisfies the condition X(ω) = 0 for |ω| ≥ ωB. Furthermore, with the above condition still satisfied, it is also called a baseband signal if X(ω) is centered at ω = 0. On the other hand, if |X(ω)| = 0 for |ω − ω₀| ≥ ωB and X(ω) is centered at ω₀, it is called a bandpass signal; see Figure 6.10. If x(t) is a baseband signal and |X(ω)| = 0 outside the interval

Figure 6.10: Amplitude spectra for baseband and bandpass signals.


Figure 6.11: Absolute bandwidth of signals.

|ω| < ωB as shown in Figure 6.11, then the absolute bandwidth is

B = ωB

But if x(t) is a bandpass signal and |X(ω)| = 0 outside the interval ω₁ < ω < ω₂, then

B = ω₂ − ω₁

3-dB (Half-Power) Bandwidth

For baseband signals it is defined as the frequency ω₁ (Figure 6.12a) such that

|X(ω₁)| = (1/√2) |X(0)|

Note that inside the band 0 < ω < ω₁, the magnitude |X(ω)| falls no lower than 1/√2 of its value at ω = 0. The term 3-dB bandwidth comes from the relationship

20 log₁₀(1/√2) = −3 dB

For bandpass signals (Figure 6.12b), inside the band ω₁ < ω < ω₂ the magnitude |X(ω)| falls no lower than 1/√2 of its maximum value, and B = ω₂ − ω₁. The 3-dB bandwidth is also known as the half power bandwidth because if the magnitude of a voltage or current is divided by √2, the power delivered to a load by that signal is halved. The bandwidth can also be determined using

|X(ω₁)|² = (1/2) |X(0)|²

Example 6.5 Determine the 3-dB (half power) bandwidth of the signal x(t) = e^{−t/T}u(t).

� Solution The signal x(t) is a baseband signal and has the Fourier transform

X(ω) = 1/((1/T) + jω)

Figure 6.12: 3-dB or half power bandwidth.

The power spectrum shown in Figure 6.13 is given by

|X(ω)|² = 1/((1/T)² + ω²)

Clearly, |X(0)|² = T², and the 3-dB bandwidth is given by

B = 1/T
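The result B = 1/T follows directly by substitution into the power spectrum: at ω = 1/T the denominator doubles. A minimal numerical check (the value of T is an arbitrary illustration):

```python
import numpy as np

# Check of Example 6.5: |X(w)|^2 = 1/((1/T)^2 + w^2) is at half its
# peak value at the 3-dB frequency B = 1/T.
T = 2.0
power = lambda w: 1.0 / ((1/T)**2 + w**2)

assert np.isclose(power(0.0), T**2)       # |X(0)|^2 = T^2
assert np.isclose(power(1/T), T**2 / 2)   # half power at w = 1/T
```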

Null-to-Null (Zero-Crossing) Bandwidth

The null-to-null or zero crossing bandwidth is shown in Figure 6.14. It is defined as the distance between the first null in the frequency spectrum above ωm and

Figure 6.13: Half power bandwidth.


Figure 6.14: Null-to-null and first null bandwidth.

the first null in the spectrum below ωm, where ωm is the frequency at which the spectrum has its maximum magnitude. For baseband signals, the spectrum maximum is at ω = 0 and the bandwidth is the distance between the first null and the origin.

6.1.4 Practical Filters

As we saw earlier, ideal filters do not exist in real life, as they are impossible to build. In practice, we can realize a variety of filter characteristics which can only approach ideal characteristics. An ideal filter makes a sudden transition from the passband to the stopband; there is no transition band. For practical filters, on the other hand, the transition from the passband to the stopband (or vice versa) is gradual, and takes place over a finite band of frequencies, as shown in Figure 6.15.

Example 6.6 Consider the RC circuit shown below.

The impulse response of this circuit is given by

h(t) = (1/RC) e^{−t/RC} u(t)


Figure 6.15: Practical filters.

and the frequency response is

H(ω) = 1/(1 + jωRC)

The magnitude spectrum is shown in Figure 6.16. It is clear that the RC circuit, with the output taken as the voltage across the capacitor, performs as a low pass filter. It is common practice to place the transition between the passband and the stopband at the 3-dB cutoff frequency. Setting |H(ω)| = 1/√2, we obtain

ωc = 1/RC

as shown in Figure 6.16.

Figure 6.16: Magnitude spectrum of lowpass RC circuit.

6.2 Amplitude Modulation

One of the most important applications of the Fourier transform is in the analysis and design of communication systems. The goal of all communication systems is to convey information from one point to another. Prior to sending the information signal through the transmission channel, the information signal is converted to a useful form through what is known as the modulation process. In amplitude modulation, the amplitude of a sinusoidal signal is constantly being modified in proportion to a given signal. This has the effect of simply shifting the spectrum of the given signal up and down by the sinusoid frequency in the frequency domain.

The use of amplitude modulation may be advantageous whenever a shift in the frequency components of a given signal is desired. Consider for example the transmission of a human voice through satellite communications. The maximum voice frequency is 3 kHz; satellite links, on the other hand, operate at much higher frequencies (3-30 GHz). For this form of transmission to be feasible, we clearly need to do two things: shift the essential spectral content of a speech signal to some higher frequency so that it lies inside the assigned frequency range for satellite transmission, and shift it back to its original frequency band on reception. The first operation is simply called modulation, and the second we call demodulation.

We consider a very simple method of modulation called the double-sideband, suppressed-carrier, amplitude modulation (DSB/SC-AM). This type of modulation is accomplished by multiplying the information-carrying signal m(t) (known as the modulating signal) by a sinusoidal signal called the carrier signal, cos ω₀t, where ω₀ is the carrier frequency, as shown in Figure 6.17.

Figure 6.17: Signal multiplier.

The waveforms of Figure 6.18 illustrate the amplitude modulation process in the time domain for a slowly varying m(t).

Figure 6.18: DSB/SC amplitude modulation.

We will now examine the spectrum of the output signal (the modulated signal). As indicated earlier by the frequency-shifting property of the Fourier transform, we have

y(t) = m(t) cos ω0t  ⟷  (1/2)[M(ω − ω0) + M(ω + ω0)] = Y(ω)    (6.1)

Recall that M(ω − ω0) is M(ω) shifted to the right by ω0, and M(ω + ω0) is M(ω) shifted to the left by ω0. Thus, the process of modulation shifts the spectrum of the modulating signal to the left and right by ω0 (Figure 6.19). Note also that if the bandwidth of m(t) is ωB, then, as indicated in Figure 6.19, the bandwidth of the modulated signal is 2ωB. We also observe that the modulated signal spectrum centered at ω0 is composed of two parts: a portion that lies above ω0, known as the upper sideband (USB), and a portion that lies below ω0, known as the lower sideband (LSB); hence the name double sideband (DSB). The name suppressed carrier (SC) comes from the fact that the DSB/SC spectrum does not contain a component at the carrier frequency ω0. In other words, there is no impulse at the carrier frequency in the spectrum of the modulated signal. The relationship of ωB to ω0 is of interest. Figure 6.19 shows that ω0 ≥ ωB is required in order to avoid overlap of the spectra centered at ±ω0. If ω0 < ωB, the spectra overlap and the information of m(t) is lost in the process of modulation, a loss which makes it impossible to recover m(t) from the modulated signal m(t) cos ω0t.

Figure 6.19: DSB/SC amplitude modulation in the frequency domain.
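The sideband picture can be sketched numerically. In this hedged example the message is a single 10 Hz tone and the carrier is 100 Hz (all frequencies and the sample rate are assumed values, not from the notes); the FFT of the product shows energy only at f0 ± fm, with nothing at the carrier itself:

```python
import numpy as np

fs = 1000.0                         # sample rate (Hz), assumed for this sketch
t = np.arange(0, 1.0, 1/fs)
fm, f0 = 10.0, 100.0                # message tone and carrier (assumed values)
m = np.cos(2*np.pi*fm*t)            # one-tone stand-in for m(t)
y = m * np.cos(2*np.pi*f0*t)        # DSB/SC modulated signal

Y = np.abs(np.fft.rfft(y))
freqs = np.fft.rfftfreq(len(t), 1/fs)

# Spectral peaks sit at f0 - fm (LSB) and f0 + fm (USB); none at f0 itself
peaks = freqs[Y > Y.max() / 2]
print(peaks)                        # [ 90. 110.]
```

The absence of a peak at 100 Hz is the "suppressed carrier" property; the two peaks at 90 and 110 Hz are the lower and upper sidebands.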

To extract the information signal m(t) from the modulated signal, the modulation process must be reversed at the receiving end. This process is called demodulation. In effect, demodulation shifts the message spectrum back to its original low-frequency position. This can be done by multiplying the modulated signal again by cos ω0t (generated by a so-called local oscillator) and then filtering the result, as shown in Figure 6.20. The local oscillator is tuned to produce a sinusoid at the same frequency as the carrier; this demodulation technique is known as synchronous detection.

Figure 6.20: Demodulation of DSB/SC.

To see how the system of Figure 6.20 works, note that z(t) = y(t) cos ω0t. It follows that Z(ω) is

Z(ω) = (1/2)Y(ω − ω0) + (1/2)Y(ω + ω0)    (6.2)

Substituting (6.1) for Y(ω) in (6.2) shows that Z(ω), illustrated in Figure 6.21, contains three copies of M(ω):

Z(ω) = (1/2)[(1/2)M(ω − ω0 − ω0) + (1/2)M(ω − ω0 + ω0)] + (1/2)[(1/2)M(ω + ω0 − ω0) + (1/2)M(ω + ω0 + ω0)]
     = (1/4)M(ω − 2ω0) + (1/2)M(ω) + (1/4)M(ω + 2ω0)

Note that we obtained the desired baseband spectrum, (1/2)M(ω), in Z(ω), in addition to unwanted spectral content at ±2ω0.

Figure 6.21: Demodulation process.

If the condition ω0 ≥ ωB is satisfied, it is possible to extract M(ω) with the ideal lowpass filter shown in Figure 6.21, of the form

H(ω) = { G,  |ω| < ωc
       { 0,  otherwise

where the gain of the lowpass filter should be G = 2 and the cutoff frequency should satisfy ωB < ωc < 2ω0 − ωB.
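The whole synchronous-detection chain can be sketched end-to-end. In this hedged example the brickwall lowpass filter is applied in the frequency domain with gain G = 2; the one-tone message, carrier frequency, and cutoff are all assumed for illustration:

```python
import numpy as np

fs = 1000.0                           # sample rate (Hz), assumed
t = np.arange(0, 1.0, 1/fs)
fm, f0 = 10.0, 100.0                  # message tone and carrier (assumed)
m = np.cos(2*np.pi*fm*t)              # one-tone stand-in for m(t)
y = m * np.cos(2*np.pi*f0*t)          # DSB/SC signal
z = y * np.cos(2*np.pi*f0*t)          # multiply by the local-oscillator output

# Ideal lowpass with gain G = 2 and cutoff fc chosen so fm < fc < 2*f0 - fm
fc = 50.0
Z = np.fft.rfft(z)
freqs = np.fft.rfftfreq(len(t), 1/fs)
m_hat = np.fft.irfft(2 * Z * (freqs < fc), n=len(t))

print(np.max(np.abs(m_hat - m)))      # ≈ 0: the message is recovered
```

The filter removes the copies of the spectrum at ±2ω0, and the gain of 2 compensates for the factor 1/2 multiplying M(ω) in Z(ω).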


6.3 Sampling

Sampling a signal is the process of acquiring its values only at discrete points in time. The main reason we acquire signals in this way is that most signal processing and analysis today is done using digital computers. A digital computer requires that all information it processes be in the form of numbers; therefore, the samples are acquired and stored as numbers. Since the memory and mass-storage capacity of a computer are finite, it can only handle a finite number of numbers. Therefore, if a digital computer is to be used to analyze a signal, the signal can only be sampled for a finite time. The questions that arise here are: To what extent do the samples accurately describe the signal from which they are taken? How can all the signal information be stored in a finite number of samples? How much information is lost, if any, by sampling the signal?

6.3.1 The Sampling Theorem

If we are to obtain a set of samples from a continuous function of time x(t), the most important question is how to sample the signal so as to retain the information it carries. The amazingly simple answer is given by the Shannon sampling theorem, which states that a bandlimited signal x(t) can be reconstructed exactly from its samples, provided the sampling rate ωs = 2π/T is at least as large as 2ωB, twice the highest frequency present in the bandlimited signal x(t). The minimum rate 2ωB is known as the Nyquist rate.

In practice, sampling is most commonly done with two devices, the sample-and-hold (S/H) and the analog-to-digital converter (ADC). The input signal x(t) is sampled at the sampling instants nT to obtain the samples x(nT), where n is an integer. The sampled signal may be mathematically represented as the product of the original continuous-time signal and an impulse train, as shown in Figure 6.22. This representation is commonly termed impulse sampling. The result is a sampled signal xs(t) that consists of impulses spaced every T seconds (the sampling interval). The validity of the sampling theorem can be demonstrated using either the modulation property or the frequency-convolution property of the Fourier transform.

Figure 6.22: The ideal sampling process.

As can be seen from Figure 6.22, the sampled signal is the product (modulation) of the continuous-time signal x(t) and the impulse train δT(t); hence this is usually referred to as the impulse modulation model of the sampling operation. To obtain a frequency-domain representation of the sampled signal, we begin by deriving an expression for the Fourier transform of xs(t). To do this, we represent the periodic impulse train by its Fourier transform. It follows that

F[δT(t)] = (2π/T) ∑_{n=−∞}^{∞} δ(ω − nωs) = ωs ∑_{n=−∞}^{∞} δ(ω − nωs)    (6.3)

where ωs = 2π/T is the sampling frequency in radians/second. The sampling frequency in hertz is given by fs = 1/T; therefore, ωs = 2πfs. Now,

Xs(ω) = (1/2π) X(ω) ∗ [ωs ∑_{n=−∞}^{∞} δ(ω − nωs)] = (1/T) ∑_{n=−∞}^{∞} X(ω − nωs)    (6.4)
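Equation (6.4) is a Poisson-summation identity, and it can be spot-checked numerically. In this sketch x(t) is a Gaussian pulse, an assumed test signal whose transform X(ω) = √(2π) e^(−ω²/2) is known in closed form; the left side is Xs(ω) computed directly from the samples, the right side is the replicated sum (1/T) ∑ X(ω − kωs):

```python
import numpy as np

T = 0.5                        # sampling interval (assumed)
ws = 2*np.pi/T                 # sampling frequency, rad/s

def X(w):
    # Closed-form Fourier transform of the Gaussian x(t) = exp(-t^2/2)
    return np.sqrt(2*np.pi) * np.exp(-w**2 / 2)

w = 1.0                        # frequency at which to test the identity
n = np.arange(-200, 201)       # truncated stand-in for the infinite sums
lhs = np.sum(np.exp(-(n*T)**2 / 2) * np.exp(-1j*w*n*T))   # Xs(ω) from samples
k = np.arange(-50, 51)
rhs = np.sum(X(w - k*ws)) / T                             # (1/T) Σ X(ω - kωs)

print(abs(lhs - rhs))          # ≈ 0
```

Because the Gaussian decays rapidly in both domains, the truncated sums already agree to machine precision.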

Figure 6.23a shows a typical bandlimited Fourier transform X(ω). From (6.4), we see that the effect of sampling x(t) is to replicate the frequency spectrum X(ω) about the frequencies nωs, n = ±1, ±2, ±3, .... This result is illustrated in Figure 6.23b. Note that Figure 6.23b was constructed under the assumption that ωs > 2ωB, so the shifted copies of X(ω) do not overlap.

Figure 6.23: (a) Magnitude spectrum of a bandlimited signal. (b) The frequency spectrum of a bandlimited signal which has been sampled at ωs > 2ωB.

Figure 6.24: Impulse modulation followed by signal reconstruction.

To recover the original signal x(t), we pass xs(t) through a lowpass filter with frequency response

H(ω) = { T,  |ω| ≤ ωB
       { 0,  otherwise

as shown in Figure 6.24. For there to be no overlap between the different components, it is clear that the sampling rate should be such that

ωs − ωB ≥ ωB

as illustrated in Figure 6.23b. Thus, the signal x(t) can be recovered from its samples only if

ωs ≥ 2ωB

This is the sampling theorem that we stated earlier. The maximum time spacing between samples is

T = π/ωB

If T does not satisfy this condition, the different components of Xs(ω) overlap and we will not be able to recover x(t) exactly. This is referred to as aliasing. If x(t) is not bandlimited, there will always be aliasing irrespective of the chosen sampling rate. Figure 6.25 shows the magnitude spectrum of a bandlimited signal which has been impulse-sampled at twice its highest frequency. If the sampling rate were lower than 2ωB, the components of Xs(ω) would overlap and no filter could recover the original signal directly from the sampled signal.

Figure 6.25: Magnitude spectrum of a bandlimited signal which has been sampled at twice its highest frequency.
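Aliasing is easy to exhibit with samples of a pure tone. In this hedged sketch a 700 Hz tone is sampled at 1 kHz, below its Nyquist rate of 1.4 kHz (all frequencies assumed for illustration); its samples are indistinguishable from those of a 300 Hz tone:

```python
import numpy as np

fs = 1000.0                        # sampling rate (Hz), below the Nyquist rate
f_sig = 700.0                      # tone frequency; its Nyquist rate is 1400 Hz
n = np.arange(32)                  # sample indices

x = np.cos(2*np.pi*f_sig*n/fs)            # samples of the 700 Hz tone
alias = np.cos(2*np.pi*(fs - f_sig)*n/fs) # samples of a 300 Hz tone

# Identical sample sets: once aliasing has occurred, no filter can
# distinguish the 700 Hz tone from the 300 Hz tone.
print(np.max(np.abs(x - alias)))   # ≈ 0
```

This is the time-domain counterpart of the overlapping spectral copies in Xs(ω): the 700 Hz component lands on top of the 300 Hz position after replication about fs.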


Example 6.7  Consider the signal x(t) given by

x(t) = 1/(3π) + (1/(3π)) cos(πt + π/2)

Demonstrate how sampling and reconstruction are performed.

Solution  The Fourier transform of x(t) is

X(ω) = (2/3)δ(ω) + (j/3)δ(ω − π) − (j/3)δ(ω + π)

The highest frequency in this signal is π rad/sec, so any sampling rate such that ωs ≥ 2π will guarantee that we can exactly reconstruct the signal x(t) from its samples. Figure 6.26a shows X(ω), and Figure 6.26b shows Xs(ω) for ωs = 6π. The light-coloured box in Figure 6.26b is the frequency response of the ideal lowpass filter H(ω), whose cutoff frequency is 3π rad/sec. The output of the ideal lowpass filter will be exactly equal to the original signal, as required. ∎

Figure 6.26: (a) Fourier transform of the signal x(t). (b) Fourier transform of the sampled signal xs(t).
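The reconstruction in this example can also be carried out in the time domain with the Shannon interpolation formula x(t) = ∑ x(nT) sinc((t − nT)/T). A sketch, with the caveat that the infinite sum must be truncated (the truncation limits and test points below are assumed), so recovery is only approximate:

```python
import numpy as np

ws = 6*np.pi                 # sampling rate used in the example, rad/s
T = 2*np.pi/ws               # sampling interval, T = 1/3 s

def x(t):
    # The signal of Example 6.7
    return 1/(3*np.pi) + np.cos(np.pi*t + np.pi/2)/(3*np.pi)

n = np.arange(-3000, 3001)   # truncated stand-in for the infinite sample set
samples = x(n*T)

def reconstruct(t):
    # Shannon interpolation: x(t) = sum_n x(nT) sinc((t - nT)/T)
    # (np.sinc is the normalized sinc, sin(pi u)/(pi u))
    return np.sum(samples * np.sinc((t - n*T)/T))

for t0 in (0.1, 0.25, 0.4):
    print(abs(reconstruct(t0) - x(t0)))   # small truncation error at each point
```

The sinc interpolation is exactly the time-domain action of the ideal lowpass filter of Figure 6.26b, so as more samples are included the error shrinks toward zero.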