
Bauhaus-University Weimar
Institute for Mathematics/Physics

Supporting Material for the Lecture

Signal Analysis (Mathematical Fundamentals of Signal Processing)

Supplemented by a repetition of some basics of Linear Algebra and Analysis

and by some basics of Functional Analysis

for

Natural Hazards and Risks in Structural Engineering (NHRE)

by

Klaus Markwardt

Professur Angewandte Mathematik
Institut für Mathematik/Physik
Bauhaus-Universität Weimar
Coudraystr. 13 B, Zimmer 202
99421 Weimar
Homepage: http://webuser.uni-weimar.de/~markward/


Contents

1 Notations

2 Repetition - Some Basics of Linear Algebra and Analysis
  2.1 Complex numbers
  2.2 Differential Calculus for real Functions
  2.3 Integral Calculus for real Functions
    2.3.1 Rules of Integration
    2.3.2 Basics in numerical Integration
    2.3.3 Improper integrals
    2.3.4 Gaussian probability density functions

3 Repetition - Real and complex Fourier series
  3.1 Trigonometric polynomials and discrete frequency spectra
  3.2 Real amplitude-phase representations of trigonometric polynomials
  3.3 Complex representation of real trigonometric polynomials
  3.4 Complex amplitude-phase representations of trigonometric polynomials
  3.5 Fourier series
  3.6 Fourier series of even and odd periodic functions
  3.7 Complex representation of Fourier series
  3.8 Approximation of a rectangular oscillation by trigonometric polynomials
  3.9 Piecewise continuous functions
  3.10 Pointwise and uniform convergence of Fourier series
  3.11 Smoothness of signals and asymptotic behavior of their Fourier coefficients
  3.12 Exercises

4 Basics of Functional Analysis
  4.1 Linear Spaces
  4.2 Linear operators and linear functionals
  4.3 Normed spaces
  4.4 Real inner product spaces
  4.5 Application to real Fourier series
  4.6 Complex inner product spaces
  4.7 Application to complex Fourier series
  4.8 Metric spaces

5 Fourier transform
  5.1 Introduction
  5.2 Relations between Fourier transform and complex Fourier coefficients
  5.3 Amplitude and Phase Spectra
  5.4 Basic properties and examples of the Fourier transform
  5.5 Scalar products and signal energy considerations
  5.6 The convolution of functions
  5.7 Translation, dilation and differentiation of signals
  5.8 Cross-correlation and autocorrelation of signals

6 Important Fourier Transformation Pairs

7 Discrete Fourier Transform (DFT)
  7.1 Introduction
  7.2 Properties of the Discrete Fourier Transform
  7.3 Some basic hints
  7.4 Application of the DFT for general sampling periods

8 Lecture Graphs - Part 1

9 Lecture Graphs - Part 2

10 Lecture Graphs - Part 3

Literature Hints

Fourier and wavelet analysis
Books in English: [1], [17], [15], [8]
Books in German: [14], [6], [7], [19]

Wavelet analysis
Books in English: [18], [9], [3], [4]
Later studies: [5], [10], [11], [16]
Books in German: [2], [12]

1 Notations

C Set of complex numbers

N Set of natural numbers, N = {1, 2, · · ·}

N₀ N₀ = {0, 1, 2, · · ·}

R Set of real numbers

Z Set of whole numbers (integers), Z = {· · · , −2, −1, 0, 1, 2, · · ·}

Ck(R) space of continuous functions on R that have continuous derivatives up to order k

Ck(I) space of continuous functions on an interval I that have continuous derivatives up to order k

L^p(R) f ∈ L^p(R) if ∫_{−∞}^{∞} |f(t)|^p dt exists, p ∈ N

M_n(f) M_n(f) = ∫_{−∞}^{∞} t^n f(t) dt, moment of order n (continuous moment of a real function)

supp(f) support of a function f

δ(t) Dirac delta distribution, Dirac impulse

δlk Kronecker delta

F(f), f̂  F(f)(ω) = f̂(ω) = ∫_{−∞}^{∞} f(t) e^{−iωt} dt, Fourier transform

ω Angular frequency : ω = 2πξ

ξ Frequency

f (n) n-th derivative of a real or complex valued function

L(f) L(f)(z) = ∫_0^{∞} f(t) e^{−zt} dt, z ∈ C, Laplace transform

χA or 1A characteristic function (indicator function) of a set A

Re(f), Im(f) real and imaginary part of f

〈f, g〉  〈f, g〉 = ∫_{−∞}^{∞} f(t) g(t) dt, inner product or scalar product in L²(R); also denotes the scalar products in the vector spaces Rn and Cn

‖f‖ = ‖f‖₂  ‖f‖ = √〈f, f〉, norm in Rn and Cn and in L²(R)

⌊c⌋ floor of a real number c

⌈c⌉ ceiling of a real number c

⊂ relation mark for proper inclusion (proper containment) in set theory

⊆ relation mark for inclusion (containment) in set theory

A ⊄ B set A is not contained in set B

Vector Sets

Rn linear space of n-dimensional vectors with real components
Cn linear space of n-dimensional vectors with complex components
Zn set of n-dimensional vectors with whole-numbered components

u, v, w vectors

Set of Matrices

(n,m)-matrix a matrix with n rows and m columns

R(n,m) set of (n,m)-matrices with real entries

C(n,m) set of (n,m)-matrices with complex entries

Z(n,m) set of (n,m)-matrices with whole-numbered entries

A, B, C matrices

aij entry in row i and column j of matrix A

det(A) determinant of the matrix A

Other notions

∀ u ∈ Cn for all n-dimensional vectors with complex entries

A ∈ R(n,m) A is a real (n,m)-matrix


Lower case Greek alphabet

name character name character name character

alpha α      iota ι        rho ρ
beta β       kappa κ       sigma σ
gamma γ      lambda λ      tau τ
delta δ      mu µ          upsilon υ
epsilon ε    nu ν          phi φ
zeta ζ       xi ξ          chi χ
eta η        omicron o     psi ψ
theta θ      pi π          omega ω


2 Repetition - Some Basics of Linear Algebra and Analysis

2.1 Complex numbers

http://en.wikipedia.org/wiki/Complex_number

http://www.mathematics-online.org/kurse/kurs9/

Imaginary unit i:  i² = −1

Cartesian or algebraic representation of complex numbers:

z = a + b i,   a, b ∈ R

Properties 2.1. (Arithmetic operations)

Addition:
(a + b i) + (c + d i) = (a + c) + (b + d) i

Multiplication:
(a + b i) · (c + d i) = (ac − bd) + (ad + bc) · i

Subtraction:
(a + b i) − (c + d i) = (a − c) + (b − d) i

Division:
(a + b i)/(c + d i) = (a + b i)(c − d i) / ((c + d i)(c − d i)) = (ac + bd)/(c² + d²) + ((bc − ad)/(c² + d²)) i

Examples 2.2.

(3 + 2i) + (5 + 5i) = (3 + 5) + (2 + 5)i = 8 + 7i,

(5 + 5i) − (3 + 2i) = (5 − 3) + (5 − 2)i = 2 + 3i,

(2 + 5i) · (3 + 7i) = (2 · 3 − 5 · 7) + (2 · 7 + 5 · 3)i = −29 + 29i,

(2 + 5i)/(3 + 7i) = (2 + 5i)/(3 + 7i) · (3 − 7i)/(3 − 7i) = ((6 + 35) + (15i − 14i)) / ((9 + 49) + (21i − 21i)) = (41 + i)/58 = 41/58 + (1/58) i
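The rules of Properties 2.1 and the worked examples above can be checked with Python's built-in complex type; the following short sketch is an illustration and not part of the original notes.

```python
# Check the arithmetic rules of Properties 2.1 and the worked Examples 2.2
# using Python's built-in complex numbers (1j is the imaginary unit i).

def divide(a, b, c, d):
    """Divide (a + bi) / (c + di) via the conjugate formula from Properties 2.1."""
    denom = c**2 + d**2
    return complex((a*c + b*d) / denom, (b*c - a*d) / denom)

print((3 + 2j) + (5 + 5j))     # (8+7j)
print((5 + 5j) - (3 + 2j))     # (2+3j)
print((2 + 5j) * (3 + 7j))     # (-29+29j)
print((2 + 5j) / (3 + 7j))     # 41/58 + (1/58)i, i.e. about 0.7069 + 0.0172i
print(divide(2, 5, 3, 7))      # same result via the explicit formula
```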

Complex plane


Remark 2.3. (Polar form)

Polar coordinates r and ϕ:  x = r · cos ϕ,  y = r · sin ϕ

=⇒ Representation in polar form:

z = r · (cos ϕ + i · sin ϕ)   (2.1)

with absolute value (modulus)

r = |z| = √(x² + y²)

and argument ϕ

ϕ = arg(z) =
  arctan(y/x)          for x > 0
  arctan(y/x) + π      for x < 0, y ≥ 0
  arctan(y/x) − π      for x < 0, y < 0
  π/2                  for x = 0, y > 0
  −π/2                 for x = 0, y < 0        (2.2)

The argument ϕ can also be calculated by

ϕ = arg(z) =
  arccos(x/r)      for y ≥ 0
  −arccos(x/r)     for y < 0        (2.3)

which is undefined for r = 0. Also the following formula can be used:

ϕ = arg(z) =
  2 arctan( y / (r + x) )    for r + x > 0
  π                          for r + x = 0        (2.4)

For the above defined argument ϕ the standard interval is given by

−π < ϕ ≤ π.

Other variants modulo 2π are possible for arg(z) (ambiguity of the representation), but our standard is the one mostly used in signal analysis.

With Euler's formula

e^{iϕ} = cos(ϕ) + i sin(ϕ)   (2.5)

one gets (cp. http://en.wikipedia.org/wiki/Euler%27s_formula)

z = r · e^{iϕ}.   (2.6)
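As a small sketch (not from the notes), the modulus and argument formulas above can be checked against Python's cmath reference functions; the test values are arbitrary.

```python
# Compute r = |z| and the argument phi of z = x + iy with formula (2.4)
# and compare against cmath.polar.
import cmath, math

def polar_form(x, y):
    r = math.hypot(x, y)                      # r = sqrt(x^2 + y^2)
    if r + x > 0:
        phi = 2 * math.atan(y / (r + x))      # formula (2.4)
    elif r == 0:
        raise ValueError("argument undefined for z = 0")
    else:
        phi = math.pi                         # r + x = 0, i.e. the negative real axis
    return r, phi

for x, y in [(1.0, 1.0), (-2.0, 0.5), (0.0, -3.0), (-1.0, 0.0)]:
    print(polar_form(x, y), cmath.polar(complex(x, y)))   # the two pairs agree
```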


Euler’s formula

Conjugate complex number

Compare http://en.wikipedia.org/wiki/Exponential_function

Exponential function with complex argument:

e^z = e^{x+iy} = e^x e^{iy} = e^x [ cos(y) + i sin(y) ]   (2.7)

Its absolute value is |e^z| = e^x.

One can prove that

|z₁ · z₂| = |z₁| · |z₂|,   |z₁ / z₂| = |z₁| / |z₂|,   z₂ ≠ 0

Remark 2.4. Verification of Euler's formula with the Taylor series for e^x, cos(x) and sin(x):
http://en.wikipedia.org/wiki/Taylor_series


Remark 2.5. From Euler's formula follows de Moivre's formula (cp. http://en.wikipedia.org/wiki/De_Moivre%27s_formula)

cos(nα) + i · sin(nα) = (cos α + i · sin α)^n,   n ∈ N   (2.8)

De Moivre's formula =⇒ formulas for cosine and sine: angle sum and difference identities, double-, triple-, and half-angle formulae (cp. http://en.wikipedia.org/wiki/List_of_trigonometric_identities)

sin(x + y) = sin(x) cos(y) + sin(y) cos(x)

cos(x + y) = cos(x) cos(y) − sin(x) sin(y)

sin(2x) = 2 sin(x) cos(x)

cos(2x) = cos²(x) − sin²(x)

sin(nx) = n sin(x) cos^{n−1}(x) − \binom{n}{3} sin³(x) cos^{n−3}(x) + \binom{n}{5} sin⁵(x) cos^{n−5}(x) − + · · ·
        = Σ_{i=1}^{⌈n/2⌉} (−1)^{i+1} \binom{n}{2i−1} sin^{2i−1}(x) cos^{n−2i+1}(x)

cos(nx) = cos^n(x) − \binom{n}{2} sin²(x) cos^{n−2}(x) + \binom{n}{4} sin⁴(x) cos^{n−4}(x) − + · · ·
        = Σ_{i=0}^{⌊n/2⌋} (−1)^{i} \binom{n}{2i} sin^{2i}(x) cos^{n−2i}(x)

De Moivre's formula can be used to find the n-th roots of a complex number (cp. http://en.wikipedia.org/wiki/Nth_root, parts: Roots of unity and Complex roots).

With a = |a| e^{iα} and 0 ≤ α < 2π the solutions of z^n − a = 0 can be represented by

z_k = |a|^{1/n} · exp( iα/n + k · 2πi/n )   with k = 0, 1, . . . , n − 1.   (2.9)

Special case a = 1 =⇒ n roots of unity
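The root formula (2.9) is easy to check numerically; the following sketch (not from the notes) uses the complex number from the lecture figure below as test value.

```python
# n-th roots of a complex number via formula (2.9),
# z_k = |a|^(1/n) * exp(i*alpha/n + i*2*pi*k/n), checked by raising back to the n-th power.
import cmath, math

def nth_roots(a, n):
    r, alpha = cmath.polar(a)            # a = |a| e^{i alpha}
    return [r**(1.0/n) * cmath.exp(1j*(alpha/n + 2*math.pi*k/n)) for k in range(n)]

a = 1 + 1j*math.sqrt(3)
for z_k in nth_roots(a, 5):
    print(z_k, abs(z_k**5 - a) < 1e-12)  # every root satisfies z_k^5 = a
```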

Remark 2.6. Derivatives and integrals of complex valued functions f : R −→ C
Real domain of definition (short: domain) and complex codomain
=⇒ Differentiate or integrate the real and imaginary parts of the function f.


All third roots of unity (find the two errors!)

All fifth roots of the complex number a = 1 + i√3

In analytical modelling the following trigonometric function values are often used.

Formula collection (DVP Mathematik)

Trigonometric functions and hyperbolic functions

Definition

cosh x = (1/2)(e^x + e^{−x}),   sinh x = (1/2)(e^x − e^{−x})

Symmetry

sin(−x) = − sin x,  cos(−x) = cos x,  tan(−x) = − tan x,
sinh(−x) = − sinh x,  cosh(−x) = cosh x,  tanh(−x) = − tanh x

Properties

cos² x + sin² x = 1,   cosh² x − sinh² x = 1,
sin(x + 2π) = sin x,   cos(x + 2π) = cos x,   tan(x + π) = tan x

Function values

x     | 0 | π/6      | π/4      | π/3      | π/2 | 2π/3     | 3π/4      | 5π/6      | π
sin x | 0 | 1/2      | (1/2)√2  | (1/2)√3  | 1   | (1/2)√3  | (1/2)√2   | 1/2       | 0
cos x | 1 | (1/2)√3  | (1/2)√2  | 1/2      | 0   | −1/2     | −(1/2)√2  | −(1/2)√3  | −1
tan x | 0 | √3/3     | 1        | √3       | ±∞  | −√3      | −1        | −√3/3     | 0

Addition theorems

sin(x ± y) = sin x cos y ± cos x sin y            sin 2x = 2 sin x cos x
cos(x ± y) = cos x cos y ∓ sin x sin y            cos 2x = cos² x − sin² x = 1 − 2 sin² x = 2 cos² x − 1
tan(x ± y) = (tan x ± tan y) / (1 ∓ tan x tan y)  tan 2x = 2 tan x / (1 − tan² x)
cosh(x ± y) = cosh x cosh y ± sinh x sinh y       cosh 2x = cosh² x + sinh² x
sinh(x ± y) = sinh x cosh y ± cosh x sinh y       sinh 2x = 2 sinh x cosh x

Complex numbers

• Cartesian form z = x + iy,

• trigonometric form z = r(cos ϕ + i sin ϕ),

• exponential form z = r e^{iϕ}

For z = r e^{iϕ}, the numbers z_k = r^{1/n} e^{i(ϕ/n + k·2π/n)} satisfy z_k^n = z, k = 0, . . . , n − 1.

Moreover Ln z = ln r + iϕ.


2.2 Differential Calculus for real Functions

Compare
http://en.wikipedia.org/wiki/Differential_calculus

http://en.wikipedia.org/wiki/Numerical_differentiation

http://en.wikipedia.org/wiki/List_of_Differentiation_Identities

2.3 Integral Calculus for real Functions

2.3.1 Rules of Integration

Compare
http://en.wikipedia.org/wiki/Antiderivative

http://en.wikipedia.org/wiki/Riemann_integral

http://en.wikipedia.org/wiki/Integral_calculus

http://en.wikipedia.org/wiki/Fundamental_theorem_of_calculus

http://en.wikipedia.org/wiki/Integration_by_parts

http://en.wikipedia.org/wiki/Integration_by_substitution

http://en.wikipedia.org/wiki/Lists_of_integrals

Indefinite integrals

Integration by parts:

∫ f(x) · g′(x) dx = f(x) · g(x) − ∫ f′(x) · g(x) dx   (2.10)

This is obtained from the product rule of differential calculus.

Integration by substitution:

With continuously differentiable u = g(x) it follows that

∫ f(g(x)) · g′(x) dx = ∫ f(u) du

• If the left side is structurally given: calculate the right side and then substitute u = g(x).

• If the right side is structurally given: find a substitution u = g(x) with an existing inverse function x = g⁻¹(u), so that you can calculate

F(x) + C = ∫ f(g(x)) · g′(x) dx

Substituting x = g⁻¹(u) gives the indefinite integral

F̃(u) = F(g⁻¹(u))


Riemann integrals (definite integrals)

If f is continuous on a compact interval I = [a, b] and F is an antiderivative of f, then

∫_a^b f(x) dx = F(b) − F(a)   (2.11)

=⇒ Generalisation of (2.11) for piecewise continuous functions f

Integration by parts:

∫_a^b f(x) · g′(x) dx = [ f(x) · g(x) ]_a^b − ∫_a^b f′(x) · g(x) dx   (2.12)

Integration by substitution:

With u = g(x) and du = g′(x) dx for x ∈ [a, b]:

First rule:   ∫_a^b f(g(x)) · g′(x) dx = ∫_{g(a)}^{g(b)} f(u) du

Second rule:  ∫_α^β f(u) du = ∫_{g⁻¹(α)}^{g⁻¹(β)} f(g(x)) · g′(x) dx   if g⁻¹ exists.

With

α = g(a) and β = g(b),

and

a = g⁻¹(α) and b = g⁻¹(β),

we get a simpler form

∫_a^b f(g(x)) · g′(x) dx = ∫_α^β f(u) du.

2.3.2 Basics in numerical Integration

Compare
http://archives.math.utk.edu/visual.calculus/4/riemann_sums.4/index.html

http://en.wikipedia.org/wiki/Rectangle_method

http://en.wikipedia.org/wiki/Numerical_integration

Rectangle method

Specifically, the interval [a, b] over which the function f is to be integrated is divided into n subintervals I_k = [x_{k−1}, x_k]. If all subintervals I_k are sufficiently small, then we have

∫_a^b f(x) dx ≈ Σ_{k=1}^{n} f(ξ_k) · (x_k − x_{k−1})   with x_{k−1} ≤ ξ_k ≤ x_k,  a = x₀,  b = x_n.

In the case of equidistant decomposition

∆x = x_k − x_{k−1} = const for all k

follows with sufficiently small ∆x

∫_a^b f(x) dx ≈ ∆x Σ_{k=1}^{n} f(ξ_k)   (2.13)

Substituting the left points of the subintervals I_k

ξ_k = x_{k−1}

one gets the left corner approximation

∫_a^b f(x) dx = ∆x Σ_{k=0}^{n−1} f(a + k · ∆x) + E_n^{(ℓ)}(f)   with some error E_n^{(ℓ)}(f)   (2.14)

Substituting the right points of the subintervals

ξ_k = x_k

one gets the right corner approximation

∫_a^b f(x) dx = ∆x · Σ_{k=1}^{n} f(a + k · ∆x) + E_n^{(r)}(f)   with some error E_n^{(r)}(f)   (2.15)

For the errors in (2.15) and (2.14) we get, with

h = ∆x = (b − a)/n,

the estimates

|E_n^{(r)}(f)| ≤ M_{f′} (b − a) h / 2,   |E_n^{(ℓ)}(f)| ≤ M_{f′} (b − a) h / 2   (2.16)

with

M_{f′} = max_{a≤x≤b} |f′(x)|.

M_{f′} can be substituted by an upper bound of |f′(x)| in the given interval.

The composite trapezoidal rule is more accurate. We get it by substituting

f(ξ_k) by (f(x_{k−1}) + f(x_k)) / 2

in (2.13):

∫_a^b f(x) dx = ∆x · ( (1/2) · f(a) + Σ_{k=1}^{n−1} f(a + k · ∆x) + (1/2) · f(b) ) + E^{(n)}(f)   (2.17)

This composite trapezoidal rule can be interpreted as the arithmetic mean of the left corner approximation and the right corner approximation. For the error in (2.17) one gets the estimate

|E(f)| ≤ ((b − a)/12) h² max_{a≤x≤b} |f″(x)|   with h = ∆x = (b − a)/n   (2.18)

or

|E(f)| ≤ ((b − a)/12) M_{f″} h²   with M_{f″} = max_{a≤x≤b} |f″(x)|

M_{f″} can be substituted here by an upper bound of |f″(x)| in [a, b].
Compare http://en.wikipedia.org/wiki/Trapezoidal_rule
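The three quadrature rules (2.14), (2.15) and (2.17) can be compared directly in code. The following sketch is only an illustration; the test integrand f(x) = e^{−x²} on [0, 2] is an arbitrary choice, not taken from the notes.

```python
# Left corner (2.14), right corner (2.15) and composite trapezoidal rule (2.17),
# with the observed errors against a reference value of the integral.
import math

def left_corner(f, a, b, n):
    h = (b - a) / n
    return h * sum(f(a + k*h) for k in range(0, n))        # k = 0 .. n-1

def right_corner(f, a, b, n):
    h = (b - a) / n
    return h * sum(f(a + k*h) for k in range(1, n + 1))     # k = 1 .. n

def trapezoidal(f, a, b, n):
    h = (b - a) / n
    inner = sum(f(a + k*h) for k in range(1, n))
    return h * (0.5*f(a) + inner + 0.5*f(b))                # mean of the two corner rules

f = lambda x: math.exp(-x*x)
a, b, n = 0.0, 2.0, 100
reference = math.sqrt(math.pi) / 2 * math.erf(2.0)          # exact value of the integral

for name, rule in [("left", left_corner), ("right", right_corner), ("trapezoid", trapezoidal)]:
    approx = rule(f, a, b, n)
    print(name, approx, abs(approx - reference))             # trapezoid error is O(h^2), cf. (2.18)
```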


2.3.3 Improper integrals

Compare
http://en.wikipedia.org/wiki/Improper_integral

1. Unbounded integrand on an interval of finite length

(a) Singularity in the right boundary point

∫_a^b f(x) dx := lim_{ε→0} ∫_a^{b−ε} f(x) dx

(b) Singularity in the left boundary point

∫_a^b f(x) dx := lim_{ε→0} ∫_{a+ε}^{b} f(x) dx

(c) Singularity in an inner point c of the integration interval

∫_a^b f(x) dx := lim_{ε→0} ∫_a^{c−ε} f(x) dx + lim_{γ→0} ∫_{c+γ}^{b} f(x) dx,   a < c < b

2. Domain of integration is unbounded

(a) Integration interval of the form [a, +∞)

∫_a^{∞} f(x) dx := lim_{b→+∞} ∫_a^b f(x) dx

(b) Integration interval of the form (−∞, b]

∫_{−∞}^{b} f(x) dx := lim_{a→−∞} ∫_a^b f(x) dx

(c) Integration interval of the form (−∞, +∞)

∫_{−∞}^{∞} f(x) dx := lim_{a→−∞} lim_{b→+∞} ∫_a^b f(x) dx

2.3.4 Gaussian probability density functions

Compare
http://en.wikipedia.org/wiki/Normal_distribution

http://en.wikipedia.org/wiki/Probability_density_function

One of the most used improper integrals is given by

∫_{−∞}^{∞} e^{−x²} dx = √π   (2.19)

It is used in the theory of normal distributions, cp. [13]. The corresponding family of Gaussian probability density functions (short: Gaussian functions) can be written as

f_{µ,σ}(x) := (1 / (σ √(2π))) e^{−(x−µ)² / (2σ²)},   σ > 0,  µ ∈ R   (2.20)

Mean value: µ,   Variance: σ²,   Standard deviation: σ = √Var(X)

The standard deviation is a widely used measure of dispersion. The graph of every probability density function (2.20) is bell-shaped, with a peak at the mean value µ. The special probability density function in (2.20) with µ = 0 and σ = 1 belongs to the standard normal distribution:

f_{0,1}(x) := (1/√(2π)) e^{−x²/2}

From (2.19) one gets

∫_{−∞}^{∞} f_{µ,σ}(x) dx = 1

Furthermore, by

E(X) = (1 / (σ √(2π))) ∫_{−∞}^{+∞} x exp( −(x−µ)² / (2σ²) ) dx = µ   (2.21)

Var(X) = (1 / (σ √(2π))) ∫_{−∞}^{+∞} (x − µ)² exp( −(x−µ)² / (2σ²) ) dx = σ²   (2.22)

fundamental equations of probability theory are given.
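The normalization and the moment formulas (2.21), (2.22) can be confirmed by numerical quadrature; the sketch below (not part of the notes) uses arbitrary illustrative parameters µ and σ.

```python
# Check the normalization of (2.20) and the moments (2.21), (2.22) with scipy.integrate.quad.
import numpy as np
from scipy.integrate import quad

mu, sigma = 1.5, 0.7                      # illustrative parameters

def f(x):
    return 1.0 / (sigma * np.sqrt(2*np.pi)) * np.exp(-(x - mu)**2 / (2*sigma**2))

area, _ = quad(f, -np.inf, np.inf)                                # should be 1
mean, _ = quad(lambda x: x * f(x), -np.inf, np.inf)               # should be mu
var,  _ = quad(lambda x: (x - mean)**2 * f(x), -np.inf, np.inf)   # should be sigma^2

print(area, mean, var)    # ~ 1.0, 1.5, 0.49
```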


Standard normal Distribution

Different Parameters µ and σ


3 Repetition - Real and complex Fourier series

3.1 Trigonometric polynomials and discrete frequency spectra

Compare
http://en.wikipedia.org/wiki/Periodic_function

http://en.wikipedia.org/wiki/Simple_harmonic_motion

http://de.wikipedia.org/wiki/Trigonometrisches_Polynom

(use the German page above, not the corresponding English variant)
http://en.wikipedia.org/wiki/Fourier_series

http://en.wikipedia.org/wiki/Frequency_spectrum

Real trigonometric polynomial of order n (standard form):

Ψ_n(t) = a₀/2 + Σ_{k=1}^{n} [ a_k cos(k ω₁ t) + b_k sin(k ω₁ t) ],   a_k, b_k ∈ R,  ω₁ = 2π/T   (3.1)

Here ω₁ is used instead of ω, because later ω plays the role of the angular frequency variable, whereas ω₁ is in (3.1) a given fixed angular frequency. Likewise ξ later plays the role of the frequency variable.

ω = 2π ξ

In this script ξ is used for the frequency variable, not f and not ν!

Interpretation of (3.1) in the sense of oscillation theory:

• T is a period of Ψ_n, the basic period (a period of all oscillations contained in Ψ_n).

• ω₁ is the angular frequency corresponding to T (basic angular frequency).

• The basic frequency ξ₁ of (3.1) is given by ω₁ = 2π ξ₁.

(3.1) includes oscillations with integer multiples of the basic frequency ξ₁. The corresponding angular frequencies are integer multiples of the basic angular frequency ω₁:

ξ_k = k · ξ₁,   ω_k = k · ω₁,   k = 1, 2, · · · n

Ψ_n has a special frequency spectrum: discrete, bounded and finite,

a(ξ₀)/2 = a(0)/2 = a₀/2,   a(ξ_k) = a_k,  b(ξ_k) = b_k,   k = 1, 2, · · · n   (3.2)

=⇒ Fourier coefficients are plotted versus frequency ξ.

Ψ_n has a special angular frequency spectrum: discrete, bounded and finite,

a(ω₀)/2 = a(0)/2 = a₀/2,   a(ω_k) = a_k,  b(ω_k) = b_k,   k = 1, 2, · · · n   (3.3)

What is the difference to equation (3.2)?
=⇒ Fourier coefficients are plotted versus angular frequency ω.

The coefficient a₀/2 represents the mean value of Ψ_n(t) over every basic period interval [t₀, t₀ + T]. a₀/2 is a special spectral value, belonging to the frequency 0 (ξ₀ = 0, ω₀ = 0).


Example 3.1. In particular for 2π-periodic trigonometric polynomials, i.e. for T = 2π, one gets

a₀/2 + Σ_{k=1}^{n} [ a_k cos(k t) + b_k sin(k t) ]   (3.4)

with

ξ_k = k/(2π),  and  ω_k = k   for k = 0, 1, 2, · · · n

Example 3.2. For 1-periodic trigonometric polynomials, i.e. for T = 1, one gets

a₀/2 + Σ_{k=1}^{n} [ a_k cos(2kπt) + b_k sin(2kπt) ]   (3.5)

with

ξ_k = k,  and  ω_k = 2kπ   for k = 0, 1, 2, · · · n

3.2 Real amplitude-phase representations of trigonometric polynomials

In the engineering literature the real trigonometric polynomial (3.1) is often given in the following real amplitude-phase representation.

Cosine representation:

Ψ_n(t) = a₀/2 + Σ_{k=1}^{n} A_k cos(k ω₁ t − λ_k)   with −π < λ_k ≤ π   (3.6)

From

cos(α − β) = cos β cos α + sin α sin β

you get for every k ∈ N

A_k cos(k ω₁ t − λ_k) = A_k cos(λ_k) cos(k ω₁ t) + A_k sin(λ_k) sin(k ω₁ t).

Compare this with (3.1) and get

a_k = A_k · cos(λ_k),   b_k = A_k · sin(λ_k)   for k = 1, 2, · · ·   (3.7)

If these amplitudes A_k and phases λ_k are given, then the coefficients a_k and b_k in (3.1) are well-defined. If the coefficients a_k and b_k are given, then compare with polar coordinates; look at formulas (2.3) and (2.4).

Set x = a_k, y = b_k, r = A_k, ϕ = λ_k

and get between (3.1) and (3.6) the following formulas

A_k = √(a_k² + b_k²),   k = 1, 2, · · ·   (3.8)

λ_k =
  arccos(a_k / A_k)      if b_k ≥ 0
  −arccos(a_k / A_k)     if b_k < 0
for k = 1, 2, · · · n   (3.9)

or

λ_k =
  2 arctan( b_k / (A_k + a_k) )    if A_k + a_k > 0
  π                                if A_k + a_k = 0
for k = 1, 2, · · · n   (3.10)

If A_k = 0, then λ_k is not defined uniquely.

Setting

A₀ = a₀/2

you get the spectral value A₀, which can also become negative. (Another definition of A₀ connected with ϕ₀ is possible, but not used here.)

The above formulas can be interpreted as relations between different spectra of special time signals: between the spectrum a_k, b_k and the spectrum A_k, λ_k.

Characterisation of the corresponding time signals: they are periodic, real valued and include only finitely many (angular) frequencies. These angular frequencies are whole integer multiples of some basic angular frequency ω₁:

ω_k = k · ω₁

Spectral plot:
=⇒ Amplitudes A_k and phases λ_k are plotted versus angular frequency ω,
or
=⇒ Amplitudes A_k and phases λ_k are plotted versus frequency ξ.
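The conversion between the two spectra of the cosine representation is short in code; the sketch below (not from the notes) implements (3.8) and (3.10) and the inverse relation (3.7) for a single pair of coefficients.

```python
# Convert real Fourier coefficients (a_k, b_k) to the amplitude-phase pair
# (A_k, lambda_k) of the cosine representation (3.6), and back.
import math

def to_amplitude_phase(a_k, b_k):
    A_k = math.hypot(a_k, b_k)                    # (3.8)
    if A_k + a_k > 0:
        lam_k = 2 * math.atan(b_k / (A_k + a_k))  # (3.10)
    elif A_k > 0:
        lam_k = math.pi                           # A_k + a_k = 0
    else:
        lam_k = 0.0                               # A_k = 0: phase not unique, pick 0
    return A_k, lam_k

def to_fourier(A_k, lam_k):
    return A_k * math.cos(lam_k), A_k * math.sin(lam_k)   # (3.7)

A, lam = to_amplitude_phase(-1.0, 2.0)
print(A, lam)              # amplitude and phase in (-pi, pi]
print(to_fourier(A, lam))  # recovers (-1.0, 2.0)
```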

Sine-representation :

Now let’s consider a similar sine-representation of real trigonometric polynomials.

Ψ_n(t) = a₀/2 + Σ_{k=1}^{n} A_k sin(k ω₁ t − µ_k)   with −π < µ_k ≤ π   (3.11)

From

sin(α − β) = cos(β) sin(α) − sin(β) cos(α)

you get for every k

A_k sin(k ω₁ t − µ_k) = A_k cos(µ_k) sin(k ω₁ t) − A_k sin(µ_k) cos(k ω₁ t)

Compare this with (3.1) and get for the real Fourier coefficients

a_k = −A_k sin(µ_k),   b_k = A_k cos(µ_k),   k = 1, 2, · · ·   (3.12)

If the coefficients a_k and b_k are given, then do the following: substitute in (3.7) b_k ↦ −a_k and a_k ↦ b_k and get between (3.1) and (3.11) the formulas

A_k = √(a_k² + b_k²),   k = 1, 2, · · ·   (3.13)

A₀ = a₀/2 is also the same mean value as in the cosine representation.

µ_k =
  arccos(b_k / A_k)      if a_k ≤ 0
  −arccos(b_k / A_k)     if a_k > 0
for k = 1, 2, · · · n   (3.14)

or

µ_k =
  −2 arctan( a_k / (A_k + b_k) )    if A_k + b_k > 0
  π                                 if A_k + b_k = 0
for k = 1, 2, · · · n   (3.15)

If A_k = 0, then µ_k is also not defined uniquely. The real amplitude spectrum is here the same as in the cosine case, but the phase spectrum is changed.

Spectral plot:
=⇒ Amplitudes A_k and phases µ_k are plotted versus angular frequency ω (ω_k = k · ω₁),
or
=⇒ Amplitudes A_k and phases µ_k are plotted versus frequency ξ (ξ_k = k · ξ₁).

Exercise 3.3. How are the above formulas between Fourier coefficients, amplitudes and phases changed if, instead of (3.6) and (3.11), you use

Ψ_n(t) = a₀/2 + Σ_{k=1}^{n} A_k cos(k ω₁ t + λ_k)   with −π < λ_k ≤ π

and

Ψ_n(t) = a₀/2 + Σ_{k=1}^{n} A_k sin(k ω₁ t + µ_k)   with −π < µ_k ≤ π

as bases for real amplitude-phase representations?

3.3 Complex representation of real trigonometric polynomials

As an alternative to (3.1), (3.6) and (3.11), respectively, a complex representation of real trigonometric polynomials is often used:

Ψ_n(t) = Σ_{k=−n}^{n} C_k e^{i k ω₁ t}.   (3.16)

This representation is connected with the discrete Fourier transform (DFT) introduced later. Between the real spectrum of (3.1) and the complex spectrum of (3.16) the relations

2 C₀ = a₀,   a_k = C_k + C_{−k},   b_k = i (C_k − C_{−k}),   k ∈ N

and

C₀ = a₀/2,   C_k = (a_k − i b_k)/2,   C_{−k} = (a_k + i b_k)/2,   k ∈ N   (3.17)

are valid. Show it with Euler's formula and with

cos(kω₁t) = (1/2) ( e^{i kω₁ t} + e^{−i kω₁ t} )   (3.18)

sin(kω₁t) = (1/(2i)) ( e^{i kω₁ t} − e^{−i kω₁ t} )   (3.19)
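As a small consistency check (not from the notes), the relations (3.17) can be used to build the C_k from given real coefficients and to verify that (3.1) and (3.16) describe the same signal; the coefficients below are arbitrary.

```python
# Build C_k from (a_k, b_k) via (3.17) and check that the real form (3.1) and
# the complex form (3.16) of a trigonometric polynomial give the same values.
import numpy as np

T = 2.0
w1 = 2*np.pi / T
a = [0.6, 1.0, -0.5]          # a_0, a_1, a_2  (illustrative coefficients)
b = [0.0, 0.3,  0.8]          # b_0 unused, b_1, b_2

def psi_real(t):
    return a[0]/2 + sum(a[k]*np.cos(k*w1*t) + b[k]*np.sin(k*w1*t) for k in (1, 2))

def psi_complex(t):
    C = {0: a[0]/2}
    for k in (1, 2):
        C[k]  = (a[k] - 1j*b[k]) / 2      # (3.17)
        C[-k] = (a[k] + 1j*b[k]) / 2
    return sum(C[k]*np.exp(1j*k*w1*t) for k in C)

t = np.linspace(0.0, T, 9)
print(np.allclose(psi_real(t), psi_complex(t).real))   # True
print(np.max(np.abs(psi_complex(t).imag)))             # numerically zero for a real polynomial
```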


In general these spectral values C_k become complex, even if the trigonometric polynomial Ψ_n is real-valued. They have a real part Re(C_k) and an imaginary part Im(C_k).

So Ψ_n(t) is connected with a complex angular frequency spectrum: it is discrete, bounded, finite and contains also negative angular frequencies. All contained frequencies are whole multiples of a basic angular frequency ω₁:

ω_k = k · ω₁,  ω_{−k} = −ω_k,  C(ω_k) = C_k,   k = −n, · · · −1, 0, 1, · · · , n   (3.20)

Spectral plot:
=⇒ Real and imaginary part of the spectrum are plotted versus angular frequency ω.

If Ψ_n is real-valued, then the real part Re(C(ω_k)) of the spectrum is an even function of ω with discrete domain, and the imaginary part Im(C(ω_k)) of the spectrum is an odd function of ω with discrete domain.

Of course Ψ_n(t) is also connected with a complex frequency spectrum: it is discrete, bounded, finite and contains also negative frequencies, but only whole multiples of a basic frequency ξ₁.

ξ_k = k · ξ₁,  ξ_{−k} = −ξ_k,  C(ξ_k) = C_k,   k = −n, · · · −1, 0, 1, · · · , n   (3.21)

Spectral plot:
=⇒ Real and imaginary part of the spectrum are plotted versus frequency ξ.

3.4 Complex amplitude-phase representations of trigonometric polynomials

The complex representation (3.16) of the real trigonometric polynomial can be transformed into

Ψ_n(t) = Σ_{k=−n}^{n} |C_k| e^{i(kω₁t + ϕ_k)} = Σ_{k=−n}^{n} |C_k| e^{iϕ_k} e^{ikω₁t}   (3.22)

with

C_k = |C_k| · e^{iϕ_k}

From this representation we get the complex amplitude-phase spectrum by

|C_k| = (1/2) √(a_k² + b_k²)   for k = 0, ±1, ±2, ±3 · · · , if you set b₀ = 0   (3.23)

ϕ_k = arg(C_k)   for k = 0, ±1, ±2, ±3 · · ·   (3.24)

The phases ϕ_k are the arguments of the C_k in the complex plane (Argand plane).

In our case of real valued trigonometric polynomials we get from (3.17)

ϕ_k = ϕ(ω_k) = ϕ(ξ_k) = arg(a_k − i b_k) = −arg(a_k + i b_k)   for k = 1, 2, · · · n

ϕ_{−k} = ϕ(−ω_k) = ϕ(−ξ_k) = arg(a_k + i b_k)   for k = 1, 2, · · · n

C₀ is real, and so for ϕ₀ there are 3 possibilities:
1. ϕ₀ = 0 if C₀ > 0
2. ϕ₀ = π if C₀ < 0
3. ϕ₀ is not uniquely defined for C₀ = 0.

Since Ψ_n(t) is real valued, we get

|C_{−k}| = |C_k| and ϕ_{−k} = −ϕ_k   for k = 1, 2, · · · n

respectively

|C(−ω_k)| = |C(ω_k)| and ϕ(−ω_k) = −ϕ(ω_k)   for k = 1, 2, · · · n

So this discrete amplitude spectrum |C(ω_k)| becomes an even function of ω, and this discrete phase spectrum ϕ(ω_k) becomes an odd function of ω.

Similar to (3.6), (3.9) and (3.10), we now get formulas for calculating the arguments ϕ_k defined in (3.24).

For the negative angular frequencies we get:

ϕ_{−k} = ϕ(−ω_k) =
  arccos( a_k / (2|C_k|) )      if b_k ≥ 0
  −arccos( a_k / (2|C_k|) )     if b_k < 0
for k = 1, 2, · · · n

or

ϕ_{−k} = ϕ(−ω_k) =
  2 arctan( b_k / (2|C_k| + a_k) )    if 2|C_k| + a_k > 0
  π                                   if 2|C_k| + a_k = 0
for k = 1, 2, · · · n

This results in

ϕ_{−k} = λ_k   for k = 1, 2, · · · n.

For the positive angular frequencies we then get

ϕ_k = −ϕ_{−k}  or  ϕ(ω_k) = −ϕ(−ω_k)   for k = 1, 2, · · · n.

3.5 Fourier series

Very general periodic signals can be represented as Fourier series. These are limits of trigonometric polynomials. This means: periodic signals can be decomposed into a sum of simple oscillating functions, namely sines and cosines, or into complex exponentials (possibly infinitely many summands, infinitely many harmonic oscillations). The study of Fourier series is a branch of Fourier analysis and provides essential basics for discrete spectral analysis.


Compare http://en.wikipedia.org/wiki/Square_wave

Examples for periodic signals

A Fourier series arises as a limit of trigonometric polynomials (3.1) for n → ∞. A sufficiently regular function f(t) with period T can be decomposed into a Fourier series

f(t) = a₀/2 + Σ_{k=1}^{∞} [ a_k cos(k ω₁ t) + b_k sin(k ω₁ t) ],   a_k, b_k ∈ R,  ω₁ = 2π/T   (3.25)

if the convergence concept for the right side is determined properly; compare for instance section 3.10. Then the real representations (3.6) and (3.11), the complex representation (3.16) and the complex amplitude-phase representation (3.22) also converge for n → ∞. The relations between the different discrete spectra remain valid, but now the corresponding spectra are countably infinite discrete spectra.


The angular frequency based spectrum of a periodic signal f is connected with its Fourier series (3.25):

a(ω₀)/2 = a₀/2,   a(ω_k) = a_k,  b(ω_k) = b_k,   k ∈ N   (3.26)

with whole multiples of one basic angular frequency ω₁,

ω_k = k · ω₁,   k ∈ N.

a₀/2 is the integral mean value of the signal f.

Similarly the frequency based spectrum:

a(ξ₀)/2 = a₀/2,   a(ξ_k) = a_k,  b(ξ_k) = b_k,   k ∈ N

These discrete spectral values (3.26) are called Fourier coefficients. For real time signals f(t) (real valued functions) the spectrum (3.26) is real, that means such signals have only real Fourier coefficients in (3.25). The Fourier coefficients connected with (3.25) can be calculated by

a_k = (2/T) ∫_{t₀}^{t₀+T} f(t) cos(kω₁t) dt   (3.27)

b_k = (2/T) ∫_{t₀}^{t₀+T} f(t) sin(kω₁t) dt   (3.28)

if the integrals exist (made more precise later). With t₀ any time shift of the periodicity interval can be used (simplification of the calculation!). Especially we get from (3.27)

a₀/2 = (1/T) ∫_{t₀}^{t₀+T} f(t) dt   (3.29)

Compare Example 1 in http://en.wikipedia.org/wiki/Fourier_series

Often used variants of (3.27) and (3.28) are

a_k = (2/T) ∫_0^T f(t) cos(kω₁t) dt   (3.30)

b_k = (2/T) ∫_0^T f(t) sin(kω₁t) dt   (3.31)

or

a_k = (2/T) ∫_{−T/2}^{T/2} f(t) cos(kω₁t) dt   (3.32)

b_k = (2/T) ∫_{−T/2}^{T/2} f(t) sin(kω₁t) dt   (3.33)


Particularly for 2π-periodic signals f(t), that implies T = 2π, the Fourier series (3.25) becomes

a0

2+∞∑

k=1

ak cos(k t) + bk sin(k t) ak, bk ∈ R, ω1 = 1, T = 2π (3.34)

In the formulas (3.27), (3.28), (3.29) (3.30), (3.31) (3.32) and (3.33) T = 2π and ω1 = 1 is toinsert.

For practical examples in generally the Fourier coefficients in (3.25) cannot be calculated in aclosed form. Then this coefficients must be approximated up to a choosen order n. So you get anapproximation of a given periodic oscillation by a trigonometric polynomial of order n. Here n mustbe choosen so large, that all essential oscillation components are contained in the approximation. Inpractice composed oscillations will be measured by piezoelectric accelerometers. To get the essentialspectral values of the measured signals, you can use the discrete Fourier transform (DFT). In manyprograms this is implemented as a fast algorithm, the so called fast Fourier transform (FFT), seeMatlab, Maple, etc. With this things we deal later.
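As a small preview of that numerical route (a sketch, not from the notes), the coefficients of a sampled periodic signal can be estimated with numpy's FFT; the test signal below is an arbitrary trigonometric polynomial, so the recovered values can be compared with the known ones.

```python
# Approximate the Fourier coefficients of a T-periodic signal from N equidistant
# samples over one period; relation to (a_k, b_k) via (3.17).
import numpy as np

T, N = 2.0, 256
w1 = 2*np.pi / T
t = np.arange(N) * T / N
f = 0.3 + 1.5*np.cos(w1*t) - 0.7*np.sin(3*w1*t)

C = np.fft.fft(f) / N           # approximates the complex coefficients C_k of (3.36)
a = 2*C.real                    # a_k = 2 Re(C_k), from (3.17)
b = -2*C.imag                   # b_k = -2 Im(C_k)

print(a[0]/2, a[1], b[3])       # ~ 0.3, 1.5, -0.7
```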

3.6 Fourier-series of even and odd periodic functions

If the periodic time signal f(t) is an odd function, then the a_k are zero. The Fourier series (3.25) becomes

f(t) = Σ_{k=1}^{∞} b_k sin(kω₁t)

This is called a sine series. The sine series becomes zero at t = 0. The derivative of a sine series is a formal cosine series, but take account of its convergence behavior.

If the periodic time signal f(t) is an even function, then the b_k are zero. The Fourier series (3.25) becomes

f(t) = a₀/2 + Σ_{k=1}^{∞} a_k cos(kω₁t)

It is also called a cosine series. The derivative of a cosine series is a formal sine series, but take account of its convergence behavior.
Compare http://en.wikipedia.org/wiki/Even_and_odd_functions

3.7 Complex representation of Fourier series

We get such a Fourier series from the complex trigonometric polynomial (3.16) if we calculate all complex Fourier coefficients C_k and if a well defined limit for n towards infinity exists:

f(t) = Σ_{k=−∞}^{∞} C_k e^{i k ω₁ t}   (3.35)

The coefficients of (3.25) and (3.35) are linked by (3.17). These discrete spectral values can also be calculated directly by

C_k = (1/T) ∫_{t₀}^{t₀+T} f(t) e^{−i k ω₁ t} dt   for arbitrary real t₀,  ω₁ = 2π/T

Compare (3.27), (3.28), (3.29) and the representation formulas following there. Mostly the versions for t₀ = 0 and for t₀ = −T/2 are used.

C_k = C(ω_k) = C(ξ_k) = (1/T) ∫_0^T f(t) e^{−i k ω₁ t} dt   (3.36)

C_k = C(ω_k) = C(ξ_k) = (1/T) ∫_{−T/2}^{T/2} f(t) e^{−i k ω₁ t} dt   (3.37)

3.8 Approximation of a rectangular oscillation by trigonometric polynomials

Integral mean value (of every period)

Trigonometric polynomial of order 1

Trigonometric polynomial of order 3

Trigonometric polynomial of order 5

Trigonometric polynomial of order 9
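The partial sums shown in these figures can be reproduced with a short script. The sketch below is an illustration under the assumption that the rectangular oscillation is the odd 2π-periodic square wave of amplitude 1, whose sine series has coefficients b_k = 4/(πk) for odd k; the notes themselves only show the plots.

```python
# Partial sums of the Fourier series of a square wave, orders 1, 3, 5, 9.
import numpy as np

def square_wave_partial_sum(t, order):
    s = np.zeros_like(t)
    for k in range(1, order + 1, 2):             # only odd k contribute
        s += (4.0 / (np.pi * k)) * np.sin(k * t)
    return s

t = np.linspace(-np.pi, np.pi, 1001)
for n in (1, 3, 5, 9):
    psi = square_wave_partial_sum(t, n)
    # the overshoot above 1 does not die out; it tends to about 1.18
    # (Gibbs phenomenon, cf. Remark 3.12)
    print(n, float(psi.max()))
```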


3.9 Piecewise continuous functions

The following definitions of piecewise continuity and piecewise continuous differentiability are specially connected with the field of Fourier series. In other fields you sometimes find modifications of these definitions.

Definition 3.4. A function f : [α, β] → C is called piecewise continuous on the closed interval [α, β] if it is continuous at all but a finite number of exception points t₁, t₂, · · · t_m, in which the one-sided limits exist.

Remarks :

1. If f : [α, β] → C is continuous, then we have no exception points and the set {t₁, t₂, · · · t_m} is empty. A continuous f will be considered as a special piecewise continuous function.

2. Not every piecewise continuous f is continuous.

3. The function value f(t_k) at an exception point t_k plays no role, because f is not continuous there. So we include in Definition 3.4 the case in which the f(t_k) are not defined. Although the domain D of f in that case is D = [α, β] \ {t₁, t₂, · · · t_m}, we use the notation in Definition 3.4.

4. Definition 3.4 is often used for the special case of real valued functions f : [α, β ]→ R.

The functions (signals) defined in Definition 3.4 are special cases of absolutely integrable functions f on [α, β] (short: f ∈ L¹([α, β])); look at equation (4.16) with p = 1.

If ∫_α^β |f(t)| dt < ∞, then f is called absolutely integrable on [α, β].

The functions in Definition 3.4 are also special cases of quadratically integrable functions on [α, β] (short: f ∈ L²([α, β])); look at the examples 4.19 and 4.29. In signal theory such signals f are called signals with finite energy on [α, β].

If ∫_α^β |f(t)|² dt < ∞, then f is called quadratically integrable on [α, β].

Theorem 3.5. On every bounded interval [α, β] a quadratically integrable function f is also absolutely integrable:

L²([α, β]) ⊂ L¹([α, β])

In general, for functions f : I → C with unbounded domain I this is not true. For

I = (−∞, β],  I = [α, ∞)  and  I = (−∞, ∞)   (3.38)

we get

L²(I) ⊄ L¹(I) and L¹(I) ⊄ L²(I).

Theorem 3.6. All Fourier coefficients, that means all spectral values, exist if the T-periodic signal f is absolutely integrable on one period, for instance absolutely integrable on one of the intervals

[α, β] = [0, T],  [α, β] = [−T, 0]  or  [α, β] = [−T/2, T/2].


Definition 3.7. A piecewise continuous function f : [α, β] → C is called piecewise continuously differentiable on [α, β] if the following two conditions are fulfilled:
1. f is continuously differentiable at all but a finite number of exception points t₁, t₂, · · · t_p.
2. At these exception points the derivatives f′(t_k) do not exist, but all one-sided limits of f′(t) do.

Remarks :

1. Every piecewise continuously differentiable f is also piecewise continuous.

2. If f : [α, β] → C is continuously differentiable, then we have no exception points, and then f is also piecewise continuously differentiable.

3. Not every piecewise continuously differentiable f is continuous.

4. Definition 3.7 is often used for the special case of real valued functions f : [α, β ]→ R.

Definition 3.8. A function f : I → C with unbounded domain I of the type (3.38)

• is called piecewise continuous if it is piecewise continuous on every bounded subinterval [α, β] of I,

• is called piecewise continuously differentiable if it is piecewise continuously differentiable on every bounded subinterval [α, β] of I.

Theorem 3.9. If a T-periodic function f : R → C is piecewise continuous (piecewise continuously differentiable) on one period, then it is piecewise continuous (piecewise continuously differentiable) on the whole domain R.

3.10 Pointwise and uniform convergence of Fourier series

Compare
http://en.wikipedia.org/wiki/Convergence_of_Fourier_series

Theorem 3.10. If f(t) is T-periodic and piecewise continuously differentiable on one period of length T, then the Fourier series is pointwise convergent for all t ∈ R. With ω₁ = 2π/T the limit of the Fourier series is then given by

f(t) = a₀/2 + Σ_{k=1}^{∞} [ a_k cos(k ω₁ t) + b_k sin(k ω₁ t) ]   in every point t of continuity,

(f(t−) + f(t+))/2 = a₀/2 + Σ_{k=1}^{∞} [ a_k cos(k ω₁ t) + b_k sin(k ω₁ t) ]   in every point t of discontinuity.

On every bounded closed interval [α, β] on which in addition f is continuous, the Fourier series converges uniformly to f(t). If f(t) is additionally continuous on R, then the Fourier series converges uniformly to f(t) on the whole domain R.

Remark 3.11. Under the same assumptions you can formulate a similar convergence theorem by using (3.35) and (3.37) or (3.36):

f(t) = Σ_{k=−∞}^{∞} C_k e^{i k ω₁ t}   in every point t of continuity,

(f(t−) + f(t+))/2 = Σ_{k=−∞}^{∞} C_k e^{i k ω₁ t}   in every point t of discontinuity.

Remark 3.12. (Gibbs phenomenon)
Compare http://en.wikipedia.org/wiki/Gibbs_phenomenon

The Gibbs phenomenon involves both the fact that Fourier sums overshoot at a jump discontinuity,and that this overshoot does not die out as the order of the approximating Fourier polynomials(trigonometric polynomials) increases.

Gibbs phenomenon - Overshoot

3.11 Smoothness of signals and asymptotic behavior of their Fourier coefficients

Theorem 3.13. If the periodic signal f(t) has continuous derivatives up to order m and if f^{(m)}(t) is piecewise continuously differentiable, then there exists a constant L > 0 with

|C_k| ≤ L / |k|^{m+1}   for all discrete spectral values C_k.

A similar statement holds for the spectral values a_k, b_k.

3.12 Exercises

Exercise 3.14. The time signal f has the following properties: f(t) = f(t + 2π) for all t ∈ R and

f(t) =
  0    for −π ≤ t ≤ −π/2
  4π   for −π/2 < t < π/2
  0    for π/2 ≤ t < π

1. What is the primitive period of f(t)? Sketch the signal in the interval [−3π, 3π]. Does f(t) have a symmetry property?

2. Determine the real and complex Fourier coefficients a_k, b_k and C_k. Write down the Fourier series.


3. Sketch the corresponding spectra for |ω| ≤ 4. Change the angular frequency scaling to the frequency scaling ξ and adapt the above spectral representations.

4. Calculate and sketch, for the above complex spectrum, the corresponding parts of the amplitude and of the phase spectrum.

Exercise 3.15. Let a T-periodic function f(t) with the convergent Fourier series (3.25) be given.

1. How are the basic angular frequency, the basic frequency, the basic period, and the real and complex Fourier coefficients changed if we replace f(t) by f̃(t) = f(p t) with some p > 0 (time dilation of the signal f)? Use for the new parameters ω̃₁, · · · ã_k, · · · .

2. Take for f(t) the time signal of exercise 3.14 and set p = 2. Sketch 3 periods of f̃(t). Sketch the corresponding spectra for |ω| ≤ 8 and compare these results with the last exercise.

Exercise 3.16. Let a T-periodic function f(t) with the convergent Fourier series (3.25) be given.

1. How are the basic angular frequency, the basic frequency, the basic period, and the real and complex Fourier coefficients changed if we replace f(t) by f̃(t) = f(t − t₀) with some t₀ > 0 (time shifting of the signal f)? Use for the new parameters ω̃₁, · · · ã_k, · · · .

2. Take for f(t) the time signal of exercise 3.14 and set t₀ = π/4. Sketch 3 periods of f̃(t). Sketch the corresponding spectra for |ω| ≤ 8 and compare these results with the last exercise.


4 Basics of Functional Analysis

4.1 Linear Spaces

The notion of a vector space will be generalised in this section. So function spaces and signal spaces (spaces of discrete signals and spaces of signals with continuous domain) can be interpreted as special cases of such a generalized structure. In the following definition consider only the cases
F = R : field of real numbers
F = C : field of complex numbers

Definition 4.1. A set V together with an operation "+" (addition) is called a linear space over a field F if the following axioms are valid:

1. If u,v,w ∈ V then v + w ∈ V and

• v + w = w + v

• (v + w) + u = v + (w + u)

• there is a zero vector 0 ∈ V so that v + 0 = v for all v ∈ V

• each v ∈ V has an additive inverse x ∈ V such that x + v = 0.

2. If α ∈ F and v ∈ V then there is defined an operation "·" (outer multiplication) with α · v ∈ V, so that the properties

• (α + β) · v = α · v + β · v
• α · (v + w) = α · v + α · w
• (α · β) · v = α · (β · v)
• 1 · v = v

hold for all α, β ∈ F and for all v,w ∈ V.

Remark 4.2. The notion vector space is also conventional for what we have called a linear space. But we will use the term linear space, because we reserve the notion vector space for the n-dimensional vector spaces Rn and Cn.

Examples 4.3. The following examples are essential, and the symbols are often used.

• Rn: n-dimensional real vector space. Real vectors are written as column vectors, so the transposition T is used in (4.1); all components are real numbers, with the usual addition of vectors and the usual multiplication of vectors by real numbers (real scalars). Vectors in Rn can be written in the form

u = (u₁, u₂, · · · , u_n)^T   (4.1)

• Cn: n-dimensional complex vector space, complex vectors, complex components are possible (the adjectives real or complex are omitted if the context is clear)

• the space of polynomials with real coefficients and degree at most n

• the space of polynomials with complex coefficients and degree at most n

• Spaces of infinite sequences

• Spaces of continuous functions

• Spaces of piecewise continuous functions

• Spaces of absolute integrable functions

• Spaces of quadratic integrable functions (signals with finite energy)


4.2 Linear operators and linear functionals

Definition 4.4. If two linear spaces V and W over the same field F are given, then a map

A : V −→ W

is called linear, or a linear operator, if for all u, v ∈ V and λ ∈ F the following conditions are fulfilled:

• A is homogeneous: A(λv) = λ A(v)

• A is additive: A(u + v) = A(u) + A(v)

In the cases

A : V −→ R or A : V −→ C

such an operator is called a linear functional (real or complex).

Examples of linear functionals: scalar products with a fixed vector, definite integrals, function values at a fixed point.
Examples of linear operators: linear transformations in vector spaces by matrices, differentiation operators, the gradient, integrals with variable upper limit, integration operators like the Fourier and Laplace transforms.

4.3 Normed spaces

Definition 4.5. Given a linear space V over R or C, a norm on V is a mapping

‖ · ‖ : V −→ R

with the following properties for all x, y ∈ V and all scalars α:

‖x‖ ≥ 0   (4.2)

‖x‖ = 0 ⇔ x = 0   (definiteness)   (4.3)

‖α · x‖ = |α| · ‖x‖   (positive homogeneity)   (4.4)

‖x + y‖ ≤ ‖x‖ + ‖y‖   (triangle inequality, subadditivity)   (4.5)

0 is here the zero element (zero vector) and 0 is the number zero. The norm is positive definite. A linear space with a norm is called a normed space (normed vector space).

A linear space can be provided with several different norms.

Example 4.6. V = Rn with the Euclidean norm

‖x‖₂ := √(x₁² + · · · + x_n²) = √( Σ_{i=1}^{n} x_i² ),   x ∈ Rn

Unit sphere in R² concerning the Euclidean norm:


Example 4.7. Generalisation of the Euclidean norm to Cn:

‖z‖₂ := √(|z₁|² + · · · + |z_n|²) = √( Σ_{i=1}^{n} |z_i|² ),   z ∈ Cn

Example 4.8. Maximum norm in Rn or Cn

‖x‖∞ := max (|x1|, . . . , |xn|)

Unit sphere in R2 is now

Example 4.9. Sum norm (absolute sum norm) in Rn or Cn:

‖x‖₁ := Σ_{i=1}^{n} |x_i|.

Unit sphere in R2 is now


Examples 4.10. p-norms in Rn and in Cn
Only for p ≥ 1 does one get by

x = (x₁, x₂, · · · , x_n)  =⇒  ‖x‖_p := ( Σ_{k=1}^{n} |x_k|^p )^{1/p},   p ≥ 1   (4.6)

a norm in Rn or in Cn. The triangle inequality now takes the form

( Σ_{k=1}^{n} |x_k + y_k|^p )^{1/p} ≤ ( Σ_{k=1}^{n} |x_k|^p )^{1/p} + ( Σ_{k=1}^{n} |y_k|^p )^{1/p}   (4.7)

This special case of the triangle inequality is known as the Minkowski inequality for vectors and finite sequences, respectively. With q as the corresponding conjugate Hölder exponent of p, that means

1 ≤ p, q ≤ ∞,   1/p + 1/q = 1   (set here, for once, 1/∞ = 0),   (4.8)

the so-called Hölder inequality

| Σ_{k=1}^{n} x_k y_k | ≤ Σ_{k=1}^{n} |x_k y_k| ≤ ( Σ_{k=1}^{n} |x_k|^p )^{1/p} · ( Σ_{k=1}^{n} |y_k|^q )^{1/q}   (4.9)

is valid. With the dot product (scalar product) in Rn

〈x, y〉 = Σ_{k=1}^{n} x_k y_k,   〈x, y〉 ∈ R   (4.10)

and the component-by-component multiplication in Rn

x · y = (x₁ · y₁, x₂ · y₂, · · · , x_n · y_n),   x · y ∈ Rn   (4.11)

(distinguish these two definitions carefully) one can write the inequalities (4.9) in a shorter form:

|〈x, y〉| ≤ ‖x · y‖₁ ≤ ‖x‖_p ‖y‖_q.   (4.12)

It is possible to carry (4.12) and other inequalities in Rn or Cn over to special function spaces (spaces of signals with continuous time or continuous frequency domain). Especially for the case p = q = 2 the inequalities (4.9) or (4.12) become a variant of the Cauchy-Schwarz inequality (Schwarz inequality). It estimates scalar products (4.10) by Euclidean norms (lengths of vectors).

With the complex valued scalar product in Cn

〈x, y〉 = Σ_{k=1}^{n} x_k ȳ_k,   (4.13)

which is a generalisation of (4.10), the notions and statements between (4.8) and (4.12) remain valid if we substitute in (4.9)

( Σ_{k=1}^{n} x_k y_k )   by   ( Σ_{k=1}^{n} x_k ȳ_k ).

Remark 4.11. Inequalities for p-norms in Rn and Cn
As norms on finite dimensional vector spaces, all p-norms including the maximum norm (case p = ∞) are equivalent to each other. These results follow from the following inequalities:

‖x‖_p ≤ ‖x‖₁ ≤ n^{(p−1)/p} ‖x‖_p,   p ≥ 1,  n = 1, 2, 3, · · ·

‖x‖_∞ ≤ ‖x‖_p ≤ n^{1/p} ‖x‖_∞,   p ≥ 1,  n is the dimension of the space

There are infinite dimensional spaces with similar p-norms where these inequalities are not valid. From the last row of inequalities we get especially

lim_{p→∞} ‖x‖_p = ‖x‖_∞   in Rn and Cn
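These inequalities and the limit are easy to check numerically; the following sketch (not from the notes) does so for a random vector of illustrative dimension.

```python
# Check the p-norm inequalities of Remark 4.11 and the limit ||x||_p -> ||x||_inf.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=7)
n = x.size

norm_1 = np.linalg.norm(x, 1)
norm_inf = np.linalg.norm(x, np.inf)
for p in (1, 2, 3, 10, 50):
    norm_p = np.linalg.norm(x, p)
    print(p,
          norm_p <= norm_1 <= n**((p-1)/p) * norm_p,   # first chain of inequalities
          norm_inf <= norm_p <= n**(1/p) * norm_inf,   # second chain
          norm_p)                                      # approaches ||x||_inf as p grows
```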

Example 4.12. For a given closed interval I = [a, b] we define C(I) as the set of continuous functions on I. This set C(I) becomes in a simple way a linear space. With

‖f‖_∞ = max { |f(t)| : t ∈ I }   (4.14)

a norm is defined on C(I), the so-called maximum norm. If I is an open or a half-open interval, where intervals of infinite length are included, then let C_b(I) be the linear space of continuous and bounded functions on I. The norm ‖f‖_∞ is now defined as

‖f‖_∞ = sup { |f(t)| : t ∈ I },   (4.15)

where sup is the supremum of the set defined in the curly brackets. This ‖f‖_∞ is called the supremum norm of f. In the case of C(I) it is the same as (4.14).

Example 4.13 (Sequence spaces). For every fixed p with

1 ≤ p < ∞

the set of all real or complex sequences a = (a_n) over the index set N with

Σ_{n=1}^{∞} |a_n|^p < ∞

forms a linear space ℓ^p(N) (with the usual term-by-term addition and the usual outer multiplication by numbers). This linear space becomes a normed space with

‖a‖_p = ( Σ_{n=1}^{∞} |a_n|^p )^{1/p}.

In the case p = ∞,

ℓ^∞(N) = { (a_n) : sup_n |a_n| < ∞ }

defines the linear space of all bounded sequences (real or complex valued). This linear space becomes a normed space with

‖a‖_∞ = sup_n |a_n|.

All statements between (4.6) and (4.13) can be carried over to this example if one substitutes n by ∞. But the series used must exist.

The linear spaces ℓ^∞(N₀), ℓ^p(N₀), ℓ^∞(Z) and ℓ^p(Z) and the corresponding norms for p ≥ 1 are defined similarly.


Example 4.14. With I an interval in R, the set of all functions

f : I → R  resp.  f : I → C

for which

∫_I |f(t)|^p dt < ∞   is valid with fixed p, 1 ≤ p < ∞,   (4.16)

represents the linear space L^p(I). By identification of functions which differ only on a set of measure 0 one gets in this space with

‖f‖_p := ( ∫_I |f(t)|^p dt )^{1/p}   (4.17)

a norm. The triangle inequality for this special norm is again known as the Minkowski inequality. If one substitutes in the examples 4.10 the sums by appropriate integrals, then the statements of those examples can be translated to the space L^p(I).

Most of the linear spaces V mentioned in this subsection are complete. Completeness of V means: every Cauchy sequence in the space V has a limit that is also in V. In particular, in a complete space V = V(‖ · ‖), it follows from

Σ_{n=1}^{∞} ‖f_n‖ < ∞,   with f_n ∈ V for all n,

always that

Σ_{n=1}^{∞} f_n = f   with some uniquely defined f ∈ V.

4.4 Real inner product spaces

Compare
http://en.wikipedia.org/wiki/Inner_product_space

http://en.wikipedia.org/wiki/Dot_product

An inner product space is sometimes also called a pre-Hilbert space, since its completion with respect to the metric induced by its inner product is a Hilbert space.

In this subsection let V be a real linear space.

Definition 4.15. A scalar product (or an inner product) on V is a map

〈·, ·〉 : V ×V −→ R

that satisfies the following 5 axioms for all x, y, z ∈ V and all λ ∈ R

1. 〈x, x〉 ≥ 0   (non-negative)

2. 〈x, x〉 = 0 ⇔ x = 0   (definite)

3. 〈x, y〉 = 〈y, x〉   (symmetric)

4. 〈x + y, z〉 = 〈x, z〉 + 〈y, z〉   (additive in the first argument)

5. 〈λx, y〉 = λ〈x, y〉   (homogeneous in the first argument)

From the symmetry property follows

〈x, y + z〉 = 〈x, y〉 + 〈x, z〉   (additive in the second argument)

and

〈x, λy〉 = λ〈x, y〉   (homogeneous in the second argument)

- Axioms 1 and 2 together: positive-definiteness of 〈·, ·〉
- Axioms 4 and 5 together: linearity in the first argument
- Such a scalar product is a symmetric positive-definite bilinear form.

A real linear space V with such a scalar product is called an inner product space. Inner product spaces have a naturally defined norm based upon the given inner product.

Theorem 4.16. Every inner product produces by

‖x‖ := √〈x, x〉   (4.18)

a norm, the so-called induced norm.

Theorem 4.17. Cauchy-Schwarz inequality (also Schwarz inequality or Bunyakovsky inequality)
The induced norm of an inner product space V satisfies

|〈x, y〉| ≤ ‖x‖ · ‖y‖   for all x, y ∈ V.   (4.19)

Proof:
(4.19) is trivial in the case y = 0. So only the case 〈y, y〉 ≠ 0 needs to be examined. For any real number λ ∈ R one gets

0 ≤ 〈x − λy, x − λy〉 = 〈x − λy, x〉 − λ〈x − λy, y〉 = 〈x, x〉 − 2λ〈x, y〉 + λ²〈y, y〉

With the special choice

λ = 〈x, y〉/〈y, y〉 = 〈x, y〉 · ‖y‖^{−2}

it results that

0 ≤ ‖x‖² − 〈x, y〉² · ‖y‖^{−2}.

So we get

〈x, y〉² ≤ ‖x‖² ‖y‖².

The square root of both sides of this inequality provides the Schwarz inequality (4.19).


Example 4.18. In the Euclidean spaces Rn the inequality (4.19) takes the form

| Σ_{i=1}^{n} x_i · y_i | ≤ √( Σ_{i=1}^{n} x_i² ) · √( Σ_{i=1}^{n} y_i² )   (4.20)

Compare http://en.wikipedia.org/wiki/Euclidean_space

Example 4.19. The linear space of all real valued functions over I = [a, b] which are quadratically integrable there is denoted by

L²(I) or L²([a, b]).

An inner product is here given by

〈f, g〉 := ∫_a^b f(t) g(t) dt   (4.21)

The induced norm is

‖f‖ := √〈f, f〉 = √( ∫_a^b f²(t) dt ).   (4.22)

For this special case the squared Schwarz inequality (4.19) takes the shape

| ∫_a^b f(t) g(t) dt |² ≤ ( ∫_a^b f²(t) dt ) · ( ∫_a^b g²(t) dt )   (4.23)

This space L²(I) is complete. For I also I = [a, ∞) or I = (−∞, b] can be chosen.

Scalar product and angle in R² and R³

In Euclidean geometry scalar product, length and angle are related. The (smaller) angle γ between two non-zero vectors ~a and ~b is given by

~a · ~b = |~a| · |~b| · cos(γ)   with γ = ∠(~a, ~b),  0 ≤ γ ≤ π

This scalar product becomes 0 if and only if the angle between the corresponding vectors is π/2. Then the vectors are orthogonal to each other.


Definition 4.20. Abstract definition of angle
A scalar product in a real linear space V (an abstract vector space with inner product) provides via

cos γ = 〈x, y〉 / ( √〈x, x〉 · √〈y, y〉 ),   0 ≤ γ ≤ π   (4.24)

the definition of the angle between two non-zero elements (abstract vectors).

By

|〈x, y〉|² ≤ 〈x, x〉 · 〈y, y〉   =⇒   | 〈x, y〉 / ( √〈x, x〉 · √〈y, y〉 ) | ≤ 1

this is a correct definition.

Definition 4.21. Orthogonality
In a real inner product space V, elements x ≠ 0 and y ≠ 0 are called orthogonal if

〈x, y〉 = 0.   (4.25)

The following notations can be generalised

• System of pairwise orthogonal elements

• Orthogonal basis

• System of pairwise orthonormal elements

• Orthonormal basis

• Orthogonal projection on a subspace

4.5 Application to real Fourier series

Example 4.22. Basis functions for real Fourier series on the basic interval [0, 2π] are

g_n(t) = cos(n t),  h_m(t) = sin(m t),   n = 0, 1, 2, · · · ,  m = 1, 2, · · · ,  t ∈ [0, 2π]

This is the best-known example of an orthogonal function system. Using the angle sum and difference identities, compare
http://en.wikipedia.org/wiki/List_of_trigonometric_identities#Trigonometric_functions

one gets

sin(n t) sin(m t) = (1/2) ( cos(n t − m t) − cos(n t + m t) )   (4.26)

cos(n t) cos(m t) = (1/2) ( cos(n t − m t) + cos(n t + m t) )   (4.27)

sin(n t) cos(m t) = (1/2) ( sin(n t − m t) + sin(n t + m t) )   (4.28)

Because of sin(α + π/2) = cos(α), formula (4.27) follows directly from (4.26). Using Euler's formula it becomes possible to prove them all.

From (4.26), (4.27) and (4.28) the following orthogonality relations result:

∫_0^{2π} sin(n t) sin(m t) dt =
  π   for n = m,  m = 1, 2, · · ·
  0   for n ≠ m,  n, m = 1, 2, · · ·      (4.29)

∫_0^{2π} cos(n t) cos(m t) dt =
  π    for n = m,  m = 1, 2, · · ·
  2π   for n = m = 0
  0    for n ≠ m,  n, m = 1, 2, · · ·     (4.30)

∫_0^{2π} cos(n t) sin(m t) dt = 0   for n ∈ N₀, m ∈ N   (4.31)

Formula (4.31) is valid also for the case m = n. Instead of (4.29), (4.30) and (4.31), with the Kronecker delta symbol you can write more briefly

〈h_n, h_m〉 = π δ_{nm}   for n, m = 1, 2, · · ·

‖g₀‖² = 〈g₀, g₀〉 = 2π,   〈g_n, g_m〉 = π δ_{nm}   for n, m = 1, 2, · · ·

〈g_n, h_m〉 = 0   for n = 0, 1, 2, · · · ,  m = 1, 2, · · ·
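The orthogonality relations can also be confirmed numerically; the following sketch (not from the notes) evaluates the inner product (4.21) on [0, 2π] for a few index pairs.

```python
# Numerically confirm the orthogonality relations (4.29)-(4.31) on [0, 2*pi].
import numpy as np
from scipy.integrate import quad

def ip(f, g):
    """Inner product (4.21) on [0, 2*pi]."""
    val, _ = quad(lambda t: f(t) * g(t), 0.0, 2*np.pi)
    return val

for n in range(0, 3):
    for m in range(1, 3):
        ss = ip(lambda t: np.sin(n*t), lambda t: np.sin(m*t))   # pi*delta_{nm} for n, m >= 1
        cc = ip(lambda t: np.cos(n*t), lambda t: np.cos(m*t))   # pi*delta_{nm} (2*pi if n=m=0)
        cs = ip(lambda t: np.cos(n*t), lambda t: np.sin(m*t))   # always 0
        print(n, m, round(ss, 6), round(cc, 6), round(cs, 6))
```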

The function set

{g_n} ∪ {h_m},   n ∈ N₀,  m ∈ N

is an orthogonal basis in L²([0, 2π]). This means: pairwise different functions are orthogonal and the function set is complete. Completeness results from the fact that every f ∈ L²([0, 2π]) can be represented by its Fourier series, which converges in the energy norm over every period. If we rescale the orthogonal basis {g_n} ∪ {h_m}, n ∈ N₀, m ∈ N, by

g̃₀ = (1/√(2π)) g₀,  and  g̃_n = (1/√π) g_n,  h̃_n = (1/√π) h_n   for n = 1, 2, · · ·

then the function set

{g̃_n} ∪ {h̃_m},   n ∈ N₀,  m ∈ N

provides an orthonormal basis in L²([0, 2π]).

=⇒ Signal theory:
A periodic signal f with finite energy in every period is a special signal with finite average power. Such periodic signals are uniquely determined by their discrete spectrum. Here we had the special period T = 2π.

Theorem 4.23. If f ∈ L²([0, 2π]) then its Fourier series satisfies

f(t) = a₀/2 + Σ_{k=1}^{∞} [ a_k cos(k t) + b_k sin(k t) ]   with convergence in the energy norm of L²([0, 2π]).

This means that for the corresponding trigonometric polynomials

f_N(t) = a₀/2 + Σ_{k=1}^{N} [ a_k cos(k t) + b_k sin(k t) ]   it results that ‖f − f_N‖² → 0 for N → ∞.

So the energy of the approximation error can be made arbitrarily small if you choose N sufficiently large. For the Fourier coefficients the following Parseval identity holds:

|a₀|²/4 + (1/2) Σ_{k=1}^{∞} ( |a_k|² + |b_k|² ) = (1/(2π)) ∫_0^{2π} |f(t)|² dt = ‖f‖²/(2π)

Similar we have for the trigonometric approximations fN (t)

|a0|24

+1

2

N∑

k=1

(|ak|2 + |bk|2

)=

1

2π∫

0

|fN (t)|2dt =‖fN ‖2

So the signal energy |f(t)|2 or |fN (t)|2 can be calculated by the spectrum of the signal. On theright sides of the last two equations the energy of one signal period will divided by the length ofone time period. This is the average signal power of f or fN in this period. Because the signalsare 2π-periodic this is here also the finite average power of the complete signals.

Now the general case of real Fourier series. Basis functions for real Fourier series on the basic interval [0, T] are

g_n(t) = cos(n ω₁ t),  h_m(t) = sin(m ω₁ t),   n = 0, 1, 2, …,  m = 1, 2, …,  t ∈ [0, T]

with

ω₁ = 2π/T

as basic angular frequency. They are used to expand T-periodic functions in Fourier series. One gets from (4.29), (4.30), (4.31) by the substitution

t̃ = ω₁ · t,   dt̃ = ω₁ dt

the orthogonality relations

∫₀^T sin(n ω₁ t) sin(m ω₁ t) dt = T/2 for n = m, m = 1, 2, …;  = 0 for n ≠ m, n, m = 1, 2, …   (4.32)

∫₀^T cos(n ω₁ t) cos(m ω₁ t) dt = T/2 for n = m, m = 1, 2, …;  = T for n = m = 0;  = 0 for n ≠ m, n, m = 1, 2, …   (4.33)

and

∫₀^T cos(n ω₁ t) sin(m ω₁ t) dt = 0   for n ∈ N₀, m ∈ N   (4.34)

for the function system

cos(n ω₁ t),  sin(m ω₁ t),   n = 0, 1, 2, …,  m = 1, 2, … .

Exercise 4.24. State the orthogonality relations for the functions g_n, h_m of this general case in the same way as in example 4.22. Then construct also an orthonormal basis in L²([0, T]) for this generalized case.

Theorem 4.25. If f ∈ L²([0, T]) then the Fourier series

f(t) = a₀/2 + Σ_{k=1}^{∞} ( a_k cos(k ω₁ t) + b_k sin(k ω₁ t) )

converges in the energy norm of L²([0, T]). This means that for the corresponding trigonometric polynomials

f_N(t) = a₀/2 + Σ_{k=1}^{N} ( a_k cos(k ω₁ t) + b_k sin(k ω₁ t) )

we get ‖f − f_N‖² → 0 for N → ∞.

So the energy of the approximation error can be made arbitrarily small if N is chosen sufficiently large. For the Fourier coefficients the following Parseval identity holds:

|a₀|²/4 + (1/2) Σ_{k=1}^{∞} ( |a_k|² + |b_k|² ) = (1/T) ∫₀^T |f(t)|² dt = ‖f‖²/T : average signal power of f(t)

Similarly, for the trigonometric approximations f_N(t) we have

|a₀|²/4 + (1/2) Σ_{k=1}^{N} ( |a_k|² + |b_k|² ) = (1/T) ∫₀^T |f_N(t)|² dt = ‖f_N‖²/T : average signal power of f_N(t)

Interpret theorem 4.25 like theorem 4.23. The energy error of the signal approximation in one period can be calculated by

‖f − f_N‖² = ∫₀^T |f(t) − f_N(t)|² dt = ∫₀^T |f(t)|² dt − (|a₀|²/4) T − (T/2) Σ_{k=1}^{N} ( |a_k|² + |b_k|² )

From this we quickly obtain the approximation error measured in units of average signal power.
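As an illustration of theorem 4.25, the following MATLAB sketch (a hypothetical helper, not part of the lecture text; the example signal and all parameter values are assumptions) computes the coefficients a_k, b_k of a T-periodic signal by numerical integration, builds the trigonometric polynomial f_N and checks Parseval's identity and the energy error formula numerically.

% Sketch: real Fourier coefficients, Parseval and energy error (assumed example)
T  = 2;  w1 = 2*pi/T;            % period and basic angular frequency
N  = 10;                         % order of the trigonometric polynomial
t  = linspace(0, T, 4001);       % fine grid for numerical integration
f  = (t - T/2).^2;               % example T-periodic signal on [0, T]

a0 = (2/T) * trapz(t, f);        % a_k, b_k by trapezoidal integration
a  = zeros(1, N);  b = zeros(1, N);
fN = a0/2 * ones(size(t));
for k = 1:N
    a(k) = (2/T) * trapz(t, f .* cos(k*w1*t));
    b(k) = (2/T) * trapz(t, f .* sin(k*w1*t));
    fN   = fN + a(k)*cos(k*w1*t) + b(k)*sin(k*w1*t);
end

powerTime = (1/T) * trapz(t, f.^2);                        % average signal power of f
powerSpec = abs(a0)^2/4 + 0.5*sum(abs(a).^2 + abs(b).^2);  % truncated Parseval sum
errEnergy = trapz(t, (f - fN).^2);                         % energy of the approximation error
errFormula = trapz(t, f.^2) - abs(a0)^2/4*T - T/2*sum(abs(a).^2 + abs(b).^2);
disp([powerTime, powerSpec, errEnergy, errFormula])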

4.6 Complex inner product spaces

(also called complex pre-Hilbert space or complex vector space with scalar product)
Compare http://en.wikipedia.org/wiki/Inner_product

Definition 4.26. A scalar product (also called inner product) on a complex vector space V is a map

⟨·, ·⟩ : V × V −→ C

that satisfies the following five axioms for all elements x, y, z ∈ V (abstract vectors) and all scalars λ ∈ C (complex numbers):

1. ⟨x, x⟩ ≥ 0   (non-negative)
2. ⟨x, x⟩ = 0 ⇔ x = 0   (definite)
3. ⟨x, y⟩ = \overline{⟨y, x⟩}   (Hermitian symmetry)
4. ⟨x + y, z⟩ = ⟨x, z⟩ + ⟨y, z⟩   (additive with respect to the first argument)
5. ⟨λx, y⟩ = λ ⟨x, y⟩   (homogeneous with respect to the first argument)

Applying the third axiom to the fourth and fifth gives

⟨x, y + z⟩ = ⟨x, y⟩ + ⟨x, z⟩   (additive with respect to the second argument)

and

⟨x, λy⟩ = \overline{λ} ⟨x, y⟩   (conjugate homogeneous with respect to the second argument)   (4.35)

Such a complex inner product is also called a positive-definite Hermitian form. In real spaces we worked with the analogous positive-definite symmetric bilinear forms. Analogously to the real case, a norm can now be produced.

Definition 4.27. Every complex inner product produces by

‖x‖ := √⟨x, x⟩   (4.36)

a norm, the so-called induced norm.

So every complex inner product space becomes a special normed space.

Example 4.28. In Cⁿ let us define the scalar product by matrix calculus as

⟨x, y⟩ := Σ_{i=1}^{n} x_i · \overline{y_i} = y* x,   if x and y are column vectors,

⟨x, y⟩ := Σ_{i=1}^{n} x_i · \overline{y_i} = x y*,   if x and y are row vectors.

Remark: the adjoint matrix is defined by A* = \overline{A}ᵀ. Vectors are special matrices. The induced norm is now given by

‖x‖ := √( Σ_{i=1}^{n} |x_i|² ) = √( Σ_{i=1}^{n} x_i · \overline{x_i} ) = √(x x*),   if x is a row vector.

The Schwarz inequality (4.19) now becomes

| Σ_{i=1}^{n} x_i · \overline{y_i} | ≤ √( Σ_{i=1}^{n} |x_i|² ) · √( Σ_{i=1}^{n} |y_i|² )   (4.37)

Example 4.29. In the linear space of complex valued, square integrable functions over a bounded interval I = [a, b],

short: L²(I) or L²([a, b]),

the Schwarz inequality (4.19), taken squared, gets the form

| ∫_a^b f(t) \overline{g(t)} dt |²  ≤  ( ∫_a^b |f(t)|² dt ) · ( ∫_a^b |g(t)|² dt )   (4.38)

If the energies of the time signals f and g are finite, then

⟨f, g⟩ := ∫_a^b f(t) \overline{g(t)} dt   (4.39)

‖f‖ := √⟨f, f⟩ = √( ∫_a^b |f(t)|² dt )   (4.40)

A complex inner product space is complete if every Cauchy sequence converges with respect to the induced norm. Such a space is called a Hilbert space.
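The discrete counterpart of (4.38)–(4.40) is easy to check numerically. The following MATLAB sketch (illustrative only; the two signals are assumptions) approximates the L² inner product of two complex signals by the trapezoidal rule and verifies the Schwarz inequality:

% Sketch: discrete check of the L^2 inner product and the Schwarz inequality (assumed example)
a = 0;  b = 1;
t = linspace(a, b, 2001);
f = exp(1i*2*pi*3*t);             % example complex signals on [a, b]
g = (1 - t) .* exp(-1i*2*pi*t);

ip  = trapz(t, f .* conj(g));     % <f, g> approximated by the trapezoidal rule
Ef  = trapz(t, abs(f).^2);        % energy of f
Eg  = trapz(t, abs(g).^2);        % energy of g
fprintf('|<f,g>|^2 = %.6f <= %.6f = ||f||^2 * ||g||^2\n', abs(ip)^2, Ef*Eg)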


4.7 Application to complex Fourier series

Often in applications complex Fourier expansions are used to analyse oscillating structures. Here, for instance, the complex function system

f_n(t) = exp(i n t),   n ∈ Z,  t ∈ [0, 2π]   (4.41)

is given; it is related to that of example 4.22 and connected with the complex representation of Fourier series for ω₁ = 1 and T = 2π.
The orthogonality relations for the interval [0, 2π] are now simpler to calculate:

∫₀^{2π} e^{i n t} \overline{e^{i m t}} dt = 2π for n = m, m ∈ Z;  = 0 for n ≠ m, n, m ∈ Z   (4.42)

or

⟨f_n, f_m⟩ = 2π δ_{nm}

Pay attention to \overline{e^{i m t}} = e^{−i m t}.

The function system (4.41) characterises a system of complex harmonic oscillations which contains

the frequencies ξ :  ±1/(2π), ±2/(2π), … , ±n/(2π), …   (4.43)

the angular frequencies ω :  ±1, ±2, … , ±n, …   (4.44)

and the corresponding periods 2π, 2π/2, … , 2π/n, …

The primitive or basic period is here 2π. The function system (4.41) provides an orthogonal basis in the complex function space L²([0, 2π]). Exercise: rescale it so that you get an orthonormal basis in L²([0, 2π]).
With the above function systems one can analyse periodic oscillations with basic period T = 2π resp. basic frequency ξ = 1/(2π). The frequency spectrum of such an oscillation corresponds to (4.43) resp. (4.44), completed by ξ₀ = 0 resp. ω₀ = 0. This last discrete frequency is connected with the mean value of the oscillation.

Theorem 4.30. If f ∈ L²([0, 2π]) then we get for the complex Fourier series (3.35) with ω₁ = 1

f(t) = Σ_{k=−∞}^{∞} C_k e^{i k t}   with convergence in the energy norm of L²([0, 2π]).

This means that for the corresponding trigonometric polynomials

f_N(t) = Σ_{k=−N}^{N} C_k e^{i k t}   we get   ‖f − f_N‖² → 0 for N → ∞.

So the energy of the approximation error can be made arbitrarily small if N is chosen sufficiently large. For the Fourier coefficients the following Parseval identity holds:

Σ_{k=−∞}^{∞} |C_k|² = (1/(2π)) ∫₀^{2π} |f(t)|² dt = ‖f‖²/(2π) : average signal power of f(t)

Similarly, for the trigonometric approximations f_N(t) we have

Σ_{k=−N}^{N} |C_k|² = (1/(2π)) ∫₀^{2π} |f_N(t)|² dt = ‖f_N‖²/(2π) : average signal power of f_N(t)

Between the spectra of theorem 4.30 and theorem 4.23 the equations

|a₀|²/4 + (1/2) Σ_{k=1}^{∞} ( |a_k|² + |b_k|² ) = Σ_{k=−∞}^{∞} |C_k|²

|a₀|²/4 + (1/2) Σ_{k=1}^{N} ( |a_k|² + |b_k|² ) = Σ_{k=−N}^{N} |C_k|²

hold.
As a rule, periodic oscillations with an arbitrary basic angular frequency ω₁ > 0 are to be analysed. So (4.43) and (4.44) must be replaced by

frequencies ξ :  ±n · ω₁/(2π),   n = 1, 2, …   (4.45)

angular frequencies ω :  ±n · ω₁,   n = 1, 2, …   (4.46)

corresponding periods T :  2π/(n · ω₁),   n = 1, 2, …

The function system used in this case of complex Fourier series, compare (3.35),

f_n(t) = e^{i n ω₁ t},   n ∈ Z,  t ∈ [0, T]   (4.47)

satisfies the orthogonality relations

∫₀^T e^{i n ω₁ t} \overline{e^{i m ω₁ t}} dt = T for n = m;  = 0 for n ≠ m   (4.48)

or

⟨f_n, f_m⟩ = T δ_{nm}

The function system (4.47) provides an orthogonal basis in the complex function space L²([0, T]). Exercise: rescale it so that you get an orthonormal basis in L²([0, T]).

Theorem 4.31. If f ∈ L²([0, T]) then we get for the complex Fourier series (3.35)

f(t) = Σ_{k=−∞}^{∞} C_k e^{i k ω₁ t}   with convergence in the energy norm of L²([0, T]).

This means that for the corresponding trigonometric polynomials

f_N(t) = Σ_{k=−N}^{N} C_k e^{i k ω₁ t}   we get   ‖f − f_N‖² → 0 for N → ∞.

So the energy of the approximation error can be made arbitrarily small if N is chosen sufficiently large. For the Fourier coefficients the following Parseval identity holds:

Σ_{k=−∞}^{∞} |C_k|² = (1/T) ∫₀^T |f(t)|² dt = ‖f‖²/T

Similarly, for the trigonometric approximations f_N(t) we have

Σ_{k=−N}^{N} |C_k|² = (1/T) ∫₀^T |f_N(t)|² dt = ‖f_N‖²/T

On the right-hand sides of the last two equations stands the average signal power of f(t) resp. f_N(t). The energy error of the signal approximation in one period can be calculated by

‖f − f_N‖² = ∫₀^T |f(t) − f_N(t)|² dt = ∫₀^T |f(t)|² dt − T Σ_{k=−N}^{N} |C_k|²

From this we quickly obtain the approximation error measured in units of average signal power. Between the spectra of theorem 4.31 and theorem 4.25 again the equations

|a₀|²/4 + (1/2) Σ_{k=1}^{∞} ( |a_k|² + |b_k|² ) = Σ_{k=−∞}^{∞} |C_k|²

|a₀|²/4 + (1/2) Σ_{k=1}^{N} ( |a_k|² + |b_k|² ) = Σ_{k=−N}^{N} |C_k|²

hold.
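A minimal MATLAB sketch (assumed example, not part of the lecture text) that approximates the complex Fourier coefficients C_k of theorem 4.31 by numerical integration, evaluates the truncated series f_N and checks Parseval's identity and the energy error formula:

% Sketch: complex Fourier coefficients C_k, Parseval and energy error (assumed example)
T  = 2;  w1 = 2*pi/T;  N = 15;
t  = linspace(0, T, 4001);
f  = (t - T/2).^2;                         % example T-periodic signal on [0, T]

k  = -N:N;
C  = zeros(size(k));
fN = zeros(size(t));
for j = 1:numel(k)
    C(j) = (1/T) * trapz(t, f .* exp(-1i*k(j)*w1*t));   % C_k = (1/T) int_0^T f(t) e^{-i k w1 t} dt
    fN   = fN + C(j) * exp(1i*k(j)*w1*t);
end

powerTime  = (1/T) * trapz(t, abs(f).^2);  % average signal power of f
powerSpec  = sum(abs(C).^2);               % truncated Parseval sum
errEnergy  = trapz(t, abs(f - fN).^2);     % ||f - f_N||^2
errFormula = trapz(t, abs(f).^2) - T*sum(abs(C).^2);
disp([powerTime, powerSpec, errEnergy, errFormula])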

4.8 Metric spaces

A metric space is a set on which a notion of distance between its elements is defined. Such a distance d is also called a distance function or a metric.

Definition 4.32. A metric d on a set M is defined by the following axioms:

d : M × M −→ R

d(x, y) ≥ 0   (non-negative)
d(x, y) = 0 ⇔ x = y   (definite)
d(x, y) = d(y, x)   (symmetry)
d(x, y) ≤ d(x, z) + d(z, y)   (triangle inequality)

which are valid ∀ x, y, z ∈ M. A set M with such a metric d is called a metric space M = (M, d).

In any normed space V = (V, ‖·‖) a metric is defined by

d(x, y) := ‖x − y‖,   ∀ x, y ∈ V.

So every normed space becomes a special metric space with the metric induced by the given norm.

Special metrics induced by norms:

• The normed space Rⁿ with

‖x‖₂ = √( x₁² + ⋯ + xₙ² ) = √( Σ_{i=1}^{n} x_i² )   (Euclidean norm)

becomes with the induced Euclidean distance

d(x, y) = ‖x − y‖₂ = √( (x₁ − y₁)² + (x₂ − y₂)² + ⋯ + (xₙ − yₙ)² ) = √( Σ_{i=1}^{n} (x_i − y_i)² )

a metric space.

• Similarly the normed space Cⁿ with

‖z‖₂ = √( |z₁|² + ⋯ + |zₙ|² ) = √( Σ_{i=1}^{n} |z_i|² ),   z ∈ Cⁿ,

becomes with the induced distance

d(z, w) = ‖z − w‖₂ = √( |z₁ − w₁|² + |z₂ − w₂|² + ⋯ + |zₙ − wₙ|² ) = √( Σ_{i=1}^{n} |z_i − w_i|² )

a metric space.

• From the maximum norm in Rⁿ (p = ∞)

‖x‖_∞ = max( |x₁|, …, |xₙ| )

results the maximum metric

d(x, y) = ‖x − y‖_∞ = max( |x₁ − y₁|, …, |xₙ − yₙ| ).

• From the absolute sum norm in Rⁿ (p = 1)

‖x‖₁ = Σ_{i=1}^{n} |x_i|

results the distance (Manhattan metric)

d(x, y) = ‖x − y‖₁ = Σ_{i=1}^{n} |x_i − y_i|.

• From the p-norm in Rⁿ

‖x‖_p = ( Σ_{i=1}^{n} |x_i|^p )^{1/p},   1 ≤ p < ∞,

results the distance (Minkowski metric)

d(x, y) = ‖x − y‖_p = ( Σ_{i=1}^{n} |x_i − y_i|^p )^{1/p}.

Every distance induced by a norm is translation invariant.
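The metrics listed above can be evaluated directly with MATLAB's norm function; the following short sketch (the two vectors are assumptions chosen only for illustration) computes them for a sample pair:

% Sketch: metrics induced by norms in R^n (assumed example vectors)
x = [1  -2   3  0.5];
y = [0   1  -1  2.0];

d2   = norm(x - y, 2);     % Euclidean distance
dInf = norm(x - y, Inf);   % maximum metric
d1   = norm(x - y, 1);     % Manhattan metric
p    = 3;
dp   = norm(x - y, p);     % Minkowski metric for p = 3
disp([d2, dInf, d1, dp])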


Example 4.33. With the interval I = [a, b], the maximum norm on the linear space C(I) is given by

‖f‖_∞ = max{ |f(t)| : t ∈ I }

This norm induces by

d(f, g) = ‖f − g‖_∞ = max{ |f(t) − g(t)| : t ∈ I }

a distance on V = C(I). With this distance one can construct a uniform neighbourhood of every given f ∈ C(I).

Example 4.34. For the normed space Lᵖ(I) defined in (4.16) and (4.17) the distance is given by

d(f, g) = ‖f − g‖_p := ( ∫_I |f(t) − g(t)|^p dt )^{1/p}

Mostly used are the cases
p = 1 : absolutely integrable functions
p = 2 : square integrable functions, i.e. signals with finite energy


5 Fourier transform

5.1 Introduction

Consider a periodic signal f_T : R → R with period T, its complex Fourier series and its discrete spectral values C_k, compare (3.35). By using the discrete spectral values

C_k = C(ω_k) = (1/T) ∫_{−T/2}^{T/2} f_T(t) e^{−i ω_k t} dt   with ω₁ = 2π/T and ω_k = k · ω₁ = 2kπ/T   (5.1)

of f_T(t) we get the representation formulas (reconstruction formulas)

f_T(t) = Σ_{k=−∞}^{∞} C_k e^{i k ω₁ t}   respectively   f_T(t) = Σ_{k=−∞}^{∞} C(ω_k) e^{i ω_k t}   with ω_k = k · ω₁.   (5.2)

Certainly some convergence conditions must be satisfied, compare for instance subsection 3.10.

Now let f(t) and f̂(ω) be two functions with

f : R → C and f̂ : R → C,

which are not periodic and which are connected by

f̂(ω) = F_t{ f(t) }(ω) := ∫_{−∞}^{∞} f(t) e^{−iωt} dt,   (5.3)

f(t) = F_ω^{−1}{ f̂(ω) }(t) := (1/(2π)) ∫_{−∞}^{∞} f̂(ω) e^{iωt} dω.   (5.4)

If both integrals exist and the equations hold, then we call f(t), f̂(ω) a Fourier transform pair.

In applications we often use only the case of a real valued time signal f(t).

To a time signal f(t) of such a pair corresponds an angular frequency spectrum f̂(ω). Similarly, in (5.1) we considered the discrete spectrum C(ω_k) of the periodic time signal f_T(t). This spectrum, also denoted C(ω), has a discrete support of isolated frequencies

… , −2ω₁, −ω₁, 0, ω₁, 2ω₁, … .

But the spectrum f̂(ω) in (5.3) has a continuous support. Often this is phrased as: f̂(ω) is a continuous spectrum. Pay attention to the fact that f̂(ω) is not always a continuous function of ω; that is a different property. So you have to distinguish between the notion of a spectrum with continuous support and the notion of a continuous spectral function!

Physical interpretation:

• C(ω) contains complex amplitudes at the isolated angular frequencies ω_k.

• f̂(ω) is an angular frequency density, i.e. a complex amplitude density (per angular frequency unit). The frequency spectrum is now blurred or smeared.


For understanding this, compare it in principle with some types of loading in structural mechanics :

• A sum of concentrated loads on a beam is a set of forces which act at some discrete (isolated) points.

• A load density on a beam (force per length) is a load blurred over some interval.

Symbolically, instead of (5.3) and (5.4), one can also write

f(t) —F→ f̂(ω)   and   f̂(ω) —F⁻¹→ f(t)

The Fourier transform F in (5.3) and its inverse transform F⁻¹ in (5.4) can also be written in terms of the frequency variable ξ instead of the angular frequency variable ω, with ω = 2πξ:

f̂(ξ) = ∫_{−∞}^{∞} f(t) e^{−i2πξt} dt   and   f(t) = ∫_{−∞}^{∞} f̂(ξ) e^{i2πξt} dξ   with ω = 2πξ.   (5.5)

Now the frequency axis is scaled differently. Note the change of the factor in the reconstruction formula for f(t) and the different integration variables in (5.5) and (5.4). The equations above show that the transform and its inverse have the same general form; the integral kernels are exponential terms with different signs in the exponent.

Example 5.1. Calculate the F-transform of a zero-centered rectangular impulse of height A and time duration 1,

f(t) = A rect(t).

The result is

f̂(ω) = A sinc(ω/2).

In the ξ-domain we get

f̂(ξ) = A sinc(πξ).

Now calculate the F-transform of a dilated and right-shifted version of f(t). Hint: the first operation is dilation, the second is shifting. The time center of the resulting rectangular impulse is τ/2, the duration of the impulse is τ.

g(t) = A rect( (t − τ/2)/τ )   with τ > 0

ĝ(ω) = ∫_{−∞}^{∞} g(t) e^{−iωt} dt = ∫₀^{τ} A e^{−iωt} dt

ĝ(ω) = A (1 − e^{−iωτ}) / (iω) = A τ · ( e^{iωτ/2} − e^{−iωτ/2} ) / (iωτ) · e^{−iωτ/2}

With

sinc(ωτ/2) = sin(ωτ/2) / (ωτ/2)

one gets

ĝ(ω) = A τ sinc(ωτ/2) e^{−iωτ/2}.

Apply the table of Fourier transform pairs to shorten the calculations:

f(t) ⟼ f(t/τ) ⟼ f( (t − τ/2)/τ )

f̂(ω) ⟼ τ f̂(τω) ⟼ τ f̂(τω) e^{−i(τ/2)ω}
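As a quick numerical cross-check of example 5.1, the following MATLAB sketch (an illustration under assumed parameters, not part of the original text) approximates the Fourier integral of f(t) = A rect(t) on a fine time grid and compares it with the closed form A sinc(ω/2):

% Sketch: numerical Fourier integral of A*rect(t) vs. the closed form A*sinc(w/2)
A  = 2;
t  = linspace(-0.5, 0.5, 4001);        % support of rect(t)
f  = A * ones(size(t));                % f(t) = A on [-1/2, 1/2], zero elsewhere
w  = linspace(-40, 40, 801);           % angular frequency grid

Fnum = zeros(size(w));
for j = 1:numel(w)
    Fnum(j) = trapz(t, f .* exp(-1i*w(j)*t));   % int f(t) e^{-i w t} dt
end
Fana = A * ones(size(w));              % A*sinc(w/2) with sinc(x) = sin(x)/x, value A at w = 0
idx  = (w ~= 0);
Fana(idx) = A * sin(w(idx)/2) ./ (w(idx)/2);
plot(w, real(Fnum), w, Fana, '--'); legend('numerical', 'analytic')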


5.2 Relations between Fourier transform and complex Fourier coefficients

Plausibility considerations:

Let f(t) be an absolutely integrable signal, short: f ∈ L¹(R). Such a signal is non-periodic. Furthermore let f(t) be piecewise continuously differentiable.

Choose now a sufficiently large T so that f(t) is essentially localized in the interval [−T/2, T/2].

Let f_T(t) be a periodic signal f_T : R → R with

f_T(t) = f(t) for all t ∈ [−T/2, T/2]   and   f_T(t + T) = f_T(t) for all t

Remark: looking at f_T(t) in one special period means looking at the essential part of f(t).
Remark: from T → ∞ it follows in particular that f_T(t) → f(t) with L¹-convergence.

By setting ∆ω = ω₁ one gets from (5.1), with a small angular frequency difference ∆ω, the relations

∆ω = 2π/T   and   ω_k = k · ∆ω = 2kπ/T   for the equally spaced ω_k.

The analyzing formula (5.1) can now be compared with (5.3) for sufficiently large T:

C(ω_k) = (∆ω/(2π)) ∫_{−T/2}^{T/2} f_T(t) e^{−i ω_k t} dt ≈ (∆ω/(2π)) ∫_{−∞}^{∞} f(t) e^{−i ω_k t} dt = (∆ω/(2π)) f̂(ω_k)

So we get

C(ω_k) ≈ (∆ω/(2π)) f̂(ω_k)   or   C(ω_k) ≈ (1/T) f̂(ω_k)   with ∆ω = 2π/T.   (5.6)

This means that the discrete spectral values of periodic time signals can be approximated by evaluating the continuous Fourier transform at the corresponding angular frequency values ω_k.

Enlarging T results in downsizing ∆ω. This is connected with more densely lying discrete spectral lines of (5.1). In addition, the magnitudes of the complex Fourier coefficients C(ω_k) then become smaller. The energy of the continuous spectrum is distributed over more densely lying discrete spectral values (better approximation).

The reconstruction formula (5.2) can be approximated by using the approximation formulas (5.6):

f_T(t) = Σ_{k=−∞}^{∞} C(ω_k) e^{i ω_k t} ≈ (1/(2π)) Σ_{k=−∞}^{∞} f̂(ω_k) e^{i ω_k t} ∆ω ≈ (1/(2π)) ∫_{−∞}^{∞} f̂(ω) e^{iωt} dω.

For T → ∞ we get ∆ω → 0. Then the difference between the two series goes to zero, and the right expression represents the inverse Fourier transform.

With T → ∞ the angular frequency increment satisfies ∆ω → 0, and in every point t of continuity you get

f(t) = (1/(2π)) ∫_{−∞}^{∞} f̂(ω) e^{iωt} dω

This is the reconstruction formula (5.4) for non-periodic time signals. You get it as a limit case of the periodic-time-signal considerations. If t is not a point of continuity, then similarly to subsection 3.10 one gets

( f(t−) + f(t+) ) / 2 = (1/(2π)) ∫_{−∞}^{∞} f̂(ω) e^{iωt} dω.

Generally we identify functions which differ only on a set of measure 0. Then the last representation for some special points t becomes irrelevant.

Example 5.2. Begin as in example 5.1 with

f(t) = A rect(t).

With T > 1 you get by

f_T(t) = Σ_{n=−∞}^{∞} f(t − nT)

a T-periodic function with the property f_T(t) = f(t) for −T/2 < t ≤ T/2. The periodic function f_T(t) is here constructed from a function f(t) with bounded support; f(t) itself is not periodic! Now calculate the complex Fourier coefficients (5.1) for

ω₁ = 2π/T   and   ω_k = k · ω₁ = 2kπ/T   with T > 1.

From

∫ e^{−i ω_k t} dt = e^{−i ω_k t} / (−i ω_k) + C = i e^{−i ω_k t} / ω_k + C   for ω_k ≠ 0

(this is an indefinite integral of a complex valued time function with a parameter ω_k) results

C_k = C(ω_k) = (1/T) ∫_{−T/2}^{T/2} f_T(t) e^{−i ω_k t} dt = (1/T) ∫_{−1/2}^{1/2} A e^{−i ω_k t} dt = (A/T) ∫_{−1/2}^{1/2} e^{−i ω_k t} dt

C_k = C(ω_k) = (A/T) i ( e^{−i ω_k/2} / ω_k − e^{i ω_k/2} / ω_k ) = (A/T) i ( −2 i sin(ω_k/2) ) / ω_k = (A/T) · sin(ω_k/2) / (ω_k/2)

C(ω_k) = (A/T) sinc(ω_k/2),   also valid for ω₀ = 0.

Here, with the F-transform of example 5.1, you get

C(ω_k) = (1/T) f̂(ω_k)   or   C(ω_k) = (∆ω/(2π)) f̂(ω_k).

In this example, instead of the approximations (5.6), even the corresponding equalities hold for T > 1.
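A small MATLAB sketch (illustrative assumption: A = 1 and T = 4) that computes the coefficients C_k of example 5.2 by numerical integration over one period and compares them with (1/T) · A · sinc(ω_k/2):

% Sketch: C(w_k) = (1/T) * fhat(w_k) for the periodized rectangular impulse (assumed values)
A = 1;  T = 4;  w1 = 2*pi/T;
t  = linspace(-T/2, T/2, 8001);
fT = A * (abs(t) <= 0.5);                 % one period of the periodization of A*rect(t)

k  = -10:10;  wk = k*w1;
Ck = zeros(size(k));
for j = 1:numel(k)
    Ck(j) = (1/T) * trapz(t, fT .* exp(-1i*wk(j)*t));
end

fhat = A * ones(size(wk));                % fhat(w) = A*sinc(w/2) = A*sin(w/2)/(w/2)
idx  = (wk ~= 0);
fhat(idx) = A * sin(wk(idx)/2) ./ (wk(idx)/2);
disp(max(abs(Ck - fhat/T)))               % should be close to zero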

Exercise 5.3. Realize the same operations and calculations for the signal g(t) in example 5.1

Theorem 5.4. For a time signal f with f ∈ L²(R) and supp(f) ⊆ [t₀, t₀ + T] and for a corresponding periodization given by

f_T(t) = Σ_{n=−∞}^{∞} f(t − nT)

the following properties hold.

1. f ∈ L¹(R)

2. f_T ∈ L²([t₀, t₀ + T]), f_T ∈ L²([−T/2, T/2]), f_T ∈ L²([0, T]); the same finite energy in every period.

3. f_T ∈ L¹([t₀, t₀ + T]), f_T ∈ L¹([−T/2, T/2]), f_T ∈ L¹([0, T]); the same L¹-norm in every period.

4. The Fourier transform f̂(ω) exists for all ω and is a continuous function with the special properties f̂(ω) → 0 for ω → ∞ and f̂(ω) → 0 for ω → −∞ (Riemann–Lebesgue lemma).

5. f̂ ∈ L²(R). This means: the spectrum also has finite energy.

6. Between the complex Fourier coefficients C_k of f_T(t) and the Fourier transform f̂(ω) of f(t) the relations

C_k = C(ω_k) = (1/T) f̂(ω_k)   or   C_k = C(ω_k) = (∆ω/(2π)) f̂(ω_k) = ∆ξ f̂(ξ_k)   with ∆ξ = 1/T, ξ_k = k/T

are valid. (f̂(ξ) is the frequency based Fourier transform.)

7. The Fourier transform f̂(ω) is here continuous and can be approximated by

f̂(ω) ≈ Σ_{k=−∞}^{∞} f̂(ω_k) rect( (ω − ω_k)/∆ω ) = (2π/∆ω) Σ_{k=−∞}^{∞} C(ω_k) rect( (ω − ω_k)/∆ω ) = T Σ_{k=−∞}^{∞} C(ω_k) rect( (ω − ω_k)/∆ω )

If the complex Fourier coefficients are known, then the continuous Fourier transform can be approximated. The Fourier coefficients depend on the choice of the period length T above. By enlarging T the approximation of the Fourier transform by the Fourier coefficients becomes better.

Remark 5.5. For a given time signal g ∈ L²(R) ∩ L¹(R) we get by

f(t) = g(t) · rect(t/T)

signals f(t) and f_T(t) = Σ_{n=−∞}^{∞} f(t − nT) which satisfy the assumptions of theorem 5.4 with t₀ = −T/2. In particular, proposition 6 of theorem 5.4 is valid. If the Fourier transform ĝ is known (and of simple structure), but the Fourier transform f̂ is not, then we can try to approximate the Fourier coefficients

C_k = C(ω_k) = (1/T) f̂(ω_k)   by   C̃_k = C̃(ω_k) = (1/T) ĝ(ω_k).

Estimation of the absolute approximation errors:

| C_k − C̃_k | ≤ (1/T) | ∫_{−T/2}^{T/2} g(t) e^{−iω_k t} dt − ∫_{−∞}^{∞} g(t) e^{−iω_k t} dt | ≤ (1/T) | ∫_{−∞}^{−T/2} g(t) e^{−iω_k t} dt | + (1/T) | ∫_{T/2}^{∞} g(t) e^{−iω_k t} dt |

or, weakened but simplified,

| C_k − C̃_k | ≤ (1/T) ∫_{−∞}^{−T/2} |g(t)| dt + (1/T) ∫_{T/2}^{∞} |g(t)| dt

Estimation of the relative approximation errors:

| C_k − C̃_k | / | C_k | = | ∫_{−∞}^{−T/2} g(t) e^{−iω_k t} dt + ∫_{T/2}^{∞} g(t) e^{−iω_k t} dt | / | ∫_{−T/2}^{T/2} g(t) e^{−iω_k t} dt |

≤ ( ∫_{−∞}^{−T/2} |g(t)| dt + ∫_{T/2}^{∞} |g(t)| dt ) / | f̂(ω_k) |

≈ ( ∫_{−∞}^{−T/2} |g(t)| dt + ∫_{T/2}^{∞} |g(t)| dt ) / | ĝ(ω_k) |   if f̂(ω_k) ≠ 0 and ĝ(ω_k) ≠ 0.

These are only plausibility considerations, but try the application for some very rapidly decaying time signals with unbounded support. For another period starting value t₀ all calculations can be adapted simply.

5.3 Amplitude and Phase Spectra

Let the Fourier transform of a real signal f(t) be represented by f̂(ω). This spectral density or spectrum f̂(ω) is usually complex and can be written in normal form as

f̂(ω) = Re f̂(ω) + Im f̂(ω) · i,   or short   f̂(ω) = R(ω) + I(ω) i

if it does not cause confusion. The functions R(ω) and I(ω) are the real and the imaginary part of the spectrum. In the polar form

f̂(ω) = |f̂(ω)| e^{i φ(ω)}   or   f̂(ω) = |f̂(ω)| · exp(i φ(ω))

the amplitude spectrum and the phase spectrum of the signal f are given by

|f̂(ω)| = √( R²(ω) + I²(ω) ),   φ(ω) = arg( f̂(ω) )

For calculating the phase spectrum we can use the formulas

φ(ω) = arccos( R(ω) / |f̂(ω)| )   for I(ω) ≥ 0
φ(ω) = −arccos( R(ω) / |f̂(ω)| )   for I(ω) < 0
φ(ω) undefined for all ω with f̂(ω) = 0

or

φ(ω) = 2 arctan( I(ω) / ( |f̂(ω)| + R(ω) ) )   for |f̂(ω)| + R(ω) ≠ 0
φ(ω) = π   for |f̂(ω)| + R(ω) = 0

By these formulas we get −π < φ(ω) ≤ π for the phase codomain. For better understanding sometimes φ̃(ω) = φ(ω) + 2kπ with some k ∈ Z is used.
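As an illustration (a sketch under assumed example data, not prescribed by the text), the phase formulas above coincide with the usual four-quadrant arctangent. In MATLAB the amplitude and phase spectrum of a sampled spectral density would typically be obtained with abs and angle (equivalently atan2(I, R)):

% Sketch: amplitude and phase spectrum of an example spectral density (assumed data)
w    = linspace(-30, 30, 1201);
tau  = 1;                                   % example: spectrum of rect((t - tau/2)/tau), compare example 5.6
Fhat = tau * ones(size(w));                 % tau*sinc(tau*w/2), sinc(x) = sin(x)/x
idx  = (w ~= 0);
Fhat(idx) = tau * sin(tau*w(idx)/2) ./ (tau*w(idx)/2);
Fhat = Fhat .* exp(-1i*w*tau/2);            % shift factor exp(-i*w*tau/2)

R   = real(Fhat);   I = imag(Fhat);
amp = abs(Fhat);                            % amplitude spectrum sqrt(R.^2 + I.^2)
phi = angle(Fhat);                          % phase spectrum in (-pi, pi], same as atan2(I, R)
phi2 = 2*atan2(I, amp + R);                 % half-angle formula from the text (where amp + R ~= 0)
plot(w, amp); figure; plot(w, phi, w, phi2, '--')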

If f(t) is real valued then the spectrum f̂(ω) satisfies the properties that R(ω) is even and I(ω) is odd. That means

f(t) real valued ⟹ R(−ω) = R(ω) and I(−ω) = −I(ω)

This comes from

R(ω) = ∫_{−∞}^{∞} f(t) cos(ωt) dt   and   I(ω) = − ∫_{−∞}^{∞} f(t) sin(ωt) dt.

So we get for a real time signal f(t)

f̂(−ω) = \overline{f̂(ω)}.

If f(t) is a real time signal then its amplitude spectrum |f̂(ω)| is even and its phase spectrum φ(ω) is odd. That means

|f̂(−ω)| = |f̂(ω)|   and   φ(−ω) = −φ(ω).

If f(t) is real valued, then with f_e as the even part of f and f_o as the odd part of f we get

f(t) = f_e(t) + f_o(t) —F→ R(ω) + i I(ω),

f_e(t) = (1/2) ( f(t) + f(−t) ) —F→ R(ω)   and

f_o(t) = (1/2) ( f(t) − f(−t) ) —F→ i I(ω).

The F-transform of a real and even function is real and even. The F-transform of a real and odd function is purely imaginary and odd.
A real signal f(t) which can be expressed by

f(t) = (1/(2π)) ∫_{−∞}^{∞} f̂(ω) e^{iωt} dω   (inverse Fourier transform)

also has the representations

f(t) = (1/(2π)) ∫_{−∞}^{∞} |f̂(ω)| e^{iφ(ω)} e^{iωt} dω = (1/(2π)) ∫_{−∞}^{∞} |f̂(ω)| e^{i(φ(ω)+ωt)} dω.

For real f(t) the amplitude spectrum |f̂(ω)| is even and the phase spectrum φ(ω) is odd. Then also sin(φ(ω) + ωt) is an odd function of ω for every fixed parameter t. So we get the representation

f(t) = (1/π) ∫₀^{∞} |f̂(ω)| cos(φ(ω) + ωt) dω   for real f(t)

Example 5.6. The Fourier transform of

g(t) = rect( (t − τ/2)/τ )   with τ > 0   is   ĝ(ω) = τ sinc(τω/2) e^{−iωτ/2},

compare example 5.1 for the case A = 1. Here the amplitude spectrum results in

|ĝ(ω)| = τ | sinc(τω/2) |.

The phase spectrum is calculated with the help of

φ̃(ω) = −τω/2   for sinc(ωτ/2) > 0
φ̃(ω) = π − τω/2   for sinc(ωτ/2) < 0

and φ = φ̃ + 2kπ, where the whole number k is chosen so that −π < φ ≤ π.


Exercise 5.7. Sketch the time signal g(t) of the last example.
Sketch an essential section of the corresponding amplitude spectrum |ĝ(ω)|.
Sketch the envelope of this amplitude spectrum and characterise its asymptotic behavior.
Sketch the phase spectrum.
Think about the essential support of |ĝ(ω)| which contains all significant angular frequencies. In general the essential support is not the support; it must be defined in some sense. From this we get the essential (angular frequency) bandwidth. In general it is not the (angular frequency) bandwidth (exact bandwidth).

Exercise 5.8. Replace g(t) in example 5.6 by the time signal

g(t) = rect( (t − t₀)/τ )   with some arbitrary fixed t₀ and some arbitrary fixed τ > 0.

Adapt the considerations and calculations of example 5.6 and those of the last exercise.

5.4 Basic properties and examples of the Fourier transform

For the Fourier transformation of a linear combination of time signals the property

F_t{ Σ_{k=1}^{n} α_k f_k(t) }(ω) = Σ_{k=1}^{n} α_k F_t{ f_k(t) }(ω) = Σ_{k=1}^{n} α_k f̂_k(ω)

holds; this means the F-transform is a linear operator (a linear map). A shorter formulation of this superposition principle is given by

Σ_{k=1}^{n} α_k f_k(t) —F→ Σ_{k=1}^{n} α_k f̂_k(ω).   (5.7)

The proof follows from the fact that the integral is a linear functional.

In principle the F-transform can be applied to complex valued time functions f(t) as well. For the complex conjugate \overline{f(t)} of f(t) you get

F_t{ \overline{f(t)} }(ω) = ∫_{−∞}^{∞} \overline{f(t)} e^{−iωt} dt = \overline{ ∫_{−∞}^{∞} f(t) e^{iωt} dt } = \overline{ F_t{ f(t) }(−ω) }

or shorter

\overline{f(t)} —F→ \overline{ f̂(−ω) }   (5.8)

The F-transform of a shifted signal (time delay of a signal) can be calculated by

F_t{ f(t − t₀) }(ω) = e^{−i t₀ ω} F_t{ f(t) }(ω),

shortly

f(t − t₀) —F→ e^{−i t₀ ω} f̂(ω)   (5.9)

So time shifting leads to complex modulation of the spectrum.

The linearity and this last rule can be used to find the following Fourier transform pairs. For every real parameter τ ≠ 0 you get

(1/2) ( f(t − τ) + f(t + τ) ) —F→ f̂(ω) cos(τω),

(1/2) ( f(t + τ) − f(t − τ) ) —F→ i f̂(ω) sin(τω),   (5.10)

( f(t + τ) − f(t − τ) ) / (2τ) —F→ i f̂(ω) sin(τω)/τ.

What happens in the last equation for τ → 0?

Example 5.9. Look at examples 5.2 and 5.6 and calculate the F-transform of the following time signal:

f(t) = rect( (t + τ/2)/τ ) − rect( (t − τ/2)/τ )   with parameter τ ≠ 0.

f̂(ω) = e^{iτω/2} F_t{ rect(t/τ) }(ω) − e^{−iτω/2} F_t{ rect(t/τ) }(ω)

f̂(ω) = 2 i τ sin(τω/2) sinc(τω/2)

This (angular) frequency spectrum is purely imaginary and odd (the time signal f(t) itself is real and odd). Here a time-limited signal was given (bounded time support), but its F-transform is not frequency-limited (no bounded frequency support).

Time scaling of a signal f(t) is connected with the following equation in the frequency domain:

F_t{ f(a t) }(ω) = (1/|a|) F_t{ f(t) }(ω/a),

formulated more simply by

f(a t) —F→ (1/|a|) f̂(ω/a)   for all a ≠ 0   (5.11)

In the case a < 0 the operator

f(t) ⟼ f(a t)

realises time dilation and time reflection.

Example 5.10. Find the F-transforms of the signal dilations

f₁(t) = rect( t/(τ/2) )   and   f₂(t) = rect( t/(2τ) )   for all τ > 0.

From the transform pair

f(t) = rect(t/τ) —F→ τ sinc(τω/2)

results

f₁(t) = f(2t) ⟹ f₁(t) —F→ (τ/2) sinc(τω/4)

f₂(t) = f(t/2) ⟹ f₂(t) —F→ 2τ sinc(τω)

Sketch the two time signals and their amplitude spectra for some parameter τ > 0. Compare the time signals, the amplitude spectra and their main lobes.
The main lobe width of |f̂₁(ω)| is twice the main lobe width of |f̂(ω)|, whereas the main lobe width of |f̂₂(ω)| is half of the main lobe width of |f̂(ω)|. Here the amplitudes of all time pulses are 1.


Exercise 5.11. Change all rectangular pulses of the last examples by appropriate multiplication with constants so that the signal energy becomes 1. Adapt the considerations and tasks of the last example for them.
Then choose the coefficients so that you get rectangular impulses of L¹-norm 1 and adapt the considerations again.

A special case of the above time scalings is the time reversal operation (time reflection R)

R : f(t) ⟼ f(−t)

The F-transform of g(t) = f(−t) can be calculated with the help of

F_t{ f(−t) }(ω) = f̂(−ω),

expressed more shortly by

f(−t) —F→ f̂(−ω).   (5.12)

Prove it also directly: with t̃ = −t you get

∫_{−∞}^{∞} f(−t) e^{−iωt} dt = − ∫_{∞}^{−∞} f(t̃) e^{iωt̃} dt̃ = ∫_{−∞}^{∞} f(t̃) e^{iωt̃} dt̃ = f̂(−ω)

If f(t) is real valued then we get

f̂(−ω) = \overline{f̂(ω)}   and   f(−t) —F→ \overline{f̂(ω)},   (5.13)

which is used more often in applications.

Example 5.12. Spectral density calculation of the following time signals:

1. f₁(t) = e^{−αt} H(t),   α > 0

f̂₁(ω) = ∫_{−∞}^{∞} e^{−αt} H(t) e^{−iωt} dt = ∫₀^{∞} e^{−αt} e^{−iωt} dt = ∫₀^{∞} e^{−(α+iω)t} dt = [ e^{−(α+iω)t} / (−(α+iω)) ]_{t=0}^{t=+∞}

f̂₁(ω) = 1/(α + iω) = α/(α² + ω²) − i ω/(α² + ω²)

2. Use now the formula for time reflection to calculate the next signal spectrum:

f₂(t) = f₁(−t) = e^{αt} H(−t) ⟹ f̂₂(ω) = 1/(α − iω)

3. Use the superposition property to calculate the spectrum of

f₃(t) = e^{−α|t|}   for α > 0.

We get with

f₃(t) = f₁(t) + f₁(−t) = f₁(t) + f₂(t) ⟹ f̂₃(ω) = 1/(α + iω) + 1/(α − iω)

the result

f̂₃(ω) = 2α/(α² + ω²)   for α > 0.

Neither the time representation nor the frequency representation of this signal has a bounded support.
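A short MATLAB cross-check of the last result (an assumed parameter value, for illustration only), comparing the numerically evaluated Fourier integral of e^{−α|t|} with 2α/(α² + ω²):

% Sketch: numerical check of F{ exp(-alpha*|t|) } = 2*alpha/(alpha^2 + w^2) (assumed alpha)
alpha = 1.5;
t = linspace(-40, 40, 20001);           % wide grid so the decaying tails are negligible
f = exp(-alpha*abs(t));
w = linspace(-20, 20, 401);

Fnum = zeros(size(w));
for j = 1:numel(w)
    Fnum(j) = trapz(t, f .* exp(-1i*w(j)*t));
end
Fana = 2*alpha ./ (alpha^2 + w.^2);
disp(max(abs(real(Fnum) - Fana)))       % small; imaginary part is ~0 since f is real and even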

Duality properties:

f(t) —F→ f̂(ω)  ⟹  f̂(t) —F→ 2π f(−ω)   (5.14)

f̂(ω) —F⁻¹→ f(t)  ⟹  f(ω) —F⁻¹→ (1/(2π)) f̂(−t)

Proof of the first property: from the formula for F⁻¹ you get

2π f(t) = ∫_{−∞}^{∞} f̂(ω) e^{iωt} dω,

and subsequently changing t to −t results in

2π f(−t) = ∫_{−∞}^{∞} f̂(ω) e^{−iωt} dω.

The exchange of the variable notations t ↔ ω yields

2π f(−ω) = ∫_{−∞}^{∞} f̂(t) e^{−iωt} dt,

which gives the first statement.
An equivalent ξ-based duality property holds:

f(t) —F→ f̂(ξ)  ⟹  f̂(t) —F→ f(−ξ)   (5.15)

f̂(ξ) —F⁻¹→ f(t)  ⟹  f(ξ) —F⁻¹→ f̂(−t)

Example 5.13. With the duality theorem show that

g(t) = 1/(a² + t²) —F→ (π/a) e^{−a|ω|} = ĝ(ω)   for a > 0.

Solution: by

e^{−a|t|} —F→ 2a/(a² + ω²)  ⟹  (1/(2a)) e^{−a|t|} —F→ 1/(a² + ω²)

and the duality property of the F-transform, we get

1/(a² + t²) —F→ 2π (1/(2a)) e^{−a|−ω|}

1/(a² + t²) —F→ (π/a) e^{−a|ω|}   and   (π/a) e^{−a|ω|} —F⁻¹→ 1/(a² + t²),   but only for a > 0.

Conclusion:

∫_{−∞}^{∞} cos(ωt)/(a² + t²) dt = (π/a) e^{−a|ω|}   for a > 0

Example 5.14. Calculate

F_t{ sin(a t)/(π t) }(ω)   for a > 0.

rect(t/τ) —F→ τ sinc(ωτ/2)

a = τ/2  ⟹  rect( t/(2a) ) —F→ 2a sinc(aω)

(1/(2a)) rect( t/(2a) ) —F→ sinc(aω)

Apply the duality property and get, with the even function rect(x),

sinc(a t) —F→ 2π (1/(2a)) rect( ω/(2a) ) = (π/a) rect( ω/(2a) ).

With

sin(a t)/(π t) = (a/π) · sin(a t)/(a t) = (a/π) sinc(a t)

follows the required Fourier transform

sin(a t)/(π t) —F→ rect( ω/(2a) )   for a > 0.

Use the special parameter representation a = 2πB and get with

(1/(2B)) · sin(2πB t)/(π t) = sin(2πB t)/(2πB t) = sinc(2πB t)

the transform pair

sinc(2πB t) —F→ (1/(2B)) rect( ω/(4πB) )   for B > 0.

These two time signals (time filters) are signals (filters) with finite bandwidth. They have a bounded ω-support.

Theorem 5.15. If the function f(t) has a Fourier transform f̂(ω) which is continuous at ω = 0, then

f̂(0) = ∫_{−∞}^{∞} f(t) dt

is valid. If f(t) is continuous at t = 0 and f̂ ∈ L¹(R) then we get

f(0) = (1/(2π)) ∫_{−∞}^{∞} f̂(ω) dω


Examples 5.16. Dirac impulses and periodic signals
Dirac impulses are used in connection with the transforms of periodic functions. Both types of signals are neither elements of L¹(R) nor elements of L²(R). The considerations have to be generalised by using elements of distribution theory. Dirac impulses are special distributions (generalised functions), but they are very useful in signal analysis considerations.
Fourier transform of Dirac impulses in the time domain:

δ(t) —F→ 1   and   δ(t − t₀) —F→ e^{−i t₀ ω}   (5.16)

Every time-shifted Dirac impulse contains all frequencies with the same amplitude |e^{−i t₀ ω}| = 1; this means that this impulse has a constant amplitude spectrum. This results from

∫_{−∞}^{∞} δ(t − t₀) e^{−iωt} dt = e^{−i ω t₀}   (because e^{−iωt} is a continuous function of t).

Fourier transform of complex periodic time signals:

e^{iω_c t} —F→ 2π δ(ω − ω_c)   and in particular   1 —F→ 2π δ(ω)   (5.17)

This results from

F_ω^{−1}{ δ(ω − ω_c) }(t) = (1/(2π)) ∫_{−∞}^{∞} δ(ω − ω_c) e^{iωt} dω = (1/(2π)) e^{iω_c t}

δ(ω − ω_c) —F⁻¹→ (1/(2π)) e^{iω_c t}  ⟹  (1/(2π)) e^{iω_c t} —F→ δ(ω − ω_c)

Complex modulation and frequency translation:

f(t) e^{i ω_c t} —F→ f̂(ω − ω_c)   or   f̂(ω − ω_c) —F⁻¹→ f(t) e^{i ω_c t}   (5.18)

This complex modulation modifies every time signal f(t) so that its spectral density is shifted (shifting of the corresponding angular frequency spectrum).
Sequential application of time dilation, complex modulation and F-transform yields

f(at) e^{iω_c t} —F→ (1/|a|) f̂( (ω − ω_c)/a )   (5.19)

by using at first

f(at) —F→ (1/|a|) f̂(ω/a)

and then the left of the formulas (5.18).

F-transform for real modulated time signals. For

f_mod(t) = f(t) cos(ω_c t + α)

we get

F_t{ f(t) cos(ω_c t + α) }(ω) = (1/2) e^{−iα} f̂(ω + ω_c) + (1/2) e^{iα} f̂(ω − ω_c)

or shorter

f̂_mod(ω) = (1/2) e^{−iα} f̂(ω + ω_c) + (1/2) e^{iα} f̂(ω − ω_c)   (5.20)

This results directly from formula (5.18).


Definition 5.17. A signal f is called frequency limited or band limited to some ω₀ > 0

if |f̂(ω)| = 0 for all ω with |ω| > ω₀ > 0.

If f ∈ L²(R) is band limited to some sufficiently small ω₀ > 0, then it is also called a low pass filter. For 0 < ω₀ < ω_c we then get with f_mod a band pass filter with

|f̂_mod(ω)| = 0   for |ω| > |ω_c + ω₀| and |ω| < |ω_c − ω₀|.

If, above all, f̂(ω) is centered around 0, then f̂_mod is centered around ω_c. This means here that f̂_mod(ω) is centered around ω_c in (0, +∞) and centered around −ω_c in (−∞, 0). Illustrate this by a sketch.

As special examples of (5.20) we get

for α = 0 :   f(t) cos(ω_c t) —F→ (1/2) f̂(ω − ω_c) + (1/2) f̂(ω + ω_c)   (5.21)

and

for α = −π/2 :   f(t) sin(ω_c t) —F→ (1/(2i)) f̂(ω − ω_c) − (1/(2i)) f̂(ω + ω_c)   (5.22)

5.5 Scalar products and signal energy considerations

The energy contained in a real or complex time signal f ∈ L²(R) is defined by

E(f) = ∫_{−∞}^{∞} |f(t)|² dt.   (5.23)

The integrand |f(t)|² is called the energy density per time. Since for f ∈ L²(R) the equations

∫_{−∞}^{∞} |f(t)|² dt = (1/(2π)) ∫_{−∞}^{∞} |f̂(ω)|² dω = ∫_{−∞}^{∞} |f̂(ξ)|² dξ   (Parseval's theorem)   (5.24)

are valid, you can also write

E(f) = (1/(2π)) ∫_{−∞}^{∞} |f̂(ω)|² dω   or   E(f) = ∫_{−∞}^{∞} |f̂(ξ)|² dξ   (Rayleigh's energy theorem)   (5.25)

Here (1/(2π)) |f̂(ω)|² is the energy density per angular frequency ω and |f̂(ξ)|² is the energy density per frequency ξ.
In particular: if f is square integrable then so is f̂, and conversely.
More generally, for f, g ∈ L²(R) we get relations between the scalar products in the time domain and in the frequency domains:

⟨f, g⟩_t = ∫_{−∞}^{∞} f(t) \overline{g(t)} dt = (1/(2π)) ∫_{−∞}^{∞} f̂(ω) \overline{ĝ(ω)} dω = (1/(2π)) ⟨f̂, ĝ⟩_ω   (5.26)

or

⟨f, g⟩_t = ∫_{−∞}^{∞} f̂(ξ) \overline{ĝ(ξ)} dξ = ⟨f̂, ĝ⟩_ξ   (5.27)


Example 5.18. For f(t) = rect(t) you get E(f) = 1, and for g(t) = f(t/τ) = rect(t/τ) you get E(g) = τ if τ > 0.

Using the energy theorem and rect(t/τ) —F→ τ sinc(τω/2) you can verify the equations

E(g) = (1/(2π)) ∫_{−∞}^{∞} τ² [ sinc(τω/2) ]² dω = ∫_{−∞}^{∞} [ rect(t/τ) ]² dt = τ   for τ > 0.

With ω = 2πξ you can transform it into a ξ-based representation:

E(g) = ∫_{−∞}^{∞} τ² [ sinc(τπξ) ]² dξ = ∫_{−∞}^{∞} [ rect(t/τ) ]² dt = τ   for τ > 0.

Use this to calculate the integrals

∫_{−∞}^{∞} sinc²(τω/2) dω = 2π/τ   for τ > 0,

∫_{−∞}^{∞} sinc²(τπξ) dξ = 1/τ   for τ > 0.

Prove that

sinc(τω/2) —F⁻¹→ (1/τ) rect(t/τ) =: h_τ(t)   for every parameter τ > 0.

The energy of the parameter dependent signal h_τ goes to infinity as τ → 0.

Let us now go back to the signal g with time representation g(t) and angular frequency representation ĝ(ω).

The zero crossing points ω_ℓ = −2π/τ and ω_r = 2π/τ of ĝ(ω) have the least distance to the 0-point of the ω-axis.

Calculate now for g the energy portion E_p in the ω-band [−2π/τ, 2π/τ], respectively in the ξ-band [−1/τ, 1/τ].

The substitution ω = (2π/τ) α, dω = (2π/τ) dα and subsequent numerical integration results in

E_p = (1/(2π)) ∫_{−2π/τ}^{2π/τ} τ² sinc²(τω/2) dω = τ ∫_{−1}^{1} sinc²(πα) dα ≈ 0.903 · τ

Exercise: realise this with the rectangular method or the trapezoid rule.
Result: circa 90% of the signal energy is contained in the ω-frequency band [−2π/τ, 2π/τ] resp. in the ξ-frequency band [−1/τ, 1/τ]. This means about 90% of the signal energy is contained in the main lobe of the frequency representations.
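The exercise above takes only a few lines; the following MATLAB sketch (illustrative only) evaluates ∫_{−1}^{1} sinc²(πα) dα with the trapezoid rule and confirms the main-lobe energy fraction of about 0.90:

% Sketch: main-lobe energy fraction of the rectangular impulse via the trapezoid rule
alpha = linspace(-1, 1, 20001);
s = ones(size(alpha));                       % sinc(pi*alpha) = sin(pi*alpha)/(pi*alpha), value 1 at alpha = 0
idx = (alpha ~= 0);
s(idx) = sin(pi*alpha(idx)) ./ (pi*alpha(idx));
Ep_over_tau = trapz(alpha, s.^2);            % approx. 0.903
disp(Ep_over_tau)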

5.6 The convolution of functions

Compare http://en.wikipedia.org/wiki/Convolution

The convolution

(f ∗ g)(t) = ∫_{−∞}^{+∞} f(t − τ) g(τ) dτ   or   (g ∗ f)(t) = ∫_{−∞}^{+∞} f(τ) g(t − τ) dτ   (5.28)

is defined if one of the two integrals exists almost everywhere. If one of them exists almost everywhere, then so does the other, with

(f ∗ g)(t) = (g ∗ f)(t)

Theorem 5.19 (Convolution in L¹(R)). If f and g are elements of L¹(R) then the following properties hold:

1. (f ∗ g)(t) exists almost everywhere

2. f ∗ g ∈ L¹(R) with ‖f ∗ g‖₁ ≤ ‖f‖₁ ‖g‖₁.

So the convolution is a continuous bilinear operation in L¹(R),

∗ : L¹(R) × L¹(R) −→ L¹(R).

Shortly:

f, g ∈ L¹(R) ⟹ f ∗ g ∈ L¹(R) with ‖f ∗ g‖₁ ≤ ‖f‖₁ ‖g‖₁   (5.29)

Theorem 5.20. For f, g, h ∈ L¹(R) and arbitrary λ ∈ C the following operation rules hold:

f ∗ (λg) = (λf) ∗ g = λ (f ∗ g)
f ∗ (g + h) = (f ∗ g) + (f ∗ h)
f ∗ g = g ∗ f
f ∗ (g ∗ h) = (f ∗ g) ∗ h

With the operations "+" and "∗", L¹(R) becomes a commutative algebra. In L¹(R) there exists no identity element with respect to the convolution. This means there is no function g ∈ L¹(R) with (f ∗ g)(t) = f(t).

If f(t) is continuous around the time point t = 0 then its convolution with the Dirac impulse δ(t) results in

(f ∗ δ)(t) = ∫_{−∞}^{+∞} f(t − τ) δ(τ) dτ = ∫_{−∞}^{+∞} f(τ) δ(t − τ) dτ = f(t)   (5.30)

where the integral is to be interpreted in a generalized way. δ(t) is a distribution (a generalized function), not a function, but the use of the Dirac impulse simplifies many considerations. With

δ_α(t) = δ(t − α)

you get a shifted Dirac impulse localized at t = α. In the case α > 0 this is a delayed version of the Dirac impulse. The following properties hold:

(δ ∗ f)(t) = f(t),   (δ ∗ δ)(t) = δ(t),   (δ_α ∗ f)(t) = f(t − α)   (5.31)

(δ_α ∗ δ_β)(t) = δ_{α+β}(t) = δ(t − (α + β))

So δ_α(t) realizes a shifting of signals. In engineering notation you can formulate the above properties also in the following way:

δ(t) ∗ f(t) = f(t),   δ(t) ∗ δ(t) = δ(t),   δ(t − α) ∗ f(t) = f(t − α)   (5.32)

δ(t − α) ∗ δ(t − β) = δ(t − (α + β))


Definition 5.21. The support of a piecewise continuous function f : R → C is defined by

supp(f) = { t : f(t) ≠ 0 }   (5.33)

Lemma 5.22. If f(t) and g(t) are measurable functions and (f ∗ g)(t) exists almost everywhere, then

supp(f ∗ g) ⊆ supp(f) + supp(g)   (5.34)

Conclusions of this lemma:
Convolution in the case of left-side limited supports:

supp(f) ⊆ [a, +∞), supp(g) ⊆ [b, +∞) ⟹ supp(f ∗ g) ⊆ [a + b, +∞)   (5.35)

Convolution in the case of right-side limited supports:

supp(f) ⊆ (−∞, c], supp(g) ⊆ (−∞, d] ⟹ supp(f ∗ g) ⊆ (−∞, c + d]   (5.36)

Convolution in the case of bounded supports:

supp(f) ⊆ [a, c], supp(g) ⊆ [b, d] ⟹ supp(f ∗ g) ⊆ [a + b, c + d]   (5.37)

In the case of such limited supports, modifications of formula (5.28) can be used to calculate the convolution.

Theorem 5.23. If (f ∗ g)(t) exists almost everywhere, then

supp(f) ⊆ [0, +∞) ⟹ (f ∗ g)(t) = ∫₀^{+∞} f(τ) g(t − τ) dτ   (5.38)

supp(f) ⊆ [0, +∞) ⟹ (f ∗ g)(t) = ∫_{−∞}^{t} f(t − τ) g(τ) dτ   (5.39)

supp(f) ⊆ [0, +∞) and supp(g) ⊆ [0, +∞) ⟹ (f ∗ g)(t) = ∫₀^{t} f(t − τ) g(τ) dτ   (5.40)

supp(f) ⊆ [0, +∞) and supp(g) ⊆ [0, +∞) ⟹ (f ∗ g)(t) = ∫₀^{t} f(τ) g(t − τ) dτ   (5.41)

Theorem 5.24. If f(t) and g(t) are both piecewise continuous functions with supp(f) ⊆ [a, +∞) and supp(g) ⊆ [b, +∞), then their convolution (f ∗ g)(t) is continuous for all t and has the property supp(f ∗ g) ⊆ [a + b, +∞).

Corollary 5.25. If f(t) and g(t) are both piecewise continuous functions with compact support, then their convolution (f ∗ g)(t) exists, also has compact support and is continuous.

Now the convolution of f(t) and g(t) in different function spaces.

Theorem 5.26. If f ∈ L¹_loc(R), g ∈ L¹(R) and supp(g) is bounded, then the convolution (f ∗ g)(t) is defined almost everywhere and belongs to L¹_loc(R).


Compare the last theorem with theorem 5.19 above.

Theorem 5.27. If f ∈ L¹_loc(R), g ∈ L¹(R) and f(t) is a bounded function, then the convolution (f ∗ g)(t) exists for all t ∈ R and belongs to L^∞(R) with

‖f ∗ g‖_∞ ≤ ‖f‖_∞ ‖g‖₁

f ∗ g ∈ L^∞(R) means that (f ∗ g)(t) is an essentially bounded function.

Theorem 5.28. If f ∈ Lᵖ(R) and g ∈ L^q(R) with

p ≥ 1, q ≥ 1 and 1/p + 1/q = 1   (conjugate Hölder exponents),

then (f ∗ g)(t) is defined everywhere. Furthermore (f ∗ g)(t) is then continuous and bounded on R with

‖f ∗ g‖_∞ ≤ ‖f‖_p · ‖g‖_q

Mostly used special cases:
p = 2, q = 2 results in

f, g ∈ L²(R) ⟹ (f ∗ g)(t) is continuous and bounded with ‖f ∗ g‖_∞ ≤ ‖f‖₂ · ‖g‖₂   (5.42)

The convolution of two signals with finite energy is a continuous and bounded function on R.

p = 1, q = ∞ results in

f ∈ L¹(R), g ∈ L^∞(R) ⟹ (f ∗ g)(t) is continuous and bounded with ‖f ∗ g‖_∞ ≤ ‖f‖₁ · ‖g‖_∞   (5.43)

If g(t) is a bounded piecewise continuous function then g ∈ L^∞(R). So an absolutely integrable function can be convolved with a bounded piecewise continuous function; the result is a continuous and bounded function. Filtering with f ∈ L¹(R) means

g(t) ⟼ (f ∗ g)(t).

Such filtering improves the regularity of the input signal g(t).

Let’s consider a further possibility of convolution.

Theorem 5.29. If f ∈ L¹(R) and g ∈ L²(R), then (f ∗ g)(t) exists almost everywhere with f ∗ g ∈ L²(R) and

‖f ∗ g‖₂ ≤ ‖f‖₁ ‖g‖₂

Such a filter f(t) maps a time signal with finite energy onto a time signal with finite energy.

Theorem 5.30 (Derivatives of a convolution). Assume that f ∈ L¹(R) and g ∈ Cᵖ(R). If in addition the functions g^{(k)}(t) are bounded for k = 0, 1, …, p, then

f ∗ g ∈ Cᵖ(R)

and

(f ∗ g)^{(k)} = f ∗ g^{(k)}   for k = 1, 2, …, p

are valid.


Under suitable conditions the Fourier transform of a convolution (f ∗ g)(t) in the time domain is the pointwise product of the two corresponding Fourier transforms:

F_t{ (f ∗ g)(t) }(ω) = F_t{ f(t) }(ω) · F_t{ g(t) }(ω)   (5.44)

Similarly, the Fourier transform of an ordinary product of time signals can be calculated by

F_t{ f(t) · g(t) }(ω) = (1/(2π)) ( F_t{ f(t) } ∗ F_t{ g(t) } )(ω)

under suitable conditions (more in the lectures).
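A small MATLAB sketch (an illustration under assumed grid parameters) approximating the continuous convolution numerically: sampling two rectangular impulses and scaling conv by the time step reproduces the triangular pulse tri = rect ∗ rect that also appears in the table of section 6.

% Sketch: discrete approximation of the continuous convolution rect * rect = tri (assumed grid)
dt = 0.001;
t  = -1.5:dt:1.5;
f  = double(abs(t) <= 0.5);                  % rect(t), height 1, duration 1
g  = f;

h   = conv(f, g) * dt;                       % Riemann-sum approximation of (f*g)(t)
th  = (t(1) + t(1)) : dt : (t(end) + t(end));% time axis of the convolution result
tri = max(1 - abs(th), 0);                   % triangular pulse tri(t) = (rect*rect)(t)
plot(th, h, th, tri, '--'); legend('conv approximation', 'tri(t)')
disp(max(abs(h - tri)))                      % small discretization error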

5.7 Translation, dilation and differentiation of signals

More in the lectures

5.8 Cross-correlation and autocorrelation of signals

compare with http://en.wikipedia.org/wiki/Cross-correlation and http://en.wikipedia.org/wiki/Autocorrelation

More in the lectures


6 Important Fourier Transformation Pairs

Some general properties of the Fourier transformation:

Definition of the Fourier transform:   f̂(ω) = ∫_{−∞}^{∞} f(t) e^{−iωt} dt,   i.e. f̂(ω) = F_t{ f(t) }(ω)
Definition of the inverse Fourier transform:   f(t) = (1/(2π)) ∫_{−∞}^{∞} f̂(ω) e^{iωt} dω,   i.e. f(t) = F_ω^{−1}{ f̂(ω) }(t)

f(t − t₀) —F→ e^{−i t₀ ω} f̂(ω)

f(a t) —F→ (1/|a|) f̂(ω/a)   for a ≠ 0

e^{iω₀ t} f(t) —F→ f̂(ω − ω₀)

f̂(t) —F→ 2π f(−ω)

f^{(n)}(t) —F→ (iω)ⁿ f̂(ω)

(−i t)ⁿ f(t) —F→ f̂^{(n)}(ω)

∫_{−∞}^{t} f(τ) dτ —F→ f̂(ω)/(iω) + π f̂(0) δ(ω)

(f ∗ g)(t) —F→ f̂(ω) ĝ(ω)

f(t) g(t) —F→ (1/(2π)) (f̂ ∗ ĝ)(ω)


Important examples :

Σ_{n=−∞}^{∞} δ(t − nMT) —F→ Σ_{k=−∞}^{∞} e^{−i k MT ω}

Σ_{n=−∞}^{∞} δ(t − nMT) —F→ (2π/(MT)) Σ_{k=−∞}^{∞} δ( ω − k · 2π/(MT) )   (Poisson formula)

H(t) —F→ π δ(ω) − i/ω   (Heaviside function)

tri(t) —F→ sinc²(ω/2)   (triangular pulse: tri(t) = (rect ∗ rect)(t))

δ(t) —F→ 1

1 —F→ 2π δ(ω)

δ(t − t₀) —F→ e^{−i t₀ ω}

e^{iω₀ t} —F→ 2π δ(ω − ω₀)

cos(ω₀ t) —F→ π [ δ(ω + ω₀) + δ(ω − ω₀) ]

sin(ω₀ t) —F→ (π/i) [ δ(ω − ω₀) − δ(ω + ω₀) ]

rect(t) —F→ sinc(ω/2)

rect(t/a) —F→ a sinc(aω/2)   for a > 0

sgn(t) —F→ −2i/ω

sinc(t) —F→ π rect(ω/2)

sinc(πt) —F→ rect( ω/(2π) )

e^{−t²/(2σ²)} —F→ σ √(2π) e^{−σ²ω²/2}

H(t) e^{−αt} —F→ 1/(α + iω)   for α > 0

e^{−α|t|} —F→ 2α/(α² + ω²)   for α > 0

H(t) t e^{−αt} —F→ 1/(α + iω)²   for α > 0

H(t) e^{−αt} cos(ω₀ t) —F→ (α + iω) / ( ω₀² + (α + iω)² )   for α > 0

H(t) e^{−αt} sin(ω₀ t) —F→ ω₀ / ( ω₀² + (α + iω)² )   for α > 0

7 Discrete Fourier Transform (DFT)

7.1 Introduction

Compare: http://en.wikipedia.org/wiki/Discrete_Fourier_transform

The discrete Fourier transform (DFT) is the most important discrete transform, used to realize a practical Fourier analysis for sampled signals in engineering applications. The DFT maps every discrete time signal with N samples onto a special frequency domain representation. The output of the DFT is a discrete signal of the same length N.

The DFT requires a discrete input of finite length N (row or column vector). Such inputs are often created by sampling a continuous signal in a chosen finite time interval of length T. The standard form of this time segment is here [0, T]. Generally an analog signal f(t) is converted to a discrete signal by equally spaced sampling of the continuous signal. See the definitions of sampling period and sampling frequency in
http://en.wikipedia.org/wiki/Sampling_%28signal_processing%29

You have to choose a sufficiently large time duration T in which the given continuous signal f(t) is sampled to f_k = f(t_k), k = 0, 1, 2, …, and a sufficiently small sampling period ∆t, to get a realistic frequency analysis of f(t). But choose T not too large and ∆t not too small (computing time).

Two principal cases for an appropriate choice of T:


a) f(t) is essentially localized in an interval of length T. This means that a sufficiently large part of the signal energy is contained in the chosen interval.

b) If a) is not possible then reduce T in a practical way. But look carefully at your physical and mathematical model and decide whether you can get your expected calculation results with sufficient correctness.

Theoretically the application of the DFT in both cases is connected with a T-periodization.

Example 7.1. As an academic example consider the following signal with continuous domain:

f(t) = e^{−3t} cos(5t − π/6) for t ≥ 0,   f(t) = 0 for t < 0.

You can consider it in some chosen [0, T] so that case a) is valid. This restriction of f(t) is a non-periodic signal. But its T-periodization, which starts in [0, T], becomes of course T-periodic. For our engineering applications only one basic period is of interest. In this period we sample f(t) and get our discrete input signal for the DFT. The discrete output is a frequency spectrum. This is only an academic example, because in practice we do not have a closed-form analytical expression (nor an approximation) before sampling the time signal.

Remark 7.2. If a continuous T-periodic signal f(t) is given by a closed-form analytical expression, for instance by

f(t) = ( t − T/2 )²   for t ∈ [0, T]   and   f(t + T) = f(t) for all t ∈ R,

then we have a special case of b). Now we 'sample' f(t) in the given basic period [0, T] with a sufficiently small ∆t by evaluating f_k = f(k · ∆t), k = 0, 1, 2, …. Then we can calculate the discrete spectrum with the help of Matlab's fft, vide infra. This gives the possibility of quick signal approximation by trigonometric polynomials connected with the corresponding spectral decomposition (compare the chapter Real and complex Fourier series).

The DFT can be computed efficiently in practice by using a fast Fourier transform algorithm (FFT). The term FFT is often used to mean DFT, but DFT refers to a mathematical transformation of a (time) signal and FFT refers to a specific family of algorithms for computing the DFT. FFT algorithms are implemented in many program systems (Matlab, Maple, Mathematica and others).

The FFT provides opportunities for the fast calculation of many practical tasks, for instance
- calculation of the essential frequencies contained in a sampled signal,
- calculation of discrete convolutions with long filters,
- approximation of continuous convolutions and
- calculation of signal correlations.

Remark 7.3. The input of the DFT could be a sampling of some continuous time signal f(t) with the special sampling period ∆t = 1:

f₀ = f(0),  f₁ = f(1),  f₂ = f(2),  … ,  f_{N−1} = f(N − 1)   (7.1)

Put these uniformly spaced samples in a vector f,

f = ( f₀, f₁, … , f_{N−1} ).

The chosen time duration of the continuous signal f(t) is in this special case T = N. In principle the DFT implies an N-periodization of the discrete input signal f (theory):

f_k = f_{k+N}   for all k ∈ Z.

So we would get

f = ( … , f_{−2}, f_{−1}, f₀, f₁, … , f_{N−1}, f_N, f_{N+1}, … )
  = ( … , f_{N−2}, f_{N−1}, f₀, f₁, … , f_{N−1}, f₀, f₁, … )

or, especially in this example,

f = ( … , f(N−2), f(N−1), f(0), f(1), … , f(N−1), f(0), f(1), … ).

With this N-periodization a T-periodization f_p(t) of f(t) is connected, which starts on the interval [0, T]. This results in f_p(t) = f_p(t + T) for all t. If you know the values in one period, then all values are defined. So the next definition is reasonable.

With the imaginary unit i and Ω_N as the primitive N-th root of 1, the following definition is given.

Definition 7.4 (DFT). The sequence of N complex numbers (vector with N components)

f = ( f₀, f₁, … , f_{N−1} )

is transformed into the sequence of N complex numbers

F = ( F₀, F₁, … , F_{N−1} )

by the DFT according to the formula

F_k = Σ_{n=0}^{N−1} f_n Ω_N^{−kn}   with Ω_N = e^{2πi/N},  k = 0, … , N − 1   (7.2)

This is sometimes abbreviated by

F_k = DFT{ f_n }   with k = 0, … , N − 1,  n = 0, … , N − 1.

The DFT (7.2) can also be written in the form

F_k = Σ_{n=0}^{N−1} f_n e^{−(2πi/N) k n},   k = 0, … , N − 1.

Remark 7.5. In principle there is one DFT for every positive integer N, so this N must be carefully chosen at the beginning.
The DFT also implies an N-periodization of its discrete output signal F (theory). From

Ω_N^{−(k+N)n} = Ω_N^{−kn}

you can verify that the formula (7.2) for the DFT is defined for all k ∈ Z with F_{k+N} = F_k. So the frequency domain representation also becomes N-periodic.

Theorem 7.6. The inverse transform of (7.2) exists and is called the inverse discrete Fourier transform (IDFT). It is given by

f_n = (1/N) Σ_{k=0}^{N−1} F_k Ω_N^{kn}   with Ω_N = e^{2πi/N},  n = 0, … , N − 1   (7.3)

(7.3) is sometimes abbreviated by

f_n = IDFT{ F_k }   with k = 0, … , N − 1,  n = 0, … , N − 1.

This IDFT formula (7.3) can also be written in the form

f_n = (1/N) Σ_{k=0}^{N−1} F_k e^{(2πi/N) k n},   n = 0, … , N − 1.
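A direct MATLAB implementation of (7.2) and (7.3) (a sketch for checking purposes only; the test vector is an assumption), compared with the built-in fft and ifft:

% Sketch: direct DFT (7.2) and IDFT (7.3) compared with Matlab's fft/ifft
N  = 8;
f  = randn(1, N);                          % assumed test signal
W  = exp(2i*pi/N);                         % Omega_N

F  = zeros(1, N);
for k = 0:N-1
    for n = 0:N-1
        F(k+1) = F(k+1) + f(n+1) * W^(-k*n);   % note: Matlab indexing starts at 1
    end
end
disp(max(abs(F - fft(f))))                 % should be of the order of machine precision

frec = zeros(1, N);
for n = 0:N-1
    for k = 0:N-1
        frec(n+1) = frec(n+1) + F(k+1) * W^(k*n);
    end
end
frec = frec / N;
disp(max(abs(frec - f)))                   % reconstruction error, also ~ machine precision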

Remark 7.7. Mostly the discrete time signal f is real valued. But the DFT F can be a complex valued vector even if f is real valued.
From

Ω_N^{k(n+N)} = Ω_N^{kn}

it results that the formula (7.3) for the IDFT is defined for all n ∈ Z with f_{n+N} = f_n. So the N-periodicity of the discrete time signals is confirmed by the IDFT.

Remark 7.8. With the N-th standard unit root Ω_N = e^{2πi/N} you get the special Vandermonde matrix F of size (N, N), with general n-th column F(:, n), by

F = [ Ω_N^{−k·n} ]_{k,n = 0, … , N−1}, i.e.

[ Ω_N^{−0·0}       Ω_N^{−0·1}       Ω_N^{−0·2}       …  Ω_N^{−0·(N−1)}       ]
[ Ω_N^{−1·0}       Ω_N^{−1·1}       Ω_N^{−1·2}       …  Ω_N^{−1·(N−1)}       ]
[ Ω_N^{−2·0}       Ω_N^{−2·1}       Ω_N^{−2·2}       …  Ω_N^{−2·(N−1)}       ]
[     ⋮                ⋮                ⋮            ⋱        ⋮               ]
[ Ω_N^{−(N−1)·0}   Ω_N^{−(N−1)·1}   Ω_N^{−(N−1)·2}   …  Ω_N^{−(N−1)·(N−1)}   ]

F(:, n) = ( Ω_N^{−0·(n−1)}, Ω_N^{−1·(n−1)}, Ω_N^{−2·(n−1)}, … , Ω_N^{−(N−1)·(n−1)} )ᵀ.   (7.4)

The column vectors of the matrix F form an orthogonal basis in the space Cᴺ of N-dimensional complex vectors. But this basis is not orthonormal. With the scalar product (4.13) the column vectors of F satisfy

⟨ F(:, n), F(:, m) ⟩ = N · δ(n, m)

where δ(n, m) is the Kronecker delta. A similar property holds for the rows of F.
Hint: F(:, m) is Matlab notation. In a similar way F(k, :) denotes the row with the index k.

By using column vectors f and F for the time sampling and the corresponding frequency spectrum, the DFT (7.2) can be realized in matrix form:

F = F f.

The matrix F is symmetric. Its inverse can be calculated by the formula

F⁻¹ = (1/N) F*

in which F* is the adjoint matrix of F. The adjoint matrix is also called the conjugate transpose or Hermitian transpose matrix.

So you get for F⁻¹ the descriptive representation

F⁻¹ = (1/N) [ Ω_N^{k·n} ]_{k,n = 0, … , N−1}, i.e.

(1/N) ·
[ Ω_N^{0·0}       Ω_N^{0·1}       Ω_N^{0·2}       …  Ω_N^{0·(N−1)}       ]
[ Ω_N^{1·0}       Ω_N^{1·1}       Ω_N^{1·2}       …  Ω_N^{1·(N−1)}       ]
[ Ω_N^{2·0}       Ω_N^{2·1}       Ω_N^{2·2}       …  Ω_N^{2·(N−1)}       ]
[     ⋮               ⋮               ⋮           ⋱        ⋮              ]
[ Ω_N^{(N−1)·0}   Ω_N^{(N−1)·1}   Ω_N^{(N−1)·2}   …  Ω_N^{(N−1)·(N−1)}   ]   (7.5)

The inverse discrete Fourier transform (7.3) can now be realized in matrix form:

f = F⁻¹ F.

Hint: if you have to calculate (7.4) or (7.5) for some N, for instance N = 4 or N = 5, then you can use

Ω_N^{p + k·N} = Ω_N^{p}   for all k ∈ Z

to simplify the work with considerations modulo N.
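The matrix relations above can be checked numerically; the following MATLAB sketch (illustrative assumptions: N = 6 and a random test vector) builds F, verifies F⁻¹ = (1/N) F* and compares F·f with fft(f):

% Sketch: Fourier matrix F, its inverse (1/N)*F', and comparison with fft
N = 6;
W = exp(2i*pi/N);
[k, n] = ndgrid(0:N-1, 0:N-1);
Fmat = W.^(-k.*n);                        % Fourier matrix (7.4)

f = randn(N, 1);                          % assumed test column vector
disp(max(abs(Fmat*f - fft(f))))           % DFT as matrix-vector product
disp(max(max(abs(inv(Fmat) - Fmat'/N))))  % F^{-1} = (1/N) F*  (F' is the conjugate transpose)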

By using the floor function (see Matlab help) we now define a special remainder function which results in non-negative integer values of one indexing period. Here N > 0 is the length of this indexing period:

(x)_N := x − ⌊x/N⌋ · N,   x ∈ Z,  N ∈ N   (7.6)

(x)_N computes the unique non-negative remainder on division of the integer x by the positive integer N. It returns an integer r such that x = q · N + r holds for some integer q. In addition, we have

r = (x)_N ⟹ 0 ≤ r < N.

The interpretation of

(x − y)_N = (x − y) − ⌊(x − y)/N⌋ · N

is clear: calculate first the difference and then the remainder function.
With the function (7.6) we can define operations on infinite discrete periodic sequences by restricting them to one basic period of length N (vector with N components):

f_n = ( f₀, f₁, … , f_{N−1} ),   n = 0, 1, … , N − 1.

Simple circular right shift of f_n:

f_{(n−1)_N} = ( f_{N−1}, f₀, f₁, … , f_{N−2} )

Simple circular left shift of f_n:

f_{(n+1)_N} = ( f₁, f₂, … , f_{N−1}, f₀ )

k-times circular right shift of f_n:

f_{(n−k)_N} = ( f_{N−k}, f_{N−k+1}, … , f₀, … , f_{N−k−1} )

k-times circular left shift of f_n:

f_{(n+k)_N} = ( f_k, f_{k+1}, … , f_{k−2}, f_{k−1} )

In the last two representations k > 1 is chosen.


7.2 Properties of the Discrete Fourier Transform

Let f_n, g_n, h_n be discrete time signals with the same length N and index variable n = 0, 1, …, N − 1. The corresponding DFT spectra F_k, G_k, H_k with index variable k = 0, 1, …, N − 1 are then also of length N. As above we use

Ω_N = e^{2πi/N}.

Under these assumptions the following definitions and properties hold.
Definitions:

a) Circular convolution
We call h_n the circular convolution of f_n and g_n and write h_n = f_n ⊛ g_n if

h_n = Σ_{m=0}^{N−1} f_m g_{(n−m)_N}   (7.7)

The circular convolution is a commutative operation. If f_n and g_n do not have the same length, then define N = max(N_f, N_g) with N_f = length(f_n) and N_g = length(g_n), and subsequently apply zero-padding to the shorter signal.

b) Reversal
The signal g_n = f_{(−n)_N} is called the reversal of f_n. The reversal is defined as the circular reversal. By this special notion it is the time-reversal; the frequency-reversal is defined similarly.

c) Circular symmetry
The signal f_n is called circular symmetric if f_n = f_{(−n)_N}.   (7.8)

Properties of the DFT:

• Linearity of the DFT transform

DFT{ α f_n + β g_n } = α DFT{ f_n } + β DFT{ g_n }   (7.9)

• Circular shift in time domain ⇔ modulation in frequency domain

DFT{ f_{(n−m)_N} } = Ω_N^{−m·k} · F_k   for every fixed m ∈ Z   (7.10)

• Modulation in time domain ⇔ circular shift in frequency domain

DFT{ Ω_N^{m·n} · f_n } = F_{(k−m)_N}   for every fixed m ∈ Z   (7.11)

• Reversal in time domain ⇔ reversal in frequency domain

DFT{ f_{(−n)_N} } = F_{(−k)_N}   (7.12)

• Complex conjugation in time domain ⇔ complex conjugation of the reversal in frequency domain

DFT{ \overline{f_n} } = \overline{ F_{(−k)_N} }   (7.13)

• If f_n is real valued then the conjugate spectrum is equal to the frequency-reversal:

\overline{F_k} = F_{(−k)_N}   (7.14)

• If f_n is real valued and circular symmetric then so is F_k:

f_n ∈ R, f_n = f_{(−n)_N}  ⟺  F_k ∈ R, F_k = F_{(−k)_N}   (7.15)

• Circular convolution in time domain ⇔ multiplication in frequency domain

DFT{ f_n ⊛ g_n } = F_k · G_k   (7.16)

• Multiplication in time domain ⇔ circular convolution in frequency domain

DFT{ f_n · g_n } = (1/N) F_k ⊛ G_k   (7.17)

• Parseval's theorem for the DFT

Σ_{n=0}^{N−1} f_n \overline{g_n} = (1/N) Σ_{k=0}^{N−1} F_k \overline{G_k}   (7.18)
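A short MATLAB sketch (with an assumed random test pair) verifying three of the properties above numerically: the circular convolution theorem (7.16), the circular shift property (7.10) and Parseval's theorem (7.18).

% Sketch: numerical check of DFT properties (7.10), (7.16), (7.18) for assumed test data
N = 16;
f = randn(1, N);  g = randn(1, N);
F = fft(f);       G = fft(g);

% (7.16) circular convolution <-> multiplication of the spectra
h = zeros(1, N);
for n = 0:N-1
    for m = 0:N-1
        h(n+1) = h(n+1) + f(m+1) * g(mod(n-m, N) + 1);
    end
end
disp(max(abs(fft(h) - F.*G)))

% (7.10) circular shift <-> modulation
m  = 3;
fs = circshift(f, [0 m]);                 % f_{(n-m)_N}
k  = 0:N-1;
disp(max(abs(fft(fs) - exp(-2i*pi*m*k/N).*F)))

% (7.18) Parseval
disp(abs(sum(f.*conj(g)) - sum(F.*conj(G))/N))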

7.3 Some basic hints

Now we put together some basic essentials.

• T is the time duration of the signal with continuous domain (practical choice).

• [0, T] is the continuous interval in which the signal is sampled. This sampling results in a discrete signal.

• ∆t is the time sampling period. It is the time between neighbouring samples. This can also be considered as the time resolution of the measurement.

• 1/∆t is the sampling frequency. The sampling frequency (also sampling rate or sample rate) defines the number of samples per unit of time (usually second, abbr.: sec) taken from a continuous signal to make a discrete signal. For time-domain signals the unit of the sampling rate is hertz (Hz = sec⁻¹). Sometimes the notion Sa/sec (samples per second) is used.

• N is the number of sampling points:

t₀ = 0,  t₁ = ∆t,  … ,  t_{N−1} = (N − 1) · ∆t

t_N = T = N · ∆t is not a member of the sampling domain. With t_N a new period starts (theory).

• In Matlab the indexing of vectors starts with 1, so we get there

t[1] = 0,  t[2] = ∆t,  … ,  t[N] = (N − 1) · ∆t

The measured values (of elongation, force, acceleration, velocity, et cetera) at the discrete points

f₀ = f(0),  f₁ = f(∆t),  … ,  f_{N−1} = f((N − 1) · ∆t)

become in Matlab indexing

f[1] = f(0),  f[2] = f(∆t),  … ,  f[N] = f((N − 1) · ∆t)

So you have to add 1 to the index if you implement the corresponding formulas.


• Nyquist-Shannon sampling theorem
The perfect reconstruction of a signal f(t) from its sampled version f(k · ∆t) is possible when the sampling frequency is greater than twice the maximum frequency of the signal f(t).
\[
\xi_{\max} \; \dots \; \text{maximum frequency contained in } f(t)
\]
\[
T_m = \frac{1}{\xi_{\max}} \; \dots \; \text{the corresponding period duration}
\]
\[
\xi_{Sa} = \frac{1}{\Delta t} \; \dots \; \text{sampling frequency}
\]
The perfect reconstruction property is fulfilled
\[
\text{if } \; \xi_{Sa} > 2 \cdot \xi_{\max} \quad \text{or if} \quad 2 \cdot \Delta t < T_m
\]
For the aliasing problem and the definition of the Nyquist frequency, compare
http://en.wikipedia.org/wiki/Sampling_frequency
http://en.wikipedia.org/wiki/Nyquist_frequency

• fft is a Matlab function for an FFT, a fast algorithm for calculating the DFT. Application of fft to the vector
\[
f = \bigl( f[1], f[2], \dots , f[N] \bigr)
\]
results in a vector with complex components
\[
\mathrm{fft}(f) = \bigl( F[1], F[2], \dots , F[N] \bigr)
\]

• The Matlab-implemented fft corresponds to the standard definition of the DFT (7.2) and to the properties formulated between (7.9) and (7.18). For practical applications this means that the discrete time signal f is treated as if it were sampled with the sampling period
\[
\Delta t = 1 .
\]
The frequency resolution becomes in this standard case
\[
\Delta\xi = \frac{1}{T} = \frac{1}{N \cdot \Delta t} = \frac{1}{N} .
\]

• For a general ∆t, however, the frequency resolution is
\[
\Delta\xi = \frac{1}{T} = \frac{1}{N \cdot \Delta t} \tag{7.19}
\]
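As referenced above, a minimal Matlab sketch of the sampling and indexing conventions in this list; the duration T, the number of samples N and the test signal are illustrative assumptions:

T  = 2;                        % assumed signal duration in seconds
N  = 256;                      % assumed number of sampling points
dt = T / N;                    % sampling period Delta t
t  = (0:N-1) * dt;             % t[1] = 0, ..., t[N] = (N-1)*dt (Matlab indexing)

f  = sin(2*pi*5*t) + 0.5*sin(2*pi*20*t);   % example signal with 5 Hz and 20 Hz components

F   = fft(f);                  % standard DFT (7.2), i.e. the Delta t = 1 scaling
dxi = 1 / (N * dt);            % frequency resolution (7.19)
xi  = (0:N-1) * dxi;           % frequency axis; entries above the Nyquist frequency
                               % 1/(2*dt) belong to negative frequencies (see below)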

7.4 Application of the DFT for generally sampling period

In most books and implemented programs the DFT is given without the sampling period ∆t as a scaling factor. In this case all relations are considered for the special case ∆t = 1 only. For instance, the FFT algorithm fft in Matlab is connected with ∆t = 1. If you use Matlab, note that
\[
\mathrm{DFT}\{f_n\} = \mathrm{fft}(f_n) \quad \text{with formula (7.2)}
\]
We have also introduced this standard to avoid confusion. But in the investigation of oscillating structures the scaling factor ∆t has to be kept in mind; otherwise you would get unrealistic frequency amplitudes. This is especially important if you want to compare measurements with different sampling periods. Therefore we now introduce a ∆t-scaled variant of the DFT.


For every fixed time sampling period ∆t > 0 a DFT is given by
\[
F_k = \mathrm{DFT}\{f_n , \Delta t\} = \Delta t \cdot \sum_{n=0}^{N-1} f_n \, \Omega_N^{-kn}
\qquad \text{with } \Omega_N = e^{\frac{2\pi i}{N}}, \; k = 0, \dots, N-1 \tag{7.20}
\]

The comparison with (7.2) provides
\[
\mathrm{DFT}\{f_n , 1\} = \mathrm{DFT}\{f_n\} .
\]
In general the spectrum F_k is now scaled differently. For ∆t < 1 the magnitudes |F_k| become smaller. This is more realistic if N is very large. With the fft function in Matlab we can realize (7.20) by the scaling factor ∆t:
\[
\mathrm{DFT}\{f_n , \Delta t\} = \Delta t \cdot \mathrm{DFT}\{f_n\} = \Delta t \cdot \mathrm{fft}(f_n)
\]

The inverse transform of (7.20) is given by
\[
f_n = \mathrm{IDFT}\{F_k , \Delta\xi\} = \Delta\xi \sum_{k=0}^{N-1} F_k \, \Omega_N^{kn}
\qquad \text{with } \Delta\xi = \frac{1}{T} = \frac{1}{N \cdot \Delta t}, \; n = 0, \dots, N-1 \tag{7.21}
\]
where ∆ξ is the frequency resolution. A Matlab sketch of the pair (7.20)-(7.21) is given below. How do the DFT formulas (7.9) to (7.18) change if we replace the transformation pair (7.2)-(7.3) by (7.20)-(7.21)?
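As announced, a minimal Matlab sketch of the ∆t-scaled pair (7.20)-(7.21), realized with the built-in functions fft and ifft; the sampling period and the test signal are illustrative assumptions:

dt  = 0.01;                    % assumed sampling period in seconds
t   = (0:511) * dt;  N = length(t);
f   = cos(2*pi*10*t);          % example signal, 10 Hz

Fscaled = dt * fft(f);         % Delta t - scaled DFT (7.20)

% Inverse (7.21): ifft already contains the factor 1/N, and dxi * N = 1/dt,
% so dividing by dt inverts the scaled transform.
frec   = ifft(Fscaled) / dt;   % recovers f up to rounding errors
maxErr = max(abs(frec - f));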

Properties of the ∆t-scaled DFT (7.20):

• Linearity of the DFT-Transform
\[
\mathrm{DFT}\{\alpha f_n + \beta g_n , \Delta t\} = \alpha\, \mathrm{DFT}\{f_n , \Delta t\} + \beta\, \mathrm{DFT}\{g_n , \Delta t\} \tag{7.22}
\]

• Circular shift in time domain ⇔ Modulation in frequency domain
\[
\mathrm{DFT}\{ f_{(n-m)_N} , \Delta t \} = \Omega_N^{-m k} \cdot F_k \quad \text{for every fixed } m \in \mathbb{Z} \tag{7.23}
\]

• Modulation in time domain ⇔ Circular shift in frequency domain
\[
\mathrm{DFT}\{ \Omega_N^{m n} \cdot f_n , \Delta t \} = F_{(k-m)_N} \quad \text{for every fixed } m \in \mathbb{Z} \tag{7.24}
\]

• Reversal in time domain ⇔ Reversal in frequency domain
\[
\mathrm{DFT}\{ f_{(-n)_N} , \Delta t \} = F_{(-k)_N} \tag{7.25}
\]

• Complex conjugation in time domain ⇔ Complex conjugation of reversal in frequency domain
\[
\mathrm{DFT}\{ \overline{f_n} , \Delta t \} = \overline{F_{(-k)_N}} \tag{7.26}
\]

• If f_n is real valued, then the conjugate spectrum is equal to the frequency-reversal.
\[
\overline{F_k} = F_{(-k)_N} \tag{7.27}
\]

• If f_n is real valued and circular symmetric, then so is F_k.
\[
f_n \in \mathbb{R},\; f_n = f_{(-n)_N} \;\Longleftrightarrow\; F_k \in \mathbb{R},\; F_k = F_{(-k)_N} \tag{7.28}
\]


• Circular convolution in time domain ⇔ Multiplication in frequency domain
\[
\Delta t \cdot \mathrm{DFT}\{ f_n \circledast g_n , \Delta t \} = F_k \cdot G_k \tag{7.29}
\]

• Multiplication in time domain ⇔ Circular convolution in frequency domain
\[
\mathrm{DFT}\{ f_n \cdot g_n , \Delta t \} = \Delta\xi \; F_k \circledast G_k \qquad \text{with } \Delta\xi = \frac{1}{T} = \frac{1}{N \cdot \Delta t} \tag{7.30}
\]

• Parseval's Theorem for the DFT
\[
\Delta t \sum_{n=0}^{N-1} f_n \, \overline{g_n} = \Delta\xi \sum_{k=0}^{N-1} F_k \, \overline{G_k} \qquad \text{with } \Delta\xi = \frac{1}{T} = \frac{1}{N \cdot \Delta t} \tag{7.31}
\]

If we change the scaling in the standard DFT (7.2) by the factor ∆t, then compared with the set of properties (7.9) to (7.18) only the last three formulas (7.29), (7.30) and (7.31) are changed.

Interpretation of the DFT spectrum for even N

With N_y = N/2 the Nyquist frequency becomes
\[
\xi_{ny} = N_y \cdot \Delta\xi = \frac{N}{2} \cdot \Delta\xi = \frac{N}{2} \cdot \frac{1}{N \cdot \Delta t} = \frac{1}{2 \cdot \Delta t} = \frac{1}{2} \cdot \left( \frac{1}{\Delta t} \right)
\]
The Nyquist frequency is half of the sampling frequency. You can now interpret the spectrum F_k of a ∆t-sampled time signal f_n by
\[
F_0 = F(0), \; F_1 = F(\Delta\xi), \; \dots , \; F_{N_y-1} = F((N_y-1)\cdot\Delta\xi), \; F_{N_y} = F(N_y\cdot\Delta\xi) = F(\xi_{ny}),
\]
\[
F_{N_y+1} = F((1-N_y)\cdot\Delta\xi), \; F_{N_y+2} = F((2-N_y)\cdot\Delta\xi), \; \dots , \; F_{N-2} = F(-2\cdot\Delta\xi), \; F_{N-1} = F(-\Delta\xi)
\]
So for k > N_y = N/2 the spectral values F_k are connected with the negative frequencies
\[
(1-N_y)\cdot\Delta\xi, \; (2-N_y)\cdot\Delta\xi, \; \dots , \; -2\cdot\Delta\xi, \; -\Delta\xi
\]
This is a consequence of the N-periodicity of F_k. In Matlab indexing you get
\[
F[1] = F(0), \; F[2] = F(\Delta\xi), \; \dots , \; F[N_y+1] = F(\xi_{ny}), \; F[N_y+2] = F((1-N_y)\cdot\Delta\xi), \; \dots , \; F[N] = F(-\Delta\xi)
\]
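A minimal Matlab sketch of this index-to-frequency mapping for even N; the values of N and ∆t are illustrative assumptions, and fftshift is the built-in function that reorders a spectrum so that the zero frequency lies in the centre:

N   = 8;  dt = 0.1;            % assumed even N and sampling period
dxi = 1 / (N * dt);
Ny  = N / 2;

xi  = dxi * [0:Ny, (1-Ny):-1]; % frequencies belonging to F[1], ..., F[N]
% xi = [0, dxi, ..., Ny*dxi, (1-Ny)*dxi, ..., -dxi], i.e. k > Ny means negative frequency

% Equivalently, fftshift(fft(f)) reorders a spectrum to the ordered
% frequency axis dxi * (-Ny : Ny-1).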

Remark 7.9. In general the sampled time signals in our applications are real. By (7.27) you then get
\[
\overline{F_k} = F_{(-k)_N} .
\]
In the N-periodization this is connected with an even real part and an odd imaginary part. Now all frequency information is contained in the left half of the ∆t-scaled output spectrum of Matlab:
\[
\Delta t \cdot F[1] = F_0, \;\; \Delta t \cdot F[2] = F_1, \;\; \dots , \;\; \Delta t \cdot F[N_y+1] = F_{N_y} \qquad \text{with } N_y = \frac{N}{2}
\]
\[
F_0 = F(0), \;\; F_1 = F(\Delta\xi), \;\; \dots , \;\; F_{N_y} = F(N_y \cdot \Delta\xi) = F(\xi_{ny})
\]
Often only sections of this half of the spectrum are plotted to get a better visual frequency resolution.
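A minimal Matlab sketch of such a one-sided amplitude plot of the ∆t-scaled spectrum; the test signal and its parameters are assumptions for illustration:

dt  = 0.005;  N = 1000;                  % assumed sampling period and (even) length
t   = (0:N-1) * dt;
g   = 3*sin(2*pi*12*t) + sin(2*pi*35*t); % test signal with 12 Hz and 35 Hz components

G   = dt * fft(g);                       % Delta t - scaled spectrum (7.20)
dxi = 1 / (N * dt);
Ny  = N / 2;
xi  = (0:Ny) * dxi;                      % one-sided frequency axis up to the Nyquist frequency

plot(xi, abs(G(1:Ny+1)));                % amplitude spectrum of the left half
xlabel('frequency in Hz'); ylabel('|G|');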


Remark 7.10. In applications it is often necessary to filter ∆t-sampled time signals g_n of finite length N. Such filtering can be realized by circular convolution:
\[
g_n \;\longmapsto\; f_n \circledast g_n
\]
In the spectral domain of the ∆t-scaled DFT the filter acts as a pointwise multiplication:

\[
G_k \;\longmapsto\; \frac{1}{\Delta t}\, F_k \cdot G_k
\]

Compare (7.29). If f_n is real valued and circular symmetric, then so is the spectrum F_k; compare (7.28). By using the sampling function sinc and circular shifting in the time domain you can construct circular symmetric filters f_n with the property
\[
\frac{1}{\Delta t}\, F_k =
\begin{cases}
1 & \text{for } 0 \le k \le N_f \\
1 & \text{for } N - N_f \le k \le N-1 \\
0 & \text{for } N_f < k < N - N_f
\end{cases}
\qquad \text{with } N_f < \frac{N}{2} .
\]

With the corresponding time representation
\[
h_n = \frac{1}{\Delta t}\, f_n
\]
you can now realize an ideal distortion-free low-pass filtering in a computer program. Distortion-free means that the phase spectrum of the signal G_k is not changed by the filtering. The corresponding frequency band is given by
\[
\bigl[\, 0 ,\ \xi_f \,\bigr] \qquad \text{with } \xi_f = N_f \cdot \Delta\xi
\]
Verify that the spectrum H_k = (1/∆t) F_k is circular symmetric. A Matlab sketch of such a filter is given below.
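A minimal Matlab sketch of such a distortion-free low-pass filter, realized directly in the frequency domain; the cut-off and the test signal are illustrative assumptions, and by (7.29) the pointwise multiplication below is equivalent to the circular convolution with the corresponding filter f_n:

dt  = 0.002;  N = 500;                       % assumed sampling period and length
t   = (0:N-1) * dt;
g   = sin(2*pi*10*t) + 0.8*sin(2*pi*80*t);   % 10 Hz component plus 80 Hz disturbance

dxi = 1 / (N * dt);
Nf  = round(30 / dxi);                       % keep the band [0, 30 Hz] (assumed cut-off), Nf < N/2

H           = zeros(1, N);                   % H_k = (1/dt) * F_k, circular symmetric
H(1:Nf+1)   = 1;                             % k = 0, ..., Nf
H(N-Nf+1:N) = 1;                             % k = N-Nf, ..., N-1

Gfilt = H .* fft(g);                         % pointwise multiplication in the frequency domain
gfilt = real(ifft(Gfilt));                   % filtered, phase-undistorted time signal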

The real part (Re or ℜ) and imaginary part (Im or ℑ) of this discrete spectrum, the amplitude spectrum (magnitude) and the phase spectrum (argument) are defined similarly to the case of the continuous Fourier transform.
\[
\text{Amplitude spectrum:} \quad |F| = \bigl( |F_0| , |F_1| , \dots , |F_{N-1}| \bigr)
\qquad \text{with} \quad |F_k| = \sqrt{ [\Re(F_k)]^2 + [\Im(F_k)]^2 }
\]
and
\[
\text{Phase spectrum:} \quad \varphi = \arg(F) = \bigl( \varphi_0 , \varphi_1 , \dots , \varphi_{N-1} \bigr)
\qquad \text{with} \quad \varphi_k = \arg(F_k) = \operatorname{atan2}\bigl( \Im(F_k), \Re(F_k) \bigr)
\]
Here atan2 is the two-argument form of the arctan function, see http://en.wikipedia.org/wiki/Atan2. For considering energy relations between the time domain and the frequency domain use Parseval's equation for the scaled DFT (7.31). Remember: ∆t = 1 corresponds to the standard formulation. A Matlab sketch of these spectra is given below.
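A minimal Matlab sketch of the amplitude and phase spectrum; the test signal is an illustrative assumption, and the built-in function angle gives the same result as the atan2 form used here:

dt = 0.01;  t = (0:255) * dt;
f  = cos(2*pi*8*t + pi/4);                % example signal with a phase shift

F   = dt * fft(f);                        % Delta t - scaled spectrum (7.20)
amp = abs(F);                             % amplitude spectrum |F_k|
phi = atan2(imag(F), real(F));            % phase spectrum, two-argument arctan
% phi = angle(F);                         % built-in equivalent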


8 Lecture Graphs - Part 1

Example 1.1 – Example 1.10

9 Lecture Graphs - Part 2

Example 2.1 – Example 2.13

10 Lecture Graphs - Part 3

Example 3.1 – Example 3.3
