Solution Manual for Digital Signal Processing: System Analysis and Design (Paulo S. R. Diniz et al.), the solution manual of a well-known textbook, which may be helpful for communication engineering students.
Solutions of the Book
DIGITAL SIGNAL PROCESSING
System Analysis and Design

Paulo S. R. Diniz
Eduardo A. B. da Silva
Sergio L. Netto

Markus Vinícius Santos Lima
Wallace Alves Martins
Lisandro Lovisolo
Andre da Rocha Vassali
Miguel Benedito Furtado Jr.
Paulo Vitor Magacho da Silva
Rio de Janeiro, Brazil
Typeset with LaTeX.
DISCLAIMER NOTICE
The information in this document is subject to change without notice and is intended to be used as complementary material to the textbook Digital Signal Processing: System Analysis and Design by Diniz, da Silva, and Netto. These solutions cover the proposed exercises of the textbook and are intended only for those who purchased the textbook; no part of this material may be reproduced, distributed, or transmitted in any form without the prior agreement of the textbook authors. The material has been produced to serve as support for instructors of graduate and undergraduate courses that use the textbook.
The authors welcome any comments, corrections, or reviews as part of the process of improving this material. These comments can be e-mailed to [email protected].
The authors will not be responsible in any event for errors in this document or for any damages, incidental or consequential (including monetary losses), that might arise from the use of this document or the information in it.
The document and all its parts are protected by copyright according to the applicable laws. Product names mentioned in this document may be trademarks of their respective companies; they are mentioned for identification purposes only.
Copyright © 2010
Contents
1 DISCRETE-TIME SYSTEMS
    Exercise 1.1 (a)-(k)
    Exercise 1.2 (a)-(f)
    Exercise 1.3 (a)-(b)
    Exercise 1.4 (a)-(c)
    Exercise 1.5
    Exercise 1.6
    Exercise 1.7
    Exercise 1.8
    Exercise 1.9 (a)-(b)
    Exercise 1.10
    Exercise 1.11 (a)-(e)
    Exercise 1.12 (a)-(b)
    Exercise 1.13
    Exercise 1.14 (a)-(d)
    Exercise 1.15 (a)-(b)
    Exercise 1.16
    Exercise 1.17
    Exercise 1.18 (a)-(c)
    Exercise 1.19 (a)-(c)
    Exercise 1.20 (a)-(f)
    Exercise 1.21
    Exercise 1.22
    Exercise 1.23
    Exercise 1.24 (a)-(c)
    Exercise 1.25
    Exercise 1.26
    Exercise 1.27
    Exercise 1.28
    Exercise 1.29
    Exercise 1.30 (a)-(c)
    Exercise 1.31
    Exercise 1.32
    Exercise 1.33 (a)-(b)
2 THE z AND FOURIER TRANSFORMS
    Exercise 2.1 (a)-(g)
    Exercise 2.2
    Exercise 2.3
    Exercise 2.4
    Exercise 2.5
    Exercise 2.6 (a)-(e)
    Exercise 2.7 (a)-(b)
    Exercise 2.8
    Exercise 2.9
    Exercise 2.10 (a)-(c)
    Exercise 2.11 (a)-(b)
    Exercise 2.12
    Exercise 2.13
    Exercise 2.14 (a)-(c)
    Exercise 2.15 (a)-(b)
    Exercise 2.16
    Exercise 2.17
    Exercise 2.18 (a)-(b)
    Exercise 2.19
    Exercise 2.20
    Exercise 2.21 (a)-(g)
    Exercise 2.22
    Exercise 2.23 (a)-(b)
    Exercise 2.24
    Exercise 2.25 (a)-(d)
    Exercise 2.26 (a)-(d)
    Exercise 2.27
    Exercise 2.28
    Exercise 2.29
    Exercise 2.30
    Exercise 2.31
    Exercise 2.32
    Exercise 2.33
3 DISCRETE TRANSFORMS
    Exercise 3.1 (a)-(b)
    Exercise 3.2
    Exercise 3.3
    Exercise 3.4 (a)-(d)
    Exercise 3.5 (a)-(b)
    Exercise 3.6 (a)-(b)
    Exercise 3.7 (a)-(c)
    Exercise 3.8
    Exercise 3.9
    Exercise 3.10
    Exercise 3.11
    Exercise 3.12 (a)-(b)
    Exercise 3.13 (a)-(d)
    Exercise 3.14
    Exercise 3.15
    Exercise 3.16 (a)-(b)
    Exercise 3.17
    Exercise 3.18
    Exercise 3.19
    Exercise 3.20
    Exercise 3.21
    Exercise 3.22
    Exercise 3.23
    Exercise 3.24
    Exercise 3.25
    Exercise 3.26
    Exercise 3.27
    Exercise 3.28
    Exercise 3.29
    Exercise 3.30
    Exercise 3.31 (a)-(c)
    Exercise 3.32
    Exercise 3.33
    Exercise 3.34
4 DIGITAL FILTERS
    Exercise 4.1 (a)-(b)
    Exercise 4.2 (a)-(c)
    Exercise 4.4 (a)-(c)
    Exercise 4.5
    Exercise 4.6 (a)-(c)
    Exercise 4.14 (a)-(c)
    Exercise 4.15 (a)-(c)
    Exercise 4.16 (a)-(b)
    Exercise 4.17 (a)-(b)
    Exercise 4.18
    Exercise 4.19
    Exercise 4.20
    Exercise 4.31
5 FIR FILTER APPROXIMATIONS
    Exercise 5.2
    Exercise 5.3 (a)-(c)
    Exercise 5.4
    Exercise 5.5
    Exercise 5.6
    Exercise 5.7
    Exercise 5.8
    Exercise 5.9
    Exercise 5.12
    Exercise 5.13
    Exercise 5.14 (a)-(c)
    Exercise 5.15
    Exercise 5.16
    Exercise 5.17
    Exercise 5.18 (a)-(c)
    Exercise 5.19
    Exercise 5.20
    Exercise 5.23
    Exercise 5.25
    Exercise 5.26
    Exercise 5.28
6 IIR FILTER APPROXIMATIONS 161
Exercise 6.3 . . . . . . . . . . 161
Exercise 6.5 . . . . . . . . . . 162
Exercise 6.6 . . . . . . . . . . 165
Exercise 6.7 . . . . . . . . . . 168
Exercise 6.13 . . . . . . . . . . 171
Exercise 6.18 . . . . . . . . . . 174
Exercise 6.19 . . . . . . . . . . 174
Exercise 6.20 . . . . . . . . . . 177
    Exercise 6.20.(a) . . . . . . . . . . 177
    Exercise 6.20.(b) . . . . . . . . . . 180
Exercise 6.23 . . . . . . . . . . 182
Exercise 6.24 . . . . . . . . . . 183
Exercise 6.25 . . . . . . . . . . 184
Exercise 6.26 . . . . . . . . . . 185
Exercise 6.28 . . . . . . . . . . 187
7 SPECTRAL ESTIMATION 191
Exercise 7.1 . . . . . . . . . . 191
Exercise 7.2 . . . . . . . . . . 193
Exercise 7.3 . . . . . . . . . . 195
Exercise 7.4 . . . . . . . . . . 199
Exercise 7.5 . . . . . . . . . . 199
Exercise 7.6 . . . . . . . . . . 202
Exercise 7.7 . . . . . . . . . . 203
Exercise 7.8 . . . . . . . . . . 203
    Exercise 7.8.(a) . . . . . . . . . . 203
    Exercise 7.8.(b) . . . . . . . . . . 203
Exercise 7.9 . . . . . . . . . . 203
Exercise 7.10 . . . . . . . . . . 206
Exercise 7.11 . . . . . . . . . . 208
Exercise 7.12 . . . . . . . . . . 210
    Exercise 7.12.(a) . . . . . . . . . . 210
    Exercise 7.12.(b) . . . . . . . . . . 210
    Exercise 7.12.(c) . . . . . . . . . . 210
Exercise 7.13 . . . . . . . . . . 210
Exercise 7.14 . . . . . . . . . . 212
Exercise 7.15 . . . . . . . . . . 213
Exercise 7.16 . . . . . . . . . . 215
Exercise 7.17 . . . . . . . . . . 216
Exercise 7.18 . . . . . . . . . . 217
Exercise 7.19 . . . . . . . . . . 217
Exercise 7.20 . . . . . . . . . . 219
Exercise 7.21 . . . . . . . . . . 220
    Exercise 7.21.(a) . . . . . . . . . . 220
    Exercise 7.21.(b) . . . . . . . . . . 221
    Exercise 7.21.(c) . . . . . . . . . . 223
Exercise 7.22 . . . . . . . . . . 224
Exercise 7.23 . . . . . . . . . . 224
    Exercise 7.23.(a) . . . . . . . . . . 225
    Exercise 7.23.(b) . . . . . . . . . . 226
    Exercise 7.23.(c) . . . . . . . . . . 227
Exercise 7.24 . . . . . . . . . . 228
Exercise 7.25 . . . . . . . . . . 230
Exercise 7.26 . . . . . . . . . . 230
    Exercise 7.26.(a) . . . . . . . . . . 230
    Exercise 7.26.(b) . . . . . . . . . . 230
    Exercise 7.26.(c) . . . . . . . . . . 231
Exercise 7.27 . . . . . . . . . . 232
Exercise 7.28 . . . . . . . . . . 234
Exercise 7.29 . . . . . . . . . . 235
Exercise 7.30 . . . . . . . . . . 235
Exercise 7.31 . . . . . . . . . . 237
8 MULTIRATE SYSTEMS 239
Exercise 8.1 . . . . . . . . . . 239
Exercise 8.2 . . . . . . . . . . 240
Exercise 8.4 . . . . . . . . . . 241
    Exercise 8.4.(a) . . . . . . . . . . 241
    Exercise 8.4.(b) . . . . . . . . . . 241
Exercise 8.5 . . . . . . . . . . 241
Exercise 8.11 . . . . . . . . . . 243
Exercise 8.12 . . . . . . . . . . 243
Exercise 8.13 . . . . . . . . . . 245
Exercise 8.14 . . . . . . . . . . 245
Exercise 8.15 . . . . . . . . . . 250
Exercise 8.16 . . . . . . . . . . 252
Exercise 8.17 . . . . . . . . . . 254
Exercise 8.31 . . . . . . . . . . 260
Exercise 8.32 . . . . . . . . . . 262

9 FILTER BANKS 265
Exercise 9.1 . . . . . . . . . . 265
Exercise 9.2 . . . . . . . . . . 265
Exercise 9.3 . . . . . . . . . . 265
Exercise 9.4 . . . . . . . . . . 266
Exercise 9.5 . . . . . . . . . . 267
Exercise 9.6 . . . . . . . . . . 268
Exercise 9.7 . . . . . . . . . . 270
Exercise 9.8 . . . . . . . . . . 272
Exercise 9.9 . . . . . . . . . . 273
Exercise 9.10 . . . . . . . . . . 273
Exercise 9.11 . . . . . . . . . . 274
Exercise 9.12 . . . . . . . . . . 274
Exercise 9.13 . . . . . . . . . . 274
Exercise 9.14 . . . . . . . . . . 276
Exercise 9.15 . . . . . . . . . . 279
Exercise 9.16 . . . . . . . . . . 279
Exercise 9.17 . . . . . . . . . . 285
Exercise 9.18 . . . . . . . . . . 285
Exercise 9.19 . . . . . . . . . . 286
Exercise 9.20 . . . . . . . . . . 288
Exercise 9.21 . . . . . . . . . . 288
Exercise 9.22 . . . . . . . . . . 288
Exercise 9.23 . . . . . . . . . . 289
Exercise 9.24 . . . . . . . . . . 291
Exercise 9.25 . . . . . . . . . . 291
Exercise 9.26 . . . . . . . . . . 292
Exercise 9.27 . . . . . . . . . . 295
Exercise 9.28 . . . . . . . . . . 296
Exercise 9.29 . . . . . . . . . . 296
Exercise 9.30 . . . . . . . . . . 296
Exercise 9.31 . . . . . . . . . . 296
Exercise 9.32 . . . . . . . . . . 296
Exercise 9.33 . . . . . . . . . . 302
Exercise 9.34 . . . . . . . . . . 302
Exercise 9.35 . . . . . . . . . . 302
Exercise 9.36 . . . . . . . . . . 302
Exercise 9.37 . . . . . . . . . . 302
Exercise 9.38 . . . . . . . . . . 304

10 WAVELETS 305
Exercise 10.1 . . . . . . . . . . 305
Exercise 10.2 . . . . . . . . . . 306
    Exercise 10.2.(a) . . . . . . . . . . 306
    Exercise 10.2.(b) . . . . . . . . . . 306
11 FINITE PRECISION EFFECTS 309
Exercise 11.1 . . . . . . . . . . 309
Exercise 11.2 . . . . . . . . . . 309
    Exercise 11.2.(a) . . . . . . . . . . 309
    Exercise 11.2.(b) . . . . . . . . . . 311
Exercise 11.3 . . . . . . . . . . 313
Exercise 11.4 . . . . . . . . . . 314
    Exercise 11.4.(a) . . . . . . . . . . 314
    Exercise 11.4.(b) . . . . . . . . . . 314
Exercise 11.5 . . . . . . . . . . 315
Exercise 11.6 . . . . . . . . . . 315
Exercise 11.7 . . . . . . . . . . 316
Exercise 11.8 . . . . . . . . . . 317
    Exercise 11.8.(a) . . . . . . . . . . 317
    Exercise 11.8.(b) . . . . . . . . . . 317
    Exercise 11.8.(c) . . . . . . . . . . 317
Exercise 11.12 . . . . . . . . . . 319
Exercise 11.16 . . . . . . . . . . 321
Exercise 11.17 . . . . . . . . . . 322
Exercise 11.18 . . . . . . . . . . 325
Exercise 11.19 . . . . . . . . . . 325
Exercise 11.20 . . . . . . . . . . 328
Exercise 11.21 . . . . . . . . . . 330
Exercise 11.22 . . . . . . . . . . 332
Exercise 11.23 . . . . . . . . . . 334
Exercise 11.24 . . . . . . . . . . 338
Exercise 11.26 . . . . . . . . . . 339
Exercise 11.27 . . . . . . . . . . 341
Exercise 11.28 . . . . . . . . . . 342
Exercise 11.29 . . . . . . . . . . 343
Exercise 11.30 . . . . . . . . . . 344
Exercise 11.31 . . . . . . . . . . 344
12 EFFICIENT FIR STRUCTURES 347
Exercise 12.1 . . . . . . . . . . 347
Exercise 12.2 . . . . . . . . . . 347
Exercise 12.3 . . . . . . . . . . 348
Exercise 12.4 . . . . . . . . . . 348
    Exercise 12.4.(a) . . . . . . . . . . 348
    Exercise 12.4.(b) . . . . . . . . . . 349
Exercise 12.9 . . . . . . . . . . 352
Exercise 12.10 . . . . . . . . . . 352
Exercise 12.11 . . . . . . . . . . 353
Exercise 12.12 . . . . . . . . . . 355
Exercise 12.16 . . . . . . . . . . 356
Exercise 12.19 . . . . . . . . . . 357
Exercise 12.21 . . . . . . . . . . 357
Exercise 12.22 . . . . . . . . . . 359
Exercise 12.23 . . . . . . . . . . 360
Exercise 12.24 . . . . . . . . . . 361
Exercise 12.25 . . . . . . . . . . 363
13 EFFICIENT IIR STRUCTURES 371
Exercise 13.1 . . . . . . . . . . 371
Exercise 13.2 . . . . . . . . . . 371
Exercise 13.3 . . . . . . . . . . 373
Exercise 13.4 . . . . . . . . . . 375
Exercise 13.5 . . . . . . . . . . 377
Exercise 13.6 . . . . . . . . . . 378
Exercise 13.7 . . . . . . . . . . 379
Exercise 13.8 . . . . . . . . . . 380
Exercise 13.9 . . . . . . . . . . 380
Exercise 13.10 . . . . . . . . . . 381
Exercise 13.11 . . . . . . . . . . 382
Exercise 13.12 . . . . . . . . . . 385
Exercise 13.13 . . . . . . . . . . 385
Exercise 13.14 . . . . . . . . . . 386
Exercise 13.15 . . . . . . . . . . 388
Exercise 13.16 . . . . . . . . . . 391
Exercise 13.17 . . . . . . . . . . 391
Exercise 13.20 . . . . . . . . . . 392
Exercise 13.21 . . . . . . . . . . 393
    Exercise 13.21.(a) . . . . . . . . . . 393
    Exercise 13.21.(b) . . . . . . . . . . 394
    Exercise 13.21.(c) . . . . . . . . . . 394
List of Figures
1.1 Graphical view of the convolution sum of x(n) and y(n) for exercise 1.4b. . . . . . . . . . . 11
1.2 Graphical view of the convolution sum x(n) ∗ x(n) ∗ x(n) ∗ x(n) - Exercise 1.5. . . . . . . . . . . 13
1.3 Graphical view of the sequence y(n) - Exercise 1.12a. . . . . . . . . . . 24
1.4 Graphical view of the sequence y(n) - Exercise 1.12b. . . . . . . . . . . 24
1.5 Impulse response of the system - Exercise 1.15a. . . . . . . . . . . 27
1.6 Impulse response of the system - Exercise 1.15b. . . . . . . . . . . 28
1.7 Graphical view of the sequence y(n) - Exercise 1.19a. . . . . . . . . . . 31
1.8 Graphical view of the sequence y(n) - Exercise 1.19b. . . . . . . . . . . 32
1.9 Graphical view of the sequence y(n) - Exercise 1.19c. . . . . . . . . . . 33
1.10 The sequences x(n) (above) and y(n) (below) - Exercise 1.23. . . . . . . . . . . 36
1.11 The analog signals x(t) (above) and y(t) (below) - Exercise 1.23. . . . . . . . . . . 36
1.12 Approximated frequency response for the zero-order hold circuit (|H(Ω)|) - Exercise 1.26. . . . . . . . . . . 40
1.13 Analog signal y(t) - Exercise 1.26. . . . . . . . . . . 41
1.14 Discrete signal y(n) - Exercise 1.26. . . . . . . . . . . 41
2.1 Region of the m2 × m1 plane in which the digital filter of exercise 2.3 is stable. . . . . . . . . . . 51
2.2 Stability region. . . . . . . . . . . 55
2.3 Magnitude and phase of the system of exercise 2.14a. . . . . . . . . . . 58
2.4 Magnitude and phase of the system of exercise 2.14b. . . . . . . . . . . 59
2.5 Magnitude and phase of the system of exercise 2.14c. . . . . . . . . . . 60
2.6 Magnitude and phase of the filter of exercise 2.15a. . . . . . . . . . . 60
2.7 Magnitude and phase of the filter of exercise 2.15b. . . . . . . . . . . 61
3.1 Magnitude and phase of X(k) for exercise 3.13a. . . . . . . . . . . 82
3.2 Magnitude and phase of X(k) for exercise 3.13b. . . . . . . . . . . 83
3.3 Magnitude and phase of X(k) for exercise 3.13c. . . . . . . . . . . 84
3.4 Magnitude and phase of X(k) for exercise 3.13d. . . . . . . . . . . 84
3.5 Input and impulse response sequences. . . . . . . . . . . 87
3.6 First and second blocks of the impulse response sequences, h1(n) and h2(n), respectively. . . . . . . . . . . 87
3.7 Sequences h1(−n) and h2(−n). . . . . . . . . . . 88
3.8 Circular convolutions 1 and 2. . . . . . . . . . . 88
3.9 Overall linear convolution. . . . . . . . . . . 88
3.10 Relation between N and k in exercise 3.17. . . . . . . . . . . 90
3.11 Relation between M and k in exercise 3.18. . . . . . . . . . . 91
3.12 Graph of the decimation-in-time 6-point FFT algorithm of exercise 3.21. . . . . . . . . . . 93
3.13 Graph of the basic cell radix-5 algorithm for exercise 3.22. . . . . . . . . . . 95
3.14 Linear convolution using (above) and not using (below) an FFT algorithm for exercise 3.25 for the sequences (a) and (b) of exercise 3.13. . . . . . . . . . . 97
3.15 Linear convolution using (above) and not using (below) an FFT algorithm for exercise 3.25 for the sequences (b) and (c) of exercise 3.13. . . . . . . . . . . 98
3.16 Linear convolution between x(n) and h(n) using three different methods for exercise 3.32. . . . . . . . . . . 102
3.17 DFTs of sequence x(n), using 64 (above) and 128 samples (below) for exercise 3.33. . . . . . . . . . . 103
3.18 DFTs of the increased sequence x(n), using 128 (above), 256 (center) and 512 samples (below) for exercise 3.33. . . . . . . . . . . 104
3.19 Sinusoidal signal corrupted with different levels of noise: (a) k = 3; (b) k = 4; (c) k = 5; and (d) k = 6. . . . . . . . . . . 106
3.20 Absolute value of FFT of sinusoidal signal corrupted with different levels of noise: (a) k = 3; (b) k = 4; (c) k = 5; and (d) k = 6. . . . . . . . . . . 106
3.21 Average (for M repetitions) of absolute value of FFT of sinusoidal signal corrupted with different levels of noise: (a) k = 3, M = 5; (b) k = 4, M = 10; (c) k = 5, M = 16; and (d) k = 6, M = 26. . . . . . . . . . . 107
4.1 Realization number 1 for exercise 4.1a. . . . . . . . . . . 109
4.2 Realization number 2 for exercise 4.1a. . . . . . . . . . . 109
4.3 Realization number 1 for exercise 4.1b. . . . . . . . . . . 110
4.4 Realization number 2 for exercise 4.1b. . . . . . . . . . . 110
4.5 Realization. . . . . . . . . . . 115
4.6 Magnitude of H(z). . . . . . . . . . . 117
4.7 Transposed structure. . . . . . . . . . . 118
4.8 Transposed circuit. . . . . . . . . . . 120
4.9 Frequency response for exercise 4.17a. . . . . . . . . . . 124
4.10 Frequency response for exercise 4.17b. . . . . . . . . . . 125
5.1 Magnitude response of the designed filter – Exercise 5.6. . . . . . . . . . . 130
5.2 Windows for the filter designs of exercise 5.7. . . . . . . . . . . 132
5.3 Magnitude responses of the corresponding filters of exercise 5.7. . . . . . . . . . . 133
5.4 Windows for the filter designs of exercise 5.8. . . . . . . . . . . 134
5.5 Magnitude responses of the corresponding filters of exercise 5.8. . . . . . . . . . . 135
5.6 Practical impulse response of order 10 for exercise 5.9. . . . . . . . . . . 136
5.7 Practical impulse response of order 20 for exercise 5.9. . . . . . . . . . . 137
5.8 Practical impulse response of order 30 for exercise 5.9. . . . . . . . . . . 137
5.9 Combination of responses. . . . . . . . . . . 139
5.10 Filter characteristics with Hamming, Hanning, and Blackman windows for exercise 5.12. . . . . . . . . . . 142
5.11 Kaiser windows for different β for exercise 5.13. . . . . . . . . . . 143
5.12 Frequency responses for the different-β Kaiser windows of exercise 5.13. . . . . . . . . . . 143
5.13 Frequency response of exercise 5.14a. . . . . . . . . . . 144
5.14 Frequency response of exercise 5.14b. . . . . . . . . . . 145
5.15 Frequency response of exercise 5.14c. . . . . . . . . . . 145
5.16 Frequency response of exercise 5.16. . . . . . . . . . . 147
5.17 Frequency response for filters of exercise 5.17. . . . . . . . . . . 150
5.18 Frequency responses for the filters of exercise 5.19. . . . . . . . . . . 152
5.19 Frequency response for the filter of exercise 5.20. . . . . . . . . . . 153
5.20 Frequency response for the filter of exercise 5.23. . . . . . . . . . . 154
5.21 Magnitude responses of a filter for exercise 5.25. . . . . . . . . . . 156
5.22 Magnitude responses of a filter for exercise 5.25. . . . . . . . . . . 156
5.23 Magnitude responses of a filter for exercise 5.26. . . . . . . . . . . 157
5.24 Magnitude responses of a filter for exercise 5.26. . . . . . . . . . . 158
5.25 Frequency response for the filter of exercise 5.28. . . . . . . . . . . 159
6.1 Magnitude response for the filter of exercise 6.3. . . . . . . . . . . 161
6.2 Magnitude response of the analog filter of exercise 6.5. . . . . . . . . . . 163
6.3 Phase response of the analog filter of exercise 6.5. . . . . . . . . . . 163
6.4 Magnitude response of the Butterworth implementation for the filter of exercise 6.6. . . . . . . . . . . 165
6.5 Magnitude response of the Chebyshev implementation for the filter of exercise 6.6. . . . . . . . . . . 166
6.6 Magnitude response of the Elliptic implementation for the filter of exercise 6.6. . . . . . . . . . . 166
6.7 Magnitude responses for the filters of exercise 6.7. . . . . . . . . . . 169
6.8 Magnitude response for the filter of exercise 6.18 corresponding to the analog filter of exercise 6.3. . . . . . . . . . . 175
6.9 Magnitude response for the filter of exercise 6.18 corresponding to the analog filter of exercise 6.6. . . . . . . . . . . 176
6.10 Magnitude response for the filter of exercise 6.23. . . . . . . . . . . 182
6.11 Response for filters of exercise 6.24. . . . . . . . . . . 184
6.12 Frequency response for filters of exercise 6.25. . . . . . . . . . . 185
6.13 Impulse response for the filter of exercise 6.26. . . . . . . . . . . 186
6.14 Quadratic error for the filter of exercise 6.26. . . . . . . . . . . 186
6.15 Mean squared error for the filter of exercise 6.28. . . . . . . . . . . 188
7.1 PSD estimate using the periodogram method, considering L = 10 and different values of N: (a) N = 1; (b) N = 10; (c) N = 100; (d) N = 1000. . . . . . . . . . . 191
7.2 PSD estimate using the periodogram method, considering L = 100 and different values of N: (a) N = 1; (b) N = 10; (c) N = 100; (d) N = 1000. . . . . . . . . . . 192
7.3 PSD estimate using the periodogram method, considering L = 1000 and different values of N: (a) N = 1; (b) N = 10; (c) N = 100; (d) N = 1000. . . . . . . . . . . 193
7.4 Biased estimate of the autocorrelation of the white noise sequence, considering L = 1000 and a single realization. . . . . . . . . . . 193
7.5 PSD estimate for the standard periodogram method. . . . . . . . . . . 194
7.6 Squared magnitude [dB] of the frequency response of the system H(z) = 1/(1 + 0.9z⁻¹). . . . . . . . . . . 194
7.7 PSD of x1(n). . . . . . . . . . . 195
7.8 PSD estimates for periodogram-based methods (L = 512): (a) standard periodogram; (b) averaged periodogram using four blocks; (c) windowed-data periodogram; (d) windowed-data periodogram with unbiased autocorrelation; (e) Blackman-Tukey scheme; (f) minimum variance method. . . . . . . . . . . 196
7.9 PSD estimates for periodogram-based methods (L = 1024): (a) standard periodogram; (b) averaged periodogram using four blocks; (c) windowed-data periodogram; (d) windowed-data periodogram with unbiased autocorrelation; (e) Blackman-Tukey scheme; (f) minimum variance method. . . . . . . . . . . 197
7.10 PSD estimates for periodogram-based methods (L = 2048): (a) standard periodogram; (b) averaged periodogram using four blocks; (c) windowed-data periodogram; (d) windowed-data periodogram with unbiased autocorrelation; (e) Blackman-Tukey scheme; (f) minimum variance method. . . . . . . . . . . 198
7.11 Actual PSD of the first-order AR process Y. . . . . . . . . . . 200
7.12 Minimum variance PSD estimates (L = 4): (a) Exact MV; (b) MV method. . . . . . . . . . . 200
7.13 Minimum variance PSD estimates (L = 10): (a) Exact MV; (b) MV method. . . . . . . . . . . 200
7.14 Minimum variance PSD estimates (L = 50): (a) Exact MV; (b) MV method. . . . . . . . . . . 201
7.15 Minimum variance PSD estimates (L = 256): (a) Exact MV; (b) MV method. . . . . . . . . . . 201
7.16 PSD estimate for the minimum variance method. . . . . . . . . . . 202
7.17 Squared magnitude [dB] of the frequency response of the system H(z) = 1/(1 + 0.8z⁻¹). . . . . . . . . . . 202
7.18 Pole-zero plot for the AR models using N = 1, 2, 3, 4, respectively. . . . . . . . . . . 204
7.19 Comparison between the impulse responses of the ARMA and AR models using N = 1, 2, 3, 4, respectively. The ARMA impulse response is in blue, whereas the AR impulse response is in red. . . . . . . . . . . 204
7.20 Frequency responses of: (a) the original stable ARMA system, HARMA(z), and the stable minimum-phase ARMA system, H(z); (b) the allpass filter. . . . . . . . . . . 206
7.21 Nth-order AR approximation. . . . . . . . . . . 206
7.22 Magnitude responses for the ARMA system and for the Nth-order AR approximation. . . . . . . . . . . 209
7.23 Pole-zero plot for the MA models using M = 1, 2, 3, 4, respectively. . . . . . . . . . . 211
7.24 Comparison between the impulse responses of the ARMA and MA models using M = 1, 2, 3, 4, respectively. The ARMA impulse response is in blue, whereas the MA impulse response is in red. . . . . . . . . . . 211
7.25 PSD estimate for the covariance method. In this example, N = 4 and a = −0.9. . . . . . . . . . . 221
LIST OF FIGURES
7.26 Squared magnitude [dB] of the frequency response of the system H(z) = 1/(1 + 0.9z^{-1}).
7.27 Pole-zero plot for the AR model using N = 4.
7.28 PSD estimate for the covariance method. In this example, N = 20 and a = -0.9.
7.29 Pole-zero plot for the AR model using N = 20.
7.30 PSD estimate for the covariance method. In this example, N = 4 and a = 0.9.
7.31 Squared magnitude [dB] of the frequency response of the system H(z) = 1/(1 - 0.9z^{-1}).
7.32 Pole-zero plot for the AR model using N = 4.
7.33 PSD estimate considering: (a) biased autocorrelation estimation; (b) windowed-data unbiased autocorrelation estimation.
7.34 PSD estimate for the autocorrelation method. In this example, N = 4 and a = -0.9.
7.35 Squared magnitude [dB] of the frequency response of the system H(z) = 1/(1 + 0.9z^{-1}).
7.36 Pole-zero plot for the AR model using N = 4.
7.37 PSD estimate for the autocorrelation method. In this example, N = 20 and a = -0.9.
7.38 Pole-zero plot for the AR model using N = 20.
7.39 PSD estimate for the autocorrelation method. In this example, N = 4 and a = 0.9.
7.40 Squared magnitude [dB] of the frequency response of the system H(z) = 1/(1 - 0.9z^{-1}).
7.41 Pole-zero plot for the AR model using N = 4.
7.42 Magnitude response of the first-order AR process x1(n).
7.43 PSD estimate of x(n) using N = 4.
7.44 PSD estimate of x(n) using N = 20.
7.45 Magnitude response of the first-order AR process x1(n).
7.46 PSD estimate of x(n) using N = 4.
7.47 PSD of the ARMA process.
7.48 PSD estimate using the standard periodogram.
7.49 PSD estimate using the autocorrelation method considering (a) N = 1; (b) N = 2; (c) N = 3; (d) N = 4.
8.1  Magnitude and phase responses for the lowpass filter of exercise 8.5.
8.2  Magnitude and phase responses for the multiband filter of exercise 8.5.
8.3  Scheme for the efficient computation of the decimation filter of exercise 8.12.
8.4  Magnitude and phase responses for the lowpass filter of exercise 8.12.
8.5  Magnitude and phase responses for the multiband filter of exercise 8.12.
8.6  Magnitude and phase responses for the lowpass filter of exercise 8.12.
8.7  Magnitude and phase responses for the resulting filter of exercise 8.12.
8.8  Scheme for the efficient computation of the decimation filter of exercise 8.13.
8.9  Magnitude and phase responses for the decimation filter (decimation by 50) of exercise 8.14.
8.10 Magnitude and phase responses for the first decimation filter (decimation by 2) of exercise 8.14.
8.11 Magnitude and phase responses for the second (or third) decimation filter (decimation by 5) of exercise 8.14.
8.12 Magnitude and phase responses for the decimation (or interpolation) lowpass filter of exercise 8.15.
8.13 Scheme for the two-stage decimation/interpolation technique of exercise 8.15.
8.14 Magnitude and phase responses for the decimation (or interpolation) lowpass filter H1(z) of exercise 8.15.
8.15 Magnitude and phase responses for the decimation (or interpolation) lowpass filter H2(z) of exercise 8.15.
8.16 Magnitude and phase responses for the FIR filter for MFC detection of exercise 8.16.
8.17 Magnitude and phase responses for the decimation (or interpolation) filter for MFC detection of exercise 8.16.
8.18 Magnitude and phase responses for the filter HI1(z) of exercise 8.17.
8.19 Magnitude and phase responses for the filter HI2(z) of exercise 8.17.
8.20 Magnitude and phase responses for the filter F1(z) of exercise 8.17.
8.21 Magnitude and phase responses for the filter F2(z) of exercise 8.17.
8.22 Magnitude and phase responses for the final filter of exercise 8.17.
8.23 Realization of equation (8.2).
8.24 Impulse response, h(n), of the bandpass FIR filter, minimax.
8.25 Frequency response of h(n), minimax.
8.26 Impulse response, h(n), of the bandpass FIR filter, WLS.
8.27 Frequency response of h(n), WLS.
9.1  Magnitude and phase responses for H0(z) - Exercise 9.6.
9.2  Magnitude and phase responses for H1(z) - Exercise 9.6.
9.3  Magnitude of filters H0(z) and H1(z) (above), and the overall magnitude response of the Johnston filter bank - Exercise 9.7.
9.4  Magnitude of filters H0(z) and H1(z) (above), and the overall magnitude response of the Johnston filter bank, when δ = 0.1 - Exercise 9.7.
9.5  Magnitude of filters H0(z) and H1(z) (above), and the overall magnitude response of the Johnston filter bank, when δ = 0.8 - Exercise 9.7.
9.6  Magnitude of filters H0(z) and H1(z) (above), and the overall magnitude response of the Johnston filter bank (δ = 0.1) - Exercise 9.8.
9.7  Magnitude (above) and impulse (below) responses for the prototype filter - Exercise 9.12.
9.8  Zeros of P(z) (above) and H0(z) (below) - Exercise 9.12.
9.9  Magnitude responses for H0(z) and H1(z) - Exercise 9.12.
9.10 Frequency response (QMF attenuation) - Exercise 9.14.
9.11 Frequency response (QMF perfect reconstruction) - Exercise 9.14.
9.12 Frequency response of the overall filter bank (QMF perfect reconstruction) - Exercise 9.14.
9.13 Impulse response of the halfband filter - Exercise 9.16.
9.14 Diagram of poles and zeros of P(z) - Exercise 9.16.
9.15 Frequency response of h0(n) - Exercise 9.16.
9.16 Frequency response of h1(n) - Exercise 9.16.
9.17 Frequency response of g0(n) - Exercise 9.16.
9.18 Frequency response of g1(n) - Exercise 9.16.
9.19 Perfect reconstruction (time domain) - Exercise 9.16.
9.20 Perfect reconstruction (frequency domain) - Exercise 9.16.
9.21 Prototype filter: impulse response (left) and frequency response (right) - Exercise 9.17.
9.22 Magnitude frequency response for the 6 analysis filters - Exercise 9.17.
9.23 Prototype filter: impulse response (left) and frequency response (right) - Exercise 9.18.
9.24 Magnitude frequency response for the 10 analysis filters - Exercise 9.18.
9.25 Prototype filter: impulse response (left) and frequency response (right) - Exercise 9.19.
9.26 Impulse responses of the filters: from h0(n) and h1(n) (first row) to h8(n) and h9(n) (last row) - Exercise 9.19.
9.27 Frequency responses of the filters: from h0(n) and h1(n) (first row) to h8(n) and h9(n) (last row) - Exercise 9.19.
9.28 Reconstruction: time domain (left) and frequency domain (right) - Exercise 9.19.
9.29 Magnitude response for the prototype filter - Exercise 9.20.
9.30 Magnitude response for the prototype filter - Exercise 9.21.
9.31 Magnitude responses for the five analysis filters of the filter bank - Exercise 9.21.
9.32 Magnitude response for the prototype filter - Exercise 9.22.
9.33 Magnitude responses for the fifteen analysis filters of the filter bank - Exercise 9.22.
9.34 Magnitude and phase responses for the analysis filters - Exercise 9.23.
9.35 Magnitude and phase responses for the analysis filters of the 8-band fast LOT - Exercise 9.25.
9.36 Impulse responses (fast LOT): from h0(n) and h1(n) (first row) to h8(n) and h9(n) (last row) - Exercise 9.32.
9.37 Frequency responses (fast LOT): from h0(n) and h1(n) (first row) to h8(n) and h9(n) (last row) - Exercise 9.32.
9.38 Frequency responses (BOLT): from h0(n) and h1(n) (first row) to h8(n) and h9(n) (last row) - Exercise 9.32.
11.1  Circuit for exercise 11.1.
11.2  FIR implementation for exercise 12.5.
11.3  Distributed arithmetic - exercise 12.6.
11.4  Rounding probability density function for exercise 11.12.
11.5  Truncation probability density function for exercise 11.12.
11.6  Magnitude truncation probability density function for exercise 7.12.
11.7  Magnitude response of D(z)^{-1} of exercise 11.16.
11.8  Frequency response for original minimax and quantized filters of exercise 7.22.
11.9  Frequency response for original minimax and quantized filters of exercise 7.22, originally from exercise 4.4b.
11.10 Frequency response for original minimax and quantized filters for exercise 7.23.
11.11 Real and practical sensitivity figures of merit for exercise 7.23.
11.12 Magnitude deviations for deterministic and stochastic analysis for exercise 7.23.
11.13 Uniform probability density function for exercise 7.24.
11.14 Region of stability for exercise 7.26.
11.15 First zero-input limit cycles free region for exercise 7.26.
11.16 Second zero-input limit cycles free region for exercise 7.26.
11.17 Stability region (gray), and zero-input limit cycles free region (dark gray) for exercise 7.27.
11.18 Overflow characteristics of exercise 7.29.
11.19 Overflow characteristics of two's complement for exercise 7.30.
12.1  Frequency response of the highpass minimax filter of exercise 10.9.
12.2  Tf2latc reflection coefficients as a function of the user-supplied fir2lattice command coefficients of exercise 10.9.
12.3  Impulse response of direct and lattice structures, for exercise 10.10.
12.4  Difference of the impulse response of direct and lattice structures for exercise 10.10.
12.5  Magnitude response of different RRS filters of exercise 10.16.
12.6  The direct minimax, the prefilter and the interpolation approach frequency responses done at exercise 10.19.
12.7  RPSD and sensitivity figures of merit for exercise 10.21.
12.8  Frequency response of the three structures for exercise 10.21.
12.9  Frequency response of the minimax and the FRM-ERMD structures of exercise 10.22.
12.10 Frequency response of direct minimax and FRM without efficient ripple margin distribution (ERMD) of exercise 10.22.
12.11 Frequency response of the minimax and the complementary FRM-ERMD structures of exercise 10.23.
12.12 Frequency response of direct minimax and complementary FRM without efficient ripple margin distribution (ERMD) of exercise 10.23.
12.13 Frequency response of the minimax filter of exercise 10.24.
12.14 Frequency response of the quadrature filter of exercise 10.24.
12.15 Frequency response of the minimax filter of exercise 10.25.
12.16 Frequency response of the quadrature filter of exercise 10.25.
13.1 Frequency response of direct, cascade and parallel forms of exercise 11.10.
13.2 Frequency response of direct, optimal and free of limit cycles state-space forms for exercise 11.10.
13.3 Frequency response of direct, cascade and parallel forms of exercise 11.11.
13.4 Frequency response of direct, optimal and free of limit cycles state-space forms of exercise 11.11.
13.5 Frequency response of direct, cascade and parallel forms for exercise 11.12.
13.6 Frequency response of direct, cascade and parallel forms for exercise 11.13.
13.7 Frequency response of the two-multiplier lattice for exercise 11.17.
13.8  Wave filter for the doubly-terminated RLC network for exercise 11.20.
13.9  Output-noise RPSD for exercise 11.21a.
13.10 Frequency response for exercise 11.21c.
List of Tables
3.1 Results of exercise 3.32.
5.1 Filter coefficients - Exercise 5.6.
5.2 Filter coefficients (Hamming) - Exercise 5.12.
5.3 Filter coefficients (Hanning) - Exercise 5.12.
5.4 Filter coefficients (Blackman) - Exercise 5.12.
5.5 Frequency response coefficients of exercise 5.17.
5.6 Impulse response coefficients for the filter of exercise 5.17.
5.7 Filter coefficients for the MFC detection filter of exercise 5.20.
5.8 Filter coefficients of a Hilbert transformer of exercise 5.23.
5.9 Stopband minimum attenuation and the total stopband energy for the filters of exercise 5.28.
6.1 Filter coefficients, poles and zeros for exercise 6.3.
6.2 Filter coefficients, poles and zeros for exercise 6.5.
6.3 Filter coefficients, poles and zeros for exercise 6.18 corresponding to the analog filter of exercise 6.3.
6.4 Filter coefficients, poles and zeros for exercise 6.18 corresponding to the analog filter of exercise 6.6.
6.5 Data for exercise 6.24.
6.6 Phase equalizer for exercise 6.24.
6.7 Final filter of exercise 6.24.
6.8 IIR filter found for Exercise 6.25.
6.9 Correspondence between the numbers on the x-axis of Figure 6.15 and the values of M and N for exercise 6.28.

7.1 MATLAB results for different values of L.
7.2 MATLAB results for different values of L.
7.3 Number of arithmetic operations necessary to compute the iteration i = 1.
7.4 Number of arithmetic operations necessary to compute any iteration i ≠ 1.
7.5 Number of arithmetic operations necessary to compute the recursion from i = 2 to i = N.
7.6 Total number of arithmetic operations necessary to compute the recursions from i = 1 to i = N.
8.1 Impulse response coefficients for the filter of exercise 8.31.
8.2 Impulse response coefficients for the filter of exercise 8.32.
11.1 Internal data (exercise 12.4a).
11.2 Internal data (exercise 12.4b).
11.3 8-bit quantized version of the filter of exercise 12.7.
11.4 Memory contents of the filter of exercise 12.7.
11.5 Memory contents for exercise 12.8a.
11.6 Memory contents for exercise 12.8b.
11.7 Memory contents for exercise 12.8c.
11.8  Original filter of exercise 4.4a for exercise 7.22.
11.9  Quantized filter of exercise 4.4a for exercise 7.22.
11.10 Original filter of exercise 4.4b for exercise 7.22.
11.11 Quantized filter coefficients of exercise 4.4b for exercise 7.22.
11.12 Quantized filter coefficients (6 bits) for exercise 7.23.
12.1 Processing time and floating-point operations per output sample, at exercise 10.10.
12.2 Processing time and floating-point operations per output sample for the tests performed at exercise 10.12.
12.3 Comparisons among some filter structures done in exercise 10.19.
12.4 The structures characteristics for exercise 10.21.
12.5 Characteristics of the realizations of exercise 10.22.
12.6 Characteristics of the realizations for the highpass filter of exercise 10.23.
12.7 The FRM lowpass prototype filter characteristics of exercise 10.24.
12.8 The FRM lowpass prototype filter for the complementary quadrature filter for exercise 10.25.

13.1  Direct-form elliptic filter coefficients for exercise 11.10.
13.2  Parallel-form scaled and quantized coefficients for exercise 11.10.
13.3  Cascade-form scaled and quantized coefficients for exercise 11.10.
13.4  Optimal state-space sections scaled and quantized coefficients for exercise 11.10.
13.5  State-space free of limit cycles sections scaled and quantized coefficients for exercise 11.10.
13.6  Direct-form elliptic filter coefficients for exercise 11.11.
13.7  Parallel-form scaled and quantized coefficients for exercise 11.11.
13.8  Cascade-form scaled and quantized coefficients for exercise 11.11.
13.9  Optimal state-space sections scaled and quantized coefficients for exercise 11.11.
13.10 State-space free of limit cycles sections scaled and quantized coefficients for exercise 11.11.
13.11 Direct-form elliptic filter coefficients for exercise 11.12.
13.12 Cascade-form scaled and ESS coefficients for exercise 11.12.
13.13 Parallel-form L2 scaled and quantized coefficients for exercise 11.13.
13.14 Parallel-form L∞ scaled and quantized coefficients for exercise 11.13.
13.15 Two-multiplier lattice coefficients for exercise 11.17.
Chapter 1
DISCRETE-TIME SYSTEMS
1.1 (a) y(n) = (n + a)^2 x(n + 4)

• Linearity:

  H{k x(n)} = (n + a)^2 k x(n + 4), k ∈ R
            = k (n + a)^2 x(n + 4)
            = k H{x(n)}

  H{x1(n) + x2(n)} = (n + a)^2 [x1(n + 4) + x2(n + 4)]
                   = (n + a)^2 x1(n + 4) + (n + a)^2 x2(n + 4)
                   = H{x1(n)} + H{x2(n)}

  and therefore the system is linear.

• Time Invariance:

  y(n - n0) = (n - n0 + a)^2 x(n - n0 + 4)
  H{x(n - n0)} = (n + a)^2 x(n - n0 + 4)

  then y(n - n0) ≠ H{x(n - n0)}, and the system is time varying.

• Causality:

  H{x1(n)} = (n + a)^2 x1(n + 4)
  H{x2(n)} = (n + a)^2 x2(n + 4)

  If x1(n) = x2(n) for n < n0, and x1(n0 + 3) ≠ x2(n0 + 3), then for n = n0 - 1 < n0:

  H{x1(n)}|_{n = n0-1} = (n0 - 1 + a)^2 x1(n0 + 3)
  H{x2(n)}|_{n = n0-1} = (n0 - 1 + a)^2 x2(n0 + 3)

  and then H{x1(n)} ≠ H{x2(n)} for n < n0. Thus, the system is noncausal.
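These properties can also be checked numerically on a finite index grid. The sketch below is plain Python (the manual otherwise uses MATLAB); the value a = 1, the ramp input, and the grid are arbitrary illustrative choices:

```python
# Numerical check of y(n) = (n + a)^2 x(n + 4); a = 1 is an arbitrary choice.
a = 1

def H(x, n):
    """Apply the system to input signal x (a function of n) at index n."""
    return (n + a) ** 2 * x(n + 4)

x = lambda n: n          # illustrative test input
n0 = 2                   # shift amount

# Linearity: H{3x}(n) == 3 H{x}(n) at every grid point
assert all(H(lambda m: 3 * x(m), n) == 3 * H(x, n) for n in range(-10, 10))

# Time invariance fails: the shifted output differs from the
# output produced by the shifted input, because (n + a)^2 depends on n.
y_shifted = [H(x, n - n0) for n in range(-10, 10)]
Hx_shifted = [H(lambda m: x(m - n0), n) for n in range(-10, 10)]
assert y_shifted != Hx_shifted   # the system is time varying
```

The same grid-based comparison can be reused for the remaining items of this exercise.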
(b) y(n) = a x(n + 1)

• Linearity:

  H{k x(n)} = a k x(n + 1), k ∈ R
            = k H{x(n)}

  H{x1(n) + x2(n)} = a [x1(n + 1) + x2(n + 1)]
                   = a x1(n + 1) + a x2(n + 1)
                   = H{x1(n)} + H{x2(n)}

  and therefore the system is linear.

• Time Invariance:

  y(n - n0) = a x[(n - n0) + 1]
  H{x(n - n0)} = a x[(n - n0) + 1]

  then y(n - n0) = H{x(n - n0)}, and the system is time invariant.

• Causality:

  H{x1(n)} = a x1(n + 1)
  H{x2(n)} = a x2(n + 1)

  therefore, if x1(n) = x2(n) for n < n0, and x1(n0) ≠ x2(n0), then for n = n0 - 1 < n0:

  H{x1(n)}|_{n = n0-1} = a x1(n0 - 1 + 1) = a x1(n0)
  H{x2(n)}|_{n = n0-1} = a x2(n0 - 1 + 1) = a x2(n0)

  and then H{x1(n)} ≠ H{x2(n)} for n < n0. Thus, the system is noncausal.
(c) y(n) = x(n + 1) + x^3(n - 1)

• Linearity:

  H{k x(n)} = k x(n + 1) + k^3 x^3(n - 1), k ∈ R
  k H{x(n)} = k x(n + 1) + k x^3(n - 1)
  H{k x(n)} ≠ k H{x(n)}

  and therefore the system is nonlinear.

• Time Invariance:

  y(n - n0) = x(n - n0 + 1) + x^3(n - n0 - 1)
  H{x(n - n0)} = x(n - n0 + 1) + x^3(n - n0 - 1)

  then y(n - n0) = H{x(n - n0)}, and the system is time invariant.

• Causality:

  H{x1(n)} = x1(n + 1) + x1^3(n - 1)
  H{x2(n)} = x2(n + 1) + x2^3(n - 1)

  If x1(n) = x2(n) for n < n0, and x1(n0) ≠ x2(n0), then, for n = n0 - 1 < n0:

  H{x1(n)}|_{n = n0-1} = x1(n0) + x1^3(n0 - 2)
  H{x2(n)}|_{n = n0-1} = x2(n0) + x2^3(n0 - 2)

  and then H{x1(n)} ≠ H{x2(n)} for n < n0. Thus, the system is noncausal.
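The scaling argument above is easy to confirm numerically; in this short Python sketch the input x(n) = n + 1, the gain k = 2, and the evaluation point n = 5 are arbitrary illustrative choices:

```python
# Nonlinearity check for y(n) = x(n + 1) + x^3(n - 1):
# scaling the input by k scales the cubic term by k^3, not by k.
def H(x, n):
    return x(n + 1) + x(n - 1) ** 3

x = lambda n: n + 1      # arbitrary test input
k, n = 2, 5

lhs = H(lambda m: k * x(m), n)   # H{k x}(n): contains k^3 x^3(n - 1)
rhs = k * H(x, n)                # k H{x}(n): contains only k x^3(n - 1)
print(lhs, rhs)                  # -> 1014 264
assert lhs != rhs                # hence the system is nonlinear
```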
(d) y(n) = x(n) sin(ωn)

• Linearity:

  H{k x(n)} = k x(n) sin(ωn), k ∈ R
            = k H{x(n)}

  H{x1(n) + x2(n)} = [x1(n) + x2(n)] sin(ωn)
                   = x1(n) sin(ωn) + x2(n) sin(ωn)
                   = H{x1(n)} + H{x2(n)}

  and therefore the system is linear.

• Time Invariance:

  y(n - n0) = x(n - n0) sin(ωn - ωn0)
  H{x(n - n0)} = x(n - n0) sin(ωn)

  then y(n - n0) ≠ H{x(n - n0)}, and the system is time varying.

• Causality:

  H{x1(n)} = x1(n) sin(ωn)
  H{x2(n)} = x2(n) sin(ωn)

  therefore, if x1(n) = x2(n) for n < n0, then H{x1(n)} = H{x2(n)} for n < n0. Thus, the system is causal.
(e) y(n) = x(n) + sin(ωn)

• Linearity:

  H{k x(n)} = k x(n) + sin(ωn), k ∈ R
  k H{x(n)} = k [x(n) + sin(ωn)]
            = k x(n) + k sin(ωn)
  H{k x(n)} ≠ k H{x(n)}

  and therefore the system is nonlinear.

• Time Invariance:

  y(n - n0) = x(n - n0) + sin(ωn - ωn0)
  H{x(n - n0)} = x(n - n0) + sin(ωn)

  then y(n - n0) ≠ H{x(n - n0)}, and the system is time varying.

• Causality:

  H{x1(n)} = x1(n) + sin(ωn)
  H{x2(n)} = x2(n) + sin(ωn)

  If x1(n) = x2(n) for n < n0, then H{x1(n)} = H{x2(n)} for n < n0. Thus, the system is causal.
(f) y(n) = x(n) / x(n + 3)

• Linearity:

  H{k x(n)} = k x(n) / [k x(n + 3)], k ∈ R
            = x(n) / x(n + 3)
  k H{x(n)} = k x(n) / x(n + 3)
  H{k x(n)} ≠ k H{x(n)}

  and therefore the system is nonlinear.

• Time Invariance:

  y(n - n0) = x(n - n0) / x(n - n0 + 3)
  H{x(n - n0)} = x(n - n0) / x(n - n0 + 3)

  then y(n - n0) = H{x(n - n0)}, and the system is time invariant.

• Causality:

  H{x1(n)} = x1(n) / x1(n + 3)
  H{x2(n)} = x2(n) / x2(n + 3)

  If x1(n) = x2(n) for n < n0, and x1(n0 + 2) ≠ x2(n0 + 2), then, for n = n0 - 1 < n0:

  H{x1(n)}|_{n = n0-1} = x1(n0 - 1) / x1(n0 + 2)
  H{x2(n)}|_{n = n0-1} = x2(n0 - 1) / x2(n0 + 2)

  and then H{x1(n)} ≠ H{x2(n)} for n < n0. Thus, the system is noncausal.
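The cancellation of the gain k in the division above can be confirmed with a few lines of Python; the input x(n) = n + 1 is an arbitrary choice made so that the denominator never vanishes on the test range:

```python
# y(n) = x(n) / x(n + 3): a common gain k cancels, so H{k x} = H{x} != k H{x}.
def H(x, n):
    return x(n) / x(n + 3)

x = lambda n: n + 1.0     # nonzero on the test range
k, n = 5, 2

assert H(lambda m: k * x(m), n) == H(x, n)        # k cancels out
assert H(lambda m: k * x(m), n) != k * H(x, n)    # hence the system is nonlinear
```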
(g) y(n) = y(n - 1) + 8x(n - 3)

• Linearity:

  k y(n) = k y(n - 1) + 8k x(n - 3)
  y(n) = y(n - 1) + 8x(n - 3)

  If the input is scaled to k x(n), the corresponding solution is k y(n).

  y1(n) = y1(n - 1) + 8x1(n - 3)
  y2(n) = y2(n - 1) + 8x2(n - 3)

  thus

  y1(n) + y2(n) = y1(n - 1) + y2(n - 1) + 8[x1(n - 3) + x2(n - 3)]
  y(n) = y(n - 1) + 8x(n - 3)

  where y(n) = y1(n) + y2(n), y(n - 1) = y1(n - 1) + y2(n - 1), and x(n - 3) = x1(n - 3) + x2(n - 3). If x(n) = x1(n) + x2(n), the solution is given by y(n) = y1(n) + y2(n), and thus the system is linear.

• Causality:

  y1(n) = y1(n - 1) + 8x1(n - 3)
  y2(n) = y2(n - 1) + 8x2(n - 3)

  Assume x1(n) = x2(n) for n < n0. If y1(n) ≠ y2(n) for some n < n0, then

  y1(n - 1) ≠ y2(n - 1) → ... → y1(-∞) ≠ y2(-∞),

  but y1(-∞) = y2(-∞) = 0 for an initially relaxed system, a contradiction. Then the system is causal.

  An alternative way to show that the system is causal (following Proakis) is to write the output as a function of past samples of the input signal, i.e.,

  y(n) = y(n0) + 8x(n - 3) + 8x(n - 4) + ... + 8x(n0) = f(x(n - 3), x(n - 4), ..., x(n0)),

  with y(n0) = 0, where n0 is the instant at which the input signal was applied to the system, which is assumed to be initially relaxed.

• Time Invariance:

  y(n - n0) = y(n - n0 - 1) + 8x(n - n0 - 3)
  y(n) = y(n - 1) + 8x(n - 3)

  If the input is x(n - n0), then y(n - n0) is the solution, and the system is time invariant.
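The recursion can also be verified by direct simulation; the Python sketch below (the impulse input and the signal length are illustrative choices) shows that, with zero initial conditions, the output is a delayed running sum of the input and depends only on past input samples:

```python
# Simulate y(n) = y(n - 1) + 8 x(n - 3) for n = 0, 1, ..., assuming an
# initially relaxed system (zero initial conditions, x(n) = 0 for n < 0).
def run(x):
    y, prev = [], 0.0
    for n in range(len(x)):
        xn3 = x[n - 3] if n >= 3 else 0.0   # x(n - 3), zero before the input starts
        prev = prev + 8 * xn3
        y.append(prev)
    return y

x = [1.0] + [0.0] * 7                        # x(n) = delta(n)
print(run(x))                                # -> [0.0, 0.0, 0.0, 8.0, 8.0, 8.0, 8.0, 8.0]
```

The impulse response 8u(n - 3) confirms both causality (nothing happens before n = 3) and the accumulator behavior of the recursion.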
(h) y(n) = 2^n y(n - 1) + 3x(n - 5)

• Linearity:

  k y(n) = 2^n k y(n - 1) + 3k x(n - 5)
  y(n) = 2^n y(n - 1) + 3x(n - 5)

  y1(n) = 2^n y1(n - 1) + 3x1(n - 5)
  y2(n) = 2^n y2(n - 1) + 3x2(n - 5)

  y1(n) + y2(n) = 2^n [y1(n - 1) + y2(n - 1)] + 3[x1(n - 5) + x2(n - 5)]
  y(n) = 2^n y(n - 1) + 3x(n - 5)

  and then, the system is linear.

• Causality:

  Assume x1(n) = x2(n) for n < n0:

  y1(n) = 2^n y1(n - 1) + 3x1(n - 5)
  y2(n) = 2^n y2(n - 1) + 3x2(n - 5)

  If y1(n) ≠ y2(n) for some n < n0, then y1(n - 1) ≠ y2(n - 1) → ... → y1(-∞) ≠ y2(-∞), which is absurd. Then the system is causal.

• Time Invariance:

  y(n - n0) = 2^(n - n0) y(n - n0 - 1) + 3x(n - n0 - 5)

  which differs from the recursion obtained by feeding x(n - n0) to the system, since the coefficient 2^n depends on n. Thus, the system is time varying.
(i) y(n) = n^2 y(n + 1) + 5x(n - 2) + x(n - 4)

• Linearity:

  k y(n) = n^2 k y(n + 1) + 5k x(n - 2) + k x(n - 4)
  y(n) = n^2 y(n + 1) + 5x(n - 2) + x(n - 4)

  y1(n) = n^2 y1(n + 1) + 5x1(n - 2) + x1(n - 4)
  y2(n) = n^2 y2(n + 1) + 5x2(n - 2) + x2(n - 4)

  y1(n) + y2(n) = n^2 [y1(n + 1) + y2(n + 1)] + 5[x1(n - 2) + x2(n - 2)] + x1(n - 4) + x2(n - 4)
  y(n) = n^2 y(n + 1) + 5x(n - 2) + x(n - 4)

  Then the system is linear.

• Causality:

  Assume x1(n) = x2(n) for n < n0:

  y1(n) = n^2 y1(n + 1) + 5x1(n - 2) + x1(n - 4)
  y2(n) = n^2 y2(n + 1) + 5x2(n - 2) + x2(n - 4)

  If y1(n) ≠ y2(n) for some n < n0, then y1(n - 1) ≠ y2(n - 1) → ... → y1(-∞) ≠ y2(-∞), which is absurd. Then the system is causal.

  Alternatively, one can write y(n) as a function of the past samples of the input signal (Proakis). Consider the variable m = n + 1. Then, we can represent the system in the following alternative form:

  y(m) = [y(m - 1) - 5x(m - 3) - x(m - 5)] / (m - 1)^2 = f(x(m - 3), x(m - 4), ..., x(m0)),

  where m0 is the instant at which the input signal was applied to the system, which is assumed to be initially relaxed.

• Time Invariance:

  y(n - n0) = (n - n0)^2 y(n - n0 + 1) + 5x(n - n0 - 2) + x(n - n0 - 4)

  which differs from the recursion obtained by feeding x(n - n0) to the system, since the coefficient n^2 depends on n. Then the system is time varying.
(j) y(n) = y(n - 1) + x(n + 5) + x(n - 5)

• Linearity:

  k y(n) = k y(n - 1) + k x(n + 5) + k x(n - 5)
  y(n) = y(n - 1) + x(n + 5) + x(n - 5)
  y1(n) = y1(n - 1) + x1(n + 5) + x1(n - 5)
  y2(n) = y2(n - 1) + x2(n + 5) + x2(n - 5)

  y1(n) + y2(n) = y1(n - 1) + y2(n - 1) + x1(n + 5) + x2(n + 5) + x1(n - 5) + x2(n - 5)
  y(n) = y(n - 1) + x(n + 5) + x(n - 5)

  The system is linear.

• Causality:

  Take x1(n) = δ(n) and x2(n) = 2δ(n), so that x1(n) = x2(n) for n < 0. Then

  y1(n) = y2(n), n < -5
  y1(-5) = 1 ≠ y2(-5) = 2   (with n0 = -5 < 0)

  Then the system is noncausal.

• Time Invariance:

  y(n - n0) = y(n - n0 - 1) + x(n - n0 + 5) + x(n - n0 - 5)
  y(n) = y(n - 1) + x(n + 5) + x(n - 5)

  Then, the system is time invariant.
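The counterexample with δ(n) and 2δ(n) can be reproduced numerically; the sketch below (plain Python over a finite index range, with zero initial conditions, as an illustration) shows the two outputs already diverging at n = -5, before the inputs differ at n = 0:

```python
# Simulate y(n) = y(n - 1) + x(n + 5) + x(n - 5) over n = -10, ..., 0
# with zero initial conditions. The inputs delta(n) and 2 delta(n)
# agree for every n < 0, yet the outputs differ at n = -5.
def run(x, n_range):
    y, prev = {}, 0.0
    for n in n_range:
        prev = prev + x(n + 5) + x(n - 5)
        y[n] = prev
    return y

delta = lambda n: 1.0 if n == 0 else 0.0
ns = range(-10, 1)
y1 = run(delta, ns)                      # input x1(n) = delta(n)
y2 = run(lambda n: 2 * delta(n), ns)     # input x2(n) = 2 delta(n)
print(y1[-5], y2[-5])                    # -> 1.0 2.0: outputs differ before n = 0
```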
(k) y(n) = [2u(n - 3) - 1] y(n - 1) + x(n) + x(n - 1)

• Linearity:

  k y(n) = [2u(n - 3) - 1] k y(n - 1) + k x(n) + k x(n - 1)
  y(n) = [2u(n - 3) - 1] y(n - 1) + x(n) + x(n - 1)
  y1(n) = [2u(n - 3) - 1] y1(n - 1) + x1(n) + x1(n - 1)
  y2(n) = [2u(n - 3) - 1] y2(n - 1) + x2(n) + x2(n - 1)

  y1(n) + y2(n) = [2u(n - 3) - 1][y1(n - 1) + y2(n - 1)] + x1(n) + x2(n) + x1(n - 1) + x2(n - 1)
  y(n) = [2u(n - 3) - 1] y(n - 1) + x(n) + x(n - 1)

  Then, the system is linear.

• Causality:

  Assume x1(n) = x2(n) for n < n0:

  y1(n) = [2u(n - 3) - 1] y1(n - 1) + x1(n) + x1(n - 1)
  y2(n) = [2u(n - 3) - 1] y2(n - 1) + x2(n) + x2(n - 1)

  If y1(n) ≠ y2(n) for some n < n0, then y1(n - 1) ≠ y2(n - 1) ⇒ ... ⇒ y1(-∞) ≠ y2(-∞), which is absurd. Then the system is causal.

• Time Invariance:

  y(n - n0) = [2u(n - n0 - 3) - 1] y(n - n0 - 1) + x(n - n0) + x(n - n0 - 1)

  which differs from the recursion obtained by feeding x(n - n0) to the system, since the coefficient [2u(n - 3) - 1] depends on n. Then, the system is time varying.
1.2 (a)

x(n) = cos²((2π/15)n) = 1/2 + (1/2)cos((4π/15)n)
x(n+N) = 1/2 + (1/2)cos((4π/15)n + (4π/15)N), N ∈ Z

x(n+N) = x(n) for all n if (4π/15)N = 2πk, k ∈ Z:

N = (15/2)k
N = 15 for k = 2

x(n) is periodic with period N = 15.
(b)

x(n) = cos((4π/5)n + π/4)
x(n+N) = cos((4π/5)n + (4π/5)N + π/4), N ∈ Z

x(n+N) = x(n) for all n if (4π/5)N = 2πk, k ∈ Z:

N = (5/2)k
N = 5 for k = 2

x(n) is periodic with period N = 5.
(c)

x(n) = cos((π/27)n + 31)
x(n+N) = cos((π/27)n + (π/27)N + 31), N ∈ Z

x(n+N) = x(n) for all n if (π/27)N = 2πk, k ∈ Z:

N = 54k
N = 54 for k = 1

x(n) is periodic with period N = 54.
(d)

x(n) = sin(100n)
x(n+N) = sin(100n + 100N), N ∈ Z

x(n+N) = x(n) for all n if 100N = 2πk, k ∈ Z:

N = (2π/100)k

No N ∈ Z satisfies the above condition, so x(n) is not periodic.
(e)

x(n) = cos((11π/12)n)
x(n+N) = cos((11π/12)n + (11π/12)N), N ∈ Z

x(n+N) = x(n) for all n if (11π/12)N = 2πk, k ∈ Z:

N = (24/11)k
N = 24 for k = 11

x(n) is periodic with period N = 24.
(f)

x(n) = sin[(5π + 1)n]
x(n+N) = sin[(5π + 1)n + (5π + 1)N], N ∈ Z

x(n+N) = x(n) for all n if (5π + 1)N = 2πk, k ∈ Z:

N = (2π/(5π + 1))k

No N ∈ Z satisfies the above condition, so x(n) is not periodic.
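These periodicity conditions can be double-checked numerically. The sketch below uses Python with NumPy (rather than the MATLAB used elsewhere in this manual) and simply tests x(n + N) = x(n) over a window of samples; the window length and the search range used for item (d) are arbitrary choices.

```python
import numpy as np

def is_periodic(x_fn, N, n_max=500):
    """Numerically test whether x(n + N) == x(n) for n = 0..n_max-1."""
    n = np.arange(n_max)
    return bool(np.allclose(x_fn(n + N), x_fn(n)))

# Periods found above for Exercise 1.2
results = {
    "a": is_periodic(lambda n: np.cos(2 * np.pi * n / 15) ** 2, 15),
    "b": is_periodic(lambda n: np.cos(4 * np.pi * n / 5 + np.pi / 4), 5),
    "e": is_periodic(lambda n: np.cos(11 * np.pi * n / 12), 24),
    # (d): no integer period exists; in particular no N up to 200 works
    "d": any(is_periodic(lambda n: np.sin(100 * n), N) for N in range(1, 200)),
}
```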
1.3 (a) • Linearity:

H{kx(n)} = ∑_{n=−∞}^{∞} kx(n) δ(m − nN) = kH{x(n)}, k ∈ R

H{x1(n) + x2(n)} = ∑_{n=−∞}^{∞} [x1(n) + x2(n)] δ(m − nN)
= ∑_{n=−∞}^{∞} x1(n) δ(m − nN) + ∑_{n=−∞}^{∞} x2(n) δ(m − nN)
= H{x1(n)} + H{x2(n)}

and therefore the system is linear.
• Time Invariance:

y(m − m0) = ∑_{n=−∞}^{∞} x(n) δ(m − m0 − nN)

H{x(n − m0)} = ∑_{n=−∞}^{∞} x(n − m0) δ(m − nN)   (let l = n − m0)
= ∑_{l=−∞}^{∞} x(l) δ(m − (l + m0)N)
= ∑_{n=−∞}^{∞} x(n) δ(m − m0N − nN)

Then y(m − m0) ≠ H{x(n − m0)}, and the system is time varying.
(b) • Linearity:

H{kx(n)} = kx(m) ∑_{n=−∞}^{∞} δ(m − nN) = kH{x(n)}, k ∈ R

H{x1(n) + x2(n)} = [x1(m) + x2(m)] ∑_{n=−∞}^{∞} δ(m − nN)
= x1(m) ∑_{n=−∞}^{∞} δ(m − nN) + x2(m) ∑_{n=−∞}^{∞} δ(m − nN)
= H{x1(n)} + H{x2(n)}

and therefore the system is linear.
• Time Invariance:

y(m − m0) = x(m − m0) ∑_{n=−∞}^{∞} δ(m − m0 − nN)
H{x(n − m0)} = x(m − m0) ∑_{n=−∞}^{∞} δ(m − nN)

Then y(m − m0) ≠ H{x(n − m0)}, and the system is time varying.
1.4 (a)

z(n) = x(n) ∗ h(n) = ∑_{k=−∞}^{∞} x(k) h(n − k) = ∑_{k=0}^{4} h(n − k)

For h(n) = aⁿu(n) − aⁿu(n − 8):

z(n) = aⁿu(n) + a^(n−1)u(n−1) + a^(n−2)u(n−2) + a^(n−3)u(n−3) + a^(n−4)u(n−4)
− aⁿu(n−8) − a^(n−1)u(n−9) − a^(n−2)u(n−10) − a^(n−3)u(n−11) − a^(n−4)u(n−12)

Equivalently:

z(n) = 0 for n < 0;
z(n) = ∑_{k=0}^{n} a^k for 0 ≤ n ≤ 4;
z(n) = ∑_{k=0}^{4} a^(n−k) for 5 ≤ n ≤ 7;
z(n) = ∑_{k=n}^{11} a^(k−4) for 8 ≤ n ≤ 11;
z(n) = 0 for n > 11.
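The piecewise expression can be sanity-checked numerically for a sample value of a (a Python/NumPy sketch; later items of this exercise use MATLAB's conv for the same purpose, and a = 0.7 here is an arbitrary choice):

```python
import numpy as np

a = 0.7                               # arbitrary sample value for the parameter a
x = np.ones(5)                        # x(n) = sum_{k=0}^{4} delta(n - k)
h = a ** np.arange(8)                 # h(n) = a^n [u(n) - u(n - 8)]
z = np.convolve(x, h)                 # z(n) for n = 0..11

# Piecewise closed form derived above
z_formula = np.array(
    [sum(a**k for k in range(0, n + 1)) if n <= 4 else
     sum(a**(n - k) for k in range(0, 5)) if n <= 7 else
     sum(a**(k - 4) for k in range(n, 12))
     for n in range(12)]
)
```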
(b)

z(n) = x(n) ∗ y(n) = ∑_{k=−∞}^{∞} x(k) y(n − k) = ∑_{k=0}^{2} y(n − k) + ∑_{k=7}^{8} y(n − k)

For y(n) = δ(n−1) + 2δ(n−2) + 3δ(n−3) + 4δ(n−4):

z(n) = δ(n−1) + 3δ(n−2) + 6δ(n−3) + 9δ(n−4) + 7δ(n−5) + 4δ(n−6)
+ δ(n−8) + 3δ(n−9) + 5δ(n−10) + 7δ(n−11) + 4δ(n−12)

Equivalently:

z(n) = 0 for n < 0;
z(n) = ∑_{k=0}^{n} k for 0 ≤ n ≤ 3;
z(n) = ∑_{k=n}^{6} (k − 2) for 4 ≤ n ≤ 6;
z(n) = 0 for n = 7;
z(n) = 2n − 15 for 8 ≤ n ≤ 11;
z(n) = 4 for n = 12;
z(n) = 0 for n > 12.
Using the conv function, we confirm the above result, as it can be seen in Figure 1.1.
Figure 1.1: Graphical view of the convolution sum of x (n) and y (n) for exercise 1.4b.
(c)

z(n) = x(n) ∗ h(n) = ∑_{k=−∞}^{∞} x(k) h(n − k) = ∑_{k=−∞}^{∞} h(k) x(n − k)
z(n) = (1/2)x(n+1) + x(n) + (1/2)x(n−1)

Since x(n) = 0 for n odd, and x(n ± 1) = 0 for n even:

z(n) = a(n) for n even;
z(n) = (1/2)a(n+1) + (1/2)a(n−1) for n odd.
1.5

x(n) = δ(n) + δ(n−1)
y(n) = x(n) ∗ x(n) ∗ x(n) ∗ x(n) = [x(n) ∗ x(n)] ∗ [x(n) ∗ x(n)]

a(n) = x(n) ∗ x(n) = ∑_{k=0}^{1} x(n − k) = x(n) + x(n−1) = δ(n) + 2δ(n−1) + δ(n−2)

y(n) = a(n) ∗ a(n) = a(n) + 2a(n−1) + a(n−2)
= δ(n) + 2δ(n−1) + δ(n−2) + 2δ(n−1) + 4δ(n−2) + 2δ(n−3) + δ(n−2) + 2δ(n−3) + δ(n−4)

y(n) = δ(n) + 4δ(n−1) + 6δ(n−2) + 4δ(n−3) + δ(n−4)
Using the conv function, we confirm the above result, as it can be seen in Figure 1.2.
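Since y(n) is the four-fold convolution of δ(n) + δ(n − 1) with itself, its samples are the binomial coefficients of (1 + z)⁴. A short check in Python/NumPy (standing in for the MATLAB conv call mentioned above):

```python
import numpy as np

x = np.array([1.0, 1.0])              # delta(n) + delta(n - 1)
a = np.convolve(x, x)                 # a(n) = delta(n) + 2 delta(n-1) + delta(n-2)
y = np.convolve(a, a)                 # y(n): binomial coefficients of (1 + z)^4
```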
1.6

y(n) = x(n) ∗ h(n) = ∑_{k=−∞}^{∞} x(n − k) h(k) = ∑_{k=−∞}^{∞} a^(n−k) h(k) = aⁿ ∑_{k=−∞}^{∞} h(k) a^(−k)

Eigenvalue: ∑_{k=−∞}^{∞} h(k) a^(−k).
Figure 1.2: Graphical view of the convolution sum x (n) ∗ x (n) ∗ x (n) ∗ x (n) - Exercise 1.5.
1.7 From the description in the exercise, we have

y(n) = h5(n) ∗ {[h2(n) + h3(n)] ∗ h1(n) + h4(n)} ∗ x(n)
= [h5(n) ∗ h2(n) ∗ h1(n) + h5(n) ∗ h3(n) ∗ h1(n) + h5(n) ∗ h4(n)] ∗ x(n)
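The rearrangement uses only distributivity and associativity of convolution, which can be checked with arbitrary finite sequences (a Python/NumPy sketch; the five impulse responses and the input are random stand-ins, since the exercise does not fix them):

```python
import numpy as np

def add(p, q):
    """Add two causal sequences of possibly different lengths (zero-padded)."""
    n = max(len(p), len(q))
    return np.pad(p, (0, n - len(p))) + np.pad(q, (0, n - len(q)))

rng = np.random.default_rng(0)
h1, h2, h3, h4, h5, x = (rng.standard_normal(6) for _ in range(6))

conv = np.convolve
# Left: the block diagram as described
lhs = conv(h5, conv(add(conv(h2 + h3, h1), h4), x))
# Right: the expanded form derived above
rhs = add(add(conv(conv(conv(h5, h2), h1), x),
              conv(conv(conv(h5, h3), h1), x)),
          conv(conv(h5, h4), x))
```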
1.8

∑_{n=−∞}^{∞} Ex²(n) + ∑_{n=−∞}^{∞} Ox²(n) = ∑_{n=−∞}^{∞} [Ex²(n) + Ox²(n)]
= ∑_{n=−∞}^{∞} [((x(n) + x(−n))/2)² + ((x(n) − x(−n))/2)²]
= ∑_{n=−∞}^{∞} (2x²(n) + 2x²(−n))/4
= ∑_{n=−∞}^{∞} x²(n)/2 + ∑_{n=−∞}^{∞} x²(−n)/2   (let l = −n)
= ∑_{n=−∞}^{∞} x²(n)/2 + ∑_{l=−∞}^{∞} x²(l)/2
= ∑_{n=−∞}^{∞} x²(n)
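The identity can also be verified numerically for an arbitrary finite-length signal (Python/NumPy sketch; the signal is a random stand-in, with array reversal playing the role of x(−n)):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(9)            # samples of x(n) for n = -4..4
x_rev = x[::-1]                       # x(-n) over the same index range
Ex = (x + x_rev) / 2                  # even part
Ox = (x - x_rev) / 2                  # odd part

energy_parts = np.sum(Ex**2) + np.sum(Ox**2)
energy_total = np.sum(x**2)
```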
1.9 (a)

y(n) = yp(n) + yh(n)
yh(n) = k1(−1)ⁿ + k2 n(−1)ⁿ

Since there is no input signal to the difference equation, the homogeneous solution is enough to find y(n). Thus, we only need to apply the auxiliary conditions:

y(0) = k1 + 0 = 1
y(1) = −k1 − k2 = 0
k1 = 1, k2 = −1

y(n) = (−1)ⁿ − n(−1)ⁿ
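The double root at −1 corresponds to the recursion y(n) + 2y(n − 1) + y(n − 2) = 0 (inferred here from the form of yh(n); the exercise's own equation is not restated in this manual). Iterating it from y(0) = 1, y(1) = 0 reproduces the closed form:

```python
# y(n) = -2 y(n-1) - y(n-2), started from y(0) = 1, y(1) = 0
y = [1, 0]
for n in range(2, 20):
    y.append(-2 * y[n - 1] - y[n - 2])

# Closed form found above: y(n) = (-1)^n - n (-1)^n
closed = [(-1)**n - n * (-1)**n for n in range(20)]
```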
(b)

y(n) = yp(n) + yh(n)
yh(n) = k1(−√2 e^(j arctan√7))ⁿ + k2(−√2 e^(−j arctan√7))ⁿ

Like the previous item, there is no input signal to the difference equation, and the homogeneous solution is enough to find y(n). Applying the auxiliary conditions:

y(−1) = k1(−√2 e^(j arctan√7))^(−1) + k2(−√2 e^(−j arctan√7))^(−1) = 1
y(0) = k1 + k2 = 1

By solving the above system, we conclude that:

k1 = 1/2 − j5/(2√7)
k2 = k1* = 1/2 + j5/(2√7)

y(n) = (1/2 − j5/(2√7))(−√2 e^(j arctan√7))ⁿ + (1/2 + j5/(2√7))(−√2 e^(−j arctan√7))ⁿ
1.10 We have to solve the following difference equation

y(n) + a²y(n−2) = aⁿ sin((π/2)n) u(n),

where y(n) = 0 for n < 0. In addition, we will assume that a ≠ 0; otherwise we would end up with the trivial solution y(n) = 0 for all n.

Using operator notation, the former equation becomes

(1 + a²D^(−2)) y(n) = aⁿ sin((π/2)n) u(n).

The homogeneous equation is

yh(n) + a²yh(n−2) = 0.

Then, the characteristic polynomial equation from which we derive the homogeneous solution is

ρ² + a² = 0.

Since its roots are ρ = a e^(±jπ/2), two solutions to the homogeneous equation are aⁿ sin[(π/2)n] and aⁿ cos[(π/2)n], as given in Table 1.1. Then, the general homogeneous solution becomes

yh(n) = aⁿ[c1 sin((π/2)n) + c2 cos((π/2)n)].
If one applies the correct annihilator to the excitation signal, the original difference equation is transformed into a higher-order homogeneous equation, whose solutions include the homogeneous and particular solutions of the original difference equation. However, there is no annihilator polynomial for aⁿ sin[(π/2)n]u(n); one can only compute the solution for n ≥ 0, where the term to be annihilated becomes just aⁿ sin[(π/2)n]. For n ≥ 0, according to Table 1.2, the annihilator polynomial for the given input signal is

Q(D) = (1 − a e^(jπ/2) D^(−1))(1 − a e^(−jπ/2) D^(−1)) = 1 + a²D^(−2).

Applying the annihilator polynomial to the difference equation, we obtain

(1 + a²D^(−2))(1 + a²D^(−2)) y(n) = 0.

The corresponding polynomial equation is

(ρ² + a²)² = 0.

It has four roots: ρ = a e^(jπ/2) with multiplicity 2, and ρ = a e^(−jπ/2) with multiplicity 2. For n ≥ 0, the complete solution is then given by

y(n) = aⁿ[d1 sin((π/2)n) + d2 cos((π/2)n) + d3 n sin((π/2)n) + d4 n cos((π/2)n)].
The constants di, i ∈ {1, 2, 3, 4}, are computed so as to enforce that y(n) is a particular solution of the nonhomogeneous equation. However, the terms multiplying d1 and d2 correspond to the homogeneous solution, so they need not be substituted: they would be annihilated anyway. One can then compute d3 and d4 by substituting only their associated terms into the nonhomogeneous equation, leading to the following algebraic development:

n aⁿ[d3 sin((π/2)n) + d4 cos((π/2)n)] + (n−2) a^(n−2) a² {d3 sin[(π/2)(n−2)] + d4 cos[(π/2)(n−2)]} = aⁿ sin((π/2)n) ⇔
n[d3 sin((π/2)n) + d4 cos((π/2)n)] + (n−2)[d3 sin((π/2)n − π) + d4 cos((π/2)n − π)] = sin((π/2)n) ⇔
n[d3 sin((π/2)n) + d4 cos((π/2)n)] + (n−2)[−d3 sin((π/2)n) − d4 cos((π/2)n)] = sin((π/2)n) ⇔
2[d3 sin((π/2)n) + d4 cos((π/2)n)] = sin((π/2)n).

Therefore, we conclude that d3 = 1/2, d4 = 0, and the overall solution for n ≥ 0 is

y(n) = (n/2) aⁿ sin((π/2)n) + aⁿ[d1 sin((π/2)n) + d2 cos((π/2)n)].
We now compute the constants d1 and d2 using the auxiliary conditions generated by the condition y(n) = 0 for n < 0, which implies y(−2) = y(−1) = 0. But the former equation is valid only for n ≥ 0, so we need to run the difference equation from the auxiliary conditions y(−2) = y(−1) = 0 in order to compute y(0) and y(1).

For n = 0: y(0) + a²y(−2) = y(0) = a⁰ sin(0) u(0) = 0.
For n = 1: y(1) + a²y(−1) = y(1) = a sin(π/2) u(1) = a.

Using these auxiliary conditions in the overall solution we have

y(0) = 0 = d2
y(1) = a = (1/2) a sin(π/2) + a[d1 sin(π/2) + d2 cos(π/2)] = a/2 + a d1 ⇔ d1 = 1/2,

in which we have used the assumption a ≠ 0.

Substituting these values in the overall solution and using the fact that y(n) = 0 for n < 0, the general solution becomes

y(n) = (aⁿ/2)(n + 1) sin((π/2)n) u(n).
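The closed form can be verified by direct iteration of the difference equation for a sample value of a (Python/NumPy sketch; a = 0.9 is an arbitrary choice, any a ≠ 0 works):

```python
import numpy as np

a = 0.9
N = 40
y = np.zeros(N)
for n in range(N):
    y_nm2 = y[n - 2] if n >= 2 else 0.0          # y(n) = 0 for n < 0
    # y(n) = -a^2 y(n-2) + a^n sin(pi n / 2)
    y[n] = -a**2 * y_nm2 + a**n * np.sin(np.pi * n / 2)

n = np.arange(N)
closed = (a**n / 2) * (n + 1) * np.sin(np.pi * n / 2)
```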
1.11 (a) We have to solve the following difference equation

y(n) − (1/√2) y(n−1) + y(n−2) = 2^(−n) sin((π/4)n) u(n),

where y(−2) = y(−1) = 0.
Using operator notation, the former equation becomes

(1 − (1/√2)D^(−1) + D^(−2)) y(n) = 2^(−n) sin((π/4)n) u(n).

The homogeneous equation is

yh(n) − (1/√2) yh(n−1) + yh(n−2) = 0,

and the characteristic polynomial equation from which we derive the homogeneous solution is

ρ² − (1/√2)ρ + 1 = 0.

Since its roots are ρ = (1 ± j√7)/(2√2) = e^(±jφ), where φ = arctan(√7), the general homogeneous solution is

yh(n) = c1 cos(φn) + c2 sin(φn).

There is no annihilator polynomial for 2^(−n) sin[(π/4)n]u(n); one can only compute the solution for n ≥ 0, where the term to be annihilated becomes just 2^(−n) sin[(π/4)n]. For n ≥ 0, according to Table 1.2, the annihilator polynomial for the given input signal is

Q(D) = (1 − (1/2)e^(jπ/4)D^(−1))(1 − (1/2)e^(−jπ/4)D^(−1)) = 1 − cos(π/4)D^(−1) + (1/4)D^(−2) = 1 − (√2/2)D^(−1) + (1/4)D^(−2).

Applying the annihilator polynomial to the difference equation, we obtain

(1 − (√2/2)D^(−1) + (1/4)D^(−2))(1 − (1/√2)D^(−1) + D^(−2)) y(n) = 0,

whose corresponding polynomial equation is

(ρ² − (√2/2)ρ + 1/4)(ρ² − (1/√2)ρ + 1) = 0.

It has four roots: ρ = (1/2)e^(±jπ/4) and ρ = e^(±jφ). For n ≥ 0, the complete solution is then given by

y(n) = (1/2)ⁿ[d1 sin((π/4)n) + d2 cos((π/4)n)] + d3 sin(φn) + d4 cos(φn).

The constants di, i ∈ {1, 2, 3, 4}, are computed following the same procedure described in Example 1.9 of the textbook.
(b) We have to solve the following difference equation

4y(n) − 2√3 y(n−1) + y(n−2) = cos((π/6)n) u(n),

where y(−2) = y(−1) = 0.
Using operator notation, the former equation becomes

(4 − 2√3 D^(−1) + D^(−2)) y(n) = cos((π/6)n) u(n).

The homogeneous equation is

4yh(n) − 2√3 yh(n−1) + yh(n−2) = 0,

and the characteristic polynomial equation is

4ρ² − 2√3 ρ + 1 = 0.

Since its roots are ρ = (√3 ± j)/4 = (1/2)e^(±jπ/6), two solutions to the homogeneous equation are (1/2)ⁿ sin[(π/6)n] and (1/2)ⁿ cos[(π/6)n], as given in Table 1.1. Then, the general homogeneous solution becomes

yh(n) = (1/2)ⁿ[c1 sin((π/6)n) + c2 cos((π/6)n)].

There is no annihilator polynomial for cos[(π/6)n]u(n); one can only compute the solution for n ≥ 0, where the term to be annihilated becomes just cos[(π/6)n]. For n ≥ 0, according to Table 1.2, the annihilator polynomial for the given input signal is

Q(D) = (1 − e^(jπ/6)D^(−1))(1 − e^(−jπ/6)D^(−1)) = 1 − 2cos(π/6)D^(−1) + D^(−2) = 1 − √3 D^(−1) + D^(−2).
Applying the annihilator polynomial to the difference equation, we obtain

(1 − √3 D^(−1) + D^(−2))(4 − 2√3 D^(−1) + D^(−2)) y(n) = 0,

whose corresponding polynomial equation is

(ρ² − √3 ρ + 1)(4ρ² − 2√3 ρ + 1) = 0.

It has four roots: two of them are ρ = (1/2)e^(±jπ/6), whereas the other two are ρ = e^(±jπ/6). For n ≥ 0, the complete solution is then given by

y(n) = (1/2)ⁿ[d1 sin((π/6)n) + d2 cos((π/6)n)] + d3 sin((π/6)n) + d4 cos((π/6)n).
The constants di, i ∈ {1, 2, 3, 4}, are computed so as to enforce that y(n) is a particular solution of the nonhomogeneous equation. The terms multiplying d1 and d2 correspond to the homogeneous solution, so they need not be substituted. One can then compute d3 and d4 by substituting only their associated terms into the nonhomogeneous equation:

4[d3 sin((π/6)n) + d4 cos((π/6)n)] − 2√3 {d3 sin[(π/6)(n−1)] + d4 cos[(π/6)(n−1)]} + d3 sin[(π/6)(n−2)] + d4 cos[(π/6)(n−2)] = cos((π/6)n) ⇔
4[d3 sin((π/6)n) + d4 cos((π/6)n)] − √3[(√3 d3 + d4) sin((π/6)n) + (√3 d4 − d3) cos((π/6)n)] + (1/2)[(d3 + √3 d4) sin((π/6)n) + (d4 − √3 d3) cos((π/6)n)] = cos((π/6)n) ⇔
((3/2)d3 − (√3/2)d4) sin((π/6)n) + ((3/2)d4 + (√3/2)d3) cos((π/6)n) = cos((π/6)n).

Thus, one ends up with the following linear system:

(3/2)d3 − (√3/2)d4 = 0
(3/2)d4 + (√3/2)d3 = 1.
Therefore, we conclude that d3 = √3/6, d4 = 1/2, and the overall solution for n ≥ 0 is

y(n) = (√3/6) sin((π/6)n) + (1/2) cos((π/6)n) + (1/2)ⁿ[d1 sin((π/6)n) + d2 cos((π/6)n)].
We now compute the constants d1 and d2 using the auxiliary conditions y(−2) = y(−1) = 0. But the former equation is valid only for n ≥ 0, so we need to run the difference equation from these auxiliary conditions in order to compute y(0) and y(1).

For n = 0: 4y(0) − 2√3 y(−1) + y(−2) = 4y(0) = cos(0) u(0) = 1 ⇔ y(0) = 1/4.
For n = 1: 4y(1) − 2√3 y(0) + y(−1) = 4y(1) − √3/2 = cos(π/6) u(1) = √3/2 ⇔ y(1) = √3/4.
Using these auxiliary conditions in the overall solution we have

y(0) = 1/4 = (√3/6) sin(0) + (1/2) cos(0) + [d1 sin(0) + d2 cos(0)] = 1/2 + d2 ⇔ d2 = −1/4
y(1) = √3/4 = (√3/6) sin(π/6) + (1/2) cos(π/6) + (1/2)[d1 sin(π/6) + d2 cos(π/6)] = √3/12 + √3/4 − √3/16 + d1/4 ⇔ d1 = −√3/12.

Substituting these values in the overall solution and using the fact that y(n) = 0 for n < 0, the general solution becomes

y(n) = [(√3/6) sin((π/6)n) + (1/2) cos((π/6)n) − (√3/12)(1/2)ⁿ sin((π/6)n) − (1/4)(1/2)ⁿ cos((π/6)n)] u(n).
(c) We have to solve the following difference equation

y(n) + 2y(n−1) + y(n−2) = 2ⁿ u(−n),

where y(1) = y(2) = 0.
Using operator notation, the former equation becomes

(1 + 2D^(−1) + D^(−2)) y(n) = 2ⁿ u(−n).

The homogeneous equation is

yh(n) + 2yh(n−1) + yh(n−2) = 0,

and the characteristic polynomial equation is

ρ² + 2ρ + 1 = (ρ + 1)² = 0.

Since its root is ρ = −1 with multiplicity 2, the general homogeneous solution becomes

yh(n) = c1(−1)ⁿ + c2 n(−1)ⁿ.

There is no annihilator polynomial for 2ⁿu(−n); one can only compute the solution for n ≤ 0, where the term to be annihilated becomes just 2ⁿ. For n ≤ 0, according to Table 1.2, the annihilator polynomial is

Q(D) = 1 − 2D^(−1).

Applying the annihilator polynomial to the difference equation, we obtain

(1 − 2D^(−1))(1 + 2D^(−1) + D^(−2)) y(n) = 0,

whose corresponding polynomial equation is

(ρ − 2)(ρ² + 2ρ + 1) = 0.

It has three roots: ρ = 2, and ρ = −1 with multiplicity 2. For n ≤ 0, the complete solution is then given by

y(n) = d1 2ⁿ + d2(−1)ⁿ + d3 n(−1)ⁿ,

where d1 2ⁿ is the particular solution yp(n) and the remaining terms form yh(n). Substituting y(n) by yp(n) in the original difference equation, for n ≤ 0, we obtain the value of d1:

yp(n) + 2yp(n−1) + yp(n−2) = 2ⁿ
d1 2ⁿ + 2 d1 2^(n−1) + d1 2^(n−2) = 2ⁿ → d1 = 4/9
Using the initial conditions we can determine d2 and d3. However, the initial conditions correspond to n = 1 and n = 2, whereas the general solution y(n) requires n ≤ 0. So, we need to transform the initial conditions y(1), y(2) into auxiliary conditions y(0), y(−1):

n = 2: y(2) + 2y(1) + y(0) = 2²u(−2) → y(0) = 0
n = 1: y(1) + 2y(0) + y(−1) = 2u(−1) → y(−1) = 0

Substituting these auxiliary conditions in the complete solution y(n), which is valid for n ≤ 0, we determine the constants d2 and d3:

n = 0: 4/9 + d2 + 0 = 0 → d2 = −4/9
n = −1: (4/9)(1/2) − d2 + d3 = 0 → d3 = d2 − 2/9 = −2/3

Thus, for n ≤ 0, the complete solution is

y(n) = (4/9) 2ⁿ − (4/9)(−1)ⁿ − (2/3) n(−1)ⁿ.
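The constants can be cross-checked by running the difference equation backwards from the initial conditions y(2) = y(1) = 0 (a Python sketch; the closed form with d1 = 4/9, d2 = −4/9, d3 = −2/3 reproduces the recursion exactly):

```python
# Backward recursion: y(n - 2) = 2^n u(-n) - y(n) - 2 y(n - 1),
# started from the initial conditions y(2) = y(1) = 0.
y = {2: 0.0, 1: 0.0}
for n in range(2, -19, -1):
    u = 1.0 if n <= 0 else 0.0
    y[n - 2] = 2.0**n * u - y[n] - 2 * y[n - 1]

# Closed form for n <= 0
closed = {n: (4/9) * 2.0**n - (4/9) * (-1)**n - (2/3) * n * (-1)**n
          for n in range(-20, 1)}
```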
(d) We have to solve the following difference equation

y(n) − (5/6) y(n−1) + y(n−2) = (−1)ⁿu(n) = cos(πn) u(n),

where y(−2) = y(−1) = 0.
Using operator notation, the former equation becomes

(1 − (5/6)D^(−1) + D^(−2)) y(n) = cos(πn) u(n).

The homogeneous equation is

yh(n) − (5/6) yh(n−1) + yh(n−2) = 0,

and the characteristic polynomial equation is

ρ² − (5/6)ρ + 1 = 0.

Since its roots are ρ = (5 ± j√119)/12, the general homogeneous solution becomes

yh(n) = c1((5 + j√119)/12)ⁿ + c2((5 − j√119)/12)ⁿ.
There is no annihilator polynomial for cos(πn)u(n); one can only compute the solution for n ≥ 0, where the term to be annihilated becomes just cos(πn). For n ≥ 0, according to Table 1.2, the annihilator polynomial for the given input signal is

Q(D) = (1 − e^(jπ)D^(−1))(1 − e^(−jπ)D^(−1)) = 1 + 2D^(−1) + D^(−2).

Applying the annihilator polynomial to the difference equation, we obtain

(1 + 2D^(−1) + D^(−2))(1 − (5/6)D^(−1) + D^(−2)) y(n) = 0,

whose corresponding polynomial equation is

(ρ² + 2ρ + 1)(ρ² − (5/6)ρ + 1) = 0.

It has four roots: ρ = −1 = e^(jπ) with multiplicity 2, and ρ = (5 ± j√119)/12. For n ≥ 0, the complete solution is then given by

y(n) = d1((5 + j√119)/12)ⁿ + d2((5 − j√119)/12)ⁿ + d3 cos(πn) + d4 n cos(πn).
The constants di, i ∈ {1, 2, 3, 4}, are computed so as to enforce that y(n) is a particular solution of the nonhomogeneous equation. The terms multiplying d1 and d2 correspond to the homogeneous solution, so they need not be substituted. Substituting only the d3 and d4 terms into the nonhomogeneous equation:

(d3 + n d4) cos(πn) − (5/6)[d3 + (n−1)d4] cos[π(n−1)] + [d3 + (n−2)d4] cos[π(n−2)] = cos(πn) ⇔
(d3 + n d4) cos(πn) + (5/6)[d3 + (n−1)d4] cos(πn) + [d3 + (n−2)d4] cos(πn) = cos(πn) ⇔
(17/6)(d3 − d4) cos(πn) + n (17/6) d4 cos(πn) = cos(πn)

Therefore, we conclude that d3 = 6/17, d4 = 0, and the overall solution for n ≥ 0 is

y(n) = d1((5 + j√119)/12)ⁿ + d2((5 − j√119)/12)ⁿ + (6/17) cos(πn).

We now compute d1 and d2 using the auxiliary conditions y(−2) = y(−1) = 0: since the former equation is valid only for n ≥ 0, we run the difference equation from y(−2) = y(−1) = 0 to compute y(0) and y(1).

For n = 0: y(0) − (5/6)y(−1) + y(−2) = y(0) = (−1)⁰u(0) = 1.
For n = 1: y(1) − (5/6)y(0) + y(−1) = y(1) − 5/6 = (−1)¹u(1) = −1 ⇔ y(1) = −1/6.
Using these auxiliary conditions in the overall solution we have

y(0) = 1 = d1 + d2 + 6/17 ⇔ d1 + d2 = 11/17
y(1) = −1/6 = (d1 + d2)(5/12) + (d1 − d2)(j√119/12) − 6/17 ⇔ (d1 − d2)(j√119/12) = −1/12 ⇔ d1 − d2 = j/√119.

Therefore d1 = 11/34 + j√119/238 and d2 = d1* = 11/34 − j√119/238. Substituting these values in the overall solution and using the fact that y(n) = 0 for n < 0, the general solution becomes

y(n) = [(11/34 + j√119/238)((5 + j√119)/12)ⁿ + (11/34 − j√119/238)((5 − j√119)/12)ⁿ + (6/17) cos(πn)] u(n).
(e) We have to solve the following difference equation

y(n) + y(n−3) = (−1)ⁿ u(−n),

where y(1) = y(2) = y(3) = 0.
Using operator notation, the former equation becomes

(1 + D^(−3)) y(n) = (−1)ⁿ u(−n).

The homogeneous equation is

yh(n) + yh(n−3) = 0,

and the characteristic polynomial equation is

ρ³ + 1 = (ρ + 1)(ρ² − ρ + 1) = 0.

The roots are ρ ∈ {−1, e^(jπ/3), e^(−jπ/3)}. Then, the general homogeneous solution becomes

yh(n) = c1(−1)ⁿ + c2 sin((π/3)n) + c3 cos((π/3)n).

There is no annihilator polynomial for (−1)ⁿu(−n); one can only compute the solution for n ≤ 0, where the term to be annihilated becomes just (−1)ⁿ. For n ≤ 0, according to Table 1.2, the annihilator polynomial is

Q(D) = 1 + D^(−1).

Applying the annihilator polynomial to the difference equation, we obtain

(1 + D^(−1))(1 + D^(−3)) y(n) = 0,

whose corresponding polynomial equation is

(ρ + 1)(ρ³ + 1) = (ρ + 1)(ρ + 1)(ρ² − ρ + 1) = 0.
It has four roots: ρ = −1 with multiplicity 2, and ρ = e^(±jπ/3). For n ≤ 0, the complete solution is then given by

y(n) = d2 n(−1)ⁿ + d1(−1)ⁿ + d3 sin((π/3)n) + d4 cos((π/3)n),

where d2 n(−1)ⁿ is the particular solution yp(n) and the remaining terms form yh(n). Substituting y(n) by yp(n) in the original difference equation, for n ≤ 0, we obtain the value of d2:

yp(n) + yp(n−3) = (−1)ⁿ
d2 n(−1)ⁿ + d2(n−3)(−1)^(n−3) = d2[n − (n−3)](−1)ⁿ = 3d2(−1)ⁿ = (−1)ⁿ → d2 = 1/3

Using the initial conditions we can determine d1, d3, and d4. However, the initial conditions correspond to n ∈ {1, 2, 3}, whereas the general solution y(n) requires n ≤ 0. So, we need to transform the initial conditions y(1), y(2), y(3) into auxiliary conditions y(0), y(−1), y(−2):

n = 3: y(3) + y(0) = (−1)³u(−3) → y(0) = 0
n = 2: y(2) + y(−1) = (−1)²u(−2) → y(−1) = 0
n = 1: y(1) + y(−2) = (−1)¹u(−1) → y(−2) = 0
Substituting these auxiliary conditions in the complete solution y(n) we determine the constants d1, d3, and d4:

n = 0: y(0) = d1 + d4 = 0
n = −1: y(−1) = 1/3 − d1 − (√3/2)d3 + (1/2)d4 = 0
n = −2: y(−2) = −2/3 + d1 − (√3/2)d3 − (1/2)d4 = 0

Solving this linear system gives d1 = 1/3, d3 = −1/(3√3) ≈ −0.1925, and d4 = −1/3. Thus, for n ≤ 0, the complete solution is

y(n) = (1/3)(−1)ⁿ + (1/3) n(−1)ⁿ − (1/(3√3)) sin((π/3)n) − (1/3) cos((π/3)n).
1.12 (a)

y(1)=1; y(2)=0;
for i=3:21,
  y(i)=-2*y(i-1)-y(i-2);
end;
stem(0:20,y);

(b)

y(1)=1; y(2)=1;
for i=3:22,
  y(i)=-y(i-1)-2*y(i-2);
end;
stem(0:20,y(2:22));
Figure 1.3: Graphical view of the sequence y (n) - Exercise 1.12a.
Figure 1.4: Graphical view of the sequence y (n) - Exercise 1.12b.
1.13

∑_{i=0}^{N} a_i y(n−i) = ∑_{l=0}^{M} b_l x(n−l)

y(n) = ∑_{l=0}^{M} b_l x(n−l) − ∑_{i=1}^{N} a_i y(n−i)
We suppose the input signal x(n) is causal, so that x(n) = 0 for all n < 0. Thus,

y(n) = −∑_{i=1}^{N} a_i y(n−i), for all n < 0.

Considering that the N independent auxiliary conditions are set to zero, we conclude that y(n) = 0 for all n < 0. Then,

y(0) = b0 x(0)
y(1) = b0 x(1) + b1 x(0) − a1 y(0) = b0 x(1) + b1 x(0) − a1 b0 x(0)
...

If x(n) = k x1(n) + x2(n), and remembering that y1(n) = y2(n) = 0 for all n < 0,

y(0) = k b0 x1(0) + b0 x2(0) = k y1(0) + y2(0)
y(1) = b0[k x1(1) + x2(1)] + b1[k x1(0) + x2(0)] − a1[k b0 x1(0) + b0 x2(0)]
     = k b0 x1(1) + k b1 x1(0) − k a1 y1(0) + b0 x2(1) + b1 x2(0) − a1 y2(0)
...
y(n) = k ∑_{l=0}^{M} b_l x1(n−l) − k ∑_{i=1}^{N} a_i y1(n−i) + ∑_{l=0}^{M} b_l x2(n−l) − ∑_{i=1}^{N} a_i y2(n−i)
y(n) = k y1(n) + y2(n)
And therefore we can say that, if the auxiliary conditions are zero, then the system is linear.
To prove that the system is linear only if the auxiliary conditions are zero, we take a general system where these conditions are not zero. With a causal input signal, the response of this system is

y(0) = b0 x(0) − ∑_{i=1}^{N} a_i y(−i), with y(−1) ≠ 0, …, y(−N) ≠ 0.

If x(n) = k x1(n),

y(0) = k b0 x1(0) − ∑_{i=1}^{N} a_i y(−i)
k y1(0) = k b0 x1(0) − k ∑_{i=1}^{N} a_i y(−i)
y(0) ≠ k y1(0), and hence y(n) ≠ k y1(n).

This proves that any system whose auxiliary conditions are not zero is nonlinear; thus, if a system is linear, its auxiliary conditions must be zero. We have already proved that if the auxiliary conditions of a system are zero, the system is linear. Joining these two conclusions, we prove that:

A system is linear if and only if the auxiliary conditions are zero.
Now, suppose a system with a causal input signal x(n):

y(n) = H{x(n)} = ∑_{l=0}^{M} b_l x(n−l) − ∑_{i=1}^{N} a_i y(n−i)
z(n) = H{x(n−n0)} = ∑_{l=0}^{M} b_l x(n−n0−l) − ∑_{i=1}^{N} a_i z(n−i)

Since the auxiliary conditions are zero, we know that y(n) = 0 for all n < 0, and z(n) = 0 for all n < n0. Then,

z(n0) = b0 x(0) = y(0)
z(n0 + 1) = b0 x(1) + b1 x(0) − a1 z(n0) = y(1)
...
z(n + n0) = y(n), i.e., y(n − n0) = z(n) = H{x(n − n0)}.

Thus, the system is time invariant if the auxiliary conditions represent the values of y(−1), y(−2), …, y(−N).
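The "only if" part can be illustrated on a simple hypothetical first-order case, y(n) = 0.5 y(n − 1) + x(n): with the auxiliary condition y(−1) = 0 the response scales with the input, while with y(−1) ≠ 0 it does not (Python sketch; the coefficient 0.5 and the test input are arbitrary choices, not taken from the exercise):

```python
def run(x, y_init):
    """Iterate y(n) = 0.5 y(n-1) + x(n), n >= 0, from the auxiliary condition y(-1)."""
    y, prev = [], y_init
    for xn in x:
        prev = 0.5 * prev + xn
        y.append(prev)
    return y

x = [1.0, 0.0, 2.0, -1.0]
x2 = [2 * v for v in x]

# Homogeneity test: does the response to 2x(n) equal twice the response to x(n)?
scales_when_relaxed = all(abs(a - 2 * b) < 1e-12
                          for a, b in zip(run(x2, 0.0), run(x, 0.0)))
scales_when_nonzero = all(abs(a - 2 * b) < 1e-12
                          for a, b in zip(run(x2, 1.0), run(x, 1.0)))
```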
1.14 (a)
h (n) = 5δ (n) + 3δ (n− 1) + 8δ (n− 2) + 3δ (n− 4)
(b)
h(n) + (1/3)h(n−1) = δ(n) + (1/2)δ(n−1)
h(n) = hh(n) + hp(n)

hh(n) = −(1/3)hh(n−1) ⇒ hh(n) = k1(−1/3)ⁿ

hp(n) + (1/3)hp(n−1) = δ(n) + (1/2)δ(n−1)
hp(n) = δ(n) + (1/6)(−1/3)^(n−1) u(n−1)

h(−1) = 0 ⇒ k1(−3) + 0 = 0 ⇒ k1 = 0

h(n) = δ(n) + (1/6)(−1/3)^(n−1) u(n−1)
(c)

h(n) − 3h(n−1) = δ(n)
h(n) = hh(n) + hp(n)
hh(n) = 3hh(n−1) ⇒ hh(n) = k1 3ⁿ
hp(n) = 3hp(n−1) + δ(n) ⇒ hp(n) = 3ⁿ u(n)
h(−1) = 0 ⇒ k1(1/3) + 0 = 0 ⇒ k1 = 0

h(n) = 3ⁿ u(n)
(d) y(n) + 2y(n−1) + y(n−2) = x(n), with x(n) = δ(n) and y(−1) = y(−2) = 0. Applying the input and the initial conditions to the system we obtain:

h(0) + 2×0 + 0 = 1 ⇔ h(0) = 1
h(1) + 2×1 + 0 = 0 ⇔ h(1) = −2
h(2) + 2×(−2) + 1 = 0 ⇔ h(2) = 3
h(3) + 2×3 − 2 = 0 ⇔ h(3) = −4
...
h(n) = (−1)ⁿ(n+1) u(n)
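The pattern h(n) = (−1)ⁿ(n + 1)u(n) can be confirmed by iterating the recursion a little further (a short Python sketch):

```python
h = []
for n in range(25):
    h_nm1 = h[n - 1] if n >= 1 else 0          # h(n) = 0 for n < 0
    h_nm2 = h[n - 2] if n >= 2 else 0
    delta = 1 if n == 0 else 0
    # h(n) + 2 h(n-1) + h(n-2) = delta(n)
    h.append(delta - 2 * h_nm1 - h_nm2)

closed = [(-1)**n * (n + 1) for n in range(25)]
```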
1.15 (a) Using the function impz, we can find the impulse response of the system, as it can be seen in Figure 1.5.
Figure 1.5: Impulse response of the system - Exercise 1.15a.
(b) By analogy, the function impz is used to find the impulse response of the system, as it can be seenin Figure 1.6.
1.16

h(n) − h(n−1) = δ(n) − δ(n−5)
h(n) = hh(n) + hp(n)
hh(n) − hh(n−1) = 0 ⇒ hh(n) = k1, for all n
Figure 1.6: Impulse response of the system - Exercise 1.15b.
hp(n) − hp(n−1) = δ(n) − δ(n−5) ⇒ hp(n) = u(n) − u(n−5)

h(−1) = 0 ⇒ k1 + 0 = 0 ⇒ k1 = 0

h(n) = u(n) − u(n−5)
1.17 We will first solve the following difference equation

12y(n) − 7y(n−1) + y(n−2) = sin((π/3)n) u(n).

Using operator notation, the former equation becomes

(12 − 7D^(−1) + D^(−2)) y(n) = sin((π/3)n) u(n).

The homogeneous equation is

12yh(n) − 7yh(n−1) + yh(n−2) = 0,

and the characteristic polynomial equation is

12ρ² − 7ρ + 1 = 0.

Since its roots are ρ = 1/3 and ρ = 1/4, the general homogeneous solution becomes

yh(n) = c1(1/3)ⁿ + c2(1/4)ⁿ.
There is no annihilator polynomial for sin[(π/3)n]u(n); one can only compute the solution for n ≥ 0, where the term to be annihilated becomes just sin[(π/3)n]. For n ≥ 0, according to Table 1.2, the annihilator polynomial for the given input signal is

Q(D) = (1 − e^(jπ/3)D^(−1))(1 − e^(−jπ/3)D^(−1)) = 1 − 2cos(π/3)D^(−1) + D^(−2) = 1 − D^(−1) + D^(−2).

Applying the annihilator polynomial to the difference equation, we obtain

(1 − D^(−1) + D^(−2))(12 − 7D^(−1) + D^(−2)) y(n) = 0,

whose corresponding polynomial equation is

(ρ² − ρ + 1)(12ρ² − 7ρ + 1) = 0.

It has four roots: two of them are ρ = e^(±jπ/3), whereas the other two are ρ = 1/3 and ρ = 1/4. For n ≥ 0, the complete solution is then given by

y(n) = d1(1/3)ⁿ + d2(1/4)ⁿ + d3 sin((π/3)n) + d4 cos((π/3)n).
The constants di, i ∈ {1, 2, 3, 4}, are computed so as to enforce that y(n) is a particular solution of the nonhomogeneous equation. The terms multiplying d1 and d2 correspond to the homogeneous solution, so they need not be substituted. Substituting only the d3 and d4 terms into the nonhomogeneous equation:

12[d3 sin((π/3)n) + d4 cos((π/3)n)] − 7{d3 sin[(π/3)(n−1)] + d4 cos[(π/3)(n−1)]} + d3 sin[(π/3)(n−2)] + d4 cos[(π/3)(n−2)] = sin((π/3)n) ⇔
12[d3 sin((π/3)n) + d4 cos((π/3)n)] − (7/2)[(d3 + √3 d4) sin((π/3)n) + (d4 − √3 d3) cos((π/3)n)] + (1/2)[(−d3 + √3 d4) sin((π/3)n) + (−d4 − √3 d3) cos((π/3)n)] = sin((π/3)n) ⇔
(8d3 − 3√3 d4) sin((π/3)n) + (3√3 d3 + 8d4) cos((π/3)n) = sin((π/3)n).

Thus, by defining a = 3√3 and b = 8, one ends up with the following linear system:

a d3 + b d4 = 0
b d3 − a d4 = 1.
Therefore, we conclude that d3 = b/(a² + b²) = 8/91 and d4 = −a/(a² + b²) = −3√3/91, and the overall solution for n ≥ 0 is

y(n) = d1(1/3)ⁿ + d2(1/4)ⁿ + (8/91) sin((π/3)n) − (3√3/91) cos((π/3)n).

As we want to determine the steady-state response, we do not need to compute d1 and d2. Thus, the steady-state response (obtained when n is "very large") is

y(n) = (8/91) sin((π/3)n) − (3√3/91) cos((π/3)n).
1.18 (a)

y(n) = sin(ωn − 2ω)u(n−2) + sin(ωn − ω)u(n−1) + sin(ωn)u(n)

Steady state ⇒ n ≥ 2:

y(n) = sin(ωn − 2ω) + sin(ωn − ω) + sin(ωn)
= sin(ωn)cos(2ω) − sin(2ω)cos(ωn) + sin(ωn)cos(ω) − sin(ω)cos(ωn) + sin(ωn)
= sin(ωn)[cos(2ω) + cos(ω) + 1] − cos(ωn)[sin(2ω) + sin(ω)]

y(n) = A(ω) sin(ωn + θ(ω))

A(ω) = √[(cos(2ω) + cos(ω) + 1)² + (sin(2ω) + sin(ω))²]
θ(ω) = arctan(−(sin(2ω) + sin(ω)) / (1 + cos(ω) + cos(2ω)))
(b)

x(n) = [e^(jωn) − e^(−jωn)] u(n) / (2j)

y(n) = ∑_{k=−∞}^{∞} h(k) [e^(jω(n−k)) − e^(−jω(n−k))] u(n − k) / (2j)

In steady state:

y(n) = (e^(jωn)/(2j)) ∑_{k=−∞}^{∞} h(k) e^(−jωk) − (e^(−jωn)/(2j)) ∑_{k=−∞}^{∞} h(k) e^(jωk)
= (e^(jωn)/(2j)) H(e^(jω)) − (e^(−jωn)/(2j)) H(e^(−jω)), where H(e^(jω)) = ∑_{k=−∞}^{∞} h(k) e^(−jωk)
= (e^(jωn)/(2j)) |H(e^(jω))| e^(j∠H(e^(jω))) − (e^(−jωn)/(2j)) |H(e^(jω))| e^(−j∠H(e^(jω)))
= (|H(e^(jω))|/(2j)) [e^(j(ωn + ∠H(e^(jω)))) − e^(−j(ωn + ∠H(e^(jω))))]

y(n) = |H(e^(jω))| sin(ωn + ∠H(e^(jω)))

For h(n) = 2^(−n)u(n):

H(e^(jω)) = ∑_{k=0}^{∞} 2^(−k) e^(−jωk) = ∑_{k=0}^{∞} (0.5e^(−jω))^k = 1/(1 − 0.5e^(−jω))

|H(e^(jω))| = 1/√[(1 − 0.5cos(ω))² + (0.5sin(ω))²]
∠H(e^(jω)) = −arctan(0.5sin(ω) / (1 − 0.5cos(ω)))

y(n) = |H(e^(jω))| sin(ωn + ∠H(e^(jω)))
(c)

h(n) = δ(n−2) + 2δ(n−1) + δ(n)
H(e^(jω)) = 1 + 2e^(−jω) + e^(−2jω)

|H(e^(jω))| = √[(1 + 2cos(ω) + cos(2ω))² + (2sin(ω) + sin(2ω))²]
∠H(e^(jω)) = −arctan((2sin(ω) + sin(2ω)) / (1 + 2cos(ω) + cos(2ω)))

y(n) = |H(e^(jω))| sin(ωn + ∠H(e^(jω)))
1.19 (a)
• MatLab® Program:
n=0:40;
w1=pi/3;
w2=pi/2;
x1=sin(w1.*n);
x2=sin(w2.*n);
y1=filter([1 1 1],1,x1);
y2=filter([1 1 1],1,x2);
subplot(2,1,1), stem(0:40,y1);
subplot(2,1,2), stem(0:40,y2);
• Graphical solution at Figure 1.7.
Figure 1.7: Graphical view of the sequence y (n) - Exercise 1.19a.
(b)
• MatLab® Program:
n=0:40;
w1=pi/3;
w2=pi/2;
x1=sin(w1.*n);
x2=sin(w2.*n);
y1=filter([1],[1 -.5],x1);
y2=filter([1],[1 -.5],x2);
subplot(2,1,1), stem(0:40,y1);
subplot(2,1,2), stem(0:40,y2);
• Graphical solution at Figure 1.8.
Figure 1.8: Graphical view of the sequence y (n) - Exercise 1.19b.
(c)
• MatLab® Program:
n=0:40;
w1=pi/3;
w2=pi/2;
x1=sin(w1.*n);
x2=sin(w2.*n);
y1=filter([1 2 1],1,x1);
y2=filter([1 2 1],1,x2);
subplot(2,1,1), stem(0:40,y1);
subplot(2,1,2), stem(0:40,y2);
• Graphical solution at Figure 1.9.
Figure 1.9: Graphical view of the sequence y (n) - Exercise 1.19c.
1.20 (a)

Σ_{n=−∞}^{∞} |h(n)| = Σ_{n=0}^{∞} 2^{−n} = Σ_{n=0}^{∞} 0.5^n = 1/(1 − 0.5) = 2 < ∞

The system is BIBO stable.

(b)

Σ_{n=−∞}^{∞} |h(n)| = Σ_{n=0}^{∞} 1.5^n

The above sum is unbounded, and so the system is unstable.

(c)

Σ_{n=−∞}^{∞} |h(n)| = Σ_{n=−∞}^{∞} 0.1^n = Σ_{n=−∞}^{−1} 0.1^n + Σ_{n=0}^{∞} 0.1^n
= Σ_{n=−∞}^{−1} 10^{−n} + 1/(1 − 0.1),  l = −n
= Σ_{l=1}^{∞} 10^l + 10/9

The above sum is unbounded, and so the system is unstable.
(d)

Σ_{n=−∞}^{∞} |h(n)| = Σ_{n=−∞}^{0} 2^{−n},  l = −n
= Σ_{l=0}^{∞} 2^l

The above sum is unbounded, and so the system is unstable.

(e)

Σ_{n=−∞}^{∞} |h(n)| = Σ_{n=0}^{9} 10^n = (1 − 10^{10})/(1 − 10) < ∞

Therefore, the system is BIBO stable.

(f)

Σ_{n=−∞}^{∞} |h(n)| = Σ_{n=−∞}^{−1} 0.5^n + Σ_{n=5}^{∞} 0.5^n
= Σ_{n=−∞}^{−1} 2^{−n} + 0.5^5/(1 − 0.5),  l = −n
= Σ_{l=1}^{∞} 2^l + 0.5^5/(1 − 0.5)

The above sum is unbounded, and so the system is unstable.
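A numerical sanity check of the convergent cases (a Python sketch; only the partial sums are computed, which can of course only suggest, not prove, divergence for the unstable cases):

```python
# (a) sum_{n>=0} 0.5**n converges to 1/(1-0.5) = 2.
s_a = sum(0.5 ** n for n in range(200))

# (e) sum_{n=0}^{9} 10**n is a finite geometric sum, (10**10 - 1)/9.
s_e = sum(10 ** n for n in range(10))

# (b) the partial sums of 1.5**n keep growing, hinting at divergence.
p100 = sum(1.5 ** n for n in range(100))
p200 = sum(1.5 ** n for n in range(200))
print(s_a, s_e, p200 > p100)
```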
1.21

X_i(Ω) = (1/T) Σ_{k=−∞}^{∞} X_a(Ω − (2π/T)k)

X_i(Ω + 2π/T) = (1/T) Σ_{k=−∞}^{∞} X_a((Ω + 2π/T) − (2π/T)k)
= (1/T) Σ_{k=−∞}^{∞} X_a(Ω + 2π/T − (2π/T)k)
= (1/T) Σ_{k=−∞}^{∞} X_a(Ω − (2π/T)(k − 1)),  l = k − 1
= (1/T) Σ_{l=−∞}^{∞} X_a(Ω − (2π/T)l)

X_i(Ω + 2π/T) = X_i(Ω)
1.22 The discrete-time signal x(n) that corresponds to the analog signal x_a(t) is obtained using the sampling period T = 1/4000:

x(n) = x_a(nT) = x_a(n/4000) = 3cos(2π1000 n/4000) + 7sin(2π1100 n/4000)

x(n) = 3cos(2π0.25n) + 7sin(2π0.275n)

y(n) = x(n) + x(n − 2)
= 3cos(2π0.25n) + 7sin(2π0.275n) + 3cos(2π0.25n − 2π0.5) + 7sin(2π0.275n − 2π0.55)
= 3cos(2π0.25n) + 7sin(2π0.275n) + 3cos(2π0.25n)cos(π) + 3sin(2π0.25n)sin(π)
  + 7sin(2π0.275n)cos(1.1π) − 7sin(1.1π)cos(2π0.275n)
...
y(n) = 2.1901 sin(0.55πn + 0.45π)

y_a(t) = Σ_{n=−∞}^{∞} 2.1901 sin(0.55πn + 0.45π) [sin π(4000t − n)]/[π(4000t − n)]
...
y_a(t) = 2.1901 sin(2π1100t + 0.45π)

The effect this processing has on the input signal is the cancellation of the component whose frequency was 1000 Hz, and the attenuation of the other component (frequency of 1100 Hz), which also suffered a phase delay.
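The closed form can be verified numerically (a Python sketch; the exact amplitude is 14 sin(0.05π) = 2.1901..., since |H(e^{j0.55π})| = 2|cos(0.55π)| and the 1000 Hz component is cancelled exactly):

```python
import math

# x(n) as derived above (fs = 4000 Hz)
def x(n):
    return 3 * math.cos(2 * math.pi * 0.25 * n) + 7 * math.sin(2 * math.pi * 0.275 * n)

A = 14 * math.sin(0.05 * math.pi)  # exact amplitude, approx 2.1901
err = max(abs(x(n) + x(n - 2) - A * math.sin(0.55 * math.pi * n + 0.45 * math.pi))
          for n in range(2, 200))
print(A, err)
```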
1.23 MatLab® Program:

na=0:10000;
xa=3*cos(2*pi*0.0025*na)+7*sin(2*pi*0.00275*na);
x=xa(1:100:length(xa));
y=filter([1 0 1],1,x);
n=0:100:length(na);
h=sinc(-10:.01:10);
yaux=zeros(length(n),length(xa)+length(h)-1);
for i=1:length(n),
    yaux(i,100*i-99:100*i-100+length(h))=h*y(i);
end;
yc=sum(yaux);
ya=yc(1001:length(yc)-1000);
figure(1);
subplot(2,1,1), stem(x);
subplot(2,1,2), stem(y);
Ta=1/400000;
figure(2);
subplot(2,1,1), plot(na*Ta,xa);
subplot(2,1,2), plot(na*Ta,ya);
The discrete signals x(n) and y(n) can be seen in Figure 1.10, whereas the analog signals x(t) and y(t) are shown in Figure 1.11. It is easy to notice that the signal y(t) shows a transient component,
Figure 1.10: The sequences x(n) (above) and y(n) (below) - Exercise 1.23.
Figure 1.11: The analog signals x(t) (above) and y(t) (below) - Exercise 1.23.
which is due to the fact that we used a causal signal x(t), and so the system response calculated in exercise 1.22 should be viewed as the steady-state response in Figure 1.11.
1.24 (a) The analog signal reconstructed from its sampled version is

x_p(t) = Σ_{n=−∞}^{∞} x(n) g(t − nT)
= Σ_{n=−∞}^{∞} x(n) ∫_{−∞}^{∞} g(τ) δ((t − nT) − τ) dτ
= ∫_{−∞}^{∞} g(τ) Σ_{n=−∞}^{∞} x(n) δ((t − nT) − τ) dτ,  where the sum is x_i(t − τ)
= x_i(t) ∗ g(t)
Let us explain the effect of this interpolation in the time domain. Firstly, observe Fig. 1.15 of the textbook and interpret y_i(t) as x_i(t). Then, note that the impulse response g(t) represents a triangular signal whose amplitude is 1 at the center (t = 0) and 0 at the edges (t = ±T). So, the convolution x_i(t) ∗ g(t) generates one triangle at each time nT, n ∈ Z. In other words, each time instant t = nT will be the center of the triangular signal and x(n) = x_a(nT) will be its amplitude at this instant. At t = (n ± i)T, i = 1, 2, 3, ..., there is NO influence of x(n). Finally, note that there exists a 50% superposition between adjacent triangles, i.e., at t ∈ (nT, (n + 1)T) there is a superposition of two triangles, the first having amplitude x(n) and the other having amplitude x(n + 1).
(b) In the frequency domain, x_p(t) becomes X_p(jΩ), given by

X_p(jΩ) = X_i(jΩ)G(jΩ) = (1/T) Σ_{k=−∞}^{∞} X_a(jΩ − j(2π/T)k) G(jΩ),

and now we have to determine G(jΩ).

From a linear systems course we know that a triangular signal may be generated by the convolution of two gates. So, considering the gate h(t) introduced in Equation (1.190) of the textbook,

h(t) = 1, for 0 ≤ t ≤ T; 0, otherwise,

we can define an auxiliary signal a(t) = h(t) ∗ h(t), given by

a(t) = t, for 0 ≤ t ≤ T; 2T − t, for T ≤ t ≤ 2T; 0, otherwise.

Notice that a(t) is already a triangular signal, but it is a (time-)shifted and scaled version of g(t). Therefore, we can obtain g(t) from a(t) as

g(t) = (1/T) a(t + T),

so that G(jΩ) = (1/T) e^{jΩT} A(jΩ). The advantage of this approach is that we can write G(jΩ) as a function of the already known H(jΩ), see Equation (1.195) of the textbook:

G(jΩ) = F{g(t)} = (1/T) e^{jΩT} A(jΩ) = (1/T) e^{jΩT} (H(jΩ)H(jΩ))
= (1/T) e^{jΩT} (T² e^{−jΩT} [sin(ΩT/2)/(ΩT/2)]²)
= T [sin(ΩT/2)/(ΩT/2)]²

So, X_p(jΩ) is given by

X_p(jΩ) = X_i(jΩ)G(jΩ) = (1/T) Σ_{k=−∞}^{∞} X_a(jΩ − j(2π/T)k) (T [sin(ΩT/2)/(ΩT/2)]²)
= [sin(ΩT/2)/(ΩT/2)]² Σ_{k=−∞}^{∞} X_a(jΩ − j(2π/T)k)
(c) In order to compensate for the distortion introduced by the frequency spectrum of the given pulse, the lowpass filter should have the following desirable frequency response:

L(jΩ) = 0, for |Ω| ≥ Ω_s/2 = π/T;
L(jΩ) = [sin(ΩT/2)/(ΩT/2)]^{−2} = T/G(jΩ), for |Ω| < Ω_s/2 = π/T.
2 = πT
1.25 The energy of the continuous-time signal is denoted by E_a, whereas for the discrete-time signal we use E_d. We want to prove that E_a = T_s E_d.

E_a = ∫_{−∞}^{∞} |x_a(t)|² dt  (definition)
= (1/2π) ∫_{−∞}^{∞} |X_a(Ω)|² dΩ  (Parseval's theorem)

E_d = Σ_{n∈Z} |x(n)|²  (definition)
= (1/2π) ∫_{−π}^{π} |X(e^{jω})|² dω  (Parseval's theorem)
= (T_s/2π) ∫_{−Ω_s/2}^{Ω_s/2} |X_i(Ω)|² dΩ  (changing variables: ω = ΩT_s)
= (T_s/2π) ∫_{−Ω_s/2}^{Ω_s/2} (1/T_s²) |Σ_{k∈Z} X_a(Ω − Ω_s k)|² dΩ  (Equation (1.170))
= (1/(2πT_s)) ∫_{−Ω_s/2}^{Ω_s/2} Σ_{k∈Z} |X_a(Ω − Ω_s k)|² dΩ  (no aliasing, i.e., cross-terms are zero)
= (1/(2πT_s)) ∫_{−Ω_s/2}^{Ω_s/2} |X_a(Ω)|² dΩ  (limits of the integral)
= (1/(2πT_s)) ∫_{−∞}^{∞} |X_a(Ω)|² dΩ  (since X_a(Ω) is zero outside (−Ω_s/2, Ω_s/2))
= (1/T_s) E_a  (proved)
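The relation E_a = T_s E_d can be illustrated numerically with a bandlimited signal (a Python sketch; the assumed test signal is x_a(t) = sinc(t), whose spectrum is confined to |Ω| ≤ π and whose energy is 1, sampled without aliasing at T_s = 0.5):

```python
import math

def sinc(t):
    return 1.0 if t == 0 else math.sin(math.pi * t) / (math.pi * t)

Ts = 0.5
N = 200000  # truncation of the (slowly decaying) sample energy sum
Ed = sinc(0.0) ** 2 + 2 * sum(sinc(n * Ts) ** 2 for n in range(1, N))
print(Ts * Ed)  # should be close to Ea = 1
```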
1.26

y(t) = A cos(Ω_c t)

y_i(t) = Σ_{n=−∞}^{∞} y(n) δ(t − nT)

Y_i(Ω) = (1/T) Σ_{k=−∞}^{∞} Y_a(Ω − Ω_s k),  Ω_s = 2Ω_c + ε

Y_a(Ω) = Aπ[δ(Ω − Ω_c) + δ(Ω + Ω_c)]

Y_i(Ω) = (Aπ/T) Σ_{k=−∞}^{∞} [δ(Ω − Ω_c − Ω_s k) + δ(Ω + Ω_c − Ω_s k)]
= (Aπ/T) Σ_{k=−∞}^{∞} [δ(Ω − 2Ω_c k − εk − Ω_c) + δ(Ω − 2Ω_c k − εk + Ω_c)]

Y_i(Ω) = (Aπ/T) Σ_{k=−∞}^{∞} [δ(Ω − Ω_c(1 + 2k + (ε/Ω_c)k)) + δ(Ω + Ω_c(1 − 2k − (ε/Ω_c)k))]

Suppose the lowpass filter used to recover the signal is a zero-order hold circuit, where

H(Ω) = e^{−jπΩ/Ω_s} sin(πΩ/Ω_s)/(Ω/2),  Ω_s = 2Ω_c + ε,

that is,

H(Ω) = e^{−jπΩ/(2Ω_c+ε)} sin(πΩ/(2Ω_c + ε))/(Ω/2).
In fact, we can approximate the transfer function of the human eye by a zero-order hold circuit, or even by the first lobe of its frequency response. By doing this, the signal Y_i(Ω) will pass through a linear system with the frequency response depicted in Figure 1.12. As we can see from the above equations,
Figure 1.12: Approximated frequency response for the zero-order hold circuit (|H (Ω) |) - Exercise 1.26.
the signal Y_i(Ω) is formed by an infinite sum of impulses. Nevertheless, just four impulses will compose the observed signal O(Ω) = Y_i(Ω)H(Ω):

O(Ω) = (Aπ/T)[aδ(Ω − Ω_c) + aδ(Ω + Ω_c) + bδ(Ω − Ω_c − ε) + bδ(Ω + Ω_c + ε)]

The constants a and b come from the attenuation of each component due to the lowpass filter. After some mathematical development, we conclude that the corresponding time-domain signal is

o(t) = (A/T)[(a + b) cos(εt/2) cos(Ω_c t + εt/2) + (a − b) sin(εt/2) sin(Ω_c t + εt/2)]

As is clearly seen, the observed signal has a low-frequency variation (Moiré effect), with frequency ε/2 rad/s, which is equivalent to a frequency of πε/(2Ω_c + ε) rad/sample.
For the MatLab® example, a signal with a 100 Hz frequency component was used, whereas the sampling frequency was 201 Hz. The MatLab® code can be seen in the following:
figure(2);
wc=2*pi*100;
t=0:.001:.6;
ya=cos(wc*t);
plot(t,ya);
T=1/201;
nT=0:T:.5;
y=cos(wc*nT);
figure(3);
stem(nT,y);
The analog signal y(t) and its corresponding discrete version may be seen in Figures 1.13 and 1.14.
Figure 1.13: Analog signal y (t) - Exercise 1.26.
Figure 1.14: Discrete signal y (n) - Exercise 1.26.
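The slow envelope visible in Figure 1.14 follows from an exact trigonometric identity for these sampling parameters (f_c = 100 Hz, f_s = 201 Hz, as in the MatLab® example); a Python sketch:

```python
import math

# cos(2*pi*100*n/201) = cos((pi - pi/201)*n) = (-1)**n * cos(pi*n/201):
# a fast alternation modulated by a slow (0.5 Hz) Moire envelope.
err = max(abs(math.cos(2 * math.pi * 100 * n / 201)
              - (-1) ** n * math.cos(math.pi * n / 201))
          for n in range(402))
print(err)
```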
1.27 Firstly, let us explain the problem. If instead of g_i(t) we used x_i(t), given in Equation (1.163) of the textbook as

x_i(t) = Σ_{n∈Z} x(n) δ(t − nT_s),

then we would have the following relation in the frequency domain, see Equation (1.170) of the textbook:

X_i(Ω) = (1/T_s) Σ_{k∈Z} X_a(Ω − (2π/T_s)k) = (1/T_s) Σ_{k∈Z} X_a(Ω − 2πf_s k),
where f_s = 1/T_s is the sampling frequency.
Fig. 1.6f of the textbook depicts X_i(Ω). In this figure we can see that the spectrum X_a(Ω) is repeated at each f_s = 24 Hz, generating X_i(Ω). Therefore, in order to recover X_a(Ω) from X_i(Ω), one must use a lowpass filter with cutoff frequency less than half the sampling rate, i.e., 12 Hz (since there is no aliasing effect in X_i(Ω)). This lowpass filter has bandwidth 24 Hz (f_s).

Now, note that a lowpass filter with bandwidth 48 Hz (2f_s) would not remove, nor even attenuate, the images of X_a(Ω) that are centered at the frequencies ±f_s = ±24 Hz, and that is the problem.

The easiest solution would be to double the sampling frequency, i.e., to use 2f_s = 48 Hz as the sampling frequency. This way, the undesirable images of X_a(Ω) centered at frequencies ±f_s would be shifted to ±2f_s, thus lying entirely in the stopband of the filter that models persistence of vision.

However, a more practical solution is to repeat each frame twice. In the time domain this is equivalent to filtering x_i(t) with a bandpass filter. In the frequency domain this filter attenuates the undesirable images of X_a(Ω) centered at frequencies ±f_s. Note that this filter will introduce some distortion to X_a(Ω) centered at frequency 0 Hz and will not entirely eliminate the undesirable images of X_a(Ω), since such a simple filter cannot be very selective.
Indeed,

g_i(t) = Σ_{n∈Z} g(n) δ(t − nT_s) + Σ_{n∈Z} g(n) δ(t − nT_s − T_s/2)
= Σ_{n∈Z} g_a(nT_s) δ(t − nT_s) + Σ_{n∈Z} g_a(nT_s) δ(t − nT_s − T_s/2)
= x'_i(t) + x'_i(t − T_s/2),

where x'_i(t) = g_a(t) p(t), with p(t) = Σ_{n∈Z} δ(t − nT_s). In the frequency domain we have

G_i(Ω) = X'_i(Ω) + e^{−jΩT_s/2} X'_i(Ω)
= (1 + e^{−jΩT_s/2}) X'_i(Ω)
= (1 + e^{−jΩT_s/2}) (1/T_s) Σ_{k∈Z} G_a(Ω − (2π/T_s)k)
= (1 + e^{−jπf/f_s}) (1/T_s) Σ_{k∈Z} G_a(Ω − 2πf_s k)
= e^{−jπf/(2f_s)} [2 cos(πf/(2f_s))] (1/T_s) Σ_{k∈Z} G_a(Ω − 2πf_s k),

where A(f) ≡ A(Ω) = 2 cos(πf/(2f_s)). Note that

0 ≤ |A(f)| = 2|cos(πf/(2f_s))| ≤ 2

is equal to 2 at f = 0, ±2f_s, ±4f_s, ..., and 0 at frequencies f = ±f_s, ±3f_s, .... Thus, the images of X_a(Ω) centered at frequencies ±(2k + 1)f_s, k ∈ Z, are attenuated by this bandpass filter. The impulse response of this filter is a(t) = δ(t) + δ(t − T_s/2).
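The comb factor A(f) above can be checked numerically (a Python sketch, using f_s = 24 Hz as in the exercise):

```python
import cmath, math

fs = 24.0

def A_mag(f):
    # |1 + e^{-j*pi*f/fs}| should equal 2*|cos(pi*f/(2*fs))|
    return abs(1 + cmath.exp(-1j * math.pi * f / fs))

errs = [abs(A_mag(f) - 2 * abs(math.cos(math.pi * f / (2 * fs))))
        for f in (0.0, 6.0, 12.0, 24.0, 36.0, 48.0)]
print(A_mag(0.0), A_mag(fs), max(errs))  # 2 at DC, 0 at f = fs
```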
43
1.28 Firstly, let us focus on demonstrating that a random variable X, whose distribution is

φ_X(x) = (1/√(2πσ²)) e^{−(x−µ)²/(2σ²)},

is such that E[X] = µ.

E[X] = ∫_{−∞}^{∞} x φ_X(x) dx = ∫_{−∞}^{∞} x φ_X(x) dx + (−µ + µ) × 1
= ∫_{−∞}^{∞} x φ_X(x) dx + (−µ + µ) ∫_{−∞}^{∞} φ_X(x) dx
= ∫_{−∞}^{∞} x φ_X(x) dx − ∫_{−∞}^{∞} µ φ_X(x) dx + µ
= ∫_{−∞}^{∞} (x − µ) φ_X(x) dx + µ
= ∫_{−∞}^{µ} (x − µ) φ_X(x) dx + ∫_{µ}^{∞} (x − µ) φ_X(x) dx + µ
= −∫_{0}^{∞} y φ_X(µ − y) dy + ∫_{0}^{∞} z φ_X(µ + z) dz + µ
= −∫_{0}^{∞} y φ_X(µ + y) dy + ∫_{0}^{∞} z φ_X(µ + z) dz + µ
= µ.

From now on, we shall use the following well-known identity (integration by parts):

∫_D u dv = [uv]_D − ∫_D v du,

where D is the domain of integration.

Thus, let us focus on demonstrating that such a random variable X has variance σ².

E[(X − µ)²] = ∫_{−∞}^{∞} (x − µ)² φ_X(x) dx
= ∫_{−∞}^{∞} [−σ²(x − µ)] [−((x − µ)/σ²) φ_X(x)] dx
= −σ² ∫_{−∞}^{∞} (x − µ) [dφ_X/dx](x) dx
= −σ² ([(x − µ) φ_X(x)]_{−∞}^{∞} − ∫_{−∞}^{∞} φ_X(x) dx)
= −σ² (0 − 1)
= σ²,

where we have used L'Hôpital's rule in order to evaluate the limits involving indeterminate forms of the type ∞/∞ (essentially, we know that the exponential decay of the Gaussian dominates the linear growth of (x − µ)).
1.29 The cross-covariance is defined in Equation (1.223) of the textbook as

c_{X,Y} = E{(X − µ_X)(Y − µ_Y)}
= E{XY − Xµ_Y − µ_X Y + µ_X µ_Y} = E{XY} − E{X}µ_Y − µ_X E{Y} + µ_X µ_Y
= E{XY} − µ_X µ_Y,  where E{XY} = r_{X,Y}
= r_{X,Y} − E{X}E{Y}
1.30 (a) By definition, the autocorrelation function of a WSS process X is R_X(ν) = E[X(n)X(n + ν)]. Thus, R_X(0) = E[X(n)X(n + 0)] = E[X²(n)].
(b) Let ν be a fixed real number. By definition, the autocorrelation function of a WSS process X is

R_X(ν) = E[X(n)X(n + ν)] = E[X(m)X(m + ν)],

even when m ≠ n. Let us choose m = n − ν. In this particular case, we have

R_X(ν) = E[X(n)X(n + ν)] = E[X(n − ν)X(n − ν + ν)] = E[X(n − ν)X(n)]
= E[X(n)X(n − ν)] = E[X(n)X(n + (−ν))] = R_X(−ν).

(c) Let f : R → R be a function analytically defined as

f(λ) = E{[X(n) + λX(n + ν)]²}.

Thus, from item 1.30a, one has

f(λ) = E{X²(n) + 2λX(n)X(n + ν) + λ²X²(n + ν)} = R_X(0) + 2λR_X(ν) + λ²R_X(0).

Hence, f is a quadratic polynomial in λ whose discriminant is

∆ = 4R_X²(ν) − 4R_X²(0).

By definition, we know that f(λ) ≥ 0 for all real λ. In order for f to satisfy this positivity property, we must have ∆ ≤ 0. We, therefore, have

∆ = 4R_X²(ν) − 4R_X²(0) ≤ 0 ⇔ |R_X(ν)| ≤ |R_X(0)| = R_X(0).
1.31 According to Equation (1.240) of the textbook, the autocorrelation function of the output signal is given by

R_Y(n_1, n_2) = Σ_{k_1=−∞}^{∞} Σ_{k_2=−∞}^{∞} R_X((n_1 − n_2) − k_1 + k_2) h(k_1) h(k_2)

By a simple change of variables, k_2 = r and k_1 = k + r, the equation above becomes

R_Y(n_1, n_2) = Σ_{k=−∞}^{∞} Σ_{r=−∞}^{∞} R_X((n_1 − n_2) − (k + r) + r) h(k + r) h(r)
= Σ_{k=−∞}^{∞} R_X((n_1 − n_2) − k) Σ_{r=−∞}^{∞} h(k + r) h(r),  where C_h(k) = Σ_{r=−∞}^{∞} h(k + r) h(r)
45
Notice that R_Y(n_1, n_2) does not depend on the absolute values of n_1 and n_2: it depends only on the lag n = n_1 − n_2, and the expression above may be written as

R_Y(n) = Σ_{k=−∞}^{∞} R_X(n − k) C_h(k),

so the process Y is also WSS. Note that the derivation of Equation (1.240) assumes that the process X is WSS.
1.32 As the discrete random variable X has distribution u_{X,d}(x) = (1/M) Σ_{i=1}^{M} δ(x − x_i), for x_i ∈ X_M = {x_1, x_2, ..., x_M}, the probability of X = x_i, for each x_i ∈ X_M, is p_X(x_i) = 1/M, whereas the probability of X = x, for each x ∉ X_M, is p_X(x) = 0. Thus, the entropy of X is given by

H(X) = −Σ_{x∈X_M} p_X(x) log_2 p_X(x) = −Σ_{x∈X_M} (1/M) log_2(1/M) = Σ_{x∈X_M} (1/M) log_2(M)
= Σ_{x∈X_M} log_2(M^{1/M}) = log_2(Π_{x∈X_M} M^{1/M}) = log_2(M^{1/M} × M^{1/M} × ··· × M^{1/M})  [M times]
= log_2 M bits/symbol
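A direct numerical check from the definition (a Python sketch; M = 16 is an arbitrary choice):

```python
import math

# Entropy of a uniform M-symbol source: should equal log2(M) bits/symbol.
M = 16
p = [1.0 / M] * M
H = -sum(pi * math.log2(pi) for pi in p)
print(H)  # 4.0 for M = 16
```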
1.33 Here we will denote the base of the logarithm by B instead of b in order to avoid problems with notation.

(a)

H(X) = −∫_{x∈X} p_X(x) log_B p_X(x) dx
= −∫_a^b (1/(b − a)) log_B(1/(b − a)) dx
= ∫_a^b (1/(b − a)) log_B(b − a) dx
= (1/(b − a)) log_B(b − a) ∫_a^b dx
= log_B(b − a)
= log_2(b − a) bits (for B = 2)
(b) In order to facilitate calculations, let B = e. Then

H(X) = −∫_{x∈X} φ_X(x) log_e φ_X(x) dx
= −∫_{−∞}^{∞} φ_X(x) ln[(1/√(2πσ²)) e^{−(x−µ)²/(2σ²)}] dx
= −∫_{−∞}^{∞} φ_X(x) [ln(1/√(2πσ²)) + ln(e^{−(x−µ)²/(2σ²)})] dx
= −∫_{−∞}^{∞} φ_X(x) [−ln(√(2πσ²)) − (x − µ)²/(2σ²)] dx
= ln(√(2πσ²)) ∫_{−∞}^{∞} φ_X(x) dx + (1/(2σ²)) ∫_{−∞}^{∞} (x − µ)² φ_X(x) dx
= ln(√(2πσ²)) + 1/2  [the first integral equals 1 and the second equals E{(x − µ)²} = σ²]
= (1/2) ln(2πσ²) + (1/2) ln e
= (1/2) ln(2πσ²e) nats
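The closed form can be checked by numerical quadrature (a Python sketch; the parameters µ = 0 and σ = 1.5 are arbitrary illustrative choices):

```python
import math

mu, sigma = 0.0, 1.5

def phi(x):
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) \
           / math.sqrt(2 * math.pi * sigma ** 2)

# midpoint rule for H = -∫ phi(x) ln(phi(x)) dx over a wide interval
N, lo, hi = 200000, mu - 12 * sigma, mu + 12 * sigma
dx = (hi - lo) / N
H_num = -sum(phi(x) * math.log(phi(x)) * dx
             for x in (lo + (k + 0.5) * dx for k in range(N)))
H_formula = 0.5 * math.log(2 * math.pi * math.e * sigma ** 2)
print(H_num, H_formula)
```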
Chapter 2
THE z AND FOURIER TRANSFORMS
2.1 (a)

X(z) = Σ_{n=−∞}^{∞} x(n) z^{−n} = Σ_{n=−∞}^{∞} sin(ωn + θ)u(n) z^{−n} = Σ_{n=0}^{∞} sin(ωn + θ) z^{−n}
= Σ_{n=0}^{∞} [e^{j(ωn+θ)} − e^{−j(ωn+θ)}]/(2j) z^{−n}
= (e^{jθ}/2j) Σ_{n=0}^{∞} e^{jωn} z^{−n} − (e^{−jθ}/2j) Σ_{n=0}^{∞} e^{−jωn} z^{−n}
= (e^{jθ}/2j) Σ_{n=0}^{∞} (e^{jω}z^{−1})^n − (e^{−jθ}/2j) Σ_{n=0}^{∞} (e^{−jω}z^{−1})^n
= (e^{jθ}/2j) 1/(1 − e^{jω}z^{−1}) − (e^{−jθ}/2j) 1/(1 − e^{−jω}z^{−1}),  |e^{jω}z^{−1}| < 1
= (e^{jθ}/2j) z/(z − e^{jω}) − (e^{−jθ}/2j) z/(z − e^{−jω})
= (1/2j) [e^{jθ}z² − e^{−j(ω−θ)}z − e^{−jθ}z² + e^{j(ω−θ)}z]/[(z − e^{jω})(z − e^{−jω})]

X(z) = [z² sin θ + z sin(ω − θ)]/(z² − 2z cos ω + 1)

ROC: |e^{jω}z^{−1}| < 1 ⇒ |z| > 1.
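The closed form can be checked numerically inside the ROC (a Python sketch; the test point z = 2 and the values ω = π/5, θ = 0.3 are arbitrary choices):

```python
import math

z = 2.0
omega, theta = math.pi / 5, 0.3
# truncated series sum_{n>=0} sin(omega*n + theta) z^{-n}
series = sum(math.sin(omega * n + theta) * z ** (-n) for n in range(200))
closed = (z ** 2 * math.sin(theta) + z * math.sin(omega - theta)) / \
         (z ** 2 - 2 * z * math.cos(omega) + 1)
print(series, closed)
```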
(b)

X(z) = Σ_{n=−∞}^{∞} x(n) z^{−n} = Σ_{n=0}^{∞} cos(ωn) z^{−n}
= Σ_{n=0}^{∞} [(e^{jωn} + e^{−jωn})/2] z^{−n}
= (1/2) Σ_{n=0}^{∞} (e^{jω}z^{−1})^n + (1/2) Σ_{n=0}^{∞} (e^{−jω}z^{−1})^n
= (1/2) 1/(1 − e^{jω}z^{−1}) + (1/2) 1/(1 − e^{−jω}z^{−1}),  |e^{jω}z^{−1}| < 1
...
X(z) = (z² − z cos ω)/(z² − 2z cos ω + 1)

ROC: |e^{jω}z^{−1}| < 1 ⇒ |z| > 1.
(c)

X(z) = Σ_{n=0}^{4} n z^{−n} = z^{−1} + 2z^{−2} + 3z^{−3} + 4z^{−4}

ROC: |z| > 0, since otherwise we could not have the terms z^{−n} in the z transform.
(d)

X(z) = Σ_{n=−∞}^{0} a^n z^{−n},  m = −n
= Σ_{m=0}^{∞} a^{−m} z^m = Σ_{m=0}^{∞} (a^{−1}z)^m

X(z) = 1/(1 − a^{−1}z),  |a^{−1}z| < 1

ROC: |a^{−1}z| < 1 ⇒ |z| < |a|

(e)

X(z) = Σ_{n=0}^{∞} e^{−αn} z^{−n} = Σ_{n=0}^{∞} (e^{−α}z^{−1})^n = 1/(1 − e^{−α}z^{−1}),  |e^{−α}z^{−1}| < 1

X(z) = z/(z − e^{−α})

ROC: |e^{−α}z^{−1}| < 1 ⇒ |z| > e^{−α}.
(f)

X(z) = Σ_{n=0}^{∞} e^{−αn} [(e^{jωn} − e^{−jωn})/(2j)] z^{−n}
= (1/2j) Σ_{n=0}^{∞} (e^{−α+jω}z^{−1})^n − (1/2j) Σ_{n=0}^{∞} (e^{−α−jω}z^{−1})^n
= (1/2j) 1/(1 − e^{−α+jω}z^{−1}) − (1/2j) 1/(1 − e^{−α−jω}z^{−1}),  |e^{−α}z^{−1}| < 1
...
X(z) = z e^{−α} sin ω/(z² − 2z e^{−α} cos ω + e^{−2α})

ROC: |e^{−α}z^{−1}| < 1 ⇒ |z| > e^{−α}.
(g)

x(n) = n²u(n); let a(n) = u(n):

A(z) = Σ_{n=0}^{∞} z^{−n} = z/(z − 1),  |z| > 1

b(n) = n a(n) ⇒ B(z) = −z dA(z)/dz = z/(z − 1)²,  |z| > 1

x(n) = n b(n) ⇒ X(z) = −z dB(z)/dz
...
X(z) = (z² + z)/(z − 1)³,  |z| > 1

ROC: |z| > 1.
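A spot check of item (g) inside the ROC (a Python sketch; the test point z = 2 is an arbitrary choice, where the closed form evaluates to 6):

```python
# sum_{n>=0} n^2 z^{-n} should equal (z^2 + z)/(z - 1)^3 for |z| > 1
z = 2.0
series = sum(n ** 2 * z ** (-n) for n in range(300))
closed = (z ** 2 + z) / (z - 1) ** 3
print(series, closed)
```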
2.2

(1/2πj) ∮ z^{n−1} dz = 0, for n ≠ 0; 1, for n = 0.

• n = 0:

I = (1/2πj) ∮ (1/z) dz

From the residue theorem: I = (1/2πj)(2πj)(1) = 1.

• n > 0:

I = (1/2πj) ∮ z^{n−1} dz

From the residue theorem: I = (1/2πj)(2πj)(0) = 0, since there are no poles inside the unit circle.

• n < 0:

I = (1/2πj) ∮ z^{n−1} dz = (1/2πj) ∮ (1/z^{1−n}) dz

Residue at z = 0:

R(z = 0) = (1/(−n)!) [d^{(−n)}/dz^{(−n)}] [(z − 0)^{1−n} (1/z^{1−n})]|_{z=0}
= (1/(−n)!) [d^{(−n)}/dz^{(−n)}] [1]|_{z=0},  m = −n
= (1/m!) [d^m/dz^m] [1]|_{z=0}
= 0

I = (1/2πj)(2πj)(0) = 0

• Therefore:

(1/2πj) ∮ z^{n−1} dz = 0, for n ≠ 0; 1, for n = 0.
2.3 In order to be a stable digital filter, all the poles must be inside the unit circle of the complex plane. Writing H(z) = N(z)/D(z), with

D(z) = z² + (m_1 − m_2)z + (1 − m_1 − m_2):

D(1) = 1 + m_1 − m_2 + 1 − m_1 − m_2 = 2 − 2m_2 > 0 ⇒ m_2 < 1

n = 2 (even) → D(−1) > 0:
D(−1) = 1 − m_1 + m_2 + 1 − m_1 − m_2 = 2 − 2m_1 > 0 ⇒ m_1 < 1

Computation of α_0:

D_0(z) = D(z) = z² + (m_1 − m_2)z + (1 − m_1 − m_2)
D_0^i(z) = z² D_0(z^{−1}) = z²(z^{−2} + (m_1 − m_2)z^{−1} + (1 − m_1 − m_2)) = 1 + (m_1 − m_2)z + (1 − m_1 − m_2)z²

D_0(z) = α_0 D_0^i(z) + D_1(z),  α_0 = 1 − m_1 − m_2

|α_0| < 1 ⇒ 1 − m_1 − m_2 < 1 ⇒ m_1 + m_2 > 0
By joining the three inequalities, we can find a region of the m_2 × m_1 plane in which the digital filter is stable, as can be seen in Figure 2.1.
2.4
2.5

H(z) = (z − 1)²/(z² − 0.32z + 0.8),  x(n) = u(n) ⇒ X(z) = z/(z − 1)

Y(z) = H(z)X(z) = [(z − 1)²/(z² − 0.32z + 0.8)] [z/(z − 1)] = z(z − 1)/(z² − 0.32z + 0.8)

The poles are p = 0.16 ± j0.88 = 0.894e^{±j0.443π}. Expanding Y(z)/z in partial fractions,

Y(z)/z = (z − 1)/[(z − p)(z − p*)] = A/(z − p) + A*/(z − p*),  A = (p − 1)/(p − p*) = 0.691e^{j0.243π},

so that

y(n) = A p^n u(n) + A* (p*)^n u(n)
...
y(n) = 1.382(0.894)^n cos(0.443πn + 0.243π)u(n)

As a sanity check, y(0) = 1.382 cos(0.243π) = 1 and y(n) → 0 as n → ∞, consistent with the double zero of H(z) at z = 1 (H(1) = 0).
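One way to verify the step response is to iterate the difference equation implied by H(z) and compare it with 2·Re(A p^n), where A = (p − 1)/(p − p*) is the residue of Y(z)/z at the pole p (a Python sketch):

```python
# y(n) = 0.32 y(n-1) - 0.8 y(n-2) + x(n) - 2 x(n-1) + x(n-2), x = u(n)
p = complex(0.16, 0.88)            # pole of z^2 - 0.32 z + 0.8
A = (p - 1) / (p - p.conjugate())  # residue of Y(z)/z at p
ys = []
y1 = y2 = 0.0
for n in range(50):
    x0 = 1.0
    x1 = 1.0 if n >= 1 else 0.0
    x2 = 1.0 if n >= 2 else 0.0
    y = 0.32 * y1 - 0.8 * y2 + x0 - 2 * x1 + x2
    ys.append(y)
    y2, y1 = y1, y
err = max(abs(ys[n] - 2 * (A * p ** n).real) for n in range(50))
print(err)
```

Since |p| ≈ 0.894 < 1, the step response decays to zero, as expected from H(1) = 0.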
2.6 (a) From Table 2.1 (in the book), we know that

(−a)^n u(n) ↔ z/(z + a)

Thus, considering a = −0.8 for this problem:

X(z) = z/(z − 0.8) ⇒ x(n) = (0.8)^n u(n)
Figure 2.1: Region of m2 ×m1 plane in which the digital filter of exercise 2.3 is stable.
(b)

X(z) = z²/(z² − z + 0.5)
...
X(z) = 1 + 0.5/(z − 0.707e^{jπ/4}) + 0.5/(z − 0.707e^{−jπ/4})

x(n) = δ(n) + 0.5(0.707)^{n−1} e^{jπ(n−1)/4} u(n − 1) + 0.5(0.707)^{n−1} e^{−jπ(n−1)/4} u(n − 1)

x(n) = δ(n) + 2^{−(n−1)/2} cos(π(n − 1)/4) u(n − 1)
(c)

X(z) = (z² + 2z + 1)/(z² − z + 0.5)
...
X(z) = 1 + 2.5e^{−j0.294π}/(z − 0.707e^{jπ/4}) + 2.5e^{j0.294π}/(z − 0.707e^{−jπ/4})

x(n) = δ(n) + 2.5e^{−j0.294π}(0.707e^{jπ/4})^{n−1} u(n − 1) + 2.5e^{j0.294π}(0.707e^{−jπ/4})^{n−1} u(n − 1)
...
x(n) = δ(n) + 5(0.707)^{n−1} cos(π(n − 1)/4 − 0.294π) u(n − 1)
(d)

X(z) = z²/[(z − a)(z − 1)]
...
X(z) = 1 + [a²/(a − 1)] 1/(z − a) − [1/(a − 1)] 1/(z − 1)

x(n) = δ(n) + [a²/(a − 1)] a^{n−1} u(n − 1) − [1/(a − 1)] u(n − 1)
(e)

(1 − z²)/[(2z² − 1)(z − 2)] = (1/2) (1 − z²)/[(z² − 1/2)(z − 2)] = (1/2) (1 − z²)/[(z − √2/2)(z + √2/2)(z − 2)]
= (1/2) [A/(z − √2/2) + B/(z + √2/2) + C/(z − 2)]
= (1/2) [A(z² + ((√2 − 4)/2)z − √2) + B(z² − ((√2 + 4)/2)z + √2) + C(z² − 1/2)] / [(z − √2/2)(z + √2/2)(z − 2)]
= (1/2) [(A + B + C)z² + ((√2/2)(A − B) − 2(A + B))z − √2(A − B) − C/2] / [(z − √2/2)(z + √2/2)(z − 2)].

Thus, we must have √2(A − B) = −C/2 − 1, yielding (√2/2)(A − B) − 2(A + B) = −C/4 − 1/2 − 2(A + B) = 0 ⇔ (A + B) = −C/8 − 1/4. Hence, (A + B + C) = −1 ⇔ (7/8)C − 1/4 = −1 ⇔ C = −6/7. Then (A + B) = −1/7, whereas (A − B) = −2√2/7. We, therefore, have A = −(1 + 2√2)/14 and B = −(1 − 2√2)/14.

By remembering that, for X(z) = 1/(z − z_0), one has

x(n) = z_0^{n−1} u(n − 1)

whenever the region of convergence is |z| > |z_0|, or

x(n) = −z_0^{n−1} u(−n)

whenever the region of convergence is |z| < |z_0|, the stable system has the following inverse transform:

x(n) = (1/2) [Z^{−1}{A/(z − √2/2)}_{|z|>√2/2} + Z^{−1}{B/(z + √2/2)}_{|z|>√2/2} + Z^{−1}{C/(z − 2)}_{|z|<2}]
= (1/2) [A(√2/2)^{n−1} u(n − 1) + B(−√2/2)^{n−1} u(n − 1) − C 2^{n−1} u(−n)]
= [−((1 + 2√2)/28)(√2/2)^{n−1} − ((1 − 2√2)/28)(−√2/2)^{n−1}] u(n − 1) + (3/7) 2^{n−1} u(−n)

(For instance, x(0) = 3/14 and x(1) = −1/14, which can be confirmed from the Laurent series of X(z) in the annulus √2/2 < |z| < 2.)
2.7 (a)

X(z) = sin(1/z)

Let X(z) = Y(a) = sin(a), with a = 1/z. Using the Taylor series expansion:

Y(a) = sin(0) + a dY/da|_{a=0} + (a²/2!) d²Y/da²|_{a=0} + ...
= 0 + a − a³/3! + a⁵/5! − a⁷/7! + ...

X(z) = z^{−1} − z^{−3}/3! + z^{−5}/5! − z^{−7}/7! + ...
= Σ_{n=0}^{∞} [sin(πn/2)/n!] z^{−n}
= Σ_{n=−∞}^{∞} [sin(πn/2)/n!] u(n) z^{−n}

x(n) = [sin(πn/2)/n!] u(n)
(b)

X(z) = √(z/(1 + z)) = √(1/(1 + z^{−1}))

Let X(z) = Y(a) = (1 + a)^{−1/2}, with a = z^{−1}. Using the Taylor series expansion:

Y(a) = Y(0) + a dY/da|_{a=0} + (a²/2!) d²Y/da²|_{a=0} + ...
= 1 + a(−1/2) + (a²/2!)(3/4) + (a³/3!)(−15/8) + (a⁴/4!)(105/16) + (a⁵/5!)(−945/32) + ...

X(z) = 1 + z^{−1}(−1/2) + (z^{−2}/2!)(3/4) + (z^{−3}/3!)(−15/8) + (z^{−4}/4!)(105/16) + (z^{−5}/5!)(−945/32) + ...

The i-th derivative at a = 0 is (−1/2)(−3/2)···(−(2i − 1)/2) = (−1)^i (2i − 1)!!/2^i, so that

X(z) = 1 + Σ_{i=1}^{∞} (−1)^i [(2i − 1)!!/(2^i i!)] z^{−i}

x(n) = δ(n) + (−1)^n [(2n − 1)!!/(2^n n!)] u(n − 1)
2.8 Exercise 2.8
2.9 Consider the sequence c(n) as a function of x(n):

c(n) = [x(n) + (−1)^n x(n)]/2

C(z) = Σ_{n=−∞}^{∞} c(n) z^{−n}
= (1/2) Σ_{n=−∞}^{∞} x(n) z^{−n} + (1/2) Σ_{n=−∞}^{∞} (−1)^n x(n) z^{−n}
= (1/2) X(z) + (1/2) Σ_{n=−∞}^{∞} x(n) (−z)^{−n}

C(z) = (1/2) X(z) + (1/2) X(−z)   (2.1)

Now note that y(n) = c(2n) and, using equation (2.1),

Y(z) = Σ_{n=−∞}^{∞} c(2n) z^{−n} = Σ_{n even} c(n) z^{−n/2}

Y(z) = (1/2) X(z^{1/2}) + (1/2) X(−z^{1/2})
2.10 (a)

D(z) = z⁵ + 2z⁴ + z³ + 2z² + z + 0.5
D(1) = 1 + 2 + 1 + 2 + 1 + 0.5 = 7.5 > 0

n = 5 (odd) ⇒ we need D(−1) < 0, but
D(−1) = −1 + 2 − 1 + 2 − 1 + 0.5 = 1.5 > 0

Hence D(z) cannot be the denominator of a causal stable filter.
(b)

D(z) = z⁶ − z⁵ + z⁴ + 2z³ + z² + z + 0.25
D(1) = 1 − 1 + 1 + 2 + 1 + 1 + 0.25 = 5.25 > 0

n = 6 (even) ⇒ we need D(−1) > 0:
D(−1) = 1 + 1 + 1 − 2 + 1 − 1 + 0.25 = 1.25 > 0

D_0^i(z) = 1 − z + z² + 2z³ + z⁴ − z⁵ + z⁶

D_0(z) = α_0 D_0^i(z) + D_1(z),  α_0 = 0.25
D_1(z) = 1.25z + 0.75z² + 1.5z³ + 0.75z⁴ − 1.25z⁵ + 0.938z⁶
D_1^i(z) = 1.25z⁶ + 0.75z⁵ + 1.5z⁴ + 0.75z³ − 1.25z² + 0.938z

D_1(z) = α_1 D_1^i(z) + D_2(z),  α_1 = 1.333 > 1

Hence D(z) cannot be the denominator of a causal stable filter.
(c)

D(z) = z⁴ + 0.5z³ − 2z² + 1.75z + 0.5
D(1) = 1 + 0.5 − 2 + 1.75 + 0.5 = 1.75 > 0

n = 4 (even) ⇒ we need D(−1) > 0, but
D(−1) = 1 − 0.5 − 2 − 1.75 + 0.5 < 0

Hence D(z) cannot be the denominator of a causal stable filter.
2.11 (a)

a_0 > 0

D(1) > 0 → 1 − 2 − a + b + a + 1 > 0 ⇒ b > 0

D(−1) > 0 → 1 + 2 + a − b + a + 1 > 0 ⇒ 4 + 2a − b > 0

(1 + a − (2 + a − b)z + z²) ÷ (1 − (2 + a − b)z + (1 + a)z²) = 2 + a + remainder

α_0 = 1 + a → |1 + a| < 1 → a < 0
(b) Figure 2.2 highlights the stability region associated with the polynomial D(z).
Figure 2.2: Stability region.
2.12 There are three possible regions of convergence: (i) |z| < √2/2, (ii) |z| > 2, or (iii) √2/2 < |z| < 2. Since only the stable solution is of interest, the region of convergence should include the unit circle, that is, region (iii). The transfer function under discussion is given by

H(z) = z³/[(z + 2)(z − 1/2 − j/2)(z − 1/2 + j/2)] = z³/[(z + 2)(z² − z + 1/2)].

So that

h(n) = (1/2πj) ∮_C H(z)z^{n−1} dz = Σ_k res{H(z)z^{n−1}} = Σ_k res{z^{n+2}/[(z + 2)(z − 1/2 − j/2)(z − 1/2 + j/2)]},

and for n ≥ −2, H(z)z^{n−1} has no poles at the origin. As previously mentioned, the corresponding region of convergence is given by √2/2 = |1/2 ± j/2| = r_1 < |z| < r_2 = |−2| = 2, so that only the poles (1 ± j)/2 = (1/√2)e^{±jπ/4} are enclosed:

h(n) = (1/2 + j/2)^{n+2}/[(1/2 + j/2 + 2)(j)] + (1/2 − j/2)^{n+2}/[(1/2 − j/2 + 2)(−j)]
= (1/√2)^{n+2} [e^{jπ(n+2)/4}/(−1/2 + 5j/2) + e^{−jπ(n+2)/4}/(−1/2 − 5j/2)]
= (1/√2)^{n+2} (2/√26) [e^{j(π(n+2)/4 + arctan(5) − π)} + e^{−j(π(n+2)/4 + arctan(5) − π)}]
= −(1/√2)^{n+2} (4/√26) cos[(π/4)(n + 2) + arctan(5)]

h(n) = −(1/√2)^{n−1} (1/√13) cos[(π/4)(n + 2) + arctan(5)], for n ≥ −2.

For n < −2, we can perform the transformation z = 1/v in order to avoid having multiple poles at the origin, that is,

h(n) = (1/2πj) ∮_{C''} H(1/v) v^{−n−1} dv.

The closed path C'' follows the counterclockwise direction within the new region of convergence 1/2 = 1/r_2 < |v| < 1/r_1 = √2:

(1/2πj) ∮_{C''} [v^{−3}v^{−n−1}/((1/v + 2)(1/v² − 1/v + 1/2))] dv = (1/2πj) ∮_{C''} [v^{−n−1}/((1 + 2v)(1 − v + v²/2))] dv.

The only pole of the integrand inside C'' is v = −1/2 (since −n − 1 ≥ 2, there is no pole at v = 0), therefore

h(n) = (−1/2)^{−n−1}/[2(1 + 1/2 + 1/8)] = (4/13)(−1/2)^{−n−1} = (−1)^{n+1}(8/13)2^n, for n < −2.

(For instance, h(−2) = −2/13, h(0) = 5/13, and h(−3) = 1/13, which can be confirmed directly from the residues.) The resulting system is bilateral, that is, it is not a causal system. The system is linear since no initial condition different from zero was used to obtain the solution, and the resulting system is also time-invariant since no coefficient varies with time.
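Because the ROC contains the unit circle, h(n) = (1/2π) ∫ H(e^{jθ}) e^{jθn} dθ, which allows a numerical check of both branches of the closed form (a Python sketch):

```python
import cmath, math

def H(z):
    return z ** 3 / ((z + 2) * (z * z - z + 0.5))

def h_formula(n):
    if n >= -2:
        return -(1 / math.sqrt(2)) ** (n - 1) / math.sqrt(13) \
               * math.cos(math.pi * (n + 2) / 4 + math.atan(5))
    return (-1) ** (n + 1) * (8 / 13) * 2.0 ** n

N = 4096  # midpoint rule on the unit circle
def h_numeric(n):
    tot = 0.0
    for k in range(N):
        th = 2 * math.pi * (k + 0.5) / N
        tot += (H(cmath.exp(1j * th)) * cmath.exp(1j * th * n)).real
    return tot / N

err = max(abs(h_numeric(n) - h_formula(n)) for n in (-5, -3, -2, 0, 1, 3))
print(err)
```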
2.13

H(e^{jω}) = Σ_{n=−∞}^{∞} h(n) e^{−jωn} = Σ_{n=−(N−2)}^{N−2} (−1)^n e^{−jωn}
= Σ_{n=−(N−2)}^{N−2} e^{jπn} e^{−jωn}
= Σ_{n=−(N−2)}^{N−2} e^{−j(ω−π)n},  m = −n
= Σ_{m=−(N−2)}^{N−2} e^{j(ω−π)m}
= e^{−j(ω−π)(N−2)} Σ_{m=0}^{2N−4} e^{j(ω−π)m}

H(e^{jω}) = e^{−j(ω−π)(N−2)} [1 − e^{j(ω−π)(2N−3)}]/[1 − e^{j(ω−π)}]
2.14 The magnitude and phase plots of the frequency response of the systems were obtained using the func-tion freqz.
(a)
h(n) = δ(n) + 2δ(n − 1) + 3δ(n − 2) + 2δ(n − 3) + δ(n − 4)

H(e^{jω}) = 1 + 2e^{−jω} + 3e^{−2jω} + 2e^{−3jω} + e^{−4jω}
= e^{−2jω}[e^{2jω} + 2e^{jω} + 3 + 2e^{−jω} + e^{−2jω}]

H(e^{jω}) = e^{−2jω}[3 + 4cos(ω) + 2cos(2ω)]

|H(e^{jω})| = |3 + 4cos(ω) + 2cos(2ω)|

∠H(e^{jω}) = −2ω, since 3 + 4cos(ω) + 2cos(2ω) = (2cos(ω) + 1)² ≥ 0 for 0 < ω < π
Figure 2.3 shows the magnitude and phase plots.
Figure 2.3: Magnitude and phase of the system of exercise 2.14a.
(b)

h(n) = h(n − 1) + δ(n)

H(e^{jω}) = e^{−jω}H(e^{jω}) + 1
H(e^{jω})[1 − e^{−jω}] = 1
H(e^{jω}) = 1/(1 − e^{−jω})

|H(e^{jω})| = 1/√[(1 − cos ω)² + sin² ω] = 1/√(2 − 2cos ω)

∠H(e^{jω}) = −arctan[sin ω/(1 − cos ω)]
Figure 2.4 shows the magnitude and phase plots.
Figure 2.4: Magnitude and phase of the system of exercise 2.14b.
(c)

h(n) = δ(n) + 3δ(n − 1) + 2δ(n − 2)

H(e^{jω}) = 1 + 3e^{−jω} + 2e^{−2jω}
= e^{−jω/2}[e^{jω/2} + e^{−jω/2}] + 2e^{−3jω/2}[e^{jω/2} + e^{−jω/2}]
= 2e^{−jω/2} cos(ω/2) + 4e^{−3jω/2} cos(ω/2)

H(e^{jω}) = 2e^{−jω/2} cos(ω/2)[1 + 2e^{−jω}]

|H(e^{jω})| = 2|cos(ω/2)| √[(1 + 2cos ω)² + 4sin² ω] = 2|cos(ω/2)| √(5 + 4cos ω)

∠H(e^{jω}) = −ω/2 − arctan[2 sin ω/(1 + 2cos ω)]
Figure 2.5 shows the magnitude and phase plots.
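The factored magnitude expression for item (c) can be verified over a frequency grid (a Python sketch):

```python
import cmath, math

def H(w):
    # frequency response of h = [1, 3, 2]
    return 1 + 3 * cmath.exp(-1j * w) + 2 * cmath.exp(-2j * w)

# |H(e^{jw})| should equal 2*|cos(w/2)|*sqrt(5 + 4*cos(w))
err = max(abs(abs(H(w)) - 2 * abs(math.cos(w / 2)) * math.sqrt(5 + 4 * math.cos(w)))
          for w in (0.1 * k for k in range(63)))
print(err)
```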
2.15 (a) Using the MatLab® function freqz([1 2 0 2 1],1), we have the magnitude and phase ofthe frequency response, as we can see in Figure 2.6.
(b) Using the MatLab® function freqz([1 0 -1],[1 -1.2 0.95]), we have the magnitudeand phase of the frequency response, as we can see in Figure 2.7.
Figure 2.5: Magnitude and phase of the system of exercise 2.14c.
Figure 2.6: Magnitude and phase of the filter of exercise 2.15a.
2.16 Steady-state response:

x(n) = [e^{jωn} − e^{−jωn}]/(2j) u(n)

y(n) = [H(e^{jω}) e^{jωn}/(2j) − H(e^{−jω}) e^{−jωn}/(2j)] u(n)
= |H(e^{jω})| [e^{jΘ(ω)} e^{jωn}/(2j) − e^{−jΘ(ω)} e^{−jωn}/(2j)] u(n)
= |H(e^{jω})| [e^{j(ωn + Θ(ω))} − e^{−j(ωn + Θ(ω))}]/(2j) u(n)

y(n) = |H(e^{jω})| sin(ωn + Θ(ω)) u(n)
Figure 2.7: Magnitude and phase of the filter of exercise 2.15b.
2.17 Then

H(z) = [δ_0(1 − z^{−1}) + δ_1(1 − z^{−1})z^{−1} − δ_2 z^{−1}]/[1 − z^{−1} − (1 + m_1)z^{−1}(1 − z^{−1}) − m_2 z^{−1}]
= [δ_0 − z^{−1}(δ_0 − δ_1 + δ_2) − δ_1 z^{−2}]/[1 − z^{−1}(2 + m_1 + m_2) + (1 + m_1)z^{−2}]

Given that

N(z) = N_2(z) + δ_2   (2.2)
N_2(z) = −δ_1 z^{−2} − z^{−1}(δ_0 − δ_1) + δ_0   (2.3)

At z = 1,

N_2(1) = δ_0 − (δ_0 − δ_1) − δ_1 = 0.

As a result,

H(1) = −δ_2/[1 − (2 + m_1 + m_2) + 1 + m_1] = −δ_2/m_2.

In order to achieve gain one at DC, we must choose

δ_2 = −m_2.

The numerator polynomial at z = −1 is given by

N(−1) = δ_0 + δ_0 − δ_1 + δ_2 − δ_1 = 2δ_0 − 2δ_1 + δ_2.

In order to place two zeros at z = −1, we must have

N(z) = δ_0(z + 1)² = δ_0(z² + 2z + 1),

that is,

N(z) = δ_0 z² − z(δ_0 − δ_1 + δ_2) − δ_1 = δ_0[z² − z(1 − δ_1/δ_0 + δ_2/δ_0) − δ_1/δ_0],

yielding

δ_1/δ_0 = 1,  δ_2/δ_0 = −2 ⇒ δ_2 = −2δ_0,

so that

δ_0 = m_2/2,  δ_1 = m_2/2.
2.18 (a) Given x(n) = e^{jωn}, we know that the output y(n) of a linear time-invariant system is as follows:

y(n) = H(e^{jω})e^{jωn} = |H(e^{jω})|e^{j(ωn + Θ(ω))},

where H(e^{jω}) = |H(e^{jω})|e^{jΘ(ω)} is the frequency response of the system. We also know that the system is delayed by β samples, irrespective of the particular frequency, whenever y(n) can be written as

y(n) = |H(e^{jω})|e^{jω(n + β)}.

Thus, when the system is delayed by β samples, irrespective of the particular frequency, we have that the phase of the system is Θ(ω) = βω, yielding a constant group delay τ(ω) = −dΘ(ω)/dω = −β. We, therefore, have just shown that a constant group delay is a necessary condition for the delay introduced by the system to a sinusoid to be independent of its frequency.

On the other hand, a constant group delay is not a sufficient condition. As a counterexample, let us consider a system whose phase response is Θ(ω) = βω + π. For such a system, the group delay is −β (a constant). Nonetheless, one has

y(n) = |H(e^{jω})|e^{jω[n + (β + π/ω)]},  (ω ≠ 0),

implying that the delay depends on the particular frequency.
(b) First of all, we need to rewrite this exercise: let y_1(n) and y_2(n) be the outputs of the system to two sinusoids x_1(n) and x_2(n), respectively. A constant group delay τ implies that, if x_1(n_0) = x_2(n_0), then y_1(n_0 + τ) = α_2 y_2(n_0 + τ) or y_2(n_0 + τ) = α_1 y_1(n_0 + τ), where α_1 and α_2 are nonnegative constants.

Given this new statement, from exercise 2.18a, we know that

y_1(n) = H(e^{jω_1})x_1(n) = |H(e^{jω_1})|e^{jΘ(ω_1)}x_1(n) = |H(e^{jω_1})|e^{−jτω_1}e^{jc}x_1(n)
y_2(n) = H(e^{jω_2})x_2(n) = |H(e^{jω_2})|e^{jΘ(ω_2)}x_2(n) = |H(e^{jω_2})|e^{−jτω_2}e^{jc}x_2(n),

where τ(ω) = −dΘ(ω)/dω = −d/dω[−τω + c] = τ is the constant group delay of the system. Now, let us assume that e^{jω_1n_0} = x_1(n_0) = x_2(n_0) = e^{jω_2n_0}. This occurs if and only if there exists an integer k such that ω_2n_0 = ω_1n_0 + 2πk. Using this fact, we have

y_2(n_0 + τ) = |H(e^{jω_2})|e^{−jτω_2}e^{jc}x_2(n_0 + τ) = |H(e^{jω_2})|e^{−jτω_2}e^{jc}e^{jω_2(n_0 + τ)}
= |H(e^{jω_2})|e^{jc}e^{jω_2n_0} = |H(e^{jω_2})|e^{jc}e^{jω_1n_0} = [|H(e^{jω_2})|/|H(e^{jω_1})|] |H(e^{jω_1})|e^{jc}e^{jω_1n_0}
= α_1|H(e^{jω_1})|e^{jc}e^{jω_1n_0} = (e^{−jτω_1}e^{jτω_1}) α_1|H(e^{jω_1})|e^{jc}e^{jω_1n_0}
= α_1|H(e^{jω_1})|e^{−jτω_1}e^{jc}e^{jω_1(n_0 + τ)} = α_1|H(e^{jω_1})|e^{−jτω_1}e^{jc}x_1(n_0 + τ)
= α_1 y_1(n_0 + τ),

with α_1 = |H(e^{jω_2})|/|H(e^{jω_1})|,
where we have assumed that |H(eω1)| 6= 0. In the particular case where |H(eω1)| = 0, then0 = y1(n0 + τ) = 0 × y2(n0 + τ), yielding α2 = 0. Similarly, one has that, for |H(eω2)| 6= 0,y1(n0 + τ) = α2y2(n0 + τ), with α2 = |H(eω1 )|
|H(eω2 )| , whereas, for |H(eω2)| = 0, one has 0 =y2(n0 + τ) = 0× y1(n0 + τ), yielding α1 = 0.
2.19 The two subsystems and the cascade are

H1(z) = κ1 z / (z + b)
H2(z) = κ2 z / (z − a)²
H(z) = H1(z) H2(z) = κ1 κ2 z² / [(z + b)(z − a)²]

(a) We want H(1) = 1. By choosing κ1 = 1 + b and κ2 = (1 − a)², the gain at DC is H(1) = (1 + b)(1 − a)² / [(1 + b)(1 − a)²] = 1.
(b) Let us start by removing the pole at −b from H(z) in order to describe it with a partial-fraction decomposition. The term associated with the pole at −b is

H′1(z) = κ1 κ2 b² / [(a + b)² (z + b)],

so that

H′2(z) = H(z) − H′1(z) = κ1 κ2 [(a + b)² z² − b² (z − a)²] / [(a + b)² (z + b)(z − a)²]
= [κ1 κ2 / (a + b)²] [(2ab + a²) z² + 2ab² z − a² b²] / [(z + b)(z − a)²]
= [κ1 κ2 / (a + b)²] [(2ab + a²) z − a² b] / (z − a)².

Therefore

H(z) = H′1(z) + H′2(z) = [κ1 κ2 / (a + b)²] { [(2ab + a²) z − a² b] / (z − a)² + b² / (z + b) }.

Using the fact that Z[−n a^{n−1} u(−n−1)] = z/(z − a)² for |z| < |a|, together with the time-shift property for the terms 1/(z − a)² (anticausal expansion) and b²/(z + b) (causal expansion, |z| > |b|), we obtain

h(n) = [κ1 κ2 / (a + b)²] [ −(2ab + a²) n a^{n−1} u(−n−1) + b (n−1) a^{n} u(−n) + (−b)^{n+1} u(n−1) ]    (2.4)
2.20
2.21 (a)

x(n) = sin(ω0 n + θ) u(n)

X(e^{jω}) = Σ_{n=−∞}^{∞} { [e^{j(ω0 n + θ)} − e^{−j(ω0 n + θ)}] / (2j) } e^{−jωn} u(n)

X(e^{jω}) = (e^{jθ}/2j) Σ_{n=0}^{∞} e^{−j(ω−ω0)n} − (e^{−jθ}/2j) Σ_{n=0}^{∞} e^{−j(ω+ω0)n}

Both summations shown above do not converge, since |e^{jα}| = 1 for all α ∈ ℝ, and so it should be clear that X(e^{jω}) cannot be expressed by an ordinary function. However, sequences expressed by sinusoids, for example, are so important in signal analysis that we usually make use of the distribution δ to express the Fourier transform of these signals, by using the following relations:

64 CHAPTER 2. THE Z AND FOURIER TRANSFORMS

Σ_{n=−∞}^{∞} e^{−jωn} = Σ_{k=−∞}^{∞} 2π δ(ω + 2πk)
Σ_{n=0}^{∞} e^{−jωn} = Σ_{k=−∞}^{∞} π δ(ω + 2πk)

Thus, we can develop the expression for X(e^{jω}) using the distribution δ:

X(e^{jω}) = (e^{jθ}/2j) Σ_{k=−∞}^{∞} π δ(ω − ω0 + 2πk) − (e^{−jθ}/2j) Σ_{k=−∞}^{∞} π δ(ω + ω0 + 2πk)

As one would expect, the Fourier transform of a sinusoid of frequency ω0 results in δ distributions centered at ±ω0, repeated periodically with period 2π.
(b)

x(n) = cos(ω0 n) u(n)

X(e^{jω}) = Σ_{n=−∞}^{∞} cos(ω0 n) u(n) e^{−jωn} = Σ_{n=0}^{∞} [ (e^{jω0 n} + e^{−jω0 n}) / 2 ] e^{−jωn}

X(e^{jω}) = (1/2) Σ_{n=0}^{∞} e^{−j(ω−ω0)n} + (1/2) Σ_{n=0}^{∞} e^{−j(ω+ω0)n}

Here we have the same problem as in the previous case: there is no ordinary function capable of expressing the Fourier transform of x(n), since the two summations above do not converge. As this sequence is also a sinusoid, we can again use the δ distribution to derive an expression for X(e^{jω}):

X(e^{jω}) = (1/2) Σ_{k=−∞}^{∞} π δ(ω − ω0 + 2πk) + (1/2) Σ_{k=−∞}^{∞} π δ(ω + ω0 + 2πk)
(c)

x(n) = δ(n−1) + 2δ(n−2) + 3δ(n−3) + 4δ(n−4)

X(e^{jω}) = e^{−jω} + 2e^{−j2ω} + 3e^{−j3ω} + 4e^{−j4ω}
= e^{−j5ω/2} [ e^{j3ω/2} + 2e^{jω/2} + 3e^{−jω/2} + 4e^{−j3ω/2} ]

X(e^{jω}) = e^{−j5ω/2} [ 5cos(3ω/2) + 5cos(ω/2) − j(3sin(3ω/2) + sin(ω/2)) ]
(d)

x(n) = a^{n} u(−n)

X(e^{jω}) = Σ_{n=−∞}^{0} a^{n} e^{−jωn} = Σ_{n=−∞}^{0} (a e^{−jω})^{n}; substituting m = −n,

X(e^{jω}) = Σ_{m=0}^{∞} (a^{−1} e^{jω})^{m} = 1 / (1 − a^{−1} e^{jω}), if |a^{−1}| < 1 ⇒ |a| > 1

The Fourier transform will exist if and only if |a| > 1, since the summation that was solved depends on this condition to converge.
(e)

x(n) = e^{−αn} u(n)

X(e^{jω}) = Σ_{n=0}^{∞} e^{−αn} e^{−jωn} = Σ_{n=0}^{∞} e^{−(α+jω)n}

X(e^{jω}) = 1 / (1 − e^{−(α+jω)}), if |e^{−(α+jω)}| < 1 ⇒ |e^{α}| > 1
(f)

x(n) = e^{−αn} sin(ω0 n) u(n)

X(e^{jω}) = Σ_{n=0}^{∞} e^{−αn} [ (e^{jω0 n} − e^{−jω0 n}) / (2j) ] e^{−jωn}
= (1/2j) Σ_{n=0}^{∞} e^{−[α+j(ω−ω0)]n} − (1/2j) Σ_{n=0}^{∞} e^{−[α+j(ω+ω0)]n}
= (1/2j) { 1/(1 − e^{−[α+j(ω−ω0)]}) − 1/(1 − e^{−[α+j(ω+ω0)]}) }, for |e^{−α}| < 1.

Combining the two fractions,

X(e^{jω}) = e^{−(α+jω)} sin(ω0) / [ 1 − 2 e^{−(α+jω)} cos(ω0) + e^{−2(α+jω)} ]

The Fourier transform will exist if and only if |e^{−α}| < 1, since the summations that were solved depend on this condition to converge.
(g)

X(e^{jω}) = Σ_{n=−∞}^{∞} x(n) e^{−jωn}

Here Σ_{n=0}^{∞} |x(n)| = Σ_{n=0}^{∞} n², which diverges; hence X(e^{jω}) does not exist.
2.22 Rigorously speaking, it is not possible to compute the inverse Fourier transform of X(e^{jω}) = 1/(1 − e^{−jω}). Indeed, we need X(e^{jω}) to be well defined (even though it may have some discontinuities) for all ω between −π and π. However, X(e^{jω}) is not defined for ω = 0.

Nevertheless, let us be less rigorous by assuming that there exists a sequence x(n) whose Fourier transform is X(e^{jω}) = 1/(1 − e^{−jω}). Thus, in this hypothetical case, one has that the Fourier transform of x(n) − x(n−1) is given by

F{x(n) − x(n−1)} = X(e^{jω}) − e^{−jω} X(e^{jω}) = 1/(1 − e^{−jω}) − e^{−jω}/(1 − e^{−jω}) = (1 − e^{−jω})/(1 − e^{−jω}) = 1.

As the inverse Fourier transform of 1 is

F^{−1}{1} = (1/2π) ∫_{−π}^{π} e^{jωn} dω = { 1, for n = 0; 0, otherwise } = δ(n),

then we have

x(n) − x(n−1) = F^{−1}{F{x(n) − x(n−1)}} = F^{−1}{1} = δ(n).

Thus, if a hypothetical sequence x(n) has Fourier transform X(e^{jω}) = 1/(1 − e^{−jω}), then we necessarily must have

x(n) = u(n) + c,

where c is a constant scalar.

On the other hand, by replacing e^{jω} by z = ρ e^{jω}, with ρ > 1, we know that the inverse Z transform of X(z) = 1/(1 − z^{−1}), with |z| > 1, is u(n). Thus, the inverse Z transform of X(z) does not depend on ρ > 1, which implies that

lim_{ρ→1} [ Z^{−1}{ 1/(1 − z^{−1}) } ] = lim_{ρ→1} u(n) = u(n).

If we want to preserve the correspondence between X(z) and X(e^{jω}), we can define by continuity that the inverse Fourier transform of X(e^{jω}) is given by

x(n) = u(n) + 0 = lim_{ρ→1} [ Z^{−1}{X(z)} ].

2.23
h(n) = (1/2π) ∫_{−π}^{π} H(e^{jω}) e^{jωn} dω
= (1/2π) [ ∫_{−π}^{0} j e^{jωn} dω + ∫_{0}^{π} (−j) e^{jωn} dω ]
= (1/2π) [ (1/n)(1 − e^{−jπn}) − (1/n)(e^{jπn} − 1) ]
= (1/2πn) [ 2 − e^{−jπn} − e^{jπn} ]
= 1/(πn) − (1/πn) cos(πn)

h(n) = (1/πn) [1 − (−1)^{n}], n ≠ 0

h(0) = (1/2π) ∫_{−π}^{π} H(e^{jω}) dω = 0

h(n) = { 0, n even; 2/(πn), n odd }
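The impulse response above can be checked numerically. The sketch below (not from the original solution) approximates the inversion integral, with H(e^{jω}) = j on (−π, 0) and −j on (0, π), by a trapezoidal sum and compares it against the closed form:

```python
import numpy as np

w_neg = np.linspace(-np.pi, 0, 20001)
w_pos = np.linspace(0, np.pi, 20001)

def trap(f, x):
    # simple trapezoidal rule
    return np.sum((f[:-1] + f[1:]) * np.diff(x)) / 2

def h_numeric(n):
    f1 = 1j * np.exp(1j * w_neg * n)     # j * e^{jwn} on (-pi, 0)
    f2 = -1j * np.exp(1j * w_pos * n)    # -j * e^{jwn} on (0, pi)
    return (trap(f1, w_neg) + trap(f2, w_pos)).real / (2 * np.pi)

def h_closed(n):
    return 0.0 if n % 2 == 0 else 2.0 / (np.pi * n)

checks = [np.isclose(h_numeric(n), h_closed(n), atol=1e-6) for n in range(-5, 6)]
```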
(a)

Σ_{n=−∞}^{∞} |h(n)| = Σ_{m=−∞}^{∞} | 2/(π(2m−1)) |

Σ_{m=−∞}^{∞} | 2/(π(2m−1)) | > Σ_{m=1}^{∞} 2/(π·2m) = (1/π) Σ_{m=1}^{∞} 1/m

The summation on the right-hand side is the harmonic series, which is known to diverge. Hence Σ_{n=−∞}^{∞} |h(n)| does not converge either.
(b)

H(z) = Σ_{n=−∞}^{∞} h(n) z^{−n} = Σ_{n≠0} (1/πn) [1 − (−1)^{n}] z^{−n}
= (1/π) Σ_{n≠0} z^{−n}/n − (1/π) Σ_{n≠0} (−z)^{−n}/n
= (1/π) Σ_{n=−∞}^{−1} z^{−n}/n + (1/π) Σ_{n=1}^{∞} z^{−n}/n − (1/π) Σ_{n=−∞}^{−1} (−z)^{−n}/n − (1/π) Σ_{n=1}^{∞} (−z)^{−n}/n

The sums over n ≤ −1 converge if and only if |z^{−1}| > 1 ⇒ |z| < 1, whereas the sums over n ≥ 1 converge only if |z^{−1}| < 1 ⇒ |z| > 1. Thus, H(z) does not converge for any value of z, and so H(z) does not exist.
2.24
2.25 x(n) is a real sequence (x(n) = x*(n)), which implies X(e^{jω}) = X*(e^{−jω}).

(a) Re{X(e^{jω})} = Re{X(e^{−jω})}:

Re{X(e^{jω})} = Re{X*(e^{−jω})} = Re{X(e^{−jω})}

(b) Im{X(e^{jω})} = −Im{X(e^{−jω})}:

Im{X(e^{jω})} = Im{X*(e^{−jω})} = −Im{X(e^{−jω})}

(c) |X(e^{jω})| = |X(e^{−jω})|:

X(e^{jω}) = |X(e^{jω})| e^{j∠X(e^{jω})}
X(e^{−jω}) = |X(e^{−jω})| e^{j∠X(e^{−jω})}

But we know in advance that if x(n) is real then X(e^{jω}) = X*(e^{−jω}); hence

|X(e^{jω})| e^{j∠X(e^{jω})} = |X(e^{−jω})| e^{−j∠X(e^{−jω})}.

For both complex values to be equal, their moduli and phases must be the same, which implies

|X(e^{jω})| = |X(e^{−jω})|
∠X(e^{jω}) = −∠X(e^{−jω})

(d) ∠X(e^{jω}) = −∠X(e^{−jω}): this was proved in the item above; here we show another method:

X(e^{jω}) = X*(e^{−jω})
∠X(e^{jω}) = ∠X*(e^{−jω}) = −∠X(e^{−jω})
2.26 x(n) is an imaginary sequence: Re{x(n)} = 0. Since

Re{x(n)} ←→ (1/2)[X(e^{jω}) + X*(e^{−jω})],

we have X(e^{jω}) + X*(e^{−jω}) = 0, that is, X(e^{jω}) = −X*(e^{−jω}).

(a) Re{X(e^{jω})} = Re{−X*(e^{−jω})} = −Re{X*(e^{−jω})} = −Re{X(e^{−jω})}

(b) Im{X(e^{jω})} = Im{−X*(e^{−jω})} = −Im{X*(e^{−jω})} = Im{X(e^{−jω})}

(c) |X(e^{jω})| = |−X*(e^{−jω})| = |X*(e^{−jω})| = |X(e^{−jω})|

(d) ∠X(e^{jω}) = ∠[−X*(e^{−jω})] = ∠[e^{jπ} X*(e^{−jω})] = π + ∠X*(e^{−jω}) = π − ∠X(e^{−jω})
2.27 The convolution property for the Fourier transform states that

x1(n) * x2(n) ←→ X1(e^{jω}) X2(e^{jω}).

From the above equation,

x1(n) * x2(−n) ←→ X1(e^{jω}) X2(e^{−jω})

Σ_{l=−∞}^{∞} [ Σ_{n=−∞}^{∞} x1(n) x2(n−l) ] e^{−jωl} = X1(e^{jω}) X2(e^{−jω})

Defining the cross-correlation r(l) = Σ_{n=−∞}^{∞} x1(n) x2(n+l), the inner summation above is r(−l), so

Σ_{l=−∞}^{∞} r(−l) e^{−jωl} = X1(e^{jω}) X2(e^{−jω})

r(−l) ←→ X1(e^{jω}) X2(e^{−jω})
r(l) ←→ X1(e^{−jω}) X2(e^{jω})
2.28 If x(n) is an imaginary and odd sequence, the conditions are:

x(n) = −x(−n) (hence x(0) = 0)
Re{x(n)} = 0

The Fourier transform is given by

X(e^{jω}) = Σ_{n=−∞}^{∞} x(n) e^{−jωn}
= Σ_{n=−∞}^{−1} x(n) e^{−jωn} + x(0) e^{−jω×0} + Σ_{n=1}^{∞} x(n) e^{−jωn}
= Σ_{n=1}^{∞} [−x(n)] e^{jωn} + Σ_{n=1}^{∞} x(n) e^{−jωn}
= Σ_{n=1}^{∞} x(n) [e^{−jωn} − e^{jωn}]

But e^{−jωn} − e^{jωn} = −2j sin(ωn) and x(n) = j Im{x(n)}, so

X(e^{jω}) = −2j Σ_{n=1}^{∞} x(n) sin(ωn) = −2j Σ_{n=1}^{∞} j Im{x(n)} sin(ωn)

X(e^{jω}) = 2 Σ_{n=1}^{∞} Im{x(n)} sin(ωn)

Hence, as X(e^{jω}) is a summation of real values, X(e^{jω}) is real. Moreover,

X(e^{−jω}) = 2 Σ_{n=1}^{∞} Im{x(n)} sin(−ωn) = −X(e^{jω}),

since sin(−a) = −sin(a); therefore X(e^{jω}) is odd.
2.29

x(n) = −x*(−n)
X(e^{jω}) = −X*(e^{jω})
X(e^{jω}) + X*(e^{jω}) = 0
2 Re{X(e^{jω})} = 0 ⇒ Re{X(e^{jω})} = 0

Thus, X(e^{jω}) is a purely imaginary function.
2.30

F{E[x(n)]} = F{x(n)/2} + F{x*(−n)/2} = (1/2) X(e^{jω}) + (1/2) X*(e^{jω}) = [X(e^{jω}) + X*(e^{jω})] / 2

F{E[x(n)]} = Re{X(e^{jω})}

F{O[x(n)]} = F{x(n)/2} − F{x*(−n)/2} = (1/2) X(e^{jω}) − (1/2) X*(e^{jω}) = [X(e^{jω}) − X*(e^{jω})] / 2

F{O[x(n)]} = j Im{X(e^{jω})}
2.31 We have that the system transfer function is H(z) = Y(z)/X(z) = 1 + z^{−2}. In addition, we also have

x(n) = xa(nT) = xa(n/4000) = 3 cos(2π·1000·n/4000) + 7 sin(2π·1100·n/4000)

x(n) = 3 cos(πn/2) + 7 sin(11πn/20)

As the system is linear, the output y(n) associated with x(n) is given by

y(n) = 3 |H(e^{jπ/2})| cos[πn/2 + Θ(π/2)] + 7 |H(e^{j11π/20})| sin[11πn/20 + Θ(11π/20)],

in which H(z)|_{z=e^{jω}} = H(e^{jω}) = |H(e^{jω})| e^{jΘ(ω)}. For the H(z) of this particular system, we have

H(e^{jω}) = 1 + e^{−2jω} = e^{−jω}(e^{jω} + e^{−jω}) = 2 cos(ω) e^{−jω}.

One, therefore, has

H(e^{jπ/2}) = 2 cos(π/2) e^{−jπ/2} = 2 × 0 × (−j) = 0

H(e^{j11π/20}) = 2 cos(11π/20) e^{−j11π/20} = 2 cos(π/20 + π/2) e^{−j11π/20}
= −2 sin(π/20) e^{−j11π/20} = 2 sin(π/20) e^{−j(11π/20 − π)}
= 2 sin(π/20) e^{j9π/20} ≈ 0.3129 e^{j9π/20}

By applying this result, we get

y(n) ≈ 0 + 7 × 0.3129 sin(11πn/20 + 9π/20) ≈ 2.1901 sin(11πn/20 + 9π/20)

This way, one has

ya(t) ≈ Σ_{n=−∞}^{∞} 2.1901 sin(11πn/20 + 9π/20) · sin[π(4000t − n)] / [π(4000t − n)].

We know that this is equivalent to filtering the signal yi(t) = Σ_{n=−∞}^{∞} y(n) δ(t − nT) with an ideal lowpass filter having cutoff frequency fs/2 = 2 kHz. As the discrete-time Fourier transform of y(n) is a pair of impulses at ±11π/20 (considering ω between −π and π), the Fourier transform of yi(t) is a pair of impulses at ±(11π/20) × 4000 = ±2π·1100 (considering Ω between −Ωs/2 and Ωs/2 = 2π × 2000). We, therefore, have that the Fourier transform of ya(t) consists of two impulses at ±2π·1100 for all Ω. Considering the exact values of the amplitudes of the impulses, we get

ya(t) ≈ 2.1901 sin(2π·1100 t + 9π/20).

Note that, as 1100 < 4000/2, there is no aliasing when we sample ya(t) in order to generate y(n) = ya(n/4000).

The effect this processing has on the input signal is the cancellation of the component whose frequency was 1000 Hz, and the attenuation of the other component (frequency of 1100 Hz), which also suffered a phase delay.
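The cancellation and attenuation above can be checked by direct simulation. The sketch below (not part of the original solution) implements y(n) = x(n) + x(n−2) and verifies that the π/2 component vanishes while the surviving sinusoid has amplitude ≈ 2.1901:

```python
import numpy as np

n = np.arange(4000)
x = 3*np.cos(np.pi*n/2) + 7*np.sin(11*np.pi*n/20)
y = x.copy()
y[2:] += x[:-2]                       # y(n) = x(n) + x(n-2), zero initial conditions
# Complex projection onto exp(j*pi*n/2): measures the residual 1000 Hz content.
m = np.arange(40, 4000)               # skip the start-up transient; 3960 = 99 periods
proj = abs(np.mean(y[40:] * np.exp(-1j*np.pi*m/2)))
# Peak amplitude of the remaining sinusoid (sampled on a 40-point phase grid).
amp = (y[100:].max() - y[100:].min()) / 2
```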
2.32

F^{−1}{ Σ_{k=−∞}^{∞} δ(ω − (2π/N)k) } = (1/2π) ∫_{−π}^{π} Σ_{k=−∞}^{∞} δ(ω − (2π/N)k) e^{jωn} dω
= (1/2π) Σ_{k=−∞}^{∞} ∫_{−π}^{π} δ(ω − (2π/N)k) e^{jωn} dω

Only the impulses with −N/2 ≤ k ≤ N/2 − 1 fall inside the integration interval, each contributing (1/2π) e^{j(2π/N)kn}; hence

F^{−1}{ Σ_{k=−∞}^{∞} δ(ω − (2π/N)k) } = (1/2π) Σ_{k=−N/2}^{N/2−1} e^{j(2π/N)kn} = (1/2π) Σ_{k=0}^{N−1} e^{j(2π/N)kn}

Thus, unless n = pN, p ∈ Z, we have a summation of values uniformly distributed over the complex unit circle, which is equal to zero. Instead, if n = pN, p ∈ Z:

F^{−1}{ Σ_{k=−∞}^{∞} δ(ω − (2π/N)k) } = (1/2π) Σ_{k=0}^{N−1} e^{j(2π/N)kpN} = (1/2π) Σ_{k=0}^{N−1} 1 = N/(2π)

Therefore

F^{−1}{ Σ_{k=−∞}^{∞} δ(ω − (2π/N)k) } = (N/2π) Σ_{p=−∞}^{∞} δ(n − Np)
2.33
Chapter 3
DISCRETE TRANSFORMS
3.1 (a) As there are no significant frequency components beyond 5 GHz, one can use an A/D converter with a sampling frequency of 10 GHz (Nyquist theorem).

(b) In order for the computer to measure the frequency content of the fast pulse x(t), one can use the discrete Fourier transform (with N points), after sampling the pulse. In addition, as the pulse duration is approximately 1 ns and the sampling frequency is 10 GHz, the sampling process will generate around (1 ns) × (10 GHz) = 10 samples. Since one needs to discriminate frequency components spaced 10 MHz apart, the size N of the DFT must respect the following proportion:

2π ←→ 10 GHz
2π/N ←→ 10 MHz,

yielding N = 1,000. One, therefore, must pad around 990 zeros onto the output signal of the data acquisition system before the application of the DFT.

From the above discussion, we can describe the measurement procedure as follows:

i. Sample the signal using an A/D converter with sampling rate 10 GHz. This process will generate around 10 points from the fast pulse;
ii. Include 990 zeros at the end of the aforementioned 10 points, generating a discrete-time signal with N = 1,000 points;
iii. Apply the DFT to the resulting zero-padded signal. This will generate a frequency-domain signal that discriminates frequency components spaced 10 MHz apart.
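The three steps above can be sketched numerically (the all-ones pulse is a stand-in for the unspecified 1 ns pulse shape; the point is only the bin spacing):

```python
import numpy as np

fs = 10e9                      # sampling rate: 10 GHz
pulse = np.ones(10)            # ~1 ns pulse sampled at 10 GHz -> 10 samples
padded = np.concatenate((pulse, np.zeros(990)))   # zero-pad to N = 1000
N = padded.size
bin_spacing = fs / N           # frequency resolution of the DFT
X = np.fft.fft(padded)
```

With N = 1000 the DFT bins are spaced fs/N = 10 MHz apart, as required.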
3.2

Σ_{n=0}^{N−1} W_N^{nk} = Σ_{n=0}^{N−1} e^{−j(2π/N)nk} = [1 − e^{−j(2π/N)kN}] / [1 − e^{−j(2π/N)k}]

• If k ≠ mN, m ∈ Z:

Σ_{n=0}^{N−1} W_N^{nk} = 0 / [1 − e^{−j(2π/N)k}] = 0

• If k = mN, m ∈ Z, the ratio above is indeterminate (0/0) and must be evaluated as a limit:

Σ_{n=0}^{N−1} W_N^{nk} = lim_{k→mN} [−j2π e^{−j2πk}] / [−j(2π/N) e^{−j(2π/N)k}] = 2π / (2π/N) = N

Hence Σ_{n=0}^{N−1} W_N^{nk} = N for k = mN.
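The orthogonality relation above is easy to confirm numerically (a sketch, not part of the original solution):

```python
import numpy as np

# sum_{n=0}^{N-1} W_N^{nk} equals N when k is a multiple of N, and 0 otherwise.
N = 8
n = np.arange(N)

def twiddle_sum(k):
    return np.exp(-2j * np.pi * n * k / N).sum()

vals = {k: twiddle_sum(k) for k in range(-N, 2 * N + 1)}
```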
3.3 Substituting

x(n) = (1/N) Σ_{k=0}^{N−1} X(k) W_N^{−kn}

into X(l) = Σ_{n=0}^{N−1} x(n) W_N^{ln}, we obtain

X(l) = Σ_{n=0}^{N−1} (1/N) Σ_{k=0}^{N−1} X(k) W_N^{−kn} W_N^{ln}
= (1/N) Σ_{k=0}^{N−1} X(k) Σ_{n=0}^{N−1} W_N^{(l−k)n}
= (1/N) Σ_{k=0}^{N−1} X(k) N δ(l−k)
= Σ_{k=0}^{N−1} X(k) δ(l−k)
= X(l)

Thus, it becomes clear that equations (3.15) and (3.16) of the textbook give the direct and inverse discrete Fourier transforms.
3.4 (a) W^{ln} x(n) ←→ X(k+l):

X′(k) = Σ_{n=0}^{N−1} W^{ln} x(n) W^{nk} = Σ_{n=0}^{N−1} x(n) W^{n(k+l)} = X(k+l)

By noting that

cos((2π/N)ln) = [e^{j(2π/N)ln} + e^{−j(2π/N)ln}] / 2 = W_N^{−ln}/2 + W_N^{ln}/2
sin((2π/N)ln) = [e^{j(2π/N)ln} − e^{−j(2π/N)ln}] / (2j) = W_N^{−ln}/(2j) − W_N^{ln}/(2j),

it becomes obvious that

x(n) cos((2π/N)ln) ←→ (1/2) [X(k−l) + X(k+l)]
x(n) sin((2π/N)ln) ←→ (1/2j) [X(k−l) − X(k+l)]
(b) Σ_{n=0}^{N−1} h(n) x(l+n) ←→ H(−k) X(k):

y(l) = Σ_{n=0}^{N−1} h(n) x(l−n) = h(l) ⊛ x(l)
c(l) = Σ_{n=0}^{N−1} h(n) x(l+n)
c(−l) = Σ_{n=0}^{N−1} h(n) x(−l+n) = h(l) ⊛ x(−l)
C(−k) = H(k) X(−k)
C(k) = H(−k) X(k)
(c) Σ_{n=0}^{N−1} |x(n)|² = (1/N) Σ_{k=0}^{N−1} |X(k)|²:

Σ_{n=0}^{N−1} |x(n)|² = Σ_{n=0}^{N−1} x(n) x*(n)

x(n) = (1/N) Σ_{k=0}^{N−1} X(k) W_N^{−kn}
x*(n) = (1/N) Σ_{k=0}^{N−1} X*(k) W_N^{kn}

Σ_{n=0}^{N−1} |x(n)|² = Σ_{n=0}^{N−1} [ (1/N) Σ_{k=0}^{N−1} X(k) W_N^{−kn} ] [ (1/N) Σ_{r=0}^{N−1} X*(r) W_N^{rn} ]
= (1/N²) Σ_{n=0}^{N−1} Σ_{k=0}^{N−1} Σ_{r=0}^{N−1} X(k) X*(r) W_N^{n(r−k)}
= (1/N²) Σ_{k=0}^{N−1} Σ_{r=0}^{N−1} X(k) X*(r) Σ_{n=0}^{N−1} W_N^{n(r−k)}
= (1/N²) Σ_{k=0}^{N−1} Σ_{r=0}^{N−1} X(k) X*(r) N δ(r−k)

Σ_{n=0}^{N−1} |x(n)|² = (1/N) Σ_{k=0}^{N−1} |X(k)|²
(d) • If x(n) is a real sequence:

x(n) = x*(n) ⇒ X(k) = X*(−k)
Re{X(k)} = Re{X*(−k)} = Re{X(−k)}
Im{X(k)} = Im{X*(−k)} = −Im{X(−k)}

• If x(n) is an imaginary sequence:

x(n) = −x*(n) ⇒ X(k) = −X*(−k)
Re{X(k)} = Re{−X*(−k)} = −Re{X(−k)}
Im{X(k)} = Im{−X*(−k)} = Im{X(−k)}
3.5 (a)

X1(k) = [1 1 1 1 1 1 1 1] ⇒ x1(n) = δ(n)
X2(k) = [8 0 0 0 0 0 0 0] = 8δ(k), 0 ≤ k ≤ 7 ⇒ x2(n) = [1 1 1 1 1 1 1 1]
X3(k) = [0 0 0 8 0 0 0 0] = 8δ(k−3) ⇒ x3(n) = W_8^{−3n}

x(n) = x1(n) + x2(n) + x3(n) = δ(n) + 1 + W_8^{−3n},

that is,

x(n) = [3, 1 + W_8^{−3}, 1 + W_8^{−6}, 1 + W_8^{−9}, 1 + W_8^{−12}, 1 + W_8^{−15}, 1 + W_8^{−18}, 1 + W_8^{−21}]^T

(since W_8^{0} = 1, the first entry is 2 + 1 = 3).

(b) A circular shift of four samples gives

y(n) = x(n+4) = [x(4), x(5), x(6), x(7), x(0), x(1), x(2), x(3)]^T,    (3.1)

i.e., y(n) = δ(n−4) + 1 + W_8^{−3(n+4)} = δ(n−4) + 1 − W_8^{−3n}, since W_8^{−12} = −1.
3.6 (a) X(k) = 2δ(k) + 2 + 2δ(k−7), 0 ≤ k ≤ 7.

x1(n) ←→ 2δ(k): x1(n) = 2/8 = 1/4
x2(n) ←→ 2: x2(n) = 2δ(n)
x3(n) ←→ 2δ(k−7): x3(n) = (1/4) W_8^{−7n}

x(n) = 2δ(n) + 1/4 + (1/4) W_8^{−7n} = 2δ(n) + (1/4)(1 + W_8^{n}),

since W_8^{−7n} = W_8^{n}. Hence x(0) = 2 + 1/2 = 5/2 and x(n) = (1/4)(1 + W_8^{n}) for 1 ≤ n ≤ 7; writing x(n) = [a b1 b2 b3 b4 b5 b6 b7]^T with a = 5/2 and bn = (1/4)(1 + W_8^{n}):

(b) y(n) = x(n−3) = [b5 b6 b7 a b1 b2 b3 b4]^T, that is, y(n) = 2δ(n−3) + (1/4)(1 + W_8^{n−3}).

3.7 (a)
]3.7 (a)
X(k) = 1 + 2e−2π4 jk − e−πjk + e−
6π4 jk = 1 + 2e−
π2 jk + e−πjk + e−
3π2 jk
X(0) = 3
X(1) = 1− 2j + 1 + j = 2− j
X(2) = 1− 2− 1− 1 = −3
X(3) = 1 + 2j + 1− j = 2 + j
(b)
X(k) = 1 + 2e2π6 jk − e− 4π
6 jk + e−6π6 jk
= 1 + 2e−π3 jk − e− 2π
3 jk + e−πjk
Y (k) = −2j sin(π
3k)
+ j sin(
2π3k
)
= 2j sin(π
3k)− j2 sin
(π3k)− cos
(πk
3
)= 2j sin
(π3k)(
1− cos(π
3k))
Im [X(k)] =12
[X(k)−X∗(k)]
y(n) =12IDFT [X(k)]− IDFT [X∗(k)]
78 CHAPTER 3. DISCRETE TRANSFORMS
=12
[x(n)− x∗(−n)] =12
[x(n)− x∗(−n)]
y(0) =12
[x(0)− x(0)] = 0
y(1) =12
[x(1)− x(5)] = 0
y(2) =12
[x(2)− x(4)] = −12
y(3) =12
[x(3)− x(3)] = 0
y(4) =12
[x(4)− x(2)] =12
y(5) =12
[x(5)− x(1)] = −1
(c)
y(n) =12
[x(n)− x∗(−n)]
y(0) = 0
y(1) =12
y(2) = 0
y(3) = −12
3.8 The sequences x1(n) and x2(n) are real and even, and the sequences x3(n) and x4(n) are real and odd.

Let us form the sequence

x(n) = x1(n) + x3(n) + j [x2(n) + x4(n)].

Then, by calculating

X(k) = DFT[x(n)],

one can obtain the desired DFTs from the even and odd parts of both the real and imaginary parts of X(k). Indeed, X1(k) is real and even, X2(k) is real and even (it enters X(k) as jX2(k)), X3(k) = jS3(k) with S3(k) real and odd, and X4(k) = jS4(k) with S4(k) real and odd (it enters X(k) as jX4(k) = −S4(k)); hence Re{X(k)} = X1(k) − S4(k) and Im{X(k)} = X2(k) + S3(k), and taking even and odd parts in k yields

X1(k) = (1/2) [Re{X(k)} + Re{X(−k)}]
X2(k) = (1/2) [Im{X(k)} + Im{X(−k)}]
X3(k) = (j/2) [Im{X(k)} − Im{X(−k)}]
X4(k) = −(j/2) [Re{X(k)} − Re{X(−k)}]
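The four recovery formulas above (with the j factors restored) can be verified numerically. The sketch below packs four random real even/odd sequences into one complex DFT and unpacks their individual DFTs:

```python
import numpy as np

N = 8
rng = np.random.default_rng(2)

def rand_even():
    s = rng.standard_normal(N)
    return (s + np.roll(s[::-1], 1)) / 2     # s(n) = s(N - n)

def rand_odd():
    s = rng.standard_normal(N)
    return (s - np.roll(s[::-1], 1)) / 2     # s(n) = -s(N - n), s(0) = 0

x1, x2 = rand_even(), rand_even()
x3, x4 = rand_odd(), rand_odd()
X = np.fft.fft(x1 + x3 + 1j * (x2 + x4))
Xr, Xi = X.real, X.imag
rev = lambda A: np.roll(A[::-1], 1)          # A(-k) with modulo-N indexing
X1 = (Xr + rev(Xr)) / 2
X2 = (Xi + rev(Xi)) / 2
X3 = 1j * (Xi - rev(Xi)) / 2
X4 = -1j * (Xr - rev(Xr)) / 2
```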
3.9 (a) Two even length-N sequences:

x1(n) = x1(N−n)
x2(n) = x2(N−n)

y(n) = W_N^{n} x1(n) + x2(n)
Y(k) = F{W_N^{n} x1(n)} + X2(k) = X1(k+1) + X2(k)
Y(−k) = X1(−k+1) + X2(−k)
Y(−k−1) = X1(−k) + X2(−k−1)

(b) Since x1(n) and x2(n) are even sequences, X1(k) and X2(k) will also be even, so

Y(−k−1) = X1(k) + X2(k+1).

Starting from

X1(0) = Σ_{n=0}^{N−1} x1(n), X2(0) = Σ_{n=0}^{N−1} x2(n),

both DFTs can be computed recursively from the single DFT Y(k):

X1(k+1) = Y(k) − X2(k)
X2(k+1) = Y(−k−1) − X1(k)
3.10 (a) Two odd length-N sequences:

x1(n) = −x1(−n) = −x1(N−n)
x2(n) = −x2(−n) = −x2(N−n)

y(n) = W_N^{n} x1(n) + x2(n)
Y(k) = F{W_N^{n} x1(n)} + X2(k) = X1(k+1) + X2(k)
Y(−k) = X1(−k+1) + X2(−k)
Y(−k−1) = X1(−k) + X2(−k−1)

(b) Since x1(n) and x2(n) are odd sequences, X1(k) and X2(k) will also be odd, so

Y(−k−1) = −X1(k) − X2(k+1).

Starting from X1(0) = Σ_{n=0}^{N−1} x1(n) and X2(0) = Σ_{n=0}^{N−1} x2(n), the recursions

X1(k+1) = Y(k) − X2(k)
X2(k+1) = −Y(−k−1) − X1(k)

give both DFTs from the single DFT Y(k).
3.11 Four real and even sequences:

xi(n) = xi(−n) = xi*(n) = xi*(−n), i = 1, 2, 3, 4
Xi(k) = Xi(−k) = Xi*(k) = Xi*(−k), i = 1, 2, 3, 4

Form the two auxiliary sequences

xa(n) = x1(n) + j x2(n), xb(n) = x3(n) + j x4(n)
Xa(k) = X1(k) + j X2(k), Xb(k) = X3(k) + j X4(k)

and, as in Exercise 3.9, y(n) = W_N^{n} xa(n) + xb(n), leading to

Xa(k+1) = Y(k) − Xb(k)
Xb(k+1) = Y(−k−1) − Xa(k)

with

Xa(0) = Σ_{n=0}^{N−1} [x1(n) + j x2(n)], Xb(0) = Σ_{n=0}^{N−1} [x3(n) + j x4(n)].

Finally, since each Xi(k) is real,

X1(k) = Re{Xa(k)}, X2(k) = Im{Xa(k)}, X3(k) = Re{Xb(k)}, X4(k) = Im{Xb(k)}.
3.12 We can derive the coefficients of the Fourier series expansion of the periodic sequences by using the relation stated in equation (3.6) of the textbook, repeated below:

X(k) = (2π/N) X′(k),

where X′(k) is the length-N DFT of one period x′(n). Then:

(a)

x′(n) = [e^{j(2π/N)n} − e^{−j(2π/N)n}] / (2j) = [W_N^{−n} − W_N^{n}] / (2j)

X′(k) = Σ_{n=0}^{N−1} { [W_N^{−n} − W_N^{n}] / (2j) } W_N^{kn}
= (1/2j) Σ_{n=0}^{N−1} W_N^{n(k−1)} − (1/2j) Σ_{n=0}^{N−1} W_N^{n(k+1)}
= (1/2j) [1 − W_N^{N(k−1)}] / [1 − W_N^{k−1}] − (1/2j) [1 − W_N^{N(k+1)}] / [1 − W_N^{k+1}]

X′(k) = (N/2j) δ(k−1) − (N/2j) δ(k+1−N)

X(k) = (2π/N) X′(k) = (π/j) δ(k−1) − (π/j) δ(k+1−N)

X(k) = { −jπ, k = 1; jπ, k = N−1; 0, otherwise }

(b)

x′(n) = (−1)^{n} = cos(πn) = [e^{jπn} + e^{−jπn}] / 2 = [W_N^{−nN/2} + W_N^{nN/2}] / 2

X′(k) = (1/2) Σ_{n=0}^{N−1} W_N^{n(k−N/2)} + (1/2) Σ_{n=0}^{N−1} W_N^{n(k+N/2)}
= (1/2) [1 − W_N^{N(k−N/2)}] / [1 − W_N^{k−N/2}] + (1/2) [1 − W_N^{N(k+N/2)}] / [1 − W_N^{k+N/2}]

Since W_N^{k±N/2} = −e^{−j(2π/N)k} and W_N^{N(k±N/2)} = (−1)^{N}, this reduces to

X′(k) = [1 − (−1)^{N}] / [1 + e^{−j(2π/N)k}].

• If N is even, the numerator vanishes wherever the geometric-sum formula applies; the formula breaks down only at k = N/2, where every term of both summations equals 1, giving X′(N/2) = N. Hence

X′(k) = N δ(k − N/2) and X(k) = 2π δ(k − N/2).

• If N is odd:

X(k) = (2π/N) · 2 / [1 + e^{−j(2π/N)k}].
3.13 (a)

x(n) = 2 cos(πn/N) + 1/2 − (1/2) cos(2πn/N)

x(n) = W_{2N}^{−n} + W_{2N}^{n} + 1/2 − W_N^{−n}/4 − W_N^{n}/4

X(k) = Σ_{n=0}^{N−1} x(n) W_N^{kn}
= Σ_{n=0}^{N−1} W_N^{n(k−1/2)} + Σ_{n=0}^{N−1} W_N^{n(k+1/2)} + (1/2) Σ_{n=0}^{N−1} W_N^{nk} − (1/4) Σ_{n=0}^{N−1} W_N^{n(k−1)} − (1/4) Σ_{n=0}^{N−1} W_N^{n(k+1)}
= [1 − W_N^{N(k−1/2)}] / [1 − W_N^{k−1/2}] + [1 − W_N^{N(k+1/2)}] / [1 − W_N^{k+1/2}] + (N/2) δ(k) − (N/4) δ(k−1) − (N/4) δ(k−N+1)
= 2 / [1 − e^{−j2πk/N} e^{jπ/N}] + 2 / [1 − e^{−j2πk/N} e^{−jπ/N}] + (N/2) δ(k) − (N/4) δ(k−1) − (N/4) δ(k−N+1)

With N = 11:

X(k) = 2 / [1 − e^{−j2πk/11} e^{jπ/11}] + 2 / [1 − e^{−j2πk/11} e^{−jπ/11}] + (11/2) δ(k) − (11/4) δ(k−1) − (11/4) δ(k−10)

The magnitude and phase of X(k) can be seen in Figure 3.1.

Figure 3.1: Magnitude and phase of X(k) for exercise 3.13a.
(b)

x(n) = e^{−2n}, N = 21

X(k) = Σ_{n=0}^{N−1} e^{−2n} W_N^{nk} = [1 − W_N^{Nk} e^{−2N}] / [1 − W_N^{k} e^{−2}]

X(k) = (1 − e^{−42}) / (1 − W_{21}^{k} e^{−2})

The magnitude and phase of X(k) can be seen in Figure 3.2.
(c)

x(n) = δ(n−1), N = 3

X(k) = Σ_{n=0}^{N−1} x(n) W_N^{nk} = W_3^{k} = e^{−j(2π/3)k}

The magnitude and phase of X(k) can be seen in Figure 3.3.
(d)

x(n) = n, N = 6

X(k) = Σ_{n=0}^{N−1} n W_N^{nk} = e^{−jπk/3} + 2e^{−j2πk/3} + 3e^{−jπk} + 4e^{−j4πk/3} + 5e^{−j5πk/3}

The magnitude and phase of X(k) can be seen in Figure 3.4.
Figure 3.2: Magnitude and phase of X(k) for exercise 3.13b.
Figure 3.3: Magnitude and phase of X(k) for exercise 3.13c.
Figure 3.4: Magnitude and phase of X(k) for exercise 3.13d.
3.14 Both sequences are of length 3, and their linear convolution has length 3 + 3 − 1 = 5. Thus, we must append at least two zeros to each sequence in order to perform the linear convolution through the IDFT of the product of the DFTs of the two sequences; hence N = 5. Let us define x1(0) = 1.0, x1(1) = −1.0, x1(2) = −0.5, x1(3) = x1(4) = 0, and x2(0) = 1.0, x2(1) = −0.5, x2(2) = −1.0, x2(3) = x2(4) = 0. The length-5 DFTs of x1(n) and x2(n) are

X1(k) = Σ_{n=0}^{4} x1(n) e^{−j(2π/5)kn}:
X1(0) = −0.5
X1(1) ≈ 1.1 + 1.2j
X1(2) ≈ 1.7 + 0.1j
X1(3) ≈ 1.7 − 0.1j
X1(4) ≈ 1.1 − 1.2j

X2(k) = Σ_{n=0}^{4} x2(n) e^{−j(2π/5)kn}:
X2(0) = −0.5
X2(1) ≈ 1.7 + 1.1j
X2(2) ≈ 1.1 − 0.7j
X2(3) ≈ 1.1 + 0.7j
X2(4) ≈ 1.7 − 1.1j

Now, let us define X(k) = DFT{x(n)} = X1(k) X2(k). One, therefore, has

X(0) = X1(0) × X2(0) = 0.25
X(1) = X1(1) × X2(1) ≈ 0.49 + 3.22j
X(2) = X1(2) × X2(2) ≈ 1.89 − 0.96j
X(3) = X1(3) × X2(3) ≈ 1.89 + 0.96j
X(4) = X1(4) × X2(4) ≈ 0.49 − 3.22j

Thus, it follows that

x(n) = (1/5) Σ_{k=0}^{4} X(k) e^{j(2π/5)kn}:
x(0) = 1
x(1) = −1.5
x(2) = −1
x(3) = 1.25
x(4) = 0.5
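The computation above is easy to reproduce with an FFT (a sketch, not part of the original solution):

```python
import numpy as np

x1 = np.array([1.0, -1.0, -0.5, 0.0, 0.0])   # zero-padded to N = 5
x2 = np.array([1.0, -0.5, -1.0, 0.0, 0.0])
X = np.fft.fft(x1) * np.fft.fft(x2)
x = np.fft.ifft(X).real                       # circular = linear conv. here
direct = np.convolve([1.0, -1.0, -0.5], [1.0, -0.5, -1.0])
```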
3.15 First solution:

Since x1(n) starts with two zeros, we can work with x′1(n) = x1(n+2), whose nonzero part has length 3; its convolution with the length-2 sequence x2(n) fits in N = 3 + 2 − 1 = 4 points. As x1(n) = x′1(n−2) = x′1(n) * δ(n−2), we have y(n) = x′1(n) * x2(n) * δ(n−2). With y′(n) = x′1(n) * x2(n), Y′(k) = X′1(k) X2(k). Using the 4-point DFT matrix F4 (whose entries are powers of W_4 = −j),

X′1(k) = F4 [1 1 1 0]^T = [3, −j, 1, j]^T
X2(k) = F4 [a a 0 0]^T = [2a, a − ja, 0, a + ja]^T

Y′(k) = X′1(k) X2(k) = [6a, −a − ja, 0, −a + ja]^T

y′(n) = IDFT{Y′(k)} = (1/4) [4a, 8a, 8a, 4a]^T

Therefore,

y′(n) = [a, 2a, 2a, a]^T ⇒ y(n) = y′(n−2) = [0, 0, a, 2a, 2a, a]^T

Second solution:

In this solution we aim at keeping the length N = 5 of the first sequence, so 5 will be the length of the DFT. If x′1(n) = x1(n+2), then X′1(k) = W_5^{−2k} X1(k), that is, X1(k) = W_5^{2k} X′1(k). Also, y(n) = x1(n) * x2(n) ⇒ Y(k) = W_5^{2k} X′1(k) X2(k) = W_5^{2k} Y′(k). Since

Y′(k) = [(1 − W_5^{3k}) / (1 − W_5^{k})] · a (1 − W_5^{2k}) / (1 − W_5^{k})
= a · [(1 − W_5^{k})(1 + W_5^{k} + W_5^{2k}) / (1 − W_5^{k})] · [(1 + W_5^{k})(1 − W_5^{k}) / (1 − W_5^{k})]
= a (1 + 2W_5^{k} + 2W_5^{2k} + W_5^{3k}),

we get Y(k) = a W_5^{2k} + 2a W_5^{3k} + 2a W_5^{4k} + a W_5^{5k}. As W_5^{5k} = W_5^{0·k}, the corresponding length-5 sequence is [a, 0, a, 2a, 2a]^T, in which the last sample of the linear convolution has wrapped around to position 0; unwrapping it,

y(n) = [0 0 a 2a 2a a]^T
3.16 (a) The z transforms of the sequences are given by

X(z) = z^{−2} [z² + az + a²/2]
H(z) = z^{−2} [z² − az + a²/2]

The z transform of the output signal is

Y(z) = H(z) X(z) = z^{−4} [z² − az + a²/2][z² + az + a²/2]
= z^{−4} [z⁴ + (2·(a²/2) − a²) z² + a⁴/4] = z^{−4} [z⁴ + a⁴/4],

so that the output sequence is

y(n) = [1, 0, 0, 0, a⁴/4]
(b) For a = 1, the input and impulse-response signals are depicted in Figure 3.5. The impulse response is divided into blocks of length 2, as shown in Figure 3.6. The corresponding sequences, reflected to length 4, are shown in Figure 3.7. The circular convolutions between the blocks of h(n) and the input sequence x(n) are shown in Figure 3.8, and the complete linear convolution is depicted in Figure 3.9.

Figure 3.5: Input and impulse response sequences.
Figure 3.6: First and second blocks of the impulse response sequences, h1(n) and h2(n), respectively.
Figure 3.7: Sequences h1(−n) and h2(−n).
Figure 3.8: Circular convolutions 1 and 2.
Figure 3.9: Overall linear convolution.
3.17

x(n) → length L
h(n) → length k

We call M, for the overlap-and-save method, the number of blocks, and so:

M ≈ L / (N − k + 1)

Now we calculate the total numbers of multiplications and additions required by this method:

Multiplications: N log₂N + MN log₂N + MN + MN log₂N
Additions: N log₂N + MN log₂N + MN log₂N

For the number of multiplications, the first term is due to the FFT of h(n), the second to the M FFTs of the blocks xm(n), the third to the M products of the length-N FFTs and, finally, the fourth to the M IFFTs of Ym(k). For the number of additions, the first term is due to the FFT of h(n), the second to the M FFTs of xm(n), and the third to the M IFFTs of Ym(k).

Thus, we have the total number of arithmetic operations:

O = 2N log₂N + 4MN log₂N + MN

By dividing the number of arithmetic operations by L and then differentiating with respect to N, we have that

∂(O/L)/∂N = [1/(N ln 2)] [2N/L + 4N/(N−k+1)] + log₂N [2/L − 4(k−1)/(N−k+1)²] − (k−1)/(N−k+1)²    (3.2)

The length L of the signal x(n) should be much greater than the length k of h(n). In fact, without loss of generality, we can see the signal x(n) as a very large sequence and, in order to approximate equation (3.2), we take the limit as L goes to infinity. Setting the result to zero, to minimize the number of arithmetic operations:

[1/(N ln 2)] · 4N/(N−k+1) − 4 log₂N (k−1)/(N−k+1)² − (k−1)/(N−k+1)² = 0    (3.3)

We also know that the block length N must be greater than k − 1, so we can multiply equation (3.3) by (N − k + 1)² and develop it a little bit to show that:

4N/ln 2 − 4k/ln 2 + 4/ln 2 − 4k lnN/ln 2 + 4 lnN/ln 2 − k + 1 = 0    (3.4)

We see that there is no analytic function that describes N as a function of k, but we can find k as a function of N and plot a graph relating these two parameters. Developing equation (3.4):

k = (4N + 4 + 4 lnN + ln 2) / (4 + 4 lnN + ln 2)

The graph that relates k and N is shown in Figure 3.10.
Figure 3.10: Relation between N and k at exercise 3.17.
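In practice, the optimal block size is usually found by a direct scan rather than by solving equation (3.4). The sketch below (not from the original solution) evaluates the L → ∞ per-sample cost derived above over power-of-two block sizes and picks the cheapest:

```python
import numpy as np

def ops_per_sample(N, k):
    # O/L as L -> infinity: (4 N log2 N + N) / (N - k + 1)
    return (4 * N * np.log2(N) + N) / (N - k + 1)

def best_block(k, max_pow=16):
    cands = [2 ** p for p in range(int(np.ceil(np.log2(k))) + 1, max_pow)]
    return min(cands, key=lambda N: ops_per_sample(N, k))

N_opt = best_block(64)   # optimal FFT block size for a length-64 filter
```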
3.18

x(n) → length L
h(n) → length k

For the overlap-and-add method, the number of blocks is equal to L/N, since the sequences xm(n) do not overlap; the FFTs, however, must have length N + k − 1.

Now we calculate the total numbers of multiplications and additions required by this method:

Multiplications: (N+k−1) log₂(N+k−1) + (L/N)(N+k−1) log₂(N+k−1) + (L/N)(N+k−1) + (L/N)(N+k−1) log₂(N+k−1)

Additions: (N+k−1) log₂(N+k−1) + (L/N)(N+k−1) log₂(N+k−1) + (L/N)(k−1) + (L/N)(N+k−1) log₂(N+k−1)

For the number of multiplications, the first term is due to the FFT of the filter, the second to the L/N FFTs of xm(n), the third to the L/N products of the length-(N+k−1) FFTs and, finally, the fourth to the L/N IFFTs of Ym(k). For the number of additions, the first term is due to the FFT of the filter, the second to the L/N FFTs of xm(n), the third to the L/N additions of the overlapped samples, and the fourth to the L/N IFFTs of Ym(k).

From now on, we define the parameter M as the block length for the FFTs, so that M = N + k − 1. Thus, we have the total number of arithmetic operations:

O = M log₂M [2 + 4L/(M−k+1)] + L (M+k−1)/(M−k+1)

By dividing the above equation by L, and then taking the limit as L → ∞, we have that:

O/L → M log₂M · 4/(M−k+1) + (M+k−1)/(M−k+1)

Since the length of the FFT blocks is M, we differentiate the number of arithmetic operations with respect to M and set the result to zero:

[log₂M + 1/ln 2] · 4/(M−k+1) − 4M log₂M/(M−k+1)² − 2(k−1)/(M−k+1)² = 0

By developing the above equation, we conclude that there is no analytic function that describes M as a function of k, just like in the previous exercise. Instead, we can describe k as a function of M and plot a graph relating these two parameters:

k = (4 lnM + 4(M+1) + 2 ln 2) / (4 lnM + 4 + 2 ln 2)

Figure 3.11 shows the plot relating M to k.

We can verify that the relations between the length of the filter (k) and the length of the blocks used in the overlap-and-add and overlap-and-save methods are almost the same.
3.19 In matrix form, we have the following equation:
X = F1F2F3P8x
Figure 3.11: Relation between M and k at exercise 3.18.
Expanding, with W_8 = e^{−j2π/8} and W_4 = W_8², the factors are (in block form, with I_m the m×m identity):

P8: the bit-reversal permutation, which reorders x into [x(0), x(4), x(2), x(6), x(1), x(5), x(3), x(7)]^T;

F3 = I4 ⊗ B, B = [1 1; 1 −1]    (four 2-point butterflies)

F2 = I2 ⊗ [I2 D2; I2 −D2], D2 = diag(1, W_4¹)    (note that W_4³ = −W_4¹)

F1 = [I4 D4; I4 −D4], D4 = diag(1, W_8¹, W_8², W_8³)    (note that W_8^{4+i} = −W_8^{i}, so the entries W_8⁵, W_8⁶, W_8⁷ equal −W_8¹, −W_8², −W_8³)
3.20 In matrix form, we have the following equation:

X = P8 F1 F2 F3 x

Expanding, in block form (decimation in frequency):

F3 = [I4 I4; Ω4 −Ω4], Ω4 = diag(1, W_8¹, W_8², W_8³)    (the entries W_8⁵, W_8⁶, W_8⁷ equal −W_8¹, −W_8², −W_8³)

F2 = I2 ⊗ [I2 I2; Ω2 −Ω2], Ω2 = diag(1, W_4¹)    (W_4³ = −W_4¹)

F1 = I4 ⊗ B, B = [1 1; 1 −1]

P8: the bit-reversal permutation, now applied at the output.
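The decimation-in-time factorization of Exercise 3.19 can be verified numerically. The sketch below (not part of the original solution) builds the three stages and the permutation explicitly and checks the product against the 8-point DFT:

```python
import numpy as np

W8 = np.exp(-2j * np.pi / 8)
W4 = W8 ** 2
I2, I4 = np.eye(2), np.eye(4)
B = np.array([[1, 1], [1, -1]])                  # 2-point DFT butterfly
P8 = np.eye(8)[[0, 4, 2, 6, 1, 5, 3, 7]]         # bit-reversal ordering
F3 = np.kron(I4, B)
D2 = np.diag([1, W4])
F2 = np.kron(I2, np.block([[I2, D2], [I2, -D2]]))
D4 = np.diag([W8 ** i for i in range(4)])
F1 = np.block([[I4, D4], [I4, -D4]])
x = np.arange(8, dtype=complex)
X = F1 @ F2 @ F3 @ P8 @ x                        # X = F1 F2 F3 P8 x
```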
3.21

X(k) = Σ_{n=0}^{2} x(2n) W_3^{nk} + W_6^{k} Σ_{n=0}^{2} x(2n+1) W_3^{nk}

Figure 3.12 shows the graph of the above operation.

Figure 3.12: Graph of the decimation-in-time 6-point FFT algorithm of exercise 3.21.

In matrix form, we can state that:

X = F1 F2 P6 x

where

P6: the even–odd sorting permutation, which reorders x into [x(0), x(2), x(4), x(1), x(3), x(5)]^T;

F2 = I2 ⊗ DFT3, DFT3 = [1 1 1; 1 W_3 W_3²; 1 W_3² W_3]    (two 3-point DFTs, using W_3⁴ = W_3)

F1 = [I3 D3; I3 −D3], D3 = diag(1, W_6, W_6²).
3.22

X(k) = Σ_{m=0}^{N/5−1} x(5m) W_N^{5mk} + Σ_{m=0}^{N/5−1} x(5m+1) W_N^{(5m+1)k} + Σ_{m=0}^{N/5−1} x(5m+2) W_N^{(5m+2)k} + Σ_{m=0}^{N/5−1} x(5m+3) W_N^{(5m+3)k} + Σ_{m=0}^{N/5−1} x(5m+4) W_N^{(5m+4)k}

= Σ_{m=0}^{N/5−1} x(5m) W_{N/5}^{mk} + W_N^{k} Σ_{m=0}^{N/5−1} x(5m+1) W_{N/5}^{mk} + W_N^{2k} Σ_{m=0}^{N/5−1} x(5m+2) W_{N/5}^{mk} + W_N^{3k} Σ_{m=0}^{N/5−1} x(5m+3) W_{N/5}^{mk} + W_N^{4k} Σ_{m=0}^{N/5−1} x(5m+4) W_{N/5}^{mk}

X(k) = Σ_{l=0}^{4} W_N^{lk} Σ_{m=0}^{N/5−1} x(5m+l) W_{N/5}^{mk} = Σ_{l=0}^{4} W_N^{lk} F_l(k)

Since this process can be applied recursively, we conclude that a length-L DFT S(k) is obtained from 5 DFTs S_l(k) of length L/5:

S(k) = Σ_{l=0}^{4} W_L^{lk} S_l(k)

S(k + rL/5) = Σ_{l=0}^{4} W_L^{l(k + rL/5)} S_l(k) = Σ_{l=0}^{4} W_L^{lk} W_5^{rl} S_l(k)    (3.5)

And so we have equation (3.5) of the basic cell of a radix-5 algorithm. Figure 3.13 shows the graph of this cell.

The radix-5 algorithm does not have the symmetries encountered in the radix-4 algorithm, for example, and so there are no simplifications to be made.
3.23 Using a development similar to that of the previous exercise, it is easy to show that:

X(k) = Σ_{l=0}^{7} W_N^{lk} F_l(k)

Applying it recursively, we can obtain a length-L DFT S(k) from 8 DFTs S_l(k) of length L/8:

S(k) = Σ_{l=0}^{7} W_L^{lk} S_l(k)

S(k + rL/8) = Σ_{l=0}^{7} W_L^{l(k + rL/8)} S_l(k) = Σ_{l=0}^{7} W_L^{lk} W_8^{rl} S_l(k)

The analysis of the complexity of a generic radix-8 algorithm is shown below:

A(N) = 8 A(N/8) + 7N
M(N) = 8 M(N/8) + 7N
Figure 3.13: Graph of the basic cell radix-5 algorithm for exercise 3.22.
Suppose N = 8^l, and let T(l) = M(N)/N:

T(l) = T(l−1) + 7, T(0) = 0 ⇒ T(l) = 7l = 7 log₈N

M(N)/N = 7 log₈N

M(N) = 7N log₈N
A(N) = 7N log₈N

The above graph cell forms the basis of the radix-8 algorithm. In fact, with some algebraic manipulations, it can be proved that simplifications can be made in such a way that the number of complex operations becomes even smaller than that of the radix-2 algorithm. This is beyond the scope of this exercise, and so we developed just the basic radix-8 cell.
3.24 Decomposing N = N1 N2 … Nl,

X(k) = Σ_{n=0}^{N−1} x(n) W_N^{nk}
= Σ_{r=0}^{N1−1} W_N^{rk} Σ_{m=0}^{N/N1−1} x(N1 m + r) W_{N/N1}^{mk}
= Σ_{r=0}^{N1−1} W_N^{rk} Σ_{p=0}^{N2−1} W_{N/N1}^{pk} Σ_{m=0}^{N/(N1N2)−1} x(N1 N2 m + N1 p + r) W_{N/(N1N2)}^{mk}

As we can see, the DFTs become much simpler as we keep up this process. When the maximum simplification is achieved, the DFTs will have length 1, and hence the number of multiplications necessary to compute them will be zero. Instead, the number of multiplications necessary to combine the DFTs to compose X(k) will keep increasing, and it is easy to derive a rule for these operations:

First simplification → N(N1 − 1)
Second simplification → N(N1 − 1 + N2 − 1)
...
Final simplification → N(N1 + N2 + N3 + … + Nl − l)
3.25 In order to compute the linear convolution of two sequences using an FFT algorithm, we need firstto complete the two sequences with zeros until they have the same length of their linear convolution.Then, we can calculate the DFTs of these sequences, through an FFT algorithm, and multiply them.The IDFT of the resulting sequence (calculated through an IFFT algorithm) is the linear convolutionof the two original sequences.
The MATLAB® code for the linear convolution of the sequences from exercises 3.13a and 3.13b can be seen below.
N=11; n=0:10;
xa=2*cos(pi.*n./N)+sin(pi.*n./N).^2;
N=21; n=0:20;
xb=exp(-2.*n);
xa2=[xa zeros(1,20)];
xb2=[xb zeros(1,10)];
Xa=fft(xa2); Xb=fft(xb2);
Xcon=Xa.*Xb;
con=ifft(Xcon);
Figure 3.14 shows the linear convolution calculated using (top) and not using (bottom) an FFT algorithm. As expected, the results are equal.
The MATLAB® code for the linear convolution of the sequences from exercises 3.13b and 3.13c can be seen below.
N=21; n=0:20;
xb=exp(-2.*n);
xc=[0 1];
xb2=[xb 0];
xc2=[xc zeros(1,20)];
Xb=fft(xb2); Xc=fft(xc2);
Xcon=Xb.*Xc;
con=ifft(Xcon);
Figure 3.15 shows the linear convolution calculated using (top) and not using (bottom) an FFT algorithm. As expected, the results are equal.
Figure 3.14: Linear convolution using (top) and not using (bottom) an FFT algorithm for exercise 3.25, for the sequences (a) and (b) of exercise 3.13.
Figure 3.15: Linear convolution using (top) and not using (bottom) an FFT algorithm for exercise 3.25, for the sequences (b) and (c) of exercise 3.13.
3.26
X(k) = Σ_{n=0}^{2N-1} x(n) W_{2N}^{nk}
     = Σ_{n=0}^{N-1} x(n) W_{2N}^{nk} + Σ_{n=N}^{2N-1} x(2N-1-n) W_{2N}^{nk}
     ...
     = Σ_{n=0}^{N-1} x(n) W_{2N}^{nk} + Σ_{p=0}^{N-1} x(p) W_{2N}^{-k(p+1)}
     = Σ_{n=0}^{N-1} x(n) [e^{-j(2π/2N)nk} + e^{j(2π/2N)k(n+1)}]
     = Σ_{n=0}^{N-1} x(n) e^{j(2π/2N)(k/2)} [e^{j(2π/2N)k(n+1/2)} + e^{-j(2π/2N)k(n+1/2)}]

X(k) = 2 e^{jπk/(2N)} Σ_{n=0}^{N-1} x(n) cos[πk(n+1/2)/N]

Since

C(k) = α(k) Σ_{n=0}^{N-1} x(n) cos[πk(n+1/2)/N]

we have

C(k) = [α(k)/(2 e^{jπk/(2N)})] X(k) = (α(k)/2) W_{2N}^{k/2} X(k)
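The relation between the DCT-type sum and the length-2N DFT of the symmetrically extended sequence can be verified numerically with the following Python sketch (an added illustration, not part of the original solution):

```python
import numpy as np

N = 8
rng = np.random.default_rng(4)
x = rng.standard_normal(N)
xe = np.concatenate([x, x[::-1]])          # symmetric extension, length 2N
X = np.fft.fft(xe)                         # 2N-point DFT
k = np.arange(2 * N)
dct_sum = np.array([np.sum(x * np.cos(np.pi * kk * (np.arange(N) + 0.5) / N))
                    for kk in k])
# the cosine sum recovered from the DFT of the extended sequence
recovered = 0.5 * np.exp(-1j * np.pi * k / (2 * N)) * X
assert np.allclose(recovered.real, dct_sum)
assert np.allclose(recovered.imag, 0, atol=1e-9)
```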
3.27
C(k) = α(k) Σ_{n=0}^{N-1} x(n) cos[π(n+1/2)k/N]

Consider the DFT of the length-N sequence v(n) obtained by reordering x(n) so that v(n) = x(2n) and v(N-1-n) = x(2n+1), for 0 ≤ n ≤ N/2 − 1:

X(k) = Σ_{n=0}^{N-1} v(n) W_N^{nk}
     = Σ_{n=0}^{N/2-1} v(n) W_N^{nk} + Σ_{n=N/2}^{N-1} v(n) W_N^{nk},   with m = N - 1 - n
     = Σ_{n=0}^{N/2-1} v(n) W_N^{nk} + Σ_{m=0}^{N/2-1} v(N-1-m) W_N^{(N-1-m)k}
     = Σ_{n=0}^{N/2-1} x(2n) W_N^{nk} + Σ_{n=0}^{N/2-1} x(2n+1) W_N^{-(n+1)k}

so that, for 0 ≤ k < N,

X(k) = Σ_{n=0}^{N/2-1} [x(2n) W_{2N}^{2nk} + x(2n+1) W_{2N}^{-(2n+2)k}]

W_{2N}^{k/2} X(k) = Σ_{n=0}^{N/2-1} [x(2n) W_{2N}^{k(2n+1/2)} + x(2n+1) W_{2N}^{-k(2n+1+1/2)}]

W_{2N}^{k/2} X(k) = Σ_{m=0, m even}^{N-1} x(m) W_{2N}^{k(m+1/2)} + Σ_{m=0, m odd}^{N-1} x(m) W_{2N}^{-k(m+1/2)}

Re{W_{2N}^{k/2} X(k)} = Σ_{m=0, m even}^{N-1} x(m) cos[πk(m+1/2)/N] + Σ_{m=0, m odd}^{N-1} x(m) cos[πk(m+1/2)/N]

Re{W_{2N}^{k/2} X(k)} = Σ_{m=0}^{N-1} x(m) cos[πk(m+1/2)/N]

Re{α(k) W_{2N}^{k/2} X(k)} = α(k) Σ_{m=0}^{N-1} x(m) cos[πk(m+1/2)/N]

C(k) = Re{α(k) W_{2N}^{k/2} X(k)}
3.28
H(k) = Σ_{n=0}^{N-1} x(n) cos(2πkn/N) + Σ_{n=0}^{N-1} x(n) sin(2πkn/N)

X(k) = Σ_{n=0}^{N-1} x(n) e^{-j(2π/N)kn}

Re{X(k)} = Σ_{n=0}^{N-1} x(n) cos(2πkn/N)
Im{X(k)} = -Σ_{n=0}^{N-1} x(n) sin(2πkn/N)

H(k) = Re{X(k)} - Im{X(k)}

H(k)  = Σ_{n=0}^{N-1} x(n) [cos(2πkn/N) + sin(2πkn/N)]
H(-k) = Σ_{n=0}^{N-1} x(n) [cos(2πkn/N) - sin(2πkn/N)]

Re{X(k)} = Σ_{n=0}^{N-1} x(n) cos(2πkn/N) = [H(k) + H(-k)]/2 = E{H(k)}
Im{X(k)} = -Σ_{n=0}^{N-1} x(n) sin(2πkn/N) = -[H(k) - H(-k)]/2 = -O{H(k)}

X(k) = Re{X(k)} + j Im{X(k)} = E{H(k)} - j O{H(k)}
3.29
y(n) = x1(n) ⊗ x2(n)
Y_DFT(k) = X1(k) X2(k)
Y_DHT(k) = Re{Y_DFT(k)} - Im{Y_DFT(k)}
Y_DHT(k) = Re{X1(k) X2(k)} - Im{X1(k) X2(k)}

But we know that:

X1(k)X2(k) = [(H1(k)+H1(-k))/2 - j(H1(k)-H1(-k))/2] × [(H2(k)+H2(-k))/2 - j(H2(k)-H2(-k))/2]
           = (1/4)[(H1(k)+H1(-k))(H2(k)+H2(-k)) - (H1(k)-H1(-k))(H2(k)-H2(-k))]
             - (j/4)[(H1(k)-H1(-k))(H2(k)+H2(-k)) + (H1(k)+H1(-k))(H2(k)-H2(-k))]

X1(k)X2(k) = (1/4)[2H1(k)H2(-k) + 2H1(-k)H2(k)] - (j/4)[2H1(k)H2(k) - 2H1(-k)H2(-k)]

Y_DHT(k) = H1(k) [H2(k)+H2(-k)]/2 + H1(-k) [H2(k)-H2(-k)]/2

Y_DHT(k) = H1(k) E{H2(k)} + H1(-k) O{H2(k)}
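The final identity can be verified numerically; the Python sketch below (an added illustration) computes the DHT via the FFT and checks the circular-convolution property for random sequences:

```python
import numpy as np

def dht(x):
    """Discrete Hartley transform: H(k) = Re{X(k)} - Im{X(k)}."""
    X = np.fft.fft(x)
    return X.real - X.imag

rng = np.random.default_rng(0)
N = 8
x1, x2 = rng.standard_normal(N), rng.standard_normal(N)
H1, H2 = dht(x1), dht(x2)
H1m = H1[(-np.arange(N)) % N]      # H1(-k), indices taken modulo N
H2m = H2[(-np.arange(N)) % N]
E2 = (H2 + H2m) / 2                # even part of H2
O2 = (H2 - H2m) / 2                # odd part of H2
Y = H1 * E2 + H1m * O2

# circular convolution of x1 and x2 via the FFT
y = np.fft.ifft(np.fft.fft(x1) * np.fft.fft(x2)).real
assert np.allclose(dht(y), Y)
```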
3.30
H_1 = (1/√2) [1 1; 1 -1]

H_1^{-1} = (1/√2) [1 1; 1 -1] = H_1^{*T} = H_1

Now we have proved that the Hadamard transform is unitary for H_1. To prove that this remains true for any H_i (where i = 2^p), we can use induction. Since we have already proved it is true for i = 1 (p = 0), we assume it is true for i = N/2 (p = log_2 N − 1) and then prove it is also true for i = N (p = log_2 N).

H_{N/2}^{-1} = H_{N/2}^{*T} = H_{N/2}

And so, it is straightforward to verify that:

(1/√2) [H_{N/2} H_{N/2}; H_{N/2} -H_{N/2}] (1/√2) [H_{N/2} H_{N/2}; H_{N/2} -H_{N/2}] = [I 0; 0 I]

H_N^{-1} = (1/√2) [H_{N/2} H_{N/2}; H_{N/2} -H_{N/2}] = H_N^{*T} = H_N
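The inductive construction can be checked numerically (an added Python sketch; the order 8 is an arbitrary choice):

```python
import numpy as np

def hadamard(n):
    """Normalized Hadamard matrix of order n = 2^p, built recursively."""
    if n == 1:
        return np.array([[1.0]])
    H = hadamard(n // 2)
    return np.block([[H, H], [H, -H]]) / np.sqrt(2)

H8 = hadamard(8)
# unitary (real, hence orthogonal) and symmetric: H^{-1} = H^T = H
assert np.allclose(H8 @ H8.T, np.eye(8))
assert np.allclose(H8, H8.T)
```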
3.31 (a)
H_k(z) = (1 - z^{-N}) / (W_N^k - z^{-1})
       = W_N^{-k}/(1 - W_N^{-k} z^{-1}) - W_N^{-k} z^{-N}/(1 - W_N^{-k} z^{-1})

h_k(n) = W_N^{-k} W_N^{-kn} u(n) - W_N^{-k} W_N^{-kn} u(n-N)
h_k(n) = W_N^{-k(n+1)} [u(n) - u(n-N)]

y_k(n) = x(n) * h_k(n) = Σ_{l=0}^{N-1} W_N^{-k(l+1)} x(n-l),   with m = N - 1 - l

y_k(n) = Σ_{m=0}^{N-1} W_N^{km} x(n-N+1+m)

X(k, i) = y_k(i) = Σ_{m=0}^{N-1} x(i-N+1+m) W_N^{km} = Σ_{m=0}^{N-1} x_i(m) W_N^{km}
(b) The direct computation of X(k, i) = Σ_{n=0}^{N-1} x_i(n) W_N^{kn} requires N^2 complex multiplications and N(N − 1) complex additions. By using an FFT algorithm, one can calculate the short-time DFT using approximately N log_2 N complex multiplications and the same number of additions. Using the bank of N IIR filters, only N complex multiplications and 2N additions are needed for each new sample, if we use the following architecture:

(W_N^k - z^{-1}) Y_k(z) = X(z) - X(z) z^{-N}

y_k(n) = W_N^{-k} y_k(n-1) + W_N^{-k} x(n) - W_N^{-k} x(n-N)
(c) When either the overlap-and-add or the overlap-and-save method is used to compute a linear convolution, the DFTs of the signals should be calculated in blocks. The previous item showed that, using the described formula for H_k(z), we need N complex multiplications per sample. Thus, for N samples, the number of complex multiplications would be N^2. On the other hand, if an FFT algorithm is used to compute the DFT of each block, the number of complex multiplications needed is N log_2 N. The increase in the computational effort for the method that uses the filter bank is due to the useless calculation of short-time Fourier transforms for blocks that will not be used by the overlap-based methods. These calculations are performed because of the recursive nature of the algorithm, in which the transform is updated for each new sample (which is not necessary for the overlap-based methods). Therefore, it is not advantageous to use the above formula for H_k(z) to compute a linear convolution with the overlap-and-add or the overlap-and-save methods.
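The recursive update of item (b) can be simulated in Python (an added sketch with assumed values of N, k, and an arbitrary input); after the whole signal is processed, the recursive output matches the direct short-time sum derived in item (a):

```python
import numpy as np

N, k = 8, 3                                # assumed window length and bin
W = np.exp(-2j * np.pi / N)                # W_N
rng = np.random.default_rng(1)
x = rng.standard_normal(64)

y = 0.0 + 0.0j
for n in range(len(x)):
    x_old = x[n - N] if n >= N else 0.0
    y = W ** (-k) * (y + x[n] - x_old)     # y_k(n) from the difference equation

# after the last sample, y equals the sum over the last N input samples
window = x[-N:]
direct = sum(window[m] * W ** (k * m) for m in range(N))
assert np.isclose(y, direct)
```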
3.32 In the first example for this exercise, the values of N and k were set to 20 and 5, respectively. The MATLAB® code for this exercise can be seen below. The values of the sequences were chosen at random.
N=20; k=5;
x=randn(1,N);
h=rand(1,k);
flops(0)
xl=[x zeros(1,k-1)];
hl=[h zeros(1,N-1)];
Xl=fft(xl); Hl=fft(hl);
co1=ifft(Xl.*Hl);
f(1)=flops;
flops(0)
co2=conv(x,h);
f(2)=flops;
flops(0)
co3=filter(h,1,x);
f(3)=flops
Figure 3.16 shows the convolution results for the three methods. It is clear that the final result is thesame for all methods.
For N = 20 and k = 5, the first method, which uses the FFT to compute the linear convolution, needed 6182 floating-point operations (flops), whereas the other two methods needed 202 and 200 flops, respectively.
If we choose other values of N and k so that N + k − 1 is a power of two, we can see that the first method becomes more efficient (in terms of flops) as the values of N and k increase. Table 3.1 summarizes the results obtained for some different values of N and k.
3.33 Figure 3.17 shows the DFTs of the sequence x(n) with 64 samples. In the upper graph, the DFT also uses 64
Figure 3.16: Linear convolution between x (n) and h (n) using three different methods for exercise 3.32.
Table 3.1: Results of exercise 3.32.
                                Flops using
N + k − 1     N        k       FFT        conv       filter
64            53       12      6028       1274       1272
256           230      27      28698      12422      12420
1024          980      45      134504     88202      88200
4096          4020     77      618934     619082     619080
16384         16200    185     2802180    5994002    5994000
samples, whereas in the lower graph the DFT uses 128 samples. Only half of the DFT samples are shown, since the magnitude of the DFT is symmetric.
We cannot tell that the sequence x(n) is made of two sinusoids, since we see only one peak in the DFT. This happens because the frequencies of the two sinusoids are too close.
Now, we may increase the number of samples of the sequence x(n) to 128 and compute the DFTs of this new sequence using 128, 256, and 512 samples. These three DFTs are exhibited in Figure 3.18. Once more, only half of the samples are shown.
Using 256 samples for the DFT, we start to see two peaks, corresponding to the two sinusoids of the sequence x(n). When using 512 samples, these two peaks become even clearer.

By increasing the number of samples of the sequence x(n), the two peaks of its spectrum could be separated by also increasing the number of samples of the DFT. These procedures increased the spectral resolution so that the tiny difference between the frequencies of the two sinusoids became apparent.
The MATLAB® source code for this exercise is shown below:
l=100;
n=0:63;
ws=2*pi;
Figure 3.17: DFTs of sequence x(n), using 64 (top) and 128 samples (bottom), for exercise 3.33.
Figure 3.18: DFTs of the increased sequence x(n), using 128 (top), 256 (center), and 512 samples (bottom), for exercise 3.33.
x=sin(ws.*n./10)+sin((ws/10+ws/l).*n);
X1=fft(x);
X2=fft(x,128);
n=0:127;
xl=sin(ws.*n./10)+sin((ws/10+ws/l).*n);
Xl1=fft(xl);
Xl2=fft(xl,256);
Xl3=fft(xl,512);
3.34 As performed in Experiment 3.2, we shall employ the frequency domain to analyze the contents of a given signal x(n) composed of a 10 Hz sinusoid corrupted by noise, with Fs = 200 samples/s, observed for an interval of 1 s, as given by
fs = 200;
f = 10;
time = 0:1/fs:(1-1/fs);
K = [3:6];          % Possible values for k
M = [5 10 16 26];   % Possible values for M

x = zeros(length(K),length(time));
sample_x = zeros(length(K),length(time));
absX = zeros(length(K),length(time));
sample_absX = zeros(length(K),length(time));

for k=K,
  for m=1:M(k-2),
    x(k-2,:) = sin(2*pi*f.*time) + k*randn(1,fs);
    absX(k-2,:) = absX(k-2,:) + abs(fft(x(k-2,:)));
    if m==1,
      sample_x(k-2,:) = x(k-2,:);
      sample_absX(k-2,:) = absX(k-2,:);
    end
  end
  absX(k-2,:) = absX(k-2,:)/M(k-2);
end

figure(1);
for p=1:length(K);
  subplot(2,2,p);
  plot(time,sample_x(p,:));
  ylim([-20 20]);
  xlabel('Time (s)');
  ylabel('$x(t)$','interpreter','latex');
end

figure(2);
for p=1:length(K);
  subplot(2,2,p);
  plot(sample_absX(p,:));
  ylim([0 180]);
  xlabel('Frequency (Hz)');
  ylabel('Magnitude of $X(e^\jmath 2\pi f)$','interpreter','latex');
  maxX = max(sample_absX(p,:));
  aveX = mean(sample_absX(p,:));
  threshold = (maxX+aveX)/2;
  hold on;
  plot([1,200],[threshold threshold],'r--');
  hold off;
end

figure(3);
for p=1:length(K);
  subplot(2,2,p);
  plot(absX(p,:));
  ylim([0 120]);
  xlabel('Frequency (Hz)');
  ylabel('Magnitude of $X(e^\jmath 2\pi f)$','interpreter','latex');
  maxX = max(absX(p,:));
  aveX = mean(absX(p,:));
  threshold = (maxX+aveX)/2;
  hold on;
  plot([1,200],[threshold threshold],'r--');
  hold off;
end
Figure 3.19 depicts examples of x(n), whereas Figure 3.20 depicts the absolute value of the corresponding DFT, for distinct values of k ∈ {3, 4, 5, 6}. Notice that it is really hard to observe the sinusoidal component in the time domain for large amounts of noise (k ≥ 3). In addition, a closer observation of Figure 3.20 shows that, even in the frequency domain, it is impossible to detect the peaks due to the sinusoidal component of the signal x(n). The dashed red lines in Figure 3.20 indicate a threshold computed as follows:
Threshold = (MaxValue + MeanValue)/2,
in which MaxValue is the maximum absolute value of the DFT of x(n), whereas MeanValue is the average of the absolute values of the DFT of x(n). Following this criterion, we detect a sinusoid properly only if the peaks associated with this sinusoid are above the threshold. Note that this is not the case in Figure 3.20 (for all four values of k).
A solution to this problem is to average the absolute value of the DFT results over M repetitions of the experiment. The value of M depends on the amount of noise (controlled by k). It is not difficult to
Figure 3.19: Sinusoidal signal corrupted with different levels of noise: (a) k = 3; (b) k = 4; (c) k = 5; and(d) k = 6.
Figure 3.20: Absolute value of FFT of sinusoidal signal corrupted with different levels of noise: (a) k = 3;(b) k = 4; (c) k = 5; and (d) k = 6.
realize that, due to the random nature of the noise signal, the larger the M, the cleaner the resulting spectrum. As an example, we have averaged the absolute values of the related DFTs using: M = 5 repetitions for k = 3; M = 10 repetitions for k = 4; M = 16 repetitions for k = 5; and M = 26 repetitions for k = 6. The results are depicted in Figure 3.21. Note that, in this case, only the peaks due to the 10 Hz sinusoid are above the threshold.
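The averaging procedure can also be sketched in Python (an added illustration with one assumed noise level and repetition count):

```python
import numpy as np

# assumed parameters mirroring the experiment: fs = 200 Hz, f = 10 Hz, k = 3, M = 20
fs, f, k, M = 200, 10, 3, 20
rng = np.random.default_rng(3)
t = np.arange(fs) / fs

acc = np.zeros(fs)
for _ in range(M):
    x = np.sin(2 * np.pi * f * t) + k * rng.standard_normal(fs)
    acc += np.abs(np.fft.fft(x))
acc /= M                                   # averaged magnitude spectrum

threshold = (acc.max() + acc.mean()) / 2
assert np.argmax(acc) in (f, fs - f)       # the strongest bin is the sinusoid
assert acc[f] > threshold                  # and it stays above the threshold
```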
Figure 3.21: Average (over M repetitions) of the absolute value of the FFT of a sinusoidal signal corrupted with different levels of noise: (a) k = 3, M = 5; (b) k = 4, M = 10; (c) k = 5, M = 16; and (d) k = 6, M = 26.
Chapter 4
DIGITAL FILTERS
4.1 (a) • Realization number 1, shown in Figure 4.1:
Coefficient values (from the figure): a = 0.0034, b = 0.0106, c = 0.0025, d = 0.0149.
Figure 4.1: Realization number 1 for exercise 4.1a.
• Realization number 2, shown in Figure 4.2:
Coefficient values (as labeled in the figure): γ00 = 0.0034, γ01 = 0, γ02 = 0.011212; γ10 = 1.5766, γ11 = 1, γ12 = 1.152791; γ20 = −1.15766, γ21 = 1, γ22 = 1.152791.
Figure 4.2: Realization number 2 for exercise 4.1a.
(b) • Realization number 1, shown in Figure 4.3:
Coefficient values (from the figure): a1 = 3.856, a2 = −5.5921, a3 = 3.61473, a4 = −0.8787; b0 = 1, b1 = −3.238, b2 = −4.548261, b3 = −3.238, b4 = 1.
Figure 4.3: Realization number 1 for exercise 4.1b.
• Realization number 2, shown in Figure 4.4:
Coefficient values (as labeled in the figure; one set per second-order section): first section: a10 = −1.919, a20 = 0.923, b00 = 1, b10 = −1.349, b20 = 1; second section: a10 = −1.937, a20 = 0.952, b00 = 1, b10 = −1.889, b20 = 1.
Figure 4.4: Realization number 2 for exercise 4.1b.
4.2
Y(z) = X(z) + A^T Y(z) + B^T Y(z) z^{-1}

(a)

[Y1(z); Y2(z); Y3(z); Y4(z)] = [0; 0; X3(z); 0]
  + [0 0 0 0; α2 0 0 0; α3 0 0 0; γ2 γ1 γ0 0] [Y1(z); Y2(z); Y3(z); Y4(z)]
  + [0 1 0 0; 0 0 1 0; 0 0 0 0; 0 0 0 0] [Y1(z); Y2(z); Y3(z); Y4(z)] z^{-1}

(b)

[Y1(z); Y2(z); Y3(z); Y4(z); Y5(z)] = [0; 0; X3(z); 0; 0]
  + [0 0 0 0 0; 0 0 0 0 0; 1 1 0 0 0; 1 0 -m1 0 0; 0 1 -m2 0 0] [Y1(z); Y2(z); Y3(z); Y4(z); Y5(z)]
  + [0 0 0 1 0; 0 0 1 0 -1; 0 0 0 0 0; 0 0 0 0 0; 0 0 0 0 0] [Y1(z); Y2(z); Y3(z); Y4(z); Y5(z)] z^{-1}

(c)

[Y1(z); Y2(z); Y3(z); Y4(z); Y5(z); Y6(z); Y7(z)] = [0; X2(z); X3(z); 0; 0; 0; 0]
  + [0 0 0 0 0 0 0; 0 0 0 0 0 0 0; 0 k1 0 0 0 0 0; -1 0 1 0 0 0 0; 0 0 0 k0 0 0 0; 0 0 1 0 1 0 0; 1 0 0 0 -1 0 0] [Y1(z); Y2(z); Y3(z); Y4(z); Y5(z); Y6(z); Y7(z)]
  + [0 0 0 0 0 -1 0; 0 0 0 0 0 0 -1; 0 0 0 0 0 0 0; 0 0 0 0 0 0 0; 0 0 0 0 0 0 0; 0 0 0 0 0 0 0; 0 0 0 0 0 0 0] [Y1(z); Y2(z); Y3(z); Y4(z); Y5(z); Y6(z); Y7(z)] z^{-1}
4.3 Exercise 4.3
4.4 T(z) is an M×M matrix such that

T^T(z) = (I − A − B z^{-1})^{-1}

Y_j(z) = Σ_{i=1}^{M} T_{ij}(z) X_i(z),   where X_i is the input x(n) applied to node i

(a)

H(z) = [γ0 + (γ1 − γ0α2) z^{-1} + γ2 z^{-2}] / (1 − α2 z^{-1} − α3 z^{-2})

(b)

H(z) = (1 − z^{-2}) / [1 + (m1 − m2) z^{-1} + (m1 + m2 − 1) z^{-2}]

(c)

H(z) = (1 + k0 + k1 + k0k1) / [1 − (k0 + k0k1) z^{-1} − (k1 + 2k0k1) z^{-2}]
• MATLAB® Program:

more on;
clear all;close all;

syms k0 k1 y0 y1 y2 a2 a3 m1 m2 z;

%-------------------------------------------------------%
%                  EXERCISE 4.4 - a)                     %
%-------------------------------------------------------%

A=[0 0 0 0;
   a2 0 0 0;
   a3 0 0 0;
   y2 y1 y0 0];

B=[0 1 0 0;
   0 0 1 0;
   0 0 0 0;
   0 0 0 0];

T=(inv(eye(size(A))-A-B/z));
T=T.';

S = sum(T); S=S(4);
S = simple(S);
Sc = collect(S,z);
disp(' ');
disp('---------------------------------------------------')
disp('                  Exercise a')
disp(' ')
pretty(Sc);
disp(' ');

%-------------------------------------------------------%
%                  EXERCISE 4.4 - b)                     %
%-------------------------------------------------------%

A=[0 0 0 0 0;
   0 0 0 0 0;
   1 1 0 0 0;
   1 0 -m1 0 0;
   0 1 -m2 0 0];

B=[0 0 0 1 0;
   0 0 0 0 -1;
   0 0 0 0 0;
   0 0 0 0 0;
   0 0 0 0 0];

T=(inv(eye(size(A))-A-B/z));
T=T.';

S = sum(T); S=S(3);
S = simple(S);
Sc = collect(S,z);
disp(' ');
disp('---------------------------------------------------')
disp('                  Exercise b')
disp(' ')
pretty(Sc);

%-------------------------------------------------------%
%                  EXERCISE 4.4 - c)                     %
%-------------------------------------------------------%

A=[0 0 0 0 0 0 0;
   0 0 0 0 0 0 0;
   0 k1 0 0 0 0 0;
   -1 0 1 0 0 0 0;
   0 0 0 k0 0 0 0;
   0 0 1 0 1 0 0;
   1 0 0 0 -1 0 0];

B=[0 0 0 0 0 -1 0;
   0 0 0 0 0 0 -1;
   0 0 0 0 0 0 0;
   0 0 0 0 0 0 0;
   0 0 0 0 0 0 0;
   0 0 0 0 0 0 0;
   0 0 0 0 0 0 0];

T=(inv(eye(size(A))-A-B/z));
T=T.';

S = sum(T); S=S(6);
S = simple(S);
S = simplify(S);
Sc = collect(S,z);
disp(' ');
disp('---------------------------------------------------')
disp('                  Exercise c')
disp(' ')
pretty(Sc);
disp('---------------------------------------------------')

more off;
4.5

x(n+1) = A x(n) + B u(n)
y(n) = C x(n) + D u(n)

If x'(n) = T x(n) ⇒ x(n) = T^{-1} x'(n), then:

A' = TAT^{-1};  B' = TB;  C' = CT^{-1};  D' = D

H'(z) = C'(zI − A')^{-1} B' + D'
      = CT^{-1} (zTT^{-1} − TAT^{-1})^{-1} TB + D
      = CT^{-1} [T(zI − A)T^{-1}]^{-1} TB + D

Since (XYZ)^{-1} = Z^{-1}Y^{-1}X^{-1}, we have:

H'(z) = CT^{-1} [T(zI − A)^{-1}T^{-1}] TB + D
      = C (T^{-1}T) (zI − A)^{-1} (T^{-1}T) B + D
      = C (zI − A)^{-1} B + D
      = H(z)
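The invariance just proved can be checked numerically at any test point z0 (an added Python sketch with randomly generated state-space matrices):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, 1))
C = rng.standard_normal((1, n))
D = np.array([[0.5]])
T = rng.standard_normal((n, n)) + np.eye(n)   # (almost surely) invertible
Ti = np.linalg.inv(T)
Ap, Bp, Cp, Dp = T @ A @ Ti, T @ B, C @ Ti, D # similarity-transformed system

def H(z, A, B, C, D):
    """Transfer function C (zI - A)^{-1} B + D evaluated at z."""
    return C @ np.linalg.inv(z * np.eye(A.shape[0]) - A) @ B + D

z0 = 1.5 + 0.3j
assert np.allclose(H(z0, A, B, C, D), H(z0, Ap, Bp, Cp, Dp))
```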
4.6 (a)

x(n+1) = [0 α3; 1 α2] x(n) + [1; 0] u(n)

y(n) = [γ1   γ2 + α2γ1 + α3γ0] x(n) + γ0 u(n)

(b)

x(n+1) = [-m1+1 -m1; -m2 -m2+1] x(n) + [-m1; -m2] u(n)

y(n) = [1   -1] x(n) + u(n)

(c)

x(n+1) = [k0k1   -k0-1; -k1(k0+1)   k0] x(n) + [-k0(k1+1); (1+k0)(1+k1)] u(n)

y(n) = [-k1(k0+1)   k0] x(n) + (1+k0)(1+k1) u(n)
4.7 (a)
H(z) = (z^3 + 3z^2 + (11/4)z + 5/4) / [(z^2 + (1/2)z + 1/2)(z + 1/2)]

H(z) = [(z + 1)(z + 1/2) + (z + 3/2)(z^2 + (1/2)z + 1/2)] / [(z^2 + (1/2)z + 1/2)(z + 1/2)]

H(z) = (z + 1)/(z^2 + (1/2)z + 1/2) + (z + 3/2)/(z + 1/2)

H(z) = (z + 1)/(z^2 + (1/2)z + 1/2) + 1 + 1/(z + 1/2)
(b) The corresponding realization is shown in Figure 4.5.
4.8 (a)
x(n+1) = [-a1 a2; 1 0] x(n) + [1 + a2; 0] u(n)

y(n) = [a1   (1 - a2)] x(n) + [-a2] u(n)

(b)

Using

H(z) = Y(z)/U(z) = c^T (zI − A)^{-1} b + d
Figure 4.5: Realization
since

(zI − A)^{-1} = [1/(z^2 + a1 z − a2)] [z a2; 1 z+a1]

we obtain

H(z) = (z^{-2} + a1 z^{-1} − a2) / (−a2 z^{-2} + a1 z^{-1} + 1)

(c)

This is an allpass transfer function, since |H(e^{jω})| = 1.
4.9 (a)
A = [1 − m1   m1; −m2   m2 − 1],   b = [−m1; −m2]   (4.1)

y(k) = [(1 − m1) + (1 − m2)   (−1 + m1) + (−1 + m2)] x(k) + [1 − m1 − m2] u(k)
     = [2 − m1 − m2   −2 + m1 + m2] x(k) + [1 − m1 − m2] u(k)   (4.2)
(b)

H(z) = c^T (zI − A)^{-1} b + d = [z^2(1 − m1 − m2) − z(m1 − m2) − 1] / [z^2 + z(m1 − m2) + m1 + m2 − 1] = N(z) / (−z^2 N(z^{-1}))   (4.3)

Since the z_i are the roots of N(z), the polynomial −z^2 N(z^{-1}) has roots at 1/z_i, leading to

|H(e^{jω})| = |N(e^{jω}) / (−e^{2jω} N(e^{-jω}))| = 1   (4.4)

Therefore, H(z) is an allpass filter, whose magnitude response is depicted in Figure 4.6.
4.10
A = [λ0λ3   1 + λ0λ1; λ3   λ1],   b = [λ0; 1]   (4.5)

y(k) = [λ3γ0 + λ3γ0γ1 + (1 + λ2λ3)γ2   λ1γ0 + (1 + λ0λ1)γ1] x(k) + [γ0 + λ0γ1 + λ2γ2] u(k)   (4.6)

H(z) = c^T (zI − A)^{-1} b + d
     = [1/(z^2 − (λ0λ3 + λ1)z − λ3)] [λ3γ0 + λ3γ0γ1 + (1 + λ2λ3)γ2   λ1γ0 + (1 + λ0λ1)γ1] [z − λ1   1 + λ0λ1; λ3   z − λ0λ3] [λ0; 1] + [γ0 + λ0γ1 + λ2γ2]   (4.7)
4.11 (a) The state-space equations related to the circuit of the exercise are given by

x(n+1) = [b − a] x(n) + 1·u(n)   (4.8)
y(n) = [γ2 − aγ1] x(n) + γ1 u(n)   (4.9)

(b) The corresponding transfer function is given by

H(z) = (γ2 − aγ1)/(z − b + a) + γ1 = (γ1 z − bγ1 + γ2)/(z − b + a)   (4.11)

(c) For a DC gain equal to one, the following relation should be satisfied:

γ2 − bγ1 + γ1 = 1 − b + a   (4.12)

For a zero at ωs/2, the following equality should hold:

γ2 − bγ1 − γ1 = 0   (4.13)

Since b = −a = −1/4, we obtain the following solution by solving the equations above:

γ1 = 3/4   (4.14)
γ2 = 9/16   (4.15)
Figure 4.6: Magnitude of H(z).
4.12 (a)
H(z) = (1/L)(1 − z^{-1/L}) [1/(1 − z^{-1}) − (1 + c z^{-1})/(1 − a z^{-1} − b z^{-2})]
4.13 (a)
A = [−m1   1; −m2 − m1   −1]

b = [(γ1 − γ0)m1; γ1 − γ2 − γ0m2 + γ0m1]
(b) The transposed structure is shown in Figure 4.7.
Figure 4.7: Transposed structure.
c^T = [1   0]

d = γ0

H(z) = c^T [zI − A]^{-1} b + d

H(z) = c^T [z + m1   −1; m2 + m1   z + 1]^{-1} b + d

H(z) = c^T ([z + 1   1; −m2 − m1   z + m1] / D(z)) b + d

H(z) = ([z + 1   1] / D(z)) b + d

H(z) = [(γ1 − γ0)m1(z + 1) + γ1 − γ2 − γ0m2 + γ0m1] / D(z) + d

H(z) = [(γ1 − γ0)m1 z + γ1m1 − γ0m2 + γ1 − γ2] / D(z) + d

where

D(z) = z^2 + (m1 + 1)z + 2m1 + m2   (4.16)
(c) If γ0 = γ1 = γ2 and m1 = m2, the transfer function related to the part of the circuit with memory is zero.
4.14

Y(z) = X(z) + A^T Y(z) + B^T Y(z) z^{-1}

(a)

[Y1(z); Y2(z); Y3(z)] = [γ0 X1(z); γ1 X2(z); γ2 X3(z)]
  + [0 0 0; 0 0 0; α3 α2 0] [Y1(z); Y2(z); Y3(z)]
  + [0 1 0; 0 0 1; 0 0 0] [Y1(z); Y2(z); Y3(z)] z^{-1}

(b)

[Y1(z); Y2(z); Y3(z); Y4(z); Y5(z)] = [0; 0; X3(z); 0; 0]
  + [0 0 0 0 0; 0 0 0 0 0; -m1 -m2 0 0 0; 1 0 1 0 0; 0 1 1 0 0] [Y1(z); Y2(z); Y3(z); Y4(z); Y5(z)]
  + [0 0 0 1 0; 0 0 0 0 -1; 0 0 0 0 0; 0 0 0 0 0; 0 0 0 0 0] [Y1(z); Y2(z); Y3(z); Y4(z); Y5(z)] z^{-1}
Figure 4.8: Transposed circuit
(c)

[Y1(z); Y2(z); Y3(z); Y4(z); Y5(z); Y6(z); Y7(z)] = [X1(z); 0; 0; 0; 0; 0; 0]
  + [0 0 0 0 0 0 0; 0 0 0 0 0 0 0; 1 -1 0 0 0 0 0; 1 0 k0 0 0 0 0; 0 1 -k0 0 0 0 0; 0 0 0 k1 0 0 0; 0 0 0 1 0 1 0] [Y1(z); Y2(z); Y3(z); Y4(z); Y5(z); Y6(z); Y7(z)]
  + [0 0 0 0 -1 0 0; 0 0 0 0 0 -1 0; 0 0 0 0 0 0 0; 0 0 0 0 0 0 0; 0 0 0 0 0 0 0; 0 0 0 0 0 0 0; 0 0 0 0 0 0 0] [Y1(z); Y2(z); Y3(z); Y4(z); Y5(z); Y6(z); Y7(z)] z^{-1}
4.15 (a) • Sensitivity – α2:

S(z) = [γ1 z^{-2} + (α3γ0 + γ2) z^{-3}] / (1 − α2 z^{-1} − α3 z^{-2})^2

• Sensitivity – α3:

S(z) = [γ0 z^{-2} + (γ1 − γ0α2) z^{-3} + γ2 z^{-4}] / (1 − α2 z^{-1} − α3 z^{-2})^2

• Sensitivity – γ0:

S(z) = (1 − α2 z^{-1}) / (1 − α2 z^{-1} − α3 z^{-2})

• Sensitivity – γ1:

S(z) = z^{-1} / (1 − α2 z^{-1} − α3 z^{-2})

• Sensitivity – γ2:

S(z) = z^{-2} / (1 − α2 z^{-1} − α3 z^{-2})

(b) • Sensitivity – m1:

S(z) = −(z^{-1} + z^{-2} − z^{-3} − z^{-4}) / [1 + (m1 − m2) z^{-1} + (m1 + m2 − 1) z^{-2}]^2

• Sensitivity – m2:

S(z) = (z^{-1} − z^{-2} − z^{-3} + z^{-4}) / [1 + (m1 − m2) z^{-1} + (m1 + m2 − 1) z^{-2}]^2

(c) • Sensitivity – k0:

S(z) = (1 + k1)(1 + (1 + k1) z^{-1} + k1 z^{-2}) / [1 − (k0 + k0k1) z^{-1} − (k1 + 2k0k1) z^{-2}]^2

• Sensitivity – k1:

S(z) = (1 + k0)(1 + (1 + 2k0) z^{-2}) / [1 − (k0 + k0k1) z^{-1} − (k1 + 2k0k1) z^{-2}]^2
• MATLAB® programs:

more off;
clear all;close all;

syms k0 k1 y0 y1 y2 a2 a3 m1 m2 z;

%-------------------------------------------------------%
%                  EXERCISE 4.15 - a)                    %
%-------------------------------------------------------%

A=[0 0 0 0;
   a2 0 0 0;
   a3 0 0 0;
   y2 y1 y0 0];

B=[0 1 0 0;
   0 0 1 0;
   0 0 0 0;
   0 0 0 0];

Sc=transfer(A,B,3,4);pause;
sensitivity(Sc,a2,z);pause;
sensitivity(Sc,a3,z);pause;
sensitivity(Sc,y0,z);pause;
sensitivity(Sc,y1,z);pause;
sensitivity(Sc,y2,z);pause;

%-------------------------------------------------------%
%                  EXERCISE 4.15 - b)                    %
%-------------------------------------------------------%

A=[0 0 0 0 0;
   0 0 0 0 0;
   1 1 0 0 0;
   1 0 -m1 0 0;
   0 1 -m2 0 0];

B=[0 0 0 1 0;
   0 0 0 0 -1;
   0 0 0 0 0;
   0 0 0 0 0;
   0 0 0 0 0];

Sc=transfer(A,B,3,3);pause;
sensitivity(Sc,m1,z);pause;
sensitivity(Sc,m2,z);pause;

%-------------------------------------------------------%
%                  EXERCISE 4.15 - c)                    %
%-------------------------------------------------------%

A=[0 0 0 0 0 0 0;
   0 0 0 0 0 0 0;
   0 k1 0 0 0 0 0;
   -1 0 1 0 0 0 0;
   0 0 0 k0 0 0 0;
   0 0 1 0 1 0 0;
   1 0 0 0 -1 0 0];

B=[0 0 0 0 0 -1 0;
   0 0 0 0 0 0 -1;
   0 0 0 0 0 0 0;
   0 0 0 0 0 0 0;
   0 0 0 0 0 0 0;
   0 0 0 0 0 0 0;
   0 0 0 0 0 0 0];

Sc=transfer(A,B,[2,3],6);pause;
sensitivity(Sc,k0,z);pause;
sensitivity(Sc,k1,z);pause;

Function sensitivity.m:

function sensitivity(S,a,z)

dS=diff(S,a);
dS=simple(dS);
dSc=collect(dS,z);
disp(' ');
disp(cat(2,'------------ Sensitivity of H(z) related to ',char(a), ...
    '--------------'));
disp(' ');
pretty(dSc);
disp(' ');
disp('---------------------------------------------------');

Function transfer.m (the function header is reconstructed from the calls above):

function Sc=transfer(A,B,input,output)

syms z;
z=1/z;

T=(inv(eye(size(A))-A-B/z));
T=T.';

S=sum(T(input,output));
Ss = simple(S);
Sc = collect(Ss,z);
disp(' ');
disp('---------------- Transfer Function --------------------')
disp(' ')
pretty(Sc);
disp(' ')
disp('---------------------------------------------------')
4.16 (a) Switch at position 1:

• Point A:
A(z) = X(z)(1 + z^{-3})

• Point B:
B(z) = A(z)(1 − z^{-2})

• Output:
Y(z) = B(z)(1 − z^{-2})^3 (1 + z^{-4})^3

• So, we have:
Y(z) = X(z)(1 + z^{-3})(1 − z^{-2})^4 (1 + z^{-4})^3

(b) Switch at position 2:

• Point A:
A(z) = X(z)(1 − z^{-2})

• Point B:
B(z) = A(z)(1 − z^{-1})

• Output:
Y(z) = B(z)(1 − z^{-2})^3 (1 + z^{-4})^3

• So, we have:
Y(z) = X(z)(1 − z^{-1})(1 − z^{-2})^4 (1 + z^{-4})^3

4.17 (a) Switch at 1:

• Transfer function:
H(z) = (1 + z^{-3})(1 − z^{-2})^4 (1 + z^{-4})^3

• Frequency response shown in Figure 4.9.
Figure 4.9: Frequency response for exercise 4.17a.
(b) Switch at 2:

• Transfer function:
H(z) = (1 − z^{-1})(1 − z^{-2})^4 (1 + z^{-4})^3

• Frequency response shown in Figure 4.10.
• MATLAB® program:

clear all;close all;

%------- Switch at 1 -------------%
A=[1,0,0,1];
B=[1,0,-1];
C=[1,0,0,0,1];

B2=conv(B,B);
B4=conv(B2,B2);

C3=conv(C,conv(C,C));

h=conv(A,conv(B4,C3));
Figure 4.10: Frequency response for exercise 4.17b.
N=2048;
H=fft(h,N);
H=H(1:N/2);

W=0:2/N:1-2/N;

warning off;
figure(1);
plot(W*pi,db(H));
ylabel('Magnitude [dB]');
xlabel('Frequency [rad/s]');
axis([0 pi -150 50]);
warning on

print -f1 -deps fig4_10_1.eps

%------- Switch at 2 -------------%
A=[1,-1];
B=[1,0,-1];
C=[1,0,0,0,1];

B2=conv(B,B);
B4=conv(B2,B2);

C3=conv(C,conv(C,C));

h=conv(A,conv(B4,C3));

N=2048;
H=fft(h,N);
H=H(1:N/2);

W=0:2/N:1-2/N;

warning off;
figure(2);
plot(W*pi,db(H));
ylabel('Magnitude [dB]');
xlabel('Frequency [rad/s]');
axis([0 pi -150 50]);
warning on
4.18 • At point A we have:

A(n) = 0.5x(n) − 0.75x(n−1) + 0.5x(n−2)

• So the output is:

y(n) = A(n) + y(n−1)
y(n) = 0.5x(n) − 0.75x(n−1) + 0.5x(n−2) + y(n−1)

• So the impulse response is:

y(0) = 0.5
y(1) = −0.25
y(2) = 0.25
y(3) = 0.25
...
y(n) = 0.25

Or simply: y(n) = 0.5 for n = 0; −0.25 for n = 1; and 0.25 for n ≥ 2.
4.19
Σ_{i=1}^{M} (X_i Y'_i − X'_i Y_i) = 0

Y_i = Σ_{j=1}^{M} T_{ji} X_j,   and   Y'_i = Σ_{j=1}^{M} T'_{ji} X'_j

Then, we have:

Σ_{i=1}^{M} [X_i Σ_{j=1}^{M} T'_{ji} X'_j − X'_i Σ_{j=1}^{M} T_{ji} X_j]
= Σ_{i=1}^{M} Σ_{j=1}^{M} (X_i T'_{ji} X'_j) − Σ_{i=1}^{M} Σ_{j=1}^{M} (X'_i T_{ji} X_j)
= Σ_{i=1}^{M} Σ_{j=1}^{M} (X_j T'_{ij} X'_i) − Σ_{i=1}^{M} Σ_{j=1}^{M} (X'_i T_{ji} X_j)
= Σ_{i=1}^{M} Σ_{j=1}^{M} (T'_{ij} − T_{ji}) (X'_i X_j)
= 0

and then

T_{ji} = T'_{ij}
4.20
H(z) = (z^{-(M+1)} − 1)/(z^{-1} − 1) = (1 − z^{M+1})/(z^M − z^{M+1}) = (z^{M+1} − 1)/(z^{M+1} − z^M) = Σ_{k=0}^{M} z^{-k}

The operation realized is the sum of the M previous inputs with the present input. The filter is actually an FIR filter; this happens because all the filter's poles are cancelled by some of its zeros.

This reasoning can be generalized to the problem posed.
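The pole-zero cancellation argument can be checked by comparing the recursive implementation of H(z) with the equivalent FIR moving sum (an added Python sketch; M = 4 is an arbitrary choice):

```python
import numpy as np

M = 4
rng = np.random.default_rng(2)
x = rng.standard_normal(32)

# recursive form derived from H(z): y(n) = y(n-1) + x(n) - x(n-M-1)
y = np.zeros_like(x)
for n in range(len(x)):
    y[n] = (y[n - 1] if n else 0.0) + x[n] - (x[n - M - 1] if n > M else 0.0)

# direct FIR form: y(n) = sum_{k=0}^{M} x(n-k)
y_fir = np.convolve(x, np.ones(M + 1))[:len(x)]
assert np.allclose(y, y_fir)
```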
4.21 Exercise 4.21
4.22 Exercise 4.22
4.23 Exercise 4.23
4.24 Exercise 4.24
4.25 Exercise 4.25
4.26 Exercise 4.26
4.27 Exercise 4.27
4.28 Exercise 4.28
4.29 Exercise 4.29
4.30 Exercise 4.30
4.31
Chapter 5
FIR FILTERS APPROXIMATIONS
5.1 Exercise 5.1
5.2 Design of three notch filters with complex coefficients. We assume that the frequency ω0 is normalizedto prevent aliasing.
H1(z) = e^{-j2ω0} − z^{-1}
H2(z) = e^{-j3ω0} − z^{-1}
H3(z) = e^{-j4ω0} − z^{-1}

H(z) = H1(z) · H2(z) · H3(z)
H(z) = e^{-j9ω0} − (e^{-j7ω0} + e^{-j6ω0} + e^{-j5ω0}) z^{-1} + (e^{-j4ω0} + e^{-j3ω0} + e^{-j2ω0}) z^{-2} − z^{-3}
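The expansion of the product above can be verified numerically (an added Python sketch; the value of ω0 is arbitrary):

```python
import numpy as np

w0 = 0.3                               # an arbitrary normalized frequency
a, b, c = np.exp(-2j*w0), np.exp(-3j*w0), np.exp(-4j*w0)

# each factor H_i(x) = root_i - x in the variable x = z^{-1}
h = np.convolve(np.convolve([a, -1], [b, -1]), [c, -1])   # ascending powers of x

expected = np.array([a*b*c, -(a*b + a*c + b*c), a + b + c, -1])
assert np.allclose(h, expected)

# H(z) vanishes at z = e^{j2w0}, i.e. at x = e^{-j2w0}
x0 = np.exp(-2j*w0)
assert abs(sum(h[k] * x0**k for k in range(4))) < 1e-12
```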
5.3 Let H(z) be the transfer function of an FIR filter. Then:

(a)
H(z) → H(e^{jω})
H(−z) → H(−e^{jω}) = H(e^{j(ω−π)})

This means that the frequency response of H(−z) is the complementary frequency response of H(z).

(b)
H(z) → H(e^{jω})
H(z^{-1}) → H(e^{-jω}) = H*(e^{jω})

The frequency response is conjugate symmetric. This means that the frequency response of H(z^{-1}) is equal to the conjugate of the frequency response of H(z).

(c)
H(z) → H(e^{jω})
H(z^2) → H(e^{j2ω})

0 ... 2π → 0 ... π

The frequency response repeats itself every π rad/sample, instead of every 2π rad/sample.
5.4 G(z) is the complementary filter of H(z). So,
H(z) + G(z) = z^{-d}
H(z) + [z^{-L} − H(z)] = z^{-d}
z^{-L} = z^{-d}

However, the filter H(z) must have even order; then L is equal to M/2. This way, the complementary filter G(z) is a linear-phase FIR filter.
5.5 H(z) has order M and G(z) has order N, where N must be greater than or equal to M. This way, when we delay H(z), its order is still smaller than or equal to N. Also, if N is even then M must be even, and if N is odd then M must be odd. In both cases, L is equal to (N − M)/2.
5.6 The magnitude response of the designed filter is presented in Figure 5.1, and its coefficients in Table 5.1.
Table 5.1: Filter coefficients - Exercise 5.6.
h(0) to h(20)
h(0) = 1.55295042e-02     h(7) = 2.31095855e-02     h(14) = 5.01581941e-02
h(1) = -2.15694413e-02    h(8) = -2.28954839e-02    h(15) = -3.75375110e-02
h(2) = -7.49615628e-03    h(9) = -1.66952199e-02    h(16) = -5.60551315e-02
h(3) = 2.51275410e-02     h(10) = 3.31029124e-02    h(17) = 8.98486381e-02
h(4) = -1.98382265e-03    h(11) = 5.85178389e-03    h(18) = 5.97283479e-02
h(5) = -2.57734627e-02    h(12) = -4.23632111e-02   h(19) = -3.12793456e-01
h(6) = 1.23190485e-02     h(13) = 1.08751458e-02    h(20) = 4.39024390e-01
Figure 5.1: Magnitude response of the designed filter – Exercise 5.6.
• MATLAB® Program:

M=40;
Wp=1;
Wr=1.5;
Ws=5;

k=M+1;

kp=floor(k*(Wp/Ws));
kr=floor(k*(Wr/Ws));

A=[zeros(1,M/2-kp),ones(1,kp+1)];
flag=0;

if (flag==0)

  % Type I
  n=0:M/2;
  kk=1:M/2;

  indice=kk'*(1+2*n);
  Mcos=cos(pi*indice'/(M+1));
  A=[A(1),A(2:length(A)).*((-1).^kk)];

  Soma=Mcos*A(2:length(A))';
  h = (A(1) + 2*Soma); h=h/(M+1);
  h = [h;fliplr(h(1:length(h)-1)')'];

  [H,W]=freqz(h,1,1024,'whole');
  W=(W/pi)*(Ws/2);
  plot(W,db(H));
  axis([0 Ws/2 -100 10]);
  xlabel('Frequency [rad/s]');
  ylabel('Magnitude Response [dB]');

else

  % Type III
  n=0:M/2;
  kk=1:M/2;

  indice=kk'*(1+2*n);
  Msin=sin(pi*indice'/(M+1));
  A=[0,A(2:length(A)).*((-1).^(kk+1))];

  Soma=Msin*A(2:length(A))';
  h = (2*Soma)./(M+1);
  h = [h;fliplr(h(1:length(h)-1)')'];

  [H,W]=freqz(h,1,1024,'whole');
  W=(W/pi)*(Ws/2);
  plot(W,db(H));
  axis([0 Ws/2 -100 10]);
  xlabel('Frequency [rad/s]');
  ylabel('Magnitude Response [dB]');

end
5.7 The windows are shown in Figure 5.2, and the magnitude responses of the corresponding filters in Figure 5.3.
• MATLAB® Program:

clear all;close all;
Figure 5.2: Windows for the filter designs of exercise 5.7.
h1=hamming(5);
h2=hamming(10);
h3=hamming(15);
h4=hamming(20);

figure(1);
x=0:length(h1)-1;
subplot(2,2,1);plot(x,h1);
xlabel('Hamming M=5');ylabel('');
x=0:length(h2)-1;
subplot(2,2,2);plot(x,h2);
xlabel('Hamming M=10');ylabel('');
x=0:length(h3)-1;
subplot(2,2,3);plot(x,h3);
xlabel('Hamming M=15');ylabel('');
x=0:length(h4)-1;
subplot(2,2,4);plot(x,h4);
xlabel('Hamming M=20');ylabel('');

N=2048;
[H1,w]=freqz(h1,1,N);
H2=freqz(h2,1,w);
H3=freqz(h3,1,w);
H4=freqz(h4,1,w);
w=w/pi;

figure(2);
subplot(2,2,1);plot(w,db(H1));
Figure 5.3: Magnitude responses of the corresponding filters of exercise 5.7.
title('Hamming M=5');
xlabel('Frequency [rad/s]');
ylabel('Magnitude Response [dB]');
subplot(2,2,2);plot(w,db(H2));
title('Hamming M=10');
xlabel('Frequency [rad/s]');
ylabel('Magnitude Response [dB]');
subplot(2,2,3);plot(w,db(H3));
title('Hamming M=15');
xlabel('Frequency [rad/s]');
ylabel('Magnitude Response [dB]');
subplot(2,2,4);plot(w,db(H4));
title('Hamming M=20');
xlabel('Frequency [rad/s]');
ylabel('Magnitude Response [dB]');
5.8 The windows are shown in Figure 5.4 and the magnitude responses of the corresponding filters in Figure 5.5.
• MatLab® Program:
clear all;close all;
M=20;
h1=boxcar(M);h2=triang(M);h3=bartlett(M);
Figure 5.4: Windows for filter projects of exercise 5.8.
h4=hamming(M);h5=hanning(M);h6=blackman(M);
figure(1);x=0:length(h1)-1;subplot(3,2,1);plot(x,h1);xlabel(’Rectangular M=20’);ylabel(’’);x=0:length(h2)-1;subplot(3,2,2);plot(x,h2);xlabel(’Triangular M=20’);ylabel(’’);x=0:length(h3)-1;subplot(3,2,3);plot(x,h3);xlabel(’Bartlett M=20’);ylabel(’’);x=0:length(h4)-1;subplot(3,2,4);plot(x,h4);xlabel(’Hamming M=20’);ylabel(’’);x=0:length(h5)-1;subplot(3,2,5);plot(x,h5);xlabel(’Hanning M=20’);ylabel(’’);x=0:length(h6)-1;subplot(3,2,6);plot(x,h6);xlabel(’Blackman M=20’);ylabel(’’);
N=2048;[H1,w]=freqz(h1,1,N);H2=freqz(h2,1,w);H3=freqz(h3,1,w);H4=freqz(h4,1,w);H5=freqz(h5,1,w);
Figure 5.5: Magnitude responses of the corresponding filters of exercise 5.8.
H6=freqz(h6,1,w);w=w/pi;
figure(2);subplot(3,2,1);plot(w,db(H1));xlabel(’Frequency [rad/s]’);ylabel(’Magnitude Response [dB]’);subplot(3,2,2);plot(w,db(H2));xlabel(’Frequency [rad/s]’);ylabel(’Magnitude Response [dB]’);subplot(3,2,3);plot(w,db(H3));xlabel(’Frequency [rad/s]’);ylabel(’Magnitude Response [dB]’);subplot(3,2,4);plot(w,db(H4));xlabel(’Frequency [rad/s]’);ylabel(’Magnitude Response [dB]’);subplot(3,2,5);plot(w,db(H5));xlabel(’Frequency [rad/s]’);ylabel(’Magnitude Response [dB]’);subplot(3,2,6);plot(w,db(H6));xlabel(’Frequency [rad/s]’);ylabel(’Magnitude Response [dB]’);
5.9 • Ideal impulse response:

h(n) = (1/(2π)) ∫_{−π}^{π} H(e^{jω}) e^{jωn} dω
     = (1/(2π)) [ ∫_{−π/3}^{−π/6} e^{jωn} dω + 2 ∫_{−π/6}^{π/6} e^{jωn} dω + ∫_{π/6}^{π/3} e^{jωn} dω ]

For n ≠ 0,

h(n) = (1/(2π)) { [e^{jωn}/(jn)]_{−π/3}^{−π/6} + 2 [e^{jωn}/(jn)]_{−π/6}^{π/6} + [e^{jωn}/(jn)]_{π/6}^{π/3} }

     = (1/(2π)) [ ( cos(πn/6) − j sin(πn/6) − cos(πn/3) + j sin(πn/3) + 2 cos(πn/6) + 2j sin(πn/6)
                    − 2 cos(πn/6) + 2j sin(πn/6) + cos(πn/3) + j sin(πn/3) − cos(πn/6) − j sin(πn/6) ) / (jn) ]

     = (1/(2π)) [ ( 2j sin(πn/6) + 2j sin(πn/3) ) / (jn) ]

h(n) = (1/(πn)) [ sin(πn/6) + sin(πn/3) ]

For n = 0, the passband areas give h(0) = (1/(2π)) (π/6 + 2π/3 + π/6) = 1/2.
• The practical impulse responses of orders 10, 20, and 30 are shown in Figures 5.6, 5.7, and 5.8, respectively.
Figure 5.6: Practical impulse response of order 10 for exercise 5.9.
• MatLab® Program:

close all;clear all;
M=10;n=[-M/2:-1,1:M/2];h0=1/2;h=(sin(n*pi/6)+sin(n*pi/3))./(pi*n);h=[h(1:M/2),h0,h(M/2+1:M)];
Figure 5.7: Practical impulse response of order 20 for exercise 5.9.
Figure 5.8: Practical impulse response of order 30 for exercise 5.9.
w=hamming(M+1)';hh=h.*w;
figure(1);stem(hh);title('M equal to 10');xlabel('n');
if 0
hcc=get(get(gcf,'Children'),'Children');set(hcc(2),'Marker','x','Color','r');
hold on;stem(h);hold off;
end
set(gca,'Xtick',[1:M+1],'XLim',[1 M+1],'XtickLabel',[0:M]);
M=20;n=[-M/2:-1,1:M/2];h0=1/2;
h=(sin(n*pi/6)+sin(n*pi/3))./(pi*n);h=[h(1:M/2),h0,h(M/2+1:M)];
w=hamming(M+1)';hh=h.*w;
figure(2);stem(hh);title('M equal to 20');xlabel('n');
if 0
hcc=get(get(gcf,'Children'),'Children');set(hcc(2),'Marker','x','Color','r');
hold on;stem(h);hold off;
end
set(gca,'Xtick',[1:2:M+1],'XLim',[1 M+1],'XtickLabel',[0:2:M]);
M=30;n=[-M/2:-1,1:M/2];h0=1/2;
h=(sin(n*pi/6)+sin(n*pi/3))./(pi*n);h=[h(1:M/2),h0,h(M/2+1:M)];
w=hamming(M+1)';hh=h.*w;
figure(3);stem(hh);title('M equal to 30');xlabel('n');
if 0
hcc=get(get(gcf,'Children'),'Children');set(hcc(2),'Marker','x','Color','r');
hold on;stem(h);hold off;
end
set(gca,'Xtick',[1:2:M+1],'XLim',[1 M+1],'XtickLabel',[0:2:M]);
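As a numerical sanity check (not part of the original solution), the closed-form h(n) above can be compared against a numerical inverse DTFT of the ideal multiband response (gain 2 for |ω| < π/6, gain 1 for π/6 < |ω| < π/3, zero elsewhere); the sketch below uses Python/NumPy instead of MATLAB:

```python
import numpy as np

# Closed-form ideal impulse response from Exercise 5.9 (h(0) = 1/2).
def h_ideal(n):
    if n == 0:
        return 0.5
    return (np.sin(np.pi * n / 6) + np.sin(np.pi * n / 3)) / (np.pi * n)

# Numerical inverse DTFT of the piecewise-constant target response.
def h_numeric(n, points=200001):
    w = np.linspace(-np.pi, np.pi, points)
    H = np.where(np.abs(w) < np.pi / 6, 2.0,
                 np.where(np.abs(w) < np.pi / 3, 1.0, 0.0))
    dw = w[1] - w[0]
    return ((H * np.exp(1j * w * n)).sum() * dw).real / (2 * np.pi)

for n in range(6):
    assert abs(h_ideal(n) - h_numeric(n)) < 1e-3
```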
5.10 (a) Since the ideal transfer function consists of the periodic convolution between two ideal lowpass filters with cutoff frequency ωc, the ideal impulse response is given by

h(n) = { ωc²/π²,             for n = 0
       { sin²(ωc n)/(π²n²),  for n ≠ 0          (5.1)
(b) For a filter of order 4, M = 4. In this case we have

w_tB(n) = [ 1/3   2/3   1   2/3   1/3 ]          (5.2)

and

h(n) = [ 1/(4π²)   1/(2π²)   1/16   1/(2π²)   1/(4π²) ]          (5.3)

for n = 0, ±1, ±2. The filter impulse response is then given by the product of the two sequences, that is

h′(n) = [ 1/(12π²)   1/(3π²)   1/16   1/(3π²)   1/(12π²) ]          (5.4)
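The product in Eq. (5.4) can be checked numerically; ωc = π/4 is an assumption here, inferred from h(0) = ωc²/π² = 1/16 (Python sketch, not from the manual):

```python
import numpy as np

wc = np.pi / 4          # assumed cutoff, consistent with h(0) = 1/16
n = np.array([-2, -1, 0, 1, 2])
h = np.array([wc**2 / np.pi**2 if k == 0 else
              np.sin(wc * k)**2 / (np.pi * k)**2 for k in n])
w_tri = np.array([1/3, 2/3, 1, 2/3, 1/3])        # triangular window, Eq. (5.2)
hp = h * w_tri                                   # windowed response, Eq. (5.4)

expected = np.array([1/(12*np.pi**2), 1/(3*np.pi**2), 1/16,
                     1/(3*np.pi**2), 1/(12*np.pi**2)])
assert np.allclose(hp, expected)
```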
5.11 (a) The impulse response can be a composition of two distinct impulse responses
h(n) = h1(n) + h2(n) (5.5)
Figure 5.9: Combination of Responses.
where the frequency responses of h1(n) and h2(n) are shown in Fig. 5.9.
As can be observed, h2(n) is a highpass filter. Therefore,

h2(n) = { 1 − ωc/π,              n = 0
        { −(1/(πn)) sin(ωc n),   n ≠ 0          (5.6)

where ωc = π/2. Then,

h2(n) = { 1 − 1/2,               n = 0
        { −(1/(πn)) sin(πn/2),   n ≠ 0          (5.7)
The sequence h1(n) can be obtained, for n ≠ 0, as

h1(n) = (1/(2π)) ∫_{−π/2}^{π/2} (2/π)|ω| e^{jωn} dω

      = (1/π²) [ ∫_0^{π/2} ω e^{jωn} dω + ∫_0^{−π/2} ω e^{jωn} dω ]

      = (1/(jπ²)) [ e^{jωn} (ω/n − 1/(jn²)) |_0^{π/2} + e^{jωn} (ω/n − 1/(jn²)) |_0^{−π/2} ]

      = (1/(jπ²)) [ e^{jπn/2} (π/(2n) − 1/(jn²)) + 1/(jn²) + e^{−jπn/2} (−π/(2n) − 1/(jn²)) + 1/(jn²) ]

      = (1/(jπ²)) [ (π/n) j sin(πn/2) − (2/(jn²)) cos(πn/2) + 2/(jn²) ]

      = (1/π²) [ (π/n) sin(πn/2) + (2/n²) cos(πn/2) − 2/n² ]

      = (1/(πn)) sin(πn/2) + (2/(π²n²)) cos(πn/2) − 2/(π²n²)          (5.8)

For n = 0,

h1(0) = 2 (1/(2π)) ∫_0^{π/2} (2/π) ω dω = (2/π²) (ω²/2)|_0^{π/2} = (2/π²)(π²/8) = 1/4          (5.9)

Therefore,

h(n) = { 1/2 + 1/4,   n = 0
       { −(1/(πn)) sin(πn/2) + (1/(πn)) sin(πn/2) + (2/(π²n²)) cos(πn/2) − 2/(π²n²),   n ≠ 0          (5.10)

resulting in

h(n) = { 3/4,   n = 0
       { (2/(π²n²)) [ cos(πn/2) − 1 ],   n ≠ 0          (5.11)
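The closed form in Eq. (5.11) can be verified against a numerical inverse DTFT of the composite response (the triangle 2|ω|/π for |ω| ≤ π/2 joined to the unit highpass); this is a Python sketch, not part of the original solution:

```python
import numpy as np

def h_closed(n):
    if n == 0:
        return 3 / 4
    return 2 / (np.pi**2 * n**2) * (np.cos(np.pi * n / 2) - 1)

# Numerical inverse DTFT of H(w) = 2|w|/pi for |w| <= pi/2, and 1 elsewhere.
def h_numeric(n, points=200001):
    w = np.linspace(-np.pi, np.pi, points)
    H = np.where(np.abs(w) <= np.pi / 2, 2 * np.abs(w) / np.pi, 1.0)
    dw = w[1] - w[0]
    return ((H * np.exp(1j * w * n)).sum() * dw).real / (2 * np.pi)

for n in range(6):
    assert abs(h_closed(n) - h_numeric(n)) < 1e-3
```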
(b) The fourth-order Hanning window is described by

wH(n) = { 1/2 + (1/2) cos(2πn/4),   |n| ≤ 2
        { 0,                        |n| > 2          (5.12)

That means wH(0) = 1, wH(1) = wH(−1) = 1/2, and wH(2) = wH(−2) = 0. As a result,

H(z) = h(−1)wH(−1)z^{−1} + h(0)wH(0)z^{−2} + h(1)wH(1)z^{−3}
     = −(1/π²)z^{−1} + (3/4)z^{−2} − (1/π²)z^{−3}          (5.13)
5.12 Design of a bandpass filter using the Hamming, Hanning, and Blackman windows, with the following specifications:
M = 10
Ωc1 = 1.125 rad/s
Ωc2 = 2.5 rad/s
Ωs = 10 rad/s
Table 5.2: Filter Coefficients (Hamming) – Exercise 5.12.
h(0) to h(5)
h(0) = 7.04194890e-03   h(2) = -7.82062915e-02   h(4) = 1.01781241e-01
h(1) = -4.12761795e-03   h(3) = -1.07230555e-01   h(5) = 2.75000000e-01
Table 5.3: Filter Coefficients (Hanning) – Exercise 5.12.
h(0) to h(5)
h(0) = 5.89651412e-03   h(2) = -9.82856133e-02   h(4) = 1.04109431e-01
h(1) = -6.14769777e-03   h(3) = -1.17896611e-01   h(5) = 2.75000000e-01
Table 5.4: Filter Coefficients (Blackman) – Exercise 5.12.
h(0) to h(5)
h(0) = 0.00000000e+00   h(2) = -3.94656333e-02   h(4) = 9.47605933e-02
h(1) = -9.88866097e-04   h(3) = -8.01362346e-02   h(5) = 2.75000000e-01
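A quick cross-check of Tables 5.2-5.4 (Python sketch, not in the original): with the normalized edges ωc1 = 2π·1.125/10 = 0.225π and ωc2 = 2π·2.5/10 = 0.5π, the ideal bandpass coefficient at n = 0 is (ωc2 − ωc1)/π = 0.275, and since the center sample of the odd-length Hamming, Hanning, and Blackman windows equals 1, the center tap h(5) = 0.275 appears in all three tables.

```python
import numpy as np

ws = 10.0
wc1 = 2 * np.pi * 1.125 / ws      # 0.225*pi rad/sample
wc2 = 2 * np.pi * 2.5 / ws        # 0.5*pi rad/sample
h0 = (wc2 - wc1) / np.pi          # ideal bandpass coefficient at n = 0

# center sample (k = 5 of 0..10) of an 11-point Hamming window is exactly 1
w_center = 0.54 - 0.46 * np.cos(2 * np.pi * 5 / 10)

assert abs(h0 - 0.275) < 1e-12
assert abs(w_center - 1.0) < 1e-12
```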
Magnitude responses of the designed filter, using the Hamming, Hanning, and Blackman windows, are shown in Figure 5.10. As can be seen, the Hamming and Hanning windows yield a smaller transition band than the Blackman window.
• MatLab® Program:
close all;clear all;
M=10;wc1=1.125; wc2=2.5; ws=10;wc1=2*pi*wc1/ws; wc2=2*pi*wc2/ws;
neg=-M/2:-1;pos=1:M/2;n=[neg,pos];h0=(wc2-wc1)/pi;h=(sin(n*wc2)-sin(n*wc1)) ./ (pi*n);h=[h(1:M/2),h0,h(M/2+1:M)];
w1=hamming(M+1)’;h1=h.*w1;[H1,W]=freqz(h1,1,2048);figure(1);plot(W*ws/(2*pi),db(H1));xlabel(’Frequency [rad/s]’);
Figure 5.10: Filter characteristics with Hamming, Hanning, and Blackman window for exercise 5.12.
ylabel(’Magnitude [dB]’);
w2=hanning(M+1)’;h2=h.*w2;H2=freqz(h2,1,W);figure(2);plot(W*ws/(2*pi),db(H2));xlabel(’Frequency [rad/s]’);ylabel(’Magnitude [dB]’);
w3=blackman(M+1)’;h3=h.*w3;H3=freqz(h3,1,W);figure(3);plot(W*ws/(2*pi),db(H3));xlabel(’Frequency [rad/s]’);ylabel(’Magnitude [dB]’);
figure(4);plot(W*ws/2/pi,db(H1),’k-’,W*ws/2/pi,db(H2),’k--’,...W*ws/2/pi,db(H3),’k:’);
legend(’Hamming’,’Hanning’,’Blackman’);xlabel(’Frequency [rad/s]’);ylabel(’Magnitude [dB]’);
5.13 The Kaiser window function with M = 20 for different values of β and the corresponding filters' magnitude responses can be seen in Figures 5.11 and 5.12, respectively.

As the value of β increases, the time resolution also increases, but the frequency resolution decreases.
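This trade-off can be illustrated numerically (Python/SciPy sketch, not from the manual): increasing β lowers the window's peak sidelobe (better stopband behaviour) while widening its mainlobe (worse frequency resolution).

```python
import numpy as np
from scipy.signal.windows import kaiser

def peak_sidelobe_db(beta, M=20, nfft=8192):
    """Peak sidelobe level (dB, relative to DC) of a length-(M+1) Kaiser window."""
    W = np.abs(np.fft.rfft(kaiser(M + 1, beta), nfft))
    W /= W[0]
    k = 1
    while W[k + 1] < W[k]:        # walk down the mainlobe to its first null
        k += 1
    return 20 * np.log10(W[k:].max())

# larger beta -> lower sidelobes
assert peak_sidelobe_db(7) < peak_sidelobe_db(1)
assert peak_sidelobe_db(15) < peak_sidelobe_db(7)
```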
• MatLab® Program:
close all;clear all;
M=20;B=[1,7,15,30];
for k=1:length(B)
Figure 5.11: Kaiser windows for different β for exercise 5.13.
Figure 5.12: Frequency responses for different β Kaiser windows of exercise 5.13.
w(k,:)=kaiser(M+1,B(k))’;H=fft(w(k,:),1024);W(k,:)=H(1:end/2);plot(-M/2:M/2,w(k,:));xlabel(num2str(B(k)));
end
figure(1);T=-M/2:M/2;
plot(T,w(1,:),’k-’,T,w(2,:),’k-.’,T,w(3,:),’k--’,...T,w(4,:),’k:’);
xlabel('Sample');ylabel('Amplitude');legend('1','7','15','30');figure(2);T=0:pi/512:pi-pi/512;w=db(W);plot(T,w(1,:),'k-',T,w(2,:),'k-.',T,w(3,:),'k--',...
T,w(4,:),’k:’);axis([0 pi -300 50]);xlabel(’Frequency [rad/s]’);ylabel(’Magnitude [dB]’);legend(’1’,’7’,’15’,’30’);
5.14 (a) Frequency response shown in Figure 5.13.
Figure 5.13: Frequency response of exercise 5.14a.
(b) Frequency response shown in Figure 5.14.

(c) Frequency response shown in Figure 5.15.
• MatLab® Program:
clear all;close all;W=linspace(0,pi,2048);
% Filter Specification
Ap=1;Ar=40;Op=1000;Or=1200;Os=5000;
[hlp,h,param1]=kaiserfilt(Ap,Ar,Op,Or,Os);param1(1:4)
Figure 5.14: Frequency response of exercise 5.14b.
Figure 5.15: Frequency response of exercise 5.14c.
Hlp=freqz(hlp,1,W);figure(1);plot(W*Os/(2*pi),20*log10(abs(Hlp)));m=min(20*log10(abs(Hlp)));n=max(20*log10(abs(Hlp)));axis([0 Os/2 m-10 n+10]);xlabel(’Frequency [rad/s]’);ylabel(’Magnitude [dB]’);
% Filter Specification
Ap=1;Ar=40;Op=1200;Or=1000;Os=5000;
[hhp,h,param2]=kaiserfilt(Ap,Ar,Op,Or,Os);
param2(1:4)
Hhp=freqz(hhp,1,W);figure(2);plot(W*Os/(2*pi),20*log10(abs(Hhp)));
m=min(20*log10(abs(Hhp)));n=max(20*log10(abs(Hhp)));
axis([0 Os/2 m-10 n+10]);xlabel('Frequency [rad/s]');ylabel('Magnitude [dB]');

% Filter Specification
Ap=1;Ar=40;Op1=1000;Or1=800;Op2=1200;Or2=1400;Os=10000;
[hbp,h,param3]=kaiserfilt(Ap,Ar,Op1,Or1,Os,Op2,Or2);param3(1:4)
Hbp=freqz(hbp,1,W);figure(3);plot(W*Os/(2*pi),20*log10(abs(Hbp)));
m=min(20*log10(abs(Hbp)));n=max(20*log10(abs(Hbp)));
axis([0 Os/2 m-10 n+10]);xlabel('Frequency [rad/s]');ylabel('Magnitude [dB]');
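The kaiserfilt routine above is supplied with the textbook; a SciPy sketch of what it does for the lowpass case of item (a) — estimate the Kaiser β and order, then window-design — is (all numbers are the exercise specs, the rest is an assumption about the routine's internals):

```python
import numpy as np
from scipy.signal import kaiserord, firwin, freqz

# Specs of item (a): Ap = 1 dB, Ar = 40 dB, passband 1000 Hz, stopband 1200 Hz.
fs = 5000
fp, fr, Ar = 1000, 1200, 40
numtaps, beta = kaiserord(Ar, (fr - fp) / (fs / 2))   # order and beta estimates
h = firwin(numtaps, (fp + fr) / 2, window=('kaiser', beta), fs=fs)

w, H = freqz(h, worN=8192, fs=fs)
stop = w >= fr
assert 20 * np.log10(np.abs(H[stop]).max()) < -(Ar - 3)   # ~40 dB attenuation
```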
5.15 Procedure to design a differentiator using the Kaiser window. First we assume that the filter has N coefficients, and that N is odd.

The following relationships will be used:

H_ideal(z) = Σ_{n=−∞}^{∞} h(n) z^{−n}

r = (N − 1)/2

H_pract(z) = Σ_{n=−r}^{r} h(n) z^{−n}

H_W(z) = Σ_{n=−r}^{r} [w(n) · h(n)] z^{−n}

Kaiser window:

w(n) = { I0(β)/I0(α),   for |n| ≤ r
       { 0,             otherwise

β = α √(1 − (n/r)²)

Error function:

E(ω, N, α) = ω − |H_W(e^{jω})|
           = ω − | Σ_{n=−r}^{r} [w(n) · h(n)] e^{−jωn} |
           = ω − | Σ_{n=−r}^{r} [ (I0(β)/I0(α)) h(n) ] e^{−jωn} |
Two parameters are needed for the filter design: δ (the maximum deviation of |H_W(e^{jω})| from |H_ideal(e^{jω})|) and ωp (the bandwidth of the Kaiser differentiator).
(i) Compute the ideal response h(n) for the differentiator using Table 5.1.
(ii) Initialise N (e.g., N = 3).

(iii) Minimise |E(ω, N, α)| with respect to α, using a minimax approach, with 0 ≤ ω < ωp.

(iv) If max_ω |E(ω, N, α)| ≤ δ, go to (vi).
(v) Set N = N + 2. Go to step (iii).
(vi) Recalculate the Kaiser window with the N and α found, and multiply it by the ideal filter impulse response h(n) to find the desired filter impulse response.
Some considerations:
1- Since the maximum of the function |E(ω, N, α)| occurs at frequencies close to ωp, it is only needed to evaluate |E(ω, N, α)| over a small range of frequencies close to ωp, say from m·ωp to ωp, where a good estimate for m is m = (N − 3)/(N − 1).
2- A good resolution for ω is necessary if a good estimate of |E(ω,N, α)| is to be obtained.
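The loop in steps (ii)-(v) can be sketched as follows (Python, illustrative only: ωp and δ are example values, and the α search is a crude grid rather than a true minimax routine):

```python
import numpy as np
from scipy.signal.windows import kaiser

# Illustrative sketch of steps (i)-(vi) for a full-band differentiator,
# whose ideal impulse response is h(0) = 0, h(n) = (-1)^n / n otherwise.
def kaiser_differentiator(wp=0.8 * np.pi, delta=0.05):
    N = 3
    while N <= 101:                                    # safety cap
        r = (N - 1) // 2
        n = np.arange(-r, r + 1)
        h = np.zeros(N)
        h[n != 0] = (-1.0) ** n[n != 0] / n[n != 0]    # ideal differentiator
        # consideration 1: evaluate E only near wp
        w = np.linspace((N - 3) / (N - 1) * wp, wp, 200)
        best = np.inf
        for alpha in np.linspace(0.5, 12.0, 60):       # step (iii), crude search
            Hw = np.abs(np.exp(-1j * np.outer(w, n)) @ (h * kaiser(N, alpha)))
            best = min(best, np.max(np.abs(w - Hw)))
        if best <= delta:                              # step (iv)
            return N, best
        N += 2                                         # step (v)
    return N, best

N, err = kaiser_differentiator()
assert err <= 0.05 and N % 2 == 1
```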
5.16 Exercise 5.14a with the Dolph-Chebyshev window.
Figure 5.16: Frequency response of exercise 5.16.
The transition band of the filter with the Dolph-Chebyshev window is slightly smaller than the transition band of the filter with the Kaiser window. However, the stopband attenuation of the Kaiser window is higher than that of the Dolph-Chebyshev window.
• MatLab® Program:
clear all;close all;format short;W=linspace(0,pi,2048);
% Filter Specification - Low-Pass
Ap=1;Ar=40;Op=1000;Or=1200;
Os=5000;[hlp,h,param1]=chebyfilt(Ap,Ar,Op,Or,Os);
%param1(1:5)
Hlp=freqz(hlp,1,W);figure(1);plot(W*Os/(2*pi),db(Hlp));
m=min(db(Hlp));n=max(db(Hlp));axis([0 Os/2 floor(m-10) ceil(n+5)]);
xlabel('Frequency [rad/s]');ylabel('Magnitude [dB]');
% Filter Specification - High-Pass
Ap=1;Ar=40;Op=1200;Or=1000;Os=5000;
[hhp,h,param2]=chebyfilt(Ap,Ar,Op,Or,Os);
%param2(1:5)
Hhp=freqz(hhp,1,W);figure(2);plot(W*Os/(2*pi),db(Hhp));
m=min(db(Hhp));n=max(db(Hhp));axis([0 Os/2 floor(m-10) ceil(n+5)]);
xlabel('Frequency [rad/s]');ylabel('Magnitude [dB]');
% Filter Specification - Band-Pass
Ap=1;Ar=40;Op1=1000;Or1=800;Op2=1200;Or2=1400;Os=10000;
[hbp,h,param3]=chebyfilt(Ap,Ar,Op1,Or1,Os,Op2,Or2);
%param3(1:5)
Hbp=freqz(hbp,1,W);figure(3);plot(W*Os/(2*pi),db(Hbp));
m=min(db(Hbp));n=max(db(Hbp));axis([0 Os/2 floor(m-10) ceil(n+5)]);
xlabel('Frequency [rad/s]');ylabel('Magnitude [dB]');
5.17 Maximally flat FIR. Tr = 0.15π rad/s and ωc = 0.3π rad/s.
For these values, we have:

M1 = (π/Tr)²

ρ = (1 + cos ωc)/2

K = Kp* = 54

M = 2 · (Mp* − 1) = 134

L = Mp* − Kp* = 14
The frequency response coefficients are shown in Table 5.5.
Table 5.5: Frequency response coefficients of exercise 5.17.

d(0) to d(13)
d(0) = 1       d(4) = 395010      d(8)  = 2944827765     d(12) = 4027810484880
d(1) = 54      d(5) = 4582116     d(9)  = 20286591270    d(13) = 20448884000160
d(2) = 1485    d(6) = 45057474    d(10) = 127805525001
d(3) = 27720   d(7) = 386206920   d(11) = 743595781824
The frequency response of the maximally flat filter can be seen in Figure 5.17(a).

We can compute the impulse response of the filter by sampling the frequency response H(e^{jω}) at M + 1 equally spaced points at frequencies ω = 2πn/(M + 1), for n = 0, · · · , M, and taking the IDFT, as in the frequency sampling approach.

In order to make the frequency response of the filter have linear phase, we need to apply a circular shift to the impulse response found by the frequency sampling approach.

The impulse response found is shown in Table 5.6.

However, due to precision problems of the FFT algorithm, the frequency response found from the linear-phase filter is slightly different from the original maximally flat frequency response. Figure 5.17(b) shows this difference.
Table 5.6: Impulse response coefficients for the filter of exercise 5.17.
h(0) to h(67)
h(0) = 6.69471752e-17 h(23)= 7.13353602e-10 h(46)= 3.00799335e-04
h(1) = 5.71401901e-17 h(24)=-7.80595794e-10 h(47)=-2.74523088e-04
h(2) = 3.58666753e-17 h(25)=-5.87549357e-09 h(48)=-1.09113390e-03
h(3) = 9.44520793e-18 h(26)=-1.50557602e-08 h(49)=-1.33883600e-03
h(4) =-1.18934971e-17 h(27)=-2.08193373e-08 h(50)=-1.70601232e-04
h(5) =-7.41209043e-18 h(28)= 4.12329530e-11 h(51)= 2.22598210e-03
h(6) = 1.98955114e-17 h(29)= 8.09758709e-08 h(52)= 3.96522520e-03
h(7) = 6.51629599e-17 h(30)= 2.21618189e-07 h(53)= 2.42879515e-03
h(8) = 8.25650071e-17 h(31)= 2.98294009e-07 h(54)=-3.07261599e-03
h(9) =-4.30034737e-17 h(32)= 5.62495921e-09 h(55)=-9.01480959e-03
h(10)=-9.48036820e-16 h(33)=-9.71768539e-07 h(56)=-8.86152042e-03
h(11)=-5.82701727e-15 h(34)=-2.37370012e-06 h(57)= 1.31210793e-03
h(12)=-2.86838284e-14 h(35)=-2.61563852e-06 h(58)= 1.65763810e-02
h(13)=-1.22055308e-13 h(36)= 1.01897470e-06 h(59)= 2.31454898e-02
h(14)=-4.50859462e-13 h(37)= 9.74180189e-06 h(60)= 8.27349736e-03
h(15)=-1.42821044e-12 h(38)= 1.84071501e-05 h(61)=-2.52845228e-02
h(16)=-3.76702991e-12 h(39)= 1.31464695e-05 h(62)=-5.29749761e-02
h(17)=-7.68464474e-12 h(40)=-2.01827909e-05 h(63)=-4.01403549e-02
h(18)=-9.14869997e-12 h(41)=-7.37002225e-05 h(64)= 3.24274887e-02
h(19)= 9.96871562e-12 h(42)=-9.72119424e-05 h(65)= 1.47557370e-01
h(20)= 9.60428066e-11 h(43)=-1.57139019e-05 h(66)= 2.54536463e-01
h(21)= 3.16866483e-10 h(44)= 1.91123296e-04 h(67)= 2.98126353e-01
h(22)= 6.58051549e-10 h(45)= 3.89884876e-04
5.18

P(ω) = Σ_{l=0}^{M/2} p(l) cos(ωl)
0 0.5 1 1.5 2 2.5 30
0.2
0.4
0.6
0.8
1
Normalized Frequency [rad/s]
Mag
nitu
de R
espo
nse
Maximally Flat FIR
0 0.5 1 1.5 2 2.5 3−500
−400
−300
−200
−100
0
Normalized Frequency [rad/s]
Mag
nitu
de R
espo
nse
[dB
]
Maximally Flat FIR
Original Flat FIRLinear Phase Flat FIR
0 0.5 1 1.5 2 2.5 3−200
−100
0
100
200
Normalized Frequency [rad/s]P
hase
[Deg
rees
]
Figure 5.17: Frequency response for filters of exercise 5.17.
(a)

H(e^{jω}) = e^{−jωM/2} { [p(0) + (1/2)p(1)] cos(ω/2) + Σ_{m=2}^{L−1} (1/2)[p(m−1) + p(m)] cos[ω(m − 1/2)] + (p(L)/2) cos(ωM/2) }

= e^{−jωM/2} { p(0) cos(ω/2) + Σ_{m=1}^{L} (p(m)/2) { cos[ω(m − 1/2)] + cos[ω(m + 1/2)] } }

= e^{−jωM/2} { p(0) cos(ω/2) + Σ_{m=1}^{L} (p(m)/2) [ 2 cos(ω/2) cos(ωm) ] }

= e^{−jωM/2} Σ_{m=0}^{L} p(m) cos(ω/2) cos(ωm)

= e^{−jωM/2} cos(ω/2) Σ_{m=0}^{L} p(m) cos(ωm)

H(e^{jω}) = e^{−jωM/2} cos(ω/2) P(ω)
(b)

H(e^{jω}) = e^{−j(ωM/2 − π/2)} Σ_{m=1}^{M/2} c(m) sin(ωm)

= e^{−j(ωM/2 − π/2)} { [p(0) − (1/2)p(2)] sin(ω) + Σ_{m=2}^{L−1} (1/2)[p(m−1) − p(m+1)] sin(ωm) + (p(L−1)/2) sin(ωL) + (p(L)/2) sin[ω(L+1)] }

= e^{−j(ωM/2 − π/2)} { p(0) sin(ω) + (p(1)/2) sin(2ω) + Σ_{m=2}^{L} (p(m)/2) { sin[ω(m+1)] − sin[ω(m−1)] } }

= e^{−j(ωM/2 − π/2)} { p(0) sin(ω) + (p(1)/2)[2 sin(ω) cos(ω)] + Σ_{m=2}^{L} (p(m)/2)[2 sin(ω) cos(ωm)] }

= e^{−j(ωM/2 − π/2)} sin(ω) Σ_{m=0}^{L} p(m) cos(ωm)

H(e^{jω}) = e^{−j(ωM/2 − π/2)} sin(ω) P(ω)
(c)

H(e^{jω}) = e^{−j(ωM/2 − π/2)} Σ_{m=1}^{M/2} d(m) sin[ω(m − 1/2)]

= e^{−j(ωM/2 − π/2)} { [p(0) − (1/2)p(1)] sin(ω/2) + Σ_{m=2}^{L} (1/2)[p(m−1) − p(m)] sin[ω(m − 1/2)] + (p(L)/2) sin(ωM/2) }

= e^{−j(ωM/2 − π/2)} { p(0) sin(ω/2) + Σ_{m=1}^{L} (p(m)/2) { sin[ω(m + 1/2)] − sin[ω(m − 1/2)] } }

= e^{−j(ωM/2 − π/2)} { p(0) sin(ω/2) + Σ_{m=1}^{L} (p(m)/2)[2 sin(ω/2) cos(ωm)] }

= e^{−j(ωM/2 − π/2)} Σ_{m=0}^{L} p(m) sin(ω/2) cos(ωm)

= e^{−j(ωM/2 − π/2)} sin(ω/2) Σ_{m=0}^{L} p(m) cos(ωm)

H(e^{jω}) = e^{−j(ωM/2 − π/2)} sin(ω/2) P(ω)
5.19 Using the command remez from MatLab® , three narrowband filters were designed. The frequency responses of all the filters are shown in Figure 5.18.
• MatLab® Program:
clear all;close all;
Figure 5.18: Frequency responses for the filters of exercise 5.19.
M=98;N=M-1;fs=5*1e3;
% FILTER 1
F=[0 680 730 810 860 fs/2]/(fs/2);F=F’;A=[0 0 1 1 0 0]’;W=[1 5 1]’;
B1=remez(N,F,A,W);
% FILTER 2
F=[0 762 812 892 942 fs/2]/(fs/2);F=F’;A=[0 0 1 1 0 0]’;W=[1 5 1]’;
B2=remez(N,F,A,W);
% FILTER 3
F=[0 851 901 981 1031 fs/2]/(fs/2);F=F’;A=[0 0 1 1 0 0]’;W=[1 5 1]’;
B3=remez(N,F,A,W);
[HB1,wf]=freqz(B1,1,2048);HB2=freqz(B2,1,wf);HB3=freqz(B3,1,wf);
plot(wf*fs/2,db(HB1),’k-’,wf*fs/2,db(HB2),’k--’,wf*fs/2,
db(HB3),’k-.’);xlabel(’Frequency [rad/s]’);ylabel(’Magnitude [dB]’);
5.20 Filter for MFC detection shown in Figure 5.19 and filter coefficients shown in Table 5.7.
Figure 5.19: Frequency response for the filter of exercise 5.20.
Table 5.7: Filter coefficients for the MFC detection filter of exercise 5.20.
h(0) to h(47)
h(0) =-1.03997164e-03 h(16)= 5.27869787e-04 h(32)=-1.79319763e-02
h(1) = 6.04591216e-03 h(17)=-7.13973846e-03 h(33)=-3.49516923e-03
h(2) = 3.96644903e-03 h(18)=-1.35738690e-02 h(34)= 1.28058441e-02
h(3) = 3.20619812e-03 h(19)=-1.65110357e-02 h(35)= 2.60407976e-02
h(4) = 1.57474020e-03 h(20)=-1.45518575e-02 h(36)= 3.19874424e-02
h(5) =-1.00014805e-03 h(21)=-7.73733439e-03 h(37)= 2.84949389e-02
h(6) =-3.88822718e-03 h(22)= 2.29971828e-03 h(38)= 1.62486296e-02
h(7) =-6.13472091e-03 h(23)= 1.26935228e-02 h(39)=-1.33180677e-03
h(8) =-6.79027242e-03 h(24)= 2.01646313e-02 h(40)=-1.90613165e-02
h(9) =-5.27342718e-03 h(25)= 2.20476889e-02 h(41)=-3.15507814e-02
h(10)=-1.65665822e-03 h(26)= 1.72258764e-02 h(42)=-3.48749477e-02
h(11)= 3.23031846e-03 h(27)= 6.62570560e-03 h(43)=-2.78267116e-02
h(12)= 7.95878408e-03 h(28)=-6.90033177e-03 h(44)=-1.23516112e-02
h(13)= 1.08972791e-02 h(29)=-1.93356125e-02 h(45)= 7.00213493e-03
h(14)= 1.07522518e-02 h(30)=-2.67142912e-02 h(46)= 2.44465568e-02
h(15)= 7.07092856e-03 h(31)=-2.63796625e-02 h(47)= 3.47238090e-02
• MatLab® Program:
clear all;close all;
N=95;fs=8*1e3;
% FILTER 1
Filter_5_17; % Data of table 5.5
B=zeros(size(h));B(1)=h(96);B(2:end)=h(1:end-1);
[HB,wf]=freqz(B,1,2048);
plot(wf*fs/2/pi,db(HB));axis([0 fs/2 -90 10]);xlabel(’Frequency [Hz]’);ylabel(’Magnitude [dB]’);
5.21 Exercise 5.21
5.22 Exercise 5.22
5.23 The Hilbert transformer of order 98 using a type IV structure is shown in Figure 5.20 and the filter's coefficients are shown in Table 5.8.
Figure 5.20: Frequency response for the filter of exercise 5.23.
• MatLab® Program:
clear all;close all;
M=98;
% Hilbert transformer FILTER
F=[.02 .98];A=[1 1];B=remez(M+1,F,A,’Hilbert’);
Table 5.8: Filter coefficients of the Hilbert transformer of exercise 5.23.
h(0) to h(49)
h(0) =-1.04801449e-02 h(17)=-4.15938984e-03 h(34)=-2.06592523e-02
h(1) =-8.58592198e-04 h(18)=-7.92967956e-03 h(35)=-1.75924740e-02
h(2) =-2.94146097e-03 h(19)=-4.87735872e-03 h(36)=-2.40143179e-02
h(3) =-1.10507694e-03 h(20)=-8.87153337e-03 h(37)=-2.13234369e-02
h(4) =-3.38647788e-03 h(21)=-5.70254445e-03 h(38)=-2.84276978e-02
h(5) =-1.38898909e-03 h(22)=-9.92834767e-03 h(39)=-2.64045327e-02
h(6) =-3.87257732e-03 h(23)=-6.65264907e-03 h(40)=-3.45672098e-02
h(7) =-1.70653449e-03 h(24)=-1.11104918e-02 h(41)=-3.37806443e-02
h(8) =-4.39861745e-03 h(25)=-7.76037895e-03 h(42)=-4.38137088e-02
h(9) =-2.06950219e-03 h(26)=-1.24601437e-02 h(43)=-4.55589963e-02
h(10)=-4.97631357e-03 h(27)=-9.05802304e-03 h(44)=-5.95565728e-02
h(11)=-2.50772304e-03 h(28)=-1.40138443e-02 h(45)=-6.75890317e-02
h(12)=-5.60911054e-03 h(29)=-1.05970550e-02 h(46)=-9.29331399e-02
h(13)=-2.97709206e-03 h(30)=-1.58324904e-02 h(47)=-1.24455202e-01
h(14)=-6.31080498e-03 h(31)=-1.24506553e-02 h(48)=-2.14495024e-01
h(15)=-3.53103628e-03 h(32)=-1.80042588e-02 h(49)=-6.34037300e-01
h(16)=-7.08065782e-03 h(33)=-1.47281198e-02
[HB,wf]=freqz(B,1,2048);
plot(wf,db(HB));axis([0 pi -30 5]);xlabel(’Frequency [rad/s]’);ylabel(’Magnitude [dB]’);
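A SciPy counterpart of the MATLAB design above is sketched below (not from the manual); the band edges are the same fractions of the Nyquist frequency, and passing fs=2 makes them read directly:

```python
import numpy as np
from scipy.signal import remez, freqz

# 100-tap (order-99, type IV) equiripple Hilbert transformer over [0.02, 0.98].
h = remez(100, [0.02, 0.98], [1], type='hilbert', fs=2)

w, H = freqz(h, worN=2048, fs=2)
band = (w > 0.1) & (w < 0.9)
assert np.max(np.abs(np.abs(H[band]) - 1)) < 0.2   # near-unity magnitude in band
```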
5.24 Exercise 5.24
5.25 Tests of the validity of the estimation of lowpass filters using the minimax estimation are shown in Figures 5.21 and 5.22.
• MatLab® Program:
clear all;close all;
wp=0.5*pi;wr=0.7*pi;dp=0.05;dr=0.01;
[n(1),fo,mo,w]=remezord([wp,wr],[1,0],[dp,dr],2*pi);
B=remez(n(1),fo,mo,w);[HB1,wf]=freqz(B,1,2048);WR=linspace(wr,pi,20);AR=ones(1,20)*db(dr);WP=linspace(0,wp,20);AP=ones(1,20)*db((1+dp)/(1-dp));plot(wf,db(HB1),’k--’,WR,AR,’k’,WP,AP,’k’,WP,-AP,’k’);axis([0 pi -100 10]);xlabel(’Frequency [rad/s]’);ylabel(’Magnitude [dB]’);
FilterDesired
Figure 5.21: Magnitude responses of a filter for exercise 5.25.
Figure 5.22: Magnitude responses of a filter for exercise 5.25.
wp=0.47*pi;wr=0.5*pi;dp=0.05;dr=0.1;
[n(2),fo,mo,w]=remezord([wp,wr],[1,0],[dp,dr],2*pi);
B=remez(n(2),fo,mo,w);[HB2,wf]=freqz(B,1,2048);
figure;WR=linspace(wr,pi,20);AR=ones(1,20)*db(dr);WP=linspace(0,wp,20);AP=ones(1,20)*db((1+dp)/(1-dp));plot(wf,db(HB2),’k--’,WR,AR,’k’,WP,AP,’k’,WP,-AP,’k’);axis([0 pi -70 10]);xlabel(’Frequency [rad/s]’);ylabel(’Magnitude [dB]’);
5.26 Tests of the validity of the estimation of lowpass filters using the Kaiser estimation are shown in Figures 5.23 and 5.24.
Figure 5.23: Magnitude responses of a filter for exercise 5.26.
• MatLab® Program:
clear all;close all;
wp=0.5*pi;wr=0.7*pi;dp=0.05;dr=0.01;
[n(1),fo,mo,w]=remezord([wp,wr],[1,0],[dp,dr],2*pi);M(1)=ceil((-db(sqrt(dp*dr))-13)/(2.3237*(wr-wp)));
B=remez(M(1),fo,mo,w);[HB1,wf]=freqz(B,1,2048);WR=linspace(wr,pi,20);AR=ones(1,20)*db(dr);WP=linspace(0,wp,20);AP=ones(1,20)*db((1+dp)/(1-dp));plot(wf,db(HB1),’k--’,WR,AR,’k’,WP,AP,’k’,WP,-AP,’k’);axis([0 pi -100 10]);xlabel(’Frequency [rad/s]’);ylabel(’Magnitude [dB]’);
wp=0.47*pi;wr=0.5*pi;dp=0.05;
Figure 5.24: Magnitude responses of a filter for exercise 5.26.
dr=0.1;
[n(2),fo,mo,w]=remezord([wp,wr],[1,0],[dp,dr],2*pi);M(2)=ceil((-db(sqrt(dp*dr))-13)/(2.3237*(wr-wp)));
B=remez(M(2),fo,mo,w);[HB2,wf]=freqz(B,1,2048);figure;WR=linspace(wr,pi,20);AR=ones(1,20)*db(dr);WP=linspace(0,wp,20);AP=ones(1,20)*db((1+dp)/(1-dp));plot(wf,db(HB2),’k--’,WR,AR,’k’,WP,AP,’k’,WP,-AP,’k’);axis([0 pi -70 10]);xlabel(’Frequency [rad/s]’);ylabel(’Magnitude [dB]’);
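The order estimate used in the programs above is Kaiser's formula, M = ceil[(−20 log10 √(δp δr) − 13)/(2.3237(ωr − ωp))]; evaluated in Python for the first specification it gives the order used in the first design:

```python
import numpy as np

# First specification: wp = 0.5*pi, wr = 0.7*pi, dp = 0.05, dr = 0.01.
wp, wr, dp, dr = 0.5 * np.pi, 0.7 * np.pi, 0.05, 0.01
M = int(np.ceil((-20 * np.log10(np.sqrt(dp * dr)) - 13) / (2.3237 * (wr - wp))))
assert M == 14
```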
5.27 Exercise 5.27
5.28 The frequency responses for both the WLS and Chebyshev methods are shown in Figure 5.25. Table 5.9 shows the minimum stopband attenuation and the total stopband energy. As can be seen, the Chebyshev method has greater stopband attenuation, but more energy in the stopband, than the WLS method.
Table 5.9: Minimum stopband attenuation and total stopband energy for the filters of exercise 5.28.
                           Chebyshev   WLS
Minimum stopband att.      40.3 dB     33.7 dB
Total stopband energy      0.0161      0.0029
• MatLab® Program:
clear all;
Figure 5.25: Frequency response for the filter of exercise 5.28.
close all;
M=50;Fs=1000;F=[0 100 150 200 250 Fs/2]/(Fs/2);A=[0 0 1 1 0 0];W=[1 10 1];N=2^12;
b=remez(M,F,A,W);Hb=fft(b,2*N);Hb=Hb(1:end/2);d=Fs/2/N;w=0:d:Fs/2-d;
B=firls(M,F,A,W);HB=fft(B,2*N);HB=HB(1:end/2);
plot(w,db(Hb),’k--’,w,db(HB),’k-’);xlabel(’Frequency [rad/s]’);ylabel(’Magnitude [dB]’);legend(’Chebyshev’,’WLS’);
Bnd1=find(w<=100);Bnd2=find(w>=250);
Bnd=[Bnd1,Bnd2];
Atb=-max(db(Hb(Bnd)))AtB=-max(db(HB(Bnd)))
Enb=sum(abs(Hb(Bnd)).^2)*dEnB=sum(abs(HB(Bnd)).^2)*d
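A rough SciPy analogue of this comparison (remez for Chebyshev/equiripple, firls for WLS) reproduces the qualitative conclusion; exact numbers differ from Table 5.9 since the grids and conventions are not identical:

```python
import numpy as np
from scipy.signal import remez, firls, freqz

fs = 1000
bands = [0, 100, 150, 200, 250, fs / 2]
h_cheb = remez(51, bands, [0, 1, 0], weight=[1, 10, 1], fs=fs)   # equiripple
h_wls = firls(51, bands, [0, 0, 1, 1, 0, 0], weight=[1, 10, 1], fs=fs)

w, Hc = freqz(h_cheb, worN=4096, fs=fs)
_, Hw = freqz(h_wls, worN=4096, fs=fs)
stop = (w <= 100) | (w >= 250)

# equiripple: better worst-case attenuation; WLS: less total stopband energy
assert np.abs(Hc[stop]).max() < np.abs(Hw[stop]).max()
assert np.sum(np.abs(Hw[stop])**2) < np.sum(np.abs(Hc[stop])**2)
```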
5.29 Exercise 5.29
5.30 Exercise 5.30
5.31 Exercise 5.31
5.32 Exercise 5.32
5.33 Exercise 5.33
Chapter 6
IIR FILTERS APPROXIMATIONS
6.1 Exercise 6.1
6.2 Exercise 6.2
6.3 Analog filter designed using the elliptic method; magnitude response in Figure 6.1 and filter coefficients in Table 6.1.
Figure 6.1: Magnitude response for the filter of exercise 6.3.
• MatLab® Program:
clear all;close all;
Ap=0.5;Ar=60;Opi=1000*2*pi;Ori=1209*2*pi;Os=8000*2*pi;
% Transformation to the normalized lowpass prototype
Table 6.1: Filter coefficients, poles and zeros for exercise 6.3.

Numerator Coefficients          Denominator Coefficients
H0 = 9.995692 × 10^-4
b0 = 2.077076179 × 10^32        a0 = 2.199202909 × 10^29
b1 = 3.548980970 × 10^13        a1 = 1.462813499 × 10^26
b2 = 9.904595768 × 10^24        a2 = 6.869101555 × 10^22
b3 = 1.080828262 × 10^6         a3 = 1.766686051 × 10^19
b4 = 1.566523029 × 10^17        a4 = 4.750710710 × 10^15
b5 = 0.828841226 × 10^-2        a5 = 6.361764010 × 10^11
b6 = 8.809778420 × 10^8         a6 = 1.194248753 × 10^8
b7 = 6.058451752 × 10^-25       a7 = 7.138758543 × 10^3
b8 = 1.0                        a8 = 1.0
Filter Zeros                    Filter Poles
z0 = +j25844.8281055            p0 = -114.6977913 + j6302.3002278
z1 = -j25844.8281055            p1 = -114.6977913 - j6302.3002278
z2 = +j10123.4111038            p2 = -453.3793261 + j5873.5082090
z3 = -j10123.4111038            p3 = -453.3793261 - j5873.5082090
z4 = +j7732.7388768             p4 = -1104.0497967 + j4612.1697533
z5 = -j7732.7388768             p5 = -1104.0497967 - j4612.1697533
z6 = +j7123.4822414             p6 = -1897.2523572 + j1868.6740585
z7 = -j7123.4822414             p7 = -1897.2523572 - j1868.6740585
a=sqrt(Ori/Opi);Op=1/a;Or=(1/a)*(Ori/Opi);
% Estimate the filter order
k=sqrt(Op/Or)/sqrt(Or/Op);
e=sqrt((10^(0.1*Ap)-1)/(10^(0.1*Ar)-1));
qo=0.5*(1-(1-k*k)^(0.25))/(1+(1-k*k)^(0.25));
q=qo+2*qo^5+15*qo^9+150*qo^13;
n=ceil(log10(16/e/e)/log10(1/q)); % Filter order

% Calculate the transfer function of the prototype filter
[z,p,k]=ellipap(n,Ap,Ar);[num,den]=zp2tf(z,p,k);

% Denormalize the lowpass prototype to the specified lowpass filter
[numT,denT] = lp2lp(num,den,Opi);sys=tf(numT,denT);

% Plot the frequency response
W=linspace(0,Os/2,1024);[MAG,PHASE]=bode(sys,W);
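The order estimate in the program above can be redone in Python for the specs of this exercise (Ap = 0.5 dB, Ar = 60 dB, passband edge 1000 Hz, stopband edge 1209 Hz); it reproduces the eighth-order filter of Table 6.1:

```python
import numpy as np

Ap, Ar = 0.5, 60.0
Opi, Ori = 1000.0, 1209.0          # the 2*pi factors cancel in the ratios
a = np.sqrt(Ori / Opi)
Op, Or = 1 / a, a                  # normalized lowpass prototype edges
k = np.sqrt(Op / Or) / np.sqrt(Or / Op)
e = np.sqrt((10**(0.1 * Ap) - 1) / (10**(0.1 * Ar) - 1))
qo = 0.5 * (1 - (1 - k * k)**0.25) / (1 + (1 - k * k)**0.25)
q = qo + 2 * qo**5 + 15 * qo**9 + 150 * qo**13
n = int(np.ceil(np.log10(16 / e**2) / np.log10(1 / q)))
assert n == 8    # matches the eighth-order coefficients in Table 6.1
```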
6.4 Exercise 6.4
6.5 Analog filter response shown in Figures 6.2 and 6.3. Filter coefficients in Table 6.2.
• MatLab® Program:
clear all;
Figure 6.2: Magnitude response of the analog filter of exercise 6.5.
Figure 6.3: Phase response of the analog filter of exercise 6.5.
close all;
Ap=0.5;Ar=60;
Op1=40*2*pi;Op2=80*2*pi;Or1=50*2*pi;Or2=70*2*pi;Os=240*2*pi;
O1=sqrt(Op1*Op2);
Table 6.2: Filter coefficients, poles and zeros for exercise 6.5.
Numerator Coefficients          Denominator Coefficients
H0 = 9.440609 × 10^-1
b0 = 2.547058150 × 10^20        a0 = 2.547058150 × 10^20
b1 = 4.327714674 × 10^2         a1 = 1.298961363 × 10^18
b2 = 8.207323536 × 10^15        a2 = 1.233401900 × 10^16
b3 = 1.489408748 × 10^-2        a3 = 3.670079838 × 10^13
b4 = 9.802505098 × 10^10        a4 = 1.731243407 × 10^11
b5 = 1.531252642 × 10^-7        a5 = 2.905131510 × 10^8
b6 = 5.142592638 × 10^5         a6 = 7.728321545 × 10^5
b7 = 4.982629736 × 10^-13       a7 = 6.442687823 × 10^2
b8 = 1.0                        a8 = 1.0
Filter Zeros                    Filter Poles
z0 = +j401.5773935              p0 = -198.8491601 + j506.1217005
z1 = -j401.5773935              p1 = -198.8491601 - j506.1217005
z2 = +j374.4057280              p2 = -25.3200901 + j495.1724674
z3 = -j374.4057280              p3 = -25.3200901 - j495.1724674
z4 = +j337.4172105              p4 = -13.0115168 + j254.4597930
z5 = -j337.4172105              p5 = -13.0115168 - j254.4597930
z6 = +j314.5867730              p6 = -84.9536241 + j216.2285858
z7 = -j314.5867730              p7 = -84.9536241 - j216.2285858
O2=sqrt(Or1*Or2);
% Check if the filter is symmetrical
if (O1~=O2)
Or1b=O1^2/Or2;
if Or1b>Or1
Or1=Or1b;
else
Or2b=O1^2/Or1;
Or2=Or2b;
end
end
% Transformation to the normalized lowpass prototype
a=sqrt((Op2-Op1)/(Or2-Or1));Op=1/a;Or=(1/a)*((Op2-Op1)/(Or2-Or1));

% Estimate the filter order
k=sqrt(Op/Or)/sqrt(Or/Op);
e=sqrt((10^(0.1*Ap)-1)/(10^(0.1*Ar)-1));
qo=0.5*(1-(1-k*k)^(0.25))/(1+(1-k*k)^(0.25));
q=qo+2*qo^5+15*qo^9+150*qo^13;
n=ceil(log10(16/e/e)/log10(1/q)); % Filter order

% Calculate the transfer function of the prototype filter
[z,p,k]=ellipap(n,Ap,Ar);
B=Op2-Op1;Oo=sqrt(Op1*Op2);
[num,den]=zp2tf(z,p,k);
% Transform the lowpass prototype filter to the bandstop filter
[numT,denT] = lp2bs(num,den,Oo,B);sys=tf(numT,denT);
% Plot the frequency response
W=linspace(0,Op2+Op1,1024);[MAG,PHASE]=bode(sys,W);

% Transform to the discrete domain
Fs=Os/2/pi;[numD,denD]=bilinear(numT,denT,Fs,Oo/2/pi);
[H,F]=freqz(numD,denD,1024,Fs);
plot(W/2/pi,db(MAG(:)));axis([0 (Op2+Op1)/2/pi -100 10]);xlabel(’Frequency [Hz]’);ylabel(’Magnitude Response [dB]’);figure;plot(W/2/pi,(PHASE(:)));xlabel(’Frequency [Hz]’);ylabel(’Phase Response [degree]’);
Function db:
function y=db(x)
y=20*log10(abs(x));
6.6 Using the bilinear method to transform the analog filters into digital filters, the magnitude responses obtained are shown in Figure 6.4 for the Butterworth filter, in Figure 6.5 for the Chebyshev filter, and in Figure 6.6 for the elliptic filter.
Figure 6.4: Magnitude response of the Butterworth implementation for the filter of exercise 6.6.
• MatLab® Program:
166 CHAPTER 6. IIR FILTERS APPROXIMATIONS
Figure 6.5: Magnitude response of the Chebyshev implementation for the filter of exercise 6.6.
Figure 6.6: Magnitude response of the Elliptic implementation for the filter of exercise 6.6.
clear all; close all;

Ap=1; Ar=40;
Ori=5912.5; Opi=7539.8;
Os=50265.5;

%-------------------------------------------------------%
% Highpass -> Lowpass, Butterworth
a=1;
Op=1/a;
Or=(1/a)*(Opi/Ori);
% Estimate the filter order
e=sqrt(10^(0.1*Ap)-1);
nB=ceil(log10((10^(0.1*Ar)-1)/e/e)/(2*log10(Or)));
[zB,pB,kB]=buttap(nB);
bB=real(kB*poly(zB));
aB=real(poly(pB));
[bB,aB]=lp2hp(bB,aB,Opi);
W=linspace(0,2*Opi-Ori,2048);
magB=bode(tf(bB,aB),W);
magB=magB(:);

%-------------------------------------------------------%
% Highpass -> Lowpass, Chebyshev (same Op and Or as Butterworth)
% Estimate the filter order
nC=ceil(acosh(sqrt((10^(0.1*Ar)-1))/e)/acosh(Or));
[zC,pC,kC]=cheb1ap(nC,Ap);
bC=real(kC*poly(zC));
aC=real(poly(pC));
[bC,aC]=lp2hp(bC,aC,Opi);
magC=bode(tf(bC,aC),W);
magC=magC(:);

%-------------------------------------------------------%
% Highpass -> Lowpass, Elliptic
a=sqrt(Opi/Ori);
Op=1/a;
Or=(1/a)*(Opi/Ori);
% Estimate the filter order
k=sqrt(Op/Or)/sqrt(Or/Op);
e=sqrt((10^(0.1*Ap)-1)/(10^(0.1*Ar)-1));
qo=0.5*(1-(1-k*k)^(0.25))/(1+(1-k*k)^(0.25));
q=qo+2*qo^5+15*qo^9+150*qo^13;
nE=ceil(log10(16/e/e)/log10(1/q)); % Filter order
[zE,pE,kE]=ellipap(nE,Ap,Ar);
bE=real(kE*poly(zE));
aE=real(poly(pE));
[bE,aE]=lp2hp(bE,aE,Opi);
magE=bode(tf(bE,aE),W);
magE=magE(:);

% Transform to the discrete domain
[bBd,aBd]=bilinear(bB,aB,Os/2/pi,Opi/2/pi);
HB=freqz(bBd,aBd,W/2/pi,Os/2/pi);
[bCd,aCd]=bilinear(bC,aC,Os/2/pi,Opi/2/pi);
HC=freqz(bCd,aCd,W/2/pi,Os/2/pi);
[bEd,aEd]=bilinear(bE,aE,Os/2/pi,Opi/2/pi);
HE=freqz(bEd,aEd,W/2/pi,Os/2/pi);

% Plot analog and digital responses
ld1='k--'; ld2='k-';
figure(1);
plot(W/2/pi,db(magB),ld1,W/2/pi,db(HB(:)));
axis([0,(2*Opi-Ori)/2/pi,-300 20]);
xlabel('Frequency [Hz]'); ylabel('Magnitude [dB]');
legend('Analog Filter','Digital Filter',2);
title('Butterworth Filter');
figure(2);
plot(W/2/pi,db(magC),ld1,W/2/pi,db(HC(:)));
axis([0,(2*Opi-Ori)/2/pi,-300 20]);
xlabel('Frequency [Hz]'); ylabel('Magnitude [dB]');
legend('Analog Filter','Digital Filter',2);
title('Chebyshev Filter');
figure(3);
plot(W/2/pi,db(magE),ld1,W/2/pi,db(HE(:)));
axis([0,(2*Opi-Ori)/2/pi,-110 10]);
xlabel('Frequency [Hz]'); ylabel('Magnitude [dB]');
legend('Analog Filter','Digital Filter',2);
title('Elliptic Filter');
Function db:
function y=db(x)
y=20*log10(abs(x));
6.7 Using the Chebyshev method to design the analog filters, and the bilinear method to transform them into digital filters, with the three sets of requirements:
• First filter: Ωo = 770 Hz, Ωr1 = 697 Hz, Ωr2 = 852 Hz, B = 15.5 Hz
• Second filter: Ωo = 852 Hz, Ωr1 = 770 Hz, Ωr2 = 941 Hz, B = 17.1 Hz
• Third filter: Ωo = 941 Hz, Ωr1 = 852 Hz, Ωr2 = 1209 Hz, B = 17.85 Hz
The filter responses are shown in Figure 6.7.
169
Figure 6.7: Magnitude responses for the filters of exercise 6.7.
• MatLab® Program:
clear all; close all;

Ap=1; Ar=40; Fs=8000;

Or1=697*2*pi; Or2=852*2*pi; Oo=770*2*pi;
Op1=(0.1*Or1+0.9*Oo); Op2=(0.1*Or2+0.9*Oo);
B1=Op2-Op1;
[b,a]=cheb_6(Ap,Ar,Op1,Op2,Or1,Or2,Oo,Fs);
[H1,F]=freqz(b,a,2048,Fs);

Or1=770*2*pi; Or2=941*2*pi; Oo=852*2*pi;
Op1=(0.1*Or1+0.9*Oo); Op2=(0.1*Or2+0.9*Oo);
B2=Op2-Op1;
[b,a]=cheb_6(Ap,Ar,Op1,Op2,Or1,Or2,Oo,Fs);
H2=freqz(b,a,F,Fs);

Or1=852*2*pi; Or2=1209*2*pi; Oo=941*2*pi;
Op1=(0.05*Or1+0.95*Oo); Op2=(0.05*Or2+0.95*Oo);
B3=Op2-Op1;
[b,a]=cheb_6(Ap,Ar,Op1,Op2,Or1,Or2,Oo,Fs);
H3=freqz(b,a,F,Fs);

plot(F,db(H1),'k-',F,db(H2),'k--',F,db(H3),'k-.');
xlabel('Frequency [Hz]'); ylabel('Magnitude Response [dB]');
axis([0 Fs/4 -120 10]);
Function db:
function y=db(x)
y=20*log10(abs(x));
Function cheb_6:
function [b,a,magC,bC,aC]=cheb_6(Ap,Ar,Op1,Op2,Or1,Or2,Oo,Fs)

%-------------------------------------------------------%
% Bandpass -> Lowpass, Chebyshev (same as Butterworth)
a=1;
Op=1/a;
Or=(1/a)*((Or2-Or1)/(Op2-Op1));
% Estimate the filter order
e=sqrt(10^(0.1*Ap)-1);
nC=ceil(acosh(sqrt((10^(0.1*Ar)-1))/e)/acosh(Or));
[zC,pC,kC]=cheb1ap(nC,Ap);
bC=real(kC*poly(zC));
aC=real(poly(pC));
[bC,aC]=lp2bp(bC,aC,Oo,(Op2-Op1));
W=linspace(0,Fs*2*pi,2048);
magC=bode(tf(bC,aC),W);
magC=magC(:);
[b,a]=bilinear(bC,aC,Fs,Oo/2/pi);
6.8 Exercise 6.8
6.9 Exercise 6.9
6.10 Chebyshev approximation with 0.5-dB passband ripple. The normalized lowpass prototype is

     H(s') = κ1/(s'^2 + 1.4256s' + 1.2313) × κ2/(s' + 0.6265)

     Highpass specifications: ωp = π/3 rad/s, ωs = π rad/s.

     (a) Gain: κ = κ1 × κ2 = 1.2313 × 0.6265 = 0.77141.

     (b) Prewarping the band edges with

     Ωi = (2/T) tan(ωi/2),   T = 2

     gives

     Ωp = tan(π/6) = 1/√3 = √3/3

     (c) Applying the lowpass-to-highpass transformation s' → (√3/3)(1/s):

     H(s) = γκ1 s^2 / (1/9 + 1.4256(√3/3)s + 1.2313s^2) × κ2 s / (√3/3 + 0.6265s)

     H(s) = s^2/(s^2 + 0.668456s + 0.09023) × s/(s + 0.921548)

     (d) Bilinear transformation, s → (z − 1)/(z + 1):

     H(z) = [((z−1)/(z+1))^2 / (((z−1)/(z+1))^2 + 0.668456(z−1)/(z+1) + 0.09023)] × [((z−1)/(z+1)) / ((z−1)/(z+1) + 0.921548)]

     = (z − 1)^2 / [z^2 − 2z + 1 + 0.668456(z^2 − 1) + 0.09023(z^2 + 2z + 1)] × (z − 1)/[(1 + 0.921548)z − (1 − 0.921548)]

     = (z − 1)^2/(1.758686z^2 − 1.81954z + 0.421774) × (z − 1)/(1.921548z − 0.078452)

     = (1/1.758686) (z − 1)^2/(z^2 − 1.034602z + 0.239823) × (1/1.921548) (z − 1)/(z − 0.040827)

     = 0.295916 (z − 1)^2 (z − 1) / [(z^2 − 1.034602z + 0.239823)(z − 0.040827)]          (6.1)

     As expected for a highpass filter, H(z) = 0 at z = 1 (ω = 0), and |H(z)| ≈ 1 at z = −1 (ω = π).

     (e) The transfer function above can be realized as a cascade of a second-order direct-form section with a first-order section, also in direct form.
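Equation (6.1) can be checked numerically on the unit circle; the following Python sketch (illustrative, not part of the original solution) confirms approximately unit gain at ω = π and a transmission zero at ω = 0:

```python
import cmath

def H(z):
    # Cascade from equation (6.1): second-order times first-order direct-form sections
    sos = 0.295916 * (z - 1)**2 / (z**2 - 1.034602*z + 0.239823)
    fos = (z - 1) / (z - 0.040827)
    return sos * fos

print(abs(H(cmath.exp(1j * cmath.pi))))  # close to 1 (highpass gain at omega = pi)
print(abs(H(1 + 0j)))                    # 0.0 (triple zero at z = 1)
```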
6.11 Exercise 6.11
6.12 Exercise 6.12
6.13

H(s) = 1 / [(s^2 + 0.76722s + 1.33863)(s + 0.76722)]

• Impulse invariance method

After partial-fraction expansion, we get:

H(s) = (−1/1.33863) s/(s^2 + 0.76722s + 1.33863) + (1/1.33863)/(s + 0.76722)

So we have

p1 = p2* = −0.38361 + j1.09154631963101
p3 = −0.76722

r1 = r2* = (−1/1.33863)(0.5 + j0.175718608134594)
r1' = 0.5 + j0.175718608134594
r3 = 1/1.33863

Since

Hd(z) = Σ_{l=1}^{N} T rl z/(z − e^{pl T})

we have

Hd(z) = (−T/1.33863) [ (0.5 + j0.175718608134594)z/(z − e^{(−0.38361+j1.09154631963101)T}) + (0.5 − j0.175718608134594)z/(z − e^{(−0.38361−j1.09154631963101)T}) − z/(z − e^{−0.76722T}) ]

Hd(z) = (−T/1.33863) { [z^2 − e^{−0.38361T}(cos(Im{p1}T) + 2Im{r1'} sin(Im{p1}T))z] / [z^2 − 2cos(Im{p1}T)e^{−0.38361T}z + e^{−0.76722T}] − z/(z − e^{−0.76722T}) }

Hd(z) = (−T/1.33863) { e^{−0.38361T}[2cos(Im{p1}T) − C − e^{−0.38361T}]z^2 + e^{−0.76722T}[C e^{−0.38361T} − 1]z } × { z^3 − e^{−0.38361T}[e^{−0.38361T} + 2cos(Im{p1}T)]z^2 + e^{−0.76722T}[1 + 2cos(Im{p1}T)e^{−0.38361T}]z − e^{−1.53444T} }^{−1}

where

C = cos(Im{p1}T) + 2Im{r1'} sin(Im{p1}T)

T = 2π/Ωs = 2π/12 s

Thus, we have:

Hd(z) = (0.0534524750545966z^2 + 0.0409551414340317z) / (z^3 − 2.04521517914735z^2 + 1.5899800804405z − 0.447790000553345)

• Bilinear method

Here we have to replace s with (2/T)(z − 1)/(z + 1), with T = π/6:

H(z) = 1 / { [(4/T^2)(z−1)^2/(z+1)^2 + 0.76722(2/T)(z−1)/(z+1) + 1.33863] [(2/T)(z−1)/(z+1) + 0.76722] }

H(z) = T^3(z^3 + 3z^2 + 3z + 1) × 12.4178870257439^{−1} / (z^3 − 2.07077956439265z^2 + 1.62441645611047z − 0.458659854278131)

H(z) = 0.0115597425653831(z^3 + 3z^2 + 3z + 1) / (z^3 − 2.07077956439265z^2 + 1.62441645611047z − 0.458659854278131)
6.14 Exercise 6.14
6.15 Exercise 6.15
6.16 (a) Using the impulse-invariance pair

     1/(s + a) ↔ T/(1 − e^{−aT}z^{−1})

     First term:

     T = 4                                                            (6.2)
     −aT = −0.4, then a = 0.1                                         (6.3)

     Second term:

     −aT = −0.8, then a = 0.2                                         (6.4)

     Then the analog transfer function should be

     H(s) = (4/T)/(s + 0.1) − (1/T)/(s + 0.2) = 1/(s + 0.1) − (1/4)/(s + 0.2)

     (b)

     H(s) = (4/T)/(s + 0.1 + j2πk/T) − (1/T)/(s + 0.2 + j2πl/T)      (6.5)

     where k and l are integers, since all such analog filters alias to the same digital filter.

     (c)

     z = (1 + (T/2)s)/(1 − (T/2)s) = (1 + 2s)/(1 − 2s)

     H(s) = 4/(1 − e^{−0.4}(1 − 2s)/(1 + 2s)) − 1/(1 − e^{−0.8}(1 − 2s)/(1 + 2s))

     H(s) = 4(1 + 2s)/[(2 + 2e^{−0.4})s + 1 − e^{−0.4}] − (1 + 2s)/[(2 + 2e^{−0.8})s + 1 − e^{−0.8}]
6.17 (a)

     H(z) = [2z^2 − (e^{−0.2} + e^{−0.4})z] / [(z − e^{−0.2})(z − e^{−0.4})]

     H(z) = z/(z − e^{−0.2}) + z/(z − e^{−0.4})

     Impulse invariance method:

     1/(s + a) ↔ T/(1 − e^{−aT}z^{−1})

     With T = 2:

     −aT = −0.2, then a = 0.1
     −aT = −0.4, then a = 0.2

     H(s) = (1/2)/(s + 0.1) + (1/2)/(s + 0.2)

     The analog transfer function is not unique, since

     H(s) = (1/T)/(s + 0.1 + j2πk/T) + (1/T)/(s + 0.2 + j2πl/T)

     for any integers k and l.

     (b) By replacing

     z = (1 + (T/2)s)/(1 − (T/2)s) → z = (1 + s)/(1 − s)

     H(s) = [2((1+s)/(1−s))^2 − (e^{−0.2} + e^{−0.4})((1+s)/(1−s))] / [((1+s)/(1−s) − e^{−0.2})((1+s)/(1−s) − e^{−0.4})]

     H(s) = [2(1+s)^2 − (e^{−0.2} + e^{−0.4})(1 − s^2)] / [(s + 1 + e^{−0.2}s − e^{−0.2})(s + 1 + e^{−0.4}s − e^{−0.4})]

     H(s) = [(2 + e^{−0.4} + e^{−0.2})s^2 + 4s + (2 − e^{−0.4} − e^{−0.2})] / {[(1 + e^{−0.2})s + 1 − e^{−0.2}][(1 + e^{−0.4})s + (1 − e^{−0.4})]}

     H(s) = (2 + e^{−0.4} + e^{−0.2})[s^2 + 1.14644s + 0.14644] / [(1 + e^{−0.2})(1 + e^{−0.4})(s + 0.099668)(s + 0.197375)]

     zero1 = −0.14644
     zero2 = −1.00000
     pole1 = −0.099668
     pole2 = −0.197375
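The impulse-invariance correspondence in part (a) can be cross-checked by sampling the continuous-time impulse response: with T = 2, T·ha(nT) = e^{−0.2n} + e^{−0.4n}, which is exactly the inverse z-transform of H(z). A small Python sketch (illustrative, not part of the original solution):

```python
import math

T = 2.0
for n in range(10):
    # h_a(t) from H(s) = (1/2)/(s+0.1) + (1/2)/(s+0.2)
    ha = 0.5 * math.exp(-0.1 * n * T) + 0.5 * math.exp(-0.2 * n * T)
    # h(n) from H(z) = z/(z - e^{-0.2}) + z/(z - e^{-0.4})
    hd = math.exp(-0.2 * n) + math.exp(-0.4 * n)
    assert abs(T * ha - hd) < 1e-12
print("impulse responses match")
```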
6.18 • The digital filter corresponding to the analog filter of exercise 6.3 is shown in Figure 6.8, and its coefficients are given in Table 6.3.

• The digital filter corresponding to the analog filter of exercise 6.6 is shown in Figure 6.9, and its coefficients are given in Table 6.4.
6.19 Proof of the inverse Chebyshev poles and zeros.

     A(s)A(−s) = 1 + ε^2/C_n^2(1/s)

     The poles of the inverse Chebyshev filter correspond to the zeros of A(s)A(−s). So:

     0 = 1 + ε^2/C_n^2(1/s)
     ε^2/C_n^2(1/s) = −1
     C_n^2(1/s) = −ε^2
     C_n(1/s) = ±jε
     cos(n cos^{−1}(1/s)) = ∓jε
Figure 6.8: Magnitude response for the filter of exercise 6.18 corresponding to the analog filter of exercise 6.3.
Table 6.3: Filter coefficients, poles, and zeros for exercise 6.18 corresponding to the analog filter of exercise 6.3.

H0 = 3.786187×10^−3

Numerator Coefficients       Denominator Coefficients
b0 = 1.000000000             a0 = 4.099277210×10^−1
b1 = −2.244474952            a1 = −3.018769598
b2 = 4.243008988             a2 = 1.028353056×10^1
b3 = −4.586670475            a3 = −2.110183066×10^1
b4 = 5.363010194             a4 = 2.851235175×10^1
b5 = −4.586670475            a5 = −2.600626957×10^1
b6 = 4.243008988             a6 = 1.569034384×10^1
b7 = −2.244474952            a7 = −5.760514074
b8 = 1.000000000             a8 = 1.000000000

Filter Zeros                          Filter Poles
z0 = −0.4875641 + j0.8730872          p0 = 0.6965193 + j0.6995695
z1 = −0.4875641 − j0.8730872          p1 = 0.6965193 − j0.6995695
z2 = 0.3837074 + j0.9234547           p2 = 0.7014529 + j0.6396931
z3 = 0.3837074 − j0.9234547           p3 = 0.7014529 − j0.6396931
z4 = 0.6386284 + j0.7695153           p4 = 0.7256855 + j0.4891018
z5 = 0.6386284 − j0.7695153           p5 = 0.7256855 − j0.4891018
z6 = 0.5874658 + j0.8092490           p6 = 0.7565993 + j0.1923399
z7 = 0.5874658 − j0.8092490           p7 = 0.7565993 − j0.1923399
Using the fact that

n cos^{−1}(1/s) = a + jb                                              (6.6)
Figure 6.9: Magnitude response for the filter of exercise 6.18 corresponding to the analog filter of exercise 6.6.
Table 6.4: Filter coefficients, poles, and zeros for exercise 6.18 corresponding to the analog filter of exercise 6.6.

H0 = 9.440609×10^−1

Numerator Coefficients            Denominator Coefficients
b0 = 2.547058150×10^20            a0 = 2.547058150×10^20
b1 = 4.327714674×10^2             a1 = 1.298961363×10^18
b2 = 8.207323536×10^15            a2 = 1.233401900×10^16
b3 = 1.489408748×10^−2            a3 = 3.670079838×10^13
b4 = 9.802505098×10^10            a4 = 1.731243407×10^11
b5 = 1.531252642×10^−7            a5 = 2.905131510×10^8
b6 = 5.142592638×10^5             a6 = 7.728321545×10^5
b7 = 4.982629736×10^−13           a7 = 6.442687823×10^2
b8 = 1.0                          a8 = 1.0

Filter Zeros                      Filter Poles
z0 = +401.5773935                 p0 = −198.8491601 + j506.1217005
z1 = −401.5773935                 p1 = −198.8491601 − j506.1217005
z2 = +374.4057280                 p2 = −25.3200901 + j495.1724674
z3 = −374.4057280                 p3 = −25.3200901 − j495.1724674
z4 = +337.4172105                 p4 = −13.0115168 + j254.4597930
z5 = −337.4172105                 p5 = −13.0115168 − j254.4597930
z6 = +314.5867730                 p6 = −84.9536241 + j216.2285858
z7 = −314.5867730                 p7 = −84.9536241 − j216.2285858
Then we have:

cos(a + jb) = ∓jε
cos a cos(jb) − sin a sin(jb) = ∓jε
cos a cosh b − j sin a sinh b = ∓jε

Equating real and imaginary parts:

cos a cosh b = 0                                                      (6.7)
sin a sinh b = ±ε

Since cosh b ≥ 1,

cos a = 0
a = (π/2)(2k + 1),   k = 0, 1, ..., 2n − 1

Substituting a in equation (6.7):

sin[(π/2)(2k + 1)] sinh(b) = ±ε
± sinh(b) = ±ε
b = sinh^{−1}(ε)

Using a and b in equation (6.6):

1/s = cos[(a + jb)/n]
1/s = cos(a/n) cosh(b/n) − j sin(a/n) sinh(b/n)
1/s = cos[(2k + 1)π/(2n)] cosh[(1/n) sinh^{−1}(ε)] − j sin[(2k + 1)π/(2n)] sinh[(1/n) sinh^{−1}(ε)]
6.20 (a) Lowpass-to-bandpass transformation:

     ωp is the passband edge of the lowpass prototype, ωp1 is the lower passband edge of the bandpass filter, and ωp2 is the upper passband edge of the bandpass filter. The mapping is

     g(z) = −(z^2 + α1 z + α2)/(α2 z^2 + α1 z + 1)

     subject to the conditions

     g(1) = g(−1) = −1
     g(e^{jω0}) = 1
     g(e^{jωp1}) = e^{−jωp}
     g(e^{jωp2}) = e^{jωp}

     Substituting z = e^{jω0} into g(e^{jω0}) = 1, we have:

     (α2 + 1) + 2α1 e^{jω0} + (α2 + 1)e^{2jω0} = 0
     (α2 + 1)(1 + e^{2jω0}) + 2α1 e^{jω0} = 0
     (α2 + 1)(e^{−jω0} + e^{jω0})/2 + α1 = 0
     (α2 + 1) cos(ω0) + α1 = 0

     Calling cos(ω0) = α, so that α1 = −α(1 + α2), we have:

     g(z) = −(z^2 − α(1 + α2)z + α2)/(α2 z^2 − α(1 + α2)z + 1)
          = −[z(z − α) + (1 − αz)α2]/[(1 − αz) + α2 z(z − α)]

     Let E = z(z − α)/(1 − αz), so:

     g(z) = −(E + α2)/(1 + α2 E)

     Let us form (1 − g(z))/(1 + g(z)):

     1 − g(z) = (1 + α2 E + E + α2)/(1 + α2 E)
     1 + g(z) = (1 + α2 E − E − α2)/(1 + α2 E)

     (1 − g(z))/(1 + g(z)) = [(1 + α2)/(1 − α2)] × (1 + E)/(1 − E)
                           = [(1 + α2)/(1 − α2)] × (z^2 − 2αz + 1)/(1 − z^2)

     On the unit circle this last factor reduces to

     (z^2 − 2αz + 1)/(1 − z^2), z = e^{jω}:  (2 cos ω − 2α)/(e^{−jω} − e^{jω}) = j(cos ω − α)/sin ω

     When ω = ωp1 in the bandpass characteristic, g(z) = e^{−jωp}, and the left-hand side becomes

     (1 − e^{−jωp})/(1 + e^{−jωp}) = j tan(ωp/2)

     so that

     −tan(ωp/2) = [(α2 + 1)/(α2 − 1)] (cos(ωp1) − α)/sin(ωp1)        (6.8)

     And substituting z = e^{jωp2}, where g(z) = e^{jωp}, we have:

     tan(ωp/2) = [(α2 + 1)/(α2 − 1)] (cos(ωp2) − α)/sin(ωp2)         (6.9)

     Combining equations (6.8) and (6.9):

     −(cos(ωp1) − α)/sin(ωp1) = (cos(ωp2) − α)/sin(ωp2)
     α[sin(ωp1) + sin(ωp2)] = cos(ωp1) sin(ωp2) + cos(ωp2) sin(ωp1) = sin(ωp1 + ωp2)
     α = sin(ωp1 + ωp2)/[sin(ωp1) + sin(ωp2)]

     α = cos[(ωp2 + ωp1)/2]/cos[(ωp2 − ωp1)/2]

     Since cos(ω0) = α:

     cos(ω0) = cos[(ωp1 + ωp2)/2]/cos[(ωp1 − ωp2)/2]

     Let (α2 + 1)/(α2 − 1) = −k, so that, from equation (6.8):

     k = [sin(ωp1)/(cos(ωp1) − α)] tan(ωp/2)

     Substituting α = cos[(ωp2 + ωp1)/2]/cos[(ωp2 − ωp1)/2] yields the identity

     sin(ωp1)/(cos(ωp1) − α) = cot[(ωp2 − ωp1)/2]

     so that

     k = cot[(ωp2 − ωp1)/2] tan(ωp/2)

     Finally:

     α2 = (k − 1)/(k + 1)
     α1 = −α(1 + α2) = −2αk/(k + 1)
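As a numerical check of the formulas for α, k, α1, and α2, the following Python sketch (with arbitrary illustrative band edges, not taken from any exercise) verifies that g maps the bandpass edges onto the prototype passband edges and z = 1 to −1:

```python
import cmath, math

def lp2bp_g(z, wp1, wp2, wp):
    # Lowpass-to-bandpass mapping g(z) with the constants derived above
    alpha = math.cos((wp2 + wp1) / 2) / math.cos((wp2 - wp1) / 2)
    k = math.tan(wp / 2) / math.tan((wp2 - wp1) / 2)
    a2 = (k - 1) / (k + 1)
    a1 = -2 * alpha * k / (k + 1)
    return -(z**2 + a1*z + a2) / (a2*z**2 + a1*z + 1)

wp1, wp2, wp = 0.3 * math.pi, 0.5 * math.pi, 0.4 * math.pi
print(abs(lp2bp_g(cmath.exp(1j * wp1), wp1, wp2, wp) - cmath.exp(-1j * wp)))  # ~0
print(abs(lp2bp_g(cmath.exp(1j * wp2), wp1, wp2, wp) - cmath.exp(1j * wp)))   # ~0
print(abs(lp2bp_g(1 + 0j, wp1, wp2, wp) - (-1)))                              # ~0
```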
(b) Lowpass-to-bandstop transformation:

    ωp is the passband edge of the lowpass prototype, ωp1 is the lower passband edge of the bandstop filter, and ωp2 is the upper passband edge of the bandstop filter. The mapping is

    g(z) = (z^2 + α1 z + α2)/(α2 z^2 + α1 z + 1)

    subject to the conditions

    g(1) = g(−1) = 1
    g(e^{jω0}) = −1
    g(e^{jωp1}) = e^{jωp}
    g(e^{jωp2}) = e^{−jωp}

    Substituting z = e^{jω0} into g(e^{jω0}) = −1, we have:

    (α2 + 1) + 2α1 e^{jω0} + (α2 + 1)e^{2jω0} = 0
    (α2 + 1)(1 + e^{2jω0}) + 2α1 e^{jω0} = 0
    (α2 + 1)(e^{−jω0} + e^{jω0})/2 + α1 = 0
    (α2 + 1) cos(ω0) + α1 = 0

    Calling cos(ω0) = α, so that α1 = −α(1 + α2), we have:

    g(z) = (z^2 − α(1 + α2)z + α2)/(α2 z^2 − α(1 + α2)z + 1)
         = [z(z − α) + (1 − αz)α2]/[(1 − αz) + α2 z(z − α)]

    Let E = z(z − α)/(1 − αz), so:

    g(z) = (E + α2)/(1 + α2 E)

    Let us form (1 + g(z))/(1 − g(z)):

    1 + g(z) = (1 + α2 E + E + α2)/(1 + α2 E)
    1 − g(z) = (1 + α2 E − E − α2)/(1 + α2 E)

    (1 + g(z))/(1 − g(z)) = [(1 + α2)/(1 − α2)] × (1 + E)/(1 − E)
                          = [(1 + α2)/(1 − α2)] × (z^2 − 2αz + 1)/(1 − z^2)

    When ω = ωp1 in the bandstop characteristic, g(z) = e^{jωp}. Substituting z = e^{jωp1}:

    (1 + e^{jωp})/(1 − e^{jωp}) = [(1 + α2)/(1 − α2)] (e^{2jωp1} − 2αe^{jωp1} + 1)/(1 − e^{2jωp1})

    j cot(ωp/2) = [(1 + α2)/(1 − α2)] j(cos(ωp1) − α)/sin(ωp1)

    −cot(ωp/2) = [(α2 + 1)/(α2 − 1)] (cos(ωp1) − α)/sin(ωp1)

    And substituting z = e^{jωp2}, where g(z) = e^{−jωp}, we have:

    cot(ωp/2) = [(α2 + 1)/(α2 − 1)] (cos(ωp2) − α)/sin(ωp2)

    Proceeding as with equations (6.8) and (6.9) (see exercise 6.20(a)):

    α = cos[(ωp1 + ωp2)/2]/cos[(ωp1 − ωp2)/2]
    cos(ω0) = cos[(ωp1 + ωp2)/2]/cos[(ωp1 − ωp2)/2]

    Let (α2 − 1)/(α2 + 1) = −k, so that:

    k = [(cos(ωp1) − α)/sin(ωp1)] tan(ωp/2)
    k = tan[(ωp2 − ωp1)/2] tan(ωp/2)

    using sin(ωp1)/(cos(ωp1) − α) = cot[(ωp2 − ωp1)/2] (see exercise 6.20(a)).

    Finally:

    α2 = (1 − k)/(1 + k)
    α1 = −α(1 + α2) = −2α/(k + 1)
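The bandstop mapping can be checked the same way as the bandpass one; this Python sketch (illustrative band edges, not part of the original solution) verifies that g sends the lower stopband-adjacent edge e^{jωp1} to e^{jωp} and fixes z = 1:

```python
import cmath, math

def lp2bs_g(z, wp1, wp2, wp):
    # Lowpass-to-bandstop mapping g(z) with the constants derived above
    alpha = math.cos((wp1 + wp2) / 2) / math.cos((wp1 - wp2) / 2)
    k = math.tan((wp2 - wp1) / 2) * math.tan(wp / 2)
    a2 = (1 - k) / (1 + k)
    a1 = -2 * alpha / (k + 1)
    return (z**2 + a1*z + a2) / (a2*z**2 + a1*z + 1)

wp1, wp2, wp = 0.3 * math.pi, 0.5 * math.pi, 0.4 * math.pi
print(abs(lp2bs_g(cmath.exp(1j * wp1), wp1, wp2, wp) - cmath.exp(1j * wp)))  # ~0
print(abs(lp2bs_g(1 + 0j, wp1, wp2, wp) - 1))                                # ~0
```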
6.21 Exercise 6.21
6.22 Exercise 6.22
6.23 The filter obtained with the proposed transformation is shown in Figure 6.10.
Figure 6.10: Magnitude response for the filter of exercise 6.23.
• MatLab® Program:
clear all; close all;

wp1=pi/6; wp2=2*pi/3; wp=3*pi/4;

alpha=cos((wp2+wp1)/2)/cos((wp2-wp1)/2);
k=tan((wp2-wp1)/2)*tan(wp/2);

a1=(-2*alpha*k)/(k+1);
a2=(k-1)/(k+1);

Mask1=-[1,a1,a2];
Mask2=[a2,a1,1];

p1=conv(Mask1,Mask1);
p2=sqrt(2)*conv(Mask1,Mask2);
p3=conv(Mask2,Mask2);
Num=(p1+p2+p3)*0.06;

p2=-1.18*conv(Mask1,Mask2);
p3=0.94*conv(Mask2,Mask2);
Den=(p1+p2+p3);

[H,W]=freqz(Num,Den);
plot(W/pi,db(H));
xlabel('Frequency [x pi rad/s]');
ylabel('Magnitude Response [dB]');
6.24 Zeros, poles, and coefficients for the elliptic filter from exercise 6.6 are shown in Table 6.5.
Table 6.5: Data for exercise 6.24.
H0 = 2.292787×10^−1

Numerator Coefficients       Denominator Coefficients
b0 = −1.000000000            a0 = 7.506633146×10^−2
b1 = 4.125080804             a1 = 3.690648992×10^−1
b2 = −7.549840406            a2 = −9.314864425×10^−1
b3 = 7.549840406             a3 = 1.971212228
b4 = −4.125080804            a4 = −1.615482211
b5 = 1.000000000             a5 = 1.000000000

Filter Zeros                          Filter Poles
z0 = 1.0000000 − j0.0000000           p0 = 0.5646031 + j0.7768676
z1 = 0.7165151 + j0.6975716           p1 = 0.5646031 − j0.7768676
z2 = 0.7165151 − j0.6975716           p2 = 0.3125199 + j0.6991928
z3 = 0.8460253 + j0.5331427           p3 = 0.3125199 − j0.6991928
z4 = 0.8460253 − j0.5331427           p4 = −0.1387637 − j0.0000000
Zeros, poles, and coefficients for the phase equalizer are shown in Table 6.6.
Table 6.6: Phase equalizer for exercise 6.24.
H0 = 1.384517×10^−1

Numerator Coefficients       Denominator Coefficients
b0 = −7.222737824            a0 = −1.384516542×10^−1
b1 = −8.088678932            a1 = −4.465626558×10^−1
b2 = −2.041142102            a2 = −4.165864463×10^−1
b3 = 3.008894683             a3 = 2.825995006×10^−1
b4 = 3.225404985             a4 = 1.119890979
b5 = 1.000000000             a5 = 1.000000000

Filter Zeros                          Filter Poles
z0 = 1.3352969 − j0.0000000           p0 = 0.7488971 − j0.0000000
z1 = −0.6922120 + j1.2368174          p1 = −0.3445770 + j0.6156767
z2 = −0.6922120 − j1.2368174          p2 = −0.3445770 − j0.6156767
z3 = −1.5881390 + j0.4128081          p3 = −0.5898171 + j0.1533123
z4 = −1.5881390 − j0.4128081          p4 = −0.5898171 − j0.1533123
Figure 6.11 shows the phase of the original elliptic filter and of the equalized filter (a), and the group delay, in samples, for both filters (b).

It is possible to notice that the phase of the equalized filter is more linear than the phase of the original filter in the passband region.
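Group delay, used in Figure 6.11 (b), can be estimated numerically as the negative derivative of the unwrapped phase response. A minimal Python sketch; the two-sample pure delay below is a hypothetical test case, not one of the filters in Table 6.7:

```python
import numpy as np

def group_delay(b, a, n=1024):
    # Numerical group delay: -d(angle H)/d(omega), via unwrapped phase differences.
    # b, a are polynomial coefficients in descending powers of z.
    w = np.linspace(0, np.pi, n)
    z = np.exp(1j * w)
    H = np.polyval(b, z) / np.polyval(a, z)
    phase = np.unwrap(np.angle(H))
    return w[:-1], -np.diff(phase) / np.diff(w)

# Pure delay H(z) = z^{-2}: group delay should be 2 samples everywhere
w, gd = group_delay([0, 0, 1.0], [1.0, 0, 0])
print(round(float(np.median(gd)), 6))  # -> 2.0
```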
Figure 6.11: Response for filters of exercise 6.24.
Zeros, poles, and coefficients for the final filter (the elliptic filter cascaded with the phase equalizer) are shown in Table 6.7.
Table 6.7: Final filter of exercise 6.24.
H0 = 3.174402×10^−2

Numerator Coefficients            Denominator Coefficients
b0 = 7.222737824                  a0 = −1.039305777×10^−2
b1 = −2.170569822×10^1            a1 = −8.461946616×10^−2
b2 = 2.320520578×10^1             a2 = −6.711637895×10^−2
b3 = −4.891053619                 a3 = 1.051573868×10^−2
b4 = −6.677032056                 a4 = −8.019529267×10^−2
b5 = 3.218007833×10^−1            a5 = −1.307674552×10^−2
b6 = 2.821659749                  a6 = 1.093905642×10^−1
b7 = 2.348376676                  a7 = 4.029354369×10^−1
b8 = −2.746321099                 a8 = 4.446477741×10^−1
b9 = −8.996758191×10^−1           a9 = −4.955912322×10^−1
b10 = 1.000000000                 a10 = 1.000000000

Filter Zeros                          Filter Poles
z0 = −1.5881390 + j0.4128081          p0 = 0.5646031 + j0.7768676
z1 = −1.5881390 − j0.4128081          p1 = 0.5646031 − j0.7768676
z2 = −0.6922120 + j1.2368174          p2 = 0.7488971 − j0.0000000
z3 = −0.6922120 − j1.2368174          p3 = 0.3125199 + j0.6991928
z4 = 0.7165151 + j0.6975716           p4 = 0.3125199 − j0.6991928
z5 = 0.7165151 − j0.6975716           p5 = −0.3445770 + j0.6156767
z6 = 1.3352969 − j0.0000000           p6 = −0.3445770 − j0.6156767
z7 = 1.0000000 − j0.0000000           p7 = −0.5898171 + j0.1533123
z8 = 0.8460253 + j0.5331427           p8 = −0.5898171 − j0.1533123
z9 = 0.8460253 − j0.5331427           p9 = −0.1387637 − j0.0000000
6.25 Using the sum of two FIR filters as the initial values for the IIR filter, we arrived at the coefficients shown in Table 6.8. Figure 6.12 shows the frequency response and the group delay of the resulting IIR filter. One FIR filter is a lowpass filter with cutoff frequency equal to 0.1Ωs and passband magnitude equal to 1. The other is a highpass filter with cutoff frequency equal to 0.2Ωs and passband magnitude equal to 0.5.
Figure 6.12: Frequency response for filters of exercise 6.25.
Table 6.8: IIR filter found for Exercise 6.25.
H0 = 1.817538×10^−3

Numerator Coefficients            Denominator Coefficients
b0 = 1.000000000                  a0 = 0.000000000
b1 = −7.065844209×10^−1           a1 = 0.000000000
b2 = 3.568183013×10^1             a2 = 0.000000000
b3 = 7.038407575×10^1             a3 = 0.000000000
b4 = 3.605353948×10^2             a4 = 7.502883808×10^−15
b5 = 7.038407575×10^1             a5 = −1.019720516×10^−10
b6 = 3.568183013×10^1             a6 = 5.197151307×10^−7
b7 = −7.065844209×10^−1           a7 = −1.177245520×10^−3
b8 = 1.000000000                  a8 = 1.000000000

Filter Zeros                          Filter Poles
z0 = 1.8113143 + j5.7263663           p0 = 0.0000000 − j0.0000000
z1 = 1.8113143 − j5.7263663           p1 = 0.0000000 − j0.0000000
z2 = −1.3654689 + j2.7748556          p2 = 0.0000000 − j0.0000000
z3 = −1.3654689 − j2.7748556          p3 = 0.0000000 − j0.0000000
z4 = −0.1427668 + j0.2901255          p4 = 0.0002947 + j0.0000004
z5 = −0.1427668 − j0.2901255          p5 = 0.0002947 − j0.0000004
z6 = 0.0502137 + j0.1587477           p6 = 0.0002939 + j0.0000004
z7 = 0.0502137 − j0.1587477           p7 = 0.0002939 − j0.0000004
6.26 Using the lsqnonlin routine from MATLAB, with M = 0 and N = 1, we obtain the following filter:
H(z) =z
z − 0.5
• The impulse response is shown in Figure 6.13.
Figure 6.13: Impulse response for the filter of exercise 6.26.
Figure 6.14: Quadratic error for the filter of exercise 6.26.
• The quadratic error is shown in Figure 6.14.
• MatLab® program:
clear all; close all;

M=0; N=1;
X0=0.1*ones(1,M+N+2);
n=0:5;
g=[0.5.^n,zeros(1,10)];
opt=optimset('TolFun',1e-12,'maxIter',20000,'display','iter',...
    'maxFunevals',50000,'tolX',1e-10);
x=lsqnonlin('func',X0,[],[],opt,M,N);

b=x(1:M+1);
a=x(M+2:end);
h=filter(b,a,[1,zeros(1,40)]);

k=0:20;
plot(k,0.5.^k,'k:o',k,h(1:21),'kx')
xlabel('Sample'); ylabel('Impulse Response');
legend('Desired','Obtained');

figure;
plot(k,abs(0.5.^k-h(1:21)).^2,'k-x');
xlabel('Sample'); ylabel('Quadratic Error');
Function func:
function fun=func(x,M,N);

imp=[1,zeros(1,20)];
b=x(1:M+1);
a=x(M+2:end);
h_n=filter(b,a,imp);
h_n=h_n(1:6);
n=0:5;
g_n=[0.5.^n];
fun=abs(h_n-g_n);
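The fitted filter can be sanity-checked without any toolbox: the difference equation of H(z) = z/(z − 0.5), namely y(n) = x(n) + 0.5y(n − 1), reproduces the desired impulse response g(n) = 0.5^n exactly. A hedged Python sketch (not part of the original MATLAB solution):

```python
# Impulse response of H(z) = z/(z - 0.5) via its difference equation
h = []
y = 0.0
for k in range(21):
    x = 1.0 if k == 0 else 0.0   # unit impulse input
    y = x + 0.5 * y              # y(n) = x(n) + 0.5 y(n-1)
    h.append(y)

print(max(abs(h[k] - 0.5**k) for k in range(21)))  # -> 0.0
```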
6.27 Exercise 6.27
6.28 Table 6.9 maps the numbers on the x-axis of Figure 6.15 to the values of M and N.

Table 6.9: Correspondence between the numbers on the x-axis of Figure 6.15 and the values of M and N for exercise 6.28.

Number   M   N
1        0   9
2        1   8
3        2   7
4        3   6
5        4   5
6        5   4
7        6   3
8        7   2
9        8   1
10       9   0
The values of M and N that lead to the smallest mean squared error were: M = 3 and N = 6; M = 4and N = 5; M = 5 and N = 4.
Figure 6.15: Mean squared error for the filter of exercise 6.28.
This fact demonstrates that the smallest mean squared error occurs when the value of M is close to the value of N.
• MatLab® program:
clear all;
warning off
global GG;

p=0:9;
A=[p',fliplr(p)'];

tic
for p=1:100
  for k=1:size(A,1)
    M=A(k,1); N=A(k,2);
    X0=(2*randn(1,M+N+1)-1)*0.1;
    n=0:200;
    g_n=[(1/6).^n+10.^(-n)+0.05./(n+2)];
    opt=optimset('display','off','maxFunEvals',3500,'TolX',1e-8, ...
        'TolFun',1e-8,'maxIter',2000,'TolCon',1e-9);
    VLB=-3*ones(1,M+N+1);
    VUB=5*ones(1,M+N+1);
    x=fmincon('func13',X0,[],[],[],[],VLB,VUB,'const',opt,M,N);

    b=x(1:M+1);
    a=[1,x(M+2:end)];
    h=filter(b,a,[1,zeros(1,300)]);

    im=(11:201);
    MSE(p,k)=mean(abs(g_n(im)-h(im)).^2);

    if 0   % set to 1 to plot intermediate results
      figure(1);
      plot(0:15,g_n(1:16),'k-o',0:15,h(1:16),'k:x')
      xlabel('Sample'); ylabel('Impulse Response');
      legend('Desired','Obtained');
      drawnow;
      figure(2);
      plot(n,abs(g_n-h(1:201)).^2,'k-x');
      title(sprintf('%d : %d : %8.4e',p,k,GG));
      xlabel('Sample'); ylabel('Quadratic Error');
      drawnow
    end

    if (GG>1e-4)
      error('Ferrou');
      return
    end
  end
  fprintf('#');
end
fprintf('\n');
toc
Function func13:
function fun=func13(x,M,N);

imp=[1,zeros(1,20)];
b=x(1:M+1);
a=[1,x(M+2:end)];

h_n=filter(b,a,imp);
h_n=h_n(1:10);
n=0:9;
g_n=(1/6).^n+10.^(-n)+0.05./(n+2);
fun=sum(abs(h_n-g_n).^2);
Function const:
function [C,Ceq]=const(x,M,N)

global GG;

b=x(1:M+1);
a=[1,x(M+2:end)];

imp=[1,zeros(1,20)];
h_n=filter(b,a,imp);
h_n=h_n(1:10);
n=0:9;
g_n=(1/6).^n+10.^(-n)+0.05./(n+2);
fun=sum(abs(h_n-g_n).^2);

r=abs(roots(a));
if isempty(r)
  r=-100;
end
C=[max(r)-0.98,fun-1e-6];
Ceq=0;
GG=C(1);
6.29 Exercise 6.29
6.30 Exercise 6.30
6.31 Exercise 6.31
6.32 Exercise 6.32
Chapter 7
SPECTRAL ESTIMATION
7.1 In this exercise we studied the behavior of the periodogram algorithm considering different values of L, the length of the white-noise sequence generated by the MATLAB command randn (pseudorandom numbers drawn from a standard normal distribution, i.e., a zero-mean Gaussian distribution with variance 1), and of N, the number of independent runs used to average the periodogram PSD estimate. The following results were generated using the standard periodogram, but most of the conclusions also hold for other periodogram-based methods.
Basically, we started with a small sample size L and gradually increased it. In addition, for each value of L we considered the following values of N: N = 1 (which matches the practice in most applications), N = 10, N = 100, and N = 1000.
Fig. 7.1 depicts the PSD estimate considering L = 10 and the four different values of N mentioned
Figure 7.1: PSD estimate using the periodogram method, considering L = 10 and different values of N :(a) N = 1 (b) N = 10 (c) N = 100 (d) N = 1000.
above. In Fig. 7.1 (a) we can see that the small amount of data (sample size) leads to a poor estimate of the PSD. For the same sample size, one can enhance the PSD estimate by averaging other realizations; see Fig. 7.1 (b), (c), and (d). We can see that the PSD estimate obtained in Fig. 7.1 (d) has a flat shape over almost the entire spectrum range (as we expect). So, we can conclude that the variance of the PSD estimate decreases as N grows, just as in the averaged periodogram (but with the difference that we are not losing spectral resolution, since L is fixed).
Figs. 7.2 and 7.3 depict the PSD estimates considering L = 100 and L = 1000, respectively, for the four different values of N mentioned above. It is easy to verify that, as we have already mentioned, the variance of the PSD estimate decreases as N grows.
Now, let us compare these figures considering that N is fixed and L is varying (for example, compare the PSD estimates given in Fig. 7.1 (a), Fig. 7.2 (a), and Fig. 7.3 (a)). By looking at the vertical axis of these figures, we can see that all the estimates have the same dynamic range (between −20 dB and 5 dB, approximately), which indicates that the variance of the standard periodogram estimate does not change with L, as demonstrated by Kay (1998), illustrating that the standard periodogram PSD estimator is not consistent.
One marginal comment is that all PSD estimates presented here are unbiased (no matter the value of L), since the process whose PSD we are estimating is white noise. This comment concerns only the standard periodogram method.
Finally, Fig. 7.4 exhibits the biased estimate of the autocorrelation of the white-noise sequence of length L = 1000 (just one realization of the process was used).
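The averaging effect described above can be reproduced in a few lines; the following is an illustrative Python sketch (the original experiment uses MATLAB's randn, and L = 64 here is an arbitrary choice), in which the spread of the estimate across frequency bins shrinks as N grows while the mean stays near 1:

```python
import numpy as np

rng = np.random.default_rng(0)
L = 64  # sample size (illustrative)

def periodogram(x):
    # Standard periodogram: |DFT(x)|^2 / L
    return np.abs(np.fft.fft(x))**2 / len(x)

def averaged_psd(N):
    # Average N independent periodograms of unit-variance white Gaussian noise
    return np.mean([periodogram(rng.standard_normal(L)) for _ in range(N)], axis=0)

for N in (1, 10, 100):
    est = averaged_psd(N)
    print(N, round(float(est.mean()), 2), round(float(est.std()), 2))
```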
Figure 7.2: PSD estimate using the periodogram method, considering L = 100 and different values of N :(a) N = 1 (b) N = 10 (c) N = 100 (d) N = 1000.
Figure 7.3: PSD estimate using the periodogram method, considering L = 1000 and different values of N :(a) N = 1 (b) N = 10 (c) N = 100 (d) N = 1000.
Figure 7.4: Biased estimate of the autocorrelation of the white noise sequence, considering L = 1000 anda single realization.
7.2 We used the standard periodogram algorithm to estimate the PSD of the referred signal. To simplify our discussion, let us assume that the sample frequency is normalized to fs = 1.0 Hz. Fig. 7.5 depicts the obtained result. By observing this figure, one can verify that the periodogram algorithm detects a sharp peak at 0.05 Hz (normalized frequency). This occurs since the discrete frequency associated with x(n) is 2π/20 rad/sample, implying that the normalized frequency of x(n) in Hz is 1/20 = 0.05 Hz. To deduce this result, we used the well-known fact that the normalized sample frequency fs = 1.0 Hz corresponds to 2π rad/sample. From this observation, we conclude that the periodogram algorithm was able to detect this sinusoid at its proper position.
Furthermore, it is also possible to verify that the periodogram algorithm was able to detect a peak
194 CHAPTER 7. SPECTRAL ESTIMATION
around 0.5 Hz. The reason for that can be understood based on Fig. 7.6, which depicts the squared magnitude of the frequency response of the system

H(z) = \frac{1}{1 + 0.9z^{-1}}.
By comparing Fig. 7.6 to Fig. 7.5, one can verify that the PSD estimate obtained with the periodogram method was able to detect the shape of the noise component x1(n).
Note that, even though we used a biased method to estimate the PSD of x(n), the frequency at 0.05 Hz was almost perfectly detected, since there is no other frequency with significant energy near 0.05 Hz to move the estimate from its proper position. Even the presence of the noise signal was not able to disturb the detection of the peak at 0.05 Hz, due to the frequency response characteristic of the system H(z) = 1/(1 + 0.9z^{-1}). In addition, the number of samples L = 1024 was large enough not to induce significant bias.
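The experiment described above can be reproduced with a short script. The solutions were generated in MATLAB, but a minimal periodogram sketch in Python (NumPy assumed) follows, with the signal length and frequency taken from the exercise:

```python
import numpy as np

def periodogram(x):
    """Standard periodogram: |DFT|^2 divided by the number of samples."""
    L = len(x)
    return np.abs(np.fft.rfft(x)) ** 2 / L

# Sinusoid with discrete frequency 2*pi/20 rad/sample, i.e. 0.05 Hz for fs = 1 Hz
L = 1024
n = np.arange(L)
x = np.cos(2 * np.pi * n / 20)
P = periodogram(x)
f = np.fft.rfftfreq(L)        # frequency axis in Hz when fs = 1
f_peak = f[np.argmax(P)]      # falls very close to 0.05 Hz
```

The peak location lands on the DFT bin nearest to 0.05 Hz, which illustrates why the sinusoid is detected at its proper position.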
[Figure: PSD [dB] vs. normalized frequency [Hz], from 0 to 0.5 Hz.]
Figure 7.5: PSD estimate for the standard periodogram method.
[Figure: power response [dB] vs. normalized frequency [Hz], from 0 to 0.5 Hz.]
Figure 7.6: Squared magnitude [dB] of the frequency response of the system H(z) = 1/(1 + 0.9z^{-1}).
7.3 In this exercise we want to estimate the PSD of the signal x(n), considering the following sample sizes: L = 512, L = 1024, and L = 2048. Since x(n) is the sum of a sinusoidal component at frequency 0.5π plus a first-order AR process with its pole located at −0.9 (angle π), we expect the PSD estimate of x(n) to have a peak at frequency 0.5π and a gain at frequency π. Indeed, the PSD of x1(n), depicted in Fig. 7.7, exhibits a peak at frequency π.
First, we consider L = 512. Fig. 7.8 depicts PSD estimates of x(n) using some periodogram-based methods presented in the chapter. Notice that both the cosine frequency and the filter resonance frequency are properly located for all the methods utilized. Many features of these methods (discussed in the chapter) are illustrated in this figure. For example, we can see that:
• the standard periodogram provides a slightly biased estimate of the PSD;
• the averaged periodogram (data divided into four blocks) trades frequency resolution for a lower variance;
• the Hamming window applied to the data smooths the estimated PSD curve, but also widens the main lobe of the peak at 0.5π;
• the Hamming window applied to the data followed by an unbiased autocorrelation estimate also provides a biased PSD estimate, with a sharp peak at frequencies around 0.5π and a very high variance over the entire frequency range;
• the Blackman-Tukey scheme reduces the variance considerably and almost achieves the same curve as the minimum variance method;
• the minimum variance method, as the name suggests, provides the PSD estimate with minimum variance.
Figs. 7.9 and 7.10 depict the PSD estimates for L = 1024 and L = 2048, respectively, using periodogram-based methods. For each individual method, the same features mentioned above remain valid. Now, let us analyze the influence of L on each method. We can see that the frequency resolution increases with L (e.g., notice that all the peaks are sharper in Fig. 7.10 than in the other figures), the variance does not vary with L, and it also seems that the bias is reduced as L grows (the periodogram estimate is unbiased for L → ∞).
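The averaged periodogram mentioned above can be sketched as follows (Python/NumPy; the block count and data length here are illustrative, not the ones used to generate the figures):

```python
import numpy as np

def averaged_periodogram(x, n_blocks):
    """Bartlett's method: average the periodograms of non-overlapping blocks,
    trading frequency resolution for a reduction in variance."""
    B = len(x) // n_blocks
    blocks = x[: B * n_blocks].reshape(n_blocks, B)
    return np.mean(np.abs(np.fft.rfft(blocks, axis=1)) ** 2 / B, axis=0)

rng = np.random.default_rng(0)
x = rng.standard_normal(512)                   # unit-variance white noise
P1 = np.abs(np.fft.rfft(x)) ** 2 / len(x)      # single periodogram
P4 = averaged_periodogram(x, 4)                # four-block average
```

For white noise, the four-block average has noticeably smaller fluctuations than the single periodogram, at the cost of a four-times-coarser frequency grid.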
[Figure: power response of x1(n) [dB] vs. normalized frequency [×π rad/sample].]
Figure 7.7: PSD of x1(n).
[Figure panels (a)–(f): PSD of x(n) [dB] vs. normalized frequency [×π rad/sample], one panel per estimation method.]
Figure 7.8: PSD estimates for periodogram-based methods (L = 512): (a) standard periodogram; (b) averaged periodogram using four blocks; (c) windowed-data periodogram; (d) windowed-data periodogram with unbiased autocorrelation; (e) Blackman-Tukey scheme; (f) minimum variance method.
[Figure panels (a)–(f): PSD of x(n) [dB] vs. normalized frequency [×π rad/sample], one panel per estimation method.]
Figure 7.9: PSD estimates for periodogram-based methods (L = 1024): (a) standard periodogram; (b) averaged periodogram using four blocks; (c) windowed-data periodogram; (d) windowed-data periodogram with unbiased autocorrelation; (e) Blackman-Tukey scheme; (f) minimum variance method.
[Figure panels (a)–(f): PSD of x(n) [dB] vs. normalized frequency [×π rad/sample], one panel per estimation method.]
Figure 7.10: PSD estimates for periodogram-based methods (L = 2048): (a) standard periodogram; (b) averaged periodogram using four blocks; (c) windowed-data periodogram; (d) windowed-data periodogram with unbiased autocorrelation; (e) Blackman-Tukey scheme; (f) minimum variance method.
7.4 Considering that

\Gamma_{Y,\mathrm{MV}}(e^{j\omega}) = \frac{L}{e^H(e^{j\omega}) R_Y^{-1} e(e^{j\omega})},

where [e(e^{j\omega})]_l = e^{-j\omega l}, for all l \in \{0, 1, \ldots, L-1\}. Thus, if we define the row vector v/\sigma_X^2 = e^H(e^{j\omega}) R_Y^{-1}, where

[v]_0 = 1 - a e^{j\omega}
[v]_l = -a[e^{j\omega(l-1)} + e^{j\omega(l+1)}] + (1 + a^2) e^{j\omega l}, \quad l \in \{1, 2, \ldots, L-2\}
[v]_{L-1} = -a e^{j\omega(L-2)} + e^{j\omega(L-1)},

then we have that

v\,e(e^{j\omega}) = (1 - a e^{j\omega}) + \sum_{l=1}^{L-2} [v]_l e^{-j\omega l} + (1 - a e^{-j\omega})
= 2 - 2a\cos(\omega) + (L-2)[(1 + a^2) - 2a\cos(\omega)]
= (L-2)a^2 - 2aL\cos(\omega) + L + 2a\cos(\omega).

Therefore, we have that

\Gamma_{Y,\mathrm{MV}}(e^{j\omega}) = \frac{L}{e^H(e^{j\omega}) R_Y^{-1} e(e^{j\omega})} = \frac{L\sigma_X^2}{(L-2)a^2 - 2aL\cos(\omega) + L + 2a\cos(\omega)}.

One can verify that, when L \to \infty, the above expression for \Gamma_{Y,\mathrm{MV}}(e^{j\omega}) yields

\Gamma_{Y,\mathrm{MV}}(e^{j\omega}) = \frac{\sigma_X^2}{a^2 - 2a\cos(\omega) + 1}.

Note that this is exactly the PSD of the AR process described in the exercise. In fact, given that Y(z) = H(z)X(z), where X(z) is the Z-transform of the white Gaussian noise with variance \sigma_X^2, we have that

\Gamma_Y(e^{j\omega}) = |H(e^{j\omega})|^2 \Gamma_X(e^{j\omega}) = \left|\frac{e^{j\omega}}{e^{j\omega} - a}\right|^2 \sigma_X^2 = \frac{\sigma_X^2}{a^2 - 2a\cos(\omega) + 1}.

This shows that \Gamma_{Y,\mathrm{MV}}(e^{j\omega}) = \Gamma_Y(e^{j\omega}) when L \to \infty, which means that this minimum variance spectral estimator generates a consistent estimate.
7.5 At first, let us remind the reader that the PSD (also called, in this exercise, the actual PSD in order to avoid ambiguity) of the first-order AR process described in the exercise is given by (see the solution to Exercise 7.4):

\Gamma_Y(e^{j\omega}) = \frac{\sigma_X^2}{a^2 - 2a\cos(\omega) + 1}. \quad (7.1)
From now on, we assume that \sigma_X^2 = 1. The actual PSD, \Gamma_Y(e^{j\omega}), given by Equation (7.1), is depicted in Fig. 7.11. As expected, the actual PSD exhibits the resonance frequency of the filter, which is located at ω = 0 (since the pole is 0.8).
[Figure: actual PSD [dB] vs. normalized frequency [×π rad/sample].]
Figure 7.11: Actual PSD of the first-order AR process Y .
[Figure panels (a) and (b): PSD [dB] vs. normalized frequency [×π rad/sample]; Exact MV and MV method curves against the actual PSD.]
Figure 7.12: Minimum variance PSD estimates (L = 4): (a) Exact MV; (b) MV method.
[Figure panels (a) and (b): PSD [dB] vs. normalized frequency [×π rad/sample]; Exact MV and MV method curves against the actual PSD.]
Figure 7.13: Minimum variance PSD estimates (L = 10): (a) Exact MV; (b) MV method.
[Figure panels (a) and (b): PSD [dB] vs. normalized frequency [×π rad/sample]; Exact MV and MV method curves against the actual PSD.]
Figure 7.14: Minimum variance PSD estimates (L = 50): (a) Exact MV; (b) MV method.
[Figure panels (a) and (b): PSD [dB] vs. normalized frequency [×π rad/sample]; Exact MV and MV method curves against the actual PSD.]
Figure 7.15: Minimum variance PSD estimates (L = 256): (a) Exact MV; (b) MV method.
In what follows, we compute the minimum variance (MV) estimate considering the following sample sizes: L = 4, L = 10, L = 50, and L = 256. For each value of L we compute two minimum variance solutions:
• Exact MV: spectral estimation using the MV approach, considering that the autocorrelation matrix is known (given in the previous exercise);
• MV method: spectral estimation using the MV approach, estimating the autocorrelation matrix by the same procedure used in the do-it-yourself section.
Figures 7.12, 7.13, 7.14, and 7.15 illustrate the PSD estimates obtained using L = 4, L = 10, L = 50, and L = 256, respectively. In each figure, index (a) indicates an Exact MV solution, while index (b) indicates that the MV method was used. The first thing to notice is that the PSD estimate improves as L grows. The Exact MV, for example, is almost identical to the actual PSD for L = 50, and for L = 256 both curves coincide. In addition, the estimate provided by the MV method also improves as L increases; for example, for L = 4 and L = 10 there is not enough data, leading to a poor estimate of the PSD. However, for L = 50 and L = 256 the MV method provides a good approximation of the actual PSD. The variance of the MV method curves is due to the estimation of the autocorrelation function.
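The MV method with an estimated autocorrelation matrix can be sketched as below (Python/NumPy/SciPy; the biased autocorrelation estimator, the model order, and the sample size are illustrative choices):

```python
import numpy as np
from scipy.linalg import toeplitz, solve
from scipy.signal import lfilter

def mv_psd(x, order, omegas):
    """MV spectral estimate built from the biased autocorrelation estimate."""
    L = len(x)
    r = np.array([x[: L - l] @ x[l:] / L for l in range(order)])
    R = toeplitz(r)
    lags = np.arange(order)
    return np.array([order / np.real(np.conj(np.exp(1j * w * lags))
                                     @ solve(R, np.exp(1j * w * lags)))
                     for w in omegas])

rng = np.random.default_rng(1)
v = rng.standard_normal(2000)
y = lfilter([1.0], [1.0, -0.8], v)      # AR(1) with pole at 0.8 (PSD peak at w = 0)
est = mv_psd(y, 50, [0.0, np.pi])       # estimate at the peak and at w = pi
```

With enough data, the estimate at ω = 0 dominates the one at ω = π, as the actual PSD predicts.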
7.6 In this exercise, we used the minimum variance (MV) method to estimate the PSD of the referred signal. To simplify the discussion, let us assume that the sampling frequency is normalized to fs = 1.0 Hz. Fig. 7.16 depicts the obtained result. By observing this figure, one can verify that the MV method detects a sharp peak at 0.25 Hz (normalized frequency). This occurs since the discrete frequency associated with x(n) is π/2 rad/sample, implying that the normalized frequency of x(n) in Hz is 1/4 = 0.25 Hz. To deduce this result, we used the well-known fact that the normalized sampling frequency fs = 1.0 Hz corresponds to 2π rad/sample. From this observation, we conclude that the minimum variance algorithm was able to detect this sinusoid at its proper position.
Furthermore, it is also possible to verify that the minimum variance algorithm was able to detect a peak around 0.5 Hz. The reason for that can be understood based on Fig. 7.17, which depicts the squared magnitude of the frequency response of the system

H(z) = \frac{1}{1 + 0.8z^{-1}}.
By comparing Fig. 7.17 to Fig. 7.16, one can verify that the PSD estimate obtained with the MV method was able to detect the shape of the noise component x1(n).
[Figure: PSD [dB] vs. normalized frequency [Hz], from 0 to 0.5 Hz.]
Figure 7.16: PSD estimate for the minimum variance method.
[Figure: power response [dB] vs. normalized frequency [Hz], from 0 to 0.5 Hz.]
Figure 7.17: Squared magnitude [dB] of the frequency response of the system H(z) = 1/(1 + 0.8z^{-1}).
7.7 Any stable ARMA system can be modeled by the transfer function

H_{\mathrm{ARMA}}(z) = b_0 \frac{\prod_{i=1}^{M}(1 - z_i z^{-1})}{\prod_{j=1}^{N}(1 - p_j z^{-1})} \quad (7.2)
with all poles within the unit circle, i.e., |pj | < 1.
An MA system, H_{\mathrm{MA}}(z), equivalent to a stable ARMA system can be generated by recalling that:

• H_{\mathrm{MA}}(z) must have all the zeros belonging to H_{\mathrm{ARMA}}(z);
• each pole of the stable H_{\mathrm{ARMA}}(z) can be approximated by an infinite-order MA system, i.e., each pole factor \frac{1}{1 - p_j z^{-1}} can be modeled as an infinite-order MA system of the form \left(1 + \sum_{k=1}^{\infty} p_j^k z^{-k}\right).
The pole-to-zeros mapping mentioned above follows from the geometric series

\frac{1}{1 - p_j z^{-1}} = \sum_{k=0}^{\infty} (p_j z^{-1})^k = 1 + \sum_{k=1}^{\infty} p_j^k z^{-k}, \quad (7.3)

which converges on the unit circle because |p_j| < 1.
Therefore, since each stable pole can be modeled as an infinite-order MA system, so can the entire ARMA system.
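The geometric-series expansion of a single pole can be verified numerically. A Python sketch (NumPy/SciPy; the pole value and truncation order are arbitrary illustrations):

```python
import numpy as np
from scipy.signal import lfilter

# Truncated MA model of one stable pole: 1/(1 - p z^-1) ~ 1 + sum_{k=1}^{K} p^k z^-k
p, K = 0.7, 30
b_ma = p ** np.arange(K + 1)    # MA coefficients 1, p, p^2, ..., p^K

# Impulse response of the exact one-pole system over the same K+1 samples
imp = np.zeros(K + 1)
imp[0] = 1.0
h_pole = lfilter([1.0], [1.0, -p], imp)
```

The truncated MA coefficients coincide with the one-pole impulse response sample by sample, which is exactly the pole-to-zeros mapping of Equation (7.3).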
7.8 (a) We determined the pole locations of the AR approximation for the ARMA system using N = 1, 2, 3, 4 by means of a MATLAB script. The poles are

• N = 1: 1.2;
• N = 2: 0.6 and 0.6;
• N = 3: 0.9391, 0.1304 + j0.3130, and 0.1304 − j0.3130;
• N = 4: 0.8841, 0.3676, −0.0258 + j0.3147, and −0.0258 − j0.3147.
As expected, all of the complex poles appear in complex-conjugate pairs. We can verify that there is always one pole near z = 1. This is also expected, since this frequency range (around DC) contains more spectral information than other parts. Fig. 7.18 depicts the pole-zero plots for the AR models using N = 1, 2, 3, 4, respectively.
(b) Fig. 7.19 depicts the impulse responses of the ARMA and AR models using N = 1, 2, 3, 4, respectively. In these figures, the blue marks correspond to the true ARMA impulse responses, whereas the red marks correspond to the AR approximations. In addition, the impulse response of the AR system is delayed by one sample to make it easier to compare both impulse responses in the same figure. It is clear that the AR impulse response is quite similar to the original ARMA impulse response when N = 4.
7.9 Our target here is to find an Nth-order AR approximation for the ARMA system

H_{\mathrm{ARMA}}(z) = \frac{1 - 1.5z^{-1}}{1 + 0.5z^{-1}}.

We will rely on Theorem 7.1 (System Decomposition Theorem) to accomplish this task.

First, let us rewrite H_{\mathrm{ARMA}}(z) as

H_{\mathrm{ARMA}}(z) = \frac{1 - z_1 z^{-1}}{1 - p_1 z^{-1}} \quad (7.4)
[Figure: four pole-zero plots (real part vs. imaginary part), one per model order.]
Figure 7.18: Pole-zero plot for the AR models using N = 1, 2, 3, 4, respectively.
[Figure: four impulse-response plots, amplitude vs. n (samples), comparing the ARMA and AR models.]
Figure 7.19: Comparison between the impulse responses of the ARMA and AR models using N = 1, 2, 3, 4, respectively. The ARMA impulse response is in blue, whereas the AR impulse response is in red.
where z_1 = 1.5 and p_1 = −0.5 are the zero and the pole, respectively, of the ARMA process. Therefore, H_{\mathrm{ARMA}}(z) is stable but does not have minimum phase, so we follow the procedure described in Theorem 7.1.
H_{\mathrm{ARMA}}(z) = H_{\mathrm{ARMA}}(z) \times \frac{1 - \frac{1}{z_1}z^{-1}}{1 - \frac{1}{z_1}z^{-1}}
= \frac{1 - z_1 z^{-1}}{1 - p_1 z^{-1}} \times \frac{1 - z_2 z^{-1}}{1 - z_2 z^{-1}}
= \underbrace{\frac{1 - z_2 z^{-1}}{1 - p_1 z^{-1}}}_{H(z)} \times \underbrace{\frac{1 - z_1 z^{-1}}{1 - z_2 z^{-1}}}_{\text{allpass}} \quad (7.5)

where z_2 = 1/z_1 = 2/3 is the minimum-phase zero obtained as the reciprocal of z_1. Now that we have decomposed
the stable ARMA system H_{\mathrm{ARMA}}(z) into the cascade of an allpass filter and a minimum-phase stable ARMA system H(z), we can generate an AR system equivalent to H(z):

H_{\mathrm{AR}}(z) = \frac{1}{(1 - p_1 z^{-1})\left(1 + \sum_{k=1}^{\infty} z_2^k z^{-k}\right)}
= \frac{1}{(1 + 0.5z^{-1})\left(1 + \sum_{k=1}^{\infty} (\frac{2}{3})^k z^{-k}\right)}
= \frac{1}{1 + \sum_{k=1}^{\infty} (\frac{2}{3})^k z^{-k} + 0.5z^{-1} + \frac{1}{2}\sum_{k=1}^{\infty} (\frac{2}{3})^k z^{-(k+1)}}
= \frac{1}{1 + \sum_{k=1}^{\infty} (\frac{2}{3})^k z^{-k} + \frac{1}{2}\sum_{k=0}^{\infty} (\frac{2}{3})^k z^{-(k+1)}}
= \frac{1}{1 + \sum_{k=1}^{\infty} (\frac{2}{3})^k z^{-k} + \frac{3}{4}\sum_{k=0}^{\infty} (\frac{2}{3})^{k+1} z^{-(k+1)}}
= \frac{1}{1 + \sum_{k=1}^{\infty} (\frac{2}{3})^k z^{-k} + \frac{3}{4}\sum_{k'=1}^{\infty} (\frac{2}{3})^{k'} z^{-k'}}
= \frac{1}{1 + \frac{7}{4}\sum_{k=1}^{\infty} (\frac{2}{3})^k z^{-k}}

Finally, the Nth-order AR system is given by

H_{\mathrm{AR}_N}(z) = \frac{1}{1 + \frac{7}{4}\sum_{k=1}^{N} (\frac{2}{3})^k z^{-k}} \quad (7.6)
In Fig. 7.20, the frequency responses of the original stable ARMA system H_{\mathrm{ARMA}}(z), of the stable minimum-phase ARMA system H(z), and of the allpass filter are depicted. Notice that there is a gap of about 3.5 dB between the magnitude curves of H_{\mathrm{ARMA}}(z) and H(z), which is the value of the constant gain of the allpass filter.

In Fig. 7.21, the magnitude response of the Nth-order AR approximation is depicted for different values of N. We can see that the AR approximation improves as N increases. In this exercise, using N = 7 we obtained an AR approximation quite close to the ARMA system H(z). It is worth mentioning that for N = 10 the frequency response of the AR approximation matches almost perfectly that of H(z); however, we preferred not to plot another curve, since it could clutter Fig. 7.21.
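A sketch of this approximation in Python (NumPy/SciPy), building the denominator of Equation (7.6) and comparing magnitude responses against the minimum-phase system H(z); the orders tested here are illustrative:

```python
import numpy as np
from scipy.signal import freqz

def ar_denominator(N):
    """Coefficients of 1 + (7/4) * sum_{k=1}^{N} (2/3)^k z^-k."""
    k = np.arange(1, N + 1)
    return np.concatenate(([1.0], 1.75 * (2.0 / 3.0) ** k))

# Minimum-phase part H(z) = (1 - (2/3) z^-1) / (1 + 0.5 z^-1)
w, H = freqz([1.0, -2.0 / 3.0], [1.0, 0.5], worN=512)
_, H10 = freqz([1.0], ar_denominator(10), worN=512)
_, H30 = freqz([1.0], ar_denominator(30), worN=512)
err10 = np.max(np.abs(np.abs(H10) - np.abs(H)))
err30 = np.max(np.abs(np.abs(H30) - np.abs(H)))
```

The truncation error shrinks geometrically with N, since the discarded tail decays like (2/3)^N.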
[Figure: magnitude (dB) and phase (degrees) responses vs. normalized frequency (×π rad/sample) for the ARMA systems and the allpass filter.]
Figure 7.20: Frequency responses of: (a) the original stable ARMA system, H_{\mathrm{ARMA}}(z), and the stable minimum-phase ARMA system, H(z); (b) the allpass filter.
[Figure: magnitude (dB) vs. normalized frequency (×π rad/sample) for N = 1, 3, 5, 7.]
Figure 7.21: N th-order AR approximation.
7.10 Let us define x_1(n) = -a_1 x_1(n-1) + v(n) and x_2(n) = a_1 x_2(n-1) + v(n), where |a_1| < 1 and v(n) is a zero-mean white noise with unit variance. We wish to compute E\{x_1(n)x_2(m)\} = R_{X_1,X_2}(n, m) for all integers n and m.
Recalling our random processes course, we know that if the input of a linear time-invariant (LTI) system is wide-sense stationary (WSS), then the system output and input processes are jointly WSS. This way, V and X_1 are jointly WSS, V and X_2 are jointly WSS, and, therefore, X_1 and X_2 are also jointly WSS. So, we have R_{X_1,X_2}(n, m) = R_{X_1,X_2}(l), with l = n - m \in \mathbb{Z}, i.e., the cross-correlation depends only on the lag l.
Thus, by assuming that l = n - m \geq 0, one has that

R_{X_1,X_2}(l) = E\{x_1(n)x_2(n-l)\} = E\left\{\left[(-a_1)^l x_1(n-l) + \sum_{j=0}^{l-1} (-a_1)^j v(n-j)\right] x_2(n-l)\right\}
= (-a_1)^l \underbrace{E\{x_1(n-l)x_2(n-l)\}}_{=R_{X_1,X_2}(0)} + \sum_{j=0}^{l-1} (-a_1)^j \underbrace{E\{v(n-j)x_2(n-l)\}}_{=0},
where the last term in the former equation is zero, since x_2(n-l) does not depend on v(n), v(n-1), \ldots, v(n-l+1). Now, by taking into consideration that

R_{X_1,X_2}(0) = E\{x_1(n)x_2(n)\} = -a_1^2 \underbrace{E\{x_1(n-1)x_2(n-1)\}}_{=R_{X_1,X_2}(0)} + 1,

one has that

R_{X_1,X_2}(0) = \frac{1}{1 + a_1^2},

which yields

R_{X_1,X_2}(l) = \frac{(-a_1)^l}{1 + a_1^2}, \quad \forall l \in \mathbb{N}.
On the other hand, when l = n - m < 0, one has that

R_{X_1,X_2}(l) = E\{x_1(n)x_2(n-l)\} = E\left\{x_1(n)\left[a_1^{-l} x_2(n) + \sum_{j=l}^{-1} a_1^{j-l} v(n-j)\right]\right\}
= a_1^{-l} \underbrace{E\{x_1(n)x_2(n)\}}_{=R_{X_1,X_2}(0)} + \sum_{j=l}^{-1} a_1^{j-l} \underbrace{E\{x_1(n)v(n-j)\}}_{=0}
= a_1^{-l} R_{X_1,X_2}(0).

Once again, we can use that

R_{X_1,X_2}(0) = \frac{1}{1 + a_1^2}

in order to deduce that

R_{X_1,X_2}(l) = \frac{a_1^{-l}}{1 + a_1^2}, \quad \forall (-l) \in \mathbb{N}.
An alternative way of computing R_{X_1,X_2}(l) is via the Z transform, as follows:

R_{X_1,X_2}(l) = E\{x_1(n)x_2(n-l)\} = \frac{1}{2\pi j} \oint_C H_1(z) z^l H_2(z^{-1}) \sigma_V^2 \frac{dz}{z} \quad (7.7)

where H_1(z) = \frac{1}{1 + a_1 z^{-1}}, H_2(z) = \frac{1}{1 - a_1 z^{-1}}, C is a counterclockwise closed contour corresponding to the unit circle, and \sigma_V^2 = 1 by hypothesis. This way we have

R_{X_1,X_2}(l) = \frac{1}{2\pi j} \oint_C z^l \frac{1}{1 + a_1 z^{-1}} \frac{1}{1 - a_1 z} \frac{dz}{z}
= \frac{1}{2\pi j} \oint_C z^l \frac{1}{z + a_1} \frac{1}{1 - a_1 z} dz
= \sum \mathrm{residues}\left\{z^l \frac{1}{z + a_1} \frac{1}{1 - a_1 z}\right\} = \sum \mathrm{residues}\{P(z)\}.
For l \geq 0 we have just one singular point within C, located at z = -a_1. The other pole is located at z = 1/a_1, i.e., outside C. Thus

R_{X_1,X_2}(l) = (z + a_1)P(z)\big|_{z=-a_1} = \frac{(-a_1)^l}{1 + a_1^2}, \quad \forall l \geq 0.

For l < 0 we have another pole, located at z = 0 with multiplicity -l, leading to more involved computations. However, the change of variables z = \nu^{-1} (see Section 2.3) overcomes this problem:

R_{X_1,X_2}(l) = \frac{1}{2\pi j} \oint_C \nu^{-l} \frac{1}{1 + a_1\nu} \frac{1}{\nu - a_1} d\nu = \sum \mathrm{residues}\{Q(\nu)\}
= (\nu - a_1)Q(\nu)\big|_{\nu=a_1} = \frac{a_1^{-l}}{1 + a_1^2}, \quad \forall l < 0.
7.11 Our target here is to find an Nth-order AR approximation for the ARMA system

H_{\mathrm{ARMA}}(z) = \frac{(1 - 0.8z^{-1})(1 - 0.9z^{-1})}{(1 - 0.5z^{-1})(1 + 0.5z^{-1})},

which is a minimum-phase stable system.
Based on Theorem 7.1 (System Decomposition Theorem), we can write an AR approximation to H_{\mathrm{ARMA}}(z) as

H_{\mathrm{AR}}(z) = \frac{1}{(1 - 0.5z^{-1})(1 + 0.5z^{-1})\left(1 + \sum_{k=1}^{\infty} 0.8^k z^{-k}\right)\left(1 + \sum_{l=1}^{\infty} 0.9^l z^{-l}\right)} \quad (7.8)
Notice that

\left(1 - 0.5z^{-1}\right)\left(1 + \sum_{k=1}^{\infty} 0.8^k z^{-k}\right) = 1 + \sum_{k=1}^{\infty} 0.8^k z^{-k} - 0.5z^{-1} - 0.5\sum_{k=1}^{\infty} 0.8^k z^{-(k+1)}
= 1 + \sum_{k=1}^{\infty} 0.8^k z^{-k} - 0.5\sum_{k=0}^{\infty} 0.8^k z^{-(k+1)}
= 1 + \sum_{k=1}^{\infty} 0.8^k z^{-k} - \frac{5}{8}\sum_{k=0}^{\infty} 0.8^{k+1} z^{-(k+1)}
= 1 + \sum_{k=1}^{\infty} 0.8^k z^{-k} - \frac{5}{8}\sum_{k'=1}^{\infty} 0.8^{k'} z^{-k'}
= 1 + \frac{3}{8}\sum_{k=1}^{\infty} 0.8^k z^{-k} \quad (7.9)
In addition, we can show in a similar fashion that

\left(1 + 0.5z^{-1}\right)\left(1 + \sum_{l=1}^{\infty} 0.9^l z^{-l}\right) = 1 + \frac{14}{9}\sum_{l=1}^{\infty} 0.9^l z^{-l} \quad (7.10)
Substituting Equations (7.9) and (7.10) in (7.8), we have

H_{\mathrm{AR}}(z) = \frac{1}{\left(1 + \frac{3}{8}\sum_{k=1}^{\infty} 0.8^k z^{-k}\right)\left(1 + \frac{14}{9}\sum_{l=1}^{\infty} 0.9^l z^{-l}\right)} \quad (7.11)
Finally, we generate the Nth-order AR approximation

H_{\mathrm{AR}_N}(z) = \frac{1}{\left(1 + \frac{3}{8}\sum_{k=1}^{N/2} 0.8^k z^{-k}\right)\left(1 + \frac{14}{9}\sum_{l=1}^{N/2} 0.9^l z^{-l}\right)} \quad (7.12)

where we have supposed that N is even.
Fig. 7.22 depicts the magnitude responses of the ARMA system and of the Nth-order AR approximation, considering three different values of N: 10, 30, and 60. For N = 10, the magnitude response of the AR approximation is far from that of the ARMA system, while for N = 60 these magnitude responses are almost identical (there is a small discrepancy, mainly at high frequencies). Using N = 30 we have a solution that represents a tradeoff between the filter order and the quality of the approximation.
Finally, the 61 coefficients of the denominator of H_{\mathrm{AR}_N}(z) are given below (read from left to right, top to bottom):

1.0000000e+00 1.7000000e+00 1.9200000e+00 2.0400000e+00 2.0856000e+00
2.0767200e+00 2.0287920e+00 1.9537080e+00 1.8605734e+00 1.7563050e+00
1.6461056e+00 1.5338400e+00 1.4223319e+00 1.3135994e+00 1.2090401e+00
1.1095766e+00 1.0157713e+00 9.2791606e-01 8.4610198e-01 7.7027380e-01
7.0027204e-01 6.3586533e-01 5.7677519e-01 5.2269478e-01 4.7330300e-01
4.2827485e-01 3.8728909e-01 3.5003356e-01 3.1620891e-01 2.8553098e-01
2.5773225e-01 1.7284352e-01 1.3775488e-01 1.0973596e-01 8.7367622e-02
6.9515065e-02 5.5270923e-02 4.3909722e-02 3.4851463e-02 2.7632487e-02
2.1882175e-02 1.7304306e-02 1.3662155e-02 1.0766563e-02 8.4664055e-03
6.6409639e-03 5.1938267e-03 4.0480114e-03 3.1420642e-03 2.4269409e-03
1.8635133e-03 1.4205751e-03 1.0732482e-03 8.0170779e-04 5.9016456e-04
4.2605015e-04 2.9936677e-04 2.0216740e-04 1.2814050e-04 7.2278330e-05
3.0611999e-05
Clearly, the coefficients used for the AR approximation with N = 10 (or N = 30) are the first 11 (or 31) coefficients from the list above.
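The coefficient list can be regenerated by multiplying the two truncated factors of Equation (7.12); a Python sketch:

```python
import numpy as np

N = 60
k = np.arange(1, N // 2 + 1)
f1 = np.concatenate(([1.0], (3.0 / 8.0) * 0.8 ** k))    # 1 + (3/8) sum 0.8^k z^-k
f2 = np.concatenate(([1.0], (14.0 / 9.0) * 0.9 ** k))   # 1 + (14/9) sum 0.9^l z^-l
den = np.convolve(f1, f2)      # polynomial product: 61 denominator coefficients
```

The polynomial product of the two 31-coefficient factors yields the 61 denominator coefficients of the N = 60 approximation.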
[Figure: magnitude (dB) vs. normalized frequency (×π rad/sample) for the ARMA system and the AR approximations with N = 10, 30, 60.]
Figure 7.22: Magnitude responses for the ARMA system and for the N th-order AR approximation.
7.12 (a) Before approximating, let us transform the original H(z) into a more convenient representation, as follows:

H(z) = \frac{1 + 0.3z^{-1}}{1 - 0.9z^{-1}} = (1 + 0.3z^{-1})\left(1 + \sum_{k=1}^{\infty} 0.9^k z^{-k}\right)
= 1 + 0.3z^{-1} + \sum_{k=1}^{\infty} 0.9^k z^{-k} + 0.3\sum_{k=1}^{\infty} 0.9^k z^{-(k+1)}
= 1 + \sum_{k=1}^{\infty} 0.9^k z^{-k} + 0.3\sum_{k=0}^{\infty} 0.9^k z^{-(k+1)}
= 1 + \sum_{k=1}^{\infty} 0.9^k z^{-k} + \frac{0.3}{0.9}\sum_{k=0}^{\infty} 0.9^{k+1} z^{-(k+1)}
= 1 + \sum_{k=1}^{\infty} 0.9^k z^{-k} + \frac{1}{3}\sum_{k'=1}^{\infty} 0.9^{k'} z^{-k'}
= 1 + \frac{4}{3}\sum_{k=1}^{\infty} 0.9^k z^{-k}
Based on this representation, an Mth-order MA approximation to the original ARMA system is given by:

H(z) \approx 1 + \frac{4}{3}\sum_{k=1}^{M} 0.9^k z^{-k}
(b) We determined the zero locations of the MA approximation for the ARMA system using M = 1, 2, 3, 4 by means of a MATLAB script. The zeros are

• M = 1: −1.2;
• M = 2: −0.6 + j0.8485 and −0.6 − j0.8485;
• M = 3: −1.0518, −0.0741 + j0.9585, and −0.0741 − j0.9585;
• M = 4: −0.8376 + j0.5549, −0.8376 − j0.5549, 0.2376 + j0.9001, and 0.2376 − j0.9001.

As expected, all of the complex zeros appear in complex-conjugate pairs. We can verify that the zeros are mainly distributed in the second and third quadrants, which yields high attenuation at high frequencies. This occurs because the MA model tries to indicate that the frequency range around DC contains more spectral information than the high-frequency parts. Fig. 7.23 depicts the pole-zero plots for the MA models using M = 1, 2, 3, 4, respectively.
(c) Fig. 7.24 depicts the impulse responses of the ARMA and MA models using M = 1, 2, 3, 4, respectively. In these figures, the blue marks correspond to the true ARMA impulse responses, whereas the red marks correspond to the MA approximations. In addition, the impulse response of the MA system is delayed by one sample to make it easier to compare both impulse responses in the same figure. It is clear that the MA impulse response is equal to the original ARMA impulse response up to the coefficient of order M. This was expected, since there is no approximation in those coefficients. However, the MA model needs many more coefficients (greater M) to yield a reasonable approximation to the original ARMA system. In this example, it is clear that M = 4 is not large enough to represent the original impulse response with accuracy.
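The zero locations in part (b) follow from rooting the truncated coefficient polynomial; a Python sketch:

```python
import numpy as np

def ma_zeros(M):
    """Zeros of the M-th order MA approximation
    1 + (4/3) * sum_{k=1}^{M} 0.9^k z^-k (roots of the polynomial in z^-1)."""
    k = np.arange(1, M + 1)
    return np.roots(np.concatenate(([1.0], (4.0 / 3.0) * 0.9 ** k)))
```

For M = 1 the polynomial is 1 + 1.2z^{-1}, whose single zero is −1.2, matching the list above.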
7.13 An AR system may be represented by the difference equation

y(n) = x(n) - \sum_{i=1}^{N} a_i y(n-i)
[Figure: four pole-zero plots (real part vs. imaginary part), one per model order.]
Figure 7.23: Pole-zero plot for the MA models using M = 1, 2, 3, 4, respectively.
[Figure: four impulse-response plots, amplitude vs. n (samples), comparing the ARMA and MA models.]
Figure 7.24: Comparison between the impulse responses of the ARMA and MA models using M = 1, 2, 3, 4, respectively. The ARMA impulse response is in blue, whereas the MA impulse response is in red.
So, we can compute the cross-correlation E\{x(n)y(n-\nu)\} as follows:

E\{x(n)y(n-\nu)\} = E\left\{x(n)\left[x(n-\nu) - \sum_{i=1}^{N} a_i y(n-\nu-i)\right]\right\}
= E\{x(n)x(n-\nu)\} - \sum_{i=1}^{N} a_i E\{x(n)y(n-\nu-i)\}
and for \nu \geq 0 we have

E\{x(n)y(n-\nu)\} = E\{x(n)x(n-\nu)\},

since the system is causal, i.e., x(n) is uncorrelated with y(n-1), y(n-2), \ldots, and we also have E\{x(n)\} = 0. This way, assuming that the input signal is independently drawn from a certain distribution (actually, we only need uncorrelatedness between x(n_1) and x(n_2) for all n_1 \neq n_2), we have

E\{x(n)y(n-\nu)\} = \begin{cases} \sigma_X^2, & \text{for } \nu = 0 \\ 0, & \text{for } \nu > 0 \end{cases}.
7.14 We want to find the autocorrelation function R_Y(\nu) of the process Y, which is generated by applying a white noise with variance \sigma_X^2 to a second-order AR system whose transfer function is

H(z) = \frac{1}{1 + a_1 z^{-1} + a_2 z^{-2}}
First, let us write the Yule-Walker equations:

R_Y(0) = \sigma_X^2 - a_1 R_Y(-1) - a_2 R_Y(-2)
R_Y(1) = -a_1 R_Y(0) - a_2 R_Y(-1)
R_Y(2) = -a_1 R_Y(1) - a_2 R_Y(0)

Recalling that R_Y(-1) = R_Y(1) and R_Y(-2) = R_Y(2), we can write the system above in the following form:

\begin{bmatrix} 1 & a_1 & a_2 \\ a_1 & (1 + a_2) & 0 \\ a_2 & a_1 & 1 \end{bmatrix} \begin{bmatrix} R_Y(0) \\ R_Y(1) \\ R_Y(2) \end{bmatrix} = \begin{bmatrix} \sigma_X^2 \\ 0 \\ 0 \end{bmatrix}
The solution to the system above is

R_Y(0) = \left[\frac{1 + a_2}{D}\right]\sigma_X^2
R_Y(1) = \left[\frac{-a_1}{D}\right]\sigma_X^2 \quad (7.13)
R_Y(2) = \left[\frac{a_1^2 - a_2 - a_2^2}{D}\right]\sigma_X^2

where D = (1 - a_2^2)(1 + a_2) - a_1^2(1 - a_2) = (1 - a_2)[(1 + a_2)^2 - a_1^2].
One problem here is that it is difficult to infer R_Y(\nu) from Equations (7.13), so we follow a different approach. Consider the difference equation

R_Y(\nu) = -a_1 R_Y(\nu - 1) - a_2 R_Y(\nu - 2),

which can be rewritten as

R_Y(\nu) + a_1 R_Y(\nu - 1) + a_2 R_Y(\nu - 2) = 0.

The solution of the difference equation must have the form

R_Y(\nu) = c_1(\lambda_1)^\nu + c_2(\lambda_2)^\nu \quad (7.14)
where c_1 and c_2 are constants to be determined, and \lambda_1 and \lambda_2 are the roots of the characteristic polynomial

\lambda^2 + a_1\lambda + a_2 = 0,

which are given by

\lambda_1 = \frac{-a_1 + \sqrt{a_1^2 - 4a_2}}{2}, \quad \lambda_2 = \frac{-a_1 - \sqrt{a_1^2 - 4a_2}}{2}
Now that we have \lambda_1 and \lambda_2, we just have to compute the constants c_1 and c_2. This is accomplished by evaluating R_Y(\nu) given by Equation (7.14) for \nu = 0, 1, 2, and equating it to Equations (7.13), which may be seen as the initial conditions of the problem. Actually, since there are just two constants to be determined, we only need two equations. For simplicity, we chose the ones corresponding to \nu = 0 and \nu = 1, which are expressed as

R_Y(0) = c_1 + c_2 = \frac{1 + a_2}{D}\sigma_X^2
R_Y(1) = c_1\lambda_1 + c_2\lambda_2 = \frac{-a_1}{D}\sigma_X^2
After some algebra, the following results are obtained:

c_1 = \frac{G + F}{2D\sqrt{a_1^2 - 4a_2}}, \quad c_2 = \frac{G - F}{2D\sqrt{a_1^2 - 4a_2}} \quad (7.15)

where F = a_1(a_2 - 1)\sigma_X^2 and G = \sqrt{a_1^2 - 4a_2}\,(1 + a_2)\sigma_X^2.
We have now completely determined the autocorrelation function of the process,

R_Y(\nu) = c_1(\lambda_1)^{|\nu|} + c_2(\lambda_2)^{|\nu|} \quad (7.16)

as a function of the coefficients of the AR model and of \sigma_X^2, solving the problem. If one wants to check the validity of the formula above, one can compute it for \nu = 2 (which was not used to determine the constants) and compare it to R_Y(2) given in Equation (7.13).
7.15 Given that H(z) = \frac{Y(z)}{X(z)} = \frac{1 + 0.3z^{-1}}{1 - 0.9z^{-1}}, one has that

y(n) = 0.9y(n-1) + x(n) + 0.3x(n-1).
Taking this relationship into account, we have that

R_Y(\nu) = E\{y(n)y(n-\nu)\} = 0.9E\{y(n-1)y(n-\nu)\} + E\{x(n)y(n-\nu)\} + 0.3E\{x(n-1)y(n-\nu)\}

This way, for \nu = 0, we have that

R_Y(0) = E\{y(n)y(n)\} = 0.9\underbrace{E\{y(n-1)y(n)\}}_{=R_Y(1)} + \underbrace{E\{x(n)y(n)\}}_{=\sigma_X^2=1} + 0.3\underbrace{E\{x(n-1)y(n)\}}_{=(0.9+0.3)\sigma_X^2=1.2},
in which we used the following facts: E\{x(n)y(n-1)\} = 0, E\{x(n)x(n-l)\} = \sigma_X^2\delta(l), and E\{x(n-1)y(n)\} = 0.9E\{x(n-1)y(n-1)\} + 0.3E\{x(n-1)x(n-1)\} = 0.9\sigma_X^2 + 0.3\sigma_X^2. Therefore,

R_Y(0) = 0.9R_Y(1) + 1.36
For ν = 1, we have that
RY(1) = E{y(n)y(n − 1)} = 0.9 E{y(n − 1)y(n − 1)} + E{x(n)y(n − 1)} + 0.3 E{x(n − 1)y(n − 1)}
      = 0.9RY(0) + 0 + 0.3σX^2
      = 0.9RY(0) + 0.3
On the other hand, for ν > 1, we have that
RY(ν) = E{y(n)y(n − ν)} = 0.9 E{y(n − 1)y(n − ν)} + E{x(n)y(n − ν)} + 0.3 E{x(n − 1)y(n − ν)}
      = 0.9RY(ν − 1) + 0 + 0
      = 0.9RY(ν − 1)
It is straightforward to verify that a solution to such a recursive equation is
RY(ν) = α(0.9)^ν = 0.9(α(0.9)^{ν−1}) = 0.9RY(ν − 1)
The scalar α can be determined by using the value of RY(1). Notice that the values RY(0) and RY(1) can be obtained by solving the following linear system of equations:
RY(0) = 0.9RY(1) + 1.36
RY(1) = 0.9RY(0) + 0.3
Thus, RY (0) = 8.58 and RY (1) = 8.02. This way, we have that:
0.9α = RY (1) = 8.02,
which means that α = 8.91.
Therefore, we have the following general solution:
RY (ν) = 8.91(0.9)|ν| − 0.33δ(ν), ∀ν ∈ Z.
Now let us compare the obtained recursions for RY(0), RY(1), and RY(ν), with ν > 1, to the ARMA model relationship expressed in (7.59). The first thing to do is to recall the result developed in Exercise 7.12, where we showed that:
H(z) = 1 + (4/3) Σ_{k=1}^{∞} 0.9^k z^{−k},    (7.17)

implying that

h(n) = (4/3)(0.9)^n, ∀n > 0,    (7.18)
and h(0) = 1. As M = N = 1, we will use only the values h(0) = 1 and h(1) = 1.2, as can be readily verified in equation (7.59). Therefore, we can verify that the results derived here exactly match
the result in equation (7.59), since
RY(0) = (b0 h(0) + b1 h(1)) σX^2 − (−0.9)RY(1) = (1 × 1 + 0.3 × 1.2) σX^2 + 0.9RY(1)
RY(1) = (b1 h(0)) σX^2 − (−0.9)RY(0) = (0.3 × 1) σX^2 + 0.9RY(0)
RY(ν) = −(−0.9)RY(ν − 1) = 0.9RY(ν − 1), ∀ν > 1

where b0 = 1, b1 = 0.3, h(0) = 1, h(1) = 1.2, and the term involving RY(·) on each right-hand side corresponds to −Σ_i ai RY(ν − i) with a1 = −0.9.
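The derivation above can be checked with a short numerical sketch: solve the linear system for RY(0) and RY(1), recover α and the δ(ν) term of the general solution, and confirm the recursions hold.

```python
import numpy as np

# Solve RY(0) = 0.9 RY(1) + 1.36 and RY(1) = 0.9 RY(0) + 0.3
A = np.array([[1.0, -0.9],
              [-0.9, 1.0]])
b = np.array([1.36, 0.3])
R0, R1 = np.linalg.solve(A, b)

alpha = R1 / 0.9        # from RY(1) = 0.9 * alpha * 0.9^0
delta0 = alpha - R0     # coefficient of the delta(nu) correction at nu = 0

def R(nu):
    """General solution RY(nu) = alpha*0.9^|nu| - delta0*delta(nu)."""
    return alpha * 0.9 ** abs(nu) - (delta0 if nu == 0 else 0.0)
```

Running it reproduces RY(0) ≈ 8.58, RY(1) ≈ 8.02, and shows that the δ(ν) coefficient is exactly 1/3 ≈ 0.33, as stated above.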
7.16 Starting with the Yule-Walker equations for an MA process (with coefficients b0 = 1, b1 = 2, and b2 = 3), we have

RY(0) = (Σ_{j=0}^{2} bj^2) σX^2 = (1 + 4 + 9)σX^2 = 14σX^2
RY(1) = (Σ_{j=1}^{2} bj bj−1) σX^2 = (2 + 6)σX^2 = 8σX^2
RY(2) = (Σ_{j=2}^{2} bj bj−2) σX^2 = (3)σX^2 = 3σX^2

and for ν > 2 we have RY(ν) = 0.
Now, we can write the Wiener-Hopf equations

[ RY(0)  RY(1) ] [ a*1 ]   [ RY(1) ]
[ RY(1)  RY(0) ] [ a*2 ] = [ RY(2) ]

(i.e., RY a* = pY), and the solutions to these equations are the LP coefficients, which are given by

a* = RY^{-1} pY
   = 1/(RY^2(0) − RY^2(1)) [ RY(0)  −RY(1) ] [ RY(1) ]
                           [ −RY(1)  RY(0) ] [ RY(2) ]
   = 1/(196σX^4 − 64σX^4) [ 14σX^2  −8σX^2 ] [ 8σX^2 ]
                          [ −8σX^2  14σX^2 ] [ 3σX^2 ]
   = (1/132) [ 14  −8 ] [ 8 ]
             [ −8  14 ] [ 3 ]
   = (1/132) [ 88 ]
             [ −22 ]
   = (1/6) [ 4 ]
           [ −1 ]
   = [ 4/6  −1/6 ]^T ≈ [ 0.6667  −0.1667 ]^T
These are the LP coefficients which can be related to the coefficients of an AR system by
a1 = −a*1 = −0.6667
a2 = −a*2 = 0.1667
The MATLAB command lpc yields the coefficients a1 and a2. Table 7.1 illustrates the results obtained with this command, considering different values for the length of the input sequence, L. It can be noticed that these results improve as L grows.
Table 7.1: MATLAB results for different values of L.
L        a1       a2
1024     −0.6491  0.1456
5120     −0.6897  0.1741
10240    −0.6592  0.1619
102400   −0.6662  0.1685
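The experiment behind Table 7.1 can be sketched in numpy. This is a hedged approximation of what lpc does (biased autocorrelation estimates plus the normal equations); the length L and the random seed are arbitrary choices:

```python
import numpy as np

# Synthesize the MA process y(n) = x(n) + 2x(n-1) + 3x(n-2) with
# unit-variance white x(n); L and the seed are arbitrary (hypothetical).
rng = np.random.default_rng(0)
L = 200_000
x = rng.standard_normal(L)
y = np.convolve(x, [1.0, 2.0, 3.0])[:L]

# Biased autocorrelation estimates RY(0), RY(1), RY(2)
r = np.array([y[: L - k] @ y[k:] for k in range(3)]) / L

# Normal (Wiener-Hopf) equations for the LP coefficients a*
Rm = np.array([[r[0], r[1]],
               [r[1], r[0]]])
a_star = np.linalg.solve(Rm, r[1:])
# a_star should approach [4/6, -1/6]; lpc reports a_i = -a*_i
```

As in the table, the estimates approach the exact values 4/6 and −1/6 as L grows.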
7.17 Recalling that the error signal is
e(n) = y(n) − ŷ(n)
     = y(n) − Σ_{i=1}^{N} ai y(n − i)
where ai are the LP coefficients, we can define the following transfer function

D(z) = E(z)/Y(z) = 1 − Σ_{i=1}^{N} ai z^{−i} = 1 − a1 z^{−1} − ... − aN z^{−N}

where −ai, i = 1, 2, ..., N, are the coefficients of the AR model 1/D(z).
Since the LP coefficients minimize the MSE ξ, and e(n) may be seen as the output of the filter D(z) to an input signal y(n), we have

ξmin = E[|e(n)|^2] = RE(0) = (1/2π) ∫_{−π}^{π} ΓE(e^{jω}) dω = (1/2π) ∫_{−π}^{π} |D(e^{jω})|^2 ΓY(e^{jω}) dω.
Now, let us assume, by contradiction, that D(z) has one zero outside the unit circle. For convenience we will denote this zero by z1. We will show that such a D(z) would not lead to a minimum MSE.
Let us write D(z) as

D(z) = Π_{i=1}^{N} (1 − zi z^{−1}) = (1 − z1 z^{−1}) Π_{i=2}^{N} (1 − zi z^{−1}) = (1 − z1 z^{−1}) D′(z).
Then, we have

ξmin = (1/2π) ∫_{−π}^{π} |1 − z1 e^{−jω}|^2 |D′(e^{jω})|^2 ΓY(e^{jω}) dω.

However, since |z1| > 1, we know that |1 − z1 e^{−jω}|^2 > |1 − (1/z1*) e^{−jω}|^2. This means that we can reduce the MSE by exchanging every zero zi outside the unit circle by its reciprocal 1/zi*, because the other quantities in the integrand are nonnegative. Thus, as the LP coefficients are the result of the minimization of the MSE, all the zeros of D(z) (they correspond to the poles of the AR system) are inside the unit circle.
Proof that |1 − z1 e^{−jω}|^2 > |1 − (1/z1*) e^{−jω}|^2:

|1 − z1 e^{−jω}|^2 = |z1|^2 |1/z1 − e^{−jω}|^2
                   = |z1|^2 |e^{jω} − 1/z1*|^2
                   = |z1|^2 |1 − (1/z1*) e^{−jω}|^2
                   > |1 − (1/z1*) e^{−jω}|^2,  since |z1| > 1
The autocorrelation method, however, uses a biased estimate of the autocorrelation function, RY,b(ν). Thus, to complete the proof we should show that the PSD estimate ΓY(e^{jω}) = Σ_{ν=−L+1}^{L−1} RY,b(ν) e^{−jων} is a nonnegative function. This is already proven in the textbook, since it is the periodogram PSD estimator defined by Eq. (7.9).
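The key inequality in the proof above can be sanity-checked numerically: for any zero z1 outside the unit circle (the value below is an arbitrary example), the factor |1 − z1 e^{−jω}|^2 exceeds its reflected counterpart at every frequency, and the two differ exactly by the constant |z1|^2.

```python
import numpy as np

# z1 is an arbitrary example zero with |z1| > 1 (hypothetical value)
z1 = 1.5 * np.exp(1j * 0.7)
w = np.linspace(-np.pi, np.pi, 1001)
e = np.exp(-1j * w)

outside = np.abs(1 - z1 * e) ** 2               # zero outside the circle
reflected = np.abs(1 - (1 / np.conj(z1)) * e) ** 2  # reciprocal zero 1/z1*
```

Both assertions below follow from the chain of equalities in the proof: the ratio of the two factors is the constant |z1|^2 > 1.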
7.18 Consider a first-order AR process. Its LP coefficient may be determined as (see Eq. (7.84) of the book)
a*1 = RY,m(1, 0) / RY,m(1, 1)

where

RY,m(1, 0) = (1/K) Σ_{n=k}^{K+k−1} y(n − 1)y(n)
RY,m(1, 1) = (1/K) Σ_{n=k}^{K+k−1} y^2(n − 1)
Therefore, if, for example, the sequence y(n) is positive (meaning that each element of the sequence is greater than zero) and monotonically increasing, both conditions holding during the window interval K, we will have RY,m(1, 0) > RY,m(1, 1) and, consequently, |a*1| > 1, which means that the pole is outside the unit circle in the z-plane.
A numerical example in which this problem happens is described below. Consider the windowed-data sequence y(n) = n, for n = 1, 2, ..., 10. Now, let us consider that K = 9 and k = 2. Computing the correlations above, we have
RY,m(1, 0) = (1/9) Σ_{n=2}^{10} y(n − 1)y(n) ≈ 36.6667    (7.19)
RY,m(1, 1) = (1/9) Σ_{n=2}^{10} y^2(n − 1) ≈ 31.6667    (7.20)
which yield

a*1 ≈ 1.16    (7.21)

i.e., a pole at 1.16, which is outside the unit circle.
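The numerical example above is easy to reproduce exactly; the sketch below computes the two windowed correlations and the resulting pole location for y(n) = n, K = 9, k = 2:

```python
# Reproducing the example: y(n) = n for n = 1..10, window K = 9, k = 2
y = list(range(0, 11))   # y[n] = n, index 0 unused by the sums below
K, k = 9, 2

R10 = sum(y[n - 1] * y[n] for n in range(k, K + k)) / K   # n = 2..10
R11 = sum(y[n - 1] ** 2 for n in range(k, K + k)) / K
a1_star = R10 / R11      # = 330/285, approximately 1.1579
```

Since a1_star > 1, the estimated AR(1) pole indeed lies outside the unit circle, confirming the instability risk of the covariance method.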
7.19 Let x1(n) and x2(n) be first-order AR processes defined as
x1(n) = 0.8x1(n − 1) + v(n)
x2(n) = −0.8x2(n − 1) + v(n),
where v(n) is a zero-mean white noise with unit variance. Let y(n) be defined as
y(n) = x1(n) + x2(n).
It is rather straightforward to verify that:
Y(z) = ( z/(z + 0.8) + z/(z − 0.8) ) V(z) = ( 2z^2/(z^2 − 0.64) ) V(z) = ( 2/(1 − 0.64z^{-2}) ) V(z),
which implies that
y(n) = 0.64y(n− 2) + 2v(n).
Taking this relationship into account, we have that
RY(ν) = E{y(n)y(n − ν)} = 0.64 E{y(n − 2)y(n − ν)} + 2 E{v(n)y(n − ν)}
This way, for ν = 0, we have that
RY(0) = E{y(n)y(n)} = 0.64 E{y(n − 2)y(n)} + 2 E{v(n)y(n)},
where E{y(n − 2)y(n)} = 0.64 E{y(n − 2)y(n − 2)} = 0.64RY(0) and E{v(n)y(n)} = 2σV^2 = 2,
in which we used the following facts: E{v(n)y(n − 2)} = 0 and E{v(n)v(n − l)} = σV^2 δ(l).
Therefore,
RY(0) = (0.64)^2 RY(0) + 4 ⇔ RY(0) = 4/(1 − (0.64)^2) = 6.775.
For ν = 1, we have that
RY(1) = E{y(n)y(n − 1)} = 0.64 E{y(n − 2)y(n − 1)} + 2 E{v(n)y(n − 1)}
      = 0.64RY(1) + 0 ⇔ RY(1) = 0
On the other hand, for ν > 1, we have that
RY(ν) = E{y(n)y(n − ν)} = 0.64 E{y(n − 2)y(n − ν)} + 2 E{v(n)y(n − ν)}
      = 0.64RY(ν − 2)
It is straightforward to verify that a solution to such a recursive equation is
RY(ν) = α(0.8)^ν + β(−0.8)^ν = 0.64(α(0.8)^{ν−2} + β(−0.8)^{ν−2}) = 0.64RY(ν − 2)
The scalar numbers α and β can be determined by using the values for RY (0) and RY (1). This way,we have that:
α + β = RY(0) = 6.775
α − β = RY(1) = 0
which means that α = β = 3.3875.
This way, we have the following general solution:
RY(ν) = 3.3875[(0.8)^|ν| + (−0.8)^|ν|], ∀ν ∈ Z.
The linear prediction method computes the AR-process coefficients by solving the following linearsystem:
[ RY(0)  RY(−1)  RY(−2)  RY(−3) ] [ a*1 ]   [ RY(1) ]
[ RY(1)  RY(0)   RY(−1)  RY(−2) ] [ a*2 ] = [ RY(2) ]
[ RY(2)  RY(1)   RY(0)   RY(−1) ] [ a*3 ]   [ RY(3) ]
[ RY(3)  RY(2)   RY(1)   RY(0)  ] [ a*4 ]   [ RY(4) ]

(i.e., RY a*Y = pY). By substituting the values and calculating the solution vector a*Y, we have that

a*Y = [ 0  0.64  0  0 ]^T.
Based on this result, one can infer that the best estimate (in the sense of linear prediction) of the signal

y(n) = 0.64y(n − 2) + 2v(n)

is

ŷ(n) = 0.64y(n − 2),
for all N ≥ 2. In our case, we used N = 4, but the coefficients a*3 and a*4 are zero, implying that the actual order of the AR model is N′ = 2.
In addition, note that
e(n) = y(n) − ŷ(n) = 2v(n),
which means that all the predictable part of the signal was perfectly predicted. The residual part contains only the noise input contribution. Note that, if we did not have a well-defined model for y(n), we would not be able to deduce the exact expression for RY(ν). In this situation, one must estimate RY(ν) using some known nonparametric method. The resulting a*Y based on an estimate of RY(ν) could be slightly different. In the case of a good estimate of RY(ν), it is likely that the resulting a*2 ≈ 0.64, and all the other coefficients would be quite close to zero (but not exactly zero). This implies that the linear model would spend some effort trying to predict the noise v(n) as well.
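The linear-prediction system above can be solved numerically from the closed-form RY(ν), confirming that the 4th-order predictor collapses to ŷ(n) = 0.64y(n − 2):

```python
import numpy as np

# Closed-form autocorrelation derived above; written with the exact value
# of alpha = RY(0)/2 to avoid rounding (3.3875 is the rounded figure).
alpha = 0.5 * 4 / (1 - 0.64 ** 2)

def R(nu):
    return alpha * (0.8 ** abs(nu) + (-0.8) ** abs(nu))

# Build the 4x4 system RY a*_Y = pY (RY(nu) is even, so RY(-nu) = RY(nu))
RY = np.array([[R(i - j) for j in range(4)] for i in range(4)])
pY = np.array([R(nu) for nu in range(1, 5)])
a_star = np.linalg.solve(RY, pY)
```

The solution matches a*_Y = [0, 0.64, 0, 0]^T up to floating-point error.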
7.20 Starting with the Yule-Walker equations for an MA process (with coefficients b0 = 0.921, b1 =−1.6252, and b2 = 1), we have
RY(0) = (Σ_{j=0}^{2} bj^2) σX^2 = (4.4895)σX^2 = 4.4895
RY(1) = (Σ_{j=1}^{2} bj bj−1) σX^2 = (−3.1220)σX^2 = −3.1220
RY(2) = (Σ_{j=2}^{2} bj bj−2) σX^2 = (0.9210)σX^2 = 0.9210
since σX^2 = 1, and for ν > 2 we have RY(ν) = 0.
Now, we can write the Wiener-Hopf equations

[ RY(0)  RY(1)  RY(2) ] [ a*1 ]   [ RY(1) ]
[ RY(1)  RY(0)  RY(1) ] [ a*2 ] = [ RY(2) ]
[ RY(2)  RY(1)  RY(0) ] [ a*3 ]   [ RY(3) ]

(i.e., RY a* = pY), and the LP coefficients are given by

a* = RY^{-1} pY ≈ [ −1.2990  −0.9931  −0.4241 ]^T
These are the LP coefficients, which can be related to the coefficients of an AR system by

a1 = −a*1 = 1.2990
a2 = −a*2 = 0.9931
a3 = −a*3 = 0.4241
The MATLAB command lpc yields the coefficients a1, a2, and a3. Table 7.2 illustrates the results obtained with this command, considering different values for the length of the input sequence, L. It can be noticed that these results improve as L grows.
7.21 We used the covariance algorithm in order to estimate the autocorrelation function. Such an estimate is used in the solution of the AR modeling problem. The estimated PSD of the referred signal is obtained by using the AR model.
(a) In order to simplify our discussion, let us assume that the sample frequency is normalized to fs = 1.0 Hz. Fig. 7.25 depicts the obtained result (|HAR(e^{jω})|^2 [dB]). By observing this figure, one can verify that the covariance method detects a peak at 1/L = 1/1024 ≈ 0.001 Hz (normalized frequency). This occurs since the discrete frequency associated with x(n) is 2π/L rad/sample, implying that the normalized frequency of x(n) in Hz is 1/L ≈ 0.001 Hz. To deduce this result, we used the well-known fact that the normalized sample frequency fs = 1.0 Hz corresponds to 2π rad/sample. From this observation, we conclude that the covariance method was able to detect this sinusoid at its proper position. Furthermore, it is also possible to verify that the covariance method was able to detect a peak around 0.5 Hz. The reason for that can be understood based on Fig. 7.26, which depicts the squared magnitude of the frequency response of the system
H(z) = 1/(1 + 0.9z^{-1}).
By comparing Fig. 7.26 to Fig. 7.25, one can verify that the PSD estimate obtained with the co-variance method (with N = 4 and a = −0.9) was able to detect the shape of the noise componentx1(n).
Table 7.2: MATLAB results for different values of L.
L        a1      a2      a3
1024     1.2771  0.9591  0.3947
5120     1.2831  0.9751  0.4163
10240    1.3032  0.9939  0.4221
102400   1.2953  0.9860  0.4184
In addition, one can observe in Fig. 7.27 that the resulting AR model is stable (all poles are inside the unit circle and the system is causal). This is not guaranteed by the covariance method, but stability does show up in several practical problems.
(b) Fig. 7.28 depicts the obtained result (|HAR(e^{jω})|^2 [dB]). Once again, by observing this figure, one can verify that the covariance method detects a very sharp peak at 1/L = 1/1024 ≈ 0.001 Hz (normalized frequency). This peak is much better detected than in the previous item, as it is now even sharper. Furthermore, it is also possible to verify that the covariance method was able to detect a peak around 0.5 Hz. The reason for that can be understood based on Fig. 7.26, which depicts the squared magnitude of the frequency response of the system
H(z) = 1/(1 + 0.9z^{-1}).
By comparing Fig. 7.26 to Fig. 7.28, one can verify that the PSD estimate obtained with thecovariance method (with N = 20 and a = −0.9) was able to detect the shape of the noisecomponent x1(n) as well.
Figure 7.25: PSD estimate for the covariance method. In this example, N = 4 and a = −0.9.
Figure 7.26: Squared magnitude [dB] of the frequency response of the system H(z) = 1/(1 + 0.9z^{-1}).
Figure 7.27: Pole-zero plot for the AR model using N = 4.
Figure 7.28: PSD estimate for the covariance method. In this example, N = 20 and a = −0.9.
Figure 7.29: Pole-zero plot for the AR model using N = 20.
In addition, one can observe in Fig. 7.29 that the resulting AR model is not stable (there is a pole on the unit circle and the system is causal). A comparison between Fig. 7.25 and Fig. 7.28 shows that the peak at 0.001 Hz, as well as the shape of the signal x1(n), are much better detected when N = 20.
(c) Fig. 7.30 depicts the obtained result (|HAR(e^{jω})|^2 [dB]). By observing this figure, one can verify that the covariance method detects a peak around 1/L = 1/1024 ≈ 0.001 Hz (normalized frequency). This peak is not detected as well as in the previous items, since the noise shape has a peak around DC (see Fig. 7.31), yielding a masking effect for the detection of peaks near zero hertz. Fig. 7.31 depicts the squared magnitude of the frequency response of the system
H(z) = 1/(1 − 0.9z^{-1}),
which is responsible for shaping the broadband white noise, resulting in the colored signal x2(n). By comparing Fig. 7.31 to Fig. 7.30, one can verify that the PSD estimate obtained with the covariance method (with N = 4 and a = 0.9) was able to detect the shape of the noise component x2(n).
Figure 7.30: PSD estimate for the covariance method. In this example, N = 4 and a = 0.9.
Figure 7.31: Squared magnitude [dB] of the frequency response of the system H(z) = 1/(1 − 0.9z^{-1}).
Figure 7.32: Pole-zero plot for the AR model using N = 4.
In addition, one can observe in Fig. 7.32 that the resulting AR model is stable (all poles are in theunit circle and the system is causal).
7.22 In this exercise we want to estimate the PSD of a complex signal x(n). Therefore, in Fig. 7.33, which depicts the PSD of x(n), we consider the whole spectrum range [0, 2π]. The length of the signal is L = 512 (arbitrarily chosen).
Figure 7.33: PSD estimate considering: (a) biased autocorrelation estimation; (b) windowed-data unbiasedautocorrelation estimation.
It can be seen in Fig. 7.33 that both methods were able to estimate the frequency of the complex sinusoid, which is 0.25π. While the PSD estimate using the windowed-data unbiased autocorrelation estimate has a sharper peak at 0.25π, its variance is higher than that of the PSD estimate based on the biased autocorrelation estimate (i.e., the standard periodogram).
7.23 We used the biased autocorrelation algorithm in order to estimate the autocorrelation function. Such an estimate is used in the solution of the AR modeling problem. The PSD estimate related to the
referred signal is obtained by using the AR model.
(a) In order to simplify our discussion, let us assume that the sample frequency is normalized to fs = 1.0 Hz. Fig. 7.34 depicts the obtained result (|HAR(e^{jω})|^2 [dB]). By observing this figure, one can verify that the autocorrelation method detects a peak at 1/L = 1/1024 ≈ 0.001 Hz (normalized frequency). This occurs since the discrete frequency associated with x(n) is 2π/L rad/sample, implying that the normalized frequency of x(n) in Hz is 1/L ≈ 0.001 Hz. To deduce this result, we used the well-known fact that the normalized sample frequency fs = 1.0 Hz corresponds to 2π rad/sample. From this observation, we conclude that the autocorrelation method was able to detect this sinusoid at its proper position. Furthermore, it is also possible to verify that the autocorrelation method was able to detect a peak around 0.5 Hz. The reason for that can be understood based on Fig. 7.35, which depicts the squared magnitude of the frequency response of the system
H(z) = 1/(1 + 0.9z^{-1}).
By comparing Fig. 7.35 to Fig. 7.34, one can verify that the PSD estimate obtained with theautocorrelation method (with N = 4 and a = −0.9) was able to detect the shape of the noise
Figure 7.34: PSD estimate for the autocorrelation method. In this example, N = 4 and a = −0.9.
Figure 7.35: Squared magnitude [dB] of the frequency response of the system H(z) = 1/(1 + 0.9z^{-1}).
component x1(n).
Note that, even though we used a biased method to estimate the PSD of x(n), the frequency at 0.001 Hz was reasonably well detected, since there is no other frequency with significant energy near 0.001 Hz to move the estimate away from its proper position. Even the presence of the noise signal was not able to disturb the detection of this peak, due to the frequency response characteristic of the system H(z) = 1/(1 + 0.9z^{-1}). In addition, one can observe in Fig. 7.36 that the resulting AR model is stable (all poles are inside the unit circle and the system is causal). This is expected, since the autocorrelation method guarantees that the resulting AR model is stable.
(b) Fig. 7.37 depicts the obtained result (|HAR(e^{jω})|^2 [dB]). Once again, by observing this figure, one can verify that the autocorrelation method detects a rather sharp peak at 1/L = 1/1024 ≈ 0.001 Hz (normalized frequency). This peak was better detected than the one in the previous item, as it is now much sharper.
Furthermore, it is also possible to verify that the autocorrelation method was able to detect a peakaround 0.5 Hz. The reason for that can be understood based on Fig. 7.35, which depicts the squared
Figure 7.36: Pole-zero plot for the AR model using N = 4.
Figure 7.37: PSD estimate for the autocorrelation method. In this example, N = 20 and a = −0.9.
magnitude of the frequency response of the system
H(z) = 1/(1 + 0.9z^{-1}).
By comparing Fig. 7.35 to Fig. 7.37, one can verify that the PSD estimate obtained with theautocorrelation method (with N = 20 and a = −0.9) was able to detect the shape of the noisecomponent x1(n) as well.
In addition, one can observe in Fig. 7.38 that the resulting AR model is stable, even though theMATLAB function zplane shows a pole quite near the unit circle at DC. A comparison betweenFig. 7.34 and Fig. 7.37 shows that the peak at 0.001 Hz, as well as the shape of the signal x1(n)are better detected when N = 20.
(c) Fig. 7.39 depicts the obtained result (|HAR(e^{jω})|^2 [dB]). By observing this figure, one can verify that the autocorrelation method detects a peak around 1/L = 1/1024 ≈ 0.001 Hz (normalized frequency). This peak is not detected as well as in the previous items, since the noise shape has a peak around DC (see Fig. 7.40), yielding a masking effect for the detection of peaks near zero hertz.
Figure 7.38: Pole-zero plot for the AR model using N = 20.
Figure 7.39: PSD estimate for the autocorrelation method. In this example, N = 4 and a = 0.9.
Fig. 7.40 depicts the squared magnitude of the frequency response of the system

H(z) = 1/(1 − 0.9z^{-1}),
which is responsible for shaping the broadband white noise, resulting in the colored signal x2(n). By comparing Fig. 7.40 to Fig. 7.39, one can verify that the PSD estimate obtained with the autocorrelation method (with N = 4 and a = 0.9) was able to detect the shape of the noise component x2(n). In addition, one can observe in Fig. 7.41 that the resulting AR model is stable (all poles are inside the unit circle and the system is causal), as expected.
7.24 In this exercise we evaluate the number of operations performed by an Nth-order Levinson-Durbin algorithm. First, we consider that step (i) is not part of the algorithm, i.e., the results presented here do not include the number of operations necessary to determine the autocorrelation. Step (ii) of the algorithm does not require any arithmetic operation. So, let us start by evaluating step (iii). We will evaluate each stage of step (iii) (i.e., equations (7.95), (7.96), and (7.97) of the book)
Figure 7.40: Squared magnitude [dB] of the frequency response of the system H(z) = 1/(1 − 0.9z^{-1}).
Figure 7.41: Pole-zero plot for the AR model using N = 4.
considering two main cases, i = 1 and i ≠ 1:¹
(a) Considering i = 1:
Table 7.3: Number of arithmetic operations necessary to compute the iteration i = 1.
Computation of: \ Number of:   additions   subtractions   multiplications   divisions
k1                             0           0              0                 1
σX^2,[1]                       0           1              2                 0
a*[1]                          0           0              0                 0
(b) Considering i 6= 1:
Table 7.4: Number of arithmetic operations necessary to compute any iteration i ≠ 1.
Computation of: \ Number of:   additions   subtractions   multiplications   divisions
ki                             i − 2       1              i − 1             1
σX^2,[i]                       0           1              2                 0
a*[i]                          0           i − 1          i − 1             0
Using Table 7.4 we can compute the total number of operations required from i = 2 to i = N , shownin Table 7.5. Notice that most of the cases lead to an AP (arithmetic progression) sum of N − 1 terms,in which the first term is calculated considering i = 2 and the last term considering i = N .
Table 7.5: Number of arithmetic operations necessary to compute the recursion from i = 2 to i = N.
Computation of: \ Number of:   additions        subtractions   multiplications   divisions
ki                             (N−2)(N−1)/2     N − 1          N(N−1)/2          N − 1
σX^2,[i]                       0                N − 1          2(N − 1)          0
a*[i]                          0                N(N−1)/2       N(N−1)/2          0
Finally, Table 7.6 shows the total number of arithmetic operations required to compute an N th-orderLevinson-Durbin recursion, i.e., it is the result of summing Table 7.5 with Table 7.3.
Table 7.6: Total number of arithmetic operations necessary to compute the recursions from i = 1 to i = N.
Computation of: \ Number of:   additions        subtractions        multiplications   divisions
ki                             (N^2−3N+2)/2     N − 1               (N^2−N)/2         N
σX^2,[i]                       0                N                   2N                0
a*[i]                          0                (N^2−N)/2           (N^2−N)/2         0
TOTAL                          (N^2−3N+2)/2     (N^2−N)/2 + 2N − 1  N^2 + N           N
If one is not concerned about the type of arithmetic operation (addition, subtraction, etc.), we can sum the last row of Table 7.6 to obtain the total number of arithmetic operations, which is

2N^2 + 2N
So, as it was said in the book, the computational complexity is proportional to N2.
¹The reason for this separation is that the first iteration does not follow the same pattern as the others (see the computation of k1 in equation (7.95) of the book).
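The O(N^2) behavior counted above can be seen directly in a minimal implementation of the recursion. The sketch below is a hedged reference version whose conventions may differ slightly from the book's equations (7.95)-(7.97); each iteration i does O(i) work, so the total work grows as N^2.

```python
import numpy as np

def levinson_durbin(r, N):
    """Given autocorrelations r[0..N], return the prediction polynomial
    A(z) = 1 + a[1] z^-1 + ... + a[N] z^-N and the final prediction-error
    power. (Minimal sketch; sign conventions may differ from the book.)"""
    a = np.zeros(N + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, N + 1):
        # O(i) inner products per iteration -> O(N^2) overall
        acc = r[i] + sum(a[j] * r[i - j] for j in range(1, i))
        k = -acc / err
        a[1:i] = a[1:i] + k * a[i - 1:0:-1]   # update lower-order coeffs
        a[i] = k
        err *= 1.0 - k * k
    return a, err

# Example: AR(1) process x(n) = 0.9 x(n-1) + v(n), unit-variance v(n),
# has r(nu) = 0.9^|nu| / (1 - 0.81).
r = np.array([0.9 ** k for k in range(3)]) / (1 - 0.81)
a, err = levinson_durbin(r, 2)
```

For this AR(1) example the recursion recovers a[1] = −0.9, a[2] = 0, and a final error power equal to the driving-noise variance 1.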
7.25 It was shown in the book that equating the first derivative of ξB,[i] to 0 yields ki. So, we only have to show that the second derivative is positive in order to state that ki is a minimum point. Evaluating the second derivative, we have:
∂^2 ξB,[i] / ∂ki^2 = (1/(L − i)) Σ_{n=i}^{L−1} xb,[i−1]^2(n − 1) + (1/(L − i)) Σ_{n=i}^{L−1} xf,[i−1]^2(n) ≥ 0
where the equality ∂^2 ξB,[i] / ∂ki^2 = 0 holds if and only if all terms of the two summations above are zero, which will not happen in practical applications. Therefore, the second derivative may be considered positive and, consequently, ki is a minimum point of the function ξB,[i].
7.26 First, we introduce the first-order transfer function H(z) as

H(z) = X1(z)/X2(z) = 1/(1 − az^{-1})
which has a pole at z = a.
Now, let us use Burg's reflection coefficients to evaluate the PSD of the signal x(n) in the following scenarios:
(a) Here, Fig. 7.42 depicts the magnitude frequency response of H(z). The pole position, a = −0.9, justifies the gain at frequency π. Fig. 7.43 illustrates the PSD estimate of x(n) considering an Nth-order AR process, with N = 4. We can see that the sinusoidal component at frequency π/512, as well as the resonance frequency of H(z), are properly located. The AR parameters generated by this method are described by the following polynomial: 1.0000 + 0.6001z^{-1} − 0.5628z^{-2} − 0.6273z^{-3} − 0.3462z^{-4}.
(b) In this item, since a = −0.9, as in the last item, Fig. 7.42 also depicts the magnitude frequency response of H(z). However, now the model order is N = 20, which yields a PSD estimate (see Fig. 7.44) that is much sharper at the correct frequencies (π/512 and π) and leads to a finer PSD estimate (in comparison to the last item, where N = 4 was used). The AR parameters generated by this method are described by the following polynomial: 1.0000 + 0.8405z^{-1} − 0.1069z^{-2} − 0.1262z^{-3} − 0.1596z^{-4} − 0.1406z^{-5} − 0.1196z^{-6} − 0.1291z^{-7} − 0.0605z^{-8} − 0.0290z^{-9} − 0.0468z^{-10} − 0.0598z^{-11} − 0.0797z^{-12} − 0.0817z^{-13} − 0.0948z^{-14} − 0.0951z^{-15} − 0.1113z^{-16} − 0.1489z^{-17} − 0.1306z^{-18} − 0.0715z^{-19} − 0.0375z^{-20}.
Figure 7.42: Magnitude response of the first-order AR process x1(n).
(c) Here, Fig. 7.45 depicts the magnitude frequency response of H(z). The pole position, a = 0.9, justifies the gain at frequency 0. Fig. 7.46 illustrates the PSD estimate of x(n) considering an Nth-order AR process, with N = 4. In this case, since the sinusoidal component at frequency π/512 and the resonance frequency of H(z) are very close to each other, the PSD estimate revealed just one peak at those frequencies (because N is small as well). The AR parameters generated by this method are described by the following polynomial: 1.0000 − 0.9397z^{-1} + 0.0230z^{-2} − 0.0462z^{-3} + 0.0381z^{-4}.
Figure 7.43: PSD estimate of x(n) using N = 4.
Figure 7.44: PSD estimate of x(n) using N = 20.
Figure 7.45: Magnitude response of the first-order AR process x1(n).
Figure 7.46: PSD estimate of x(n) using N = 4.
7.27 Using matrix notation, we can rewrite ki as
ki = 2 xf^T xb / (‖xf‖^2 + ‖xb‖^2) = 2‖xf‖‖xb‖ cos(θ) / (‖xf‖^2 + ‖xb‖^2)
where θ is the angle between xf and xb, which are defined as
xf = [ xf,[i−1](i)  · · ·  xf,[i−1](L − 1) ]^T
xb = [ xb,[i−1](i − 1)  · · ·  xb,[i−1](L − 2) ]^T
Re-organizing the terms we have
cos(θ) = ki (‖xf‖^2 + ‖xb‖^2) / (2‖xf‖‖xb‖)
Since |cos(θ)| ≤ 1 and (‖xf‖^2 + ‖xb‖^2)/(2‖xf‖‖xb‖) is already a positive number, we have

|ki| (‖xf‖^2 + ‖xb‖^2) / (2‖xf‖‖xb‖) ≤ 1
|ki| (‖xf‖/‖xb‖ + ‖xb‖/‖xf‖) ≤ 2
|ki| (a + 1/a) ≤ 2
|ki| ≤ 2 / (a + 1/a)    (7.22)
where a = ‖xf‖/‖xb‖ is a positive number. As expected, the bound in Equation (7.22) is the ratio of two positive numbers. Now, the initial problem is mapped into a problem of finding the worst case, i.e., we want to find the maximum value of this ratio in order to obtain a bound that is valid for all cases (independent of the value of a). This can be done by introducing f(a) = a + a^{-1} and computing the first and second derivatives²:
df(a)/da = 1 − a^{-2} = 0 → a = 1
d^2 f(a)/da^2 |_{a=1} = 2a^{-3} |_{a=1} = 2 > 0 → a = 1 is a minimum point
2Maximizing the ratio is equivalent to minimizing the denominator.
Substituting a = 1 into Equation (7.22), we have

|ki| ≤ 1

where the equality occurs when xf = xb (which does not happen in practice).
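The bound |ki| ≤ 1 can be sanity-checked numerically: for any pair of real vectors, the quantity 2 xf^T xb / (‖xf‖^2 + ‖xb‖^2) lies in [−1, 1], with |ki| = 1 only when xf = ±xb. The vector length and seed below are arbitrary.

```python
import numpy as np

# Evaluate k = 2 xf.xb / (||xf||^2 + ||xb||^2) for many random vector pairs
rng = np.random.default_rng(7)
ks = []
for _ in range(1000):
    xf = rng.standard_normal(20)
    xb = rng.standard_normal(20)
    ks.append(2 * (xf @ xb) / (xf @ xf + xb @ xb))
ks = np.array(ks)
```

Every sampled value respects the bound, and none reaches 1 in magnitude, consistent with equality holding only in the degenerate case xf = ±xb.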
An alternative way of proving this result is to first assume that xb,[i−1](n − 1) ≠ xf,[i−1](n) for all n between i and L − 1. If this reasonable assumption did not hold, we could have ki = 1, as can be verified in eq. (7.109). Let us also assume that all involved quantities are real, as done in the textbook.
We know that, for any vectors a, b ∈ R^M, with 0 ≠ a ≠ b ≠ 0, we have that

‖a − b‖_2^2 = (a − b)^T (a − b) = ‖a‖_2^2 + ‖b‖_2^2 − 2a^T b > 0,

which yields

2a^T b / (‖a‖_2^2 + ‖b‖_2^2) < 1.
Therefore, by setting M = L − i, a = [ xb,[i−1](i − 1)  xb,[i−1](i)  · · ·  xb,[i−1](L − 2) ]^T and b = [ xf,[i−1](i)  xf,[i−1](i + 1)  · · ·  xf,[i−1](L − 1) ]^T, we have the required result, i.e.,

ki = ( 2 Σ_{n=i}^{L−1} xf,[i−1](n) xb,[i−1](n − 1) ) / ( Σ_{n=i}^{L−1} ( xb,[i−1]^2(n − 1) + xf,[i−1]^2(n) ) ) < 1    (7.23)
In order to show that ξf,[i] is a decreasing function of i, let us first verify that

xf,[i]^2(n) = xf,[i−1]^2(n) + ki^2 xb,[i−1]^2(n − 1) − 2ki xf,[i−1](n) xb,[i−1](n − 1),    (7.24)

which yields

(L − i)ξf,[i] = Σ_{n=i}^{L−1} xf,[i]^2(n)
= Σ_{n=i}^{L−1} [xf,[i−1]^2(n) + ki^2 xb,[i−1]^2(n − 1)] − 2ki Σ_{n=i}^{L−1} xf,[i−1](n) xb,[i−1](n − 1)
= Σ_{n=i}^{L−1} [xf,[i−1]^2(n) + ki^2 xb,[i−1]^2(n − 1)] − ki^2 Σ_{n=i}^{L−1} [xf,[i−1]^2(n) + xb,[i−1]^2(n − 1)]
= (1 − ki^2) Σ_{n=i}^{L−1} xf,[i−1]^2(n)
≤ Σ_{n=i}^{L−1} xf,[i−1]^2(n) ≤ Σ_{n=i}^{L−1} xf,[i−1]^2(n) + xf,[i−1]^2(i − 1) = Σ_{n=i−1}^{L−1} xf,[i−1]^2(n),    (7.25)

where the third equality uses the definition of ki to replace 2ki Σ xf,[i−1](n)xb,[i−1](n − 1) with ki^2 Σ [xf,[i−1]^2(n) + xb,[i−1]^2(n − 1)].
Note that, from this reasoning, we have already proved the following result:

Σ_{n=i}^{L−1} xf,[i]^2(n) ≤ Σ_{n=i−1}^{L−1} xf,[i−1]^2(n).    (7.26)
In addition, we have that

ξf,[i] = (1/(L − i)) Σ_{n=i}^{L−1} xf,[i]^2(n) ≤ (1/(L − i)) Σ_{n=i}^{L−1} xf,[i−1]^2(n)
= (1/(L − i)) [(L − i + 1)ξf,[i−1] − xf,[i−1]^2(i − 1)]
= ξf,[i−1] + (1/(L − i)) [ξf,[i−1] − xf,[i−1]^2(i − 1)]    (7.27)
Taking this inequality into consideration, we were not able to prove that ξf,[i] ≤ ξf,[i−1]. Note that, in order to do so, we should prove that ξf,[i−1] ≤ xf,[i−1]^2(i − 1). We used the word “should” because we do not know whether there is another method that arrives at a different relationship between ξf,[i] and ξf,[i−1].
7.28 In this exercise we will write the Levinson-Durbin coefficients as functions of RY(ν). The first one is easy:
k1 = RY(1)/RY(0)    (7.28)
Other first-order variables are:
σX^2,[1] = (1 − k1^2) σX^2,[0] = (1 − (RY(1)/RY(0))^2) RY(0) = (RY^2(0) − RY^2(1)) / RY(0)    (7.29)

a*1,[1] = k1 = RY(1)/RY(0)    (7.30)
Then, substituting equations (7.29) and (7.30) in the following expression leads to k2:

k2 = (RY(2) − a*1,[1] RY(1)) / σX^2,[1] = (RY(2)RY(0) − RY^2(1)) / (RY^2(0) − RY^2(1))    (7.31)
Computing the other second-order variables we come to
σX^2,[2] = (1 − k2^2) σX^2,[1]    (7.32)
a*1,[2] = a*1,[1] − k2 a*1,[1]    (7.33)
a*2,[2] = k2    (7.34)
Finally, substituting equations (7.32), (7.33), and (7.34) into the expression of k3 we have, after some straightforward manipulations:

k3 = (RY(3) − a*1,[2] RY(2) − a*2,[2] RY(1)) / σX^2,[2]

   = { RY(3)[RY^2(0) − RY^2(1)] − RY(1)RY(2)[RY(0) − RY(2)] − RY(1)[RY(2)RY(0) − RY^2(1)] } / { [RY(0) − RY(2)][RY^2(0) + RY(2)RY(0) − 2RY^2(1)] }
7.29 In this exercise we aim to estimate the PSD of an ARMA process using two different methods: the standard periodogram and the autocorrelation method.
Fig. 7.47 depicts the magnitude response of the ARMA process. Fig. 7.48 illustrate the PSD estimateusing the standard periodogram method. We can see that the periodogram estimate achieved a PSDestimate which is quite close to the true PSD, but with a high variance. We used a sequence of lengthL = 1024 (i.e., we used a sample-size that is not small).
Figure 7.47: PSD of the ARMA process.
Figure 7.48: PSD estimate using the standard periodogram.
Finally, Fig. 7.49 depicts the PSD estimate using the autocorrelation method for different values of N (the AR-model order). We can see that the PSD estimate becomes better (i.e., more similar to the true PSD given in Fig. 7.47) as N increases. This agrees with the theory: since the ARMA process has two finite zeros that are not at the origin, an infinite number of poles would be necessary to model each zero (see the Decomposition Theorem).
Comparing the PSD estimates given in Figures 7.48 and 7.49, we can see that the former provides a noisy representation of the process, whereas the latter provides a concise representation of the ARMA process, i.e., instead of estimating the PSD at each frequency, we estimate the best (in terms of MSE) AR representation for the ARMA process with N + 1 coefficients.
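The standard periodogram used above can be sketched as follows. The ARMA coefficients b and a below are hypothetical stand-ins (the exercise's exact process is not reproduced in this text), and the final check only verifies the Parseval property that makes |DFT|^2/L a valid power estimate.

```python
import numpy as np

rng = np.random.default_rng(0)
L = 1024
w = rng.standard_normal(L)                 # white-noise input

# Hypothetical stand-in ARMA(2,2) process: two non-trivial zeros (MA)
# and two poles (AR), computed recursively.
b = [1.0, 0.5, 0.25]
a = [1.0, -0.9, 0.5]
x = np.zeros(L)
for n in range(L):
    acc = sum(b[k] * w[n - k] for k in range(3) if n - k >= 0)
    acc -= sum(a[k] * x[n - k] for k in range(1, 3) if n - k >= 0)
    x[n] = acc

# Standard periodogram: |DFT|^2 / L over the positive frequencies.
X = np.fft.rfft(x)
periodogram = (np.abs(X) ** 2) / L

# Sanity check (Parseval): the periodogram integrates to the signal
# power, which is what makes it an unbiased (high-variance) estimate.
power_time = np.sum(x ** 2) / L
power_freq = (periodogram[0] + 2 * periodogram[1:-1].sum() + periodogram[-1]) / L
assert abs(power_time - power_freq) < 1e-8 * power_time
```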
Figure 7.49: PSD estimate using the autocorrelation method considering (a) N = 1; (b) N = 2; (c) N = 3; (d) N = 4.
7.30 Using matrix notation, we can show that

ξ = E[e^2(n)] = E[(y(n) − w^T x(n))^2]
  = E[y^2(n)] − 2 w^T E[y(n) x(n)] + w^T E[x(n) x^T(n)] w
  = E[y^2(n)] − 2 w^T p_{YX} + w^T R_X w

where R_X is the correlation matrix of the process X, and p_{YX} is the cross-correlation vector, which were defined in Section 7.6 of the text.
By choosing w = w* = R_X^{-1} p_{YX} and substituting in the equation above, we have the minimum value of ξ (since the Hessian matrix R_X is positive semidefinite), denoted by ξ_min:

ξ_min = E[y^2(n)] − 2 p_{YX}^T R_X^{-1} p_{YX} + p_{YX}^T R_X^{-1} R_X R_X^{-1} p_{YX}
      = E[y^2(n)] − p_{YX}^T R_X^{-1} p_{YX}

where we used the fact that R_X is symmetric and, therefore, its inverse is also symmetric.
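Both facts (the closed form for ξ_min and the optimality of w*) can be verified numerically with sample estimates of R_X and p_{YX}; the data-generation model below is an arbitrary assumption made only for this test.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical jointly stationary data: x(n) is a 3-tap regressor
# vector and y(n) a noisy linear function of it.
N, taps = 20000, 3
X = rng.standard_normal((N, taps))
w_true = np.array([0.7, -0.2, 0.1])
y = X @ w_true + 0.1 * rng.standard_normal(N)

R = X.T @ X / N            # sample estimate of R_X
p = X.T @ y / N            # sample estimate of p_YX
w_star = np.linalg.solve(R, p)

def mse(w):
    e = y - X @ w
    return np.mean(e ** 2)

xi_min_formula = np.mean(y ** 2) - p @ np.linalg.solve(R, p)

# xi_min from the closed form matches the empirical MSE at w*,
assert abs(mse(w_star) - xi_min_formula) < 1e-10
# and a perturbed filter does worse (R_X positive semidefinite).
assert mse(w_star + 0.05) >= mse(w_star)
```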
7.31 In this exercise we have the system H1(z) given by

H1(z) = 1/(z + 0.6) = z^{-1}/(1 + 0.6 z^{-1})

and the system H(z) given by

H(z) = 1/(z^2 − 0.6^2) = [1/(z + 0.6)] × [1/(z − 0.6)] = H1(z) × z^{-1}/(1 − 0.6 z^{-1})

i.e., H(z) is the cascade of H1(z) and H2(z) = z^{-1}/(1 − 0.6 z^{-1}).
Applying w(n), which is a white noise with zero mean and variance σ_w^2 = 1, to the system H1(z), we have y(n) as the output signal. Then, applying y(n) to the system H2(z), we have x(n) as its output. We want to calculate the Wiener filter that best maps y(n) to x(n).
We already know that the Wiener filter is given by w* = R_Y^{-1} p_{XY}, where R_Y is the correlation matrix of the process Y and p_{XY} is the cross-correlation vector between the processes X and Y. So, we just have to determine these two quantities.
Considering H1(z) we can write the following relation:
y(n) = w(n− 1)− 0.6y(n− 1)
and multiplying it by y(n− ν) and taking the expected value, we have
R_Y(ν) = E[y(n) y(n−ν)] = E[(w(n−1) − 0.6 y(n−1)) y(n−ν)]
       = E[w(n−1) y(n−ν)] − 0.6 R_Y(ν−1)
       = E[w(n−1) (w(n−(ν+1)) − 0.6 y(n−(ν+1)))] − 0.6 R_Y(ν−1)

which yields, for ν = 0, 1,

R_Y(0) = σ_w^2 − 0.6 R_Y(−1) = σ_w^2 − 0.6 R_Y(1)
R_Y(1) = −0.6 R_Y(0)

Solving the system above (it is already solved in the book) we have

R_Y(ν) = σ_w^2 (−0.6)^{|ν|} / [1 − (−0.6)^2]
Now, considering H2(z) we have the following input-output relation
x(n) = y(n− 1) + 0.6x(n− 1) (7.35)
and multiplying it by y(n− i) and taking the expected value, we have
p_{XY}(i) = E[x(n) y(n−i)] = E[(y(n−1) + 0.6 x(n−1)) y(n−i)] = R_Y(i−1) + 0.6 p_{XY}(i−1)
Computing pXY (0):
p_{XY}(0) = R_Y(1) + 0.6 p_{XY}(−1)   (7.36)
where
p_{XY}(−1) = E[x(n) y(n+1)] = E[x(n)(w(n) − 0.6 y(n))] = E[x(n) w(n)] − 0.6 E[x(n) y(n)] = −0.6 p_{XY}(0)   (7.37)
Solving the system composed of equations (7.36) and (7.37), we have the following value for p_{XY}(0):

p_{XY}(0) = R_Y(1) / (1 + 0.6^2)

and, for i = 1, 2, we have

p_{XY}(1) = R_Y(0) + 0.6 p_{XY}(0) = R_Y(0) + 0.6 R_Y(1)/(1 + 0.6^2)
p_{XY}(2) = R_Y(1) + 0.6 p_{XY}(1) = R_Y(1) + 0.6 R_Y(0) + 0.6^2 R_Y(1)/(1 + 0.6^2)

So, we have determined the correlation matrix and the cross-correlation vector as functions of R_Y(0), R_Y(1), and R_Y(2), which are:

R_Y(0) = 1.5625
R_Y(1) = −0.9375
R_Y(2) = 0.5625

Now, we can compute w*, leading to the following solution:

w* = R_Y^{-1} p_{XY} = [0  1  0.4412]^T
If we pay close attention to relation (7.35), we will see that x(n) does not depend on y(n), which justifies the first element of the Wiener filter being equal to zero.
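The solution above is easy to verify numerically: building R_Y (Toeplitz) and p_XY from the closed-form expressions and solving the normal equations reproduces w* = [0 1 0.4412]^T. A short sketch in Python (used here for illustration in place of the manual's MATLAB):

```python
import numpy as np

# Numerical check of exercise 7.31.
sw2 = 1.0                                          # sigma_w^2
RY = lambda nu: sw2 * (-0.6) ** abs(nu) / (1 - (-0.6) ** 2)

R = np.array([[RY(0), RY(1), RY(2)],               # Toeplitz R_Y
              [RY(1), RY(0), RY(1)],
              [RY(2), RY(1), RY(0)]])

p0 = RY(1) / (1 + 0.6 ** 2)                        # closed forms above
p1 = RY(0) + 0.6 * p0
p2 = RY(1) + 0.6 * p1
p = np.array([p0, p1, p2])

w_star = np.linalg.solve(R, p)
# matches w* = [0  1  0.4412]^T (0.6/1.36 = 0.44117...)
assert np.allclose(w_star, [0.0, 1.0, 0.6 / 1.36], atol=1e-9)
```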
Chapter 8
MULTIRATE SYSTEMS
8.1
D_M{X(z)} H(z) ←→ x(nM) ∗ h(n)

x(nM) ∗ h(n) = Σ_{k=-∞}^{∞} h(k) x((n-k)M)
             = Σ_{p=-∞, p=kM}^{∞} h(p/M) x(nM - p),   (p = kM)
             = Σ_{p=-∞, p=kM}^{∞} h_M(p) x(nM - p),   where h_M(p) = { h(p/M), p = rM, r ∈ Z; 0, other values }
             = Σ_{p=-∞}^{∞} h_M(p) x(nM - p),   once h_M(p) = 0 ∀ p ≠ kM
             = D_M{x(n) ∗ h_M(n)}

D_M{x(n) ∗ h_M(n)} ←→ D_M{X(z) H(z^M)}

Therefore, D_M{X(z)} H(z) = D_M{X(z) H(z^M)}.
I_M{X(z) H(z)} ←→ I_M{x(n) ∗ h(n)}

I_M{x(n) ∗ h(n)} = I_M{ Σ_{k=-∞}^{∞} x(k) h(n-k) }
                 = { Σ_{k=-∞}^{∞} x(k) h(n/M - k),  n = rM, r ∈ Z;  0, n ≠ rM, r ∈ Z }
                 = 0, ∀ n ≠ rM, r ∈ Z
If n = rM:

I_M{x(n) ∗ h(n)} = Σ_{k=-∞}^{∞} x(k) h(n/M - k)
                 = Σ_{p=-∞, p=kM}^{∞} x(p/M) h(n/M - p/M),   (p = kM)
                 = Σ_{p=-∞}^{∞} x_I(p) h_I(n - p),   where x_I(n) = I_M{x(n)} and h_I(n) = I_M{h(n)}

And ∀ n:

I_M{x(n) ∗ h(n)} = x_I(n) ∗ h_I(n) = I_M{x(n)} ∗ I_M{h(n)} ←→ I_M{X(z)} H(z^M)

Therefore, I_M{X(z) H(z)} = I_M{X(z)} H(z^M).
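Both noble identities proved in this exercise can be checked numerically on finite-length sequences; the signal and filter below are random stand-ins chosen only for the test.

```python
import numpy as np

rng = np.random.default_rng(2)
M = 3
x = rng.standard_normal(60)
h = rng.standard_normal(5)

def upsample(s, M):
    out = np.zeros(len(s) * M)
    out[::M] = s
    return out

# Decimator identity: D_M{x} * h  ==  D_M{ x * h_M },
# where h_M is the impulse response of H(z^M).
hM = upsample(h, M)
lhs = np.convolve(x[::M], h)
rhs = np.convolve(x, hM)[::M]
n = min(len(lhs), len(rhs))
assert np.allclose(lhs[:n], rhs[:n])

# Interpolator identity: I_M{x * h} == I_M{x} * I_M{h}.
lhs2 = upsample(np.convolve(x, h), M)
rhs2 = np.convolve(upsample(x, M), upsample(h, M))
n2 = min(len(lhs2), len(rhs2))
assert np.allclose(lhs2[:n2], rhs2[:n2])
```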
8.2 Using equation (9.38) of the textbook:

H_{00}(z) H_{11}(z) − H_{01}(z) H_{10}(z) = c z^{-l}
H_{00}(z^2) H_{11}(z^2) − H_{01}(z^2) H_{10}(z^2) = c z^{-2l}

[(H_0(z) + H_0(-z))/2] [(H_1(z) − H_1(-z))/(2 z^{-1})] − [(H_0(z) − H_0(-z))/(2 z^{-1})] [(H_1(z) + H_1(-z))/2] = c z^{-2l}

[H_0(z)H_1(z) − H_0(z)H_1(-z) + H_0(-z)H_1(z) − H_0(-z)H_1(-z)]/(4 z^{-1}) − [H_0(z)H_1(z) + H_0(z)H_1(-z) − H_0(-z)H_1(z) − H_0(-z)H_1(-z)]/(4 z^{-1}) = c z^{-2l}

H_0(-z) H_1(z) − H_0(z) H_1(-z) = 2 c z^{-2l-1}

By using equations (9.37) and (9.38) of the textbook:

[ G_{00}(z)  G_{10}(z) ]   =  (z^{-Δ+l}/c) [ H_{11}(z)   −H_{01}(z) ]
[ G_{01}(z)  G_{11}(z) ]                   [ −H_{10}(z)   H_{00}(z) ]

G_0(z) = z^{-1} G_{00}(z^2) + G_{01}(z^2)
       = z^{-1} (z^{2(l-Δ)}/c) H_{11}(z^2) − (z^{2(l-Δ)}/c) H_{10}(z^2)
       = −(z^{2(l-Δ)}/c) [H_{10}(z^2) − z^{-1} H_{11}(z^2)]

G_0(z) = −(z^{2(l-Δ)}/c) H_1(-z)

G_1(z) = z^{-1} G_{10}(z^2) + G_{11}(z^2)
       = (z^{2(l-Δ)}/c) [−z^{-1} H_{01}(z^2) + H_{00}(z^2)]

G_1(z) = (z^{2(l-Δ)}/c) H_0(-z)
241
8.3 Exercise 8.3
8.4 (a) Decimation:

x_d(n) = D_M{x(m)}  ⇒  x_d(n) = x(nM)
y_d(n) = D_M{x(m − n_0)}  ⇒  y_d(n) = x(nM − n_0)
x_d(n − n_0) = x(nM − n_0 M)

Thus,

x_d(n − n_0) ≠ D_M{x(n − n_0)}

But, if n_0 = rM,

y_d(n) = x(nM − rM) = x((n − r)M)
x_d(n − r) = x((n − r)M),  i.e.,  x_d(n − r) = D_M{x(n − rM)}

Hence, we can say that the decimator is periodically time-invariant.
(b) Interpolation:

x_i(n) = I_L{x(m)}:  x_i(n) = { x(n/L), n = kL, k ∈ Z;  0, otherwise }
y_i(n) = I_L{x(m − n_0)}:  y_i(n) = { x(n/L − n_0), n = kL, k ∈ Z;  0, otherwise }
x_i(n − n_0) = { x((n − n_0)/L), n − n_0 = kL, k ∈ Z;  0, otherwise }

Thus, without loss of generality, we can say that

I_L{x(m − r)} = x_i(n − rL)

and so the interpolator is time-invariant¹.
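A small numerical illustration of the decimator conclusion (it commutes with delays only when those are multiples of M, i.e., it is periodically time-invariant); the signal x(n) = n below is an arbitrary choice for readability.

```python
import numpy as np

M = 3
x = np.arange(30)                          # x(n) = n, easy to inspect

def decimate(s):
    return s[::M]                          # x(nM)

def shift(s, n0):
    # delay by n0 samples, zero-padding at the border
    return np.array([s[n - n0] if 0 <= n - n0 < len(s) else 0
                     for n in range(len(s))])

# n0 = 1 (not a multiple of M): the operations do not commute.
a = decimate(shift(x, 1))[1:]              # drop the edge sample
b = shift(decimate(x), 1)[1:]
assert not np.array_equal(a, b)

# n0 = M: a delay of M input samples equals a delay of one output
# sample -> periodically time-invariant.
c = decimate(shift(x, M))
d = shift(decimate(x), 1)
assert np.array_equal(c, d)
```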
8.5 The lowpass filter should have its cutoff frequency set to Ωs/50, since the information is within 0 ≤ Ω ≤ Ωs/5 and the signal will be decimated by 10. The stopband frequency should be Ωs/20, to avoid aliasing due to the decimation process. In order to achieve the specifications required for the filter, it was found that a filter of order 147 is needed. Figure 8.1 shows its magnitude and phase responses.

On the other hand, if we use a multiband filter (inspired by equation (8.22)), it is possible to achieve the required specifications with a filter of only 77 coefficients. Figure 8.2 shows its magnitude and phase
¹Note that the concept of time-invariance was extended. Previously, it was stated that the time-invariance property was true when Hx(n − n0) = y(n − n0), if y(n) = Hx(n).
responses. For the multiband filter, the first cutoff frequency can be Ωs/50, whereas the first stopband frequency can be set to 4Ωs/50, allowing a much larger transition band. This is possible since we are interested only in the information below Ωs/5.

In fact, it was found that the filter requirements could be achieved with fewer than 77 coefficients. By analysing the magnitude response of the 77-coefficient filter (Figure 8.2), we can see that the stopband attenuation, for example, reached −90 dB, when the requirements called for a stopband attenuation of only −80 dB.
• MatLab® Program:
Ws=10000;
eps=Ws/200;
[N,f0,m0,w]= remezord([Ws/50 Ws/20],[1 0],[.0002 1e-4],Ws); N
h= remez(N,f0,m0,w);
figure(1)
freqz(h,1,256,Ws), zoom on
subplot(2,1,1), xlabel('Frequency (rad/s)')
subplot(2,1,2), xlabel('Frequency (rad/s)')

[N2,f2,m2,w2]= remezord([2 8 12 18 22 28 32 38 42 48].*(Ws/100),[1 0 0 0 0 0],[.0002 1e-4 1e-4 1e-4 1e-4 1e-4],Ws); N2
h2= remez(N2,f2,m2,w2);
figure(2)
freqz(h2,1,512,Ws), zoom on
subplot(2,1,1), xlabel('Frequency (rad/s)')
subplot(2,1,2), xlabel('Frequency (rad/s)')
8.6 Exercise 8.6
8.7 Exercise 8.7
Figure 8.1: Magnitude and phase responses for the lowpass filter of exercise 8.5.
Figure 8.2: Magnitude and phase responses for the multiband filter of exercise 8.5.
8.8 Exercise 8.8
8.9 Exercise 8.9
8.10 Exercise 8.10
8.11

H(z) = Σ_{j=0}^{M-1} z^{-j} E_j(z^M)

E_j(z) = Σ_{l=0}^{L-1} h(Ml + j) z^{-l}
       = Σ_{p=0}^{L-1} h(M(L-1-p) + j) z^{-(L-1-p)},   (p = L-1-l)
       = z^{-(L-1)} Σ_{p=0}^{L-1} h(ML − M − Mp + j) z^{p},   but h(n) = ±h(ML-1-n)
       = ±z^{-(L-1)} Σ_{p=0}^{L-1} h(Mp + M − 1 − j) z^{p}

E_j(z) = ±z^{-(L-1)} E_{M-1-j}(z^{-1})
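The symmetry relation E_j(z) = ±z^{-(L-1)} E_{M-1-j}(z^{-1}) can be checked directly on the polyphase components of any linear-phase filter: in the time domain it says that E_j is the reversal of E_{M-1-j}. The coefficients below are arbitrary.

```python
import numpy as np

# A symmetric (linear-phase, '+' case) FIR filter with ML taps,
# built so that h(n) = h(ML-1-n).
M, L = 4, 5
half = np.array([0.1, -0.3, 0.5, 0.2, 0.4, -0.1, 0.25, 0.15, 0.05, 0.3])
h = np.concatenate([half, half[::-1]])      # length ML = 20

E = [h[j::M] for j in range(M)]             # polyphase components E_j, each length L

for j in range(M):
    # z^{-(L-1)} E_{M-1-j}(z^{-1}) has impulse response E_{M-1-j} reversed
    assert np.allclose(E[j], E[M - 1 - j][::-1])
```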
8.12 • FIRST METHOD

One should note that an efficient FIR structure for the decimation and interpolation filters can always be achieved if we compute just one output out of every M samples for the decimation filter, whereas for the interpolation filter only the one non-zero input of the interpolator, out of every M samples, is used in the computation of the filter output.
Figure 8.3 shows a general efficient structure for the computation of the output of a decimation filter. We can see that the multiplications and additions operate at a period M times greater than the input data period. Note that the output is calculated once every M input samples.
Figure 8.3: Scheme for the efficient computation of the decimation filter of exercise 8.12.
• SECOND METHOD

Another interesting FIR structure for the decimation and interpolation filters is the so-called IFIR filter design. The technique consists of designing a high-order FIR filter as the cascade of two simpler filters: a multiband filter and a lowpass one.

Suppose we need an interpolation filter for L = 20, with δp = 0.02 and δr = 10^{-3}. Using the Parks-McClellan optimization, we need a filter of order 1392. Figure 8.4 shows the magnitude and phase responses for this filter.

Then, we design a lowpass filter with the following specifications: ωp = π/4, δp = 0.01 and δr = 10^{-3}; the order necessary for this filter is 305. This filter is then interpolated (L = 5) in order to generate a multiband filter, whose magnitude and phase responses are depicted in Figure 8.5.

If we use a filter to retain just the lowpass part of the multiband signal, this filter can be de-
Figure 8.4: Magnitude and phase responses for the lowpass filter of exercise 8.12.
signed with just a few coefficients (13), as we can see in Figure 8.6. This filter has the following specifications: ωp = π/20, δp = 0.01 and δr = 10^{-3}.

By cascading these two filters (multiband and lowpass), the resulting filter has the magnitude and phase responses of Figure 8.7. It is clear that the resulting filter achieves the specifications required for the interpolation filter. The resulting (IFIR) filter is formed by the cascade of two filters, with 305 and 13 coefficients, whereas the design of a single lowpass filter for the interpolation required 1392 coefficients. Thus, the IFIR design proved to be more efficient.
• MatLab® Program:
eps=pi/300;
eps2=1/300;
[N,f0,m0,w]= remezord([pi/20-eps pi/20],[1 0],[.02 1e-3],2*pi); N
h= remez(N,f0,m0,w);
figure(1)
freqz(h,1,256), zoom on

[N1,f1,m1,w1]= remezord([pi/4-eps*5 pi/4],[1 0],[.01 1e-3],2*pi); N1
h1= remez(N1,f1,m1,w1);
h1i= zeros(1,5*length(h1)); h1i(1:5:length(h1i))=h1;
figure(2)
freqz(h1i,1,256), zoom on

[N2,f2,m2,w2]= remezord([pi/20 .4*pi-.05*pi],[1 0],[.01 1e-3],2*pi); N2
h2= remez(N2+3,f2,m2,w2);
figure(3)
freqz(h2,1,256), zoom on

h3=conv(h1i,h2);
figure(4)
freqz(h3,1,256), zoom on
8.13 For the efficient decimation and interpolation IIR structures, one should remember that, for the decimation operation, it is necessary to compute just one out of M samples of the decimation filter output (if the decimation factor is equal to M). For the interpolation operation, only one out of M samples of the filter input signal is necessary to compute the filter output (if the interpolation factor is equal to M), since the other inputs are zero, due to the interpolation.

Thus, we should try to use, for the IIR case, a scheme similar to the FIR case, exhibited in the previous problem. Nevertheless, an IIR filter will need past output samples for the computation of the present output (due to its recursiveness). It should be clear that, for the efficient FIR structure, the output sample was calculated once every M input samples (clock period M times greater). Therefore, to achieve an efficient IIR structure, the present output sample should need only the past output samples that have already been calculated. Figure 8.8 shows a scheme for the efficient IIR structure. Mathematically, the denominator of the IIR filter transfer function has the form D(z^M).

The "D" block means a delay block. It should be obvious that a unit delay at the clock period of the output samples (M times greater than the input data clock period) is equivalent to z^{-M} for the input data clock. Hence, the recursive part of the IIR filter has the form D(z^M).
8.14 The formula for estimating the order of a lowpass FIR filter is repeated below:

M ≈ [D∞(δp, δr) − f(δp, δr)(ΔF)^2] / ΔF + 1   (8.1)
Figure 8.5: Magnitude and phase responses for the multiband filter of exercise 8.12.
Figure 8.6: Magnitude and phase responses for the lowpass filter of exercise 8.12.
where D∞(δp, δr) = [0.005309(log10 δp)^2 + 0.071140 log10 δp − 0.4761] log10 δr − [0.002660(log10 δp)^2 + 0.594100 log10 δp + 0.4278], ΔF = (Ωr − Ωp)/Ωs, and f(δp, δr) = 11.012 + 0.51244 [log10 δp − log10 δr].
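Equation (8.1) is straightforward to evaluate; the sketch below is a Python version of the computation done in the MATLAB program listing of this exercise, reproducing the order estimate of about 2320 quoted for the single-stage decimation-by-50 filter.

```python
import math

def fir_order_estimate(dp, dr, wp, wr, ws):
    """Lowpass FIR order estimate of equation (8.1)."""
    ldp, ldr = math.log10(dp), math.log10(dr)
    dinf = ((0.005309 * ldp**2 + 0.07114 * ldp - 0.4761) * ldr
            - (0.00266 * ldp**2 + 0.5941 * ldp + 0.4278))
    f = 11.012 + 0.51244 * (ldp - ldr)
    dF = (wr - wp) / ws
    return (dinf - f * dF**2) / dF + 1

# Single-stage decimation-by-50 filter from the text:
M = fir_order_estimate(dp=0.02, dr=1e-3,
                       wp=math.pi/50 - math.pi/500, wr=math.pi/50,
                       ws=2*math.pi)
assert round(M) == 2320
```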
In order to implement a decimation filter for a decimation by 50, we should design a lowpass filter whose stopband frequency is ωr = π/50. Thus, the passband frequency should be a little smaller than that, say ωp = π/50 − π/500. We assume δp = 0.02 and δr = 10^{-3}. The order necessary for this decimation filter is equal to 2320. Figure 8.9 shows its magnitude and phase responses.
Instead, if we use several decimation stages to accomplish the decimation by 50, it is possible to obtain
Figure 8.7: Magnitude and phase responses for the resulting filter of exercise 8.12.
Figure 8.8: Scheme for the efficient computation of the decimation filter of exercise 8.13.
a more efficient implementation.

We can have three stages, for example. The first one decimates the signal by a factor of 2, and the other two stages decimate the signal by a factor of 5. Thus, the second and third stages will have the same decimation filter. The equations below show the requirements for the first (H_{D2}(z)) and second
Figure 8.9: Magnitude and phase responses for the decimation filter (decimation by 50) of exercise 8.14.
(H_{D5}(z)) decimation filters:

H_{D2}(e^{jω}):  ωp = π/2 − π/20,  ωr = π/2,  δp = 0.004,  δr = 10^{-3}
H_{D5}(e^{jω}):  ωp = π/5 − π/50,  ωr = π/5,  δp = 0.008,  δr = 10^{-3}
The order necessary for the first filter (decimation by 2) is just 114, whereas the order for the second and third filters (decimation by 5) is 262.
Figures 8.10 and 8.11 show, respectively, the magnitude and phase responses for the decimation by 2and decimation by 5 filters.
Besides, only the first filter will operate at the highest data rate; the second filter operates at half the full data rate, and the third filter operates at a data rate ten times smaller than the full data rate.
• MatLab® Program:
dp=.008; dr=1e-3;
wr=1/5; wp=1/5-1/50; ws=2;
deltaf=(wr-wp)/ws;
dinf=(.005309*(log10(dp))^2+.07114*log10(dp)-.4761)*log10(dr)-(.00266*(log10(dp))^2+.5941*log10(dp)+.4278);
f=11.012+.51244*(log10(dp)-log10(dr));
M=(dinf-f*deltaf^2)/deltaf+1

eps=(pi/500);
[N,f0,m0,w]= remezord([pi/50-eps pi/50],[1 0],[.02 1e-3],2*pi); N
Figure 8.10: Magnitude and phase responses for the first decimation filter (decimation by 2) of exercise 8.14.
Figure 8.11: Magnitude and phase responses for the second (or third) decimation filter (decimation by 5) of exercise 8.14.
h= remez(N,f0,m0,w);
figure(1)
freqz(h,1,1024), zoom on

eps=(pi/20);
[N,f0,m0,w]= remezord([pi/2-eps pi/2],[1 0],[.004 1e-3],2*pi); N
h= remez(N,f0,m0,w);
figure(2)
freqz(h,1,512), zoom on

eps=(pi/50);
[N,f0,m0,w]= remezord([pi/5-eps pi/5],[1 0],[.008 1e-3],2*pi); N
h= remez(N,f0,m0,w);
figure(3)
freqz(h,1,512), zoom on
8.15 Using one decimation/interpolation stage, we need to design the decimation and interpolation lowpass filters. These filters can be made identical, and their design uses the same requirements as the proposed filter, except for the passband ripple δp, which can be made half the ripple of the proposed filter, so δp = 0.0005. Thus, using one decimation/interpolation stage, the filters have the magnitude and phase responses of Figure 8.12.
The maximum possible value for the decimation factor M is 25. The decimation (or interpolation) filter has 413 coefficients. Therefore, the number of multiplications per output sample is 207/25 for the decimation, and the same for the interpolation. Hence, the total number of multiplications per output sample is 414/25 = 16.56.
If we want to use two decimation/interpolation stages instead, we need to design the two decimation and two interpolation filters. In fact, we need to design only two pairs of filters, since the decimation and interpolation filters can be made identical. Figure 8.13 shows the scheme for the two-stage decimation/interpolation technique.

Figures 8.14 and 8.15 show the magnitude and phase responses for the two filters (H1(z) and H2(z)) used for decimation and interpolation.

One should note that, for the filter H1(z), the cutoff frequency can be set to 0.01 rad/s, whereas the stopband frequency can be 0.1 rad/s, in order to avoid the aliasing from the decimation by 5. Obviously, the maximum ripple allowed in the passband should be 0.00025, and the stopband attenuation remains the same (δr = 10^{-4}). The total number of coefficients necessary for the design of such a filter was 49.
Figure 8.12: Magnitude and phase responses for the decimation (or interpolation) lowpass filter of exercise 8.15.
Figure 8.13: Scheme for the two-stage decimation/interpolation technique of exercise 8.15.
Figure 8.14: Magnitude and phase responses for the decimation (or interpolation) lowpass filter H1(z) of exercise 8.15.
For the filter H2(z), the cutoff frequency should be 0.05 rad/s, and the stopband frequency remains at 0.1 rad/s, to avoid aliasing. Obviously, the ripple and stopband attenuation have the same values as in the previous case. The total number of coefficients necessary for filter H2(z) is 88.
Filter H1(z) will need a total of 2 × 25/5 multiplications per output sample, due to the decimation and interpolation filters H1(z), since it is a linear-phase filter with 49 coefficients. By analogy, filter H2(z) will need a total of 2 × 44/25 multiplications per output sample (decimation and interpolation filters H2(z)), since it is a linear-phase filter with 88 coefficients. Summing these two values, we conclude that the total number of multiplications per output sample is 13.52.

It becomes clear that the two-stage decimation/interpolation design (13.52 multiplications per output sample) is more efficient than the implementation in only one stage (16.56 multiplications per output sample).
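The operation counts above follow from simple arithmetic, sketched below. The counts assume the usual folded linear-phase implementation, in which a length-N FIR filter needs roughly N/2 multiplications per computed output.

```python
# One-stage design: a single 413-tap linear-phase filter (207 distinct
# coefficients), run at 1/25 of the rate, once for decimation and once
# for interpolation.
one_stage = 2 * (413 // 2 + 1) / 25        # 2 * 207 / 25 = 16.56
assert abs(one_stage - 16.56) < 1e-9

# Two-stage design: H1 has 49 taps (25 coefficients) at 1/5 rate,
# H2 has 88 taps (44 coefficients) at 1/25 rate, each used twice.
two_stage = 2 * (49 // 2 + 1) / 5 + 2 * (88 // 2) / 25   # 10 + 3.52
assert abs(two_stage - 13.52) < 1e-9
assert two_stage < one_stage
```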
• MatLab® Program:
[N,f0,m0,w]= remezord([.01 .02],[1 0],[.0005 1e-4],1); N
h= remez(N,f0,m0,w);
figure(1)
freqz(h,1,1024,1), zoom on
subplot(2,1,1), xlabel('Frequency (rad/s)')
subplot(2,1,2), xlabel('Frequency (rad/s)')

[N,f0,m0,w]= remezord([.01 .1],[1 0],[.00025 1e-4],1); N
h= remez(N,f0,m0,w);
figure(2)
Figure 8.15: Magnitude and phase responses for the decimation (or interpolation) lowpass filter H2(z) of exercise 8.15.
freqz(h,1,1024,1), zoom on
subplot(2,1,1), xlabel('Frequency (rad/s)')
subplot(2,1,2), xlabel('Frequency (rad/s)')

[N,f0,m0,w]= remezord([.05 .1],[1 0],[.00025 1e-4],1); N
h= remez(N,f0,m0,w);
figure(3)
freqz(h,1,1024,1), zoom on
subplot(2,1,1), xlabel('Frequency (rad/s)')
subplot(2,1,2), xlabel('Frequency (rad/s)')
8.16 The bandpass filter for MFC detection is depicted in Figure 8.16. Its passband goes from 699.5 Hz to 700.5 Hz. The sampling frequency is 8000 Hz, and it has two stopbands, from 0 to 555.2 Hz and from 844.8 Hz to 4000 Hz. The order of this filter is 95.

The total number of multiplications per output sample is 48 (as would be expected for a linear-phase filter of order 95).
If we use the decimation/interpolation concept, we have to design a decimation filter with the same characteristics as the previous FIR filter. The interpolation filter will be identical to the decimation filter. The passband ripple should be half the ripple of the previous filter (0.02), and so δp = 0.01. Figure 8.17 shows the magnitude and phase responses for this filter, whose order is equal to 108.

The maximum value for the decimation factor M is also easy to calculate. We already know that the passband and transition bands of the filter should fit in an interval smaller than π/M. In this case, this frequency interval is equal to 289.6 Hz (844.8 − 555.2). Since the maximum frequency for this filter is 4000 Hz (due to the sampling frequency of 8000 Hz), the maximum value for M is 13.

The total number of multiplications per output sample is only 110/13 ≈ 8.46, since we have 55/13 multiplications per output sample for each of the two lowpass filters (the decimation and interpolation ones).
• MatLab® Program:
Figure 8.16: Magnitude and phase responses for the FIR filter for MFC detection of exercise 8.16.
Figure 8.17: Magnitude and phase responses for the decimation (or interpolation) filter for MFC detection of exercise 8.16.
[N,f0,m0,w]= remezord([555.2 699.5 700.5 844.8],[0 1 0],[1e-2 .02 1e-2],8000); N
[h,rip]= remez(95,f0,m0,w);
figure(1)
freqz(h,1,512,8000), zoom on
subplot(2,1,1), xlabel('Frequency (Hz)')
subplot(2,1,2), xlabel('Frequency (Hz)')
df=844.8-555.2;
M=4000/df, M=floor(M)

[N,f0,m0,w]= remezord([555.2 699.5 700.5 844.8],[0 1 0],[1e-2 rip/2 1e-2],8000); N
h= remez(N,f0,m0,w);
figure(2)
freqz(h,1,512,8000), zoom on
subplot(2,1,1), xlabel('Frequency (Hz)'), axis([0 4000 -100 0])
subplot(2,1,2), xlabel('Frequency (Hz)')
8.17 Suppose we want to design the lowpass filter with ωp = 5π/8 and ωr = 11π/16. We will also suppose we want this filter to have δp = 0.001 and δr = 10^{-4}. In order to achieve these specifications, a filter of order 125 is needed.

Using the frequency masking approach, we first define the filters H1(z) and H2(z), which will be of order 34. These two filters are then interpolated, generating H1^I(z) and H2^I(z). The magnitude and phase responses for these interpolated filters can be seen in Figures 8.18 and 8.19.

The filters F1(z) and F2(z) are used to select parts of the interpolated spectra of H1(z) and H2(z). For the filter F1(z), ωp = π/2 + π/8 and ωr = π − (π/8 + π/16), whereas for the filter F2(z), ωp = π/4 + π/8 and ωr = 3π/4 − (π/8 + π/16). The order found for these two filters was 45, and their magnitude and phase responses can be seen in Figures 8.20 and 8.21.

The magnitude and phase responses of the final filter can be seen in Figure 8.22, where the narrow transition bandwidth required was achieved by using low-complexity filters.
Figure 8.18: Magnitude and phase responses for the filter H1^I(z) of exercise 8.17.
Figure 8.19: Magnitude and phase responses for the filter H2^I(z) of exercise 8.17.
Figure 8.20: Magnitude and phase responses for the filter F1(z) of exercise 8.17.
Figure 8.21: Magnitude and phase responses for the filter F2(z) of exercise 8.17.
Figure 8.22: Magnitude and phase responses for the final filter of exercise 8.17.
• MatLab® Program:
[N,f0,m0,w]= remezord([5*pi/8 11*pi/16],[1 0],[.001 1e-4],2*pi); N
h= remez(N,f0,m0,w);
figure(1)
freqz(h,1,256), zoom on

[N1,f1,m1,w1]= remezord([pi/2 pi/2+pi/4],[1 0],[.0005 1e-4],2*pi); N1=N1+2
h1= remez(N1,f1,m1,w1);
figure(2)
freqz(h1,1,256), zoom on

h3=[zeros(1,N1/2) 1 zeros(1,N1/2)];
h2=-h1+h3;
figure(3)
freqz(h2,1,256), zoom on

h1i=zeros(1,4*length(h1)); h1i(1:4:length(h1i))=h1;
h2i=zeros(1,4*length(h2)); h2i(1:4:length(h2i))=h2;

[N1,f1,m1,w1]= remezord([pi/2+pi/8 pi-(pi/8+pi/16)],[1 0],[.0005 1e-4],2*pi); N1=N1+1;
g1= remez(N1,f1,m1,w1);
figure(4)
freqz(g1,1,256), zoom on

[N2,f2,m2,w2]= remezord([pi/4+pi/8 3*pi/4-(pi/8+pi/16)],[1 0],[.0005 1e-4],2*pi); N2=N2+1;
g2= remez(N2,f2,m2,w2);
figure(5)
freqz(g2,1,256), zoom on

figure(6), freqz(h1i), zoom on
figure(7), freqz(h2i), zoom on

a=conv(h1i,g1);
b=conv(h2i,g2);

figure(8), freqz(a), zoom on
figure(9), freqz(b), zoom on

f=[a zeros(1,length(b)-length(a))]+b;  % zeros(1,n): row vector of padding
figure(10), freqz(f), zoom on
8.18 Exercise 8.18
8.19 Exercise 8.19
8.20 A non-minimum delay solution is possible with

Cl(z) = [ D0(z)  C1(z)  0      0
          0      C0(z)  C1(z)  0
          0      0      C0(z)  D1(z) ]

Therefore

Cl(z)E(z) = [ D0(z)        C1(z)
              z^{-1}C1(z)  C0(z)
              z^{-1}C0(z)  z^{-1}D1(z) ]

Then

C(z) = R(z)Cl(z)E(z) = ( z^{-1}C1(z)                     C0(z)
                         z^{-1}D0(z) + z^{-1}C0(z)       z^{-1}D1(z) + z^{-1}C1(z) )

The matrix above is pseudo-circulant if D0(z) = D1(z) = 0. In this case the overall transfer function is given by

H(z) = 2 z^{-2} [C0(z^2) + z^{-1} C1(z^2)]   (8.2)

In this case the resulting structure is as shown in Figure 8.23 below.
Figure 8.23: Realization of equation (8.2).
8.21 Exercise 8.21
8.22 Exercise 8.22
8.23 Exercise 8.23
8.24 Exercise 8.24
8.25 Exercise 8.25
8.26 Exercise 8.26
8.27 Exercise 8.27
8.28 Exercise 8.28
8.29 Exercise 8.29
8.30 Exercise 8.30
8.31 The minimum order that is necessary to design a linear-phase FIR filter satisfying the specifications is78. The impulse response of this filter, depicted in Figure 8.24, is
Table 8.1: Impulse response coefficients for the filter of exercise 8.31, h(0) to h(79).

h(0)  =  0.0020   h(28) =  0.0010   h(56) =  0.0052
h(1)  = -0.0067   h(29) = -0.0466   h(57) =  0.0234
h(2)  = -0.0036   h(30) = -0.0030   h(58) = -0.0058
h(3)  =  0.0056   h(31) =  0.0751   h(59) = -0.0248
h(4)  =  0.0036   h(32) =  0.0041   h(60) =  0.0054
h(5)  = -0.0067   h(33) = -0.1026   h(61) =  0.0214
h(6)  = -0.0039   h(34) = -0.0041   h(62) = -0.0039
h(7)  =  0.0061   h(35) =  0.1255   h(63) = -0.0151
h(8)  =  0.0034   h(36) =  0.0030   h(64) =  0.0019
h(9)  = -0.0035   h(37) = -0.1406   h(65) =  0.0078
h(10) = -0.0021   h(38) = -0.0011   h(66) =  0.0003
h(11) = -0.0013   h(39) =  0.1459   h(67) = -0.0013
h(12) =  0.0003   h(40) = -0.0011   h(68) = -0.0021
h(13) =  0.0078   h(41) = -0.1406   h(69) = -0.0035
h(14) =  0.0019   h(42) =  0.0030   h(70) =  0.0034
h(15) = -0.0151   h(43) =  0.1255   h(71) =  0.0034
h(16) = -0.0039   h(44) = -0.0041   h(72) =  0.0061
h(17) =  0.0214   h(45) = -0.1026   h(73) = -0.0039
h(18) =  0.0054   h(46) =  0.0041   h(74) = -0.0067
h(19) = -0.0248   h(47) =  0.0751   h(75) =  0.0036
h(20) = -0.0058   h(48) = -0.0030   h(76) =  0.0056
h(21) =  0.0234   h(49) = -0.0466   h(77) = -0.0036
h(22) =  0.0052   h(50) =  0.0010   h(78) = -0.0067
h(23) = -0.0157   h(51) =  0.0203   h(79) =  0.0020
h(24) = -0.0036   h(52) =  0.0014
h(25) =  0.0010   h(53) =  0.0010
h(26) =  0.0014   h(54) = -0.0036
h(27) =  0.0203   h(55) = -0.0157
Therefore, we have:
Figure 8.24: Impulse Response, h(n), of the bandpass FIR filter, minimax.
Figure 8.25: Frequency Response of h(n), minimax.
c0(n) = (1/2)[0.0020 −0.0036 0.0036 −0.0039 0.0034 −0.0021 0.0003 0.0019 −0.0039 0.0054 −0.0058 0.0052 −0.0036 0.0014 0.0010 −0.0030 0.0041 −0.0041 0.0030 −0.0011 −0.0011 0.0030 −0.0041 0.0041 −0.0030 0.0010 0.0014 −0.0036 0.0052 −0.0058 0.0054 −0.0039 0.0019 0.0003 −0.0021 0.0034 −0.0039 0.0036 −0.0036 0.0020];

c1(n) = (1/2)[−0.0067 0.0056 −0.0067 0.0061 −0.0035 −0.0013 0.0078 −0.0151 0.0214 −0.0248 0.0234 −0.0157 0.0010 0.0203 −0.0466 0.0751 −0.1026 0.1255 −0.1406 0.1459 −0.1406 0.1255 −0.1026 0.0751 −0.0466 0.0203 0.0010 −0.0157 0.0234 −0.0248 0.0214 −0.0151 0.0078 −0.0013 −0.0035 0.0061 −0.0067 0.0056 −0.0067].
And now all matrices are determined.
• MatLab® Program:
clear all;close all;
% Specifications:
delta_p = 1e-2;     % ripple in the passband
delta_r = 1e-2;     % attenuation in the stopband
Ws = 2*pi;          % sampling freq. [rad/s]
W_r1 = 0.4 * (Ws/2);
W_p1 = 0.48 * (Ws/2);
W_p2 = 0.55 * (Ws/2);
W_r2 = 0.6 * (Ws/2);

%%%%%%%%%%%%%%%%%%
% BANDPASS FILTER: (filter order = 78)
%%%%%%%%%%%%%%%%%%
% Parks-McClellan Method (minimax)
F = [ W_r1 W_p1 W_p2 W_r2 ];
A = [ 0 1 0 ];
DEV = [ delta_r delta_p delta_r ];
[N0,F0,A0,W0] = firpmord(F,A,DEV,Ws);
h0 = firpm(N0,F0,A0,W0);
figure, freqz(h0); title('Bandpass filter');
8.32 In this problem I used the WLS design method within the Signal Processing toolbox. In the first designs I set the weights on each band equal to 1 and varied the filter order. I could see that the attenuation in the first stopband (before freq. 0.4) was easily achieved, while the ripple in the passband and the attenuation in the second stopband (after freq. 0.6) were not. Therefore, I decided to “increase the importance” of these two bands. So, I chose the weights of the WLS as:
(a) Weight in the first stopband = 1
(b) Weight in the passband = 3
(c) Weight in the second stopband = 3
With these weights, the order of the designed filter satisfying the specifications is 89. Its impulse response, depicted in Figure 8.26, is given in Table 8.2.
The frequency response of h(n) is depicted in Figure 8.27. In comparison with Figure 8.25, we can see that with the WLS approach we can achieve higher attenuations, but, on the other hand, the ripple in the passband is more difficult to control.
Knowing h(n), we can calculate C0(z) and C1(z) in the same way as in the previous exercise, i.e., C0(z) is formed by the coefficients of H(z) (divided by 2) that have even exponents in z, and C1(z) is composed of the coefficients of H(z) (divided by 2) that have odd exponents in z. So, we show the coefficients of c0(n) and c1(n) which form the polynomials C0(z) and C1(z) that determine the matrices of overlapped block filtering:
Table 8.2: Impulse response coefficients for the filter of exercise 8.32.
h(0) to h(44) - the second half is obtained by symmetry
h(0) = -0.0007   h(1) = 0.0009    h(2) = 0.0014
h(3) = -0.0019   h(4) = -0.0020   h(5) = 0.0029
h(6) = 0.0023    h(7) = -0.0037   h(8) = -0.0021
h(9) = 0.0037    h(10) = 0.0015   h(11) = -0.0025
h(12) = -0.0005  h(13) = -0.0001  h(14) = -0.0007
h(15) = 0.0040   h(16) = 0.0017   h(17) = -0.0089
h(18) = -0.0022  h(19) = 0.0139   h(20) = 0.0020
h(21) = -0.0176  h(22) = -0.0012  h(23) = 0.0187
h(24) = 0.0000   h(25) = -0.0162  h(26) = 0.0007
h(27) = 0.0090   h(28) = -0.0001  h(29) = 0.0027
h(30) = -0.0028  h(31) = -0.0184  h(32) = 0.0088
h(33) = 0.0365   h(34) = -0.0184  h(35) = -0.0550
h(36) = 0.0311   h(37) = 0.0717   h(38) = -0.0460
h(39) = -0.0843  h(40) = 0.0616   h(41) = 0.0914
h(42) = -0.0757  h(43) = -0.0920  h(44) = 0.0865
Figure 8.26: Impulse Response, h(n), of the bandpass FIR filter, WLS.
c0(n) = (1/2)[−0.0007 0.0014 −0.0020 0.0023 −0.0021 0.0015 −0.0005 −0.0007 0.0017 −0.0022 0.0020 −0.0012 0.0000 0.0007 −0.0001 −0.0028 0.0088 −0.0184 0.0311 −0.0460 0.0616 −0.0757 0.0865 −0.0920 0.0914 −0.0843 0.0717 −0.0550 0.0365 −0.0184 0.0027 0.0090 −0.0162 0.0187 −0.0176 0.0139 −0.0089 0.0040 −0.0001 −0.0025 0.0037 −0.0037 0.0029 −0.0019 0.0009];

c1(n) = (1/2)[0.0009 −0.0019 0.0029 −0.0037 0.0037 −0.0025 −0.0001 0.0040 −0.0089 0.0139 −0.0176 0.0187 −0.0162 0.0090 0.0027 −0.0184 0.0365 −0.0550 0.0717 −0.0843 0.0914 −0.0920 0.0865 −0.0757 0.0616 −0.0460 0.0311 −0.0184 0.0088 −0.0028 −0.0001 0.0007 0.0000 −0.0012 0.0020 −0.0022 0.0017 −0.0007 −0.0005 0.0015 −0.0021 0.0023 −0.0020 0.0014 −0.0007].
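The even/odd split described above can be sketched in a few lines (Python is used here just for illustration, and h below is an arbitrary toy vector, not the designed filter):

```python
# Polyphase split: c0 takes the even-exponent coefficients of H(z),
# c1 the odd-exponent ones, each divided by 2.
h = [0.2, -0.4, 0.6, -0.8, 1.0, -1.2]  # toy impulse response

c0 = [v / 2 for v in h[0::2]]  # h(0), h(2), h(4), ...
c1 = [v / 2 for v in h[1::2]]  # h(1), h(3), h(5), ...

print(c0)  # [0.1, 0.3, 0.5]
print(c1)  # [-0.2, -0.4, -0.6]
```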
Figure 8.27: Frequency Response of h(n), WLS.
8.33 Exercise 8.33
8.34 Exercise 8.34
8.35 Exercise 8.35
8.36 Exercise 8.36
8.37 Exercise 8.37
Chapter 9
FILTER BANKS
9.1
9.2 Using equation (9.123) of the textbook:
H0(−z)H1(z) − H0(z)H1(−z) = 2cz^{−2l−1}

P(z) = H0(z)H1(−z)
P(z) − P(−z) = 2dz^{−2l−1},  (d = −c, d ∈ R)

P(z) = a0 + a1 z^{−1} + a2 z^{−2} + ... + aN z^{−N}
P(−z) = a0 − a1 z^{−1} + a2 z^{−2} − ... + (−1)^N aN z^{−N}
P(z) − P(−z) = 2a1 z^{−1} + 2a3 z^{−3} + ... + 2a_{2l+1} z^{−2l−1} + ... + [1 − (−1)^N] aN z^{−N}

Since P(z) − P(−z) = 2dz^{−2l−1}, we must have ai = 0 for all odd i, except for a_{2l+1} = d.
But, in order to guarantee the linear-phase property for the filter bank, one should also guarantee that P(z) is a linear-phase filter:

p(n) = ±p(N − n)
p(2l + 1) ≠ 0

If there are M coefficients before (and after) the (2l + 1)-th coefficient, the total number of coefficients is 2M + 1, M ∈ Z, which is obviously an odd number.
9.3
Xi(z) = (1/M) Σ_{k=0}^{M−1} P(z^{1/M} e^{−j2πk/M}),   with P(z) = z^{−i}X(z)
      = (1/M) Σ_{k=0}^{M−1} z^{−i/M} e^{j2πki/M} X(z^{1/M} e^{−j2πk/M})

Ai(z) = Xi(z^M) = (1/M) Σ_{k=0}^{M−1} z^{−i} e^{j2πki/M} X(z e^{−j2πk/M})

Y(z) = Σ_{l=0}^{M−1} z^{−l} A_{M−1−l}(z)
     = (1/M) Σ_{l=0}^{M−1} Σ_{k=0}^{M−1} z^{−(M−1−l+l)} e^{j2πk(M−1−l)/M} X(z e^{−j2πk/M})
     = (z^{−M+1}/M) Σ_{l=0}^{M−1} Σ_{k=0}^{M−1} W_M^{k(l+1)} X(z W_M^k),   with W_M = e^{−j2π/M}

The inner sum over l vanishes for every k ≠ 0 and equals M for k = 0, so that

Y(z) = (z^{−M+1}/M) M X(z) = z^{−M+1} X(z)

H(z) = Y(z)/X(z) = z^{−M+1}
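The identity H(z) = z^{−M+1} can also be checked in the time domain with a short simulation (an illustrative Python sketch, not part of the original solution; the delay-chain/decimator/interpolator structure is modeled sample by sample):

```python
# Time-domain check of the delay-chain structure for M = 3:
# branch i delays by i, decimates and interpolates by M (keeping the
# samples at multiples of M), and is delayed by M-1-i at the output.
M = 3
x = list(range(20))           # arbitrary test input
y = [0] * len(x)

for i in range(M):
    for n in range(len(x)):
        m = n - (M - 1 - i)   # undo the output delay z^{-(M-1-i)}
        # the down/up-sampler pair keeps only samples at multiples of M
        if m >= 0 and m % M == 0 and m - i >= 0:
            y[n] += x[m - i]  # the branch input was delayed by i

# The cascade is a pure delay of M-1 samples: y(n) = x(n - M + 1)
print(y[M-1:] == x[:len(x) - M + 1])  # True
```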
9.4
E(z) = [ H00(z)  H01(z)          R(z) = [ G00(z)  G10(z)
         H10(z)  H11(z) ],                G01(z)  G11(z) ]

H00(z^2) = [H0(z) + H0(−z)]/2 = (1/8)[−1 + 6z^{−2} − z^{−4}]
H01(z^2) = [H0(z) − H0(−z)]/(2z^{−1}) = (1/4)[1 + z^{−2}]
H10(z^2) = [H1(z) + H1(−z)]/2 = (1/2)[1 + z^{−2}]
H11(z^2) = [H1(z) − H1(−z)]/(2z^{−1}) = −1

G00(z^2) = [G0(z) − G0(−z)]/(2z^{−1}) = 1
G01(z^2) = [G0(z) + G0(−z)]/2 = (1/2)[1 + z^{−2}]
G10(z^2) = [G1(z) − G1(−z)]/(2z^{−1}) = (1/4)[1 + z^{−2}]
G11(z^2) = [G1(z) + G1(−z)]/2 = (1/8)[1 − 6z^{−2} + z^{−4}]

E(z) = [ (1/8)(−1 + 6z^{−1} − z^{−2})   (1/4)(1 + z^{−1})
         (1/2)(1 + z^{−1})              −1                ]

R(z) = [ 1                    (1/4)(1 + z^{−1})
         (1/2)(1 + z^{−1})    (1/8)(1 − 6z^{−1} + z^{−2}) ]

E(z)R(z) = [ (1/8)(−1 + 6z^{−1} − z^{−2} + 1 + 2z^{−1} + z^{−2})    0
             0    (1/8)(1 + 2z^{−1} + z^{−2} − 1 + 6z^{−1} − z^{−2}) ]

E(z)R(z) = z^{−1}I
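The product E(z)R(z) can be verified numerically by convolving the polynomial entries (an illustrative Python sketch, not part of the original solution; polynomials in z^{-1} are stored as coefficient lists):

```python
# Verify E(z)R(z) = z^{-1} I for the 2x2 polyphase matrices of this
# exercise.  Each entry is a list of coefficients in powers of z^{-1}.
def conv(a, b):
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def padd(a, b):  # add two polynomials of possibly different lengths
    n = max(len(a), len(b))
    a = a + [0.0] * (n - len(a))
    b = b + [0.0] * (n - len(b))
    return [x + y for x, y in zip(a, b)]

E = [[[-0.125, 0.75, -0.125], [0.25, 0.25]],
     [[0.5, 0.5],             [-1.0]]]
R = [[[1.0],       [0.25, 0.25]],
     [[0.5, 0.5],  [0.125, -0.75, 0.125]]]

# Matrix product with polynomial entries:
P = [[padd(conv(E[i][0], R[0][j]), conv(E[i][1], R[1][j]))
      for j in range(2)] for i in range(2)]

print(P[0][0])  # [0.0, 1.0, 0.0]  -> z^{-1} on the diagonal
print(P[0][1])  # all zeros off the diagonal
```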
9.5 Equations (9.131)-(9.134) of the textbook give:
H00(z) = (1/8)(−1 + 6z^{−1} − z^{−2})
H01(z) = (1/4)(1 + z^{−1})
H10(z) = (1/2)(1 + z^{−1})
H11(z) = −1

From the previous exercise, we already know that E(z)R(z) = z^{−1}I. For a general M-band perfect reconstruction filter bank, we have the following transfer function for the system:

T(z) = z^{−M+1} z^{−M∆} = z^{−M(∆+1)+1}

Thus, a zero-delay perfect reconstruction filter bank requires that

−M(∆ + 1) + 1 = 0  ⇒  ∆ = 1/M − 1

In the case of our 2-band filter bank, we can set ∆ = −1/2 and use equations (9.120), (9.124) and (9.125) of the textbook to derive the new synthesis filters:

H00(z)H11(z) − H01(z)H10(z) = cz^{−l}   (equation (9.120) of the textbook)
= −(1/8)(−1 + 6z^{−1} − z^{−2}) − (1/8)(1 + 2z^{−1} + z^{−2})
= −z^{−1},  (c = −1, l = 1)

G0(z) = z^{2(1−(−1/2))}H1(−z) = z^3 H1(−z) = (1/2)(z^3 + 2z^2 + z)
G1(z) = −z^{2(1−(−1/2))}H0(−z) = −z^3 H0(−z) = (1/8)(z^3 + 2z^2 − 6z + 2 + z^{−1})
Therefore, the general rule is to search for the value of ∆ that sets the delay of the overall transfer function to zero. With this value of ∆, we can change the filters G0(z) and G1(z) appropriately.

Another possible general rule consists of making E(z)R(z) = zI and then placing a pure delay after each synthesis filter. Note that this is equivalent to setting ∆ = 1; then T(z) = z, and the pure delay compensates for the advance caused by ∆ = 1.
9.6
H0(z) = (1 + z^{−1})^5
H1(z) = (1 − z^{−1})A(z)
P(z) = H0(z)H1(−z) = (1 + z^{−1})^6 A(−z)

P(z) = z^{−5} + Σ_{k=0}^{2} a_{2k}(z^{−2k} + z^{−10+2k}),   M = 3

Σ_{k=0}^{2} a_{2k}(5 − 2k)^{2n} = (1/2)δ(n),   n = 0, 1, 2

a0 + a2 + a4 = 1/2
25a0 + 9a2 + a4 = 0
625a0 + 81a2 + a4 = 0

a0 = 3/256,   a2 = −25/256,   a4 = 150/256
The coefficients of the lowpass analysis filter H0(z) are easily obtained. Thus, one can calculate the H1(z) coefficients by deconvolving H0(z) out of P(z).
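The 3 × 3 linear system above can be solved exactly, and the resulting P(z) can be checked against the condition P(z) − P(−z) = 2z^{−5} (an illustrative Python sketch using exact rational arithmetic, not part of the original solution):

```python
from fractions import Fraction as F

# Solve the 3x3 system for a0, a2, a4 by Gaussian elimination.
A = [[F(1),   F(1),  F(1)],
     [F(25),  F(9),  F(1)],
     [F(625), F(81), F(1)]]
b = [F(1, 2), F(0), F(0)]

for col in range(3):                      # forward elimination
    for row in range(col + 1, 3):
        f = A[row][col] / A[col][col]
        A[row] = [x - f * y for x, y in zip(A[row], A[col])]
        b[row] -= f * b[col]

a = [F(0)] * 3
for row in (2, 1, 0):                     # back substitution
    s = b[row] - sum(A[row][k] * a[k] for k in range(row + 1, 3))
    a[row] = s / A[row][row]

a0, a2, a4 = a
print(a0, a2, a4)  # 3/256 -25/256 75/128

# Build P(z) and check that its odd-indexed part is exactly z^{-5}:
p = [a0, 0, a2, 0, a4, F(1), a4, 0, a2, 0, a0]
print([p[n] for n in range(1, 11, 2)])  # [0, 0, 1, 0, 0]
```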
Figure 9.1 shows the magnitude and phase responses for H0(z), whereas Figure 9.2 shows them forfilter H1(z).
Figure 9.1: Magnitude and phase responses for H0(z) - Exercise 9.6.
Figure 9.2: Magnitude and phase responses for H1(z) - Exercise 9.6.
The quality of the filter bank is not very good, since the lowpass and highpass analysis filters are not capable of cleanly dividing the incoming signal into two frequency bands (low and high frequencies). By analyzing the filter responses, we can see that their magnitude responses overlap over a large portion of the frequency spectrum.
9.7 The design of a 2-band filter bank with 64 coefficients consists of designing the lowpass filter H0(z),since the other filters are functions of H0(z).
H1(z) = H0(−z)
G0(z) = H1(−z)
G1(z) = −H0(−z)

The overall transfer function of the filter bank can also be obtained as a function of H0(z):

H(z) = (1/2)[H0^2(z) − H0^2(−z)]
The design of H0(z) consists of minimizing the objective function proposed by equation (9.148) of the textbook. The starting point for the lowpass filter was obtained using the remez optimization. Figure 9.3 shows the performance of the filter bank obtained when this lowpass filter is the starting point for the optimization. Note that the overall magnitude response has an undesired attenuation near π/2 (represented by 0.5 in normalized frequency). The stopband frequency was chosen to be 0.52π.
The δ parameter of equation (9.148) of the textbook provides the trade-off between the stopband attenuation of the lowpass filter and the overall amplitude distortion. Small values of δ (less than 0.5) result in filter banks with very small overall amplitude distortion. In contrast, large values of δ result in filter banks with good stopband attenuation for the lowpass analysis filter H0(z). The optimization used the fmins MatLab® function.
At first, a small value was used for δ (0.1). Figure 9.4 shows the performance of the resulting filter bank. It can be clearly seen that the overall magnitude distortion is much smaller than that obtained in
Figure 9.3: Magnitude of filters H0(z) and H1(z) (above), and the overall magnitude response of theJohnston filter bank - Exercise 9.7.
Figure 9.4: Magnitude of filters H0(z) and H1(z) (above), and the overall magnitude response of theJohnston filter bank, when δ = 0.1 - Exercise 9.7.
Figure 9.3.
Then, the value of δ was increased to 0.8 in order to improve the stopband attenuation of H0(z). Figure 9.5 shows the performance of the filter bank obtained. As one would expect, the improvement in stopband attenuation tends to increase the overall magnitude distortion.
• MatLab® program:
function y = exerc9_10(x,delta)

X = fft(x,512);
i = 0:(length(x)-1);
x1(i+1) = ((-1).^i).*x(i+1);
X1 = fft(x1,512);
aux = [zeros(1,(length(x)-1)) 1];
AUX = fft(aux,512);

i1 = sum(abs(X(135:256)).^2);
i2 = sum(.25*abs(X(1:256).^2 - X1(1:256).^2 - AUX(1:256)).^2);
y = delta*i1 + (1-delta)*i2;
9.8 Now the stopband attenuation is a constraint, and must be set to 60 dB. Once again, the initial lowpass filter for the optimization is obtained by the remez routine. It is important to design an initial filter with very small ripple in the passband (we have set δp = 0.0001), so that the overall transfer function can also exhibit a very small distortion.
The order obtained for the initial filter was 67. The optimization parameter δ was set to 0.1. Figure 9.6shows the performance of the resulting filter bank.
• MatLab® program:
Figure 9.5: Magnitude of filters H0(z) and H1(z) (above), and the overall magnitude response of theJohnston filter bank, when δ = 0.8 - Exercise 9.7.
Figure 9.6: Magnitude of filters H0(z) and H1(z) (above), and the overall magnitude response of theJohnston filter bank (δ = 0.1) - Exercise 9.8.
function y = exerc9_11(x,delta)

X = fft(x,512);
i = 0:(length(x)-1);
x1(i+1) = ((-1).^i).*x(i+1);
X1 = fft(x1,512);
aux = [zeros(1,(length(x)-1)) 1];
AUX = fft(aux,512);

i1 = sum(abs(X(135:256)).^2);
if (i1 > 1e-3), i1 = 100*i1; end;
i2 = sum(.25*abs(X(1:256).^2 - X1(1:256).^2 - AUX(1:256)).^2);
y = delta*i1 + (1-delta)*i2;
9.9
E0(z) = 4 + z^{−1}
E1(z) = (2 + z^{−1})(3 + z^{−1})

H0(z) = E0(z^2) + z^{−1}E1(z^2) = 4 + z^{−2} + z^{−1}(6 + 5z^{−2} + z^{−4})
      = 4 + 6z^{−1} + z^{−2} + 5z^{−3} + z^{−5}

G0(z) = z^{−1}/(4 + z^{−2}) + 1/[(2 + z^{−2})(3 + z^{−2})]
G1(z) = z^{−1}/(4 + z^{−2}) − 1/[(2 + z^{−2})(3 + z^{−2})]
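The polyphase recombination H0(z) = E0(z^2) + z^{−1}E1(z^2) can be checked numerically (an illustrative Python sketch, not part of the original solution):

```python
# Recombine the polyphase components of exercise 9.9:
# E0(z) = 4 + z^{-1},  E1(z) = (2 + z^{-1})(3 + z^{-1}) = 6 + 5z^{-1} + z^{-2}
def conv(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

e0 = [4, 1]
e1 = conv([2, 1], [3, 1])               # [6, 5, 1]

def up2(c):                             # c(z) -> c(z^2): insert zeros
    out = []
    for v in c:
        out += [v, 0]
    return out[:-1]

h0 = up2(e0) + [0, 0, 0]                # E0(z^2), padded:  [4, 0, 1, 0, 0, 0]
for k, v in enumerate([0] + up2(e1)):   # z^{-1} E1(z^2):   [0, 6, 0, 5, 0, 1]
    h0[k] += v

print(h0)  # [4, 6, 1, 5, 0, 1]  ->  4 + 6z^{-1} + z^{-2} + 5z^{-3} + z^{-5}
```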
9.10
E0(z^2) = 2 + z^{−2}
E1(z^2) = 1 + z^{−2} + 0.25z^{−4}

z^2 = (−1 ± √(1 − 1))/2 = −1/2  ⇒  z = ±j(1/√2)

G0(z) = z^{−1}/E0(z^2) + 1/E1(z^2) = z^{−1}/(2 + z^{−2}) + 1/(1 + z^{−2} + z^{−4}/4)
G1(z) = z^{−1}/E0(z^2) − 1/E1(z^2) = z^{−1}/(2 + z^{−2}) − 1/(1 + z^{−2} + z^{−4}/4)
9.11
E(z) = [ 1 − z^{−1}     1 + z^{−1}
         −1 − z^{−1}    −1 + z^{−1} ]

z^{−1}E^T(z^{−1}) = [ z^{−1} − 1    −z^{−1} − 1
                      z^{−1} + 1    −z^{−1} + 1 ]

z^{−1}E^T(z^{−1})E(z) = [ 4z^{−1}    0
                          0          4z^{−1} ]

Since the filters are lossless (paraunitary) ⇒ CQF. Therefore,

H1(z) = −z^{−3}H0(−z^{−1}) = −z^{−3}(−z^3 − z^2 − z + 1) = −z^{−3} + z^{−2} + z^{−1} + 1
G0(z) = z^{−3}H0(z^{−1}) = z^{−3}(z^3 − z^2 + z + 1) = z^{−3} + z^{−2} − z^{−1} + 1
G1(z) = −H0(−z) = z^{−3} + z^{−2} + z^{−1} − 1
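Taking H0(z) = 1 + z^{−1} − z^{−2} + z^{−3} (the lowpass filter implied by the first row of E(z) and consistent with the G0(z) above), the perfect reconstruction of this CQF pair can be verified numerically (an illustrative Python sketch, not part of the original solution):

```python
# CQF pair from exercise 9.11; coefficients in increasing powers of z^{-1}.
def conv(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

h0 = [1, 1, -1, 1]    # H0(z) = 1 + z^{-1} - z^{-2} + z^{-3}
h1 = [1, 1, 1, -1]    # H1(z) = -z^{-3} H0(-z^{-1})
g0 = [1, -1, 1, 1]    # G0(z) = z^{-3} H0(z^{-1})
g1 = [-1, 1, 1, 1]    # G1(z) = -H0(-z)

# Orthogonality of the analysis pair:
print(sum(a * b for a, b in zip(h0, h1)))  # 0

# Distortion-free condition: H0 G0 + H1 G1 must be a pure delay.
t = [a + b for a, b in zip(conv(h0, g0), conv(h1, g1))]
print(t)  # [0, 0, 0, 8, 0, 0, 0]  ->  8 z^{-3}
```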
9.12 At first, a half-band prototype filter is generated, with the stopband gain set to −116 dB (an attenuation of 2 × 55 + 6 = 116 dB). By setting the passband ripple also to −116 dB, the filter P(z) generated by remez satisfies the condition stated in equation (9.169) of the textbook, repeated below:

p(n)[1 + (−1)^n] = 2δ(n − N)

The order of the filter P(z) was 74. Figure 9.7 shows the magnitude response of P(z), as well as the impulse response p(n).
The factorization P (z) = H0(z)H0(z−1) was applied to generate linear-phase filters. Figure 9.8shows the zeros of P (z) and the selected zeros for H0(z).
By using equation (9.159) of the textbook it is easy to find the coefficients of the filter H1(z).
H1(z) = −z^{−N}H0(−z^{−1})
Figure 9.9 shows the magnitude response for H0(z) and H1(z).
9.13
H1(z) = −z^{−N}H0(−z^{−1}) = −z^{−3}(−z^3 + az^2 − bz + 2) = 1 − az^{−1} + bz^{−2} − 2z^{−3}
G0(z) = z^{−3}H0(z^{−1}) = z^{−3}(z^3 + az^2 + bz + 2) = 1 + az^{−1} + bz^{−2} + 2z^{−3}
G1(z) = −H0(−z) = z^{−3} − az^{−2} + bz^{−1} − 2
Figure 9.7: Magnitude (above) and impulse (below) responses for the prototype filter - Exercise 9.12.
Figure 9.8: Zeros of P (z) (above) and H0(z) (below) - Exercise 9.12.
Since

H0(z)H0(z^{−1}) = (z^{−3} + az^{−2} + bz^{−1} + c)(z^3 + az^2 + bz + c)
= 1 + a^2 + b^2 + c^2 + (a + ab + bc)(z + z^{−1}) + (b + ac)(z^2 + z^{−2}) + c(z^3 + z^{−3})
Figure 9.9: Magnitude responses for H0(z) and H1(z) - Exercise 9.12.
Then

H0(−z)H0(−z^{−1}) = 1 + a^2 + b^2 + c^2 − (a + ab + bc)(z + z^{−1}) + (b + ac)(z^2 + z^{−2}) − c(z^3 + z^{−3})
As a result,

P(z) = H0(z)H0(z^{−1}) + H0(−z)H0(−z^{−1}) = 2(1 + a^2 + b^2 + c^2) + 2(b + ac)(z^2 + z^{−2})

By considering that c = 2 and b = −2a, we get b + ac = 0 and

P(z) = 2(1 + a^2 + b^2 + c^2) = 2(1 + a^2 + 4a^2 + 4) = 10(1 + a^2)

The resulting filter bank is a scaled CQF.
9.14 Here we illustrate one practical issue concerning the design of QMF filter banks. We designed an equiripple prototype filter with the lowest order necessary to satisfy the given specifications. The order is 10.
If little importance is given to perfect reconstruction, i.e., δ → 1, then the designed filters still satisfy the specifications, see Figure 9.10. However, a filter bank composed of these filters is of no practical use, since it introduces distortions on the input signal. On the other hand, if perfect reconstruction is very important, i.e., δ → 0, then the designed filters do not satisfy the specifications, see the attenuation at frequency 0.6. Figure 9.11 depicts the frequency responses of h0(n) and h1(n) for δ = 0.15. The overall frequency response of this filter bank is depicted in Figure 9.12.
As a conclusion, one must design the prototype filter with higher attenuation (i.e., higher order)! This way, each of the filters of the QMF filter bank can satisfy the specifications without losing perfect reconstruction.
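The amplitude distortion discussed above can be made concrete. For a QMF bank built from h0(n), with h1(n) = (−1)^n h0(n), g0(n) = h0(n), and g1(n) = −h1(n), the overall response is T(z) = (1/2)[H0^2(z) − H0^2(−z)] (the same expression used in exercise 9.7). The short Python sketch below (toy filters, not the designed ones, and not part of the original solution) shows that only the length-2 lowpass gives a pure delay; any longer h0 leaves residual distortion terms:

```python
# Overall QMF response T(z) = (1/2)[H0^2(z) - H0^2(-z)] as coefficients.
def conv(a, b):
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def qmf_overall(h0):
    h0m = [((-1) ** n) * v for n, v in enumerate(h0)]  # H0(-z)
    return [(x - y) / 2 for x, y in zip(conv(h0, h0), conv(h0m, h0m))]

print(qmf_overall([1, 1]))        # [0.0, 2.0, 0.0]  -> pure delay, PR
print(qmf_overall([1, 2, 2, 1]))  # [0.0, 4.0, 0.0, 10.0, 0.0, 4.0, 0.0]
```

The nonzero side terms at delays 1 and 5 in the second case are exactly the amplitude distortion that the δ-weighted optimization tries to suppress.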
Figure 9.10: Frequency Response (QMF attenuation) - Exercise 9.14.
Figure 9.11: Frequency Response (QMF perfect reconstruction) - Exercise 9.14.
• MatLab® Program:
clear all;close all;
ws = 0.4;
load 'prototype'
prot1 = qmf(prot0); % prototype filter coefficients - H1
figure, freqz(prot0);
figure, freqz(prot1);
%%%%%%%%%%%%%%%
% Optimization:
%%%%%%%%%%%%%%%
delta = 0.15; % little importance to attenuation
epsilon = qmf_objfunction(prot0,delta,ws); % QMF cost function
options = optimset('MaxFunEvals',4e3,'MaxIter',4e3);
ho = fminsearch(@(h) qmf_objfunction(h,delta,ws), prot0, options);
h0 = ho;
clear ho;
h1 = qmf(fliplr(h0));
Figure 9.12: Frequency Response of the overall filter bank (QMF perfect reconstruction) - Exercise 9.14.
g0 = qmf(fliplr(h1));
g1 = - h1;
% save QMFbank h0 h1 g0 g1
Z = conv(h0,g0);
U = conv(h1,g1);
Heq = Z + U;
figure, freqz(Heq);
% END
function [epsilon] = qmf_objfunction(h,delta,ws)
% qmf_objfunction  Objective function of the QMF filter bank (Equation (9.64) - DSP:Diniz).
%
% Use:
%   epsilon = qmf_objfunction(h,delta,ws)
%
% Inputs:
%   h      Lowpass prototype filter
%   delta  Weight of stopband attenuation
%   ws     Stopband frequency (normalized to 1)
%
% Output:
%   epsilon  Cost function of the QMF that we will minimize later
%
% Author:
%   Markus V. S. Lima (December 04, 2008)
N = length(h) - 1; % filter order
h0 = h;
h1 = qmf(h0);
[H0,w] = freqz(h0,1,1024);
[H1,w] = freqz(h1,1,1024);
% figure, plot(w,20*log10(abs(H0))), grid on; % checking the freqz - OK
% Construction of the objective function:
mod_H0_2 = (abs(H0)).^2; % |H0(e^jw)|^2
mod_H1_2 = (abs(H1)).^2;
H = (1/2) .* exp( (-j*N) .* w ) .* ( mod_H0_2 + mod_H1_2 );
RP_cond = trapz(w, abs( H - (1/2).*exp( (-j*N) .* w ) ).^2 ); % perfect reconstruction condition
aux = find( w > (ws*pi) );
wstop = w(aux);
mod_H0_s = mod_H0_2(aux);
LP_att = trapz(wstop, mod_H0_s); % lowpass attenuation
epsilon = delta * LP_att + (1 - delta) * RP_cond; % objective function
9.15
9.16 Since this is the first time we are designing a CQF filter bank, we will explain the procedure thoroughly.
(i)
At first, we have to design p(n). Since we want perfect reconstruction, we have to guarantee that Equation (9.169) is respected. Choosing p(n) as a halfband filter guarantees this equation up to (possibly) a constant, i.e., if p(n) is halfband we have p(n)[1 + (−1)^n] = kδ(n − d), where d is the delay corresponding to the central sample of the halfband filter and k is a constant.
Since P(z) will be decomposed as P(z) = H0(z)H0(z^{−1}), the attenuation of P(z) must be at least twice the attenuation of H0(z) (see Chapter 8). In practice, we also add 6 dB of attenuation. Therefore, the attenuation of the halfband filter must be at least Attenuation_hb = (26 × 2) + 6 = 58 dB → Gain_hb = −58 dB.
Notice that the decomposition of P(z) stated in the last paragraph requires the frequency response of p(n) to be nonnegative. So, we have to add a constant to the frequency response of p(n) to guarantee that the oscillations in the stopband lie above zero. This can alternatively be accomplished by adding a constant (the ripple of the halfband filter) to the central sample of p(n).
Figure 9.13 depicts the impulse response of the halfband filter. Notice that p(n) = 0 for even n, except at the central sample, and that we have “normalized” the filter so as to guarantee that k = 2, as in Equation (9.169). This implies that the central sample is p(d) = 1.
(ii)
Now that we have P(z), we have to find H0(z) satisfying P(z) = H0(z)H0(z^{−1}). The easiest way to accomplish this task is to look at the pole-zero diagram depicted in Figure 9.14. Notice that all the zeros on the unit circle have multiplicity 2. Now we choose the zeros as illustrated in Chapter 9 (Section 9.7).
The impulse response of h0(n) is
h0(n) = [0.0438, -0.0015, -0.0165, 0.0023, 0.0188, -0.0045, -0.0201, 0.0088, 0.0186, -0.0158, -0.0124, 0.0305, -0.0080, -0.0741, 0.1072, 0.5234, 0.7009, 0.3504, -0.1215, -0.1873, 0.0525, 0.1282, -0.0259, -0.0936, 0.0128, 0.0698, -0.0062, -0.0514, 0.0023, 0.0373, 0.0003, -0.0518];
The frequency response of h0(n) is depicted in Figure 9.15. Notice that the zeros of h0(n) were chosen in such a way that the phase is approximately linear. The attenuation in the stopband of h0(n) is about 26 dB (relative to the passband).
(iii)
Figure 9.13: Impulse Response of the halfband filter - Exercise 9.16.
Figure 9.14: Diagram of poles and zeros of P (z) - Exercise 9.16.
The last step is to generate h1(n), g0(n) and g1(n) according to Equations (9.150), (9.151) and (9.159).So, we get:
h1(n) = [-0.0518, -0.0003, 0.0373, -0.0023, -0.0514, 0.0062, 0.0698, -0.0128, -0.0936, 0.0259, 0.1282,-0.0525, -0.1873, 0.1215, 0.3504, -0.7009, 0.5234, -0.1072, -0.0741, 0.0080, 0.0305, 0.0124, -0.0158,-0.0186, 0.0088, 0.0201, -0.0045, -0.0188, 0.0023, 0.0165, -0.0015, -0.0438];
g0(n) = [-0.0518, 0.0003, 0.0373, 0.0023, -0.0514, -0.0062, 0.0698, 0.0128, -0.0936, -0.0259, 0.1282,0.0525, -0.1873, -0.1215, 0.3504, 0.7009, 0.5234, 0.1072, -0.0741, -0.0080, 0.0305, -0.0124, -0.0158,0.0186, 0.0088, -0.0201, -0.0045, 0.0188, 0.0023, -0.0165, -0.0015, 0.0438];
Figure 9.15: Frequency Response of h0(n) - Exercise 9.16.

g1(n) = [0.0438, 0.0015, -0.0165, -0.0023, 0.0188, 0.0045, -0.0201, -0.0088, 0.0186, 0.0158, -0.0124, -0.0305, -0.0080, 0.0741, 0.1072, -0.5234, 0.7009, -0.3504, -0.1215, 0.1873, 0.0525, -0.1282, -0.0259, 0.0936, 0.0128, -0.0698, -0.0062, 0.0514, 0.0023, -0.0373, 0.0003, 0.0518];
The frequency responses of h1(n), g0(n) and g1(n) are depicted in Figures 9.16, 9.17 and 9.18. Noticethat h0(n) is orthogonal to h1(n) and g0(n) is orthogonal to g1(n).
(iv)
We will complete this exercise by showing that the designed filter bank achieves an almost perfectreconstruction. We will use a white noise signal at the input of the filter bank.
Figure 9.16: Frequency Response of h1(n) - Exercise 9.16.
Figure 9.17: Frequency Response of g0(n) - Exercise 9.16.
Figure 9.18: Frequency Response of g1(n) - Exercise 9.16.
We can see in Figure 9.19 that the filter bank introduces a delay of about 33 samples. The output signal is pretty much the input signal, properly delayed. Figure 9.20 depicts the frequency content (magnitude) of the input and output signals, and is more appropriate for analyzing the perfect reconstruction. In this figure we can see that an almost perfect reconstruction was achieved; there is some distortion at about 1.5 rad/sample, which corresponds approximately to the frequency π/2, i.e., the transition band.
• MatLab® Program:
clear all;close all;
% Halfband Specifications:
Figure 9.19: Perfect reconstruction (time domain) - Exercise 9.16.
Figure 9.20: Perfect reconstruction (frequency domain) - Exercise 9.16.
delta_p = 5e-2;      % ripple in the passband
delta_r = 5e-2;      % attenuation in the stopband
Ws = 2*pi;           % sampling freq. [rad/s]
W_p = 0.45 * (Ws/2); % passband freq. [rad/s]
W_r = 0.55 * (Ws/2); % stopband freq. [rad/s]
%%%%%%%%%%%%%%%%%%%%%%%%%%
% LOWPASS HALFBAND FILTER: (filter order = 62)
%%%%%%%%%%%%%%%%%%%%%%%%%%
w_p = (W_p/pi); % converting to digital freq.
delta_Hb_dB = 20*log10( (delta_r) ) * 2 - 6; % -58 dB (attenuation)
delta_Hb = 10^(delta_Hb_dB/20); % converting to linear scale
p = firhalfband('minorder', w_p, delta_Hb); % designing a halfband filter
% % figure, zplane(p);
mid = round(length(p)/2); % the middle sample
p(mid) = p(mid) + (delta_Hb); % to guarantee that P is positive
p = ( p * (2/p(mid)) )/2; % p .* ( 1 + (-1).^(1:39) ) = 2 * impulse
% figure, impz(p);
figure, zplane(p); % showing the z-plane after summing the constant
z = roots(p); % getting the zeros of p
% Decomposing P(z) = H0(z) H0(z^-1):
aux = [56 46 47 54 55 41 42 52 53 37 38 50 51 3 4 35 36 31 32 27 28 23 24 19 20 15 16 9 10 7 8];
z0 = z(aux);
h0 = poly(z0);
h0 = h0/norm(h0);
% figure, zplane(h0);
% figure, freqz(h0);
aux_conj = [35 36 31 32 27 28 23 24 19 20 15 16 9 10 7 8 1 2 39 40 43 44 45 48 49 57 58 59 60 61 62];
z0_conj = z(aux_conj);
h0_conj = poly(z0_conj);
h0_conj = h0_conj/norm(h0_conj);
% figure, zplane(h0_conj);
%%%%%%%%%%%%%%%%%
% Designing the filter bank (CQF):
%%%%%%%%%%%%%%%%%
h1 = fliplr(h0); % reversing the impulse response of h0
h1 = - ( (-1).^(1:length(h1)) ) .* h1; % alternating the sign of the odd samples
h0 * h1.' % show the orthogonality!!!
g0 = fliplr(h0);
g1 = - ( (-1).^(1:length(h1)) ) .* h0;
g0 * g1.'
figure, freqz(h1);
figure, freqz(g0);
figure, freqz(g1);
%%%%%%%%%%
% Sending a signal:
%%%%%%%%%%
K = 120;
x = randn(1,K);
% Channel 1:
xh0 = filter(h0,1,x);
xdown_h0 = downsample(xh0,2);
xup_h0 = upsample(xdown_h0,2);
xg0 = filter(g0,1,xup_h0);
% Channel 2:
xh1 = filter(h1,1,x);
xdown_h1 = downsample(xh1,2);
xup_h1 = upsample(xdown_h1,2);
xg1 = filter(g1,1,xup_h1);
y = xg0 + xg1;
figure,
subplot 211, plot(x), title('Input - white noise');
subplot 212, plot(y), title('Output - reconstructed white noise');
[X,W] = freqz(x(1:K-32));
[Y,W] = freqz(y(33:K));
figure,
subplot 211, plot(W,abs(X)), title('Input - Frequency Response');
subplot 212, plot(W,abs(Y)), title('Output - Frequency Response');
9.17 First, we design a lowpass prototype filter satisfying the specifications, i.e.:
M = 6;           % decimation factor = number of subbands
rho = 1/(8*M);   % transition band (normalized frequency)
wp = 1/(2*M) - rho; % end of the passband (normalized frequency)
ws = 1/(2*M) + rho; % stopband frequency (normalized frequency)
We have chosen an attenuation of 60 dB in the stopband. In order to achieve such attenuation we designed the prototype filter using the equiripple method. The filter order is 116, and the impulse response and frequency response of this prototype filter, h(n), are depicted in Figure 9.21.
The magnitude responses for the 6 analysis filters are depicted in Figure 9.22.
9.18 First, we design a lowpass prototype filter satisfying the specifications, i.e.:
M = 10;          % decimation factor = number of subbands
rho = 1/(6*M);   % transition band (normalized frequency)
wp = 1/(2*M) - rho; % end of the passband (normalized frequency)
ws = 1/(2*M) + rho; % stopband frequency (normalized frequency)
Figure 9.21: Prototype filter: Impulse response (left) and Frequency Response (right) - Exercise 9.17.
Figure 9.22: Magnitude Frequency Response for the 6 analysis filters - Exercise 9.17.
We have chosen an attenuation of 60 dB in the stopband. In order to achieve such attenuation we designed the prototype filter using the equiripple method. The filter order is 134, and the impulse response and frequency response of this prototype filter, h(n), are depicted in Figure 9.23.
The magnitude responses for the 10 analysis filters are depicted in Figure 9.24.
9.19 The procedure to implement a CMFB is very simple. First, we design a lowpass prototype filtersatisfying the specifications, i.e.:
M = 10;          % decimation factor = number of subbands
rho = 1/(8*M);   % transition band (normalized frequency)
wp = 1/(2*M) - rho; % end of the passband (normalized frequency)
Figure 9.23: Prototype filter: Impulse response (left) and Frequency Response (right) - Exercise 9.18.
Figure 9.24: Magnitude Frequency Response for the 10 analysis filters - Exercise 9.18.
ws = 1/(2*M) + rho; % stopband frequency (normalized frequency)
We have chosen an attenuation of 60 dB in the stopband. In order to achieve such attenuation we designed the prototype filter using the equiripple method. The filter order is 167, i.e., higher than the order used in the previous exercise. The reason for this is simple: the transition band (ρ) became narrower. The impulse response and frequency response of this prototype filter, h(n), are depicted in Figure 9.25.
As we can see this filter is symmetric, therefore, only the first half of its coefficients are given below:
h(n) = [-0.0006, -0.0001, -0.0001, -0.0001, -0.0001, -0.0001, -0.0001, -0.0000, 0.0001, 0.0002, 0.0003, 0.0004, 0.0006, 0.0008, 0.0010, 0.0013, 0.0015, 0.0018, 0.0021, 0.0024, 0.0027, 0.0030, 0.0033, 0.0036, 0.0038, 0.0041, 0.0042, 0.0044, 0.0045, 0.0045, 0.0045, 0.0043, 0.0042, 0.0039, 0.0035, 0.0031, 0.0026, 0.0020, 0.0013, 0.0006, -0.0002, -0.0010, -0.0019, -0.0028, -0.0037, -0.0046, -0.0054, -0.0062, -0.0069, -0.0075, -0.0079, -0.0083, -0.0085, -0.0085, -0.0083, -0.0080, -0.0074, -0.0066, -0.0056, -0.0043, -0.0028, -0.0011, 0.0008, 0.0029, 0.0052, 0.0076, 0.0102, 0.0129, 0.0157, 0.0185, 0.0214, 0.0242, 0.0270, 0.0297, 0.0323, 0.0348, 0.0371, 0.0391, 0.0409, 0.0425, 0.0438, 0.0447, 0.0454, 0.0457];

Figure 9.25: Prototype filter: Impulse response (left) and Frequency Response (right) - Exercise 9.19.
Then, using Equations (9.195) and (9.196) of the textbook we generated the 10 analysis and synthesisfilters, respectively. Figures 9.26 and 9.27 illustrate the impulse and frequency responses of the 10analysis filters.
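For reference, a common cosine-modulation rule for generating the analysis filters from a prototype h(n) has the form sketched below (an illustrative Python sketch, not part of the original solution; the exact phase convention of equations (9.195)-(9.196) in the textbook may differ, and the prototype used here is an arbitrary toy vector):

```python
import math

def cmfb_analysis(h, M):
    """Cosine-modulate a prototype h(n) into M analysis filters.

    Uses a commonly seen rule (phase convention may differ from the
    textbook's equation (9.195)):
    h_k(n) = 2 h(n) cos( (2k+1)(2n - N) pi / (4M) + (-1)^k pi/4 )
    """
    N = len(h) - 1
    banks = []
    for k in range(M):
        phase = ((-1) ** k) * math.pi / 4
        hk = [2 * h[n] * math.cos((2*k + 1) * (2*n - N) * math.pi / (4*M) + phase)
              for n in range(N + 1)]
        banks.append(hk)
    return banks

# Toy usage with an arbitrary short prototype (not the designed filter):
proto = [0.1, 0.3, 0.3, 0.1]
filters = cmfb_analysis(proto, 2)
print(len(filters), len(filters[0]))  # 2 4
```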
Finally, Figure 9.28 illustrates the reconstruction of the input signal (white noise) at the end of the CMFB in the time and frequency domains, respectively.
9.20
c00 = cos(π/8)
c01 = cos(3π/8)
c10 = cos(3π/8)
c11 = cos(9π/8)
9.21 In order to design a cosine-modulated filter bank with 5 subbands, we need a prototype filter with the passband set to π/10. This filter should also have 40 dB of stopband attenuation. It is generated by an optimization routine that minimizes the stopband energy of an initial filter under the constraint depicted in equation (9.207) of the textbook, which guarantees the perfect reconstruction property.

The passband and stopband frequencies used were, respectively, (π/10 − ρ) and (π/10 + ρ), where ρ was set to π/40. Such a filter could be obtained using 100 coefficients (L = 10). Figure 9.30 shows the magnitude and phase responses of the prototype filter.
The magnitude responses for the five analysis filters of the filter bank can be seen in Figure 9.31. Thesefilters are obtained from the prototype filter by using the relations showed in equation (9.195) of thetextbook.
9.22 The passband of the prototype filter for a 15-band filter bank should be π/30. The stopband attenuation required by the exercise is 20 dB. Once again, the prototype filter is generated by an optimization routine that minimizes the stopband energy under the constraint depicted in equation (9.207) of the textbook, which guarantees the perfect reconstruction property for the filter bank.

The passband and stopband frequencies used were, respectively, (π/30 − ρ) and (π/30 + ρ), where ρ was set to π/150. Such a filter could be obtained using 150 coefficients (L = 5). Figure 9.32 shows the magnitude and phase responses of the prototype filter.
The magnitude responses of the fifteen analysis filters of the filter bank can be seen in Figure 9.33. Once again, these filters are obtained from the prototype filter by using the relations shown in Equation (9.195) of the textbook.
Figure 9.26: Impulse responses of the filters: from h0(n) and h1(n) (first row) to h8(n) and h9(n) (last row) - Exercise 9.19.
9.23 To obtain the magnitude responses for the filter bank equivalent to the DFT operation (M = 8), only a few MatLab® commands are necessary:
290 CHAPTER 9. FILTER BANKS
Figure 9.27: Frequency responses of the filters: from h0(n) and h1(n) (first row) to h8(n) and h9(n) (last row) - Exercise 9.19.
F=dftmtx(8);
F=F./sqrt(8);
Finv=inv(F);
H=flipud(Finv);
Figure 9.28: Reconstruction: Time domain (left) and Frequency domain (right) - Exercise 9.19.
Figure 9.29: Magnitude response for the prototype filter - Exercise 9.20.
for i=1:8
  subplot(4,2,i)
  [M,f]=freqz(H(i,:),1,512,'whole');
  plot(f./pi,20*log10(abs(M))) % freqz returns the complex response, so take abs
  axis([0 2 -22 18])
  ylabel('Magnitude (dB)')
  text(.95,14,['Filter ' num2str(i)])
end
Figure 9.34 shows the magnitude responses for the eight analysis filters. It is very interesting to notice that the whole frequency spectrum (from 0 to 2π) has been used to show the magnitude responses for the DFT filter bank. This is important since these filters have complex coefficients, and therefore their magnitude responses are not symmetric about π, unlike the responses of filters with real coefficients.
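The same construction can be reproduced outside MatLab®; a NumPy sketch of the snippet above (variable names are ours) is:

```python
import numpy as np

# NumPy counterpart of the MATLAB snippet: the analysis filters of the
# M = 8 DFT bank are the rows of the scaled, inverted and row-flipped
# DFT matrix, evaluated over the whole circle [0, 2*pi).
M = 8
F = np.fft.fft(np.eye(M)) / np.sqrt(M)      # dftmtx(8)/sqrt(8)
H = np.flipud(np.linalg.inv(F))             # flipud(inv(F))

w = np.linspace(0, 2 * np.pi, 512, endpoint=False)
E = np.exp(-1j * np.outer(w, np.arange(M))) # "freqz" evaluation matrix
mag = np.abs(E @ H.T)                       # column k = |H_k(e^{jw})|
centers = mag.argmax(axis=0) * M // 512     # passband center of each filter
```

Each filter's magnitude response peaks at a distinct multiple of 2π/8, covering the whole circle, which is the asymmetry about π discussed above.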
9.24
9.25 The implementation of the fast LOT uses the matrices CII and CIV to compose the factorable matrix L1. This procedure gives rise to linear-phase filters and allows a fast implementation. Figure 9.35 shows the magnitude responses of the 8 analysis filters of the generated filter bank. It can be clearly seen that the
Figure 9.30: Magnitude response for the prototype filter - Exercise 9.21.
Figure 9.31: Magnitude responses for the five analysis filters of the filter bank - Exercise 9.21.
filters generated are, in general, more selective than the ones generated by the DFT-based filter bank of the previous exercise.
9.26 In order to prove that C_1^T C_2 = C_2^T C_1 = 0, we first define the matrices C_1 and C_2 according to Equations (9.219) and (9.220) of the textbook:

C_1 = √M (−1)^⌊L/2⌋ C_M^IV (I − (−1)^L J)

C_2 = −√M (−1)^⌊L/2⌋ C_M^IV ((−1)^L I + J)
293
Figure 9.32: Magnitude response for the prototype filter - Exercise 9.22.
Figure 9.33: Magnitude responses for the fifteen analysis filters of the filter bank - Exercise 9.22.
where ⌊x⌋ represents the largest integer smaller than or equal to x, and C_M^IV is the DCT-IV matrix defined in Equation (9.218) of the textbook.
C_1^T = √M (−1)^⌊L/2⌋ (I^T − (−1)^L J^T)(C_M^IV)^T
C_1^T = √M (−1)^⌊L/2⌋ (I − (−1)^L J)(C_M^IV)^T
Figure 9.34: Magnitude and phase responses for the analysis filters - Exercise 9.23.
C_1^T C_2 = √M (−1)^⌊L/2⌋ (I − (−1)^L J)(C_M^IV)^T (−√M)(−1)^⌊L/2⌋ C_M^IV ((−1)^L I + J)
= −M (I − (−1)^L J)(C_M^IV)^T C_M^IV ((−1)^L I + J)
= −M (I − (−1)^L J)((−1)^L I + J)
= −M ((−1)^L I + J − (−1)^{2L} JI − (−1)^L JJ)
= −M ((−1)^L I + J − J − (−1)^L I), since JI = J and JJ = I
C_1^T C_2 = 0
C_2^T = −√M (−1)^⌊L/2⌋ ((−1)^L I^T + J^T)(C_M^IV)^T
C_2^T = −√M (−1)^⌊L/2⌋ ((−1)^L I + J)(C_M^IV)^T
Figure 9.35: Magnitude and phase responses for the analysis filters of the 8-band fast LOT - Exercise 9.25.
C_2^T C_1 = −M ((−1)^L I + J)(C_M^IV)^T C_M^IV (I − (−1)^L J)
= −M ((−1)^L I + J)(I − (−1)^L J)
= −M ((−1)^L I − (−1)^{2L} J + J − (−1)^L JJ)
= −M ((−1)^L I − J + J − (−1)^L I)
C_2^T C_1 = 0
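The identities above can be checked numerically (in Python, as an alternative to MatLab®): build C1 and C2 from an explicit DCT-IV matrix and the counter-identity J, and verify that both products vanish for even and odd L. The `dct_iv` helper is ours:

```python
import numpy as np

def dct_iv(M):
    # DCT-IV matrix: sqrt(2/M) cos((2k+1)(2n+1)pi/(4M)); symmetric, orthogonal
    k = np.arange(M)
    return np.sqrt(2.0 / M) * np.cos(np.outer(2 * k + 1, 2 * k + 1) * np.pi / (4 * M))

M = 8
CIV = dct_iv(M)
I = np.eye(M)
J = np.fliplr(I)                   # counter-identity
ok = True
for L in (2, 3):                   # even and odd L
    s = np.sqrt(M) * (-1) ** (L // 2)
    C1 = s * CIV @ (I - (-1) ** L * J)
    C2 = -s * CIV @ ((-1) ** L * I + J)
    ok = ok and np.allclose(C1.T @ C2, 0) and np.allclose(C2.T @ C1, 0)
```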
9.27
Defining

C_3 = C_1 C_1^T
C_4 = C_1 + C_2

with

C_1 = (1/2) [Ce − Co ; Ce − Co]
C_2 = (1/2) [Ce + Co ; −Ce − Co]

we want to show that C_1 = C_3 C_4 and C_2 = (I − C_3) C_4, so that, according to Equation (9.235) of the textbook:

E(z) = C_1 + z^{−1} C_2
= C_3 C_4 + z^{−1}(I − C_3) C_4
E(z) = [C_3 + z^{−1}(I − C_3)] C_4

Computing C_3 (the rows of Ce and Co are orthonormal, so that Ce Ce^T = Co Co^T = I and Ce Co^T = Co Ce^T = 0):

C_3 = C_1 C_1^T = (1/2)[Ce − Co ; Ce − Co] (1/2)[Ce^T − Co^T  Ce^T − Co^T]
= (1/4) [ Ce Ce^T − Ce Co^T − Co Ce^T + Co Co^T , (same block) ; (same block) , (same block) ]
= (1/4) [ I + I , I + I ; I + I , I + I ]
C_3 = (1/2) [ I I ; I I ]

Computing C_4:

C_4 = C_1 + C_2
= (1/2)[Ce − Co ; Ce − Co] + (1/2)[Ce + Co ; −Ce − Co]
C_4 = [Ce ; −Co]

Indeed, C_3 C_4 = (1/2)[Ce − Co ; Ce − Co] = C_1 and (I − C_3) C_4 = (1/2)[Ce + Co ; −Ce − Co] = C_2.
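A numerical check of these identities (Python sketch; the `dct_ii` helper building an orthonormal DCT-II matrix is ours) splits a 10×10 DCT into even rows Ce and odd rows Co:

```python
import numpy as np

def dct_ii(M):
    # Orthonormal DCT-II matrix
    k, n = np.meshgrid(np.arange(M), np.arange(M), indexing='ij')
    T = np.sqrt(2.0 / M) * np.cos(np.pi * k * (2 * n + 1) / (2 * M))
    T[0] /= np.sqrt(2)
    return T

M = 10
T = dct_ii(M)
Ce, Co = T[0::2], T[1::2]                     # even / odd rows
C1 = 0.5 * np.vstack([Ce - Co, Ce - Co])
C2 = 0.5 * np.vstack([Ce + Co, -Ce - Co])
C3 = C1 @ C1.T
C4 = np.vstack([Ce, -Co])
```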
9.28
9.29
9.30
9.31
9.32 (A) LOT:
In this exercise, since the only constraint is that the lapped biorthogonal transform have linear phase, we will design a particular case known as the fast LOT. The fast LOT is based on DCTs, which yields both a fast implementation and linear phase; besides, we do not need as many degrees of freedom as we would have in a general biorthogonal transform design.
All the matrices in Equation (9.240) of the textbook are defined, except for the matrix L1, which is a function of the matrix L2 (see Equation (9.244)). So, L2 is actually the only matrix we need to design. Since we opted for a fast implementation, we choose this matrix as:
L2 = CIV CII,
where CIV is the DCT type-IV and CII is the DCT type-II, both matrices having dimension M/2 = 5, where M = 10 is the number of sub-bands.
So, after calculating L2, which is a 5×5 matrix, we use Equation (9.244) to form L1, which is a 10×10 matrix. Then, we compute the matrix products in Equation (9.240), which involve only square (10 × 10) matrices, to get the polyphase components of the analysis filter bank. Finally, we use Equation (9.231) to calculate the analysis filters.
The impulse response of h0(n) is given below. Notice that, just as expected, its length is 2M = 20.
h0(n) = [-0.0627, -0.0411, -0.0000, 0.0566, 0.1231, 0.1931, 0.2596, 0.3162, 0.3573, 0.3790, 0.3790, 0.3573, 0.3162, 0.2596, 0.1931, 0.1231, 0.0566, 0.0000, -0.0411, -0.0627];
Figures 9.36 and 9.37 depict the impulse responses and frequency responses, respectively, of the analysis filters. Notice that their phases are linear.
• MatLab® program:
M = 10; % number of sub-bands
M_til = M/2;
C_II = zeros(M_til,M_til);
C_IV = zeros(M_til,M_til);

% Computing DCT types II and IV:
for l=0:M_til-1
  for n=0:M_til-1
    if n == 0
      alpha = 1/sqrt(2);
    else
      alpha = 1;
    end
    C_II(l+1,n+1) = alpha * sqrt(2/M_til) * cos((2*l+1) * (pi/(2*M_til)) * n );
    C_IV(l+1,n+1) = sqrt(2/M_til) * cos((2*l+1) * (pi/(2*M_til)) * (n + 1/2) );
  end
end
C_II = C_II.';
C_IV = C_IV.';

L2 = C_II * C_IV;
L1 = [eye(M_til) zeros(M_til,M_til) ; zeros(M_til,M_til) L2];

% Computing Ce and Co:
C = dctmtx(10);
Ce = C(1:2:10,:);
Co = C(2:2:10,:);

% Computing the polyphase components:
Minus = (Ce - Co) .* 1/2;
Plus = (Ce + Co) .* 1/2;
L2_Minus = (L2 * Minus);
L2_Plus = -(L2 * Plus);

aux1 = kron(Minus , [1 0]);
aux2 = kron(Plus , [0 1]);
aux3 = kron(L2_Minus , [1 0]);
aux4 = kron(L2_Plus , [0 1]);

E_z = [ aux1 + aux2 ; aux3 + aux4 ];
k = 1;
for m=1:10
  for n=1:2:19
    E_vec(k,:) = [E_z(m,n) zeros(1,M-1) E_z(m,n+1)];
    k = k + 1;
Figure 9.36: Impulse responses (fast LOT): from h0(n) and h1(n) (first row) to h8(n) and h9(n) (last row) - Exercise 9.32.
  end
end
% Computing H0:
Figure 9.37: Frequency responses (fast LOT): from h0(n) and h1(n) (first row) to h8(n) and h9(n) (last row) - Exercise 9.32.
h0 = zeros(1,2*M);
d = [1 zeros(1,9)];
for k=1:10
  h0 = h0 + conv(E_vec(k,:),d);
  d = circshift(d,[0 1]);
end

% Computing H1:
h1 = zeros(1,2*M);
d = [1 zeros(1,9)];
for k=11:20
  h1 = h1 + conv(E_vec(k,:),d);
  d = circshift(d,[0 1]);
end

% Computing H2:
h2 = zeros(1,2*M);
d = [1 zeros(1,9)];
for k=21:30
  h2 = h2 + conv(E_vec(k,:),d);
  d = circshift(d,[0 1]);
end

% Computing H3:
h3 = zeros(1,2*M);
d = [1 zeros(1,9)];
for k=31:40
  h3 = h3 + conv(E_vec(k,:),d);
  d = circshift(d,[0 1]);
end

% Computing H4:
h4 = zeros(1,2*M);
d = [1 zeros(1,9)];
for k=41:50
  h4 = h4 + conv(E_vec(k,:),d);
  d = circshift(d,[0 1]);
end

% Computing H5:
h5 = zeros(1,2*M);
d = [1 zeros(1,9)];
for k=51:60
  h5 = h5 + conv(E_vec(k,:),d);
  d = circshift(d,[0 1]);
end

% Computing H6:
h6 = zeros(1,2*M);
d = [1 zeros(1,9)];
for k=61:70
  h6 = h6 + conv(E_vec(k,:),d);
  d = circshift(d,[0 1]);
end

% Computing H7:
h7 = zeros(1,2*M);
d = [1 zeros(1,9)];
for k=71:80
  h7 = h7 + conv(E_vec(k,:),d);
  d = circshift(d,[0 1]);
end

% Computing H8:
h8 = zeros(1,2*M);
d = [1 zeros(1,9)];
for k=81:90
  h8 = h8 + conv(E_vec(k,:),d);
  d = circshift(d,[0 1]);
end

% Computing H9:
h9 = zeros(1,2*M);
d = [1 zeros(1,9)];
for k=91:100
  h9 = h9 + conv(E_vec(k,:),d);
  d = circshift(d,[0 1]);
end
(B) BOLT:
Now we want to design a BOLT (biorthogonal lapped transform) which is not orthogonal (as opposed to what was done in (A)). By analyzing Equations (9.256) and (9.259), we note that the easiest way to design a BOLT is to keep C3 just as it is in Equation (9.257) and to replace the matrix C4 by a different one, i.e., different from Equation (9.258). The question is: what characteristics should this new matrix C4 have?
The answer is simple:
• it must have only symmetric or anti-symmetric rows, in order to generate only linear-phase filters;
• it must not be orthogonal, because we must not have R(z) = E^T(z^{−1}) (this is clear from Equations (9.256) and (9.259)), since that would lead to a particular case, the LOT.
So, since there is no further constraint in this exercise, we chose an arbitrary C4 satisfying the two characteristics mentioned above. No optimization criterion was used. The matrix we used was:
C4 =
[ 0.5  0.5  0.5  0.5  0.5  0.5  0.5  0.5  0.5  0.5
  0.5  0.3  0.1 -0.1 -0.3 -0.3 -0.1  0.1  0.3  0.5
  0.4  0.1 -0.2 -0.5 -0.2 -0.2 -0.5 -0.2  0.1  0.4
  0.4 -0.2 -0.5 -0.1  0.5  0.5 -0.1 -0.5 -0.2  0.4
  0.3 -0.3  0.3 -0.3  0.3  0.3 -0.3  0.3 -0.3  0.3
 -0.5 -0.5 -0.5 -0.5 -0.5  0.5  0.5  0.5  0.5  0.5
 -0.5 -0.3 -0.1  0.1  0.3 -0.3 -0.1  0.1  0.3  0.5
 -0.4 -0.1  0.2  0.5  0.2 -0.2 -0.5 -0.2  0.1  0.4
 -0.4  0.2  0.5  0.1 -0.5  0.5 -0.1 -0.5 -0.2  0.4
 -0.3  0.3 -0.3  0.3 -0.3  0.3 -0.3  0.3 -0.3  0.3 ];
The frequency responses of the filters are given in Figure 9.38. The impulse response of h0(n) is given below.
h0(n) = [0, 0, 0, 0, 0, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0.5000, 0, 0, 0, 0, 0];
Notice that the attenuation of the designed BOLT is low and, therefore, its practical usage is limited to applications in which the stopband attenuation is not a major concern.
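The two requirements on C4 can be verified directly (Python sketch, using the matrix above): every row must equal its own reversal up to sign, and the matrix must not be orthogonal.

```python
import numpy as np

C4 = np.array([
    [ 0.5,  0.5,  0.5,  0.5,  0.5,  0.5,  0.5,  0.5,  0.5,  0.5],
    [ 0.5,  0.3,  0.1, -0.1, -0.3, -0.3, -0.1,  0.1,  0.3,  0.5],
    [ 0.4,  0.1, -0.2, -0.5, -0.2, -0.2, -0.5, -0.2,  0.1,  0.4],
    [ 0.4, -0.2, -0.5, -0.1,  0.5,  0.5, -0.1, -0.5, -0.2,  0.4],
    [ 0.3, -0.3,  0.3, -0.3,  0.3,  0.3, -0.3,  0.3, -0.3,  0.3],
    [-0.5, -0.5, -0.5, -0.5, -0.5,  0.5,  0.5,  0.5,  0.5,  0.5],
    [-0.5, -0.3, -0.1,  0.1,  0.3, -0.3, -0.1,  0.1,  0.3,  0.5],
    [-0.4, -0.1,  0.2,  0.5,  0.2, -0.2, -0.5, -0.2,  0.1,  0.4],
    [-0.4,  0.2,  0.5,  0.1, -0.5,  0.5, -0.1, -0.5, -0.2,  0.4],
    [-0.3,  0.3, -0.3,  0.3, -0.3,  0.3, -0.3,  0.3, -0.3,  0.3],
])
sym = [np.allclose(r, r[::-1]) for r in C4]      # symmetric rows
asym = [np.allclose(r, -r[::-1]) for r in C4]    # anti-symmetric rows
```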
9.33
9.34 The procedure to implement the GenLOT is very simple. We compute Equation (9.272) to get thepolyphase components and then we use Equation (9.231) to compute the analysis filters. We havechosen C4 as in Equation (9.258). This choice enables fast implementation structures. Therefore, ourdesign parameters are:
• matrix L3,j ;
• matrix L2,j ;
• number of stages L.
We have used combinations of the DCTs type-II and type-IV in order to generate the matrices L3,j and L2,j, and we used L = 2.
The frequency responses of the analysis filters are depicted in Figure ??.
9.35
9.36
Figure 9.38: Frequency responses (BOLT): from h0(n) and h1(n) (first row) to h8(n) and h9(n) (last row)- Exercise 9.32.
9.37 Hi(z) and Gi(z) are linear-phase filters of length N = LM, so that
Hi(z) = ±z^{−N+1} Hi(z^{−1})
or, collecting the analysis filters in the vector e(z) and the signs in a diagonal matrix D:
e(z) = [H0(z) H1(z) · · · HM−1(z)]^T
e(z) = D z^{−N+1} e(z^{−1})
Writing e(z) = E(z^M) d(z), with d(z) = [1 z^{−1} · · · z^{−M+1}]^T, we have
E(z^M) d(z) = D z^{−N+1} E(z^{−M}) d(z^{−1}),   d(z^{−1}) = z^{M−1} J d(z)
E(z^M) d(z) = z^{−LM+1} D E(z^{−M}) z^{M−1} J d(z)
E(z^M) = z^{−M(L−1)} D E(z^{−M}) J
E(z) = z^{−L+1} D E(z^{−1}) J
Similarly, for the synthesis filters:
Gi(z) = ±z^{−N+1} Gi(z^{−1})
r(z) = [G0(z) G1(z) · · · GM−1(z)]^T
r(z) = D z^{−N+1} r(z^{−1})
Writing r(z) = R^T(z^M) p(z), with p(z) = [z^{−(M−1)} z^{−(M−2)} · · · 1]^T, we have
R^T(z^M) p(z) = D z^{−N+1} R^T(z^{−M}) p(z^{−1}),   p(z^{−1}) = z^{M−1} J p(z)
R^T(z^M) p(z) = z^{−LM+1} D R^T(z^{−M}) z^{M−1} J p(z)
R^T(z^M) = z^{−M(L−1)} D R^T(z^{−M}) J
R^T(z) = z^{−L+1} D R^T(z^{−1}) J
9.38
E(z) = E2(z) E1(z)
with E(z) = z^{−2L+1} D E(z^{−1}) J and E1(z) = z^{−L+1} D E1(z^{−1}) J. Then
E(z) = z^{−2L+1} D E(z^{−1}) J = z^{−2L+1} D E2(z^{−1}) E1(z^{−1}) J
E2(z) z^{−L+1} D E1(z^{−1}) J = z^{−2L+1} D E2(z^{−1}) E1(z^{−1}) J
E2(z) D E1(z^{−1}) = z^{−L} D E2(z^{−1}) E1(z^{−1})
E2(z) D = z^{−L} D E2(z^{−1})
E2(z) = z^{−L} D E2(z^{−1}) D
Matching the coefficients of z^{−i} on both sides (the coefficient of z^{−i} in z^{−L} E2(z^{−1}) is E2,L−i), we obtain:
E2,i = D E2,L−i D
Chapter 10
WAVELETS
10.1 Before the decimation, we have that:
X0(z) = X(z) H0(z)
X1(z) = X(z) H0(z) H0(z²)
...
XS(z) = X(z) ∏_{k=0}^{S} H0(z^{2^k})
so that
H_low^{(S)}(z) = ∏_{k=0}^{S} H0(z^{2^k})
For the bandpass branches:
C0(z) = X(z) H1(z)
C1(z) = X(z) H1(z²) H0(z)
C2(z) = X(z) H1(z⁴) H0(z) H0(z²)
...
CS(z) = X(z) H1(z^{2^S}) ∏_{k=0}^{S−1} H0(z^{2^k})
H_high^{(S)}(z) = H1(z^{2^S}) ∏_{k=0}^{S−1} H0(z^{2^k})
H_high^{(S)}(z) = H1(z^{2^S}) H_low^{(S−1)}(z)
On the synthesis side:
Y(z) = XS(z) G0(z) G0(z²) · · · G0(z^{2^S})
G_low^{(S)}(z) = ∏_{k=0}^{S} G0(z^{2^k})
Y(z) = CS(z) G1(z^{2^S}) G0(z) G0(z²) · · · G0(z^{2^{S−1}})
G_high^{(S)}(z) = G1(z^{2^S}) ∏_{k=0}^{S−1} G0(z^{2^k})
G_high^{(S)}(z) = G1(z^{2^S}) G_low^{(S−1)}(z)
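The key identity used above — a filter H0(z^2) has the impulse response of h0 upsampled by 2 — can be checked numerically. The toy lowpass h0 below is ours, for illustration only:

```python
import numpy as np

# Check: the impulse response of H0(z)H0(z^2) is h0 convolved with the
# 2-upsampled h0, and its frequency response equals H0(e^{jw})H0(e^{j2w}).
h0 = np.array([1.0, 3.0, 3.0, 1.0]) / 8       # toy lowpass filter

def upsample(h, L):
    u = np.zeros(L * (len(h) - 1) + 1)
    u[::L] = h
    return u

h_eq = np.convolve(h0, upsample(h0, 2))        # impulse response of H0(z)H0(z^2)

freq = lambda h, w: np.exp(-1j * np.outer(w, np.arange(len(h)))) @ h
w = np.linspace(0, np.pi, 64)
```

Iterating this convolution with ever-wider upsampling factors builds the equivalent filters H_low^{(S)}(z) and H_high^{(S)}(z) of the octave-band tree.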
10.2 If P(z) has a zero at z = −1, then P(e^{jω}) will also be zero at ω = π.
P(e^{jω}) = e^{−j(2M−1)ω} + Σ_{k=0}^{M−1} a_{2k} (e^{−j2kω} + e^{−j(4M−2−2k)ω})
P(e^{jω}) = e^{−j(2M−1)ω} Q(ω)
Q(ω) = 1 + Σ_{k=0}^{M−1} 2 a_{2k} cos[(2M − 1 − 2k)ω]
Thus, the derivatives of Q(ω) with respect to ω alternate between cosines and sines, with alternating signs. As a general rule, we have that:
d^{2i}Q(ω)/dω^{2i} = (−1)^i Σ_{k=0}^{M−1} 2 a_{2k} (2M − 1 − 2k)^{2i} cos[(2M − 1 − 2k)ω]
d^{2i−1}Q(ω)/dω^{2i−1} = (−1)^i Σ_{k=0}^{M−1} 2 a_{2k} (2M − 1 − 2k)^{2i−1} sin[(2M − 1 − 2k)ω]
d^{2i−1}Q(ω)/dω^{2i−1} |_{ω=π} = 0
(a) We know that if Q(ω) has 2M zeros at ω = π, then Q(π) = 0 and all its derivatives at ω = π, up to order (2M − 1), must also be zero.
By analogy, in order to have more than 2M zeros at ω = π, Q(ω) must in addition have at least the derivative of order 2M equal to zero. If this happens, we will have the following:
Q(π) = 0
d^{2i}Q(ω)/dω^{2i} |_{ω=π} = 0, i = 1, . . . , M
The above equations form a system with M + 1 linearly independent equations. However, Q(ω) has only M degrees of freedom (a_{2k}, k = 0, . . . , M − 1). Thus, this system has no solution; our assumption must therefore be wrong, and Q(ω) cannot have more than 2M zeros at ω = π.
(b) Since the odd derivatives of Q(ω) at ω = π are always zero, to guarantee that Q(ω) has exactly 2M zeros at π, the following M equations must hold:
Q(π) = 0
d^{2i}Q(ω)/dω^{2i} |_{ω=π} = 0, i = 1, . . . , M − 1
By inspection of the expressions for the derivatives of Q(ω), these conditions are equivalent to the following M equations:
Σ_{k=0}^{M−1} a_{2k} (2M − 1 − 2k)^{2n} = (1/2) δ(n), n = 0, . . . , M − 1
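The M equations above can be solved directly for a small case. For M = 2 the weights are (2M − 1 − 2k) = 3, 1, and the linear system yields a0 = −1/16, a2 = 9/16; the resulting Q(ω) then has a quadruple (2M = 4) zero at ω = π:

```python
import numpy as np

M = 2
pw = np.array([3.0, 1.0])                        # (2M-1-2k), k = 0, 1
A = np.array([pw ** (2 * n) for n in range(M)])  # rows n = 0, 1
rhs = np.array([0.5, 0.0])                       # (1/2) delta(n)
a = np.linalg.solve(A, rhs)                      # a0 = -1/16, a2 = 9/16

Q = lambda w: 1 + np.sum(2 * a * np.cos(np.multiply.outer(w, pw)), axis=-1)
```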
10.3 Exercise 10.3
10.4 Exercise 10.4
10.5 Exercise 10.5
10.6 Exercise 10.6
10.7 Exercise 10.7
10.8 Exercise 10.8
10.9 Exercise 10.9
10.10 Exercise 10.10
10.11 Exercise 10.11
10.12 Exercise 10.12
10.13 Exercise 10.13
10.14 Exercise 10.14
10.15 Exercise 10.15
10.16 Exercise 10.16
Chapter 11
FINITE PRECISION EFFECTS
11.1 The circuit is shown in Figure 11.1: a full adder with serial inputs X(k) and Y(k), whose carry output is fed back through a D flip-flop driven by the clock and cleared by the reset signal.
Figure 11.1: Circuit for exercise 11.1.
The reset must be set to ‘1’ for the first bit and then set to ‘0’ for the next ones.
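The bit-serial operation of this adder can be sketched in software (a Python pseudo-hardware model, for illustration): bits enter LSB first, one full-adder evaluation per clock, with the carry held in the flip-flop that the reset clears before the first bit.

```python
def serial_add(x_bits, y_bits):
    """Bit-serial two's-complement addition; x_bits, y_bits are
    LSB-first bit lists of equal length."""
    carry = 0                                  # reset = '1' clears the DFF
    out = []
    for x, y in zip(x_bits, y_bits):
        s = x ^ y ^ carry                      # full-adder sum bit
        carry = (x & y) | (carry & (x | y))    # full-adder carry -> DFF
        out.append(s)
    return out

# 3 + 5 with 4-bit words, LSB first: 0011 + 0101 = 1000
result = serial_add([1, 1, 0, 0], [1, 0, 1, 0])
```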
11.2 (a) We want to verify that X − Y = X + c[Y], where c[Y] is the two's complement of Y:
X = −s_x + Σ_{i=1}^{n} x_i 2^{−i}
Y = −s_y + Σ_{i=1}^{n} y_i 2^{−i}
c[Y] = 2 − (s_y + Σ_{i=1}^{n} y_i 2^{−i})
with x_i, y_i, s_x, s_y ∈ {0, 1}.
Thus, we have:
X − Y = X + c[Y]
−s_x + Σ_{i=1}^{n} x_i 2^{−i} − (−s_y + Σ_{i=1}^{n} y_i 2^{−i}) = −s_x + Σ_{i=1}^{n} x_i 2^{−i} + 2 − (s_y + Σ_{i=1}^{n} y_i 2^{−i})
(s_y − s_x) + Σ_{i=1}^{n} (x_i − y_i) 2^{−i} = 2 − (s_y + s_x) + Σ_{i=1}^{n} (x_i − y_i) 2^{−i}
s_y − s_x = 2 − (s_y + s_x)
2 s_y = 2
s_y = 1
It can be concluded that for s_y = 1, X − Y = X + c[Y] is always true. However, since we are dealing with binary numbers limited to the interval (−1, 1), we have to analyze whether the operation caused an overflow, which may happen when s_y = 0, by looking into the binary representation of the operation. Calling β[·] the operator that maps a number to its binary representation x_b = s_x.x_1x_2 · · · x_n, we have:
β[x] = x_b, x ≥ 0
β[x] = β[2 − |x|], x < 0
Denoting S = Σ_{i=1}^{n} (x_i − y_i) 2^{−i}, the condition to check is:
β[(s_y − s_x) + S] = β[2 − (s_y + s_x) + S]
1) For s_y = 1 and s_x = 0:
β[(1 − 0) + S] = β[2 − (1 + 0) + S]
β[1 + S] = β[1 + S]
2) For s_y = 1 and s_x = 1:
β[(1 − 1) + S] = β[2 − (1 + 1) + S]
β[S] = β[S]
3) For s_x = 1 and s_y = 0:
β[(0 − 1) + S] = β[2 − (0 + 1) + S]
β[−1 + S] = β[1 + S]
Since (−1 + S) ≤ 0, then
β[2 − |−1 + S|] = β[1 + S]
β[2 − (1 − S)] = β[1 + S]
β[1 + S] = β[1 + S]
4) For s_x = 0 and s_y = 0:
β[(0 − 0) + S] = β[2 − (0 + 0) + S]
β[S] = β[2 + S]
If S ≤ 0, then
β[S] = β[2 − |S|] = β[2 + S]
and both sides agree. If S > 0, then β[2 + S] overflows, so:
β[2 + S] = β[S]
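The case analysis above amounts to saying that X + c[Y] reproduces X − Y modulo 2 (i.e., up to the discarded overflow). An exhaustive check over all n-bit values (a Python sketch on the real values, bit patterns aside):

```python
# Over all n-bit two's-complement values X, Y in [-1, 1), verify that
# X + c[Y] equals X - Y modulo 2 (overflow-discarding arithmetic).
n = 4
q = 2.0 ** -n

def wrap(v):
    # wrap into the representable range [-1, 1)
    return ((v + 1.0) % 2.0) - 1.0

def c(y):
    # c[Y] = 2 - (s_y + sum y_i 2^-i)
    s = 1 if y < 0 else 0
    frac = y + s                      # sum of y_i 2^-i
    return 2.0 - (s + frac)

vals = [k * q for k in range(-2 ** n, 2 ** n)]
ok = all(abs(wrap(X + c(Y)) - wrap(X - Y)) < 1e-12 for X in vals for Y in vals)
```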
(b) Denoting the one's complement of a number by an overline:
X = −s_x + Σ_{i=1}^{n} x_i 2^{−i}
Y = −s_y + Σ_{i=1}^{n} y_i 2^{−i}
X̄ = 1 + Σ_{i=1}^{n} 2^{−i} − X = −(1 − s_x) + Σ_{i=1}^{n} (1 − x_i) 2^{−i}
with x_i, y_i, s_x, s_y ∈ {0, 1}. Let Z = Y + X̄:
Z = −(1 − s_x + s_y) + Σ_{i=1}^{n} (1 − x_i + y_i) 2^{−i}
Z̄ = 2 + 1 + Σ_{i=1}^{n} 2^{−i} − Z
Z̄ = 2 + 1 + Σ_{i=1}^{n} 2^{−i} − [−(1 − s_x + s_y) + Σ_{i=1}^{n} (1 − x_i + y_i) 2^{−i}]
Z̄ = 4 + (s_y − s_x) + Σ_{i=1}^{n} (x_i − y_i) 2^{−i}
1) If (s_y − s_x) + Σ_{i=1}^{n} (x_i − y_i) 2^{−i} ≥ 0, then:
β[Z̄] = β[4 + (s_y − s_x) + Σ_{i=1}^{n} (x_i − y_i) 2^{−i}]
β[Z̄] = β[4] + β[(s_y − s_x) + Σ_{i=1}^{n} (x_i − y_i) 2^{−i}]
This means the sum caused an overflow; thus we have:
β[Z̄] = β[(s_y − s_x) + Σ_{i=1}^{n} (x_i − y_i) 2^{−i}]
β[Z̄] = β[X − Y]
for X − Y ≥ 0.
2) If (s_y − s_x) + Σ_{i=1}^{n} (x_i − y_i) 2^{−i} < 0, then:
β[Z̄] = β[2] + β[2 − |(s_y − s_x) + Σ_{i=1}^{n} (x_i − y_i) 2^{−i}|]
This means the sum also caused an overflow, and we took the two's complement of (s_y − s_x) + Σ_{i=1}^{n} (x_i − y_i) 2^{−i}; thus we have:
β[Z̄] = β[2 − |(s_y − s_x) + Σ_{i=1}^{n} (x_i − y_i) 2^{−i}|]
β[Z̄] = β[X − Y]
for X − Y < 0.
11.3
X = −s_x + Σ_{i=1}^{n} x_i 2^{−i}
Y = −s_y + Σ_{i=1}^{n} y_i 2^{−i}
with x_i, y_i, s_x, s_y ∈ {0, 1}. So:
X × Y = (−s_x + Σ_{i=1}^{n} x_i 2^{−i})(−s_y + Σ_{i=1}^{n} y_i 2^{−i})
X × Y = s_x s_y − s_x Σ_{i=1}^{n} y_i 2^{−i} − s_y Σ_{i=1}^{n} x_i 2^{−i} + Σ_{i=1}^{n} x_i 2^{−i} Σ_{j=1}^{n} y_j 2^{−j}
X × Y = s_x s_y − (s_x Σ_{i=1}^{n} y_i 2^{−i} + s_y Σ_{i=1}^{n} x_i 2^{−i}) + Σ_{i=1}^{n} x_i Σ_{j=1}^{n} y_j 2^{−(i+j)}
The first term, s_x s_y, is very simple to calculate. In the second term, s_x Σ_{i=1}^{n} y_i 2^{−i} + s_y Σ_{i=1}^{n} x_i 2^{−i}, the sums are only calculated if s_x = 1 or s_y = 1, and a parallel adder can be used to perform them.
If n is even:
X × Y = s_x s_y − (s_x Σ_{i=1}^{n} y_i 2^{−i} + s_y Σ_{i=1}^{n} x_i 2^{−i}) + Σ_{i=1}^{n/2} (x_i + x_{n/2+i} 2^{−n/2}) Σ_{j=1}^{n} y_j 2^{−(i+j)}
X × Y = s_x s_y − (s_x Σ_{i=1}^{n} y_i 2^{−i} + s_y Σ_{i=1}^{n} x_i 2^{−i}) + Σ_{i=1}^{n/2} [ x_i Σ_{j=1}^{n} y_j 2^{−(i+j)} + x_{n/2+i} Σ_{j=1}^{n} y_j 2^{−(n/2+i+j)} ]
If n is odd:
X × Y = s_x s_y − (s_x Σ_{i=1}^{n} y_i 2^{−i} + s_y Σ_{i=1}^{n} x_i 2^{−i}) + Σ_{i=1}^{(n−1)/2} (x_i + x_{(n−1)/2+i} 2^{−(n−1)/2}) Σ_{j=1}^{n} y_j 2^{−(i+j)} + x_n Σ_{j=1}^{n} y_j 2^{−(n+j)}
X × Y = s_x s_y − (s_x Σ_{i=1}^{n} y_i 2^{−i} + s_y Σ_{i=1}^{n} x_i 2^{−i}) + Σ_{i=1}^{(n−1)/2} [ x_i Σ_{j=1}^{n} y_j 2^{−(i+j)} + x_{(n−1)/2+i} Σ_{j=1}^{n} y_j 2^{−((n−1)/2+i+j)} ] + x_n Σ_{j=1}^{n} y_j 2^{−(n+j)}
To perform the sum in the last term, it is first necessary to shift the fractional part of Y by n/2 positions for n even and by (n − 1)/2 positions for n odd, and then, for every i, to shift the fractional part of Y by i additional positions. This requires n/2 parallel adders for n even and (n − 1)/2 + 1 parallel adders for n odd. In order to keep the precision of the multiplier, the internal signals and the parallel adders must have at least 2n bits.
11.4 Multiplication (AxB), with n = 2:
Q = · · · 000000000 (truncating)
Q = · · · 000001000 (rounding)
(a) Truncating
A = 0.5 => β[A] = 0.10
B = 0.5 => β[B] = 0.10
A × B = 0.25 => β[A×B] = 0.01
Internal data at Table 11.1.
Table 11.1: Internal data (exercise 11.4a)
      9 8 7 6 5 4 3 2 1 0
Q     0 0 0 0 0 0 0 0 0 0
P1    0 0 0 0 0 0 0 0 0 0
S1    0 0 0 0 0 0 0 0 0 0
Ct′1  0 0 0 0 0 1 1 0 0 0
S′1   0 0 0 0 0 0 0 0 0 0
P2    0 0 0 0 1 0 0 0 0 0
S2    0 0 0 0 1 0 0 0 0 0
Ct′2  0 0 0 1 1 0 0 0 0 0
S′2   0 0 0 1 0 0 0 0 0 0
P3    0 0 0 0 0 0 0 0 0 0
S3    0 0 0 1 0 0 0 0 0 0
Ct′3  1 1 1 1 1 1 1 1 1 1
S′3   0 0 1 0 0 0 0 0 0 0
Result : β[A×B] = S′3(1).S′3(2)S′3(3) = 0.01 => A×B = 0.25.
(b) Rounding
A = −0.5 => β[A] = 1.10
B = 0.5 => β[B] = 0.10
A × B = −0.25 => β[A×B] = 1.11
Internal data at Table 11.2.
Result: β[A×B] = S′3(1).S′3(2)S′3(3) = 1.11 => A×B = −0.25.
Table 11.2: Internal data (exercise 11.4b)
      9 8 7 6 5 4 3 2 1 0
Q     0 0 0 0 0 0 1 0 0 0
P1    0 0 0 0 0 0 0 0 0 0
S1    0 0 0 0 0 0 1 0 0 0
Ct′1  0 0 0 0 0 1 1 0 0 0
S′1   0 0 0 0 0 1 0 0 0 0
P2    0 0 0 1 1 0 0 0 0 0
S2    0 0 0 1 1 1 0 0 0 0
Ct′2  0 0 0 1 1 0 0 0 0 0
S′2   1 1 1 1 0 0 0 0 0 0
P3    0 0 0 0 0 0 0 0 0 0
S3    1 1 1 1 0 0 0 0 0 0
Ct′3  1 1 1 1 1 1 1 1 1 1
S′3   1 1 1 0 0 0 0 0 0 0
11.5 The implementation is shown in Figure 11.2: a delay line holding x(n), x(n − 1), . . . , x(n − M), the coefficient multipliers h(0), h(1), . . . , h(M), and a full adder with a clocked output register producing y(n).
Figure 11.2: FIR implementation for exercise 11.5
11.6 Let M be the filter order; then:
y = Σ_{i=1}^{M+1} c_i X_i
Observing the distributed-arithmetic structure of Figure 11.3, it is possible to notice that M + 1 shift registers are necessary to store the input. The ROM, which is addressed by the shift registers, must have 2^{M+1} memory positions.
The processing time of the FIR filter and the size of the memory unit become very large if the FIR filter order is large. For example, a FIR filter of order 29 needs a memory unit with approximately 1G (2^{30}) positions.
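The distributed-arithmetic mechanism can be sketched in software: the ROM holds all 2^{M+1} partial sums of the coefficients, one address per bit slice of the input registers, and the accumulator shift-adds over the bit planes. The tap values, word length, and helper names below are ours, for illustration (unsigned fractional inputs, for simplicity):

```python
import numpy as np

c = np.array([0.25, 0.5, -0.125])        # M+1 = 3 taps (hypothetical)
B = 8                                     # input word length in bits
# ROM: partial sum of coefficients selected by the 3 address bits
rom = np.array([sum(c[i] for i in range(3) if (addr >> i) & 1)
                for addr in range(2 ** 3)])

def da_fir_output(x):
    """y = c . x for x[i] in [0, 1) with B fractional bits."""
    xb = np.round(np.asarray(x) * 2 ** B).astype(int)   # B-bit fixed point
    acc = 0.0
    for j in range(B):                                  # LSB plane first
        addr = sum(((xb[i] >> j) & 1) << i for i in range(3))
        acc += rom[addr] * 2.0 ** (j - B)               # shift-accumulate
    return acc
```

For a 30-tap filter the same ROM would need 2^30 entries, which is the memory blow-up noted above.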
The structure comprises N shift registers (SR 1 . . . SR N, outputs x1j . . . xNj) addressing a ROM, whose output Sj feeds an adder/subtractor and the registers A, B, and C of the arithmetic logic unit (ALU), producing y.
Figure 11.3: Distributed arithmetic - exercise 11.6
11.7 The normalized filter coefficients, with the scaling factor λ equal to 0.477611940298507, and their quantized version with 8 bits can be seen in Table 11.3.
Table 11.3: 8-bit quantized version of the filter of exercise 11.7
     Original (Normalized)   Quantized (8 bits)
a1   -0.923330502793296      -0.921875
a2   0.453787411545624       0.453125
b0   0.0374894599627561      0.0390625
b1   -0.0708314338919925     -0.0703125
b2   0.0374894599627561      0.0390625
The memory content is presented in Table 11.4.
• Matlab® program:
clear all;
close all;
Q=2^(-8);

b=[0.07864,-0.14858, 0.07864]/(Q*537)
a=[-1.93683 0.95189]/(Q*537)

bq=quant(b,Q);
aq=quant(a,Q);

c=[b,-a];
cq=[bq,-aq];
Table 11.4: Memory contents of the filter of exercise 11.7
Position  Content (Quantized)   Position  Content (Quantized)
0         0                     16        0.0390625
1         -0.453125             17        -0.4140625
2         0.921875              18        0.9609375
3         0.46875               19        0.5078125
4         0.0390625             20        0.078125
5         -0.4140625            21        -0.375
6         0.9609375             22        1
7         0.5078125             23        0.546875
8         -0.0703125            24        -0.03125
9         -0.5234375            25        -0.484375
10        0.8515625             26        0.890625
11        0.3984375             27        0.4375
12        -0.03125              28        0.0078125
13        -0.484375             29        -0.4453125
14        0.890625              30        0.9296875
15        0.4375                31        0.4765625
bit=5;

d=0:2^bit-1;
tmp=dec2bin(d);
for m=1:length(d)
  for k=1:5
    x(m,k)=str2num(tmp(m,k));
  end
end

y=x*c.';

yq=x*cq.';
yq=quant(yq,Q);
11.8 (a) Memory contents of Section 1 shown in Table 11.5.
Table 11.5: Memory contents for exercise 11.8a.
Position  Memory 1     Memory 2    Memory 3
0         0            0           0
1         0            0           0.00390625
2         -0.1640625   0.2265625   -0.0078125
3         -0.1640625   0.2265625   -0.00390625
4         0.2265625    0.1640625   -0.00390625
5         0.2265625    0.1640625   0
6         0.0625       0.390625    -0.01171875
7         0.0625       0.390625    -0.0078125
(b) Memory contents of Section 2 shown in Table 11.6.
(c) Memory contents of Section 3 shown in Table 11.7.
• Matlab® program:
Table 11.6: Memory contents for exercise 11.8b.
Position  Memory 1      Memory 2     Memory 3
0         0             0            0
1         0             0            0.03125
2         -0.16796875   0.2265625    -0.00390625
3         -0.16796875   0.2265625    0.02734375
4         0.2265625     0.16796875   0.01171875
5         0.2265625     0.16796875   0.04296875
6         0.05859375    0.39453125   0.0078125
7         0.05859375    0.39453125   0.0390625
Table 11.7: Memory contents for exercise 11.8c.
Position  Memory 1     Memory 2     Memory 3
0         0            0            0
1         0            0            0.1015625
2         -0.1640625   0.23046875   -0.0078125
3         -0.1640625   0.23046875   0.09375
4         0.23046875   0.1640625    -0.0234375
5         0.23046875   0.1640625    0.078125
6         0.06640625   0.39453125   -0.03125
7         0.06640625   0.39453125   0.0703125
clear all;
close all;
Q=2^(-8);
bit=3;

bd=0:2^bit-1;
tmp=dec2bin(bd);
for m=1:length(bd)
  for k=1:bit
    x(m,k)=str2num(tmp(m,k));
  end
end
lambda=0.2832031250;
%lambda=1;

% Section 1
a=[0.8027343750, -0.58203125;
   0.58203125 , 0.8027343750];
b=[0;0];
c=[-0.0097656250,-0.02734375];
d=0.009765625;

aq=quant(a*lambda,Q);
bq=quant(b*lambda,Q);
cq=quant(c*lambda,Q);
dq=quant(d*lambda,Q);

M=[aq,bq;cq,dq];
S1=quant(M*x.',Q);
% Section 2
a=[0.7988281250, -0.5996093750;
   0.5996093750, 0.7988281250];
b=[0;0];
c=[0.046875,-0.0136718750];
d=0.1050390625;

aq=quant(a*lambda,Q);
bq=quant(b*lambda,Q);
cq=quant(c*lambda,Q);
dq=quant(d*lambda,Q);

M=[aq,bq;cq,dq];
S2=quant(M*x.',Q);
% Section 3
a=[0.8125000000, -0.5859375000;
   0.5859375000, 0.8125000000];
b=[0;0];
c=[-0.0859375,-0.0332031250];
d=0.3593750000;

aq=quant(a*lambda,Q);
bq=quant(b*lambda,Q);
cq=quant(c*lambda,Q);
dq=quant(d*lambda,Q);

M=[aq,bq;cq,dq];
S3=quant(M*x.',Q);
11.9 Ex. 11.9
11.10 Ex. 11.10
11.11 Ex. 11.11
11.12 Conditions:
∫_{−∞}^{+∞} p_q(e) de = 1
p_q(e) ≥ 0
where p_q(e) is the probability density function of the quantization error. Moreover, p_q(e) is constant over the error range, since the dynamic range throughout the digital filter is much larger than the quantization step q = 2^{−b}, the numbers being represented as x = b_0 b_1 b_2 b_3 . . . b_b.
• Sign-magnitude: x = b_0 b_1 b_2 b_3 . . . b_b (b_0 is the sign bit)
– rounding: the error e always lies in the range [−q/2, q/2].
Ex.: Q_n(x) means x quantized with n bits (including the sign bit):
Q3(0101000)=011 (e = q/2)
Q3(0100111)=010 (e ≈ −q/2)
Q3(1101000)=111 (e = −q/2)
Q3(1100111)=110 (e ≈ q/2)
Figure 11.4: Rounding probability density function for exercise 11.12.
– truncation: the truncated number is always less than the original. The error is always in the range [−q, 0].
Ex.:
Q3(00111111111111)=001 (e ≈ −q)
Q3(01000000000000)=010 (e = 0)
Q3(11000000000000)=110 (e = 0)
Q3(10000000000001)=101 (e ≈ −q)
Figure 11.5: Truncation probability density function 11.12.
– magnitude truncation: reduces the magnitude of the number. Equal to truncation for positive numbers and the opposite for negative numbers.
Ex.:
Q3(100111111111111111)=100 (e ≈ q)
Figure 11.6: Magnitude truncation probability density function for exercise 11.12.
• One's complement: x = b_0 b_1 b_2 b_3 . . . b_b (b_0 is the sign bit)
x is represented in sign-magnitude form if x ≥ 0, and as the bit-complement of the positive sign-magnitude representation if x < 0.
– rounding: the same as sign-magnitude rounding.
– truncation: the truncated number is always less than the original (independent of representation). The error is always in the range [−q, 0].
– magnitude truncation: reduces the magnitude of the number (independent of representation). The error is in the range [−q, q].
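The three error ranges above can be checked empirically (a Python sketch on the real values, with q = 2^{−b}; bit-level formats aside):

```python
import numpy as np

b = 4
q = 2.0 ** -b
x = np.linspace(-0.99, 0.99, 1981)

e_round = np.round(x / q) * q - x                       # e in [-q/2, q/2]
e_trunc = np.floor(x / q) * q - x                       # e in (-q, 0]
e_magtr = np.sign(x) * np.floor(np.abs(x) / q) * q - x  # e in (-q, q)
```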
11.13 Exercise 11.13
11.14 Exercise 11.14
11.15 Exercise 11.15
11.16 The scaling factor is given by:
λ = 1 / max{ ‖F_{m1}(e^{jω})‖_p , ‖F_{m2}(e^{jω})‖_p , ‖F_{α1}(e^{jω})‖_p }
where p determines the norm, and
|F_{m1}(e^{jω})| = |F_{m2}(e^{jω})| = |F_{α1}(e^{jω})| = 1/|D(e^{jω})|
where D(e^{jω}) is the denominator of the transfer function H(e^{jω}). One's or two's complement arithmetic was assumed, and we are supposing that the output is already scaled (only the transfers to the multiplier inputs are critical). Thus,
λ = 1 / ‖1/D(e^{jω})‖_p
The transfer function is
H(z) = (1 − 1.8894 z^{−1} + z^{−2}) / (1 − 1.933683 z^{−1} + 0.95189 z^{−2}).
So, using Matlab® to evaluate the L2 and L∞ norms of 1/D(e^{jω}), we get:
‖1/D‖_2 ≈ 23.95 ∴ λ = 1/23.95 = 0.042
‖1/D‖_∞ ≈ 155.05 ∴ λ = 1/155.05 = 0.0064
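The two norm values can be reproduced without MatLab®; the sketch below evaluates 1/|D(e^{jω})| on a dense grid, and the L2 result is cross-checked against the well-known closed-form power of a second-order all-pole section (formula assumed from standard tables):

```python
import numpy as np

m1, m2 = -1.933683, 0.95189
w = np.linspace(0, np.pi, 2 ** 15, endpoint=False)
D = 1 + m1 * np.exp(-1j * w) + m2 * np.exp(-2j * w)
Dinv = 1.0 / np.abs(D)

L2 = np.sqrt(np.mean(Dinv ** 2))    # grid estimate of ||1/D||_2
Linf = Dinv.max()                   # grid estimate of ||1/D||_inf
# closed-form power of 1/(1 + m1 z^-1 + m2 z^-2):
L2_closed = np.sqrt((1 + m2) / ((1 - m2) * ((1 + m2) ** 2 - m1 ** 2)))
```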
Figure 11.7 shows the magnitude response of 1/D(e^{jω}).
• MatLab® Program:
clear all;
close all;

%establishing the transfer function:
npoints=2^13; %number of discrete frequency points in the grid
Fs = 1;
lambda1=-1.88937;
m1=-1.933683;
m2=0.95189;
num=[1 lambda1 1];
den=[1 m1 m2];
disp('Transfer function of the system ( H(z) )');
% H(z)=N(z)/D(z) is the transfer function of the system
Hz=tf(num,den,'var','z^-1')
[H,W]=freqz(num,den,npoints,Fs);
figure;
plot(W,abs(H));
Figure 11.7: Magnitude response of D(z)−1 of exercise 11.16.
title('Frequency response of the system');
xlabel('frequency (rad/sample)');
ylabel('amplitude');
grid on;

%the magnitude transfer to all multipliers in the type I canonic
%direct form realization for IIR filters depends only on D(z)
% the 1/D(z) transfer:
N = freqz(num,1,npoints,Fs);
[Dinv,W] = freqz(1,den,npoints,Fs);

figure;
plot(W,abs(Dinv),'k');
grid on;
ylabel('magnitude');
xlabel('normalized frequency');
figure;
plot(W,abs(N),'k-.');
ylabel('amplitude');
grid on;
legend('N(z)');
xlabel('frequency (rad/s)');

norm2 = ( sum( abs(Dinv(:)).^2 )/npoints ).^0.5
lambda_2 = 1/norm2;
disp(['lambda_2 ~= ',num2str(lambda_2)]);
normINF = ( max(abs(Dinv)) )
lambda_inf = 1/normINF;
disp(['lambda_inf ~= ',num2str(lambda_inf)]);
format loose;
11.17
H(z) = (b_0 + b_1 z^{−1} + b_2 z^{−2} + · · · + b_M z^{−M}) / (1 + a_1 z^{−1} + a_2 z^{−2} + · · · + a_N z^{−N})
H(z) = Y(z)/X(z) = N(z)/D(z)
F_{a_i}(e^{jω}) = e^{−jωi} / D(e^{jω}), ∀i / 1 ≤ i ≤ N
F_{b_k}(e^{jω}) = e^{−jωk} / D(e^{jω}), ∀k / 1 ≤ k ≤ M
λ = 1 / max{ ‖F_{a_1}(e^{jω})‖_p , . . . , ‖F_{a_N}(e^{jω})‖_p , ‖F_{b_1}(e^{jω})‖_p , . . . , ‖F_{b_M}(e^{jω})‖_p }
Since |e^{−jωi}| = 1, all these norms equal ‖1/D(e^{jω})‖_p, so
λ = 1 / ‖1/D(e^{jω})‖_p ∴ λ = 1 / ‖1/D(z)‖_p
• MatLab® Program:
clear all;close all;%clc;
%establishing the transfer function:npoints=2^13; %number of discrete frequency points in the gridnum=[.02820776 -.00149475 .03174758 .03174758 -.00149475 .02820776];den=[1 -3.02848473 4.56777220 -3.900015349 1.89664138 -.41885419];zinv=tf([0 1],1,’var’,’z^-1’);disp(’ ’);disp(’Transfer function of the system ( H(z) )’);% H(z)=N(z)/D(z) is the transfer function of the systemHz=tf(num,den,’var’,’z^-1’)[H,W]=freqz(num,den,npoints);figure;plot(W/pi,abs(H));title(’Frequency response of the system’);xlabel(’frequency (pi rad/sample)’);ylabel(’amplitude’);hold;grid on;zoom on;disp(’ ’);disp(’Quantized numerator’);numq=sclquantr(num,11)denq=sclquantr(den,11)Hzq=tf(numq,denq,’var’,’z^-1’)%format long;[Hq,W]=freqz(numq,denq,npoints);%figure;plot(W/pi,abs(Hq),’r’);grid on;zoom on;legend(’original’,’quantized’);%format loose;
[A,W]=freqz(den,1,npoints); %denominator transfer functionS_b=1./abs(A);
figure;
plot(W/pi,abs(S_b));
S_a=abs(H)./abs(A);
hold;
%figure;
plot(W/pi,abs(S_a),'r');
grid on;
legend('b multipliers sensitivity','a multipliers sensitivity');

N=5; %number of multipliers in numerator and denominator
S_e_jw=( (N+1)+N*abs(H) )./ abs(A);
figure;
plot(W,S_e_jw);
grid on;
zoom on;
title('Sensibility function');
xlabel('frequency (rad/sample)');
ylabel('amplitude');
b=11; %including sign bit
delta_m=2^-b;
delta_H=delta_m*S_e_jw;
figure;
plot(W,(abs(H)),'b',W,(abs(H)+delta_H),'r-',W,(abs(abs(H)-delta_H)),'k-.');
grid on;
zoom on;
title('Transfer function and deltas');
xlabel('frequency (rad/sample)');
ylabel('amplitude');
legend('Original','delta added','delta subtracted');
• Function sclquantr:
function [ydec,xscaled,yscaled,c] = sclquantr(x,n)
% Given a vector of real (positive and negative) numbers,%the functions scales then to the range [-1,1]%and performs quantization using n bits on fractional part.%The x vector is assumed to be a true representation of the%numbers (no quantization effects).%The ydec vector is returned as a representation in fixed%point arithmetic with n bits on fractional part.(rounding)%c is the scaling factor.
c=[abs(min(x)) abs(max(x))];
c=max(c);
nbits=n;
temp=abs(x)-floor(abs(x)); %use only the fractional part for quantization
xscaled = temp * ( 2^nbits ); %fractional part scaled to [0,2^nbits)
yscaled = round(xscaled);    %quantized integer values
yscaled = yscaled / ( 2^nbits );
%dynamic range = [-1,+1]
ydec=sign(x).*( yscaled+floor(abs(x)) );
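The rounding step of sclquantr can be followed more easily in a short Python sketch (the function name quantize_frac is hypothetical, not part of the manual): round only the fractional part to n bits, then restore the sign and integer part.

```python
import math

def quantize_frac(x, n):
    """Round the fractional part of x to n fractional bits,
    keeping the sign and integer part (mirrors the sclquantr logic)."""
    frac = abs(x) - math.floor(abs(x))       # fractional part only
    q = round(frac * 2**n) / 2**n            # n-bit rounding
    return math.copysign(q + math.floor(abs(x)), x)

print(quantize_frac(0.3, 3))    # 0.25  (0.3*8 = 2.4 rounds to 2/8)
print(quantize_frac(-1.6, 2))   # -1.5  (0.6*4 = 2.4 rounds to 2/4)
```

The same logic explains the quantized coefficient tables later in this chapter: a coefficient whose scaled fractional part rounds to zero disappears entirely.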
11.18

F_{a_i}(e^{jω}) = H(e^{jω}), ∀i, 1 ≤ i ≤ N   (11.1)

F_{b_k}(e^{jω}) = 1, ∀k, 1 ≤ k ≤ M   (11.2)

λ = 1 / max{ ‖F_{a_1}(e^{jω})‖_p, ..., ‖F_{a_N}(e^{jω})‖_p, ‖F_{b_1}(e^{jω})‖_p, ..., ‖F_{b_M}(e^{jω})‖_p }   (11.3)

By simply substituting equations (11.1) and (11.2) in equation (11.3), we get:

λ = 1 / max{ ‖1‖_p, ..., ‖1‖_p, ‖H(e^{jω})‖_p, ..., ‖H(e^{jω})‖_p } = 1 / max{ 1, ‖H(e^{jω})‖_p }
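As a quick numerical illustration (a Python sketch, not from the manual; the first-order filter below is an arbitrary example, not the exercise's system), the scaling factor for the p = ∞ norm can be evaluated on a frequency grid:

```python
import math

# Arbitrary example filter (assumption for illustration): H(z) = 1/(1 - 0.9 z^-1)
def H_mag(w):
    d = complex(1.0, 0.0) - 0.9 * complex(math.cos(-w), math.sin(-w))
    return 1.0 / abs(d)

grid = [math.pi * n / 1024 for n in range(1025)]
H_inf = max(H_mag(w) for w in grid)     # L-infinity norm of H on the grid
lam = 1.0 / max(1.0, H_inf)
print(H_inf, lam)   # peak is at w = 0: ||H||_inf = 10, so lambda = 0.1
```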
11.19
• Fixed-point arithmetic: G_{m1}(z) = G_{m2}(z) = H(z), G_{α1}(z) = 1.

The relative output-noise variance is given by:

σ_y²/σ_e² = ∑_{i=1}^{k} ‖G_i(e^{jω})‖₂²

where, for fixed-point arithmetic:

σ_e² = 2^{-2b}/12
σ_y²/σ_e² = 2‖H(e^{jω})‖₂² + ‖1‖₂² = 2 · (1/2π) ∫₀^{2π} |H(e^{jω})|² dω + 1

σ_y²/σ_e² (dB) = 10 log [ 2 · (1/2π) ∫₀^{2π} |H(e^{jω})|² dω + 1 ]   (11.4)
The integral in equation (11.4) can be solved using the residue theorem:
(1/2π) ∫₀^{2π} |H(e^{jω})|² dω = (1/2πj) ∮ H(z)H(z⁻¹)z⁻¹ dz = ∑_{i=1}^{P} Res{P(z), z = p_i}

where P(z) = H(z)H(z⁻¹)z⁻¹, Res{P(z), z = p_i} is the residue of P(z) due to the pole p_i, P is the total number of singularities in the closed region, and H(z) = N(z)/D(z) is the transfer function of the system. The polynomial P(z) is rearranged as follows:

P(z) = (z² + α₁z + 1)² / [ (p₁p₂)(z − p₁)(z − p₂)(z − 1/p₁)(z − 1/p₂) z ]

where

p₁ = (−m₁ + j√(4m₂ − m₁²))/2 = √m₂ e^{jθ}
p₂ = (−m₁ − j√(4m₂ − m₁²))/2 = √m₂ e^{−jθ}
θ = arctan( −√(4m₂ − m₁²)/m₁ )

The residues of P(z) are given by:

(a)
Res{P(z), z = p₁} = (p₁² + α₁p₁ + 1)² / [ p₁(p₁ − p₂)(p₁² − 1)(p₁p₂ − 1) ]

(b)

Res{P(z), z = p₂} = (Res{P(z), z = p₁})*

where the * notation denotes the complex conjugate of the term.

(c)

Res{P(z), z = 0} = 1/(p₁p₂) = 1/m₂

since p₁p₂ = m₂. Given the residues, the average energy of the transfer function H(z) is given by:
‖H(z)‖₂² = Res{P(z), z = p₁} + Res{P(z), z = p₂} + Res{P(z), z = 0}

= (p₁² + α₁p₁ + 1)² / [ p₁(p₁ − p₂)(p₁² − 1)(p₁p₂ − 1) ] − (p₂² + α₁p₂ + 1)² / [ p₂(p₁ − p₂)(p₂² − 1)(p₁p₂ − 1) ] + 1/(p₁p₂)

Expanding the numerators and using the identities

p₁² − p₂² = −m₁(p₁ − p₂)
p₁³ − p₂³ = (m₁² − m₂)(p₁ − p₂)
(p₁² − 1)(p₂² − 1) = (m₂ + 1)² − m₁²
p₁p₂ = m₂

the expression reduces to

‖H(z)‖₂² = [ 1 + m₂³ − ((2 + α₁²)(m₂ + 1) − 4α₁m₁)m₂ − (m₁² − m₂)(m₂ + 1) ] / { m₂ [(m₂ + 1)² − m₁²] (m₂ − 1) } + 1/m₂
Going back to equation (11.4), and substituting the values of m₁, m₂ and α₁ (which give ‖H‖₂² ≈ 5.876), we get:

σ_y²/σ_e² (dB) = 10 log(2 · 5.876 + 1) ≈ 11.06 dB
• Floating-point arithmetic: G_{m1} = G_{m2} = H(z), G_{γ1} = 1, G_{a1} = H(z), G_{a2} = 1,

where G_{a1} and G_{a2} are the noise transfer functions of the adders to the output. The output-noise variance is given by:

σ_y² = σ_{er}² ∑_{i=1}^{k} ‖G_i(e^{jω})‖₂²

where, for floating-point arithmetic, the noise variance due to the multipliers is given by

σ_{er}² = σ_{np}² = 0.180 · 2^{-2b}

and the noise variance due to the adders is given by

σ_{er}² = σ_{na}² = 0.165 · 2^{-2b}

as shown in the textbook. Thus, the output-noise variance is

σ_y² = σ_{np}² (2‖H(e^{jω})‖₂² + ‖1‖₂²) + σ_{na}² (‖H(e^{jω})‖₂² + ‖1‖₂²).

Defining the relative output-noise variance as in the fixed-point case, and taking σ_{np}² as reference (0.165/0.180 ≈ 0.917), we get:

σ_y²/σ_e² = (2‖H(e^{jω})‖₂² + 1) + 0.917(‖H(e^{jω})‖₂² + 1)

In order to complete the exercise, we evaluate this in decibels:

σ_y²/σ_e² (dB) = 10 log((2 · 5.876 + 1) + 0.917(5.876 + 1)) ≈ 12.8 dB
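Both decibel figures can be reproduced with a few lines (a Python sketch of the final arithmetic, using the ‖H‖₂² ≈ 5.876 value obtained above):

```python
import math

H2 = 5.876  # ||H||_2^2 computed via the residue theorem above

# Fixed-point: two multiplier noise sources shaped by H(z), plus one direct
fixed_db = 10 * math.log10(2 * H2 + 1)

# Floating-point: adder noise sources weighted by 0.165/0.180 ~= 0.917
float_db = 10 * math.log10((2 * H2 + 1) + 0.917 * (H2 + 1))

print(round(fixed_db, 2), round(float_db, 2))   # 11.06 12.8
```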
• MatLab® Program:
clear all;close all;format long
%establishing the transfer function:
npoints=2^15; %number of discrete frequency points in the grid
Fs = 1;
alpha1 = -1.88937; m1 = -1.933683; m2 = 0.95189;
num=[1 alpha1 1]; den=[1 m1 m2];
x = zeros(1,npoints); x(1) = 1; %impulse input
h = filter(num,den,x);
[H,f]=freq_response(h,npoints,Fs);

%%% ( ||H||_2 )^2 estimated on the frequency grid
L2q = sum( abs(H).^2 )/npoints

clear j;
delta = m1^2-4*m2;
theta = atan( ((-delta)^0.5)/(-m1) );
A = 2 + alpha1^2 + 2*alpha1*(m2+1)/(m2^0.5)*cos(theta) + ...
    (m2^2+1)/m2*cos(2*theta);
B = 2*alpha1*(m2-1)/(m2^0.5)*sin(theta) + (m2^2-1)/m2*sin(2*theta);
C = (m2+1)*(m2^0.5)*cos(theta) + m1;
D = -(m2+1)*(m2^0.5)*sin(theta);
residue = 2 * (A*D + B*C) * 1/(m2-1) ...
    /( (-delta)^0.5*( m2^2-2*m2*cos(2*theta)+1 ) ) + 1/m2

format loose; return;

%alternative computation via the complex poles
r1 = ( -m1 +j*(-delta)^0.5 )/2; r2 = conj(r1);
NN1 = 2 + 2*alpha1*(r1+1/r1) + r1^2 + 1/r1^2 + alpha1^2;
NN2 = 2 + 2*alpha1*(r2+1/r2) + r2^2 + 1/r2^2 + alpha1^2;
res1 = r1/(r1-r2) * 1/((r1^2-1)*(m2-1)) * NN1;
res2 = r2/(r2-r1) * 1/((r2^2-1)*(m2-1)) * NN2;
RES = 2 * real(res1)
• Function freq_response:
function [H,f] = freq_response(h,pontos,Fs)

f=0;
if nargin==3
    df=(Fs/2)/pontos;
    f=0:df:(Fs/2)-df;
end

H = fft(h,2*pontos);
H = H(1:pontos);
11.20
RPSD = 10 log( P_y(e^{jω}) / P_e(e^{jω}) )

• Fixed-point arithmetic:
P_e(e^{jω}) = σ_e² = (1/12) · 2^{-2b}

P_y(e^{jω}) = σ_e² ∑_{i=1}^{k} G_i(z)G_i(z⁻¹)

The multipliers give us:

G_{m1} = Y(z)/E_{m1}(z) = z⁻¹/D(z)
G_{m2} = Y(z)/E_{m2}(z) = z⁻²/D(z)

where

D(z) = 1 + m₁z⁻¹ + m₂z⁻²

Thus, the result is:

RPSD = 10 log( G_{m1}(z)G_{m1}(z⁻¹) + G_{m2}(z)G_{m2}(z⁻¹) )

The evaluation of this expression is similar to the one made in exercise 11.19:

RPSD = 10 log( 2 [ (1 + m₂) / ((1 − m₂)(m₂² − m₁² + 2m₂ + 1)) ] )

• Floating-point arithmetic:
G_{m1} = Y(z)/E_{m1}(z) = z⁻¹/D(z)
G_{m2} = Y(z)/E_{m2}(z) = z⁻²/D(z)
D(z) = 1 + m₁z⁻¹ + m₂z⁻²

Using the same definition of the PSD of the noise source and evaluating the noise transfer functions from the adders to the output,

G_{a1} = 1/D(z); G_{a2} = G_{m1}; G_{a3} = G_{m2}

RPSD = 10 log[ (G_{m1}(z)G_{m1}(z⁻¹) + G_{m2}(z)G_{m2}(z⁻¹)) + 0.917(G_{a1}(z)G_{a1}(z⁻¹) + G_{a2}(z)G_{a2}(z⁻¹) + G_{a3}(z)G_{a3}(z⁻¹)) ]

Resulting in:

RPSD = 10 log( 4.751 [ (1 + m₂) / ((1 − m₂)(m₂² − m₁² + 2m₂ + 1)) ] )
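Both RPSD expressions share the same closed-form factor, so a quick Python check is possible (a sketch; the coefficient values m1 and m2 are taken from the MatLab program of this exercise):

```python
import math

# Coefficient values from the MatLab program of this exercise
m1, m2 = -1.933683, 0.95189

# Common (1+m2)/((1-m2)(m2^2 - m1^2 + 2 m2 + 1)) factor of both expressions
g = (1 + m2) / ((1 - m2) * (m2**2 - m1**2 + 2*m2 + 1))

rpsd_fixed = 10 * math.log10(2 * g)       # two multiplier noise sources
rpsd_float = 10 * math.log10(4.751 * g)   # multipliers + 0.917 * three adders
print(rpsd_fixed, rpsd_float)
```

The floating-point figure always exceeds the fixed-point one by 10 log(4.751/2) ≈ 3.76 dB, independently of m1 and m2.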
• MatLab® Program:
clear all;close all;format long
%establishing the transfer function:
npoints=2^15; %number of discrete frequency points in the grid
Fs = 1;
alpha1 = -1.88937; m1 = -1.933683; m2 = 0.95189;
num=[1 alpha1 1]; den=[1 m1 m2];
x = zeros(1,npoints); x(1) = 1; %impulse input

[Dinv,f]=freqz(1,den,npoints,Fs);
figure;plot(f,abs(Dinv).^2,'k');
xlabel('normalized frequency');ylabel('magnitude');grid on;

%%% ( ||1/D||_2 )^2 estimated on the frequency grid
L2q = sum( abs(Dinv).^2 )/npoints

clear j;
delta = m1^2-4*m2;
theta = atan( ((-delta)^0.5)/(-m1) );
A = 2 + alpha1^2 + 2*alpha1*(m2+1)/(m2^0.5)*cos(theta) + ...
    (m2^2+1)/m2*cos(2*theta);
B = 2*alpha1*(m2-1)/(m2^0.5)*sin(theta) + (m2^2-1)/m2*sin(2*theta);
C = (m2+1)*(m2^0.5)*cos(theta) + m1;
D = -(m2+1)*(m2^0.5)*sin(theta);
residue = 2 * D * 1/(m2-1) ...
    /( (-delta)^0.5*( m2^2-2*m2*cos(2*theta)+1 ) )

Res = (1+m2) /( (1-m2)*(m2^2-m1^2+2*m2+1) )

format loose; return;

%alternative computation via the complex poles
r1 = ( -m1 +j*(-delta)^0.5 )/2; r2 = conj(r1);
NN1 = 2 + 2*alpha1*(r1+1/r1) + r1^2 + 1/r1^2 + alpha1^2;
NN2 = 2 + 2*alpha1*(r2+1/r2) + r2^2 + 1/r2^2 + alpha1^2;
res1 = r1/(r1-r2) * 1/((r1^2-1)*(m2-1)) * NN1;
res2 = r2/(r2-r1) * 1/((r2^2-1)*(m2-1)) * NN2;
RES = 2 * real(res1)
11.21

H(e^{jω}) = |H(e^{jω})| e^{jθ(ω)}

For simplicity, we define H = H(e^{jω}) and θ = θ(ω).

• Sensitivity criterion I:

{}^{I}S_{m_i}^{H(z)}(z) = ∂H(z)/∂m_i = ∂H/∂m_i = (∂|H|/∂m_i) e^{jθ} + |H| ∂(e^{jθ})/∂m_i

{}^{I}S_{m_i}^{H}(e^{jω}) = [{}^{I}S_{m_i}^{|H|}(e^{jω})] e^{jθ} + j|H| (∂θ/∂m_i) e^{jθ}

|{}^{I}S_{m_i}^{H}(e^{jω})| = √( [{}^{I}S_{m_i}^{|H|}(e^{jω})]² + |H|² (∂θ/∂m_i)² )

• Sensitivity criterion II:

{}^{II}S_{m_i}^{H(z)}(z) = (1/H(z)) ∂H(z)/∂m_i = (1/H) ∂H/∂m_i
= (1/(|H|e^{jθ})) ( (∂|H|/∂m_i) e^{jθ} + j|H| (∂θ/∂m_i) e^{jθ} )
= (1/|H|) ∂|H|/∂m_i + j ∂θ/∂m_i

|{}^{II}S_{m_i}^{H}(e^{jω})| = √( ((1/|H|) ∂|H|/∂m_i)² + (∂θ/∂m_i)² ) = √( ({}^{II}S_{m_i}^{|H|}(e^{jω}))² + (∂θ/∂m_i)² )

• Sensitivity criterion III:

{}^{III}S_{m_i}^{H(z)}(z) = (m_i/H(z)) ∂H(z)/∂m_i = (m_i/H) ∂H/∂m_i
= (m_i/(|H|e^{jθ})) ( (∂|H|/∂m_i) e^{jθ} + j|H| (∂θ/∂m_i) e^{jθ} )

{}^{III}S_{m_i}^{H}(e^{jω}) = (m_i/|H|) ∂|H|/∂m_i + j m_i ∂θ/∂m_i

|{}^{III}S_{m_i}^{H}(e^{jω})| = √( ((m_i/|H|) ∂|H|/∂m_i)² + (m_i ∂θ/∂m_i)² ) = √( ({}^{III}S_{m_i}^{|H|}(e^{jω}))² + m_i² (∂θ/∂m_i)² )
Figure 11.8: Frequency responses of the original minimax filter and its 7-bit quantized version for exercise 11.22. (a) Magnitude response (dB); (b) phase response (degrees).
Table 11.8: Original filter coefficients of exercise 4.4a for exercise 11.22.

original
0.00340000000000
0.01060000000000
0.00250000000000
0.01490000000000
11.22 • Filter designed in exercise 4.4a:
Figure 11.8 shows both the magnitude (dB) and phase (degrees) responses of the original and quantized filters. Table 11.8 shows the original coefficients, and Table 11.9 shows the 7-bit quantized coefficients.

• Filter designed in exercise 4.4b:
Figure 11.9 shows both the magnitude (dB) and phase (degrees) responses of the original and 7-bit quantized filters. The original coefficients of the filter in exercise 4.4b are presented in Table 11.10, and Table 11.11 shows the 7-bit quantized coefficients.
• MatLab® Program:
clear all
close all
clc
Table 11.9: Quantized filter coefficients (7 bits) of exercise 4.4a for exercise 11.22.

quantized
0
0.00781250000000
0
0.01562500000000
Figure 11.9: Frequency responses of the original minimax and 7-bit quantized filters of exercise 11.22, originally from exercise 4.4b. (a) Magnitude response (dB); (b) phase response (degrees).
Table 11.10: Original filter coefficients of exercise 4.4b for exercise 11.22.

                 first section                    second section
            numerator     denominator        numerator     denominator
         1.00000000000  1.00000000000     1.00000000000  1.00000000000
        -1.34900000000 -1.91900000000    -1.88900000000 -1.93700000000
         1.00000000000  0.92300000000     1.00000000000  0.95200000000
format long;
points = 2^10;
Fs = 1;
%exercise a)
disp(' ');disp(' ');
disp('Exercise 11.22 item a)');
disp(' ');disp(' ');
disp('Original coefficients');
num1=[0.0034 0.0106 0.0025 0.0149];
num1(:)
[num1q]=sclquantr(num1,7);
disp(' ');
disp('Quantized coefficients');
num1q(:)
[H1,W] = freqz(num1,1,points,Fs);
H1q = freqz(num1q,1,points,Fs);
Table 11.11: Quantized filter coefficients (7 bits) of exercise 4.4b for exercise 11.22.

                 first section                    second section
            numerator     denominator        numerator     denominator
         1.00000000000  1.00000000000     1.00000000000  1.00000000000
        -1.34375000000 -1.92187500000    -1.89062500000 -1.93750000000
         1.00000000000  0.92187500000     1.00000000000  0.95312500000
figure;
plot(W,20*log10(abs(H1)),'k',W,20*log10(abs(H1q)),'k-.');
xlabel('normalized frequency');ylabel('magnitude response ( dB )');
legend('original','quantized (7 bits)',3)
grid on;
figure;
phase1 = angle(H1)/pi*180;
phase2 = angle(H1q)/pi*180;
plot(W(:),phase1,'k',W,phase2,'k-.');
xlabel('normalized frequency');ylabel('phase (degree)');
grid on;
legend('original','quantized (7 bits)',3)
%exercise b)
disp(' ');disp(' ');
disp('Exercise 11.22 item b)');
disp(' ');disp(' ');
disp('Original coefficients');
num2a=[1 -1.349 1];den2a=[1 -1.919 0.923];
num2b=[1 -1.889 1];den2b=[1 -1.937 0.952];
num2aq=sclquantr(num2a,7);den2aq=sclquantr(den2a,7);
num2bq=sclquantr(num2b,7);den2bq=sclquantr(den2b,7);
[num2a(:) den2a(:) num2b(:) den2b(:)]
disp(' ');disp(' ');
disp('Quantized coefficients');
[num2aq(:) den2aq(:) num2bq(:) den2bq(:)]
Hz2a=tf(num2a,den2a,'var','z^-1');
Hz2b=tf(num2b,den2b,'var','z^-1');
Hz2=Hz2a*Hz2b;
Hz2aq=tf(num2aq,den2aq,'var','z^-1');
Hz2bq=tf(num2bq,den2bq,'var','z^-1');
Hz2q=Hz2aq*Hz2bq;
[num2 den2 ts]=tfdata(Hz2,'v');
H2=freqz(num2,den2,W);
[num2q den2q tsq]=tfdata(Hz2q,'v');
H2q=freqz(num2q,den2q,W);
figure;
plot(W,20*log10(abs(H2)),'k',W,20*log10(abs(H2q)),'k--');
xlabel('frequency (rad/sample)');ylabel('magnitude response ( dB )');
legend('original','quantized (7 bits)',3)
grid on;
figure;
plot(W,angle(H2)/pi*180,'k',W,angle(H2q)/pi*180,'k--');
xlabel('normalized frequency');ylabel('phase (degree)');
grid on;
legend('original','quantized (7 bits)',3)
format loose;
11.23 Transfer function of the original system H(z):

H(z) = (1 − 1.889z⁻¹ + z⁻²) / (1 − 1.934z⁻¹ + 0.9519z⁻²)
Table 11.12 shows the quantized coefficients with 6 bits (excluding the sign bit).

Figure 11.10 shows the frequency responses of the original filter and its quantized version. Figure 11.11 depicts the real and practical sensitivity figures of merit that were used to determine the deterministic and statistical deviations of the transfer function due to quantization. Figure 11.12 shows the magnitude deviations of the transfer function, relative to the deterministic and statistical forecasts obtained with the real and practical sensitivities.
Figure 11.10: Frequency response of the original minimax and quantized filters for exercise 11.23. (a) Magnitude response; (b) passband detail.
• MatLab® program:
clear all;
close all;

disp('--------');disp('--------');disp('--------');
bits=6 %define number of fractional bits
Fs = 1; %sampling frequency
%establishing the transfer function:
npoints=2^13; %number of discrete frequency points in the grid
lambda1=-1.88937;
m1=-1.933683;
m2=0.95189;
Table 11.12: Quantized filter coefficients (6 bits) for exercise 11.23.

        quantized coefficients
      numerator        denominator
  1.00000000000000   1.00000000000000
 -1.89062500000000  -1.93750000000000
  1.00000000000000   0.95312500000000
Figure 11.11: Real and practical sensitivity figures of merit for exercise 11.23. (a) Full band; (b) passband detail.
Figure 11.12: Magnitude deviations for the deterministic and stochastic analyses for exercise 11.23. (a) Full band; (b) passband detail.
num=[1 lambda1 1];
den=[1 m1 m2];
zinv=tf([0 1],1,'var','z^-1');
disp(' ');disp('Transfer function of original system ( H(z) )');
% H(z)=N(z)/D(z) is the transfer function of the system
Hz=tf(num,den,'var','z^-1')
[H,W]=freqz(num,den,npoints,Fs);

disp(' ');disp('Quantized numerator');
numq=sclquantr(num,bits)
disp('Quantized denominator');
denq=sclquantr(den,bits)
disp(' ');
disp(sprintf('Transfer function of quantized system ( Hq(z) ) with %d bits',bits));
Hzq=tf(numq,denq,'var','z^-1')
Hq = freqz(numq,denq,W,Fs);
figure;
plot(W,abs(H),'k',W,abs(Hq),'k--');grid on;
%title('Frequency response of systems');
xlabel('normalized frequency');ylabel('magnitude response');
legend('original filter','6 bits quantized');

%%% Sensitivity analysis
N_n = 1; %number of multipliers in the numerator
N_d = 2; %number of multipliers in the denominator
D = freqz(den,1,W,Fs); %denominator transfer function
N = freqz(num,1,W,Fs);

% practical sensitivity figure of merit:
S_e_jwP = N_n./abs(D) + N_d*abs(H)./abs(D);

% real sensitivity figure of merit (no approximations):
term1 = real( exp(-j*W) .* 1./D .* conj(H) );
term2 = real( exp(-j*W) .* H./D .* conj(H) );
term3 = real( exp(-j*2*W) .* H./D .* conj(H) );
term = abs(term1) + abs(term2) + abs(term3);
S_e_jwR = term./abs(H);

figure;
plot(W,S_e_jwP,'k-.',W,S_e_jwR,'k--');grid on;
title('Sensitivity function');
xlabel('frequency (rad/sample)');ylabel('amplitude');
legend('practical','real');

%%% Deterministic deviation for 6-bit quantization
b=bits+1; %rounding error is (2^-(bits))/2
delta_m=2^-b;
delta_H_P=delta_m*S_e_jwP; %practical
delta_H_R=delta_m*S_e_jwR; %real

%%% Statistical deviation with confidence factor Q = 0.95
% Normalized Gaussian probability density function
funcao=sprintf('2/(pi^.5) * exp( -u^2 )');
fun=inline(funcao,'u')
fun_vec=vectorize(fun);
a=0;
x=1.3858; %value obtained from a probability table
Q = quad(fun_vec,a,x) %confidence factor (just testing)

sigma_delta_m = delta_m/(12^0.5);
rho_W_P = x*sigma_delta_m*S_e_jwP; %practical
rho_W_R = x*sigma_delta_m*S_e_jwR; %real
disp('The maximum deviation of the desired amplitude response');
disp(['is ',num2str(max(rho_W_P)),' (using the practical S(e^jw)), with a confidence factor of 0.95']);
disp('The maximum deviation of the desired amplitude response');
disp(['is ',num2str(max(rho_W_R)),' (using the real S(e^jw)), with a confidence factor of 0.95']);

figure;
plot(W,rho_W_P,'k',W,delta_H_P,'k-.',...
     W,rho_W_R,'k:',W,delta_H_R,'k--');
xlabel('normalized frequency');ylabel('magnitude deviation');
legend('statistical (practical)','deterministic (practical)',...
       'statistical (real)','deterministic (real)');
grid on;

return;
figure;
plot(W,rho_W_R,'k',W,delta_H_R,'k--');grid on;
xlabel('normalized frequency');ylabel('magnitude deviation');
legend('statistical (real)','deterministic (real)');
11.24

SNR = σ_x² / σ_e²

where σ_x² is the signal power and σ_e² is the noise power.

Considering that the type of quantization is rounding, we have:

σ_e² = 2^{-2b}/12

where b is the number of bits used in the representation.

If we assume that the signal has a uniform probability density function, as depicted in Figure 11.13, we can evaluate its variance (signal power).
Figure 11.13: Uniform probability density function for exercise 11.24.
σ_x² = ∫_{-L}^{L} (x − x̄)² P(x) dx

where P(x) = 1/(2L) = 1/Δ, L = ∑_{i=1}^{b} 2^{-i} (assuming L < 1), and x̄ = 0. Then

σ_x² = (1/Δ) ∫_{-L}^{L} x² dx = L²/3 = (Δ/2)²/3 = Δ²/12

Assuming that the quantization step is given by q = Δ/2^b, we can rewrite the noise power as:

σ_e² = q²/12 = (Δ/2^b)²/12

Resulting in:

σ_x²/σ_e² = SNR = 2^{2b}

The requirement is SNR = 80 dB (10⁸), so:

2^{2b} = 10⁸ ∴ 2b = 26.57 ∴ b_min = ⌈13.29⌉ = 14 bits
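The minimum wordlength can be sanity-checked numerically (a Python sketch of the 6.02 dB-per-bit rule implied by SNR = 2^{2b}; note that 2^{2b} = 10⁸ gives 2b ≈ 26.57, so b itself rounds up to 14):

```python
import math

# SNR(dB) = 10*log10(2^(2b)) = 20*b*log10(2) ~= 6.02*b dB
target_db = 80
b_min = math.ceil(target_db / (20 * math.log10(2)))
print(b_min)                                # 14
print(10 * math.log10(2 ** (2 * b_min)))    # ~84.3 dB, first value above 80
```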
11.25 Exercise 11.25
11.26 The state-space equations describing the system are:

[x₁(n+1); x₂(n+1)] = A [x₁(n); x₂(n)] + B u(n)

y(n) = [1 0] [x₁(n); x₂(n)]

We must verify stability first. This can be done by applying the Jury stability test.
Necessary conditions:

D(z)|_{z=1} > 0
(−1)^n D(z)|_{z=−1} > 0

Sufficient conditions (Jury array):

a₀    a₁    ...  a_n
a_n   a_{n−1} ... a₀
b₀    b₁    ...  b_{n−1}
b_{n−1} b_{n−2} ... b₀
c₀    c₁    ...  c_{n−2}
c_{n−2} c_{n−3} ... c₀

where

b_k = a_k − (a_n/a₀) a_{n−k}
c_k = b_k − (b_{n−1}/b₀) b_{n−k−1}

If a₀, b₀, c₀, ... > 0, then the system is stable.

The necessary and sufficient conditions result in:

• 1 + m₁ + m₂ > 0
• (−1)²(1 − m₁ + m₂) > 0
• a₀ = 1 > 0, b₀ = 1 − m₂² > 0 ⇒ −1 < m₂ < 1

as shown in Figure 11.14.
Now, we can analyze the conditions to eliminate zero input limit cycles:
Figure 11.14: Region of stability for exercise 11.26.
A = [−m₁ 1; −m₂ 0]

a₁₂a₂₁ ≥ 0 ∴ m₂ ≤ 0
If the quantizers use magnitude truncation and the multipliers lie inside the dark region of Figure 11.15, zero-input limit cycles will be eliminated; or

a₁₂a₂₁ < 0 ∴ m₂ > 0
|a₁₁ − a₂₂| + det(A) ≤ 1

The last condition results in

m₁ + m₂ ≤ 1

By applying magnitude truncation during quantization and choosing m₁ and m₂ in the shaded region of Figure 11.16, we can eliminate zero-input limit cycles.
Constant-input limit cycles can be eliminated by using the elements of the vector p as multipliers for the input, as described in the textbook. p is defined as:

p = (I − A)⁻¹ B

p = [ (1 − m₂)/m₂ ; (1 + m₁ − 3m₂)/m₂ ]

Assuming u₀ to be a constant input, if p·u₀ is machine representable, constant-input limit cycles will be eliminated.
Figure 11.15: First zero-input limit-cycle-free region for exercise 11.26.

Figure 11.16: Second zero-input limit-cycle-free region for exercise 11.26.
11.27 The system is described by the state-space equations:

[x₁(n+1); x₂(n+1)] = A [x₁(n); x₂(n)] + B u(n)

y(n) = [1 −1] [x₁(n); x₂(n)]

Stability conditions:
m₁ > 0,  m₂ > 0,  m₁ + m₂ < 2

Zero-input limit-cycle conditions:

−m₁m₂ ≥ 0

which cannot be met, or

−m₁m₂ < 0
|2 − m₁ − m₂| + det(A) ≤ 1 ∴ 1 ≤ 1

where both are satisfied.
Figure 11.17: Stability region (gray) and zero-input limit-cycle-free region (dark gray) for exercise 11.27.
In the stability region in Figure 11.17 (shaded region), zero-input limit cycles can be eliminated if we use magnitude truncation during quantization. If the filter is free from zero-input limit cycles, it is also forced-input stable provided the overflow nonlinearities lie in the shaded region of Figure 7.21 of the textbook (e.g., saturation), thus eliminating overflow limit cycles.
Constant-input limit cycles:

p = (I − A)⁻¹ B = [−1; 0]

Since the elements of p are easily representable, constant-input limit cycles can also be eliminated.
11.28 The transfer function of the system is given by:

H(z) = (z^{-(M+1)} − 1) / (z^{-1} − 1)
This recursive filter has a peculiar property: its impulse response is finite (M + 1 samples long) and can be represented by:

h(n) = ∑_{i=0}^{M} δ(n − i)

This structure is easily implemented and does not present zero-input limit cycles, as the output of the recursive register is canceled by the output of the feedforward one after M samples without input.
The state-space equations are:

[x₁(n+1); x₂(n+1)] = [1 0; 1 0] [x₁(n); x₂(n)] + [−1; 0] u(n)

and

y(n) = [−1  z^{-(M−1)}] [x₁(n); x₂(n)] + u(n)
In order to eliminate zero-input limit cycles, the matrix A must satisfy:

a₁₂a₂₁ ≤ 0 → 0 · 1 ≤ 0

The elimination of constant-input limit cycles is determined by p = (I − A)⁻¹B, which gives:

p = [0; 0]

therefore, no multipliers are needed.
11.29 The overflow characteristics of one's-complement arithmetic will be plotted here assuming that:

1 – the number of fractional bits is very large;

2 – the dynamic range is approximately [−1, 1].

Figure 11.18 shows the overflow characteristics.
Figure 11.18: Overflow characteristics for exercise 11.29.
11.30 If overflow saturation is removed and the arithmetic is two's complement, overflow oscillations will occur, because the two's-complement overflow characteristic is similar to the one's-complement one depicted in the previous exercise. The shaded area of Figure 11.19 guarantees forced-response stability, and, as we can see, the two's-complement overflow characteristic does not lie in the shaded region. A large wordlength was assumed, and the signals are scaled to the range [−1, 1].
Figure 11.19: Overflow characteristics of two's-complement arithmetic for exercise 11.30.
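The wrap-around behaviour described above can be sketched as a simple function (a Python illustration, not from the manual): two's-complement overflow maps a result outside [−1, 1) back by modulo-2 wrapping, which is exactly what discarding carry bits does in fixed-point hardware.

```python
def wrap_twos_complement(x):
    """Two's-complement overflow: wrap x into [-1, 1) by modulo-2 arithmetic."""
    return ((x + 1.0) % 2.0) - 1.0

print(wrap_twos_complement(1.5))    # -0.5  (overflow past +1 re-enters at -1)
print(wrap_twos_complement(-1.25))  # 0.75
print(wrap_twos_complement(0.5))    # 0.5   (no overflow, unchanged)
```

The discontinuity at x = ±1 is the reason this characteristic falls outside the forced-response stability region, unlike saturation.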
11.31

|v(e_i(n), n)| < |e_i(n)|, ∀i = 1, 2, ..., N

as in equation (7.165) of the book.

(a)

|e'_i(n+1)| < |e_i(n)|, ∀i = 1, 2, ..., N

|[e_i(n) + f_i(n)]_{Q0} − f_i(n)| < |e_i(n)|

|[f'_i(n)]_{Q0} − f_i(n)| < |f'_i(n) − f_i(n)|

(b)

−(f'_i(n) − f_i(n)) < [f'_i(n)]_{Q0} − f_i(n) < f'_i(n) − f_i(n)

or

f'_i(n) − f_i(n) < [f'_i(n)]_{Q0} − f_i(n) < −(f'_i(n) − f_i(n))

These two inequalities result in the union of two regions:

region 1:
[f'_i(n)]_{Q0} < f'_i(n)
[f'_i(n)]_{Q0} > 2f_i(n) − f'_i(n)

and

region 2:
[f'_i(n)]_{Q0} > f'_i(n)
[f'_i(n)]_{Q0} < 2f_i(n) − f'_i(n)

(c)

|f_i(n)| ≤ 1

(d) overflow limited to [−1, 1]

Regions 1 and 2, together with conditions (c) and (d), give us the overflow characteristics:

[f'_i(n)]_{Q0} < f'_i(n)
[f'_i(n)]_{Q0} > 2 − f'_i(n)
−1 ≤ [f'_i(n)]_{Q0} ≤ 1

∪

[f'_i(n)]_{Q0} > f'_i(n)
[f'_i(n)]_{Q0} < −2 − f'_i(n)
−1 ≤ [f'_i(n)]_{Q0} ≤ 1

The union of these two sections gives us Figure 7.21 in the textbook.
Chapter 12
EFFICIENT FIR STRUCTURES
12.1 Using the equations:

[E_i(z); Ê_i(z)] = [1  k_i z⁻¹; k_i  z⁻¹] [E_{i−1}(z); Ê_{i−1}(z)]

N_i(z) = k₀ E_i(z)/E₀(z)

and

N̂_i(z) = k₀ Ê_i(z)/E₀(z)

we get:

[N_i(z); N̂_i(z)] = [1  k_i z⁻¹; k_i  z⁻¹] [N_{i−1}(z); N̂_{i−1}(z)]   (12.1)

Equation (12.1) will now be used for i = 1, 2:

→ i = 1:

N₁(z) = k₀ + k₀k₁z⁻¹
N̂₁(z) = k₀k₁ + k₀z⁻¹
N̂₁(z) = z⁻¹N₁(z⁻¹)

→ i = 2:

N₂(z) = N₁(z) + k₂z⁻²N₁(z⁻¹)
N̂₂(z) = k₂N₁(z) + z⁻²N₁(z⁻¹)
N̂₂(z) = z⁻²N₂(z⁻¹)

Then:

N̂_i(z) = z^{-i}N_i(z⁻¹)
N_i(z) = N_{i−1}(z) + k_i z^{-i} N_{i−1}(z⁻¹), for i = 1, 2, ..., M   (12.2)
12.2 By replacing z with z⁻¹ in equation (12.2) and rearranging, we get:

N_{i−1}(z⁻¹) = N_i(z⁻¹) − k_i z^{i} N_{i−1}(z)   (12.3)

Now, replacing equation (12.3) in equation (12.2):

N_i(z) = N_{i−1}(z) + k_i z^{-i}(N_i(z⁻¹) − k_i z^{i} N_{i−1}(z))
N_i(z) = N_{i−1}(z)(1 − k_i²) + k_i z^{-i} N_i(z⁻¹)
N_{i−1}(z) = (1/(1 − k_i²)) (N_i(z) − k_i z^{-i} N_i(z⁻¹))
which is the desired result.
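The recursion (12.2) and its inverse can be checked numerically (a Python sketch, not from the manual; each list holds the polynomial coefficients of N_i(z) from the z⁰ term up):

```python
def step_up(N_prev, k):
    """N_i(z) = N_{i-1}(z) + k * z^-i * N_{i-1}(z^-1): pad, flip, and add."""
    a = N_prev + [0.0]                        # room for the new z^-i term
    return [x + k * y for x, y in zip(a, reversed(a))]

def step_down(N_i, k):
    """Inverse: N_{i-1}(z) = (N_i(z) - k z^-i N_i(z^-1)) / (1 - k^2)."""
    a = [(x - k * y) / (1.0 - k**2) for x, y in zip(N_i, reversed(N_i))]
    return a[:-1]                             # drop the (now zero) top term

N1 = [1.0, 0.25]                              # an arbitrary N_{i-1}(z)
N2 = step_up(N1, 0.5)
print(N2)                  # [1.0, 0.375, 0.5]
print(step_down(N2, 0.5))  # [1.0, 0.25] -- the original polynomial is recovered
```

Note that step_down fails when |k_i| = 1, which is exactly the degenerate case handled separately by the fir2lattice function of exercise 12.3.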
12.3 The written function, in MatLab® code, was called fir2lattice; it normalizes the filter impulse response so that h(0) = 1, which leads to k₀ = 1. The function is as follows:
% converts a given direct-form FIR filter h(n) to the lattice form
function k = fir2lattice(h)

%warning: h(0) described in the book is h(1) in MatLab
%supposing that h(n) is known:
hn = h/h(1); % normalized by k0 (h(1))
%hn is the normalized impulse response, where hn(1)=1
M = length(hn)-1; %filter order
N(M+1,:) = hn;
k(1) = hn(1);
for i=M:-1:1
    k(i+1) = N(i+1,i+1);
    %Nflip(z) = z^-i * Ni(z^-1)
    Nflip(i+1,:) = [ fliplr(N(i+1,1:end+i-M)) zeros(1,M-i) ];
    dif = ( N(i+1,:)-k(i+1)*Nflip(i+1,:) );
    if ( dif(1:end+i-M-1) ~= 0 )
        N(i,:) = 1/(1-k(i+1)^2) * dif;
    else
        if ( i~=1 )
            error(['more than one reflection coefficient, '...
                   'excluding the first, is equal to one']);
        end
        %N(i,:) = 1/(-2*k(i+1))*( -sum(Nflip(i+1,:)) ); % treat
        %errors like the one introduced by the filter h(n)=1+z^-1
    end
end
k = k(2:end); %removing k0=1 for compatibility with tf2latc
k = k(:);
12.4 The polynomials E(z) and R(z) are described in matrix notation for the lattice structure as:

E(z) = κ [1 1; −1 1] ∏_{i=1}^{K−1} [1 0; 0 z⁻¹][1 k_i; k_i 1]

and

R(z) = ∏_{i=K−1}^{1} [1 −k_i; −k_i 1][z⁻¹ 0; 0 1] · [1 −1; 1 1]

where 2K = M + 1, and M is the order of the analysis and synthesis filters.

(a) By taking the product of the two polynomial matrices, we find a delay of order K − 1:
R(z)E(z) = ∏_{i=K−1}^{1} [1 −k_i; −k_i 1][z⁻¹ 0; 0 1] · [1 −1; 1 1] · κ [1 1; −1 1] ∏_{i=1}^{K−1} [1 0; 0 z⁻¹][1 k_i; k_i 1]

= 2κ ∏_{i=K−1}^{1} [1 −k_i; −k_i 1][z⁻¹ 0; 0 1] ∏_{i=1}^{K−1} [1 0; 0 z⁻¹][1 k_i; k_i 1]

= 2κ z⁻¹(1 − k²_{K−1}) ∏_{i=K−2}^{1} [1 −k_i; −k_i 1][z⁻¹ 0; 0 1] ∏_{i=1}^{K−2} [1 0; 0 z⁻¹][1 k_i; k_i 1]

= 2κ z^{-(K−1)} ∏_{i=1}^{K−1} (1 − k_i²)

= κ κ⁻¹ z^{-(K−1)}

= z^{-(K−1)}

since κ is defined so that κ² = 1 / (2 ∏_{i=1}^{K−1} (1 − k_i²)).
(b) We will prove that the filter bank has linear phase for the analysis filters. The demonstration for the synthesis filters is equivalent.

A given FIR filter H(z) is said to be linear-phase if it satisfies the following property:

H(z) = ± z^{-M} H(z⁻¹)   (12.4)

where M is the filter order. Therefore, we must show that the equation which describes the analysis filters is of this form:

[H₀(z); H₁(z)] = E(z²) [1; z⁻¹]
= κ [1 1; −1 1] ∏_{i=1}^{K−1} [1 0; 0 z⁻²][1 k_i; k_i 1] [1; z⁻¹]
= κ [1 1; −1 1] [A₀(z); A₁(z)]   (12.5)
[H₀(z); H₁(z)] = z^{-(2K−2)} κ [1 1; −1 1] ∏_{i=1}^{K−1} [z² 0; 0 1][1 k_i; k_i 1] [1; z⁻¹]

= z^{-M} κ [1 1; −1 1] ∏_{i=1}^{K−1} [z² 0; 0 1][1 k_i; k_i 1] [z; 1]

Inserting the identity I = [0 1; 1 0][0 1; 1 0] between every pair of factors, and using J = [0 1; 1 0] so that J[z² 0; 0 1]J = [1 0; 0 z²], J[1 k_i; k_i 1]J = [1 k_i; k_i 1] and J[z; 1] = [1; z], we get

= z^{-M} κ [1 1; −1 1] J ∏_{i=1}^{K−1} [1 0; 0 z²][1 k_i; k_i 1] [1; z]

= z^{-M} κ [1 1; 1 −1] ∏_{i=1}^{K−1} [1 0; 0 z²][1 k_i; k_i 1] [1; z]

= z^{-M} κ [1 1; 1 −1] [A₀(z⁻¹); A₁(z⁻¹)]   (12.6)

By comparing equations (12.5) and (12.6), we verify that the property of equation (12.4) is satisfied. Moreover, we can state that:

H₀(z) = z^{-M} H₀(z⁻¹)
H₁(z) = −z^{-M} H₁(z⁻¹)

The same relationship describes the synthesis filters G₀(z) and G₁(z).
12.5

H₀(z) = 1 + 2z⁻¹ + 4z⁻² + 4z⁻³ + 2z⁻⁴ + z⁻⁵

E₀₀(z²) = 1 + 4z⁻² + 2z⁻⁴
E₀₁(z²) = 2 + 4z⁻² + z⁻⁴

k = −(1/2) · 1/(1 − κ₁²) · 1/(1 − κ₂²)

E(z) = k [1 1; −1 1][1 0; 0 z⁻¹][1 κ₂; κ₂ 1][1 0; 0 z⁻¹][1 κ₁; κ₁ 1]
= k [1 1; −1 1][1 κ₂; z⁻¹κ₂ z⁻¹][1 κ₁; z⁻¹κ₁ z⁻¹]
= k [1 + z⁻¹κ₂  κ₂ + z⁻¹; −1 + z⁻¹κ₂  −κ₂ + z⁻¹][1 κ₁; z⁻¹κ₁ z⁻¹]

E₀₀(z) = k[1 + (κ₂ + κ₁κ₂)z⁻¹ + κ₁z⁻²]
E₀₁(z) = k[κ₁ + (κ₁κ₂ + κ₂)z⁻¹ + z⁻²]

therefore, matching E₀₀(z) = 1 + 4z⁻¹ + 2z⁻²:

k = 1, kκ₁ = 2 ⇒ κ₁ = 2, k(κ₂ + κ₁κ₂) = 4 ⇒ 3κ₂ = 4 ⇒ κ₂ = 4/3

which also gives E₀₁(z) = 2 + 4z⁻¹ + z⁻², as required.

E₁₀(z) = −1 + κ₂z⁻¹ − κ₁κ₂z⁻¹ + κ₁z⁻²
E₁₁(z) = −κ₁ + κ₁κ₂z⁻¹ − κ₂z⁻¹ + z⁻²

H₁(z) = −1 − κ₁z⁻¹ + (κ₂ − κ₁κ₂)z⁻² + (κ₁κ₂ − κ₂)z⁻³ + κ₁z⁻⁴ + z⁻⁵
= −1 − 2z⁻¹ + (4/3 − 8/3)z⁻² + (8/3 − 4/3)z⁻³ + 2z⁻⁴ + z⁻⁵
= −1 − 2z⁻¹ − (4/3)z⁻² + (4/3)z⁻³ + 2z⁻⁴ + z⁻⁵
12.6 Exercise 12.6
12.7 Exercise 12.7
12.8 (a) The transfer function H₁(z) is of second order.

E(z²) = [α₁ 0; 0 α₂] ∏_{i=I}^{1} [1 + z⁻²  1; 1 + β_iz⁻² + z⁻⁴  1 + z⁻²][γ_i 0; 0 1]

As a result,

[H₀(z); H₁(z)] = E(z²) [1; z⁻¹]
= [α₁(1 + z⁻²)  α₁; α₂(1 + β₁z⁻² + z⁻⁴)  α₂(1 + z⁻²)] [γ₁; z⁻¹]
= [γ₁α₁(1 + z⁻²) + α₁z⁻¹; α₂[γ₁(1 + β₁z⁻² + z⁻⁴) + z⁻¹ + z⁻³]]
= [−1 + z⁻¹ − z⁻²; −1 + z⁻¹ + β₁z⁻² + z⁻³ − z⁻⁴]

where γ₁ = 1, α₁ = α₂ = 1 and β₁ = −2.

(b)

H₀(z) = −1 + z⁻¹ − z⁻²

(c)

E⁻¹(z) = [1 + z⁻¹  1; 1 + β_iz⁻¹ + z⁻²  1 + z⁻¹]⁻¹
= 1/((1 + z⁻¹)² − 1 − β_iz⁻¹ − z⁻²) · [1 + z⁻¹  −1; −1 − β_iz⁻¹ − z⁻²  1 + z⁻¹]
= 1/((2 − β_i)z⁻¹) · [1 + z⁻¹  −1; −1 − β_iz⁻¹ − z⁻²  1 + z⁻¹]
= [ (z + 1)/(2 − β_i)   −z/(2 − β_i) ; −(z + β_i + z⁻¹)/(2 − β_i)   (z + 1)/(2 − β_i) ]

The causal solution is then given by

R(z) = [ ∏_{i=1}^{I} (1/(2 − β_i)) [1/γ_i 0; 0 1][1 + z⁻¹  −1; −1 − β_iz⁻¹ − z⁻²  1 + z⁻¹] ] [1/α₁ 0; 0 1/α₂]
12.9 The MatLab® commands remezord and remez were used to estimate the filter order and to design the filter, respectively.

The highpass filter is of order N = 96. Figure 12.1 shows the frequency response of the filter.
Figure 12.1: Frequency response of the highpass minimax filter of exercise 12.9 (amplitude in dB versus frequency).
There are different ways of determining the lattice coefficients. The easiest one is to consider the normalized impulse response of the FIR filter (the first coefficient is made unitary) and apply the MatLab® command tf2latc to this impulse response, excluding the last coefficient. The last reflection coefficient, which is 1, is then appended to the result. The same result can be achieved by using the function introduced in exercise 12.3, in the same way.

After evaluating the reflection coefficients, an easy way to check whether the two commands give the same result is to build a graph where the x axis is the response of one of them and the y axis is the response of the other. If we get a straight line with unit slope, the commands supplied equivalent results. Figure 12.2 shows this result.

As we can see, the coefficients obtained by both commands are equal.
12.10 The processing time depends on the processor clock and the available memory. The machine used is a simple Pentium II® with 256 MB of RAM. Depending on the version of MatLab®, the number of floating-point operations calculated by the function flops may no longer be available.

First of all, the impulse responses of the direct FIR form and of the lattice structure were determined. Figure 12.3 shows the result.

A simple Gaussian sequence x(n), with zero mean and unit variance, was applied as input to both filter and latcfilt. Table 12.1 summarizes the results. The execution time and flops count given were determined per output sample.
Figure 12.2: tf2latc reflection coefficients as a function of the coefficients from the user-supplied function fir2lattice, for exercise 12.9.
Figure 12.3: Impulse responses of the direct-form and lattice structures, for exercise 12.10.
As we can see, the flops count is sometimes not precise. Despite the fact that the command latcfilt is faster than filter for the same filter length, its floating-point operations count was not smaller than the one for the filter command.

The difference between the output signals of both structures is depicted in Figure 12.4 and is almost zero (the small difference is due to machine representation).

The lattice structure has proved to be faster than direct filtering, as seen in Table 12.1.
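The FIR lattice filtering performed by latcfilt can be sketched in Python as follows; this is an illustrative stand-in, not the MATLAB implementation, and the function name lattice_fir is hypothetical:

```python
def lattice_fir(k, x):
    """FIR lattice filtering with reflection coefficients k.

    Per stage i (1-based):
        f_i(n) = f_{i-1}(n) + k_i * g_{i-1}(n-1)
        g_i(n) = k_i * f_{i-1}(n) + g_{i-1}(n-1)
    The output is the upper-branch signal of the last stage.
    """
    d = [0.0] * len(k)        # d[i] holds g_i(n-1) entering stage i+1
    y = []
    for xn in x:
        f = xn                # f_0(n)
        g_prev = xn           # g_0(n)
        for i, ki in enumerate(k):
            f_new = f + ki * d[i]
            g_new = ki * f + d[i]
            d[i] = g_prev     # update the delay line
            g_prev = g_new
            f = f_new
        y.append(f)
    return y
```

With k = [1/4, 1/2, 1/3], the impulse response equals that of the direct FIR form with coefficients [1, 13/24, 5/8, 1/3], matching the equality seen in Figure 12.3.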
CHAPTER 12. EFFICIENT FIR STRUCTURES

Table 12.1: Processing time and floating-point operations per output sample, for exercise 10.10.

           texe (µs)   nfp
filter     4.34        190
latcfilt   3.26        376

Figure 12.4: Difference between the impulse responses of the direct and lattice structures for exercise 10.10.

12.11 This exercise is similar to exercise 10.4b. Now, we will prove that the filter bank has linear phase for the synthesis bank.
Remember that a given FIR filter H(z) is said to be linear phase if it satisfies the following property:

H(z) = ±z^{-M} H(z^{-1})        (12.7)

where M is the filter order. Therefore, we must show that the equation which describes the synthesis filters is of this form.
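Property (12.7) is equivalent to the impulse response being symmetric (+ sign) or antisymmetric (− sign). A small Python check of this equivalence, added here for illustration (the function name is hypothetical):

```python
def linear_phase_sign(h, tol=1e-9):
    """Check property (12.7) on a coefficient list h.

    Returns +1 if H(z) = +z^{-M} H(z^{-1})  (h symmetric),
            -1 if H(z) = -z^{-M} H(z^{-1})  (h antisymmetric),
             0 if h is not linear phase.
    """
    rev = h[::-1]
    if all(abs(a - b) <= tol for a, b in zip(h, rev)):
        return 1
    if all(abs(a + b) <= tol for a, b in zip(h, rev)):
        return -1
    return 0
```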
Using the fact that the synthesis matrix R(z) is given by

R(z) = { ∏_{i=K-1}^{1} [ 1  -k_i ; -k_i  1 ] [ z^{-1}  0 ; 0  1 ] } [ 1  -1 ; 1  1 ]

we can describe the synthesis filters in a matrix equation as:

[ G_0(z) ; G_1(z) ] = R^T(z^2) [ z^{-1} ; 1 ]
  = [ 1  1 ; -1  1 ] { ∏_{i=K-1}^{1} [ 1  -k_i ; -k_i  1 ] [ z^{-2}  0 ; 0  1 ] }^T [ z^{-1} ; 1 ]
  = [ 1  1 ; -1  1 ] ∏_{i=1}^{K-1} [ z^{-2}  0 ; 0  1 ] [ 1  -k_i ; -k_i  1 ] [ z^{-1} ; 1 ]
  = [ 1  1 ; -1  1 ] [ B_0(z) ; B_1(z) ]        (12.8)

where we have defined

[ B_0(z) ; B_1(z) ] = ∏_{i=1}^{K-1} [ z^{-2}  0 ; 0  1 ] [ 1  -k_i ; -k_i  1 ] [ z^{-1} ; 1 ]
Based on equation (12.8), another formulation can be derived which makes it easy to see that the linear-phase property holds. Using J = [ 0  1 ; 1  0 ], which satisfies I = J J:

[ G_0(z) ; G_1(z) ] = z^{-(2K-2)} [ 1  1 ; -1  1 ] ∏_{i=1}^{K-1} [ 1  0 ; 0  z^2 ] [ 1  -k_i ; -k_i  1 ] [ z^{-1} ; 1 ]
  = z^{-M} [ 1  1 ; -1  1 ] ∏_{i=1}^{K-1} [ 1  0 ; 0  z^2 ] [ 1  -k_i ; -k_i  1 ] [ 1 ; z ]
  = z^{-M} [ 1  1 ; -1  1 ] J ∏_{i=1}^{K-1} ( J [ 1  0 ; 0  z^2 ] J ) ( J [ 1  -k_i ; -k_i  1 ] J ) ( J [ 1 ; z ] )
  = z^{-M} [ 1  1 ; 1  -1 ] ∏_{i=1}^{K-1} [ z^2  0 ; 0  1 ] [ 1  -k_i ; -k_i  1 ] [ z ; 1 ]
  = z^{-M} [ 1  1 ; 1  -1 ] [ B_0(z^{-1}) ; B_1(z^{-1}) ]        (12.9)

where M = 2K - 1 is the filter order, and the identities J [ 1  0 ; 0  z^2 ] J = [ z^2  0 ; 0  1 ], J [ 1  -k_i ; -k_i  1 ] J = [ 1  -k_i ; -k_i  1 ], and [ 1  1 ; -1  1 ] J = [ 1  1 ; 1  -1 ] were applied.

By comparing equations (12.8) and (12.9), and noting that G_0(z) = B_0(z) + B_1(z) and G_1(z) = -B_0(z) + B_1(z), we verify that property (12.7) is satisfied:

G_0(z) = z^{-M} G_0(z^{-1})
G_1(z) = -z^{-M} G_1(z^{-1})
12.12 The MatLab® command conv is a script which uses resources from the built-in command filter. The latter implements FIR and IIR filtering, and its output has the same size as the input. The command conv appends zeros to the end of the input vector in order to return the full convolution of the two sequences, of length Lx + Lh − 1, where Lx is the length of the sequence x(n) and Lh that of h(n). So, the command filter will produce the same output as conv if the input sequence is padded with zeros up to the total size of the convolution.

The implementation of filtering using fft is made by padding the impulse response of the filter and the input signal with zeros until the total size of the convolution is reached. The transform is then applied to both padded sequences and their elementwise product is computed. The inverse transform is applied to the resulting sequence to obtain the convolution output. There are fast implementations of filtering in the frequency domain, especially when the input signal is very long. In this case, an overlap-add method is applied, which reduces the execution time significantly. This fast implementation is available in the MatLab® command fftfilt.
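The frequency-domain filtering just described can be sketched in a few lines of Python; this is an illustrative pure-Python version (recursive radix-2 FFT), not the MATLAB implementation:

```python
import cmath

def fft(a, invert=False):
    """Recursive radix-2 FFT; len(a) must be a power of two."""
    n = len(a)
    if n == 1:
        return list(a)
    sign = 1 if invert else -1
    even = fft(a[0::2], invert)
    odd = fft(a[1::2], invert)
    out = [0j] * n
    for k in range(n // 2):
        w = cmath.exp(sign * 2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + w
        out[k + n // 2] = even[k] - w
    return out

def fft_conv(x, h):
    """Full linear convolution of x and h computed in the frequency domain."""
    n = len(x) + len(h) - 1
    nfft = 1 << (n - 1).bit_length()           # next power of two >= n
    X = fft([complex(v) for v in x] + [0j] * (nfft - len(x)))
    H = fft([complex(v) for v in h] + [0j] * (nfft - len(h)))
    Y = [a * b for a, b in zip(X, H)]          # elementwise product
    y = fft(Y, invert=True)
    return [v.real / nfft for v in y[:n]]      # inverse transform, truncate
```

Both sequences are zero-padded to the full convolution length (rounded up to a power of two) before the transforms, exactly as described above.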
A simple test was made with the four techniques described above and the results are presented inTable 12.2. The machine configuration is described in exercise 10.10.
Table 12.2: Processing time and floating-point operations per output sample for the tests performed in exercise 10.12.

           texe (µs)   nfp
filter     4.34        190
conv       4.34        190
fft        6.98        225
fftfilt    3.46        108
According to Table 12.2, the commands filter and conv have the same execution time, but the plain fft filtering was about two times slower. The improved version of frequency-domain filtering, fftfilt, was the fastest.

It is important to point out that the overlap-add approach could be used in time-domain filtering as well.
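The overlap-add idea mentioned above can be sketched as follows; the per-block convolution is done directly here for clarity, whereas fftfilt performs it with FFTs (the function name and block size are illustrative):

```python
def overlap_add(x, h, block=64):
    """Overlap-add filtering: x is processed in blocks, and each block's
    partial convolution is summed into a shared output buffer."""
    y = [0.0] * (len(x) + len(h) - 1)
    for start in range(0, len(x), block):
        seg = x[start:start + block]
        # each block's convolution tail overlaps the next block's head
        for i, xi in enumerate(seg):
            for j, hj in enumerate(h):
                y[start + i + j] += xi * hj
    return y
```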
12.13 Exercise 12.13
12.14 Exercise 12.14
12.15 Exercise 12.15
12.16 The magnitude responses of the RRS filter for M = 2, 4, 6, 8 are shown in Figure 12.5. As we can see, as M increases, the DC gain increases (it is given by M + 1) and the resulting passband is reduced.
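The RRS magnitude response has the closed form |sin(ω(M+1)/2)/sin(ω/2)|, with DC gain M + 1, which is what Figure 12.5 plots. A small Python sketch (the function name is hypothetical):

```python
import math

def rrs_gain(w, M):
    """Magnitude response of the RRS filter H(z) = sum_{n=0}^{M} z^{-n}:
    |H(e^{jw})| = |sin(w (M+1)/2) / sin(w/2)|, with DC gain M + 1."""
    if abs(math.sin(w / 2.0)) < 1e-12:
        return float(M + 1)           # limit value at w = 0 (and 2*pi*k)
    return abs(math.sin(w * (M + 1) / 2.0) / math.sin(w / 2.0))
```

For M = 8 the DC gain is 9, matching the largest amplitude in Figure 12.5, and the first null falls at ω = 2π/9.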
Figure 12.5: Magnitude response of different RRS filters (M = 2, 4, 6, 8) of exercise 10.16 (amplitude versus normalized frequency).
12.17 Exercise 12.17
12.18 (a) Perform the transformation z → −z2 in the circuit of Figure 10.5 of the book.
(b) For the transformed filter, the L∞ scaling factor is

λ = 1/(N + 1)

with the maximum magnitude occurring at ω = ±π/2. Using Parseval's theorem, for this filter the L2 scaling factor is

λ = 1/√(N + 1)
(c) Any disturbance in the feedback part of the circuit will not affect the output in the transposedversion of Figure 10.5, as long as the output is scaled in 1’s and 2’s complement arithmetic. Onthe other hand, a computational error in the feedback state of the original structure will requirea reset.
12.19 The first design is the minimax lowpass filter, which serves as a reference. After this, we build the other two structures, namely the prefilter and the interpolator. Table 12.3 shows the results of these designs; N is the filter order and Π is the number of multiplications per output sample.

The prefilter approach was made with an RRS filter of order M = 1, which satisfies M = ⌊Ωs/Ωr − 1⌋, where Ωs is the sampling frequency (12000 Hz), Ωr the stopband edge (5200 Hz), and the operator ⌊x⌋ denotes the largest integer smaller than or equal to x.

The interpolation design was implemented with an interpolation factor L = 3. For this structure, the first filter is called the base filter and was designed with order Nb = 10. The second stage of the cascade was implemented with an interpolator of order Nint = 12, as described in Table 12.3. The best reduction was achieved with the interpolator design.
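The two counting rules used above can be written out explicitly; this is a small Python sketch (function names are hypothetical), with Π = ⌈(N+1)/2⌉ for a linear-phase FIR filter, whose coefficient symmetry roughly halves the multiplication count:

```python
import math

def rrs_prefilter_order(omega_s, omega_r):
    """M = floor(Omega_s / Omega_r - 1), the RRS order rule quoted above."""
    return math.floor(omega_s / omega_r - 1.0)

def mults_per_output(N):
    """Pi = ceil((N + 1) / 2): multiplications per output sample of a
    linear-phase FIR filter of order N."""
    return (N + 2) // 2
```

With Ωs = 12000 Hz and Ωr = 5200 Hz this gives M = 1, and for N = 27 it gives Π = 14, in agreement with Table 12.3.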
Figure 12.6 shows the frequency response of the three designs, with the respective passband and stopband details.
Table 12.3: Comparisons among the filter structures designed in exercise 10.19.

structure      N                   Π
minimax        27                  14
prefilter      Np = 1, Neq = 27    14
interpolator   Nb = 10, Nint = 12  13
12.20 Exercise 12.20
12.21 In this exercise, we will design the three structures in MatLab® and measure their figures of merit,plotting them.
The specifications are the following:

Ap = 2 dB
Ar = −30 dB
Ωp = 1/14 Hz
Ωr = 1/9.2 Hz
Ωs = 1 Hz
As defined in Chapter 7, the RPSD is given by:

RPSD = 10 log[ P_y(e^{jω}) / P_e(e^{jω}) ]

where P_e(e^{jω}) = σ_e^2, and a fixed-point implementation is assumed (quantization after the multipliers only).
Figure 12.6: The direct minimax, prefilter, and interpolation approach frequency responses of exercise 10.19. Panels: (a) frequency response; (b) passband detail; (c) stopband attenuation; (d) recursive running sum (RRS) prefilter and the equalizer; (e) interpolated base filter and the interpolator frequency responses.
The sensitivity will be overestimated using the figure of merit S(e^{jω}), given by:

S(e^{jω}) = Σ_{i=1}^{K} | S_{mi}^{H(e^{jω})}(e^{jω}) |

where K is the total number of multipliers, mi is the i-th multiplier, and S_{mi}^{H(e^{jω})}(e^{jω}) is given by the first definition of sensitivity in the textbook:

S_{mi}^{H(e^{jω})}(e^{jω}) = ∂H(e^{jω})/∂mi
These figures of merit are now described for each structure:

→ direct FIR:
PSD = σ_e^2 ⌈(N+1)/2⌉, where ⌈x⌉ is the smallest integer greater than or equal to x, and N is the filter order.
RPSD = 10 log(PSD/σ_e^2)
S(e^{jω}) = N + 1

→ prefilter:
PSD = σ_e^2 ( |Heq(e^{jω})|^2 + ⌈(Neq+1)/2⌉ ), where Neq is the equalizer order.
RPSD = 10 log(PSD/σ_e^2)
S(e^{jω}) = (Neq + 1) |Heq(e^{jω})|
The RRS filter was assumed to introduce a roundoff noise with variance σ_e^2.

→ interpolation:
PSD = σ_e^2 ( ⌈(Nb+1)/2⌉ |Hint(e^{jω})|^2 + ⌈(Nint+1)/2⌉ ), where Nb and Nint are the base-filter and interpolator orders, respectively.
RPSD = 10 log(PSD/σ_e^2)
S(e^{jω}) = (Nb + 1) |Hint(e^{jω})| + (Nint + 1) |Hb(e^{jωL})|, where L is the interpolation factor.
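For the direct-FIR case, the figures of merit reduce to simple expressions in the filter order. A small Python sketch of them, added for illustration (function names are hypothetical):

```python
import math

def rpsd_direct_fir(N):
    """RPSD (in dB) of the scaled direct-form FIR filter:
    PSD = sigma_e^2 * ceil((N+1)/2)  =>  RPSD = 10 log10(ceil((N+1)/2))."""
    return 10.0 * math.log10((N + 2) // 2)

def sensitivity_direct_fir(N):
    """Worst-case sensitivity figure of merit of the direct FIR: S = N + 1
    (frequency independent)."""
    return N + 1
```

For the minimax design of this exercise (N = 29) these give RPSD = 10 log10(15) ≈ 11.8 dB and S = 30, consistent with the flat direct-FIR curves of Figure 12.7.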
The characteristics of the structures studied in this exercise are summarized in Table 12.4.

Table 12.4: The structures' characteristics for exercise 10.21.

structure      N                  Π
minimax        29                 15
prefilter      Np = 8, Neq = 22   12
interpolator   Nb = 10, Nint = 6  10
As seen in Figures 12.7(a) and 12.7(b), the RPSD and the sensitivity figures of merit of the prefilterand interpolator structures are lower than the ones for the direct minimax filter. Figure 12.8 presentsthe overall magnitude response of the structures and their building filters.
12.22 The lowpass filter was designed with three structures: a direct minimax design, an FRM approach, and the FRM with efficient ripple margin distribution (ERMD). The direct minimax structure is of order N = 124, resulting in Π = 63 multiplications per output sample. Table 12.5 shows the results for the FRM realizations with interpolation factor L = 7, which leads to the smallest computational complexity. The FRM with ERMD is clearly the best structure.

The ERMD technique is capable of reducing the orders of the filters that compose the whole FRM structure. Using the ERMD, the maximum passband ripple and the minimum stopband attenuation were made close to the specifications (see Figures 12.9 and 12.10).
Figure 12.7: RPSD and sensitivity figures of merit for exercise 10.21. Panels: (a) relative power spectral density of noise (RPSD); (b) S(e^{jω}) sensitivity figure of merit (worst case).
Table 12.5: Characteristics of the realizations of exercise 10.22.

             NHa   NHMa   NHMc   Π
FRM          34    19     33     45
FRM (ERMD)   30    19     29     41
12.23 The direct minimax design for the highpass filter is of order N = 96, resulting in Π = 49.
In this exercise, the FRM filters were implemented using the complementary concept, and so a lowpass FRM filter with the following specifications was designed:

δp = 10^{−0.05·40}
δr = (10^{0.05·0.8} − 1)/(10^{0.05·0.8} + 1)
Ωp = 5000 Hz
Ωr = 5200 Hz
Ωs = 12000 Hz

With these specifications, the interpolation factor which leads to the smallest number of multiplications per output sample turns out to be L = 4.
Table 12.6 shows the results for the FRM and the FRM-ERMD designs. One can see that the ERMDtechnique has reduced the computational complexity.
As seen in exercise 10.22, the ERMD has reduced the total number of multiplications per outputsample because the ripples were made closer to the specifications. This fact reduces the order of thefilters in the FRM structure and results in a reduced computational complexity.
Figures 12.11 and 12.12 present the magnitude response of the direct minimax and the FRM-ERMDdesigns, respectively.
Table 12.6: Characteristics of the realizations for the highpass filter of exercise 10.23.

             NHa   NHMa   NHMc   Π
FRM          38    22     24     45
FRM (ERMD)   32    20     20     39
Figure 12.8: Frequency response of the three structures for exercise 10.21. Panels: (a) frequency response; (b) passband detail; (c) stopband attenuation; (d) recursive running sum (RRS) prefilter and the equalizer; (e) interpolated base filter and the interpolator frequency responses.
12.24 The lowpass prototype filter will be designed with the FRM technique, and the number of multiplications per output sample for the quadrature filter is just twice that of the FRM, as shown in Table 12.7. The lowest computational complexity for the FRM prototype was reached with the interpolation factor L = 17. The direct minimax realization is of order N = 2577, and its total number of multiplications per output sample is Π = 1289.
Figure 12.9: Frequency response of the minimax and the FRM-ERMD structures of exercise 10.22. Panels: (a) overall frequency response; (b) passband detail; (c) stopband attenuation.
Table 12.7: The FRM lowpass prototype filter characteristics of exercise 10.24.

             NHa   NHMa   NHMc   Π
FRM (ERMD)   226   121    121    236
Figure 12.13 shows the minimax filter and its band details. Figure 12.14 shows the quadrature filter implementation.

In order to reach the low ripple margin in the passband of the quadrature filter, the lowpass prototype filter must have its stopband ripple and its passband ripple given respectively by

δr' = min[δp, δr] · 0.4
δp' = δp · 0.4

because the passband ripple of the quadrature filter is given by δp = δp' + δr', disregarding the masking filter effects.

The cutoff frequencies for the FRM lowpass filter are given by

ωp' = (Ωp2 − Ωp1)/2
ωr' = ωp' + min[Ωp1 − Ωr1, Ωr2 − Ωp2]
Figure 12.10: Frequency response of the direct minimax and the FRM without efficient ripple margin distribution (ERMD) of exercise 10.22. Panels: (a) overall magnitude response; (b) passband detail; (c) stopband attenuation.
Table 12.7 shows the properties of the lowpass prototype FRM filter. The required number of multi-plications per output sample is clearly smaller than the one for the direct minimax realization.
12.25 This exercise is similar to exercise 10.24, except that we will obtain, as the final result, the complementary filter of the bandpass quadrature filter. The complementary of a bandpass filter is a bandstop filter, and we must only take care with the fact that the passband ripple of the original filter is the stopband attenuation of its complementary, and the passband ripple of the complementary is the stopband attenuation of the original filter.

The FRM filter which will generate the bandpass quadrature filter is described in Table 12.8. The desired bandstop filter is the complementary of this quadrature filter and has the same computational complexity. The quadrature filter requires twice the number of multiplications of the lowpass FRM.

The direct realization with the minimax procedure requires N = 518 and Π = 260. This design is depicted in Figure 12.15, and the complementary quadrature filter is depicted in Figure 12.16. The interpolation factor used was L = 7.

By analysing the direct minimax and the quadrature realizations, we can see that a significant reduction in the mean number of multiplications per output sample was achieved by the latter, which requires only 63.8% of the multiplications of the direct minimax design.
12.26 Exercise 12.26
Figure 12.11: Frequency response of the minimax and the complementary FRM-ERMD structures of exercise 10.23. Panels: (a) overall frequency response; (b) passband detail; (c) stopband attenuation.
Table 12.8: The FRM lowpass prototype filter for the complementary quadrature filter of exercise 10.25.

             NHa   NHMa   NHMc   Π
FRM (ERMD)   112   50     -      83
12.27 Exercise 12.27
Figure 12.12: Frequency response of the direct minimax and the complementary FRM without efficient ripple margin distribution (ERMD) of exercise 10.23. Panels: (a) overall magnitude response; (b) passband detail; (c) stopband attenuation.
Figure 12.13: Frequency response of the minimax filter of exercise 10.24. Panels: (a) overall frequency response; (b) first stopband detail; (c) passband detail; (d) second stopband attenuation.
Figure 12.14: Frequency response of the quadrature filter of exercise 10.24. Panels: (a) overall frequency response; (b) first stopband detail; (c) passband detail; (d) second stopband attenuation.
Figure 12.15: Frequency response of the minimax filter of exercise 10.25. Panels: (a) overall frequency response; (b) lower passband detail; (c) stopband attenuation; (d) upper passband detail.
Figure 12.16: Frequency response of the quadrature filter of exercise 10.25. Panels: (a) overall frequency response; (b) lower passband detail; (c) stopband attenuation; (d) upper passband detail.
Chapter 13
EFFICIENT IIR STRUCTURES
13.1 The transfer function of the parallel form is given by:

H(z) = h'_0 + Σ_{k=1}^{m} H_k(z) = h'_0 + Σ_{k=1}^{m} (γ'_{1k} z + γ'_{2k}) / (z^2 + m_{1k} z + m_{2k})

The scaling coefficients can be calculated as:

λ_k = 1 / ‖F_k(z)‖_p

for k = 1, ..., m, with F_k(z) defined as:

F_k(z) = 1 / (z^2 + m_{1k} z + m_{2k})

The relative noise variance for the structure depends on where the quantizers are placed.

First, assuming quantization before the sums:

σ_o^2 = σ_e^2 [ 1 + 2m + 3 Σ_{k=1}^{m} (1/λ_k^2) ‖H_k(e^{jω})‖_2^2 ]

Then, the RPSD is given by:

RPSD = 1 + 2m + 3 Σ_{k=1}^{m} (1/λ_k^2) ‖H_k(e^{jω})‖_2^2

Similarly, assuming quantization after the sums, the RPSD is derived as:

RPSD = 1 + Σ_{k=1}^{m} (1/λ_k^2) ‖H_k(e^{jω})‖_2^2
13.2 The values of c_{1,j} and c_{2,j} that minimize the output noise are calculated by solving:

min_{c_{1,j}, c_{2,j}} ‖ (1 + c_{1,j} z^{-1} + c_{2,j} z^{-2}) ∏_{i=j}^{m} H_i(e^{jω}) ‖_2^2
Then, what must be minimized is the integral:

(1/2π) ∮_{-π}^{π} |F(e^{jω})|^2 |∏_{i=j}^{m} H_i(e^{jω})|^2 dω        (13.1)

where the polynomial F(e^{jω}) is defined in the z domain as

F(z) = 1 + c_{1,j} z^{-1} + c_{2,j} z^{-2}

and

G(z) = F(z)F(z^{-1}) = 1 + c_{1,j}(z + z^{-1}) + c_{2,j}(z^2 + z^{-2}) + c_{1,j}c_{2,j}(z + z^{-1}) + c_{1,j}^2 + c_{2,j}^2

As the product in the function to be minimized does not depend on the coefficients c_{1,j} and c_{2,j}, we just need the partial derivatives of G(z) with respect to them:

∂G/∂c_{1,j} |_{z=e^{jω}} = 2(cos ω + c_{2,j} cos ω + c_{1,j})        (13.2)
∂G/∂c_{2,j} |_{z=e^{jω}} = 2(cos 2ω + c_{1,j} cos ω + c_{2,j})      (13.3)

Now, replacing the derivatives of equations (13.2) and (13.3) in equation (13.1) and equating the results to zero, we get:

(1/2π) ∮_{-π}^{π} 2(cos ω + c_{2,j} cos ω + c_{1,j}) |∏_{i=j}^{m} H_i(e^{jω})|^2 dω = 0
(1/2π) ∮_{-π}^{π} 2(cos 2ω + c_{1,j} cos ω + c_{2,j}) |∏_{i=j}^{m} H_i(e^{jω})|^2 dω = 0

These equations can be solved for c_{1,j} and c_{2,j}. Rearranging the terms, we get:

c_{1,j} t_3 = −t_1 − c_{2,j} t_1
c_{2,j} t_3 = −t_2 − c_{1,j} t_1

where

t_1 = ∮_{-π}^{π} |∏_{i=j}^{m} H_i(e^{jω})|^2 cos ω dω
t_2 = ∮_{-π}^{π} |∏_{i=j}^{m} H_i(e^{jω})|^2 cos 2ω dω
t_3 = ∮_{-π}^{π} |∏_{i=j}^{m} H_i(e^{jω})|^2 dω

Solving first for c_{1,j}:

c_{1,j} t_3 = −t_1 + (t_2 + c_{1,j} t_1) t_1 / t_3
c_{1,j} = (t_1 t_2 − t_1 t_3)/(t_3^2 − t_1^2)

Similarly, solving for c_{2,j}:

c_{2,j} t_3 = −t_2 − c_{1,j} t_1
c_{2,j} = (t_1^2 − t_2 t_3)/(t_3^2 − t_1^2)
13.3 The second-order transfer function is as follows:

H(z) = (γ_1 z + γ_2)/(z^2 + m_1 z + m_2)

The L_p norm (p = 2 or ∞) is given by:

L_p = [ (1/2π) ∮_0^{2π} |H(e^{jω})|^p dω ]^{1/p}

The residue theorem can be applied to solve this type of integral when one is interested in the L_2 norm. It states that the result is given by the sum of the residues of the function P(z) = H(z)H(z^{-1})z^{-1}.
• So, in the L_2 case, we just need to find the residues:

Res{P(z), z = p_1} = [γ_1γ_2(p_1^2 + 1) + p_1(γ_1^2 + γ_2^2)] / [(p_1 − p_2)(p_1^2 − 1)(p_1p_2 − 1)]
Res{P(z), z = p_2} = −[γ_1γ_2(p_2^2 + 1) + p_2(γ_1^2 + γ_2^2)] / [(p_1 − p_2)(p_2^2 − 1)(p_1p_2 − 1)]

where

p_1 = [−m_1 + j√(4m_2 − m_1^2)]/2
p_2 = [−m_1 − j√(4m_2 − m_1^2)]/2

The summation of the residues is then given by:

Σ Res = { (p_2^2 − 1)[γ_1γ_2(p_1^2 + 1) + p_1(γ_1^2 + γ_2^2)] − (p_1^2 − 1)[γ_1γ_2(p_2^2 + 1) + p_2(γ_1^2 + γ_2^2)] } / [(p_1 − p_2)(p_1^2 − 1)(p_2^2 − 1)(p_1p_2 − 1)]
      = { γ_1γ_2[(p_1^2 + 1)(p_2^2 − 1) − (p_1^2 − 1)(p_2^2 + 1)] + (γ_1^2 + γ_2^2)[(p_2^2 − 1)p_1 − (p_1^2 − 1)p_2] } / [(p_1 − p_2)(p_1^2 − 1)(p_2^2 − 1)(p_1p_2 − 1)]
      = [ −2γ_1γ_2(p_1^2 − p_2^2) − (γ_1^2 + γ_2^2)(p_1p_2 + 1)(p_1 − p_2) ] / [(p_1 − p_2)(p_1^2 − 1)(p_2^2 − 1)(p_1p_2 − 1)]
      = [ γ_1^2 + γ_2^2 − (2m_1/(m_2+1)) γ_1γ_2 ] / { (1 − m_2^2)[1 − (m_1/(m_2+1))^2] }

since:

p_1^2 − p_2^2 = −m_1(p_1 − p_2)
(p_1^2 − 1)(p_2^2 − 1) = (m_2 + 1)^2 − m_1^2
p_1p_2 = m_2
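The closed-form residue summation can be cross-checked against a direct impulse-response energy computation. A small Python sketch, not part of the original solution (function names and the truncation length are illustrative):

```python
def l2_sq_closed_form(g1, g2, m1, m2):
    """||H||_2^2 from the residue summation derived above."""
    a = m1 / (m2 + 1.0)
    return (g1 * g1 + g2 * g2 - 2.0 * a * g1 * g2) / ((1.0 - m2 * m2) * (1.0 - a * a))

def l2_sq_numeric(g1, g2, m1, m2, nsamp=20000):
    """||H||_2^2 from the truncated impulse response of
    H(z) = (g1 z + g2)/(z^2 + m1 z + m2) (stable filters only)."""
    y1 = y2 = 0.0
    acc = 0.0
    for n in range(nsamp):
        # H(z) = (g1 z^{-1} + g2 z^{-2}) / (1 + m1 z^{-1} + m2 z^{-2})
        x1 = 1.0 if n == 1 else 0.0
        x2 = 1.0 if n == 2 else 0.0
        y = g1 * x1 + g2 * x2 - m1 * y1 - m2 * y2
        acc += y * y
        y1, y2 = y, y1
    return acc
```

Both functions agree on stable second-order sections, which gives a quick sanity check of the algebra above.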
• In the L_∞ case we must find the maximum value of the squared modulus of the transfer function:

‖H(e^{jω})‖_∞^2 = max_ω |H(e^{jω})|^2

The maximum may occur at ω = 0 (z = 1), at ω = π (z = −1), or at some intermediate frequency. For the first two points, the value of the transfer function is easily obtained:

|H(e^{jω})|^2 |_{ω=0} = [ (γ_1 + γ_2)/(1 + m_1 + m_2) ]^2
|H(e^{jω})|^2 |_{ω=π} = [ (−γ_1 + γ_2)/(1 − m_1 + m_2) ]^2
• The first case is derived for γ_1γ_2 ≠ 0:

|H(e^{jω})|^2 = (γ_1^2 + γ_2^2 + 2γ_1γ_2 cos ω) / [4m_2 cos^2 ω + 2m_1(1 + m_2) cos ω + m_1^2 + (1 − m_2)^2]        (13.4)

We must take the derivative with respect to ω and equate it to zero. The resulting stationarity condition gives two families of solutions:

sin ω = 0        (13.5)

or

4m_2 cos^2 ω + 8m_2 ν cos ω + 2m_1(1 + m_2)ν − m_1^2 − (1 − m_2)^2 = 0        (13.6)

where ν = (γ_1^2 + γ_2^2)/(2γ_1γ_2).

As equation (13.5) was already treated (ω = 0 or ω = π), we turn our attention to equation (13.6). Dividing by 4m_2:

cos^2 ω + 2ν cos ω + [2m_1(1 + m_2)ν − m_1^2 − (1 − m_2)^2]/(4m_2) = 0
cos^2 ω + 2ν cos ω − 2ην − (η^2 + v) = 0

Resulting in:

cos ω = −ν ± √((ν + η)^2 + v) = ν[ √((1 + η/ν)^2 + v/ν^2) − 1 ]        (13.7)
375
Since the coefficients η and v are defined as:η = −m1(1+m2)
4m2
v =(
1− m12
4m2
)(1−m2)2
4m2
Moreover, as
cosw = −ν −√
(ν + η)2 + v
is always negative, it was discarded.The result in equation 13.7 forces the definition of another coefficient:
ζ = sat
ν
(√(1 +
η
ν)2 +
v
ν2− 1)
(13.8)
where
sat(x) =
1, for x > 1−1, for x < −1x, for − 1 ≤ x ≤ 1
Equation (13.8) defines a coefficient that remains in the range [−1, 1], just like the cosinefunction.Now, by replacing equation (13.7) in equation (13.4), we get:
|H(eω)|max2 =γ1
2 + γ22 + 2γ1γ2ζ
4m2 [ζ2 − 2ηζ + η2 + v]
=γ1
2 + γ22 + 2γ1γ2ζ
4m2 [(ζ − η)2 + v]
• The second case is derived for γ1γ2 = 0.
∂|H(eω)|2∂w
= sinw(γ12 + γ2
2) [8m2 cosw + 2m1(1 +m2)]
= 0
sinw = 0
or
cosw =−m1(1 +m2)
4m2= η ∴ ζ = sat(η) (13.9)
with the sat(x) function being the same as the one defined before.Equation (13.9), when replaced in equation (13.4) results:
|H(eω)|max2 =γ1
2 + γ22 + 2γ1γ2ζ
4m2 [(ζ − η)2 + v]
=γ1
2 + γ22
4m2 [(ζ − η)2 + v]
13.4 For odd-order filters, the expression of the transfer function is:

H(z) = ∏_{i=1}^{m+1} H_i(z)

where the H_i(z) are second-order sections for i = 1, ..., m, and H_{m+1}(z) is a first-order section defined as

H_{m+1}(z) = (γ_{0,m+1} z + γ_{1,m+1})/(z + m_{1,m+1})

In the fixed-point analysis, the quantization is performed after the multipliers.

The PSD and the output variances are treated below for the cascade of direct-form, optimal state-space, and limit-cycle-free state-space structures. The first-order section is assumed to be the last one.
• Cascade of direct-form sections:

P_y(z) = σ_e^2 [ (3/λ_1^2) ∏_{i=1}^{m+1} H_i(z)H_i(z^{-1}) + 5 Σ_{j=2}^{m} (1/λ_j^2) ∏_{i=j}^{m+1} H_i(z)H_i(z^{-1}) + (4/λ_{m+1}^2) H_{m+1}(z)H_{m+1}(z^{-1}) + 2 ]

σ^2 = σ_o^2/σ_e^2 = (3/λ_1^2) ‖∏_{i=1}^{m+1} H_i(e^{jω})‖_2^2 + 5 Σ_{j=2}^{m} (1/λ_j^2) ‖∏_{i=j}^{m+1} H_i(e^{jω})‖_2^2 + (4/λ_{m+1}^2) ‖H_{m+1}(e^{jω})‖_2^2 + 2
• Cascade of state-space sections:

The state-space first-order section is described by:

x(k+1) = a_{11} x(k) + b_1 u(k)
y(k) = c_1 x(k) + d u(k)

These state-space equations lead to the transfer function:

H_{m+1}(z) = Y(z)/U(z) = c_1 b_1/(z − a_{11}) + d

The output noise PSD and variance are:

P_y(z) = 3σ_e^2 Σ_{j=1}^{m} { ∏_{l=j+1}^{m+1} H_l(z)H_l(z^{-1}) [1 + Σ_{i=1}^{2} G'_{ij}(z)G'_{ij}(z^{-1})] } + 2σ_e^2 [1 + G'_{1(m+1)}(z)G'_{1(m+1)}(z^{-1})]

σ^2 = 3 Σ_{j=1}^{m} ‖ ∏_{l=j+1}^{m+1} H_l(z)H_l(z^{-1}) [1 + Σ_{i=1}^{2} G'_{ij}(z)G'_{ij}(z^{-1})] ‖_2^2 + 2 [1 + ‖G'_{1(m+1)}(z)G'_{1(m+1)}(z^{-1})‖_2^2]

where G'_{ij}(z) is the scaled transfer function from the state x_i(k+1) to the output y(k), in section j.
• Cascade of state-space sections free of limit cycles:

The output noise PSD and variance are:

P_y(z) = σ_e^2 Σ_{j=1}^{m} { ∏_{l=j+1}^{m+1} H_l(z)H_l(z^{-1}) [3 + 2 Σ_{i=1}^{2} G'_{ij}(z)G'_{ij}(z^{-1})] } + σ_e^2 [2 + G'_{1(m+1)}(z)G'_{1(m+1)}(z^{-1})]

σ^2 = Σ_{j=1}^{m} ‖ ∏_{l=j+1}^{m+1} H_l(z)H_l(z^{-1}) [3 + 2 Σ_{i=1}^{2} G'_{ij}(z)G'_{ij}(z^{-1})] ‖_2^2 + 2 + ‖G'_{1(m+1)}(z)G'_{1(m+1)}(z^{-1})‖_2^2
13.5 In the floating-point case, the quantization is performed after the multipliers and after the adders. The quantization after the multipliers was already analyzed in exercise 11.4; therefore, we only describe the effect of the quantization in the adders. The total output noise PSD and variance are given by the sum of these two effects.
• Cascade of direct-form sections:

P_y,adder(z) = σ_e^2 [ Σ_{j=1}^{m+1} (1/λ_j^2) ∏_{i=j}^{m+1} H_i(z)H_i(z^{-1}) + 1 ]

σ^2 = σ_o^2/σ_e^2 = Σ_{j=1}^{m+1} (1/λ_j^2) ‖∏_{i=j}^{m+1} H_i(e^{jω})‖_2^2 + 1
• Cascade of state-space sections:

The output noise PSD and variance are:

P_y,adder(z) = σ_e^2 Σ_{j=1}^{m} { ∏_{l=j+1}^{m+1} H_l(z)H_l(z^{-1}) [1 + Σ_{i=1}^{2} G'_{ij}(z)G'_{ij}(z^{-1})] } + σ_e^2 [1 + G'_{1(m+1)}(z)G'_{1(m+1)}(z^{-1})]

σ^2 = Σ_{j=1}^{m} ‖ ∏_{l=j+1}^{m+1} H_l(z)H_l(z^{-1}) [1 + Σ_{i=1}^{2} G'_{ij}(z)G'_{ij}(z^{-1})] ‖_2^2 + 1 + ‖G'_{1(m+1)}(z)G'_{1(m+1)}(z^{-1})‖_2^2

• Cascade of state-space sections free of limit cycles:
In this case, the PSD and the variance are equal to those of the optimal state-space sections shown in the last item.
13.6 The two equations that help the derivation are:

K' = Σ_{k=0}^{∞} A'^k b' b'^H (A'^k)^H        (13.10)
W' = Σ_{k=0}^{∞} (A'^k)^H c'^H c' A'^k        (13.11)

• First, using equation (13.10), we will show that

K'_{ii} = ‖F'_i(e^{jω})‖_2^2        (13.12)

The impulse response of the transfer from the input u(n) to the state vector x(n) is given, after scaling, by:

f'(n) = [ f'_1(n)  f'_2(n)  ...  f'_N(n) ]^T = Σ_{k=0}^{∞} A'^k b' δ(n − k)

where N is the filter order. So, for a given sample k:

f'(k) = A'^k b'

Parseval's theorem states that:

Σ_{k=−∞}^{∞} x(k)x*(k) = ‖X(e^{jω})‖_2^2        (13.13)

where x(k) is a sequence defined for −∞ ≤ k ≤ ∞, X(e^{jω}) is its discrete-time Fourier transform, and the operator * denotes conjugation. Using equation (13.13) in vectorized form, we get:

Σ_{k=0}^{∞} A'^k b' (A'^k b')^{*T} = Σ_{k=0}^{∞} A'^k b' b'^H (A'^k)^H = K'        (13.14)

where X^T is the transpose of X. Equation (13.14) defines the matrix K', whose main-diagonal elements K'_{ii} are given by ‖F'_i(e^{jω})‖_2^2. Given this property, equation (13.12) is proved.
• Using equation (13.11), and the same procedure, we will show that

W'_{ii} = ‖G'_i(e^{jω})‖_2^2        (13.15)

g'(n) = [ g'_1(n)  g'_2(n)  ...  g'_N(n) ]^T = Σ_{k=0}^{∞} (A'^T)^k c'^T δ(n − k)

g'(k) = (A'^T)^k c'^T

Σ_{k=0}^{∞} [ (A'^T)^k c'^T ]^* c' A'^k = Σ_{k=0}^{∞} (A'^k)^H c'^H c' A'^k = W'        (13.16)

In equation (13.16), the diagonal of W' is W'_{ii} = ‖G'_i(e^{jω})‖_2^2. Given this property, equation (13.15) is proved.
13.7 The transfer function from the input u(k) to the output y(k) of the state-space structure is, in the z domain:

Y(z)/U(z) = c(zI − A)^{-1} b + d        (13.17)

where

A = [ a_{11}  a_{12} ; a_{21}  a_{22} ],  b = [ b_1  b_2 ]^T,  c = [ c_1  c_2 ]

For simplicity, assuming d = 0, we will find the polynomial that describes the transfer function in equation (13.17):

H(z) = [ c_1  c_2 ] [ z − a_{11}  −a_{12} ; −a_{21}  z − a_{22} ]^{-1} [ b_1 ; b_2 ]
     = [ 2b_1b_2 z − 2b_1b_2 a_{11} + (b_2^2 − b_1^2) a_{12} ] / (z^2 − 2a_{11} z + a_{11}^2 + a_{12}^2)
     = (γ_1 z + γ_2)/(z^2 + m_1 z + m_2)        (13.18)

The following relations were assumed in equation (13.18), to obtain the optimal state-space structure:

a_{11} = a_{22}
a_{12} = −a_{21}
b_1 = c_2,  b_2 = c_1        (13.19)

From equation (13.18), we easily infer that:

2b_1b_2 = γ_1
(b_2^2 − b_1^2) a_{12} − 2b_1b_2 a_{11} = γ_2
−2a_{11} = m_1
a_{11}^2 + a_{12}^2 = m_2        (13.20)
Now, we will solve this set of equations, starting with a_{11} and a_{12}:

a_{11} = −m_1/2
a_{12}^2 = m_2 − a_{11}^2  ∴  a_{12} = −√(m_2 − m_1^2/4)        (13.21)

Since we need a_{21} positive, we force a_{12} to be negative.

Equation (13.21) is equal to equation (11.60) of the textbook.

We still must find the equations for the coefficients b_1 and b_2. By examining equation (13.20), we get:

b_2 = γ_1/(2b_1)
b_1^4 − b_1^2 (γ_1 a_{11} + γ_2)/a_{21} − γ_1^2/4 = 0
→ b_1^2 = [ γ_1 a_{11} + γ_2 ± √(γ_1^2 m_2 + γ_2^2 − γ_1γ_2 m_1) ] / (2a_{21})
∴ b_1^2 = (γ_1 a_{11} + γ_2 + σ)/(2a_{21}), as b_1^2 is always positive,
or b_1 = √[ (γ_1 a_{11} + γ_2 + σ)/(2a_{21}) ]        (13.22)

where σ = √(γ_1^2 m_2 + γ_2^2 − γ_1γ_2 m_1).

As b_1^2 is always positive for real b_1, a_{21} must be positive and a_{12} negative (a_{12} = −a_{21}).

The set of equations (13.19), (13.21), and (13.22) gives the same result as equations (11.60)-(11.61) in the textbook.
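The coefficient mapping above can be verified numerically by reconstructing the transfer-function coefficients from the state-space parameters. A small Python sketch, not part of the original solution, assuming complex-conjugate poles (the function name is hypothetical):

```python
import math

def optimal_state_space(g1, g2, m1, m2):
    """Sketch of equations (13.19)-(13.22): minimum-noise second-order
    state-space section for H(z) = (g1 z + g2)/(z^2 + m1 z + m2)."""
    a11 = -m1 / 2.0
    a21 = math.sqrt(m2 - m1 * m1 / 4.0)   # a12 = -a21, complex poles assumed
    sigma = math.sqrt(g1 * g1 * m2 + g2 * g2 - g1 * g2 * m1)
    b1 = math.sqrt((g1 * a11 + g2 + sigma) / (2.0 * a21))
    b2 = g1 / (2.0 * b1)
    A = [[a11, -a21], [a21, a11]]
    b = [b1, b2]
    c = [b2, b1]                          # c1 = b2, c2 = b1
    return A, b, c
```

Plugging the returned parameters back into equation (13.20) recovers γ_1, γ_2, m_1, and m_2, which confirms the mapping.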
13.8 The transfer function is

H(z) = d + (γ1 z + γ2)/(z² + m1 z + m2)

and the poles are given by:

p1 = (−m1 + j√(4m2 − m1²))/2
p2 = (−m1 − j√(4m2 − m1²))/2

If we want the poles to be complex conjugates, we must have:

4m2 − m1² > 0  ∴  −2√m2 < m1 < 2√m2    (13.23)

Using the maximum value for m1 in equation (13.23) in the equation that defines σ, we get:

σ = √(γ1²m2 + γ2² − γ1γ2m1) > √(γ1²m2 + γ2² − 2γ1γ2√m2) = √((γ1√m2 − γ2)²) ≥ 0    (13.24)

According to equation (13.24), σ is always greater than zero if the poles are complex conjugates, because the coefficients of the filter are necessarily real.
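A quick numeric spot-check of this bound, using illustrative values of γ1, γ2, and m2 (not from the textbook):

```python
import math

def sigma(g1, g2, m1, m2):
    # sigma as defined in the solution of exercise 13.7
    return math.sqrt(g1 * g1 * m2 + g2 * g2 - g1 * g2 * m1)

m2 = 0.81                        # poles are complex for -1.8 < m1 < 1.8
for m1 in (-1.7, 0.0, 1.7):
    assert 4 * m2 - m1 * m1 > 0  # complex-conjugate poles, eq. (13.23)
    assert sigma(0.5, 0.3, m1, m2) > 0
```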
13.9 The relation between b1 and b2, as derived in the textbook, is:

b1/b2 = −(2 + m1)/(2σζ)

In the same way, the relation that defines c1 and c2 is:

c2/c1 = [−(m1 + 2m2)γ1 + (2 + m1)γ2] / [2σζ(γ1 + γ2)]

The minimum roundoff noise is reached if b1/b2 = c2/c1:

−(2 + m1)/(2σζ) = [−(m1 + 2m2)γ1 + (2 + m1)γ2] / [2σζ(γ1 + γ2)]

−(2 + m1)(γ1 + γ2) = −(m1 + 2m2)γ1 + (2 + m1)γ2

γ1(2m2 − 2) = 2γ2(2 + m1)

∴ γ1/γ2 = (m1 + 2)/(m2 − 1)
13.10 The design specifications are:

Ap = 0.4 dB
Ar = 50 dB
Ωr1 = 1000 rad/s
Ωp1 = 1150 rad/s
Ωp2 = 1250 rad/s
Ωr2 = 1400 rad/s
Ωs = 10000 rad/s

This exercise was solved with the aid of Matlab®. The only finite-wordlength effect considered is coefficient quantization. Quantization after products and sums was not taken into account, and affects the results only through the double floating-point precision used in the design; these effects are expected to be negligible compared with the seventeen-bit quantization of the filter coefficients.

The coefficients, after quantization with seventeen bits in the fractional part (excluding the sign bit), are presented in the following items. The scaling was implemented with the L2 norm.

This design leads to an elliptic filter of order N = 8, whose coefficients in a direct-form realization are in Table 13.1.

• Parallel of direct-form sections: the filter is arranged in second-order sections, and the scaled and quantized coefficients are presented in Table 13.2.

• Cascade of direct-form sections: the scaling was made using the L2 norm. The sections were ordered so that the maximum value of the output-noise PSD was minimized. Table 13.3 shows the coefficients after scaling and quantization.

• Cascade of optimal state-space sections: the coefficients presented in Table 13.4 are already scaled using the L2 norm.
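The seventeen-bit fractional quantization applied to every coefficient table below can be sketched as follows. The original design was done in Matlab, so this Python fragment is only a stand-in for the quantization step (rounding assumed; sign bit and integer part kept in full):

```python
def q17(x, frac_bits=17):
    """Round x to the nearest multiple of 2**-frac_bits, keeping the
    integer part (and the sign) intact."""
    step = 2.0 ** (-frac_bits)
    return round(x / step) * step

# the rounding error is at most half an LSB, i.e. 2**-18
b0 = 2.779805100600000e-04   # b0 of Table 13.1
assert abs(q17(b0) - b0) <= 2.0 ** (-18)
assert q17(0.5) == 0.5       # exactly representable values are unchanged
```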
Table 13.1: Direct-form elliptic filter coefficients for exercise 11.10.
numerator                          denominator
b0  2.779805100600000e-04          a0  1.000000000000000e+00
b1 -1.538851369928211e-03          a1 -5.773370013400000e+00
b2  4.247183645559548e-03          a2  1.641792113000000e+01
b3 -7.395723052765707e-03          a3 -2.899770689800000e+01
b4  8.840281765965295e-03          a4  3.459878640900000e+01
b5 -7.395723052765707e-03          a5 -2.842297768400000e+01
b6  4.247183645559548e-03          a6  1.577356793500000e+01
b7 -1.538851369928211e-03          a7 -5.436845501600000e+00
b8  2.779805100600000e-04          a8  9.230518096400000e-01
Table 13.2: Parallel-form scaled and quantized coefficients for exercise 11.10.
Feed-forward coefficient: h0=2.975463867187500e-04
section 1 section 2 section 3 section 4
γ′0 -6.594848632812500e-02 4.438018798828125e-02 3.948211669921875e-02 -6.696319580078125e-02
γ′1 -2.114868164062500e-02 9.420013427734375e-02 -1.536331176757812e-01 1.157760620117188e-01
m0 -1.404647827148438e+00 -1.418899536132812e+00 -1.456573486328125e+00 -1.493240356445312e+00
m1 9.883804321289062e-01 9.712524414062500e-01 9.721069335937500e-01 9.891357421875000e-01
λ 1.075988764505649e-01 1.652437370301113e-01 1.581054807943862e-01 9.713033197431090e-02
Table 13.3: Cascade-form scaled and quantized coefficients for exercise 11.10.
Constant gain: λ1=9.712982177734375e-02
section 1 section 2 section 3 section 4
γ′0 4.251632690429688e-01 1.837158203125000e-01 1.350708007812500e-01 2.712631225585938e-01
γ′1 -6.850357055664062e-01 -2.312088012695312e-01 -2.368850708007812e-01 -2.474822998046875e-01
γ′2 4.251632690429688e-01 1.837158203125000e-01 1.350708007812500e-01 2.712631225585938e-01
m0 -1.493240356445312e+00 -1.404647827148438e+00 -1.456573486328125e+00 -1.418899536132812e+00
m1 9.891357421875000e-01 9.883804321289062e-01 9.721069335937500e-01 9.712524414062500e-01
• Cascade of state-space sections free of limit cycles: Table 13.5 shows the coefficients after scaling and quantization with seventeen bits in the fractional part.
The frequency responses of the structures developed above are depicted in Figures 13.1 and 13.2.
13.11 The design specifications are:
Ap = 1 dB
Ar = 70 dB
Ωp = 0.025 rad/sample
Ωr = 0.04 rad/sample
Ωs = 2π rad/sample

The design specifications lead to an elliptic filter of order N = 6, whose coefficients in a direct-form realization are in Table 13.6. The scaling was implemented using the L2 norm.
Table 13.4: Optimal state-space sections scaled and quantized coefficients for exercise 11.10.
section 1 section 2 section 3 section 4
a′11 7.466201782226562e-01 7.023239135742188e-01 7.282867431640625e-01 7.094497680664062e-01
a′12 -6.567077636718750e-01 -7.203140258789062e-01 -6.631164550781250e-01 -6.939392089843750e-01
a′21 6.573562622070312e-01 6.873626708984375e-01 6.661071777343750e-01 6.743164062500000e-01
a′22 7.466201782226562e-01 7.023239135742188e-01 7.282867431640625e-01 7.094497680664062e-01
b′1 5.653381347656250e-02 2.348785400390625e-01 1.830062866210938e-01 7.665710449218750e-01
b′2 -1.361846923828125e-01 8.804321289062500e-02 -4.286804199218750e-01 2.882461547851562e-01
c′1 -2.476959228515625e-01 3.191375732421875e-02 -6.122589111328125e-02 5.004119873046875e-02
c′2 1.028289794921875e-01 8.512878417968750e-02 2.613830566406250e-02 1.330718994140625e-01
d 2.373504638671875e-01 1.025619506835938e-01 7.540893554687500e-02 1.514358520507812e-01
Table 13.5: State-space free of limit cycles sections scaled and quantized coefficients for exercise 11.10.
λ1=3.225479125976562e-01
section 1 section 2 section 3 section 4
a11 7.094497680664062e-01 7.466201782226562e-01 7.023239135742188e-01 7.282867431640625e-01
a12 -6.837539672851562e-01 -6.569900512695312e-01 -7.035980224609375e-01 -6.643142700195312e-01
a21 6.843566894531250e-01 6.570816040039062e-01 7.036895751953125e-01 6.649017333984375e-01
a22 7.094497680664062e-01 7.466201782226562e-01 7.023239135742188e-01 7.282867431640625e-01
b1 2.905502319335938e-01 2.533798217773438e-01 2.976760864257812e-01 2.717132568359375e-01
b2 -6.843566894531250e-01 -6.570816040039062e-01 -7.036895751953125e-01 -6.649017333984375e-01
c′1 6.206512451171875e-02 -5.248260498046875e-02 6.098937988281250e-02 -1.282577514648438e-01
c′2 -2.104949951171875e-02 2.338409423828125e-02 -2.106475830078125e-02 5.733489990234375e-02
d′ 6.404113769531250e-02 2.429351806640625e-01 2.256317138671875e-01 2.455291748046875e-01
[Figure 13.1 (four panels): magnitude response (dB) versus angular frequency (rad/s) for the direct-form, parallel, and cascade direct-form realizations. (a) Overall frequency response; (b) lower stopband attenuation; (c) passband detail; (d) upper stopband attenuation.]

Figure 13.1: Frequency response of direct, cascade and parallel forms of exercise 11.10.
• Parallel of direct-form sections: the filter is arranged in second-order sections, and the scaled and quantized coefficients are presented in Table 13.7.

• Cascade of direct-form sections: the scaling was made using the L2 norm. The sections were ordered so that the maximum value of the output-noise PSD was minimized. Table 13.8 shows the coefficients after scaling and quantization.
[Figure 13.2 (four panels): magnitude response (dB) versus angular frequency (rad/s) for the direct-form, cascade state-space, and limit-cycle-free realizations. (a) Overall frequency response; (b) lower stopband attenuation; (c) passband detail; (d) upper stopband attenuation.]

Figure 13.2: Frequency response of direct, optimal and free of limit cycles state-space forms for exercise 11.10.
Table 13.6: Direct-form elliptic filter coefficients for exercise 11.11.
numerator                          denominator
b0  2.352497635500000e-04          a0  1.000000000000000e+00
b1 -1.358494159413024e-03          a1 -5.917720139300000e+00
b2  3.318735970368700e-03          a2  1.460155275700000e+01
b3 -4.390963339193363e-03          a3 -1.922832885800000e+01
b4  3.318735970368700e-03          a4  1.425291361500000e+01
b5 -1.358494159413024e-03          a5 -5.638449831200000e+00
b6  2.352497635500000e-04          a6  9.300324787699999e-01
• Cascade of optimal state-space sections: the coefficients presented in Table 13.9 are already scaled using the L2 norm.

• Cascade of state-space sections free of limit cycles: Table 13.10 shows the coefficients after scaling and quantization with seventeen bits in the fractional part.

The frequency responses of the structures developed above are depicted in Figures 13.3 and 13.4.
Table 13.7: Parallel-form scaled and quantized coefficients for exercise 11.11.
Feed-forward coefficient: h0=2.517700195312500e-04
section 1 section 2 section 3
γ′0 7.868041992187500e-01 -9.570770263671875e-01 6.401443481445312e-01
γ′1 -6.509170532226562e-01 8.414611816406250e-01 -5.943603515625000e-01
m0 -1.959518432617188e+00 -1.971939086914062e+00 -1.986267089843750e+00
m1 9.604721069335938e-01 9.757461547851562e-01 9.923706054687500e-01
λ 8.651733398437500e-03 1.358032226562500e-02 9.643554687500000e-03
Table 13.8: Cascade-form scaled and quantized coefficients for exercise 11.11.
Constant gain: λ1=9.643554687500000e-03
section 1 section 2 section 3
γ′0 2.258682250976562e-01 1.271362304687500e-01 8.493804931640625e-01
γ′1 -4.479827880859375e-01 -2.507095336914062e-01 -1.545310974121094e+00
γ′2 2.258682250976562e-01 1.271362304687500e-01 8.493804931640625e-01
m0 -1.986267089843750e+00 -1.971939086914062e+00 -1.959518432617188e+00
m1 9.923706054687500e-01 9.757461547851562e-01 9.604721069335938e-01
13.12 The direct-form IIR elliptic filter was already designed in exercise 11.10, but here coefficient quantization is not taken into account, so Table 13.11 presents the non-quantized coefficients of the direct-form IIR filter. Quantization was assumed to occur only after the adders, which lets us analyse this effect, and its error spectrum, separately. Table 13.12 therefore shows the coefficients of the cascade realization after scaling (but without quantization), together with the coefficients c1,j and c2,j of the error-spectrum-shaping (ESS) technique.

The frequency responses of the structures developed above are depicted in Figure 13.5. Figure 13.5(e) depicts the transfer function of the error due to quantization after the adders, for the cascade structures implemented with and without the ESS technique. As can be observed, the ESS clearly shapes the error spectrum, which has a reduced effect in the passband of the filter when compared to the error spectrum without the ESS.
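A minimal sketch of the ESS idea for a single quantizer, assuming second-order error feedback so that the quantization error reaches the output shaped by 1 + c1 z⁻¹ + c2 z⁻²; the coefficients c1,j, c2,j of Table 13.12 play this role for each cascade section (the input samples and coefficients below are illustrative):

```python
def ess_quantize(x, c1, c2, frac_bits=17):
    """Quantize the sequence x while feeding the past quantization errors
    back, so the error appears at the output filtered by 1 + c1*z^-1 + c2*z^-2."""
    step = 2.0 ** (-frac_bits)
    q = lambda v: round(v / step) * step
    e1 = e2 = 0.0                 # past quantization errors e[n-1], e[n-2]
    y = []
    for v in x:
        w = v + c1 * e1 + c2 * e2  # error feedback before quantization
        yn = q(w)
        e1, e2 = yn - w, e1        # update error history
        y.append(yn)
    return y

y = ess_quantize([0.1234567, -0.7654321, 0.25], -1.45, 0.99)
```

With c1 ≈ −1.45 and c2 ≈ 0.99, the zeros of the noise transfer function sit on the unit circle near the passband, which is why the ESS error curve in Figure 13.5(e) dips there.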
13.13 As in exercise 11.10, only coefficient quantization is performed. The direct-form IIR filter coefficients were already shown in Table 13.11.

The L2-norm scaled coefficients are in Table 13.13, and the L∞-norm scaled coefficients in Table 13.14.

The frequency analyses of these designs are shown in Figure 13.6.
Table 13.9: Optimal state-space sections scaled and quantized coefficients for exercise 11.11.
section 1 section 2 section 3
a′11 9.931335449218750e-01 9.859695434570312e-01 9.797592163085938e-01
a′12 -7.744598388671875e-02 -5.164337158203125e-02 -1.084899902343750e-02
a′21 7.822418212890625e-02 7.001495361328125e-02 5.043029785156250e-02
a′22 9.931335449218750e-01 9.859695434570312e-01 9.797592163085938e-01
b′1 1.232070922851562e-01 6.470718383789062e-01 4.180168151855469e+00
b′2 1.335144042968750e-03 -6.103515625000000e-05 8.352661132812500e-02
c′1 5.645751953125000e-04 0 3.028869628906250e-03
c′2 5.249023437500000e-02 1.445007324218750e-02 1.517181396484375e-01
d 4.808044433593750e-02 2.706146240234375e-02 1.808013916015625e-01
Table 13.10: State-space free of limit cycles sections scaled and quantized coefficients for exercise 11.11.
λ1=7.637260437011719e+00
section 1 section 2 section 3
a11 9.797592163085938e-01 9.931335449218750e-01 9.859695434570312e-01
a12 -1.501464843750000e-02 -7.764434814453125e-02 -5.782318115234375e-02
a21 3.644561767578125e-02 7.801818847656250e-02 6.253051757812500e-02
a22 9.797592163085938e-01 9.931335449218750e-01 9.859695434570312e-01
b1 2.024078369140625e-02 6.866455078125000e-03 1.403045654296875e-02
b2 -3.644561767578125e-02 -7.801818847656250e-02 -6.253051757812500e-02
c′1 1.238868713378906e+00 4.337387084960938e-01 1.179428100585938e-01
c′2 6.626892089843750e-01 2.880859375000000e-02 2.648162841796875e-02
d′ 6.591796875000000e-03 2.512283325195312e-01 1.859283447265625e-02
[Figure 13.3 (three panels): magnitude response (dB) versus angular frequency for the direct-form, parallel, and cascade direct-form realizations. (a) Overall frequency response; (b) passband detail; (c) stopband attenuation.]

Figure 13.3: Frequency response of direct, cascade and parallel forms of exercise 11.11.
13.14 In order to prove equation (11.120) we must use equations (11.115) and (11.116), which are, respectively:

zBj(z) = Dj(z⁻¹) z⁻ʲ    (13.25)
[Figure 13.4 (three panels): magnitude response (dB) versus angular frequency for the direct-form, cascade state-space, and limit-cycle-free realizations. (a) Overall frequency response; (b) passband detail; (c) stopband attenuation.]

Figure 13.4: Frequency response of direct, optimal and free of limit cycles state-space forms of exercise 11.11.
Table 13.11: Direct-form elliptic filter coefficients for exercise 11.12.
numerator                          denominator
b0  2.779805100600000e-04          a0  1.000000000000000e+00
b1 -1.538851369928211e-03          a1 -5.773370013400000e+00
b2  4.247183645559548e-03          a2  1.641792113000000e+01
b3 -7.395723052765707e-03          a3 -2.899770689800000e+01
b4  8.840281765965295e-03          a4  3.459878640900000e+01
b5 -7.395723052765707e-03          a5 -2.842297768400000e+01
b6  4.247183645559548e-03          a6  1.577356793500000e+01
b7 -1.538851369928211e-03          a7 -5.436845501600000e+00
b8  2.779805100600000e-04          a8  9.230518096400000e-01
Dj−1(z) = (1/(1 − ajj²)) [Dj(z) − ajj z Bj(z)]    (13.26)
Table 13.12: Cascade-form scaled and ESS coefficients for exercise 11.12.
Constant gain: λ1=9.713055480290132e-02
section 1 section 2 section 3 section 4
γ′0 4.251605074391967e-01 1.837185862631622e-01 1.350709976518540e-01 2.712628740967959e-01
γ′1 -6.850323242208637e-01 -2.312093728687646e-01 -2.368837301127678e-01 -2.474793597445170e-01
γ′2 4.251605074391747e-01 1.837185862631635e-01 1.350709976518603e-01 2.712628740967959e-01
m0 -1.493238752054216e+00 -1.404651334152399e+00 -1.456576770990463e+00 -1.418903156202923e+00
m1 9.891329568141431e-01 9.883769608171952e-01 9.721102111639396e-01 9.712550442136454e-01
c1 -1.456735860063858e+00 -1.436066449898573e+00 -1.451711344814359e+00 -1.224343205147788e+00
c2 9.991109543456721e-01 9.993753746997142e-01 9.940708321150308e-01 7.454732685796770e-01
Table 13.13: Parallel-form L2 scaled and quantized coefficients for exercise 11.13.
Feed-forward coefficient: h0=2.975463867187500e-04
section 1 section 2 section 3 section 4
γ′0 -6.594848632812500e-02 4.438018798828125e-02 3.948211669921875e-02 -6.696319580078125e-02
γ′1 -2.114868164062500e-02 9.420013427734375e-02 -1.536331176757812e-01 1.157760620117188e-01
m0 -1.404647827148438e+00 -1.418899536132812e+00 -1.456573486328125e+00 -1.493240356445312e+00
m1 9.883804321289062e-01 9.712524414062500e-01 9.721069335937500e-01 9.891357421875000e-01
λ 1.075973510742188e-01 1.652450561523438e-01 1.581039428710938e-01 9.712982177734375e-02
The second one, after some manipulation and with z replaced by z⁻¹, can be rewritten as:

Dj(z⁻¹) = (1 − ajj²) Dj−1(z⁻¹) + ajj Dj(z) zʲ    (13.27)

By replacing equations (13.25) and (13.27) in equation (13.26) we get:

(1 − ajj²) Dj(z) = (1 − ajj²) Dj−1(z) + (1 − ajj²) ajj z⁻ʲ Dj−1(z⁻¹)

and, since z⁻ʲ Dj−1(z⁻¹) = Bj−1(z) by equation (13.25),

Dj(z) = Dj−1(z) + ajj Bj−1(z)    (13.28)

Now, using equation (13.27) with z replaced by z⁻¹ together with equation (13.25), we get Dj(z) = (1 − ajj²) Dj−1(z) + ajj z Bj(z); equating this with equation (13.28):

Dj−1(z) + ajj Bj−1(z) = (1 − ajj²) Dj−1(z) + ajj z Bj(z)

Bj(z) = z⁻¹ Bj−1(z) + ajj z⁻¹ Dj−1(z)    (13.29)

Equations (13.28) and (13.29) can be written in matrix form, resulting in equation (11.120) of the textbook:

[Dj(z)]   [   1        ajj ] [Dj−1(z)]
[Bj(z)] = [ ajj z⁻¹    z⁻¹ ] [Bj−1(z)]

13.15 For the scaled lattice, the following properties hold:
[D̂j(z)  ]   [ (λN···λj+1)         0         ] [Dj(z)  ]
[B̂j−1(z)] = [      0         λj(λN···λj+1)  ] [Bj−1(z)]    (13.30)

where the hatted signals denote the scaled versions of Dj(z) and Bj−1(z).
[Figure 13.5 (five panels): magnitude response (dB) versus angular frequency (rad/s) for the non-quantized, ESS cascade, and plain cascade realizations. (a) Overall frequency response; (b) lower stopband attenuation; (c) passband detail; (d) upper stopband attenuation; (e) transfer function of the quantization-after-sum error for the whole structure, with and without ESS.]

Figure 13.5: Frequency response of direct, cascade and parallel forms for exercise 11.12.
[D̂j−1(z)]   [ λj(λN···λj+1)        0        ] [Dj−1(z)]
[B̂j(z)  ] = [      0          (λN···λj+1)   ] [Bj(z)  ]    (13.31)
Table 13.14: Parallel-form L∞ scaled and quantized coefficients for exercise 11.13.
Feed-forward coefficient: h0=2.975463867187500e-04
section 1 section 2 section 3 section 4
γ′0 -8.625564575195312e-01 3.675842285156250e-01 3.320617675781250e-01 -9.059753417968750e-01
γ′1 -2.765808105468750e-01 7.801742553710938e-01 -1.292045593261719e+00 1.566421508789062e+00
m0 -1.404647827148438e+00 -1.418899536132812e+00 -1.456573486328125e+00 -1.493240356445312e+00
m1 9.883804321289062e-01 9.712524414062500e-01 9.721069335937500e-01 9.891357421875000e-01
λ 8.224487304687500e-03 1.995086669921875e-02 1.879882812500000e-02 7.179260253906250e-03
[Figure 13.6 (four panels): magnitude response (dB) versus angular frequency (rad/s) for the direct form and the parallel forms scaled with the L2 and L∞ norms. (a) Overall frequency response; (b) lower stopband attenuation; (c) passband detail; (d) upper stopband attenuation.]

Figure 13.6: Frequency response of direct, cascade and parallel forms for exercise 11.13.
The two-multiplier lattice is governed by the relation:

[Dj−1(z)]   [    1            −aj,j        ] [Dj(z)  ]
[Bj(z)  ] = [ aj,j z⁻¹   (1 − aj,j²) z⁻¹   ] [Bj−1(z)]    (13.32)
By simply applying equations (13.32) and (13.30) in equation (13.31), we get:

[D̂j−1(z)]   [ λj(λN···λj+1)      0       ] [    1           −aj,j       ] [ 1/(λN···λj+1)          0           ] [D̂j(z)  ]
[B̂j(z)  ] = [      0        (λN···λj+1)  ] [ aj,j z⁻¹  (1 − aj,j²) z⁻¹  ] [      0         1/(λj(λN···λj+1))   ] [B̂j−1(z)]

            [   λj                 −aj,j           ] [D̂j(z)  ]
          = [ aj,j z⁻¹   ((1 − aj,j²)/λj) z⁻¹      ] [B̂j−1(z)]    (13.33)

Equation (13.33) is the scaled version of equation (13.32). In order to eliminate one multiplier of the structure, we can make λj = 1 ± aj,j. This choice eliminates the quadratic dependence on aj,j, since (1 − aj,j²)/(1 ± aj,j) = 1 ∓ aj,j, removing one of these multipliers. Applying this value of the scaling factor in equation (13.33), we get:

[D̂j−1(z)]   [ 1 ± aj,j        −aj,j        ] [D̂j(z)  ]
[B̂j(z)  ] = [ aj,j z⁻¹   (1 ∓ aj,j) z⁻¹    ] [B̂j−1(z)]

This last equation results in the one-multiplier lattice.
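The unscaled recursion of equation (11.120) can be checked numerically. The sketch below (with arbitrary illustrative coefficients ajj) builds Dj(z) and Bj(z) as arrays of coefficients in powers of z⁻¹, starting from D0(z) = 1 and B0(z) = z⁻¹, and verifies the mirror property implied by equation (13.25):

```python
import numpy as np

def lattice_polys(a_coeffs):
    """Apply eq. (11.120): D_j = D_{j-1} + a_jj*B_{j-1},
    B_j = z^-1 * (B_{j-1} + a_jj*D_{j-1}).
    Polynomials are arrays of coefficients of z^0, z^-1, z^-2, ..."""
    D = np.array([1.0])           # D_0(z) = 1
    B = np.array([0.0, 1.0])      # B_0(z) = z^-1
    for a in a_coeffs:
        Dp = np.append(D, 0.0)    # pad D to the length of B
        D, B = Dp + a * B, np.concatenate(([0.0], B + a * Dp))
    return D, B

D, B = lattice_polys([0.5, -0.3])
# mirror property from (13.25): B_j is the reversed D_j delayed by one sample
assert np.allclose(B, np.concatenate(([0.0], D[::-1])))
```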
13.16 Each section of the two-port scaled lattice is governed by the relation:

[D̂j−1(z)]   [   λj                 −aj,j           ] [D̂j(z)  ]
[B̂j(z)  ] = [ aj,j z⁻¹   ((1 − aj,j²)/λj) z⁻¹      ] [B̂j−1(z)]

where, if the structure is L2-normalized, the multiplier must be λj = √(1 − aj,j²), resulting in

[D̂j−1(z)]   [ √(1 − aj,j²)         −aj,j         ] [D̂j(z)  ]
[B̂j(z)  ] = [   aj,j z⁻¹     √(1 − aj,j²) z⁻¹    ] [B̂j−1(z)]

The solution of this exercise requires finding a new λj that provides a simplified structure with only three multipliers. The new two-port scaled structure is given by

[D̂j−1(z)]   [ λj √(1 − aj,j²)          −aj,j            ] [D̂j(z)  ]
[B̂j(z)  ] = [    aj,j z⁻¹      (√(1 − aj,j²)/λj) z⁻¹    ] [B̂j−1(z)]

where choosing λj = 1/√(1 − aj,j²) results in

[D̂j−1(z)]   [    1            −aj,j        ] [D̂j(z)  ]
[B̂j(z)  ] = [ aj,j z⁻¹   (1 − aj,j²) z⁻¹   ] [B̂j−1(z)]

which is directly implemented with three multipliers (aj,j, −aj,j and 1 − aj,j²). The implementation of this structure can also be done with two multipliers, as described in the textbook.
13.17 The two-multiplier lattice that implements the filter of exercise 11.10 has its coefficients given in Table 13.15. The frequency response of this structure is depicted in Figure 13.7.
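The lattice coefficients aj,j of Table 13.15 can be obtained from the direct-form denominator by running equation (13.26) downwards: at each step ajj is the last coefficient of the monic Dj(z), and Dj−1(z) = [Dj(z) − ajj z Bj(z)]/(1 − ajj²). A sketch with an illustrative second-order denominator (not the exercise 11.10 filter):

```python
import numpy as np

def direct_to_lattice(den):
    """Extract the two-multiplier lattice coefficients a_jj from a monic
    direct-form denominator (coefficients of z^0, z^-1, ..., z^-N)."""
    D = np.asarray(den, dtype=float)
    a = []
    while len(D) > 1:
        ajj = D[-1]                        # last coefficient of D_j(z)
        a.append(ajj)
        # z*B_j(z) has the coefficients of D_j(z) in reversed order
        D = (D - ajj * D[::-1]) / (1.0 - ajj * ajj)
        D = D[:-1]                         # the last coefficient is now zero
    return a[::-1]                         # a_11, a_22, ..., a_NN

# recovers the coefficients that generate D(z) = 1 + 0.35 z^-1 - 0.3 z^-2
assert np.allclose(direct_to_lattice([1.0, 0.35, -0.3]), [0.5, -0.3])
```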
13.18 Exercise 13.18
13.19 Exercise 13.19
Table 13.15: Two-multiplier lattice coefficients for exercise 11.17.
section aj,j vj
0 - -1.292426776035478e-06
1 -7.313586388771941e-01 2.538009421949178e-07
2 9.990796409007403e-01 5.560742306805373e-06
3 -7.285069961671096e-01 -8.757606958786912e-07
4 9.991603307919437e-01 -1.070028378933816e-05
5 -7.294394550631826e-01 -7.580202665163818e-05
6 9.981806878908159e-01 2.018146296579697e-05
7 -7.279986708423641e-01 6.603297116182950e-05
8 9.230518096400000e-01 2.779805100600000e-04
[Figure 13.7 (four panels): magnitude response (dB) versus angular frequency (rad/s) of the two-multiplier lattice. (a) Overall frequency response; (b) lower stopband attenuation; (c) passband detail; (d) upper stopband attenuation.]

Figure 13.7: Frequency response of the two-multiplier lattice for exercise 11.17.
13.20 The standard wave digital filter realization is depicted in Figure 13.8. Its six multipliers are derived as follows:

G1 = 1/R1
Figure 13.8: Wave filter for the doubly-terminated RLC network for exercise 11.20.
G′1 = 2C1/T + G1

α1 = 2G1/(G1 + 2C1/T + G′1) = G1/(G1 + 2C1/T)

R′2 = R′1 + 2L1/T

β1 = 2R′1/(R′1 + 2L1/T + R′2) = R′1/(R′1 + 2L1/T)

G′3 = G′2 + 2C2/T

α2 = 2G′2/(G′2 + 2C2/T + G′3) = G′2/(G′2 + 2C2/T)

R′4 = R′3 + 2L1/T

β2 = 2R′3/(R′3 + 2L1/T + R′4) = R′3/(R′3 + 2L1/T)

α3 = 2G′4/(G′4 + 2C1/T + G1)

α4 = 2(2C1/T)/(G′4 + 2C1/T + G1)
The wave lattice realization requires five multiplications per output sample, while the standard wave design requires six. Therefore, the lattice structure is less computationally complex for the design of symmetric filters derived from analog prototypes.
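The multiplier formulas above can be evaluated directly. The sketch below assumes (this is an assumption, not stated in the excerpt) that the port resistances are chained as R′1 = 1/G′1, G′2 = 1/R′2, R′3 = 1/G′3 and G′4 = 1/R′4; the component values are illustrative:

```python
def wave_multipliers(R1, C1, C2, L1, T):
    """Six adaptor multipliers of the standard wave realization, following
    the formulas above (port-resistance chaining assumed, not from the text)."""
    G1 = 1.0 / R1
    G1p = 2.0 * C1 / T + G1
    alpha1 = G1 / (G1 + 2.0 * C1 / T)
    R1p = 1.0 / G1p                      # assumed chaining
    R2p = R1p + 2.0 * L1 / T
    beta1 = R1p / (R1p + 2.0 * L1 / T)
    G2p = 1.0 / R2p                      # assumed chaining
    G3p = G2p + 2.0 * C2 / T
    alpha2 = G2p / (G2p + 2.0 * C2 / T)
    R3p = 1.0 / G3p                      # assumed chaining
    R4p = R3p + 2.0 * L1 / T
    beta2 = R3p / (R3p + 2.0 * L1 / T)
    G4p = 1.0 / R4p                      # assumed chaining
    alpha3 = 2.0 * G4p / (G4p + 2.0 * C1 / T + G1)
    alpha4 = 2.0 * (2.0 * C1 / T) / (G4p + 2.0 * C1 / T + G1)
    return alpha1, beta1, alpha2, beta2, alpha3, alpha4

mults = wave_multipliers(R1=1.0, C1=1e-3, C2=1e-3, L1=1e-3, T=1e-4)
assert all(0.0 < m < 2.0 for m in mults)   # all adaptor gains are realizable
```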
13.21 The cascade-form structure implements a highpass filter. This exercise was done supposing that quantization is performed only after the multipliers, just like in a fixed-point arithmetic machine.

(a) The output-noise RPSD is:

RPSD = 10 log₁₀ [ 3 + (3/λ1²) ∏_{i=1}^{m} Hi(z)Hi(z⁻¹) + 5 Σ_{j=2}^{m} (1/λj²) ∏_{i=j}^{m} Hi(z)Hi(z⁻¹) ]

for i = 1, 2, 3 (m = 3 sections). The RPSD is shown in Figure 13.9.
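The RPSD expression can be evaluated on the unit circle, where Hi(z)Hi(z⁻¹) reduces to |Hi(e^jω)|². The section coefficients below are placeholders, not the actual exercise 11.21 sections:

```python
import numpy as np

def rpsd(omega, sections, lambdas):
    """Evaluate the output-noise RPSD formula above at z = e^{j*omega}.
    `sections` is a list of (num, den) pairs, coefficients in powers of z^-1."""
    zinv = np.exp(-1j * np.asarray(omega))
    def mag2(sec):
        b, a = sec
        return np.abs(np.polyval(b[::-1], zinv) / np.polyval(a[::-1], zinv)) ** 2
    m = len(sections)
    total = 3.0 + (3.0 / lambdas[0] ** 2) * np.prod([mag2(s) for s in sections], axis=0)
    for j in range(1, m):
        total = total + (5.0 / lambdas[j] ** 2) * np.prod([mag2(s) for s in sections[j:]], axis=0)
    return 10.0 * np.log10(total)

# with unit sections and unit scalings the RPSD is flat: 10*log10(3+3+5+5)
sections = [([1.0, 0.0, 0.0], [1.0, 0.0, 0.0])] * 3
val = rpsd(np.linspace(0, np.pi, 5), sections, [1.0, 1.0, 1.0])
assert np.allclose(val, 10.0 * np.log10(16.0))
```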
[Figure 13.9: output-noise RPSD (dB) versus angular frequency (rad/s).]

Figure 13.9: Output-noise RPSD for exercise 11.21a.
(b) The filter coefficient wordlength determined by applying the statistical forecast is ten bits in the fractional part (F = 10). The integer part is implemented with only one bit (I = 1), plus the sign bit. The confidence factor used was Q = 0.95.

(c) The quantized version meets the specifications. The passband ripple is lower than 1.0 dB, and the stopband attenuation remained the same. The greatest variation in the stopband occurred for frequencies near z = 1 (DC). Figure 13.10 shows the frequency responses of the original and quantized filters.
[Figure 13.10 (two panels): magnitude response (dB) versus angular frequency (rad/s) of the original and quantized filters. (a) Overall frequency response; (b) passband detail.]

Figure 13.10: Frequency response for exercise 11.21c.
13.22 Exercise 13.22