
Digital Audio Effects

Guide: Prof. V.M. Gadre

Advisor: Ritesh Kolte

Chetan Rao

Abhishek Badki

SatyaPrakash Pareek


Contents

Introduction

Section A: Getting familiar with the tools

DSP Kit overview

CCS: usage and programming

Matlab vs CCS

Section B: Digital Audio Effects

DAFX

Delay based effects:

Echo

Chorus

Flanging

Reverberation

Filtering based effects:

Equalizer

Using cascade technique – peak and shelving filters

Using band filters

Wah wah effect

Modulation based effects:

Ring modulation

Other effects:

Distortion

Section C: References

Section D: Appendix


Getting familiar with the tools

DSP Kit overview

Fig. 5510 DSP Starter Kit (DSK)

The primary features of the DSK are:

• 200 MHz TMS320VC5510 DSP

• AIC23 Stereo Codec

• Four Position User DIP Switch and Four User LEDs

• On-board Flash and SDRAM

Digital Signal processor: The TMS320VC5510 DSP is the heart of the system.

Codec:

Codec stands for coder/decoder. The DSK includes an on-board codec called the AIC23. The

job of the AIC23 is to code analog input samples into a digital format for the DSP to process,

and then decode data coming out of the DSP to generate the processed analog output.


Four Position User DIP Switch and Four User LEDs:

The DSK has 4 light emitting diodes (LEDs) and 4 DIP (Dual-In-Line Package) switches that

allow users to interact with programs through simple LED displays and user input on the

switches.

Flash and SDRAM:

The 5510 has a significant amount of internal memory so typical applications will have all

code and data on-chip. But when external accesses are necessary, it uses a 32-bit wide

external memory interface. The DSK includes an external non-volatile Flash chip to store

boot code and an external SDRAM to serve as an example of how to include external

memories in your own system. The DSK implements the logic necessary to tie board

components together in a complex programmable logic device (CPLD). In addition to

random glue logic, the CPLD implements a set of 4 software programmable registers that can

be used to access the on-board LEDs and DIP switches as well as control the daughter card

interface.

JTAG emulator:

The 5510 DSK includes a special device called a JTAG emulator on-board that can directly

access the register and memory state of the 5510 chip through a standardized JTAG interface

port. When a user wants to monitor the progress of his program, Code Composer sends

commands to the emulator through its USB host interface to check on any data the user is

interested in.

There are many software tools that can be used for creating audio effects:

• MATLAB

• CCS : Code Composer Studio

• LABView

We have used CCS for programming and MATLAB for finding filter coefficients.


CCS: Usage and Programming

CODE COMPOSER STUDIO: CCS

CCS is Texas Instruments' software development tool.

It consists of:

• An assembler

• A C compiler

The Code Composer IDE is the piece you see when you run Code Composer. It consists of

an editor for creating source code, a project manager to identify the source files and options

necessary for your programs and an integrated source level debugger that lets you examine

the behavior of your program while it is running. The IDE is responsible for calling other

components such as the compiler and assembler so developers don’t have to deal with the

hassle of running each tool manually.

Code Composer Studio provides integrated program management using projects. A project

keeps track of all information that is needed to build a target program or library.

A project records:

• Filenames of source code and object libraries

• Compiler, assembler, and linker options

• Include file dependencies

Program management is most easily accomplished using the Project View window. The

Project View window displays the entire contents of the project, organized by the types of

files associated with the project. All project operations can be performed from within the

Project View window. The project environment speeds development time by providing a

variety of commands for building your project. If the project contains many source files and

only a few of the files have been edited since the project was last built, use the Incremental Build

command to recompile only the files that have changed. The Rebuild All command forces all

files to be compiled. Use the Compile File command to compile an individual source file.

Code Composer Studio allows you to collect execution statistics about specific areas in your

code. This is called profiling, and it gives you immediate feedback on your application's

performance and lets you optimize your code. You can determine, for instance, how much


CPU time algorithms use. You can also profile other processor events, such as the number of

branches, subroutine calls, or interrupts taken.

To start using Code Composer Studio, we first have to power the DSK and connect it to the computer through the USB port. We then start the CCS software, which recognizes the DSK over USB at startup and allows us to work with it. The program is written in the C language, loaded and run. The audio signal applied at the LINE IN/MICROPHONE terminal of the DSK is converted into samples by the ADC. These samples are processed by the loaded program on the DSP. The processed samples are then converted by the DAC and sent to the HEADPHONE and LINE OUT terminals. This is how the input is processed and the various effects are generated.

Things to be aware of:

• The DSK is a separate system from the PC; when a program is recompiled in Code Composer on the PC, it must be explicitly loaded onto the 5510 on the DSK.

• When a program in Code Composer is run, it simply starts executing at the current program counter. To restart the program, the program counter must be reset using Debug -> Restart, or the program must be re-loaded, which sets the program counter implicitly.

• After a program starts running it continues running on the DSP indefinitely. To stop it, use Debug -> Halt.

Basic body of the program:

For beginners, a sample skeleton is provided, which includes all the basic instructions to

properly interface with and load a program onto the DSK.

To open the sample test skeleton: click Project -> Open and open sample_test.pjt in the sample_skeleton folder. Open the C file in the Source folder in the left-hand pane; it will be similar to the sample_test_skeleton listing given in the Appendix.

It uses the AIC23 codec module of the 5510 DSK Board Support Library to read data in and write data out through the AIC23 codec and serial port. It also contains pre-calculated sine wave data, which is commented out. The sine wave data is stored in an array called sinetable and can be used for the generation of sine waves of different frequencies. The codec operates at 48 kHz


by default, but here we have changed the sampling frequency to 24 kHz. The different sampling frequencies available are also written inside comments.

The DSP is configured using the DSP configuration tool. Settings for this example are stored in a configuration file called sample_test.cdb. At compile time, Code Composer auto-generates DSP/BIOS related files based on these settings. The header below contains the results of the auto-generation and must be included for proper operation; its name is formed by taking the base name of sample_test.cdb and appending cfg.h.

#include "sample_testcfg.h"

To use the BSL (Board Support Library), we have to write this instruction in the program.

#include "dsk5510.h"

To use the AIC23 codec module

#include "dsk5510_aic23.h"

To set the length of the sine wave table

#define SINE_TABLE_SIZE 48

Pre-generated sine wave data, 16-bit signed samples. For example, the decimal number 25995 is represented in hex format as 0x658B.

Int16 sinetable[SINE_TABLE_SIZE] = {
    0x0000, 0x10b4, 0x2120, 0x30fb, 0x3fff, 0x4dea, 0x5a81, 0x658b, …
};

Fig. Sampling and quantizing by the ADC, digital audio effects (DAFX) and reconstruction by the DAC.

Effect of Sampling on Frequency Response:

The sampling frequency must be at least twice the highest frequency we wish to reproduce (Nyquist criterion), because we must have at least one data point for each half cycle of the audio waveform. The highest frequency we can record with a sampling rate of 8 kHz is therefore 4 kHz. At a sampling rate of 44.1 kHz we can record up to about 22 kHz, but the filters used in the D/A conversion process have a very steep slope at 20 kHz and allow nothing higher than 20 kHz through.

Hence at an 8 kHz sampling rate we lose most of the high frequencies, and the output quality is lower than at a 16 kHz sampling rate. Higher sampling rates require twice as much hard drive space and twice as much CPU processing power, so to save space and to allow for more DSP processing, low sampling rates are often used when recording sound. Larger sample rates may technically sound better.

Frequency Definitions

#define DSK5510_AIC23_FREQ_8KHZ 1

#define DSK5510_AIC23_FREQ_16KHZ 2

#define DSK5510_AIC23_FREQ_24KHZ 3

Codec configuration settings (used to increase or decrease the input or output levels):

DSK5510_AIC23_Config config = {
    0x0017,  /* 0 DSK5510_AIC23_LEFTINVOL   Left line input channel volume  */
    0x0017,  /* 1 DSK5510_AIC23_RIGHTINVOL  Right line input channel volume */
    0x00d8,  /* 2 DSK5510_AIC23_LEFTHPVOL   Left channel headphone volume   */
    0x00d8,  /* 3 DSK5510_AIC23_RIGHTHPVOL  Right channel headphone volume  */
};

Main code routine: here we initialize the BSL, read samples, perform operations on the samples and write them back.

void main()
{
    DSK5510_AIC23_CodecHandle hCodec;
    Int16 leftsample, rightsample;

    /* Initialize the Board Support Library; this must be called before
       any other BSL function */
    DSK5510_init();

    /* Start the codec */
    hCodec = DSK5510_AIC23_openCodec(0, &config);

    /* Set the sampling frequency of the codec to 24 kHz. This can also be
       written as DSK5510_AIC23_setFreq(hCodec, 3), provided the frequency
       definitions are included. */
    DSK5510_AIC23_setFreq(hCodec, DSK5510_AIC23_FREQ_24KHZ);

    /* Infinite loop: it starts when the program is loaded and run, and
       stops when the program is halted */
    while (TRUE)
    {
        /* Read a sample from the left channel */
        while (!DSK5510_AIC23_read16(hCodec, &leftsample));

        /* Read a sample from the right channel */
        while (!DSK5510_AIC23_read16(hCodec, &rightsample));

        /*--------------------------------------
          Enter signal processing code here, which will be loaded onto the
          DSK for processing the signal on the chip
         ---------------------------------------*/

        /* Send the content of leftsample to the left channel */
        while (!DSK5510_AIC23_write16(hCodec, leftsample));

        /* Send the content of rightsample to the right channel */
        while (!DSK5510_AIC23_write16(hCodec, rightsample));
    }

    /* Close the codec */
    DSK5510_AIC23_closeCodec(hCodec);
}

There are mainly two types of jack connectors. A mono jack has only one channel, so leftsample and rightsample will be the same; a stereo jack has two channels, so leftsample and rightsample may or may not be the same, depending on the input.

              Balanced mono        Stereo
    Tip       Positive / Hot       Left channel
    Ring      Negative / Cold      Right channel
    Sleeve    Ground               Ground


Matlab vs CCS

Comparison of Matlab and CCS

Signal processing codes can be written in CCS (in C programming language) and Matlab.

Both methods have their advantages and shortcomings. We have used CCS for most of the audio processing. Here we compare both schemes, so that the user can get a feel for the tools being used.

Method of processing:

Matlab takes an audio file as input (using the wavread function) and converts it into an array. The processing code then operates on this array and writes the result into a second array, which is written to an audio file and returned to the user; all the processing is thus done offline. In CCS, audio is processed online: a predefined number of samples are stored and processed while the audio file is played through a media player, and we can hear the processed sound simultaneously.

Memory concerns: Matlab stores all the samples in an array, which takes a lot of memory space. For example, a typical one-minute audio file (at a 16 kHz sampling frequency) is stored in a 960,000-element array. In CCS, very few elements are stored (on the order of hundreds), taking very little memory space.

Simplicity of code:

Matlab codes are very simple: if the required difference equation is known, the Matlab code is a slight modification of it. In CCS a little more thought is required: one has to know how CCS stores and processes the audio samples, and the C code includes many for loops, making it a bit more complex.


Digital Audio Effects

Basics of Digital Audio Signals:

An audio signal consists of variations in air pressure as a function of time so that it represents

a continuous time signal x(t). This signal is converted into voltage signals by means of some

hardware say microphone. This analog signal is difficult to process as it is (for various

reasons). To process it on a computer, it needs to be converted to corresponding digital

signal. The process includes discretization and then quantization of analog signals.

The process is depicted in the following figure:

Fig. An utterance of the vowel "a" in analog, discrete-time and digital format, sampled at 4 kHz and quantized on 4 bits (16 levels)

The process involves the use of an analog-to-digital converter (ADC), which takes samples of the analog signal at fixed time intervals; this process is called sampling. The sampling frequency is chosen very carefully, since for the reconstruction of the digital signal the sampling frequency must be at least twice the maximum frequency present in the original signal (Nyquist criterion). After


sampling, the signal is passed through a quantizer, in which the discrete-time signal x[n] ∈ R is approximated by a digital signal xd[n] ∈ A with only a finite set A of possible levels. The number of possible representation levels in the set A is hardware defined, typically 2^b where b is the number of bits in a word. Typical WAVE files are sampled at 44.1 kHz with a resolution of 16 bits. The digital signal thus produced is then passed through a system which processes it. The output signal is then converted back to an analog signal using a DAC.

Digital Audio Effects (DAFX):

Audio effects are used by every individual involved in the generation of music signals. They

start with special playing tricks by musicians, merge to the use of special microphone

techniques and migrate to effect processors for synthesizing, recording, production and

broadcasting of music signals.

Audio effects are, in layman’s words, sound modifications. Not all the sound modifications

are very useful as audio effects. The modifications which alter the properties of sound such

that it appears to be some other natural sound are generally useful. Almost all the audio

effects were first played by musicians (either accidentally or knowingly), then analyzed.

The properties of sound can be modified both in the analog and the digital domain. The most important and popular effects are Digital Audio Effects (DAFX).

DAFX are boxes or software tools that take input audio signals or sounds, modify them according to some sound control parameters, and deliver output audio signals or sounds (see figure).

Input and output signals are monitored by loudspeakers, headphones or some visual

representation such as time signal, the signal level and the spectrum.

The most important task is to set the control parameters according to the modifications we want to achieve. Both input and output signals are streams of digital samples which represent the corresponding analog audio signals. The control parameters are often set by sound engineers or musicians. Digital audio effects are basically digital signal processing, so understanding the algorithms of DAFX requires knowledge of DSP.

The most basic changes can be conceived in the time domain: changing the sound levels of particular samples, filtering some frequencies, changing the pitch, and enhancing or diminishing particular frequencies are some examples. Similarly, in the frequency domain we can


Fig. Digital Audio Effects and its control: input signal -> DAFX -> output signal, with acoustic and visual representation of both, and control parameters set by the listener.

make similar changes. Here we mainly focus on time domain signal processing and classify DAFX into the following categories:

1. Simple Effects

2. Delay based effects

3. Filtering effects

4. Modulation-Demodulation based effects

5. Others

Some simple effects

Using only C programming and the CCS software we can create simple effects which do not use any filters. These programs are the best way to understand how CCS works.

For example, consider the pendulum effect (Section D - Reference). This program sends the samples to the two headphone channels according to a function for each channel (the function which modifies leftsample and rightsample); the function used is an exponential one.

Similarly, for the clockwise-anticlockwise effect, a fixed number of samples is sent to the right speaker, then the next set is sent to both, and then the next samples are sent to the left speaker. This sequence is the clockwise effect; the anticlockwise effect is produced analogously in reverse.


Delay Based Digital Audio Effects

Introduction:

Delays can be experienced in acoustical spaces. A sound wave reflected by a wall is superimposed on the sound wave at the source. If the wall is far away, such as a cliff, we hear an echo; if the wall is close to us, we notice the reflections through a modification of the sound colour. Other delay based effects are doubling, chorus, vibrato, flanging, etc. Equivalents of these acoustical phenomena have been implemented as signal processing units. The basic structure for delay based effects is the comb filter.

Most Basic Delay Structure

FIR Comb Filter:

The network that simulates a single delay is called the FIR comb filter. The input signal is delayed by a given time duration; this delay can be constant or varying with respect to time. The effect will be audible only when the processed signal is combined with the input signal, which acts as a reference signal. This effect has two tuning parameters:

• The amount of delay time (Γ)

• The relative depth of the delayed signal to that of the reference signal (g)

These parameters will account for the type of audio effect observed at the output signal,

which will be discussed later.

The difference equation is given by:

y (n) = x (n) + g x (n – M)

where, M = Γ * fs

M – Number of samples delayed

Γ - The amount of time delay

fs - sampling frequency


Hence, the transfer function is given by:

H (z) = 1 + g z^(–M)

Fig. Block diagram for FIR Comb Filter

Fig. Frequency response of the FIR comb filter (magnitude in dB vs. normalized frequency)

The time response of this filter is made up of the direct signal and the delayed version. The

frequency response shows notches at regular frequencies and looks like a comb. That is why

this type of filter is called a comb filter. For positive values of g, the filter amplifies all

frequencies that are multiples of 1 / Γ and attenuates all frequencies that lie in between. For

negative values of g, the filter attenuates frequencies that are multiples of 1 / Γ and amplifies

those that lie in between. For example, for a delay of 1 millisecond and a positive value of g,

the filter will amplify the frequencies 1000Hz, 2000 Hz, etc. while it will attenuate the

frequencies 500 Hz, 1500 Hz, 2500 Hz, etc. The gain varies between 1 + g and 1 – g.



Thus, the FIR comb filter has an effect in both the time and frequency domains. Our ear is more sensitive to one aspect or the other according to the range in which the time delay is set. For large values of Γ, we can hear a delayed signal that is distinct from the direct signal; the frequencies notched by the comb are so close to each other that we barely identify the spectral effect.

For small values of Γ, our ear can no longer isolate the time events but can notice the spectral

effect of the comb.

As mentioned earlier, these parameters determine the type of audio effect at the output. For example, to realize doubling or echo, the amount of time delay has to be constant over time. To realize chorus or flanging, the amount of time delay has to be varied around an average value by a low frequency signal, such as a 1 Hz oscillation. This external signal is called the low frequency oscillator (LFO).

Implementation of delay based effects:

For online processing of the signal, to get the delay based effects, the input signal, as it is accepted at the input port, has to be stored in an array. This stored signal has to be shifted to allow the next sample at the input port to be stored. The array so formed contains the delayed samples and hence is called the delay line. The length of the delay line would

define the maximum delay that could be achieved in this way.

Hence, at every sample time n the newest input sample is accepted into the left-hand side of

the delay line (sample 0), while the oldest sample is discarded off the right-hand side. That

action defines the delay line. The delay-line output is rarely the last sample, so the output is

shown above the block as an arbitrary tap, somewhere within the body of the delay line.

Consider the following delay line, represented by an array, delayline [2048]. If we want a

delay of i samples, the delay line has to be tapped at the i-th sample to get the desired delay

output. It is represented as follows.

Fig. Representation of delayline[2048]: x(n) enters at index 0, y(n) is taken from the tap at index i, and the oldest sample leaves at index 2047.


If i is constant, the delay of the output signal is constant. To get a time varying delay, i has to be varied as a function of n, which results in fractional delay. This is explained further while discussing the chorus and flanging effects.

The implementation of the above block diagram in CCS is as follows:

/* Important note: all the elements of the arrays delayline1 and delayline2 have to be initialized to zero. */

/* The following loop has to be executed for each input sample; the leftsample and the rightsample are processed separately. */

for (k = 2047; k > 0; k--)
{
    delayline1[k] = delayline1[k-1];
    delayline2[k] = delayline2[k-1];
}
delayline1[0] = leftsample;
delayline2[0] = rightsample;
leftsample  = delayline1[i];
rightsample = delayline2[i];

DOUBLING and ECHO:

Theory:

Doubling and echo are constant delay effects. Doubling involves a quick repetition of the reference signal; the amount of time delay is constant and lies in the range of 10 to 25 milliseconds. If the delay is greater than 50 milliseconds, we will hear an echo.

The difference equation, transfer function and block diagram for the constant delay effects

are shown below:


y (n) = x (n) + g x (n – M)

H (z) = 1 + g z^(–M)

Fig. Block diagram for constant delay based effects

Implementation:

• Set the sampling frequency appropriately according to the input signal

• Calculate the number of samples to be delayed:

M = Γ * fs

where, Γ = amount of delay time (10 – 25 milliseconds for doubling, greater than 50

milliseconds for echo)

• The number of elements of the delay line has to be greater than or equal to M.

• The value of g is to be chosen appropriately, to determine the relative depth of the

delayed signal to that of the reference signal.

• The input sample has to be combined with the delayed sample to get the output

sample.
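Putting these steps together, a per-sample echo routine might look as follows. This is a sketch in the style of the skeleton program; the buffer length and parameter values are our choices within the stated ranges:

```c
#define DELAY_LINE_LEN 2048

static short delayline[DELAY_LINE_LEN];  /* statics start zeroed, as required */

/* One sample of the constant-delay effect y(n) = x(n) + g * x(n - M).
   For an echo at fs = 24 kHz, a 60 ms delay gives M = 0.060 * 24000 = 1440
   (M must stay below DELAY_LINE_LEN). */
short echo_sample(short x, int M, double g)
{
    int k;
    for (k = DELAY_LINE_LEN - 1; k > 0; k--)
        delayline[k] = delayline[k - 1];   /* age every stored sample     */
    delayline[0] = x;                      /* newest sample at index 0    */
    return (short)(x + g * delayline[M]);  /* add the M-sample-old input  */
}
```

Inside the skeleton's while(TRUE) loop, leftsample and rightsample would each be passed through their own copy of this routine.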

CHORUS, VIBRATO AND FLANGING:

Theory:

Chorus is the combination of the reference signal and its delayed and pitch modulated version. This delayed and pitch modulated version can be obtained by varying the



delay time around the average value with the help of low frequency oscillator (LFO). The

delay time for chorus is in the range 20-30 milliseconds.

Vibrato consists of the delayed and pitch modulated version of the input signal alone. When a car is passing by, we hear a pitch deviation due to the Doppler effect. This pitch variation is due to the fact that the distance between the source and our ears is being varied. Varying the distance is, for our application, equivalent to varying the time delay; if we keep varying the time delay periodically, we produce a periodic pitch variation. The delay time is in the range 20-30 milliseconds and is varied with the help of a low frequency oscillator (LFO). Chorus is the combination of vibrato and the input signal.

Flanging has a very characteristic sound, which is generally referred to as a “whooshing”

sound. Like chorus, flanging is also created by mixing the reference signal with a slightly

delayed copy of itself, where the delay time is constantly changing. From the point of view of

implementation, the only difference between chorus and flanging is the delay range. The delay for flanging usually ranges from 1 millisecond to 10 milliseconds. As

discussed earlier, if the amount of time delay is small, our ear can no longer isolate the time

events but can notice the spectral effect of the comb filter. This aspect of flanging is covered separately in a later section.

The difference equation, transfer function and the block diagram for the time varying delay

effects are as follows:

y (n) = x (n) + g x (n – M (n))

H (z) = 1 + g z^(– M (n))

Fig. Block diagram for time varying delay based effects



To understand how the pitch is changed, picture the delay as a recording device. It stores an exact copy of the input signal as it arrives, much like a cassette recorder, and then outputs it a little later, at the same rate. To increase the amount of delay, you want a longer segment of the signal to be stored in the delay line before it is played back. To do this, you read out of the delay line at a slower rate than it is being written (the recording rate is unchanged, so more of the signal is being stored). Reading back at a slower rate is just like dragging your fingers on the wheel of the cassette, which we know lowers the pitch. Similarly, to reduce the delay time, we can just read back faster, analogous to speeding up a playing cassette, which increases the pitch.

Fractional delay line:

The changing delay time will require delay times that are not integer multiples of the

sampling period (and the input signal is being sampled at multiples of this sampling period).

That is, there is a need for fractional delay. The computation of fractional delay will require

the delay line interpolation technique. This way, the effective delay is not discretized, thus

avoiding signal discontinuities when the desired delay time is continuously swept or

modulated. The most common methods of interpolation are linear interpolation and all-pass interpolation.

Implementation of vibrato using delay line interpolation:

The desired output, v (n) (vibrato), dynamically points (via i.frac) to a place between two discrete samples. The index i, an integer, is defined as the current whole-sample index into our delay line, relative to the beginning of the delay line. The integer i requires computation

because we want it modulated by the LFO w (n), oscillating as a function of discrete time n.

The integer range of i, ± CHORUS_WIDTH, is centered about the nominal tap point into

the delay line, NOMINAL_DELAY, the fixed positive integer tap center.

i.frac = i + frac

i.frac = NOMINAL_DELAY + CHORUS_WIDTH * w (n)

For linear interpolation: v (n) = frac * delayline [i + 1] + (1 - frac) * delayline [i]

For all-pass interpolation: v (n) = delayline [i + 1] + (1 - frac) * delayline [i] - (1 - frac) * v (n – 1)
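The linear-interpolation formula can be packaged as a small helper. This is our own sketch; delayline is assumed to be the array from the earlier listing, with i + 1 still inside it:

```c
/* Linearly interpolated tap: read a delay line at the fractional
   position d = i + frac, with 0 <= frac < 1. */
double tap_linear(const short *delayline, double d)
{
    int    i    = (int)d;   /* whole-sample part */
    double frac = d - i;    /* fractional part   */
    return frac * delayline[i + 1] + (1.0 - frac) * delayline[i];
}
```

Reading midway between two stored samples returns their average, so sweeping d smoothly avoids the discontinuities an integer tap would produce.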


Fig. Implementation of Vibrato effect using interpolation of delayline

Parameters:

LFO Waveform:

The LFO waveform shows how the delay changes over time. When the waveform reaches a maximum, the delay is at its largest value. When the waveform (and hence the total delay time) is increasing, the rate of storing the input in the delay line is unchanged, so the output is read out at a slower rate, which lowers the pitch. Similarly, when the waveform is decreasing, the total delay time is decreasing; since the rate of storing the input is unchanged, the output is read out at a faster rate, which raises the pitch.

Refer to [ ] for the derivation of the pitch change ratio.

Some of the commonly used waveforms are sinusoidal, triangular, logarithmic and saw tooth.



Following points are worth noting:

• The pitch change ratio varies sinusoidally with time, proportional to the modulation frequency and the sample period, for a sinusoidal waveform.

• The pitch change ratio is piecewise constant in the case of a triangular waveform.

• To get a constant pitch change ratio the LFO waveform has to be linear. Unfortunately,

i.frac will eventually pass one or the other delay line boundary, so this technique

cannot be used indefinitely.

NOMINAL_DELAY:

It is the average value of time delay required to implement the effect. This delay value should

be within the desired delay range.

CHORUS_WIDTH:

The amount of pitch modulation introduced by the chorus is related to how quickly the LFO

waveform changes - the steepest portions on the waveform produce a large amount of pitch

modulation, while the relatively flat portions have very little or no effect on the pitch. We can

use this view to understand how the CHORUS_WIDTH varies the pitch. If we increase the

CHORUS_WIDTH, we are effectively stretching the waveform vertically, which makes it

steeper, and thus the pitch is altered more. This value is to be chosen so that the net delay stays within the specified range.

Implementation:

• Set the sampling frequency appropriately according to the input signal

• Set the value of the NOMINAL_DELAY midway between the specified delay range

• Set the value of the CHORUS_WIDTH, so that net delay does not exceed the

specified delay ranges

• Select the appropriate LFO waveform whose frequency (F) is less than 3 Hz

• Find the number of samples (MAX_COUNT) that each period of the waveform covers

MAX_COUNT = fs / F


• Compute the maximum and minimum delays as MAX_DELAY and MIN_DELAY

MAX_DELAY=NOMINAL_DELAY + CHORUS_WIDTH

MIN_DELAY = NOMINAL_DELAY – CHORUS_WIDTH

• Find out the number of samples that will correspond to MAX_DELAY and

MIN_DELAY

max = MAX_DELAY * fs

min = MIN_DELAY * fs

• The value of g is to be chosen appropriately, to determine the relative depth of the

delayed signal to that of the reference signal

Spectral analysis of flanging:

As already mentioned, the delay time ranges for the flanging effect are small. Hence, our ear

can no longer isolate the time events but can notice the spectral effect of the comb filter. As

the delay time (Γ) is variable for the flanging effect, the value of M is also variable. We know

that, for g > 0, there are M peaks in the frequency response, centered about the frequencies Ω = 2πk/M for k = 0, 1, 2, …, M – 1. Between these peaks, there are M notches at

intervals of fs / M Hz. As M changes over time, the peaks and notches of the comb response

are compressed and expanded. The spectrum of a sound passing through the flanger is thus

accentuated and deaccentuated by frequency region in a time-varying manner. Due to this

reason, we hear a characteristic ‘whooshing’ sound in case of flanging.

REVERBERATION:

Theory: Reverberation occurs when copies of an audio signal reach the ear with different

delays and different amplitudes, after taking different paths and having bounced against

surrounding objects. Its effect on the overall sound that reaches the listener depends on the

room or environment in which the sound is played. Reverb is a linear time-invariant effect, and such systems can be completely characterized by their impulse response. The impulse

response tells everything about the room. The reason this works is that an impulse is, in its

ideal form, an instantaneous sound that carries equal energy at all frequencies. What comes

back, in the form of reverberation, is the room's response to that instantaneous, all-frequency


burst. Hence, the most convenient way to obtain the reverberation effect is by building a

digital filter that will simulate the impulse response of a room. But, this method is

computationally extremely expensive.

Fig. Impulse response of a concert hall

Thus, in a typical reverberation pattern we can distinguish three main parts:

• The direct sound, normally the first sound to arrive at the listener's ears.

• The early reflections, caused by the reflection of the sound off large nearby surfaces,

and perceived as discrete echoes.

• The late reverberation, a dense collection of echoes travelling in all directions, usually

showing an exponentially decaying curve.

The time required for the reverberation level to decay 60 dB below the initial level is defined

as the reverberation time (Tr). The early reflections and late reverberation have different

physical and perceptual properties.

Late reverberation is characterized by a dense collection of echoes, produced by a very

large number of reflected waves, travelling in all directions. Plotting the impulse response of

natural acoustic spaces, we observe normally an exponentially decaying late reverberation. It

is also possible to represent the reverberant pattern of a room as a function of time and

frequency. This concept, formalized by Jot (1992) is known as the energy decay relief,

EDR(t, w). For a fixed t0, EDR(t0, w) gives the energy of each frequency at this moment. If

we have a fixed w0, EDR(t, w0) gives the decaying energy curve at frequency w0 (Gardner,

1998).

To properly simulate the late reverberation, it is important to consider carefully the frequency

response envelope and the reverberation time, both of which are functions of frequency. The

late reverberation in an artificial reverberation should have sufficient echo density in the time

domain and sufficient density of maxima in the frequency domain. The first acceptable form


of a digital device producing artificial reverberation was Schroeder's reverberator.

Fig. Block diagram for Schroeder’s Reverberator

Comb filters in Schroeder’s reverberator:

The comb filter used here is a combination of an IIR and an FIR filter. Its difference equation, transfer function, frequency response and impulse response are shown below.

y(n) = x(n - M) + g y(n - M)

H(z) = z^(–M) / (1 – g z^(–M))
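The difference equation can be realized with a single circular buffer, because the value stored at time n, x(n) + g·y(n), is exactly the output that must emerge M samples later. A minimal sketch follows; the delay length and names are illustrative assumptions, not the values used later in the text.

```c
#define COMB_M 1103              /* delay in samples (illustrative) */

static float comb_buf[COMB_M];   /* zero-initialized circular buffer */
static int comb_idx = 0;

/* Feedback comb y(n) = x(n-M) + g*y(n-M): read the value scheduled
   M samples ago, then store x(n) + g*y(n) for M samples ahead. */
float comb_tick(float x, float g)
{
    float y = comb_buf[comb_idx];    /* equals x(n-M) + g*y(n-M) */
    comb_buf[comb_idx] = x + g * y;  /* becomes y(n+M) */
    comb_idx = (comb_idx + 1) % COMB_M;
    return y;
}
```

Feeding an impulse in produces echoes of height 1, g, g², … spaced M samples apart, exactly the exponentially decaying impulse response described next.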

[Plot: magnitude response (dB) vs. frequency (kHz)]

Figure: Frequency response of Schroeder’s Comb Filter

[Diagram: x(n) feeds four parallel COMB filters; their summed outputs pass through two ALL PASS filters in series to give y(n)]


[Plot: impulse response, amplitude vs. time (ms)]

Fig. Impulse response of Schroeder’s Comb Filter

Hence, the impulse response consists of impulses separated by an equal distance Γ (the delay time), with exponentially decaying magnitudes. The decay rate depends on the feedback factor g. The frequency response is characterized by a series of peaks equally spaced at 0, 1/Γ, 2/Γ, etc.; the peak height is set by the feedback factor.

There are two points of view for the comb filter: in the time domain it acts as a signal repeater, and in the frequency domain it acts as a multimodal resonator. Two density criteria should be satisfied for a natural-sounding reverb: sufficient echo density in the time domain and sufficient density of maxima in the frequency domain.

By increasing the delay time (Γ), the density of maxima in the frequency domain increases, but the impulse density falls. Conversely, for short delay times we get a dense impulse response, but the density of maxima in the frequency domain falls. Hence, we use a larger Γ, which gives a dense frequency response, and connect several such filters in parallel to obtain a dense impulse response in the time domain. The total number of pulses produced is the sum of the pulses produced by the individual comb filters.

Schroeder suggested that the delays of the comb filters be chosen such that the ratio of largest to smallest is about 1.5 (in particular, between 30 and 45 milliseconds), and that the loop times be relatively prime to each other so the decay is smooth. If the delay line loop times

have common divisors, pulses will coincide, producing increased amplitude resulting in

distinct echoes and an audible frequency bias. If Tr is the reverberation time (in seconds) and

fs the sampling frequency (in Hertz) we have:


g = 10^(–3M / (fs · Tr))

Typical concert halls have reverb times ranging from 1.5 to 3 seconds. Hence, the gains g of the comb filters are adjusted to obtain the desired reverberation time. The Schroeder reverb time is about equal to the longest of the four comb filter reverb times.

All pass filters in Schroeder’s reverberator:

Unlike a comb filter, the all-pass filter passes signal of all frequencies equally. That is, the

amplitudes of frequency components are not changed by the filter. The all-pass filter

however, has substantial effect on the phase of individual signal components, that is, the time

it takes for frequency components to get through the filter. This makes it ideal for modeling

frequency dispersion. The difference equation, transfer function, frequency response and impulse response of this filter are shown below.

y(n) = −g x(n) + x(n −M) + g y(n −M)

H(z) = (–g + z^(–M)) / (1 – g z^(–M))
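This difference equation needs the last M inputs and the last M outputs; a minimal sketch with two circular buffers follows. The delay length and gain here are illustrative assumptions (the text proposes its own all-pass delays and gain of 0.7 later).

```c
#define AP_M 113                 /* delay in samples (illustrative) */

static float ap_x[AP_M], ap_y[AP_M];   /* zero-initialized histories */
static int ap_i = 0;

/* Schroeder all-pass: y(n) = -g*x(n) + x(n-M) + g*y(n-M). */
float allpass_tick(float x, float g)
{
    float y = -g * x + ap_x[ap_i] + g * ap_y[ap_i];
    ap_x[ap_i] = x;              /* remember x(n) for time n+M */
    ap_y[ap_i] = y;              /* remember y(n) for time n+M */
    ap_i = (ap_i + 1) % AP_M;
    return y;
}
```

An impulse produces h(0) = –g, then h(M) = 1 – g², h(2M) = g(1 – g²), and so on: a string of decaying echoes even though the magnitude response is flat.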

[Plot: magnitude response vs. frequency (kHz)]

Fig. Frequency response of Schroeder’s All Pass Filter

[Plot: impulse response, amplitude vs. time (ms)]


Fig. Impulse response of Schroeder’s All Pass Filter

When placed in series (cascade), the impulse response of one unit triggers the response of the next, producing a much denser response. The number of pulses produced is the product of the

number of pulses produced by individual units. The first all-pass turns each comb echo into a

string of echoes. The second all-pass adds another layer of echoes.

Implementation:

• The sampling frequency is selected according to the input signal.

• Comb filter delay times are taken in range .03 - .05 seconds and relatively prime to

one another (e.g., .031, .037, .041, .043).

• The proposed all pass delays are about 5 milliseconds and 1.7 milliseconds, and both

all pass gains are adjusted to 0.7.

• Number of samples (M) to be delayed, in each filter, is calculated by multiplying

delay time (Γ) with sampling frequency (fs).

• The gain for each comb filter is adjusted according to the below formula to give

reverb time in the range 1.5 – 3 seconds

g = 10 (– 3 M) / (fs Tr)

• Consider the Schroeder reverberator (figure[ ]). A common delay line is used for all

the comb filters.

• Two separate delay lines are used for two all pass filters.

• The outputs of the comb filters are added. This serves as input for the delay line of the first all pass filter.

• The output of the first all pass filter serves as input for the delay line for the second all

pass filter.


Filtering based effects

Filter Design:

Filter design is in itself a very vast topic, but for our purpose we use it in a limited way. We have used both FIR and IIR filters in our designs: we design the filter in MATLAB, get the coefficients of its transfer function, and then code it accordingly in C.

For designing the filters in MATLAB, we have used the Filter Design and Analysis Tool (fdatool). At the first interface we select the type of filter (highpass, lowpass or bandpass), its design method (IIR or FIR; Butterworth, Chebyshev, etc.) and the filter parameters (such as cut-off frequency, gain and ripple magnitude). After selecting the parameters, click on 'Design Filter'. The filter has now been designed, and we use its coefficients in our C code. To view the coefficients, go to Analysis → Filter Coefficients. This gives the filter coefficients and scale factors, a form that is not used directly. Now go to Edit → Convert to Single Section. The coefficients shown there are the ones we use in our C code.

A typical transfer function of a second order IIR filter is of the form:

H(z) = Y(z)/X(z) = (b0 + b1 z^(–1) + b2 z^(–2)) / (1 + a1 z^(–1) + a2 z^(–2))

Y(z) (1 + a1 z^(–1) + a2 z^(–2)) = X(z) (b0 + b1 z^(–1) + b2 z^(–2))

Taking the inverse z-transform of both sides:

y(n) + a1 y(n–1) + a2 y(n–2) = b0 x(n) + b1 x(n–1) + b2 x(n–2)

which gives

y(n) = –∑ ak y(n–k) + ∑ bk x(n–k)

The output depends on both the inputs and the previous outputs. This can also be seen from the block diagram below:


[Diagram: Direct Form 1 signal flow. x(n), x(n–1), x(n–2) are weighted by b0, b1, b2, and y(n–1), y(n–2) are weighted by a1, a2; the weighted terms are summed to give y(n)]

Fig. A typical second order filter – block diagram

This process can be used for a transfer function of any order. The process is illustrated with the following C fragment (for a second order filter, order = 2, with the a[i] already negated, as in the Chebyshev example in the appendix):

    // shift the input history; the newest sample goes into x[order]
    for (i = 0; i < order; i++)
        x[i] = x[i+1];
    x[2] = leftsample;

    // feed-forward part: multiply by the numerator coefficients
    w = 0;
    for (i = 0; i <= order; i++)
        w += x[order-i] * b[i];

    // shift the output history
    for (i = 0; i < order; i++)
        y[i] = y[i+1];

    // feedback part: multiply by the (negated) denominator coefficients
    y[2] = 0;
    for (i = 1; i <= order; i++)
        y[2] += y[order-i] * a[i];
    y[2] += w;
    leftsample = y[2];

This design method is known as Direct Form 1. Direct Form 2 is more efficient but more complex, so we restrict ourselves to Direct Form 1.

EQUALIZER:

Introduction:

The equalizer is a very well known audio effect; in most audio players it can be found as the bass and treble knobs. In this effect, different frequencies are treated differently: as the name suggests, it adjusts the relative levels of all frequencies. Some frequencies are boosted while


some other frequencies are cut and some remain unaffected, i.e. for different frequencies the filters have different gains, which (in dB) may be positive (boost), negative (cut) or zero. All the filters are controlled independently. If low frequencies are boosted, the effect is called bass; if high frequencies are boosted, the effect is called treble.

Implementation:

Equalizer can be implemented in two ways:

1. Parallel filters

2. Series filters (cascade design)

Parallel Filter Equalizer:

In this design method, three filters are connected in parallel as shown in the figure. The three

filters are of Low pass, High pass and Band pass nature. All these filters are controlled

independently i.e. gains and cut off frequencies of one filter are independent of the

parameters of the other two filters. The parameters of three filters are set in such a way that

no frequencies have zero gain which means that the pass bands of two adjacent filters must

have some frequencies in common. One such example is: (Low pass: 500 Hz, Band pass:

450-2000 Hz, high pass: 1900 Hz).

To get the equalizer effect, the gains of the three filters are set accordingly, but this involves changing the filter coefficients for every minute adjustment. So we design all the filters for unity gain and multiply by suitable gain factors at the time of addition. The output is then a weighted average of the three filtered signals. If the gain-factor triplet (a, b, c) is (10, 0.5, 0.5), the output will be:

(10 LP + 0.5 BP + 0.5 HP) / (10 + 0.5 + 0.5)

For bass, 'a' should be high; similarly, for treble 'c' should be high.
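The weighted average above is trivial to code per sample; the following one-line helper is an illustrative sketch (the function name and parameter order are our own).

```c
/* Weighted mix of the three band outputs: large `a` boosts bass,
   large `c` boosts treble; the sum is normalized by (a + b + c). */
float eq_mix(float lp, float bp, float hp, float a, float b, float c)
{
    return (a * lp + b * bp + c * hp) / (a + b + c);
}
```

With the triplet (10, 0.5, 0.5), a sample that appears only in the low-pass output contributes 10/11 of its amplitude, while equal contributions from all three bands pass through unchanged.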

[Plots: magnitude response vs. frequency (kHz) for the three filters]

Fig. Magnitude Response of Lowpass, Bandpass, Highpass Filters


[Diagram: the input feeds Low Pass, Band Pass and High Pass filters in parallel, with gains a, b and c; the weighted outputs are summed to form the output]

Fig. Parallel Equalizer

Series (Cascade) Equalizer:

Apart from the parallel connection of band filters, there is another method of realizing an equalizer, using special types of filters: peak filters and shelving filters. First and second order shelving and peak filters are connected in series and controlled independently. The first filter is a low pass shelving filter, followed by several peak filters and a high pass shelving filter.

The mathematics of shelving and peak filters is very deep; here we briefly explain the characteristics and the coefficient formulae of these filters.

[Diagram: in → LP shelving filter → peak filter → … → peak filter → HP shelving filter → out]

Fig. Series of peak filters


Fig. Series Equalizer

Shelving Filters: There are two types of shelving filters: low pass and high pass. They boost or cut the low or high frequency band; the unaffected part has unity gain, and the desired band of frequencies is cut or boosted with respect to that unity gain. The parameters of the filter are the cut-off frequency fc and the gain G.

There are two designs for shelving filters, depending on the order of the filter. Both first and second order filters are shown in the figure below:

Fig. First and second order filters

First Order Design:

Transfer function for the first order shelving filters is as follows:

H(z) = 1 + (H0/2) [1 ± A(z)]


where A(z) is the all-pass transfer function (z^(–1) + aB/C) / (1 + aB/C z^(–1)),

H0 = V0 – 1 with V0 = 10^(G/20), G = pass band gain in dB,

and '+' is used for low shelving, '–' for high shelving.

The cut-off frequency parameter aB for boost and aC for cut can be calculated as follows:

aB = [tan(πfc/fs) – 1] / [tan(πfc/fs) + 1]

aC = [tan(πfc/fs) – V0] / [tan(πfc/fs) + V0]

Second Order Design:

The second order filters can be realized similarly. The coefficients to be used in the transfer function are given in the table below, where K = tan(πfc/fs), fc is the cut-off frequency and fs is the sampling frequency.

Table: Second Order Shelving Filter Design


Table: Peak Filter Design

Parameter setting in an equalizer: the parameters to be set are the cut-off frequencies of the two shelving filters and the peak frequencies of the several peak filters. The gains of the filters are set as per the effect required: for bass, the gain of the low pass shelving filter is kept relatively higher than the other filters' gains, and similarly for the other cases.

A typical octave equalizer has the cut-off frequencies 31.25 Hz (low pass shelving), 62.5 Hz, 125 Hz, 250 Hz, 500 Hz, 1000 Hz, 2000 Hz, 4000 Hz and 8000 Hz (8 peak filters), and 16000 Hz (high pass shelving). This is just an example; the frequencies can be chosen by the user as per the effect required.


WAH WAH:

Theory :

The wah-wah effect is a time-varying filter effect; time-varying filters are filters whose coefficients change with time. The wah-wah effect is produced mostly by foot-controlled signal processors containing a bandpass filter with a variable center (resonant) frequency and a small bandwidth. Moving the pedal back and forth changes the center frequency of the bandpass filter.

Fig. Wah Wah effect implementation

The wah audio effect produces an impression of motion in pitch. The effect amplifies a small

band of frequencies, and as time passes, the amplified band shifts toward higher frequencies.

Over time, the wah effect creates the perception that the input signal is raising its pitch.

However, the wah effect merely accentuates a different section of frequency at a different

point in time. In other words, the center frequency of a filter that moves from lower to higher

frequencies causes the wah audio effect. To accomplish this effect, rotation of filter

coefficients is implemented. Each set of filter coefficients corresponds to a band pass filter

around a specific center frequency, see graph below.

In the wah-wah effect, sets of samples are passed through a lowpass (LP), bandpass (BP) or highpass (HP) filter. For example, the first set of samples passes through the LP filter, the next through a BP filter, the next through the HP filter, and then back again in the opposite order. At the end of each passage through a filter, the original sample is added, and the characteristic wah-wah sound is obtained.


Fig. Frequency Response for an Ideal Bandpass Filter

As the filter coefficients change, the pass band travels toward the higher frequencies. This motion creates the illusion that the pitch is changing, which is essentially the wah effect. By controlling when these filter coefficients act upon the input signal, the wah effect can be produced. The wah code focuses on loading a different set of filter coefficients at each increment of time; all of the band pass filter coefficients are first created in MATLAB.

Using the wah-wah effect we can also produce a wind effect: simply provide white noise as the input to the wah-wah processor. For a more prominent and realistic effect, the white noise can be passed through the wah-wah processor twice.

Implementation:

• Set the sampling frequency appropriately according to input signal.

• Decide the central frequencies of the BP filters and accordingly find the coefficients

using MATLAB. Note that filter coefficients are designed for a specific sampling frequency; hence the sampling frequency has to be set equal to the one used while designing the filters.

• The filters are designed and the samples are passed through them in the order LP, BP1, BP2, …, BPn, HP, HP, BPn, …, BP1, LP, … respectively.

[Plot: the ideal bandpass has gain 1 in the pass band and 0 elsewhere]


• To pass the samples through the filters in this manner, a program is written which increments or decrements the filter number according to the value of m, which changes whenever the first or the last filter is reached.
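The increment/decrement of the filter number can be sketched as follows. The number of filters and the repetition of the end filters (matching the LP, …, HP, HP, …, LP order above) are illustrative assumptions; the selected index would pick the coefficient set fed to a Direct Form 1 filter as shown earlier.

```c
#define NFILT 5   /* LP, BP1, BP2, BP3, HP (illustrative) */

/* Advance the filter index back and forth through the bank:
   0,1,...,N-1,N-1,...,1,0,0,1,...  `dir` (+1 or -1) flips at the
   ends, and the end filter is visited twice, as in the text. */
int next_filter(int cur, int *dir)
{
    int nxt = cur + *dir;
    if (nxt >= NFILT) { *dir = -1; nxt = NFILT - 1; }  /* repeat HP once */
    else if (nxt < 0) { *dir = 1;  nxt = 0; }          /* repeat LP once */
    return nxt;
}
```

Starting from index 0 with dir = +1, successive calls yield 1, 2, 3, 4, 4, 3, 2, 1, 0, 0, 1, …, i.e. a continuous sweep up and down the filter bank.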


Modulation based effects: RING MODULATION:

Theory :

Modulation is the process by which parameters of a sinusoidal signal (amplitude, frequency

and phase) are modified or varied by an audio signal. In the field of audio processing

modulation techniques are mainly used with very low frequency sinusoids. Especially the

variation of control parameters for filters or delay lines can be regarded as an amplitude

modulation or phase modulation of the audio signal.

Ring modulation is the combination of two waveforms multiplied together to create a new waveform. The name 'ring modulator' comes from the way the original ring modulators were built: they consisted of diodes connected in the shape of a ring.

Implementation:

• Set the sampling frequency appropriately according to the input signal.

• Choose the carrier frequency fc and generate the carrier sinusoid m(n) sample by sample.

• Multiply each input sample x(n) by the corresponding carrier sample m(n) to obtain the output sample.

The input signal x(n) (called the modulator) is multiplied by a sinusoid m(n) (called the carrier) with carrier frequency fc. Output: y(n) = x(n) · m(n), i.e. (Input 1) · (Input 2).

Fig. Ring modulation


If m(n) is a sine wave of frequency fc, the spectrum of the output y(n) consists of two copies of the input spectrum: the lower sideband (LSB) and the upper sideband (USB). The LSB is reversed in frequency, and both sidebands are centered around fc. When the carrier and modulator are sine waves of frequencies fc and fx respectively, we hear the sum and difference frequencies fc + fx and fc – fx.

[Spectra: |X(f)| of the modulator, and the output spectrum with a lower sideband (LSB) and an upper sideband (USB) centered around fc]

Fig. Ring modulation of a signal x(n) by a sinusoidal carrier signal m(n). The spectrum of the modulator is shifted around the carrier frequency.


Other effects:

DISTORTION:

Theory :

Distortion is an effect most often applied to electric guitars, though it is not limited to any one instrument. It can be accomplished by electronically compressing the dynamic range or clipping the input signal; this adds additional harmonics and overtones to the signal, creating a richer sound. The word distortion refers to any aberration of an electronic circuit's output waveform from its input waveform. In the context of musical instrument amplification, it refers to various forms of clipping, which is the truncation of the part of an input signal that exceeds certain voltage limits.

When we play a signal on a speaker, the resulting sound consists of a number of frequencies with various amplitudes. If we play a 100 Hz sine wave through a speaker, then

the fundamental frequency is 100 Hz. Now, we may also see response at other frequencies

that are typically a given order higher than the fundamental. Assuming a fundamental

frequency of 100 Hz, we can say that the second harmonic is 200 Hz, the third harmonic is

300 Hz, the 4th is 400 Hz, and so on. The presence of any of these additional harmonics is

considered distortion, as they were not present in the original signal. Typically, the amplitude of these harmonics decreases as the harmonic order increases, i.e. second harmonic distortion is

often higher than third, third is higher than fourth, etc. Clipping is itself a form of distortion.

If we were to increase the gain until we have fully clipped the signal, the result would be the

fundamental frequency (100 Hz) and its higher order harmonics.

This effect doesn't require knowledge of deep mathematics: all we need is to distort the original (input) audio signal. In the time domain, distorting the amplitude is the easiest thing we can do, so we modify the amplitude of the incoming discrete samples. Similar distortions can also be realized in the frequency domain by playing with the frequency content.

Realizing distortion in MATLAB is fairly simple. As soon as the input signal has been converted into a digital signal, we have an array to process, and we know the maximum and minimum of the sequence. Distortion can be done in any arbitrary manner; we obtained the desired effect mainly using the following two methods:


Clipping:

The input signal is clipped so that only low amplitude values remain in the output sequence. A threshold is defined with respect to the maximum of all the input amplitudes: if we clip with a factor of 0.5, all amplitudes below half of the maximum remain as they are, while all higher amplitudes are clipped to half the maximum.

The following MATLAB code performs the clipping:

    function [new] = clip(x, f, t)   % t is the threshold fraction
    d = max(x);
    new = x;
    for i = 1:1:length(x)
        if (abs(x(i)) > t*d)
            if (x(i) > 0)
                new(i) = t*d;
            end
            if (x(i) < 0)
                new(i) = -1*t*d;
            end
        end
    end
    wavwrite(new, f, 32, 'clipped.wav')

Parabolic Distortion:

The effect is similar to compression but slightly different. It is called parabolic distortion because the transfer function (the plot of y[n] vs. x[n]) looks like a parabola. The motive is the same as in clipping, but in clipping the transfer function is not smooth, hence the noise is greater. Here we keep the transfer function smooth (we have chosen a parabola; the user can choose any other similar transfer function). Parabolic distortion does not clip at a threshold; instead, the increment for higher amplitudes is gradual.

[Plot: the parabolic transfer curve]

Fig. Graph y² = x


For realization, all the positive values (x[n] > 0) in the input signal are replaced by √(4a·x[n]), and all the negative values are replaced by –√(–4a·x[n]). The parameter 'a' decides the rate of rise for high amplitudes, so it should be chosen as per requirement.

The code for the same is shown below:

    function [new] = distortion(x, f, a)
    new = x;
    for i = 1:1:length(x)
        if (x(i) >= 0)
            new(i) = sqrt(4*a*x(i));
        end
        if (x(i) < 0)
            new(i) = -1*sqrt(4*a*-1*x(i));
        end
    end
    wavwrite(new, f, 'distortion.wav')

Implementation:

• Set the sampling frequency appropriately according to the input signal.

• Depending on the amplitude of the sample, the output follows the parabolic equation given in the program.

• Hence the output is proportional to the square root of the input x, which means large inputs are compressed, a soft form of clipping.


Section C: References

References:

• Physical Audio Signal Processing: Julius O. Smith III

• DAFX – Digital Audio Effects: Udo Zölzer (ed.), John Wiley & Sons, Ltd

• Effect Design Part 2: Delay-Line Modulation and Chorus: Jon Dattorro

• Computational Acoustic Modeling with Digital Delay: Julius O. Smith III

• Comparative Performance Analysis of Artificial Reverberation Algorithms: Norbert Toma, Marina Dana Ţopa, Victor Popescu, Erwin Szopos

• Optimization of Delay Lines in Schroeder's Reverberator Structure: Ing. Bohumil Bohunicky

• Matlab Implementation of Reverberation Algorithms: José R. Beltrán, Fernando A. Beltrán

• Delay Effects: Flanging, Phasing, Chorus, Artificial Reverb: Tamara Smyth

• Signal and Noise in Programming Language: P.J. Plauger

• Harmonycentral.com- effects explained

http://www.harmony-central.com/Effects/effects-explained.html

• www.music.mcgill.ca: Audio Effects in MATLAB

• http://www.hydrogenaudio.org/forums/index.php?act=ST&f=1&t=4949

• http://www.buzzle.com/articles/audio-effects-compression-ring-modulation.html

• dsprelated.com

http://www.dsprelated.com/groups/code-comp/1.php http://www.dsprelated.com/groups/c55x/1.php

• Wikipedia

http://en.wikipedia.org/wiki/Audio_effects http://en.wikipedia.org/wiki/Guitar_effects


Section D: Appendix

SOME SIMPLE EFFECTS:

Clock-anticlockwise effect:

    double i;
    int m;

    void main()
    {
        DSK5510_AIC23_CodecHandle hCodec;
        Int16 leftsample, rightsample;

        /* Initialize the board support library, must be called first */
        DSK5510_init();

        /* Start the codec */
        hCodec = DSK5510_AIC23_openCodec(0, &config);

        /* Set sampling frequency of the codec to 24 kHz */
        DSK5510_AIC23_setFreq(hCodec, DSK5510_AIC23_FREQ_24KHZ);

        while (TRUE)
        {
            for (i = 1; i < 15000; i++)   /* start at 1 to avoid division by zero */
            {
                while (!DSK5510_AIC23_read16(hCodec, &leftsample));
                while (!DSK5510_AIC23_read16(hCodec, &rightsample));
                if (m == 0)
                {
                    leftsample  = i / 10000 * leftsample;
                    rightsample = rightsample * 1000 / i;
                }
                else
                {
                    leftsample  = leftsample * 1000 / i;
                    rightsample = rightsample * i / 10000;
                }
                while (!DSK5510_AIC23_write16(hCodec, 2 * leftsample));
                while (!DSK5510_AIC23_write16(hCodec, 2 * rightsample));
            }
            for (i = 0; i < 15000; i++)
            {
                while (!DSK5510_AIC23_read16(hCodec, &leftsample));
                while (!DSK5510_AIC23_read16(hCodec, &rightsample));
                while (!DSK5510_AIC23_write16(hCodec, 2 * leftsample));
                while (!DSK5510_AIC23_write16(hCodec, 2 * rightsample));
            }


            for (i = 15000; i > 0; i--)
            {
                while (!DSK5510_AIC23_read16(hCodec, &leftsample));
                while (!DSK5510_AIC23_read16(hCodec, &rightsample));
                if (m == 0)
                {
                    leftsample  = i / 10000 * leftsample;
                    rightsample = rightsample * 1000 / i;
                }
                else
                {
                    leftsample  = leftsample * 1000 / i;
                    rightsample = rightsample * i / 10000;
                }
                while (!DSK5510_AIC23_write16(hCodec, 2 * leftsample));
                while (!DSK5510_AIC23_write16(hCodec, 2 * rightsample));
            }
            if (m == 0) m = 1; else m = 0;
        }

        /* Close the codec */
        DSK5510_AIC23_closeCodec(hCodec);
    }

Pendulum effect:

    double i;
    int m;

    void main()
    {
        DSK5510_AIC23_CodecHandle hCodec;
        Int16 leftsample, rightsample;

        /* Initialize the board support library, must be called first */
        DSK5510_init();

        /* Start the codec */
        hCodec = DSK5510_AIC23_openCodec(0, &config);


        /* Set sampling frequency of the codec to 24 kHz */
        DSK5510_AIC23_setFreq(hCodec, DSK5510_AIC23_FREQ_24KHZ);

        while (TRUE)
        {
            for (i = 1; i < 15000; i++)   /* start at 1 to avoid division by zero */
            {
                while (!DSK5510_AIC23_read16(hCodec, &leftsample));
                while (!DSK5510_AIC23_read16(hCodec, &rightsample));
                if (m == 0)
                {
                    leftsample  = i / 10000 * leftsample;
                    rightsample = rightsample * 1000 / i;
                }
                else
                {
                    leftsample  = leftsample * 1000 / i;
                    rightsample = rightsample * i / 10000;
                }
                while (!DSK5510_AIC23_write16(hCodec, leftsample));
                while (!DSK5510_AIC23_write16(hCodec, rightsample));
            }
            for (i = 15000; i > 0; i--)
            {
                while (!DSK5510_AIC23_read16(hCodec, &leftsample));
                while (!DSK5510_AIC23_read16(hCodec, &rightsample));
                if (m == 0)
                {
                    leftsample  = i / 10000 * leftsample;
                    rightsample = rightsample * 1000 / i;
                }
                else
                {
                    leftsample  = leftsample * 1000 / i;
                    rightsample = rightsample * i / 10000;
                }
                while (!DSK5510_AIC23_write16(hCodec, leftsample));
                while (!DSK5510_AIC23_write16(hCodec, rightsample));
            }
            if (m == 0) m = 1;

Page 50: 60894318 Digital Effects

50

else m=0; /* Close the codec */ DSK5510_AIC23_closeCodec(hCodec); CHEBYSHEV FILTER: #include "sample_testcfg.h" #include "dsk5510.h" #include "dsk5510_aic23.h" void main() DSK5510_AIC23_CodecHandle hCodec; Int16 leftsample, rightsample; int i; float b[9]=0,0.0001,0.0002,0.0004,0.0005,0.0004,0.0002,0.0001,0.0000; float a[9]=1,-6.1340,17.4320,-29.7955,33.3858,-25.0641,12.3028,-3.6111,0.4860; float w=0; float y[9]=0,0,0,0,0,0,0,0,0; float x[9]=0,0,0,0,0,0,0,0,0; for(i=0;i<9;i++) a[i]=-a[i]; DSK5510_init(); hCodec = DSK5510_AIC23_openCodec(0,&config); DSK5510_AIC23_setFreq(hCodec, DSK5510_AIC23_FREQ_8KHZ); while(TRUE) while (!DSK5510_AIC23_read16(hCodec,&leftsample)); while (!DSK5510_AIC23_read16(hCodec,&rightsample)); for(i=0;i<8;i++) x[i]=x[i+1]; x[8]=leftsample; for(i=0;i<9;i++) w += b[i] * x[8-i]; for(i=0;i<8;i++) y[i]=y[i+1]; y[8]=0; for(i=1;i<9;i++) y[8]+= a[i] * y[8-i]; y[8]+=w; while (!DSK5510_AIC23_write16(hCodec,y[8]));

Page 51: 60894318 Digital Effects

51

while (!DSK5510_AIC23_write16(hCodec,y[8])); DSK5510_AIC23_closeCodec(hCodec); DIP INTERFACING:

Header file:

#include "dsk5510_dip.h"

Inside main():

DSK5510_DIP_init();

/* Select the sampling frequency from the DIP switches
   (a switch reads 0 when pressed) */
if (DSK5510_DIP_get(0) == 0)
    DSK5510_AIC23_setFreq(hCodec, DSK5510_AIC23_FREQ_8KHZ);
else if (DSK5510_DIP_get(1) == 0)
    DSK5510_AIC23_setFreq(hCodec, DSK5510_AIC23_FREQ_16KHZ);
else if (DSK5510_DIP_get(2) == 0)
    DSK5510_AIC23_setFreq(hCodec, DSK5510_AIC23_FREQ_24KHZ);
else
    DSK5510_AIC23_setFreq(hCodec, DSK5510_AIC23_FREQ_16KHZ);
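The Chebyshev listing above is a direct form I IIR filter: each output is a weighted sum of past inputs (the b coefficients) minus a weighted sum of past outputs (the a coefficients). A minimal host-side sketch of the same recurrence, runnable without the DSK libraries, is given below. The function name `df1_step` is our own, and the coefficients used in the usage note are a simple illustrative one-pole filter, not the Chebyshev coefficients above (note that here the a coefficients are subtracted in their standard sign, whereas the listing pre-negates them).

```c
#include <stddef.h>

/* One sample of a direct form I IIR filter:
   y[n] = sum_i b[i]*x[n-i] - sum_{i>=1} a[i]*y[n-i], with a[0] assumed 1.
   xh and yh hold the input/output histories, newest sample at index 0;
   both must have order+1 elements and start zeroed. */
static float df1_step(const float *b, const float *a, size_t order,
                      float *xh, float *yh, float in)
{
    size_t i;
    float out = 0;

    /* shift both histories so index 0 is free for the newest sample */
    for (i = order; i > 0; i--) {
        xh[i] = xh[i - 1];
        yh[i] = yh[i - 1];
    }
    xh[0] = in;

    for (i = 0; i <= order; i++)
        out += b[i] * xh[i];      /* feed-forward part */
    for (i = 1; i <= order; i++)
        out -= a[i] * yh[i];      /* feedback part */

    yh[0] = out;
    return out;
}
```

For example, with b = {1, 0} and a = {1, -0.5} (i.e. y[n] = x[n] + 0.5*y[n-1]), an impulse produces the decaying sequence 1, 0.5, 0.25, ...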

ECHO:

void main()
{
    DSK5510_AIC23_CodecHandle hCodec;
    Int16 leftsample, rightsample, y = 0, i, j = 0, k, x[800];

    /* Clear the 800-sample delay line (800 samples at 16 kHz = 50 ms) */
    for (i = 0; i < 800; i++)
        x[i] = 0;
    k = 0;

    DSK5510_init();
    DSK5510_DIP_init();
    hCodec = DSK5510_AIC23_openCodec(0, &config);
    DSK5510_AIC23_setFreq(hCodec, DSK5510_AIC23_FREQ_16KHZ);

    while (TRUE)
    {
        while (!DSK5510_AIC23_read16(hCodec, &leftsample));
        while (!DSK5510_AIC23_read16(hCodec, &rightsample));

        /* Shift the delay line; x[799] is the oldest sample */
        for (i = 799; i > 0; i--)
            x[i] = x[i - 1];
        x[0] = leftsample + y;   /* feed the echo back into the delay line */

        if (k == 0)
            y = leftsample + 0.65 * x[799];  /* 50 milliseconds delay - echo */
        if (k == 1)
            y = leftsample + x[320];         /* 20 milliseconds delay - doubling */
        leftsample = y;

        /* DIP switches select the output level/channel */
        if (DSK5510_DIP_get(0) == 0)
        {
            while (!DSK5510_AIC23_write16(hCodec, leftsample));
            while (!DSK5510_AIC23_write16(hCodec, leftsample));
        }
        else if (DSK5510_DIP_get(1) == 0)
        {
            while (!DSK5510_AIC23_write16(hCodec, 0.005 * leftsample));
            while (!DSK5510_AIC23_write16(hCodec, 0.005 * leftsample));
        }
        else if (DSK5510_DIP_get(2) == 0)
        {
            while (!DSK5510_AIC23_write16(hCodec, rightsample));
            while (!DSK5510_AIC23_write16(hCodec, rightsample));
        }
        else
        {
            while (!DSK5510_AIC23_write16(hCodec, leftsample));
            while (!DSK5510_AIC23_write16(hCodec, leftsample));
        }
    }

    DSK5510_AIC23_closeCodec(hCodec);
}
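The core of the echo listing is a delay line with feedback: the delayed sample is scaled and mixed back with the input, and the result is written back into the line so the echo repeats and decays. A minimal host-side sketch follows, using a circular buffer instead of the per-sample O(N) shift; `echo_step`, the 4-sample delay, and the 0.5 gain are illustrative choices for testability (the listing uses 800 samples and 0.65).

```c
#define ECHO_DELAY 4          /* delay in samples (800 on the DSK at 16 kHz) */
#define ECHO_GAIN  0.5f       /* feedback gain (0.65 in the listing above) */

static float echo_buf[ECHO_DELAY];   /* delay line, starts zeroed */
static int   echo_pos = 0;

/* Process one sample through a feedback echo:
   out = in + gain * delayed; out is written back into the delay line. */
static float echo_step(float in)
{
    float delayed = echo_buf[echo_pos];        /* sample from ECHO_DELAY ago */
    float out = in + ECHO_GAIN * delayed;
    echo_buf[echo_pos] = out;                  /* feedback into the line */
    echo_pos = (echo_pos + 1) % ECHO_DELAY;    /* circular index: no shifting */
    return out;
}
```

An impulse therefore reappears every ECHO_DELAY samples, attenuated by ECHO_GAIN each time. The circular index avoids moving 800 samples per tick, which matters on a fixed-point DSP.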

DISTORTION:

#include <math.h>

void main()
{
    DSK5510_AIC23_CodecHandle hCodec;
    Int16 leftsample, rightsample, y;

    DSK5510_init();
    hCodec = DSK5510_AIC23_openCodec(0, &config);
    DSK5510_AIC23_setFreq(hCodec, DSK5510_AIC23_FREQ_16KHZ);

    while (TRUE)
    {
        while (!DSK5510_AIC23_read16(hCodec, &leftsample));
        while (!DSK5510_AIC23_read16(hCodec, &rightsample));

        /* Square-root waveshaper: compresses large amplitudes, adding
           harmonics. sqrt() is only defined for non-negative arguments,
           so negative samples keep their sign. */
        if (leftsample >= 0)
            y = sqrt(4.0 * leftsample);
        else
            y = -sqrt(-4.0 * leftsample);

        while (!DSK5510_AIC23_write16(hCodec, 10 * y));
        while (!DSK5510_AIC23_write16(hCodec, 10 * y));
    }

    DSK5510_AIC23_closeCodec(hCodec);
}

EQUALISER: PARALLEL IMPLEMENTATION

void main()
{
    DSK5510_AIC23_CodecHandle hCodec;
    Int16 leftsample, rightsample;
    int i, j = 0, k = 0, order = 2, filter = 3;
    float b[3][3], a[3][3];

    /* Sampling frequency 16 kHz: low-pass cutoff 800 Hz,
       band-pass 800-4000 Hz, high-pass cutoff 4000 Hz */
    float z[9] = {0.0200, 0.0401, 0.0200, 0.4208, 0, -0.4208, 0.46515, -0.9303, 0.46515};
    float d[9] = {1, -1.5610, 0.64135, 1, -0.8416, 0.15838, 1, -0.6202, 0.2404};
    float w = 0, out = 0;
    float y[3][3], x[3][3];

    /* Relative gains of the three bands for each of the three presets */
    float high[3] = {0.1, 10, 10};
    float low[3]  = {10, 0.1, 1};
    float mid[3]  = {5, 10, 0.1};
    float coff[3][3];
    float sum[3];

    /* Normalise each preset so its three mixing weights sum to 1 */
    for (i = 0; i < 3; i++)
    {
        sum[i] = low[i] + mid[i] + high[i];
        coff[i][2] = high[i] / sum[i];
        coff[i][0] = low[i] / sum[i];
        coff[i][1] = mid[i] / sum[i];
    }

    /* Unpack the coefficient tables into one row per filter */
    for (i = 0; i < filter; i++)
    {
        for (j = 0; j < order + 1; j++)
        {
            b[i][j] = z[k];
            a[i][j] = -d[k];
            x[i][j] = 0;
            y[i][j] = 0;
            k++;
        }
    }

    DSK5510_init();
    DSK5510_DIP_init();
    hCodec = DSK5510_AIC23_openCodec(0, &config);
    DSK5510_AIC23_setFreq(hCodec, DSK5510_AIC23_FREQ_16KHZ);

    while (TRUE)
    {
        while (!DSK5510_AIC23_read16(hCodec, &leftsample));
        while (!DSK5510_AIC23_read16(hCodec, &rightsample));

        out = 0;

        /* DIP interfacing to change the preset: CUT/BOOST LP/MP/HP */
        if (DSK5510_DIP_get(0) == 0)
            k = 0;
        else if (DSK5510_DIP_get(1) == 0)
            k = 1;
        else if (DSK5510_DIP_get(2) == 0)
            k = 2;
        else
            k = 0;

        /* Run the three band filters in parallel and mix their outputs */
        for (j = 0; j < filter; j++)
        {
            w = 0;
            for (i = 0; i < order; i++)
                x[j][i] = x[j][i + 1];
            x[j][order] = leftsample;

            for (i = 0; i < order + 1; i++)
                w += (b[j][i] * x[j][order - i]);

            for (i = 0; i < order; i++)
                y[j][i] = y[j][i + 1];
            y[j][order] = 0;

            /* direct form I to find the output */
            for (i = 1; i < order + 1; i++)
                y[j][order] += a[j][i] * y[j][order - i];
            y[j][order] += w;

            out += coff[k][j] * y[j][order];
        }

        leftsample = out;

        while (!DSK5510_AIC23_write16(hCodec, leftsample));
        while (!DSK5510_AIC23_write16(hCodec, leftsample));
    }

    DSK5510_AIC23_closeCodec(hCodec);
}
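The preset table above divides each band gain by the sum of the three, so the mixing weights of any preset always add up to 1 and the overall output level stays roughly constant as presets change. A small sketch of that normalisation (the name `normalise_gains` is our own):

```c
/* Normalise three band gains so the mixing weights sum to 1,
   as done for the coff[][] table in the equaliser listing above.
   coff[0] = low weight, coff[1] = mid weight, coff[2] = high weight. */
static void normalise_gains(float low, float mid, float high, float coff[3])
{
    float sum = low + mid + high;
    coff[0] = low / sum;
    coff[1] = mid / sum;
    coff[2] = high / sum;
}
```

For example, gains (1, 1, 2) become weights (0.25, 0.25, 0.5); only the ratios between the bands matter, not their absolute values.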

ECHO:

void main()
{
    DSK5510_AIC23_CodecHandle hCodec;
    Int16 leftsample, rightsample, y, i, j = 0, k, x[1280];

    /* Clear the 1280-sample delay line (1280 samples at 16 kHz = 80 ms) */
    for (i = 0; i < 1280; i++)
        x[i] = 0;
    k = 0;

    DSK5510_init();
    hCodec = DSK5510_AIC23_openCodec(0, &config);
    DSK5510_AIC23_setFreq(hCodec, DSK5510_AIC23_FREQ_16KHZ);

    while (TRUE)
    {
        while (!DSK5510_AIC23_read16(hCodec, &leftsample));
        while (!DSK5510_AIC23_read16(hCodec, &rightsample));

        /* Shift the delay line; x[1279] is the oldest sample */
        for (i = 1279; i > 0; i--)
            x[i] = x[i - 1];
        x[0] = leftsample;

        if (k == 0)
            y = leftsample + 0.65 * x[1279]; /* 80 milliseconds delay - echo */
        if (k == 1)
            y = leftsample + x[320];         /* 20 milliseconds delay - doubling */
        leftsample = y;

        while (!DSK5510_AIC23_write16(hCodec, leftsample));
        while (!DSK5510_AIC23_write16(hCodec, leftsample));
    }

    DSK5510_AIC23_closeCodec(hCodec);
}

FLANGING:

void main()
{
    DSK5510_AIC23_CodecHandle hCodec;
    Int16 leftsample, rightsample, i, j = 0, k = 20, m;
    /* one extra slot (162) so x[k+1] stays in bounds at the top of the sweep */
    float y, x[162], frac;

    for (i = 0; i < 162; i++)
        x[i] = 0;

    DSK5510_init();
    hCodec = DSK5510_AIC23_openCodec(0, &config);
    DSK5510_AIC23_setFreq(hCodec, DSK5510_AIC23_FREQ_16KHZ);

    while (TRUE)
    {
        while (!DSK5510_AIC23_read16(hCodec, &leftsample));
        while (!DSK5510_AIC23_read16(hCodec, &rightsample));

        j++;

        /* Shift the delay line */
        for (i = 161; i > 0; i--)
            x[i] = x[i - 1];
        x[0] = leftsample;

        /* Fractional part of the sweeping delay; divide by 600.0 so the
           division is done in floating point (j/600 would always be 0) */
        frac = j / 600.0;

        /* Linear interpolation between the two nearest delay taps */
        y = leftsample + frac * x[k] + (1 - frac) * x[k + 1];

        if (j == 600)
            j = 0;

        /* Sweep the tap back and forth between 20 and 160 samples */
        if (k == 160)
            m = 0;
        if (k == 20)
            m = 1;
        if (m == 0)
            k--;
        if (m == 1)
            k++;

        while (!DSK5510_AIC23_write16(hCodec, y));
        while (!DSK5510_AIC23_write16(hCodec, y));
    }

    DSK5510_AIC23_closeCodec(hCodec);
}
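The flanger's sweep needs delays that fall between whole samples, so the listing blends the two nearest taps with linear interpolation. That read can be factored out and verified on its own; the function name `frac_tap` is our own, and the weighting mirrors the expression in the listing (frac*x[k] + (1-frac)*x[k+1]).

```c
/* Read a fractional delay-line tap by linear interpolation between the
   samples at positions k and k+1, weighted as in the flanging listing:
   frac = 1 selects x[k], frac = 0 selects x[k+1]. */
static float frac_tap(const float *x, int k, float frac)
{
    return frac * x[k] + (1.0f - frac) * x[k + 1];
}
```

Blending adjacent taps this way avoids the "zipper" clicks that jumping between integer delays would cause as the sweep moves.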

TREMBLING:

void main()
{
    DSK5510_AIC23_CodecHandle hCodec;
    Int16 leftsample, rightsample;
    int j = 0, k = 0;

    DSK5510_init();
    hCodec = DSK5510_AIC23_openCodec(0, &config);
    DSK5510_AIC23_setFreq(hCodec, DSK5510_AIC23_FREQ_8KHZ);

    while (TRUE)
    {
        while (!DSK5510_AIC23_read16(hCodec, &leftsample));
        while (!DSK5510_AIC23_read16(hCodec, &rightsample));

        /* Step to the next gain level every 30 samples */
        if (j == 30)
        {
            j = 0;
            k++;
        }
        j++;
        k = k % 5;

        /* Five-level amplitude staircase: 1.0, 0.8, 0.6, 0.4, 0.2 */
        if (k == 0)
            leftsample = leftsample;
        else if (k == 1)
            leftsample = 0.8 * leftsample;
        else if (k == 2)
            leftsample = 0.6 * leftsample;
        else if (k == 3)
            leftsample = 0.4 * leftsample;
        else if (k == 4)
            leftsample = 0.2 * leftsample;

        while (!DSK5510_AIC23_write16(hCodec, leftsample));
        while (!DSK5510_AIC23_write16(hCodec, leftsample));
    }

    DSK5510_AIC23_closeCodec(hCodec);
}
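The trembling (tremolo) effect is just a periodic gain applied to the signal: the level steps through 1.0, 0.8, 0.6, 0.4, 0.2 and repeats, holding each level for 30 samples. The j/k counters of the listing can be replaced by a single function of the running sample index; `tremble_gain` is our own name for this sketch.

```c
/* Gain applied to sample n of the trembling (tremolo) effect:
   the level steps 1.0 -> 0.8 -> 0.6 -> 0.4 -> 0.2 and repeats,
   holding each level for `hold` samples (30 in the listing above). */
static float tremble_gain(int n, int hold)
{
    static const float level[5] = {1.0f, 0.8f, 0.6f, 0.4f, 0.2f};
    return level[(n / hold) % 5];
}
```

With hold = 30 at 8 kHz, one full staircase of five levels lasts 150 samples, giving a modulation rate of about 53 Hz.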

WAH-WAH:

void main()
{
    DSK5510_AIC23_CodecHandle hCodec;
    Int16 leftsample, rightsample;
    int i, j = 0, k = 0, s = 0, samples = 1000, order = 2, filter = 6;

    /* Numerator coefficients of the six band-pass filters */
    double b[6][3], z[18] = {
        0.072959657268267,  0.000000000000000, -0.072959657268267,
        0.072959657268267,  0.000000000000000, -0.072959657268267,
        0.136728735997319,  0.000000000000000, -0.136728735997319,
        0.165910681040351,  0.000000000000000, -0.165910681040351,
        0.193599605930034,  0.000000000000000, -0.193599605930034,
        0.292893218813453, -0.585786437626905,  0.292893218813453};

    /* Denominator coefficients of the six band-pass filters */
    double a[6][3], d[18] = {
        1.000000000000000, -1.836916476010566,  0.854080685463466,
        1.000000000000000, -1.768788101059395,  0.854080685463466,
        1.000000000000000, -1.490469659645659,  0.726542528005361,
        1.000000000000000, -1.052992251962794,  0.668178637919299,
        1.000000000000000, -0.387199211860068,  0.612800788139932,
        1.000000000000000, -0.000000000000000,  0.171572875253810};

    double w = 0, m = 0;
    double y[3] = {0, 0, 0};
    double x[3] = {0, 0, 0};

    /* Unpack the coefficient tables into one row per filter */
    for (i = 0; i < filter; i++)
    {
        for (j = 0; j < order + 1; j++)
        {
            b[i][j] = z[k];
            a[i][j] = -d[k];
            k++;
        }
    }
    j = 0;   /* j now indexes the current filter in the sweep */

    DSK5510_init();
    hCodec = DSK5510_AIC23_openCodec(0, &config);
    DSK5510_AIC23_setFreq(hCodec, DSK5510_AIC23_FREQ_16KHZ);

    while (TRUE)
    {
        while (!DSK5510_AIC23_read16(hCodec, &leftsample));
        while (!DSK5510_AIC23_read16(hCodec, &rightsample));

        /* Shift the input history */
        for (i = 0; i < order; i++)
            x[i] = x[i + 1];
        x[order] = leftsample;

        /* Feed-forward part of filter j (all order+1 coefficients) */
        w = 0;
        for (i = 0; i < order + 1; i++)
            w += (b[j][i] * x[order - i]);

        /* Shift the output history */
        for (i = 0; i < order; i++)
            y[i] = y[i + 1];
        y[order] = 0;

        /* Feedback part (direct form I) */
        for (i = 1; i < order + 1; i++)
            y[order] += a[j][i] * y[order - i];
        y[order] += w;

        leftsample = y[order];

        while (!DSK5510_AIC23_write16(hCodec, 0.05 * leftsample));
        while (!DSK5510_AIC23_write16(hCodec, 0.05 * leftsample));

        /* Every 1000 samples, move to the next filter in the bank,
           sweeping up to the top and then back down */
        s++;
        if (s == samples)
        {
            s = 0;
            if (j == filter - 1)
                m = 1;
            if (j == 0)
                m = 0;
            if (m == 0)
                j++;
            if (m == 1)
                j--;
        }
    }

    DSK5510_AIC23_closeCodec(hCodec);
}
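The wah-wah sweep is driven by a "ping-pong" index: it climbs from filter 0 to filter N-1, reverses, climbs back down to 0, and reverses again, so the resonant peak glides up and down in frequency. That stepping logic can be checked in isolation; `wah_next` and the explicit direction pointer are our own formulation of the j/m logic in the listing.

```c
/* Advance the ping-pong filter index for the wah-wah sweep:
   climbs 0, 1, ..., nfilters-1, then reverses back down, reversing
   again at 0. *dir holds the current direction (+1 up, -1 down) and
   is updated at the endpoints, mirroring the m flag in the listing. */
static int wah_next(int j, int nfilters, int *dir)
{
    if (j == nfilters - 1)
        *dir = -1;            /* reached the top band: sweep back down */
    if (j == 0)
        *dir = +1;            /* reached the bottom band: sweep up */
    return j + *dir;
}
```

Called once per block of `samples` samples, a bank of 3 filters visits the indices 0, 1, 2, 1, 0, 1, 2, ... indefinitely.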