Intro Lecture Notes



    KEET 3107

    Information Theory & Coding
    Semester 1

    Session 2013/2014

    Dr Norfizah Md Ali

    [email protected]


    Introduction

    Information theory - a discipline centered on a common mathematical approach to the study of the collection and manipulation of information.

    - It provides the theoretical basis for observation, measurement, data compression, data storage, communication, estimation, decision making and pattern recognition.

    - It is a guide to the development of information-transmission systems, based on a study of the possibilities and limitations inherent in natural law.

    - It is a study of how the laws of probability, and of mathematics in general, describe limits on the designs of information-transmission systems, but also offer opportunities.

    - One may design strategies into a communication system to overcome noise and errors that may occur in the communication channel.


    Introduction (Contd)

    Information theory originated with Claude Shannon's 1948 paper, which laid out the basic theory underlying the task of communication through a noisy channel. He showed that each channel is characterized by a channel capacity such that an arbitrarily small probability of error is achievable at any transmission rate below the channel capacity.

    The probability of error and the rate of data transmission can therefore be specified independently.

    A small probability of error is achieved by coding the transmitted information in blocks.

    Information theory gives insight into the design of information-transmission systems. By developing a clear concept of information and its transmission, a much deeper understanding of the purposes and limitations of a technique is obtained.


    Channel Capacity

    C = W log2(1 + P/(N0 W))  bits/s

    where

    P is the average transmitted power
    W is the channel bandwidth
    N0/2 is the power spectral density of the additive noise

    OR

    C = W log2(1 + S/N)  bits/s

    where S/N is the signal-to-noise power ratio
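
    A quick numerical sketch of this formula in Python (the bandwidth, noise density and signal-to-noise values below are hypothetical, chosen only for illustration; the function name is my own):

    import math

    def channel_capacity(W, P, N0):
        """Shannon capacity C = W * log2(1 + P / (N0 * W)), in bits/s."""
        return W * math.log2(1 + P / (N0 * W))

    # Hypothetical channel: 3.4 kHz bandwidth and S/N = P/(N0*W) = 1000 (about 30 dB).
    W = 3400.0            # Hz
    N0 = 1e-9             # W/Hz, illustrative noise density
    P = 1000.0 * N0 * W   # chosen so that S/N = 1000
    print(f"C = {channel_capacity(W, P, N0):.0f} bits/s")   # roughly 33,900 bits/s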


    Information and Sources

    Let E be some event which occurs with probability P(E). The amount of information conveyed is

    I(E) = log 1/P(E)  units of information

    loga x = (1 / logb a) logb x; the choice of base for the logarithm determines the unit of information.

    I(E) = log2 1/P(E) bits

    I(E) = ln 1/P(E) nats (natural unit)

    I(E) = log10 1/P(E) Hartleys

    In general if we use a logarithm to the base r,

    I(E) = logr 1/P(E) r-ary units

    1 Hartley = 3.32 bits

    1 nat = 1.44 bits

    Take note that if P(E) = 1/2, then I(E) = 1 bit.
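
    A minimal sketch of these units in Python (the probability value is only an illustration; assumes just the standard library):

    import math

    def information(p, base=2):
        """Self-information I(E) = log_base(1 / p)."""
        return math.log(1.0 / p, base)

    p = 0.125                                 # example: P(E) = 1/8
    print(information(p, 2))                  # 3.0 bits
    print(information(p, math.e))             # about 2.079 nats
    print(information(p, 10))                 # about 0.903 Hartleys
    print(information(0.5, 2))                # 1 bit when P(E) = 1/2, as noted above
    print(math.log2(10), math.log2(math.e))   # 1 Hartley = 3.32 bits, 1 nat = 1.44 bits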


    A source emits a sequence of symbols from a fixed finite alphabet S = {s1, s2, ..., sq}, and successive symbols are statistically independent. Such an information source is termed a zero-memory source.

    The probabilities with which the symbols occur are:

    P(s1), P(s2), ..., P(sq)

    If symbol s1 occurs, the amount of information is:

    I(s1) = log 1/P(s1)  bits



    The average amount of information obtained per symbol from the source is:

    H(S) = Σ P(si) I(si)  bits

    This quantity, the average amount of information per source symbol, is called the entropy H(S) of the zero-memory source.

    H(S) = Σ P(si) log 1/P(si)  bits

    H(S) is interpreted either as the average amount of information per

    symbol provided by the source OR as the average amount of

    uncertainty which the observer has before the inspection of the

    output of the source.
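
    A minimal sketch of this definition in Python (the function name and the list representation of the probabilities are my own choices):

    import math

    def entropy(probabilities):
        """Entropy H(S) = sum of P(si) * log2(1 / P(si)) over all symbols, in bits.
        Symbols with zero probability contribute nothing."""
        return sum(p * math.log2(1.0 / p) for p in probabilities if p > 0)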


    Example 1:

    Consider the source S = {s1, s2, s3} with P(s1) = 1/2, P(s2) = P(s3) = 1/4. Then

    H(S) = 1/2 log 2 + 1/4 log 4 + 1/4 log 4

    = 3/2 bits
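
    Checking Example 1 with the entropy sketch above:

    print(entropy([1/2, 1/4, 1/4]))   # 1.5, i.e. H(S) = 3/2 bits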


    For a zero-memory information source with a q-symbol source alphabet, the maximum value of the entropy is exactly log q, and this maximum value of the entropy is achieved if, and only if, all the source symbols are equiprobable.

    The binary entropy function is at its maximum value for p = 0.5, i.e. when both 1 and 0 are equally likely.

    In general, the entropy of a discrete source is maximum when the letters from the source are equally probable.
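
    A short sketch of the binary entropy function H(p) = p log2(1/p) + (1 - p) log2(1/(1 - p)), evaluated at a few points to show the maximum at p = 0.5 (the function name and sample points are my own):

    import math

    def binary_entropy(p):
        """H(p) = p*log2(1/p) + (1-p)*log2(1/(1-p)), with H(0) = H(1) = 0."""
        if p in (0.0, 1.0):
            return 0.0
        return p * math.log2(1.0 / p) + (1 - p) * math.log2(1.0 / (1 - p))

    for p in (0.1, 0.3, 0.5, 0.7, 0.9):
        print(p, round(binary_entropy(p), 3))
    # Output peaks at 1.0 bit for p = 0.5 and is symmetric about 0.5.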


    Example 2:

    Symbols of S2          1     2     3     4     5     6     7     8     9
    Sequence of symbols   s1s1  s1s2  s1s3  s2s1  s2s2  s2s3  s3s1  s3s2  s3s3
    Probability P(i)      1/4   1/8   1/8   1/8   1/16  1/16  1/8   1/16  1/16

    H(S2) = Σ P(i) log 1/P(i)  bits
          = 1/4 log 4 + 4 x 1/8 log 8 + 4 x 1/16 log 16
          = 3 bits per symbol of S2
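
    Verifying Example 2 directly (a standalone sketch; the probabilities are copied from the table above):

    import math

    P2 = [1/4, 1/8, 1/8, 1/8, 1/16, 1/16, 1/8, 1/16, 1/16]
    H2 = sum(p * math.log2(1.0 / p) for p in P2)
    print(H2)        # 3.0 bits per symbol of S2, i.e. twice H(S) from Example 1
    print(sum(P2))   # 1.0, a quick check that the nine probabilities sum to one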


    The Markov Information Source

    A more general type of information source with q symbols is one in which the occurrence of a source symbol si may depend on a finite number m of preceding symbols.

    mth-order Markov source:

    P(si / sj1, sj2, ..., sjm)   for i = 1, 2, ..., q;  jp = 1, 2, ..., q

    For an mth-order Markov source, the probability of emitting a given symbol is known if we know the m preceding symbols. These m preceding symbols define the state of the mth-order Markov source, i.e. there are q^m possible states.


    Example 3

    Consider a second-order Markov source with the binary source

    alphabet S = {0,1}. Assume the conditional symbol probabilities:

    P(0/00) = P(1/11) =0.8

    P(1/00) = P(0/11) = 0.2

    P(0/01) = P(0/10) = P(1/01) = P(1/10) = 0.5

    Since q = 2 and the source is second order (m = 2), there are q^m = 4 states of the source: 00, 01, 10, 11.


    State Diagram of a Second-Order Markov Source. The four possible states are indicated by dots. The possible state transitions are indicated by arrows from state to state, with the probability of each transition shown.

    For example: if we are in state 00 we can go to either state 01 or 00 but not to state 10 or 11.

    The probability of remaining in state 00 is 0.8 and the probability of going to state 01 is 0.2.
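
    A small simulation sketch of this second-order source (the dictionary layout, state encoding and random seed are my own choices; only the conditional probabilities come from Example 3):

    import random

    # Conditional probabilities P(next symbol / state) from Example 3,
    # where the state is the pair of most recently emitted symbols.
    transitions = {
        "00": {"0": 0.8, "1": 0.2},
        "11": {"1": 0.8, "0": 0.2},
        "01": {"0": 0.5, "1": 0.5},
        "10": {"0": 0.5, "1": 0.5},
    }

    def emit(state, rng):
        """Draw the next symbol given the current two-symbol state."""
        return "0" if rng.random() < transitions[state]["0"] else "1"

    rng = random.Random(1)
    state, output = "00", []
    for _ in range(20):
        symbol = emit(state, rng)
        output.append(symbol)
        state = state[1] + symbol   # shift: drop the oldest symbol, append the new one
    print("".join(output))
    # From state 00 the chain moves to state 00 with probability 0.8 or to
    # state 01 with probability 0.2, matching the state-diagram description above.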