Information Theory And Codes Exercises

Information Theory and Codes

Vittorio Giovara, Alberto Grand, Marina Lemessom

19/10/2007

Chapter 1

Entropy and Quantity of Information

1.1 Handout

Two fair six-faced dice are thrown by Alice and only their sum is communicated to Bob. What is the amount of information conveyed, compared with communicating the distinct outcomes of the two dice?

1.2 Resolution

First we calculate the entropy of the sum of the two dice by filling in the following table, where A lists the possible symbols to be sent and p_a the probability of sending each of them.

A:    2     3     4     5     6     7     8     9     10    11    12
p_a:  1/36  2/36  3/36  4/36  5/36  6/36  5/36  4/36  3/36  2/36  1/36

We then compute the entropy with the formula

H(A) = \sum_{i=1}^{M} p_i \log \frac{1}{p_i}

where M is the number of different symbols and p_i the probability associated with the i-th symbol. The base of the logarithm is 2, because we want the entropy to be expressed in bits.

H(A) = \sum_{i=1}^{11} p_i \log_2 \frac{1}{p_i}
     = 2 \cdot \frac{1}{36} \log 36 + 2 \cdot \frac{2}{36} \log 18 + 2 \cdot \frac{3}{36} \log 12 + 2 \cdot \frac{4}{36} \log 9 + 2 \cdot \frac{5}{36} \log \frac{36}{5} + \frac{6}{36} \log 6
     = \frac{1}{18} \log 36 + \frac{1}{9} \log 18 + \frac{1}{6} \log 12 + \frac{2}{9} \log 9 + \frac{5}{18} \log \frac{36}{5} + \frac{1}{6} \log 6
     = 3.27 bits



Then we compute the entropy of the roll of a single die, using the alphabet B_1:

B:    1    2    3    4    5    6
p_b:  1/6  1/6  1/6  1/6  1/6  1/6

H(B_1) = \sum_{i=1}^{6} p_i \log_2 \frac{1}{p_i} = 6 \cdot \frac{1}{6} \log 6 = \log 6 = 2.58 bits

Since the faces of the two dice have the same probabilities, the alphabet of the second die B_2 is equal to that of the first,

B_1 = B_2

and thus the entropy of the second die is also equal to that of the first,

H(B_1) = H(B_2)

Since the two rolls are independent, the entropy of communicating the distinct results of the two dice, B_tot, is the sum of the two entropies:

H(B_tot) = H(B_1) + H(B_2) = 2 H(B_1) = 2 \cdot 2.58 = 5.16 bits

Finally, we calculate the difference in the quantity of information between sending the sum of the dice and sending the individual results of the rolls.

H(∆) = H(B_tot) − H(A) = 5.16 − 3.27 = 1.89 bits
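These figures can be checked numerically with a short C program such as the following sketch (it simply recomputes the entropies above; compile with -lm for log2):

#include <stdio.h>
#include <math.h>

int main(void)
{
    /* number of ways to obtain each sum 2..12, out of 36 */
    const double counts[11] = {1, 2, 3, 4, 5, 6, 5, 4, 3, 2, 1};
    double hA = 0.0, hB1, hBtot;
    int i;

    for (i = 0; i < 11; i++) {
        double p = counts[i] / 36.0;
        hA += p * log2(1.0 / p);            /* entropy of the sum      */
    }
    hB1   = log2(6.0);                      /* entropy of a single die */
    hBtot = 2.0 * hB1;                      /* two independent rolls   */

    printf("H(A)     = %.2f bits\n", hA);           /* about 3.27 */
    printf("H(Btot)  = %.2f bits\n", hBtot);        /* about 5.17 */
    printf("H(delta) = %.2f bits\n", hBtot - hA);   /* about 1.90 */
    return 0;
}

(The last two printed values differ from 5.16 and 1.89 above only because the text rounds 2 · 2.58 = 5.16 before subtracting.)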


Chapter 2

Binary Encoding

2.1 Handout

Construct the binary Huffman tree for a four-letter alphabet with probability distribution P = (0.8, 0.1, 0.05, 0.05). Compute the average code word length and the entropy.

2.2 Resolution

We can build the binary Huffman tree by associating the alphabet A = (a1, a2, a3, a4), which has probability distribution P = (0.8, 0.1, 0.05, 0.05), with code words over the binary alphabet B = (0, 1).


So we obtain the association

a1 → 0
a2 → 10
a3 → 110
a4 → 111

from which we can calculate the average code word length \bar{l} with the formula

\bar{l} = \sum_{i=1}^{M} l_i p_i

\bar{l} = \sum_{i=1}^{4} l_i p_i = 1 \cdot 0.8 + 2 \cdot 0.1 + 3 \cdot 0.05 + 3 \cdot 0.05 = 0.8 + 0.2 + 0.15 + 0.15 = 1.3

As for the entropy, we use the previous formula

H(A) = \sum_{i=1}^{M} p_i \log \frac{1}{p_i}

H(A) = 0.8 \log \frac{1}{0.8} + 0.1 \log \frac{1}{0.1} + 2 \cdot 0.05 \log \frac{1}{0.05} = 1.022 bits

We observe that the First Shannon Theorem is respected, since

H(A) ≤ \bar{l} \log D ≤ H(A) + \log D

and with D = 2 (so \log D = 1)

1.022 ≤ \bar{l} = 1.3 ≤ 2.022
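Both the average length and the entropy, together with the bound above, can be verified with a C sketch along these lines, using the code word lengths (1, 2, 3, 3) read off the tree:

#include <stdio.h>
#include <math.h>

int main(void)
{
    const double p[4]   = {0.8, 0.1, 0.05, 0.05};   /* symbol probabilities       */
    const int    len[4] = {1, 2, 3, 3};             /* lengths of 0, 10, 110, 111 */
    double lbar = 0.0, h = 0.0;
    int i;

    for (i = 0; i < 4; i++) {
        lbar += len[i] * p[i];              /* average code word length */
        h    += p[i] * log2(1.0 / p[i]);    /* source entropy           */
    }
    printf("average length = %.3f\n", lbar);     /* 1.300 */
    printf("entropy        = %.3f bits\n", h);   /* 1.022 */
    /* First Shannon Theorem with D = 2, log D = 1 */
    printf("bound holds: %s\n", (h <= lbar && lbar <= h + 1.0) ? "yes" : "no");
    return 0;
}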


Chapter 3

Coding Theory

3.1 Handout

Let C be a binary (11, 7, d)-code with parity-check matrix

H =
  0 0 1 0 1 0 1 1 0 1 1
  0 1 0 1 1 1 1 1 0 0 0
  1 0 0 0 0 1 1 1 0 1 1
  0 1 1 0 1 0 0 1 1 1 0

1. Find a generator matrix G (in echelon form) for C;
2. Find the minimum distance d;
3. Determine the weight enumerator polynomial of C;
4. Decode the following received words: R1 = (1000 0000 111), R2 = (0111 0100 011).

3.2 Resolution

3.2.1 Generator Matrix Computation

It is easy to obtain the generator matrix from the parity-check matrix if we are able to reduce the latter to echelon form. In this computation we are allowed to exchange rows and to add one row to another. So we swap r1 with r3 and r2 with r4, obtaining

H =
  1 0 0 0 0 1 1 1 0 1 1
  0 1 1 0 1 0 0 1 1 1 0
  0 0 1 0 1 0 1 1 0 1 1
  0 1 0 1 1 1 1 1 0 0 0

then we add r3 to r2 and this new r2 to r4, obtaining

H =
  1 0 0 0 0 1 1 1 0 1 1
  0 1 0 0 0 0 1 0 1 0 1
  0 0 1 0 1 0 1 1 0 1 1
  0 0 0 1 1 1 0 1 1 0 1

We have been able to write H in the form

H = (I_{n-k} | A)

As a consequence, the corresponding generator matrix has the form

G = ( A   )
    ( I_k )

Having said that, it is trivial to compute the generator matrix

G =
  0 1 1 1 0 1 1
  0 0 1 0 1 0 1
  1 0 1 1 0 1 1
  1 1 0 1 1 0 1
  1 0 0 0 0 0 0
  0 1 0 0 0 0 0
  0 0 1 0 0 0 0
  0 0 0 1 0 0 0
  0 0 0 0 1 0 0
  0 0 0 0 0 1 0
  0 0 0 0 0 0 1

We are used to the information symbols being the first k symbols of a codeword, with the following (n − k) symbols being parity-check symbols. In our case, however, this order is reversed, the identity matrix being located in the lower part of the generator matrix. The first 4 symbols of every codeword are therefore parity-check symbols, while the last 7 symbols are information symbols:

c_1  = x_2 + x_3 + x_4 + x_6 + x_7
c_2  = x_3 + x_5 + x_7
c_3  = x_1 + x_3 + x_4 + x_6 + x_7
c_4  = x_1 + x_2 + x_4 + x_5 + x_7
c_5  = x_1
c_6  = x_2
c_7  = x_3
c_8  = x_4
c_9  = x_5
c_10 = x_6
c_11 = x_7
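A minimal C sketch of an encoder based on these equations could look as follows (arrays are 0-based, so x[0] stands for x_1 and c[0] for c_1):

#include <stdio.h>

/* Build the 11-symbol codeword from the 7 information symbols,
   following the parity-check equations derived above. */
void encode(const int x[7], int c[11])
{
    c[0] = (x[1] + x[2] + x[3] + x[5] + x[6]) % 2;   /* c1 = x2+x3+x4+x6+x7 */
    c[1] = (x[2] + x[4] + x[6]) % 2;                 /* c2 = x3+x5+x7       */
    c[2] = (x[0] + x[2] + x[3] + x[5] + x[6]) % 2;   /* c3 = x1+x3+x4+x6+x7 */
    c[3] = (x[0] + x[1] + x[3] + x[4] + x[6]) % 2;   /* c4 = x1+x2+x4+x5+x7 */
    for (int i = 0; i < 7; i++)
        c[4 + i] = x[i];                             /* c5..c11 = x1..x7    */
}

int main(void)
{
    /* example: the message that will be recovered from R1 in section 3.2.3 */
    int x[7] = {0, 0, 0, 0, 1, 1, 1}, c[11];
    encode(x, c);
    for (int i = 0; i < 11; i++)
        printf("%d", c[i]);                          /* prints 00000000111 */
    printf("\n");
    return 0;
}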

3.2.2 Minimum Distance and Weight Enumerator Polynomial

In order to find the minimum distance and the weight enumerator polynomial of the given code, we decided to write a simple algorithm in C. Its purpose is to compute all 2^7 possible codewords, surveying their Hamming weights and storing the results in a statistics vector. The i-th entry of this vector contains the number of codewords having weight i and is therefore the i-th coefficient of the weight enumerator polynomial.

Since we are dealing with a linear code, it is easy to find the minimum distance between codewords. As a matter of fact,

\min_{x,y \in C, x \neq y} d(x, y) = \min_{x,y \in C, x \neq y} W_H(x - y)

and since the code is linear, c = x − y still belongs to C, so the equation becomes

\min_{x,y \in C, x \neq y} d(x, y) = \min_{c \in C, c \neq 0} W_H(c)

Thus looking for the minimum distance between codewords boils down to looking for the minimum weight over all codewords. This allows us to simply inspect the weight enumerator coefficients of the given code: the index of the smallest-index non-zero coefficient (disregarding A_0, because it corresponds to the all-zero codeword) is the sought minimum distance d.

The results provided by the algorithm are the following:

i    0  1  2  3   4   5   6   7   8   9  10  11
A_i  1  0  0  13  26  24  24  26  13  0   0   1
Thus the minimum distance d is 3 and the error correction capability of the given code is t = ⌊(d − 1)/2⌋ = 1.

Finally, the weight enumerator polynomial is

W(x, y) = \sum_{i=0}^{11} A_i x^i y^{11-i}
        = y^{11} + 13 x^3 y^8 + 26 x^4 y^7 + 24 x^5 y^6 + 24 x^6 y^5 + 26 x^7 y^4 + 13 x^8 y^3 + x^{11}
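A sketch of such a program, using the generator matrix G derived in section 3.2.1 (stored by rows), could be the following; it should reproduce the table of coefficients A_i given above.

#include <stdio.h>

static const int G[11][7] = {      /* generator matrix of section 3.2.1 */
    {0,1,1,1,0,1,1},
    {0,0,1,0,1,0,1},
    {1,0,1,1,0,1,1},
    {1,1,0,1,1,0,1},
    {1,0,0,0,0,0,0},
    {0,1,0,0,0,0,0},
    {0,0,1,0,0,0,0},
    {0,0,0,1,0,0,0},
    {0,0,0,0,1,0,0},
    {0,0,0,0,0,1,0},
    {0,0,0,0,0,0,1}
};

int main(void)
{
    int A[12] = {0};                     /* A[w] = number of codewords of weight w */

    for (int m = 0; m < 128; m++) {      /* all 2^7 information words              */
        int weight = 0;
        for (int r = 0; r < 11; r++) {   /* r-th symbol of the codeword            */
            int bit = 0;
            for (int k = 0; k < 7; k++)
                bit ^= ((m >> k) & 1) & G[r][k];
            weight += bit;
        }
        A[weight]++;                     /* tally the Hamming weight               */
    }
    for (int w = 0; w <= 11; w++)
        printf("A_%d = %d\n", w, A[w]);
    return 0;
}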

3.2.3 Decoding the Received Words

In order to decode R1 and R2 we have to compute their respective syndromes. Writing the received words as column vectors,

R_1 = (1 0 0 0 0 0 0 0 1 1 1)^T        R_2 = (0 1 1 1 0 1 0 0 0 1 1)^T

and letting E_1 and E_2 be their respective error patterns, we find the syndrome of the first word as

S_1 = H R_1 = H E_1 = (1 0 0 0)^T

and we notice that the computed syndrome is equal to the first column of the (echelon-form) parity-check matrix. Since the syndrome is a linear combination of the columns of the parity-check matrix H, selected by the non-zero positions of the error pattern, the minimum weight error pattern is

Ê_1 = (1 0 0 0 0 0 0 0 0 0 0)^T

Knowing that the error correction capability of the code is 1, we can correct this error. The estimated codeword for the first received vector is

Ĉ_1 = R_1 + Ê_1 = (0 0 0 0 0 0 0 0 1 1 1)^T

and consequently the sent message was

M_1 = (0 0 0 0 1 1 1)^T

We now proceed to compute the syndrome for the second received vector:

S_2 = H R_2 = H E_2 = (1 0 1 1)^T

The computed syndrome is equal to the 8th column of the parity-check matrix; therefore the minimum weight error pattern is

Ê_2 = (0 0 0 0 0 0 0 1 0 0 0)^T

We can correct the error with the same method as before, obtaining the estimated codeword

Ĉ_2 = R_2 + Ê_2 = (0 1 1 1 0 1 0 1 0 1 1)^T

and consequently the sent message was

M_2 = (0 1 0 1 0 1 1)^T
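The decoding procedure used above (syndrome computation, matching the syndrome against a column of the echelon-form H, and flipping the corresponding symbol) can be sketched in C as follows; the sketch assumes at most one error, which is all the code can correct.

#include <stdio.h>

static const int H[4][11] = {          /* echelon-form parity-check matrix */
    {1,0,0,0,0,1,1,1,0,1,1},
    {0,1,0,0,0,0,1,0,1,0,1},
    {0,0,1,0,1,0,1,1,0,1,1},
    {0,0,0,1,1,1,0,1,1,0,1}
};

/* Correct at most one error in the received word r, in place. */
void decode(int r[11])
{
    int s[4];
    for (int i = 0; i < 4; i++) {       /* syndrome s = H r                 */
        s[i] = 0;
        for (int j = 0; j < 11; j++)
            s[i] ^= H[i][j] & r[j];
    }
    for (int j = 0; j < 11; j++) {      /* find the column of H equal to s  */
        int match = 1;
        for (int i = 0; i < 4; i++)
            if (H[i][j] != s[i]) { match = 0; break; }
        if (match) { r[j] ^= 1; return; }   /* flip the erroneous symbol    */
    }                                   /* s = 0: no error, nothing to do   */
}

int main(void)
{
    int r1[11] = {1,0,0,0,0,0,0,0,1,1,1};   /* R1 */
    int r2[11] = {0,1,1,1,0,1,0,0,0,1,1};   /* R2 */
    decode(r1);
    decode(r2);
    for (int j = 0; j < 11; j++) printf("%d", r1[j]);   /* 00000000111 */
    printf("\n");
    for (int j = 0; j < 11; j++) printf("%d", r2[j]);   /* 01110101011 */
    printf("\n");
    return 0;
}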

           

Contents

1 Entropy and Quantity of Information
  1.1 Handout
  1.2 Resolution

2 Binary Encoding
  2.1 Handout
  2.2 Resolution

3 Coding Theory
  3.1 Handout
  3.2 Resolution
    3.2.1 Generator Matrix Computation
    3.2.2 Minimum Distance and Weight Enumerator Polynomial
    3.2.3 Decoding the Received Words
