Source Coding

EEE358S Fundamentals of Communications Engineering
Emmanuel O Bejide
[email protected]
http://www.uct.ac.za/depts/staff/rebejide/
Department of Electrical Engineering, University of Cape Town

Source Coding

• Source coding is the process by which the data generated by a discrete source is represented efficiently.
• Knowledge of the statistics of the source is required in order to develop an efficient source code.
• In particular, we may assign short codewords to frequent source symbols and long codewords to rare source symbols.
• Such a source code is referred to as a variable-length code.

Source Coding

[Block diagram: a Discrete Memoryless Source emits symbols sk, which the Source Encoder maps to a binary sequence bk.]

Source Coding

• Consider a source that has an alphabet of K different symbols, such that sk is the kth symbol in the alphabet.
• Let sk occur with probability pk, and let the binary codeword assigned to sk by the encoder have length lk, measured in bits.
• The average codeword length of the source encoder is defined as

$\bar{L} = \sum_{k=0}^{K-1} p_k l_k$
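As a quick numerical sketch of this formula, the snippet below computes the average codeword length for a hypothetical four-symbol source; the probabilities and codeword lengths are illustrative and not taken from the lecture.

```python
# Average codeword length: L_bar = sum over k of p_k * l_k
probs = [0.5, 0.25, 0.125, 0.125]  # p_k for a hypothetical K = 4 source
lengths = [1, 2, 3, 3]             # l_k in bits for each codeword

L_avg = sum(p * l for p, l in zip(probs, lengths))
print(L_avg)  # 1.75 bits per symbol
```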

Source Coding

• $\bar{L}$ represents the average number of bits per source symbol used in the source-encoding process.
• If $L_{\min}$ denotes the minimum possible codeword length, the coding efficiency of the source encoder is defined as

$\eta = L_{\min} / \bar{L}$

Source Coding

• Since $\bar{L} \geq L_{\min}$, it follows that $\eta \leq 1$.
• The source encoder is said to be efficient when $\eta$ approaches unity.
• Source Coding Theorem: given a discrete memoryless source of entropy H(S), the average codeword length $\bar{L}$ for any source encoding is bounded as

$\bar{L} \geq H(S)$
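The bound and the efficiency definition can be checked numerically. The sketch below uses the same illustrative four-symbol source as before and takes $L_{\min} = H(S)$, which a dyadic source attains exactly.

```python
import math

# Check the source-coding bound L_bar >= H(S) and the efficiency
# eta = Lmin / L_bar, with Lmin taken as H(S) (illustrative values).
probs = [0.5, 0.25, 0.125, 0.125]
lengths = [1, 2, 3, 3]

H = -sum(p * math.log2(p) for p in probs)           # source entropy H(S)
L_avg = sum(p * l for p, l in zip(probs, lengths))  # average codeword length
eta = H / L_avg                                     # coding efficiency

print(H, L_avg, eta)  # 1.75 1.75 1.0
```

Because the probabilities here are all powers of 1/2, the code meets the entropy bound with equality and the efficiency is exactly 1; for general probabilities, $\eta$ is strictly less than unity.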

Example of Source Coding (Huffman Coding)

• The Huffman code is a source code whose average codeword length approaches the fundamental limit set by the source entropy H(S).
• The Huffman code uses the statistics of the source to generate a binary sequence that represents the source symbols.

Example of Source Coding (Huffman Coding)

• The Huffman algorithm proceeds as follows:

1. The source symbols are listed in order of decreasing probability. The two source symbols with the lowest probabilities are assigned a 0 and a 1 (splitting).
2. These two source symbols are regarded as being combined into a new source symbol with probability equal to the sum of the two original probabilities. The probability of the new symbol is placed in the list in accordance with its value.
3. The procedure is repeated until we are left with a final list of only two source statistics, to which a 0 and a 1 are assigned.

The code for each original source symbol is found by working backwards and tracing the sequence of 0s and 1s assigned to that symbol and its successors.
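The steps above can be sketched in code using a priority queue to pick the two least probable entries at each round. The symbol names and probabilities below are illustrative, not from the lecture, and the tie-breaking counter is an implementation detail.

```python
import heapq

def huffman(probs):
    """Sketch of the Huffman algorithm: probs maps symbol -> probability;
    returns a dict mapping each symbol to its binary codeword."""
    # Each heap entry: (probability, tie-breaker, {symbol: partial code})
    heap = [(p, i, {s: ""}) for i, (s, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        # Splitting: the two lowest-probability entries get a 0 and a 1,
        # prefixed onto the codewords of all symbols they contain.
        p0, _, c0 = heapq.heappop(heap)
        p1, _, c1 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c0.items()}
        merged.update({s: "1" + c for s, c in c1.items()})
        # Combine into a new symbol whose probability is the sum.
        heapq.heappush(heap, (p0 + p1, count, merged))
        count += 1
    return heap[0][2]

codes = huffman({"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125})
print(codes)  # codeword lengths: a -> 1, b -> 2, c -> 3, d -> 3
```

Note how the most probable symbol receives the shortest codeword, exactly the variable-length principle stated earlier.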

Example of Source Coding (Huffman Coding)

• Exercise 1: The five symbols from a source and their probabilities are shown in the table below. Using the Huffman algorithm, find the source code for these symbols and determine the average codeword length and the entropy of the source.

Symbol    Probability
S1        0.4
S2        0.2
S3        0.2
S4        0.1
S5        0.1
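As a partial sanity check for this exercise (deliberately not the full Huffman solution), the entropy of the tabulated source can be computed directly from the definition $H(S) = -\sum_k p_k \log_2 p_k$:

```python
import math

# Entropy of the exercise source; probabilities p(S1)..p(S5) from the table.
probs = [0.4, 0.2, 0.2, 0.1, 0.1]

H = -sum(p * math.log2(p) for p in probs)
print(round(H, 4))  # about 2.1219 bits per symbol
```

Your Huffman average codeword length should come out at or above this value, consistent with the source coding theorem.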
