Essentials of Error-Control Coding

Jorge Castiñeira Moreira, Patrick Guy Farrell

Language: English

Pages: 390

ASIN: B01C4852SK

Format: PDF / Kindle (mobi) / ePub


Rapid advances in electronic and optical technology have enabled the implementation of powerful error-control codes, which are now used in almost the entire range of information systems with close to optimal performance. These codes and decoding methods are required for the detection and correction of the errors and erasures which inevitably occur in digital information during transmission, storage and processing because of noise, interference and other imperfections.
Error-control coding is a complex, novel and unfamiliar area, not yet widely understood and appreciated. This book sets out to provide a clear description of the essentials of the subject, with comprehensive and up-to-date coverage of the most useful codes and their decoding algorithms. A practical engineering and information technology emphasis, as well as relevant background material and fundamental theoretical aspects, provides an in-depth guide to the essentials of error-control coding.

- Provides extensive and detailed coverage of Block, Cyclic, BCH, Reed-Solomon, Convolutional, Turbo, and Low-Density Parity-Check (LDPC) codes, together with relevant aspects of Information Theory
- EXIT chart performance analysis for iteratively decoded error-control techniques
- Heavily illustrated with tables, diagrams, graphs, worked examples, and exercises
- Invaluable companion website featuring slides of figures, algorithm software, updates and solutions to problems
Offering a complete overview of error-control coding, this book is an indispensable resource for students, engineers and researchers in the areas of telecommunications engineering, communication networks, electronic engineering, computer science, information systems and technology, digital signal processing and applied mathematics.


defined between the input and the output, which allows us to determine the reliability of the information arriving at the receiver. The important result provided by the Shannon capacity theorem is that error-free (reliable) transmission through a noisy (unreliable) channel is possible, by means of a sufficiently sophisticated coding technique, as long as the transmission rate is kept at or below the channel capacity. The bound imposed by this theorem is R ≤ C.

second are transmitted, the encoded source information rate is

R = (s/n) log2 M    (77)

The Shannon theorem requires that R ≤ C = s Cs, which in this case means that

(log2 M)/n ≤ Cs, that is, log2 M ≤ n Cs    (78)

The number of codewords can therefore be written as

M = 2^(n(Cs − δ))    (79)

where

0 ≤ δ < Cs    (80)

Here δ can be arbitrarily small, and in this case R → C. Assume now that the coded vectors of length n bits are in an n-dimensional vector space. If the vector components are taken from the binary field, the coordinates of this vector representation adopt one of the two possible binary values, 0 or 1.
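A short numeric sketch can make equations (77)-(80) concrete. The example below is an illustration under stated assumptions, not taken from the book: it uses a BSC with error probability p, whose per-symbol capacity is the standard result Cs = 1 − H2(p), picks arbitrary values for n, s and δ, and verifies that the resulting rate R = (s/n) log2 M does not exceed C = s Cs.

```python
import math

def h2(p: float) -> float:
    """Binary entropy function H2(p) in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# Assumed example values (not from the book): a BSC with error
# probability p, codewords of n bits, s symbols per second.
p, n, s, delta = 0.05, 100, 1000, 0.01

Cs = 1 - h2(p)                 # per-symbol BSC capacity, bits/symbol
C = s * Cs                     # channel capacity C = s*Cs, bits/second
M = 2 ** (n * (Cs - delta))    # number of codewords, eq. (79)
R = (s / n) * math.log2(M)     # encoded information rate, eq. (77)

print(f"Cs = {Cs:.4f}, C = {C:.1f} bit/s, R = {R:.1f} bit/s, R <= C: {R <= C}")
```

Letting δ shrink toward 0 pushes R toward C, which is the sense in which the theorem allows operation arbitrarily close to capacity.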

the source entropy, the transinformation I(X, Y) and the capacity of the BSC defined in Figure P.1.1, which has input probabilities P(0) = α = 0.2 and P(1) = 1 − α = 0.8, and transition probabilities p = 0.25 and 1 − p = 0.75.

Figure P.1.1  A binary symmetric channel

1.4 Show that for the BSC, the entropy is maximum when all the symbols of the discrete source are equally likely.

1.5 An independent-symbol binary source with probabilities 0.25 and 0.75 is transmitted over a BSC with transition (error) probability p = 0.01. Calculate
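For a numeric cross-check of the first exercise, the sketch below (an illustrative calculation, not from the book) evaluates the source entropy H(X) = H2(α), the transinformation I(X, Y) = H(Y) − H(Y|X), where H(Y|X) = H2(p) for a BSC, and the BSC capacity Cs = 1 − H2(p), using the values from Figure P.1.1.

```python
import math

def h2(p):
    """Binary entropy H2(p) in bits."""
    return 0.0 if p in (0.0, 1.0) else -p*math.log2(p) - (1-p)*math.log2(1-p)

alpha, p = 0.2, 0.25        # P(0) = 0.2, transition probability p = 0.25

H_X = h2(alpha)             # source entropy H(X)

# Output distribution of the BSC: P(Y=0) = alpha*(1-p) + (1-alpha)*p
py0 = alpha * (1 - p) + (1 - alpha) * p
# Transinformation (mutual information): I(X,Y) = H(Y) - H(Y|X),
# and for the BSC H(Y|X) = H2(p) regardless of the input distribution.
I_XY = h2(py0) - h2(p)

Cs = 1 - h2(p)              # BSC capacity, achieved with equally likely inputs

print(f"H(X)   = {H_X:.4f} bits")
print(f"I(X,Y) = {I_XY:.4f} bits")
print(f"Cs     = {Cs:.4f} bits per symbol")
```

Running this also illustrates exercise 1.4: since α = 0.2 here, I(X, Y) falls short of Cs, and only equally likely source symbols close the gap.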

repetition of the transmission of coded information. Because of this, all the capability of the code is used for error correction. The source generates a binary signal representing equally likely symbols at a rate rb. The encoder takes a group of k message bits and adds to it n − k parity-check bits; this is the encoding procedure of a linear block code Cb(n, k) whose code rate is Rc = k/n, with Rc < 1. Figure 2.2 shows a block diagram of an FEC communication system. The transmission
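As a concrete instance of a Cb(n, k) encoder, the sketch below implements a systematic Hamming (7,4) code: k = 4 message bits are extended with n − k = 3 parity-check bits, giving Rc = 4/7. The choice of code and of the parity submatrix P is illustrative; the excerpt does not specify which code Figure 2.2 uses.

```python
# Systematic Hamming (7,4) code: an example of a linear block code
# Cb(n, k) with n = 7, k = 4 and code rate Rc = k/n = 4/7.
# G = [I_k | P], so each codeword is the message followed by
# n - k = 3 parity-check bits. (Illustrative choice of P.)
P = [
    [1, 1, 0],
    [0, 1, 1],
    [1, 1, 1],
    [1, 0, 1],
]

def encode(msg):
    """Append n - k parity-check bits to a k-bit message (arithmetic mod 2)."""
    assert len(msg) == 4
    parity = [sum(msg[i] * P[i][j] for i in range(4)) % 2 for j in range(3)]
    return msg + parity

msg = [1, 0, 1, 1]
print(encode(msg))   # 7-bit codeword: 4 message bits + 3 parity bits
```

The redundancy n − k = 3 is what gives the decoder its error-correction capability, while the rate Rc = 4/7 quantifies the bandwidth cost.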

be only one possible way of interpreting that sequence. This is necessary when variable-length coding is used. If the code shown in Table 1.1 is compared with a constant-length code for the same case, consisting of four two-bit codewords 00, 01, 10, 11, it is seen that the code in Table 1.1 adds redundancy: assuming equally likely messages, its average number of transmitted bits per symbol is 2.75, against 2 for the constant-length code. However, if for instance symbol s2 were characterized by a probability of being
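To make the 2.75 figure concrete: the average length of a variable-length code is the probability-weighted sum of the codeword lengths, L = Σ P(si) li. Table 1.1 itself is not reproduced in this excerpt, so the sketch below uses a hypothetical prefix code whose lengths (1, 2, 4, 4) also average 2.75 bits for equally likely messages, and shows how a skewed source distribution can drop the average below the 2 bits of the constant-length code.

```python
# Hypothetical prefix code with lengths (1, 2, 4, 4); Table 1.1 itself
# is not shown in this excerpt, so these codewords are illustrative only.
code = {"s1": "0", "s2": "10", "s3": "1100", "s4": "1101"}

def average_length(code, probs):
    """Average bits per symbol: sum of P(si) * len(codeword_i)."""
    return sum(probs[s] * len(cw) for s, cw in code.items())

equal = {s: 0.25 for s in code}
print(average_length(code, equal))      # 2.75, vs. 2.0 for the fixed code

# If the symbol with the short codeword is much more likely,
# the variable-length code pays off:
skewed = {"s1": 0.7, "s2": 0.1, "s3": 0.1, "s4": 0.1}
print(average_length(code, skewed))     # 1.7 < 2.0
```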
