Introduction

Source Coding

The goal is to compress the information source at the lowest possible rate for a given level of distortion or, conversely, with as little distortion as possible for a given rate. This is done by removing source redundancy (entropy coding), discarding perceptually irrelevant data, and (re)quantization. The last two steps may introduce losses and therefore distortion. We assume the channel to be ideal (noiseless).
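Entropy coding exploits the fact that a source with skewed symbol probabilities can be represented with fewer bits per symbol than a fixed-length code would use. A minimal sketch computing the entropy bound for a discrete memoryless source:

```python
import math

def entropy(probs):
    """Shannon entropy in bits per symbol: H = -sum(p_i * log2(p_i))."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A 4-symbol source with skewed probabilities needs only 1.75 bits/symbol
# on average, versus the 2 bits a fixed-length code would spend.
H = entropy([0.5, 0.25, 0.125, 0.125])
print(H)  # 1.75
```

No lossless code can go below this average rate; a Huffman code for these probabilities reaches it exactly.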
Rate-Distortion Theory: Let R(D) be the rate-distortion function. For an allowable average distortion D, R(D) is the minimum achievable rate; conversely, for a given rate R, D(R) is the minimum achievable average distortion.
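For a memoryless Gaussian source with variance sigma^2 under squared-error distortion, the rate-distortion function has the well-known closed form R(D) = (1/2) log2(sigma^2 / D) for 0 < D <= sigma^2 (and 0 otherwise); a small sketch:

```python
import math

def rate_distortion_gaussian(sigma2, D):
    """R(D) for a memoryless Gaussian source, squared-error distortion:
    R(D) = 1/2 * log2(sigma^2 / D) for 0 < D <= sigma^2, else 0."""
    if D >= sigma2:
        return 0.0
    return 0.5 * math.log2(sigma2 / D)

# Halving the allowed distortion costs an extra half bit per sample:
print(rate_distortion_gaussian(1.0, 0.25))   # 1.0 bit/sample
print(rate_distortion_gaussian(1.0, 0.125))  # 1.5 bits/sample
```

Note the characteristic trade-off: rate grows only logarithmically as the distortion target shrinks.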

Channel Coding

Channel coding involves the use of redundancy to protect the bit stream (coming from the source coder) against channel errors, e.g. with error-detecting or error-correcting codes. In Shannon's model of a communication channel, noise is added to the source-coded signal, and the signal-to-noise ratio determines the capacity of the channel (the maximum rate at which information can be transmitted reliably).
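For the additive white Gaussian noise channel, Shannon's capacity formula is C = B * log2(1 + SNR); a minimal sketch:

```python
import math

def awgn_capacity(bandwidth_hz, snr_linear):
    """Shannon capacity of an AWGN channel: C = B * log2(1 + SNR),
    with SNR given as a linear ratio (not in dB)."""
    return bandwidth_hz * math.log2(1.0 + snr_linear)

# A 3 kHz telephone channel at 30 dB SNR (linear SNR = 1000):
print(awgn_capacity(3000, 1000))  # roughly 29.9 kbit/s
```

Below this rate, reliable transmission is possible with suitable coding; above it, it is not, no matter how clever the code.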

Shannon's Separation Principle

Source and channel coding can be done independently without loss of optimality. This is important because the effect of channel errors plays no role in the design of the source code and, similarly, the characteristics of the source play no role in the design of the channel code. For practical purposes, this separation leads to big toolboxes of reusable tools. Shannon's Separation Principle, however, holds only when complexity and delay are not an issue, i.e. we get optimal coding only as the block length goes toward infinity.

Joint Source/Channel Coding

For a particular finite complexity or delay, one can often get better results with a joint source/channel code.
As an example, let's transmit a text file containing a story. If a handful of characters are deleted or modified at random due to channel errors, the reader may still be able to understand the story. On the other hand, losing a few random bytes of a Lempel-Ziv compressed file could be catastrophic.
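The fragility of compressed data can be demonstrated directly; a small sketch using Python's zlib (a Lempel-Ziv-style DEFLATE compressor) in which a single flipped bit damages the whole stream:

```python
import zlib

text = b"Once upon a time, a single corrupted byte ruined everything. " * 50
compressed = zlib.compress(text)

# Flip one bit in the middle of the compressed stream (a "channel error").
i = len(compressed) // 2
corrupted = compressed[:i] + bytes([compressed[i] ^ 0x01]) + compressed[i + 1:]

try:
    result = zlib.decompress(corrupted)
    damaged = result != text   # decoded, but not to the original
except zlib.error:
    damaged = True             # stream is no longer decodable at all

print(damaged)
```

In the uncompressed file the same bit flip would change one character of one word; after compression, the redundancy that made the text robust is gone.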
Certain characteristics of the source are used to design the channel code or certain characteristics of the channel are used to design the source code. This is already used in unequal error protection where some of the bits from the output of the source encoder are protected more heavily than the rest. Another idea is to use the redundancy of the source to protect against channel errors.
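Unequal error protection can be sketched with a toy scheme (hypothetical, for illustration only): the perceptually important most significant bits of each sample get a 3x repetition code, while the least significant bits are sent unprotected.

```python
def uep_encode(byte):
    """Toy unequal error protection for one 8-bit sample:
    the 4 MSBs get a 3x repetition code, the 4 LSBs go unprotected."""
    msb, lsb = byte >> 4, byte & 0x0F
    return [msb, msb, msb, lsb]

def uep_decode(symbols):
    """Bitwise majority vote on the protected copies; LSBs as received."""
    a, b, c, lsb = symbols
    msb = (a & b) | (a & c) | (b & c)  # majority of the three copies
    return (msb << 4) | lsb

# A channel error hitting one protected copy is voted out:
sent = uep_encode(0xA7)      # [0xA, 0xA, 0xA, 0x7]
sent[1] ^= 0x3               # corrupt one copy of the MSBs
print(hex(uep_decode(sent))) # 0xa7 -- reconstructed exactly
```

An error in the unprotected LSBs would get through, but only causes a small amplitude error, which is exactly the trade-off unequal error protection makes.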

Streaming Multimedia Data over Packet Networks

The problems of the internet as a communication channel are congestion, routing delay, and network heterogeneity. As a consequence, large segments (whole packets) of the signal may be lost or useless (an erasure channel). For example, in a heterogeneous network (links of different bandwidths, e.g. a high-capacity fiber network handing over to a wireless network), packets have to be dropped to accommodate the lower capacity. Packets that arrive after too long a routing delay are useless as well: if we want to transmit speech (and/or video) data for telephone or teleconference applications, we have to satisfy strict real-time constraints. Therefore, another consequence is that retransmission of lost packets is not an option. Instead, we want to reconstruct the signal using the transmitted redundancy.
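The simplest form of such redundancy against erasures is a parity packet; a minimal sketch in which any single lost data packet can be rebuilt from the others plus a bytewise-XOR parity packet:

```python
def xor_parity(packets):
    """Bytewise XOR of equal-length packets, yielding one parity packet."""
    parity = bytearray(len(packets[0]))
    for pkt in packets:
        for i, byte in enumerate(pkt):
            parity[i] ^= byte
    return bytes(parity)

# Three data packets plus one parity packet are sent.
data = [b"pkt1", b"pkt2", b"pkt3"]
parity = xor_parity(data)

# Packet 2 is dropped in the network; XOR of the survivors recovers it,
# with no retransmission and hence no extra delay.
lost = data[1]
recovered = xor_parity([data[0], data[2], parity])
print(recovered == lost)  # True
```

Real systems use stronger erasure codes (e.g. Reed-Solomon) to survive multiple losses, but the principle is the same.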

Multiple Description Coding

The idea is to send multiple descriptions of a single source to the receiver. If all the descriptions are received, we are able to reconstruct the original data (or at least a high-quality estimate if the coding is lossy). If only some of the descriptions are received, we want to reconstruct the sent data as well as possible. This implies that each description should individually be good (close to the original); but then the descriptions are very similar, and receiving more of them adds little extra information. On the other hand, if we do not want to add too much to the total data rate, the descriptions must be relatively independent. These requirements conflict. Therefore we are looking for an optimum solution or, in other words, we get an extension of the classical rate-distortion problem:
Let M be the number of descriptions of the source and Xi (i = 1, 2, ..., M) the codes we are looking for, such that each Xi achieves the rate-distortion pair (Ri, Di); any combination of more than one code (total rate R > Ri) achieves a smaller distortion (D < Di); and all codes together (total rate R1+...+RM) achieve the global minimum distortion D0. Each code Xi must be independently decodable and must carry new information.
The multiple description coding problem was first posed by Gersho, Ozarow, Witsenhausen, Wolf, Wyner, and Ziv in 1979.

Balanced Multiple Description Coding

Each description has the same rate (R1 = R2 = ... = RM) and the same importance.
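A classic balanced scheme with M = 2 is odd/even sample splitting: even-indexed samples form one description, odd-indexed samples the other. A minimal sketch (the concealment step here simply repeats the neighbouring sample; interpolation would do better):

```python
def mdc_split(samples):
    """Two balanced descriptions: even-indexed and odd-indexed samples."""
    return samples[0::2], samples[1::2]

def mdc_reconstruct(even, odd):
    """Both descriptions received: interleave them back together."""
    out = []
    for i in range(len(even) + len(odd)):
        out.append(even[i // 2] if i % 2 == 0 else odd[i // 2])
    return out

def mdc_conceal(received):
    """Only one description received: fill each missing sample by
    repeating its surviving neighbour (a crude estimate)."""
    out = []
    for s in received:
        out.extend([s, s])
    return out

samples = [10, 12, 14, 16, 18, 20]
even, odd = mdc_split(samples)       # [10, 14, 18] and [12, 16, 20]
print(mdc_reconstruct(even, odd))    # exact: [10, 12, 14, 16, 18, 20]
print(mdc_conceal(even))             # degraded: [10, 10, 14, 14, 18, 18]
```

Each description carries half the rate, is individually decodable to a lower-quality version of the signal, and together they reproduce the source exactly, which is precisely the balanced multiple description behaviour described above.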