Project Type: Student Project
Student: Stefan Wakolbinger
Mentor: Bernhard Geiger

If a random variable is passed through a nonlinear system that maps multiple input values onto the same output value, information is lost: by observing the output of the system, the input values cannot be reconstructed with certainty. Analytic results for calculating the information loss in terms of the conditional entropy exist, but they are not always computable, since logarithms of sums are involved. Similar to Fano's Inequality, the information loss can be bounded in terms of the probability of making a reconstruction error when using a Maximum a Posteriori (MAP) estimator. If the analytic result for the information loss is available, these bounds can conversely be used to bound the error probability of the MAP estimator.
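To make these quantities concrete, the following sketch (a toy discrete example of my own, not one from the report) computes the information loss H(X|Y), the MAP error probability, and the Fano-type bound for a four-valued input collapsed by f(x) = x mod 2:

```python
import numpy as np

# Toy discrete example (an assumption for illustration, not from the report):
# X takes values {0,1,2,3} with a non-uniform pmf and is passed through the
# non-injective system f(x) = x mod 2, so two inputs collapse onto each output.
p_x = np.array([0.4, 0.1, 0.3, 0.2])   # assumed pmf of X
f = lambda x: x % 2                     # the lossy system

# Group the input probabilities by the output value they map to.
groups = {y: p_x[[x for x in range(4) if f(x) == y]] for y in (0, 1)}

loss = 0.0   # information loss H(X|Y) in bits
p_e = 0.0    # error probability of the MAP estimator
for y, probs in groups.items():
    p_y = probs.sum()
    cond = probs / p_y                  # P(X | Y = y) over the preimage
    loss += p_y * -(cond * np.log2(cond)).sum()
    p_e += p_y * (1.0 - cond.max())     # MAP keeps the most likely preimage

# Fano's inequality: H(X|Y) <= h(Pe) + Pe * log2(|X| - 1)
h_pe = -(p_e * np.log2(p_e) + (1 - p_e) * np.log2(1 - p_e))
fano = h_pe + p_e * np.log2(4 - 1)
print(loss, p_e, fano)
```

Here the loss (about 0.97 bit) indeed stays below the Fano-type bound of roughly 1.6 bit; the bound is loose, but unlike the conditional entropy it only requires the error probability.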
In this work, two different scenarios are explored. In the first scenario, independent, identically distributed (i.i.d.) samples of a continuous random variable are passed through the system. The information loss is bounded using the error probability of a sample-by-sample estimator, which attempts to reconstruct every input value x[n] by observing only its corresponding output value y[n], i.e., x'[n] = f(y[n]), where x'[n] is the estimated input value.
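As an illustration of the first scenario, the following Monte Carlo sketch (my own toy setup: the Gaussian input and the squarer y = x^2 are assumptions, not the report's examples) estimates the error probability of a sample-by-sample MAP estimator:

```python
import numpy as np

# Toy continuous example (assumed for illustration): i.i.d. samples of
# X ~ N(1, 1) pass through the non-injective system y = x^2, and a
# sample-by-sample MAP estimator reconstructs each input from its single
# output value.
rng = np.random.default_rng(0)
x = rng.normal(loc=1.0, scale=1.0, size=200_000)   # i.i.d. input samples
y = x ** 2                                          # system output

def gauss_pdf(v):
    # density of N(1, 1), used to rank the two preimage candidates
    return np.exp(-0.5 * (v - 1.0) ** 2) / np.sqrt(2 * np.pi)

# For y = x^2 the preimages are +sqrt(y) and -sqrt(y); the MAP estimator
# picks whichever has the higher input density.
root = np.sqrt(y)
x_hat = np.where(gauss_pdf(root) >= gauss_pdf(-root), root, -root)

# Empirical MAP error probability (fraction of misreconstructed samples).
p_e = np.mean(~np.isclose(x_hat, x))
print(p_e)
```

For X ~ N(1, 1) the MAP rule always picks the positive root, so its error probability equals P(X < 0) ≈ 0.159, which the simulation reproduces.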
In the second scenario, the input samples are not independent of each other but form a Markov process. Because of this dependence, it makes sense to observe more than one output value, since past output values also contain information about the current input value. The information loss can therefore be bounded using the error probability of an estimator with memory. In this work, only estimators using the two most recent output values are examined, i.e., x'[n] = f(y[n], y[n-1]).
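A sketch of the second scenario (again my own toy example: the four-state chain, its transition matrix, and the system f(x) = x mod 2 are assumptions) compares the exact MAP error probabilities without and with memory:

```python
import numpy as np

# Toy Markov input (assumed for illustration): the chain lives on {0,1,2,3},
# the system is f(x) = x mod 2, and the estimator may additionally use the
# previous output y[n-1].
T = np.array([[0.1, 0.6, 0.2, 0.1],   # assumed transition matrix
              [0.5, 0.1, 0.1, 0.3],
              [0.3, 0.2, 0.1, 0.4],
              [0.1, 0.3, 0.5, 0.1]])
f = np.arange(4) % 2                   # output symbol of each state

# Stationary distribution: left eigenvector of T for eigenvalue 1.
w, v = np.linalg.eig(T.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi /= pi.sum()

# Memoryless MAP (observes only y[n]): per output, the error is the group
# mass minus the mass of the most likely preimage state.
pe_memless = sum(pi[f == y].sum() - pi[f == y].max() for y in (0, 1))

# MAP with memory (observes y[n-1] and y[n]): joint P(x[n-1]=i, x[n]=j)
# is pi_i * T_ij; condition on both output symbols.
joint = pi[:, None] * T
pe_mem = 0.0
for y_prev in (0, 1):
    for y_cur in (0, 1):
        # P(x[n] = j, y[n-1], y[n]) for each candidate j in the preimage of y_cur
        post = joint[f == y_prev][:, f == y_cur].sum(axis=0)
        pe_mem += post.sum() - post.max()   # MAP keeps the heaviest candidate
print(pe_memless, pe_mem)
```

Because the estimator with memory conditions on strictly more observations, its MAP error probability can never exceed the memoryless one; for this transition matrix the improvement is small but strict.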
Three different examples are presented which illustrate the application of the analytic results.
The project report can be downloaded here.