Signal Processing and Speech Communication Laboratory

Efficient Floating-Point Implementation of Speech Processing Algorithms on Reconfigurable Hardware

Status
Finished
Student
Thang Huynh Viet
Mentor
Gernot Kubin

    This doctoral thesis aims at optimising floating-point implementations of signal processing algorithms on reconfigurable hardware with respect to accuracy, hardware resources and execution time. It is known that reduced precision in floating-point arithmetic operations on reconfigurable hardware directly translates into increased parallelism and peak performance. As a result, efficient implementations can be obtained by choosing the minimal acceptable precision for floating-point operations. Furthermore, custom-precision floating-point operations allow accuracy to be traded for parallelism and performance. We use Affine Arithmetic (AA) to model the rounding errors of floating-point computations. The rounding error bound derived by the AA-based error model is then used to determine the smallest mantissa bit width of custom-precision floating-point number formats needed to guarantee the desired accuracy of floating-point applications.
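
    The core idea of the AA-based rounding error model can be illustrated with a small sketch. The code below is not the thesis framework (which is implemented in Matlab); it is a hypothetical Python illustration in which every rounded operation with an m-bit mantissa appends a fresh noise term bounded by the unit roundoff u = 2^-(m+1), and the smallest acceptable mantissa width is found by searching for the first m whose error bound meets a target accuracy.

    import itertools

    _fresh = itertools.count(1)                    # generator of new noise-symbol ids

    class Affine:
        """Affine form x0 + sum_i x_i*eps_i with eps_i in [-1, 1]."""
        def __init__(self, center, terms=None):
            self.center = center                   # central value x0
            self.terms = dict(terms or {})         # noise symbol -> coefficient x_i

        def radius(self):
            return sum(abs(c) for c in self.terms.values())

        def bound(self):
            return abs(self.center) + self.radius()    # bound on |x|

    def fl_add(a, b, m):
        """Rounded addition: exact affine sum plus one new rounding-error term."""
        u = 2.0 ** -(m + 1)                        # round-to-nearest unit roundoff
        terms = dict(a.terms)
        for k, c in b.terms.items():
            terms[k] = terms.get(k, 0.0) + c
        out = Affine(a.center + b.center, terms)
        out.terms[next(_fresh)] = u * out.bound()  # |rounding error| <= u * |result|
        return out

    def fl_mul(a, b, m):
        """Rounded multiplication with first-order AA linearisation."""
        u = 2.0 ** -(m + 1)
        terms = {k: b.center * c for k, c in a.terms.items()}
        for k, c in b.terms.items():
            terms[k] = terms.get(k, 0.0) + a.center * c
        out = Affine(a.center * b.center, terms)
        out.terms[next(_fresh)] = a.radius() * b.radius()  # bound on the nonlinear term
        out.terms[next(_fresh)] = u * out.bound()          # new rounding-error term
        return out

    def min_mantissa_bits(build, target_error, max_bits=53):
        """Smallest mantissa width whose AA rounding-error bound meets the target."""
        for m in range(2, max_bits + 1):
            if build(m).radius() <= target_error:
                return m
        return None

    For exact inputs (affine forms without noise terms), the radius of the final affine form is a guaranteed bound on the accumulated rounding error, so min_mantissa_bits returns the narrowest uniform mantissa width that keeps this bound below the desired accuracy.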

    In this work, we implement the first Matlab-based framework for rounding error analysis and numerical range evaluation of arbitrary floating-point algorithms using the AA-based error model. The framework lets users reuse their existing Matlab code to carry out rounding error analysis and to run bit-true custom-precision computations of floating-point algorithms in Matlab for verification.
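
    The bit-true custom-precision computations used for verification can be emulated in software by rounding every intermediate result to a reduced significand. Again, the snippet below is only a hypothetical Python illustration of this idea, not the Matlab framework itself: round_to_precision rounds a double-precision value to p significant binary digits with round-to-nearest.

    import math

    def round_to_precision(x, p):
        """Round x to p significant binary digits (round to nearest, ties to even)."""
        if x == 0.0 or not math.isfinite(x):
            return x
        f, e = math.frexp(x)                 # x = f * 2**e with 0.5 <= |f| < 1
        return math.ldexp(round(f * 2.0 ** p), e - p)

    For example, round_to_precision(math.pi, 12) keeps only 12 significant bits of pi (3.1416015625); applying the function after every operation of an algorithm yields a bit-true reduced-precision run that can be checked against the double-precision reference.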

    We apply the AA-based error analysis technique and our Matlab-based framework to the floating-point rounding error evaluation and optimal uniform bit width allocation of two signal and speech processing applications: i) the floating-point dot-product and ii) the iterative Levinson-Durbin algorithm for linear prediction and autoregressive modeling. For the floating-point dot-product, the AA-based error model provides tighter rounding error bounds than existing error analysis techniques; the mantissa bit widths derived from these bounds overestimate those found by extensive simulation by at most 2 bits. For the iterative Levinson-Durbin algorithm, the AA-based error analysis technique models the rounding errors of the coefficients accurately when the input parameters are restricted to a limited range; for a general input range, it still provides a qualitative estimate of the error bound of the coefficients.
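
    To make the second application concrete, a plain Levinson-Durbin recursion is sketched below. This is not the thesis code; the fl argument is a hypothetical hook that applies a rounding function (for instance the round_to_precision sketch above with a fixed precision p) after every arithmetic operation, so that reduced-precision coefficients can be compared against a double-precision run.

    def levinson_durbin(r, order, fl=lambda x: x):
        """Levinson-Durbin recursion: autocorrelations r[0..order] -> LPC coefficients.

        fl is applied after every arithmetic operation; with the default
        identity the recursion runs in ordinary double precision.
        """
        a = [0.0] * (order + 1)
        a[0] = 1.0                     # error filter A(z) = 1 + a[1]z^-1 + ... + a[order]z^-order
        e = r[0]                       # prediction error power
        for i in range(1, order + 1):
            acc = r[i]
            for j in range(1, i):
                acc = fl(acc + fl(a[j] * r[i - j]))
            k = fl(-acc / e)           # reflection coefficient
            new_a = a[:]
            for j in range(1, i):
                new_a[j] = fl(a[j] + fl(k * a[i - j]))
            new_a[i] = k
            a = new_a
            e = fl(e * fl(1.0 - fl(k * k)))
        return a, e

    Comparing the coefficients obtained with fl = lambda x: round_to_precision(x, p) against those of the default double-precision run gives a simulation-based estimate of the rounding error for a given mantissa width, whereas the AA-based analysis in this work derives error bounds for the same quantities analytically.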