Guest lecture by Manfred Mücke

"Implementation-Aware Cost Functions for Efficient and Effective Digital Implementation of Binary Classifiers"

Binary classifiers are at the heart of machine learning. Possible methods include Bayesian networks, support vector machines (SVMs) and neural networks. Research in machine learning typically focuses on improving classification accuracy. Practical use of binary classifiers, however, depends on sufficiently low execution time of the classification phase. This depends both on the complexity of the classifier and on the computer architecture on which the classifier is implemented.
Implementation of a binary classifier in digital hardware introduces (i) parameter quantisation, which alters the classifier, and (ii) arithmetic round-off error, which disturbs the modified classifier's result. The round-off error itself depends on the sequence of operations and on the type of basic operations provided.
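Effect (i) can be made concrete with a minimal sketch, assuming a hypothetical linear classifier whose weights are chosen (for illustration only) to sit close to the decision boundary. The `quantise` helper is not from the talk; it models a reduced-precision float by rounding the mantissa to a given width, leaving the exponent range unrestricted. Effect (ii), round-off during the computation itself, would additionally require quantising every intermediate result and is not modelled here.

```python
import math

def quantise(x, mantissa_bits):
    """Round x to the nearest float with the given mantissa width.

    Illustrative model of a reduced-precision format: only the
    mantissa is truncated; the exponent range is left unrestricted.
    """
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)            # x = m * 2**e with 0.5 <= |m| < 1
    scale = 2.0 ** mantissa_bits
    return math.ldexp(round(m * scale) / scale, e)

# Hypothetical linear classifier sitting close to its decision boundary.
w, b = [0.8, 0.41], -1.2
x = [1.0, 1.0]

score = w[0] * x[0] + w[1] * x[1] + b        # ~ +0.01 -> class +1

wq = [quantise(v, 3) for v in w]             # 3 mantissa bits
bq = quantise(b, 3)
score_q = wq[0] * x[0] + wq[1] * x[1] + bq   # -0.0625 -> class -1
```

With only three mantissa bits the quantised parameters (0.75, 0.4375, -1.25) shift the score across zero, so the same input is assigned the opposite class, even before any round-off error in the arithmetic is taken into account.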
So far, the machine learning community has paid little attention to these effects of classifier implementation in digital hardware, in terms of both analysis and mitigation. The engineering community, increasingly faced with the need to implement complex classifiers under real-time and/or low-power constraints, therefore lacks models to explore different methods' performance under given architecture and time constraints.

My work focuses on bridging the gap between classifier design in the real number space and the effects of implementation on digital hardware using limited number spaces. The grand challenge is to derive a codesign framework allowing for joint optimisation of classification performance, power, area, throughput and latency.

Both classification accuracy and implementation cost are significantly influenced by the number format chosen for implementation and by the basic arithmetic operations provided. This results in a vast design space, which needs to be pruned effectively in order to find acceptable implementation choices in limited time. Doing so touches on numerical analysis, computer arithmetic and computer architecture.
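One simple form of such pruning can be sketched as a sweep over mantissa widths: find the narrowest format whose quantised classifier still agrees with the full-precision one on a validation set. The classifier, data and `quantise` helper below are all illustrative assumptions, not the speaker's method; narrower formats stand in for lower implementation cost.

```python
import math

def quantise(x, mantissa_bits):
    """Round x to a reduced-precision float with the given mantissa width
    (illustrative model; exponent range is left unrestricted)."""
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)            # x = m * 2**e with 0.5 <= |m| < 1
    scale = 2.0 ** mantissa_bits
    return math.ldexp(round(m * scale) / scale, e)

# Hypothetical linear classifier and a tiny validation set.
W, B = [0.8, -0.3, 0.41], 0.05
DATA = [[1, 1, 1], [0, 0, 0], [-1, 1, 0], [1, -1, 1],
        [0, 1, -1], [2, 0, -1], [-1, -1, -1], [0, -2, 0]]

def predict(w, b, x):
    return sum(wi * xi for wi, xi in zip(w, x)) + b > 0

reference = [predict(W, B, x) for x in DATA]

def agreement(bits):
    """Fraction of inputs classified identically after weight quantisation."""
    wq = [quantise(w, bits) for w in W]
    bq = quantise(B, bits)
    return sum(predict(wq, bq, x) == r
               for x, r in zip(DATA, reference)) / len(DATA)

# Prune the format space: smallest mantissa width with full agreement.
best = min(bits for bits in range(2, 9) if agreement(bits) == 1.0)
```

Real design-space exploration would additionally weigh exponent width, operator cost, throughput and power, but the same pattern applies: evaluate candidate formats against an accuracy criterion and discard the dominated ones.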

I will report on recent research investigating the effect of reduced-precision floating-point formats on the classification accuracy and implementation cost of Bayesian classifiers and SVMs.

Manfred Mücke received his M.Sc. and Ph.D. in Electrical Engineering from Graz University of Technology, Austria, in 2001 and 2007, respectively. During his studies, he specialised in computer architecture, hardware specification languages and design space exploration for digital systems. He was lead engineer with EVK Electronics, working on the design of DSP- and FPGA-based high-speed smart camera systems, before joining CERN in 2004. There he contributed to the LHC infrastructure, working on the efficient mapping of signal processing algorithms onto LHCb's multi-FPGA trigger infrastructure.

From 2007 till 2012, he was a member of the Research Lab Computational Technologies and Applications at the University of Vienna, Faculty of Computer Science.

His research focuses on enabling the acceleration of distributed scientific applications through the use of reconfigurable logic (FPGAs). This includes in-depth performance analysis of distributed applications on multi/many-core architectures as well as on embedded platforms.
One key interest is precision estimation for complex numerical algorithms, as this knowledge is a prerequisite for achieving high-performance, low-power digital implementations.

Date and Time
18 July 2012, 10:00–11:30
Seminar Room IDEG134 at Inffeldgasse 16c, ground floor