Floating-Point Matrix Multiplication on FPGAs
- Master Thesis
- Announced 12 Jan 2011
- Bernd Lesser
FPGAs have evolved into powerful computing platforms. Today, multiple double-precision floating-point units can be implemented in parallel, rivaling the performance of CPUs. In contrast to CPUs, however, architectures implemented on FPGAs can be custom-tailored to a specific task, outperforming CPUs by an order of magnitude on selected kernels. Matrix multiplication is ubiquitous in scientific computing, and its execution time is a key indicator of overall system performance.
You will learn how FPGAs are used in high-performance computing, how to implement different architectures for matrix multiplication, and how to write generic VHDL code. You will extend existing VHDL implementations, learn how to automate Altera Quartus II, and distribute the workload across large servers to synthesize multiple design variants for different FPGAs.
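To give a flavor of the automation task: synthesizing many design variants boils down to driving Quartus II from the command line once per variant/device combination. The sketch below generates one batch command per combination; the project and device names are placeholders invented for illustration, but `quartus_sh --flow compile` is the standard Quartus II batch compilation entry point.

```shell
#!/bin/sh
# Sketch: enumerate the Quartus II batch compilations needed to build
# every design variant for every target FPGA family. The variant and
# device names below are hypothetical, not from the announcement.

gen_commands() {
  for variant in matmul_32x32 matmul_64x64; do   # hypothetical design variants
    for device in stratix_iv cyclone_iii; do     # hypothetical target families
      # quartus_sh --flow compile runs the full synthesis/fit/assemble flow
      # for the named project, using the revision given after -c.
      echo "quartus_sh --flow compile ${variant}_${device} -c ${variant}"
    done
  done
}

# Print the commands; on a large server these lines can be fanned out
# to run in parallel instead of being executed one by one.
gen_commands
```

For example, piping the output through `gen_commands | xargs -P 4 -I{} sh -c '{}'` would run up to four compilations concurrently, which is the essence of distributing the synthesis workload.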
- Experience with Altera Quartus II
- Basic Linux shell scripting, Tcl, FPGA technology, computer arithmetic (beneficial)
- Thesis can be conducted in Graz or Vienna
Contact: Bernd Lesser (Bernd.Lesser@univie.ac.at)
Y. Dou, S. Vassiliadis, G. K. Kuzmanov, and G. N. Gaydadjiev, "64-bit floating-point FPGA matrix multiplication," in FPGA '05: Proceedings of the 13th ACM/SIGDA International Symposium on Field-Programmable Gate Arrays. New York, NY, USA: ACM, 2005, pp. 86-95. [Online]. Available: http://dx.doi.org/10.1145/1046192.1046204