Authors: Véstias, Mário; Duarte, Rui; De Sousa, Jose; Cláudio de Campos Neto, Horácio
Date issued: 2017-10-05
Date available: 2019-04-01
Citation: VÉSTIAS, Mário; [et al.] – Parallel dot-products for deep learning on FPGA. In 2017 27th International Conference on Field Programmable Logic and Applications (FPL). Ghent, Belgium: IEEE, 2017. ISBN 978-9-0903-0428-1. Pp. 1-4.
ISBN: 978-9-0903-0428-1; 978-1-5386-2040-3
ISSN: 1946-1488
Handle: http://hdl.handle.net/10400.21/9807
Abstract: Deep neural networks have recently shown great results in a vast set of image applications. The associated deep learning models are computationally very demanding, and several hardware solutions have therefore been proposed to accelerate their computation. FPGAs have recently shown very good performance for this kind of application and are therefore considered a promising platform for accelerating the execution of deep learning algorithms. A common operation in these algorithms is the multiply-accumulate (MACC), used to calculate dot products. Since many dot products can be calculated in parallel, as long as memory bandwidth is available, it is very important to implement this operation efficiently in order to increase the density of MACC units in an FPGA. In this paper, we propose an implementation of parallel MACC units in FPGA for dot-product operations with very high performance/area ratios, using a mix of DSP blocks and LUTs. We consider 8-bit fixed-point representations, but the method can be applied to other bit widths. The method achieves TOPS-level performance, even on low-cost FPGAs.
Language: eng
Keywords: Multiply-accumulate; Deep learning; FPGA
Title: Parallel dot-products for deep learning on FPGA
Type: conference object
DOI: 10.23919/FPL.2017.8056863