Size: 126.44 KB
Format: Adobe PDF
Abstract(s)
Convolutional Neural Networks (CNN) are widely used in edge devices for security, surveillance, and many other applications. Running CNNs on embedded devices is a design challenge, since these models require high computing power and large memory storage. Data quantization is an optimization technique applied to CNNs to reduce their computing and memory requirements. The method reduces the number of bits used to represent weights and activations, which consequently reduces the size of the operands and of the memory. The method is more effective when hybrid quantization is considered, in which data in different layers may have different bit widths. This article proposes a new hardware module to calculate dot-products of CNNs with hybrid quantization. The module improves the implementation of CNNs in low-density FPGAs, where the same module runs dot-products of different layers with different data quantizations. We show implementation results on a ZYNQ7020 and compare them with state-of-the-art works. The proposed module achieves improvements in both area and performance.
Description
This work was funded by the 2018 Annual Call for Research, Development, Innovation and Artistic Creation Projects (IDI&CA) of the Instituto Politécnico de Lisboa. Reference code IPL/2018/LiteCNN_ISEL.
Keywords
Deep learning; Convolutional Neural Network; Embedded computing; Hybrid quantization; Field-Programmable Gate Array
Citation
VÉSTIAS, Mário P.; [et al] – Hybrid dot-product calculation for convolutional neural networks in FPGA. In 2019 29th International Conference on Field Programmable Logic and Applications (FPL). Barcelona, Spain: IEEE, 2019. ISBN 978-1-7281-4884-7. pp. 350-353.
Publisher
Institute of Electrical and Electronics Engineers