MIT EECS | Advanced Micro Devices Undergraduate Research and Innovation Scholar
Power/Accuracy Tradeoff in FPGA Low Resolution Neural Network
- Artificial Intelligence and Machine Learning
Anantha P. Chandrakasan
I will be developing a binary neural network on a field-programmable gate array (FPGA). Binary neural networks use 1-bit coefficients, which can greatly reduce power consumption compared with real-valued networks. Although slightly less accurate, low-power binary neural networks may be appropriate for certain use cases and contexts. I will start by implementing a common real-valued neural network on an FPGA. Once I understand that implementation, I will develop a binary neural network by replacing multiplications with binarization, power-of-two shifts, and bit packing. If time allows, I would like to study the relationship between coefficient bit width, accuracy, and power consumption on an FPGA.
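To illustrate the idea behind replacing multiplications, here is a minimal Python sketch (not the project's FPGA implementation) of how a binarized dot product can be computed with bit packing, XNOR, and popcount instead of multiply-accumulate; the function names are hypothetical:

```python
def pack_bits(values):
    """Pack a list of +1/-1 coefficients into an integer, one bit each."""
    word = 0
    for i, v in enumerate(values):
        if v == 1:
            word |= 1 << i  # bit set means +1, clear means -1
    return word

def binary_dot(w_packed, x_packed, n):
    """Dot product of two {-1,+1} vectors of length n via XNOR + popcount.

    A bit position where w and x agree contributes +1, a disagreement
    contributes -1, so dot = 2 * (number of matching bits) - n.
    """
    matches = bin(~(w_packed ^ x_packed) & ((1 << n) - 1)).count("1")
    return 2 * matches - n

w = [1, -1, 1, 1]
x = [1, 1, -1, 1]
reference = sum(a * b for a, b in zip(w, x))  # plain multiply-accumulate
assert binary_dot(pack_bits(w), pack_bits(x), len(w)) == reference
```

On an FPGA, the XNOR and popcount map to cheap combinational logic, which is where the power savings over real-valued multipliers come from.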
I am participating in SuperUROP because I am very interested in applying my hardware knowledge to machine learning. In Introductory Digital Systems (6.111), I learned Verilog and gained experience with FPGAs. I am excited to apply this knowledge to develop a neural network on an FPGA platform, particularly for the purpose of emotion detection. Because my background is in electrical engineering, I hope to learn more about machine learning through my SuperUROP project.