MIT EECS | Morais and Rosenblum Undergraduate Research and Innovation Scholar
A Hardware Accelerator for Artificial Intelligence
- Computer Systems
Artificial intelligence (AI) hardware often focuses on accelerating dense neural networks. However, neural networks are increasingly tending towards sparsity, which makes networks more compact and decreases training and inference times. AI accelerators that exploit this sparsity achieve performance gains by eliminating unnecessary, ineffectual computations. Unfortunately, sparse neural network acceleration is challenging for a number of reasons: it often features more irregular data accesses and reuse, requiring more complex data fetching, traversal, and on-chip storage. We address these challenges in this project by designing a general-purpose AI accelerator flexible enough to support both dense and sparse neural networks in a variety of formats.
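To make the sparsity argument concrete, here is a minimal software sketch (my own illustration, not the accelerator's actual design) of a sparse matrix-vector product in the common CSR format. By iterating only over stored nonzeros, it skips the multiply-by-zero work a dense kernel would perform — the same ineffectual computation a sparse accelerator eliminates in hardware:

```python
import numpy as np

def csr_spmv(values, col_idx, row_ptr, x):
    """Sparse matrix-vector product y = A @ x with A in CSR format.

    Only the stored nonzeros are touched, so the number of multiplies
    equals nnz(A) rather than rows * cols as in a dense kernel.
    """
    n_rows = len(row_ptr) - 1
    y = np.zeros(n_rows)
    for i in range(n_rows):
        # row_ptr[i]..row_ptr[i+1] bounds the nonzeros of row i
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_idx[k]]
    return y

# The 3x3 matrix [[2,0,0],[0,0,3],[0,4,0]] stored in CSR form:
values  = np.array([2.0, 3.0, 4.0])   # nonzero entries, row by row
col_idx = np.array([0, 2, 1])         # column of each nonzero
row_ptr = np.array([0, 1, 2, 3])      # start offset of each row
x = np.array([1.0, 1.0, 1.0])

print(csr_spmv(values, col_idx, row_ptr, x))  # 3 multiplies instead of 9
```

The irregular, data-dependent indexing through `col_idx` is exactly what makes sparse acceleration hard in hardware: memory accesses to `x` are no longer sequential, so fetching and on-chip buffering become far more complicated than in the dense case.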
I am looking forward to continuing my work with Prof. Sanchez's group through my SuperUROP project! I have enjoyed the work I have done so far in hardware-software co-design for artificial intelligence, and I hope to delve deeper into this area over the coming year.