Research Project Title:
Analog Memory-Based Devices for Deep Learning
Abstract: Deep learning and artificial intelligence (AI) have been revolutionary in pattern recognition, such as in speech and image classification. Software optimizations have advanced computing speeds for deep learning. However, the conventional von Neumann (VN) computing architecture used in datacenters requires a vast amount of energy for the parallel computation in deep neural networks. Dedicated hardware that physically implements the neural network architecture can circumvent the complexity of the VN architecture and thus significantly improve training efficiency. We developed an ion-based analog synapse for next-generation on-chip AI hardware accelerators. These synapses will be CMOS-compatible (to ease integration into industrial process flows) while demonstrating the large number of states and the conductance linearity needed for accurate training.
"This year, I would like to dive deeper into semiconductor device research, not only gaining experience with specific hands-on laboratory techniques, but also spending time deconstructing problems, brainstorming ideas, and devising solutions. I believe that these are the most important aspects of research and that SuperUROP is an avenue to familiarize myself with these challenges as I move toward my future beyond my undergraduate studies at MIT."