Research Project Title:
Analog Memory-Based Devices for Deep Learning
Abstract: Software optimizations have significantly advanced computing speeds for deep learning. However, the conventional von Neumann (VN) computing architecture used in datacenters consumes vast amounts of energy performing the parallel computations in deep neural networks. Dedicated hardware that physically implements the neural network architecture can circumvent the complexity of the VN architecture and thus significantly improve training efficiency. We are developing an ion-based analog synapse for the next generation of on-chip AI hardware accelerators. Past approaches have often been limited to phase-change and resistive RAM materials, which exhibit non-linear conductance responses that are detrimental to training accuracy. Our synapses will be CMOS-compatible (to ease integration into industrial process flows) while demonstrating the large number of states and the conductance linearity needed for accurate training.
"This year, I would like to dive deeper into semiconductor device research, not only getting experience with the specific hands-on laboratory techniques, but also spending time deconstructing problems, brainstorming ideas, and devising solutions. I believe that these are the important aspects of research and that SuperUROP is an avenue to familiarize with these challenges as I move along towards my future beyond my undergraduate studies at MIT."