Camila Moran-Hidalgo
Implementing a Human Speech Perception Model Based on Acoustic Cues
2024–2025
Electrical Engineering and Computer Science
- Natural Language and Speech Processing
Stefanie Shattuck-Hufnagel
Jeung-Yoon Elizabeth Choi
This research project focuses on enhancing the understanding of human speech perception by examining acoustic cues and their relationship to phonological categories. Building on the foundational work of Shipman and Zue (1982), this project aims to demonstrate that acoustic cues can effectively constrain lexical candidates, thereby advancing models of speech perception. Through the development of a computational algorithm that incorporates phrase-level information, we will calculate the likelihood of lexical candidates based on observed acoustic cues. This approach could not only improve automated speech recognition and language-processing systems but also offer valuable insights into the cognitive mechanisms underlying human communication.
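As a minimal sketch of the lexical-constraint idea from Shipman and Zue (1982) (not the project's actual algorithm), words can be mapped to coarse broad-phonetic-class sequences, and an observed class pattern then narrows the set of lexical candidates. The class inventory, phone symbols, and toy lexicon below are all illustrative assumptions:

```python
# Illustrative sketch: constraining lexical candidates with broad
# phonetic classes, in the spirit of Shipman and Zue (1982).
# The class labels, phone symbols, and lexicon are hypothetical.
from collections import defaultdict

# Hypothetical mapping from phone symbols to broad phonetic classes.
BROAD_CLASS = {
    "p": "STOP", "t": "STOP", "k": "STOP",
    "b": "STOP", "d": "STOP", "g": "STOP",
    "s": "FRIC", "z": "FRIC", "f": "FRIC", "v": "FRIC",
    "m": "NASAL", "n": "NASAL",
    "a": "VOWEL", "e": "VOWEL", "i": "VOWEL", "o": "VOWEL", "u": "VOWEL",
    "l": "LIQUID", "r": "LIQUID",
}

def broad_pattern(phones):
    """Map a phone sequence to its broad-class sequence."""
    return tuple(BROAD_CLASS[p] for p in phones)

def build_index(lexicon):
    """Group lexicon entries by their broad-class pattern."""
    index = defaultdict(list)
    for word, phones in lexicon.items():
        index[broad_pattern(phones)].append(word)
    return index

def candidates(index, observed_classes):
    """Return lexical candidates consistent with the observed classes."""
    return index.get(tuple(observed_classes), [])

# Toy lexicon: word -> illustrative phone sequence.
lexicon = {
    "sun": ["s", "a", "n"],
    "fun": ["f", "a", "n"],
    "run": ["r", "a", "n"],
    "top": ["t", "o", "p"],
}
index = build_index(lexicon)
# Observing FRIC-VOWEL-NASAL narrows the lexicon to {fun, sun}.
print(sorted(candidates(index, ["FRIC", "VOWEL", "NASAL"])))  # ['fun', 'sun']
```

A full model would replace the exact pattern match with likelihoods over cue observations, so that uncertain or phrase-dependent cues weight candidates rather than eliminate them outright.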
I chose to participate in SuperUROP to enhance my ability to communicate research findings and explore a career in academia. Growing up trilingual sparked my interest in language, and this project allows me to merge that background with my academic pursuits. I'm excited to deepen my understanding of inferential algorithms by applying them to linguistics, aiming to uncover new insights into human speech recognition.