MIT EECS | Nadar Foundation Undergraduate Research and Innovation Scholar
Guardrails for LLMs Supporting Security
Improving the accuracy of results generated by large language models (LLMs) has been a major focus in recent generative artificial intelligence research. This project focuses on connecting LLMs with BRON, a collation of data sources that bridges many different databases in the cybersecurity domain. By doing so, the generative power of LLMs can be harnessed together with the verified facts and inherent structure of BRON to produce higher-quality query results. This method of guarded query and retrieval will be tested on three applications: generating cyber-domain PDDL (Planning Domain Definition Language) files, directly generating the plans that would result from running those PDDL files through a classical planner, and retrieving general cybersecurity information.
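The core idea of guarded retrieval can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration only: the tiny dictionary stands in for BRON's linked entries (ATT&CK techniques and tactics, CVEs, CWEs), and `guarded_answer` is an invented helper, not part of any actual BRON or LLM API. It shows how a candidate answer from an LLM could be checked against verified, structured facts before being returned.

```python
# Toy stand-in for BRON: (entity, relation) pairs mapped to verified facts.
# Real BRON links threat data such as ATT&CK tactics/techniques, CVEs, and CWEs.
KNOWLEDGE_BASE = {
    ("T1110", "technique_of"): "TA0006",          # Brute Force -> Credential Access
    ("CVE-2021-44228", "instance_of"): "CWE-502", # Log4Shell -> Deserialization flaw
}

def guarded_answer(candidate: str, subject: str, relation: str) -> str:
    """Accept an LLM's candidate answer only if the knowledge base confirms it;
    otherwise return the verified fact, or flag the claim as unverifiable."""
    fact = KNOWLEDGE_BASE.get((subject, relation))
    if fact is None:
        return f"UNVERIFIED: {candidate}"
    return fact if fact == candidate else f"CORRECTED: {fact}"

# A hallucinated tactic ID is caught and replaced with the verified fact:
print(guarded_answer("TA0001", "T1110", "technique_of"))
# A correct claim passes through unchanged:
print(guarded_answer("CWE-502", "CVE-2021-44228", "instance_of"))
```

The same guard pattern extends naturally to the PDDL applications: generated domain or plan elements would be validated against BRON's structure before being accepted.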
I am participating in this SuperUROP because I want to apply my skills and interest in generative artificial intelligence to a longer-term research project. My previous UROPs and my coursework in machine learning have given me a foundation in this area, and I am excited to explore cutting-edge techniques and collaborate with the lab. I hope to learn more about the technology behind large language models.