Grace Tian

Scholar Title

MIT EECS Undergraduate Research and Innovation Scholar

Research Title

LLM Fine-Tuning with Insights from Optimization

Cohort

2024–2025

Department

Electrical Engineering and Computer Science

Research Areas
  • Theory of Computation

Supervisor

Justin Solomon

Abstract

Large language models (LLMs) are typically pre-trained on extensive datasets and subsequently fine-tuned on smaller, task-specific datasets. One common fine-tuning method is low-rank adaptation (LoRA), which uses low-rank updates of matrix parameters. However, the impact of changing the rank of LoRA during fine-tuning remains poorly understood, and understanding LoRA rank dynamics is a crucial step towards making LoRA more efficient. This project examines the influence of changing rank during LoRA fine-tuning using insights from optimization theory, aiming to provide both theoretical grounding and experimental results.
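For context, the standard LoRA parameterization (not this project's specific method) can be sketched as follows: a frozen weight matrix W is adapted by a trainable low-rank product B @ A, so the update has rank at most r. The dimensions, scaling convention, and initialization below are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch of a LoRA-style low-rank update.
# Effective weight: W_eff = W + (alpha / r) * B @ A,
# where only A and B would be trained during fine-tuning.
d_out, d_in, r = 8, 8, 2   # r is the LoRA rank (assumed small)
alpha = 4.0                # scaling hyperparameter (illustrative)

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))  # frozen pre-trained weights
A = rng.standard_normal((r, d_in))      # trainable rank-r factor
B = rng.standard_normal((d_out, r))     # trainable rank-r factor

W_eff = W + (alpha / r) * B @ A

# The update B @ A has rank at most r, so it needs only
# r * (d_in + d_out) trainable parameters instead of d_in * d_out.
print(np.linalg.matrix_rank(W_eff - W))  # at most r (here, 2)
```

Changing r during training, the subject of this project, changes both the expressiveness of the update and the number of trainable parameters.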

Quote

Through SuperUROP, I want to apply my machine learning knowledge from past coursework and projects to complete a longer research project. I am excited to expand my understanding of machine learning and large language models, and to learn more effective communication for presenting my work. I hope to keep up with this fast-moving field, and to contribute meaningfully.
