
Grace Tian
MIT EECS Undergraduate Research and Innovation Scholar
LLM Fine-Tuning with Insights from Optimization
2024–2025
Electrical Engineering and Computer Science
- Theory of Computation
Justin Solomon
Large language models (LLMs) are typically pre-trained on extensive datasets and subsequently fine-tuned on smaller, task-specific datasets. One common fine-tuning method is low-rank adaptation (LoRA), which constrains parameter updates to low-rank matrices. However, the impact of changing the rank of LoRA during fine-tuning remains poorly understood, and understanding LoRA rank dynamics is a crucial step toward making LoRA more efficient. This project examines the effects of changing rank during LoRA fine-tuning using insights from optimization theory, aiming to provide both theoretical grounding and experimental results.
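To make the low-rank update concrete, the sketch below shows the general LoRA idea in PyTorch: a frozen pre-trained weight W is augmented with a trainable rank-r product B·A, so only the small factors are updated during fine-tuning. This is an illustrative minimal example, not the project's code; the layer name, initialization, and hyperparameters (r, alpha) are assumptions chosen for clarity.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Minimal LoRA-style linear layer: effective weight is W + (alpha / r) * B @ A."""

    def __init__(self, in_features: int, out_features: int, r: int = 8, alpha: float = 16.0):
        super().__init__()
        # Pre-trained weight, frozen during fine-tuning.
        self.weight = nn.Parameter(torch.randn(out_features, in_features), requires_grad=False)
        # Low-rank factors: only these are trained.
        self.lora_A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Base projection plus the scaled low-rank correction.
        base = x @ self.weight.T
        update = (x @ self.lora_A.T) @ self.lora_B.T
        return base + self.scaling * update

# Example: a rank-4 adapter on a 512 -> 512 projection.
layer = LoRALinear(512, 512, r=4)
y = layer(torch.randn(2, 512))
print(y.shape)  # torch.Size([2, 512])
```

Varying r in this construction changes how many directions the fine-tuning update can span, which is the quantity whose effect the project studies.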
Through SuperUROP, I want to apply my machine learning knowledge from past coursework and projects to complete a longer research project. I am excited to deepen my understanding of machine learning and large language models, and to learn to communicate my research more effectively. I hope to keep up with this fast-moving field and to contribute meaningfully to it.