Grace Yingjia Tian

Research Title

How Does Fine-Tuning Affect Uncertainty in LLMs?

Cohort

2024–2025

Department

Electrical Engineering and Computer Science

Research Areas
  • Theory of Computation
Supervisor

Justin Solomon

Abstract

Large language models (LLMs) are typically pre-trained on extensive datasets and subsequently fine-tuned on smaller, task-specific datasets. However, the impact of fine-tuning on model uncertainty remains poorly understood. Understanding uncertainty is a crucial first step toward minimizing hallucinations and calibrating LLMs. This project examines how fine-tuning with Low-Rank Adaptation (LoRA), a parameter-efficient fine-tuning (PEFT) method, affects LLM uncertainty. We assess uncertainty using metrics such as semantic entropy, aiming to deepen our understanding of PEFT methods like LoRA and to improve the reliability and applicability of LLM outputs.
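
The abstract names semantic entropy as one uncertainty metric. As a minimal, illustrative sketch (not the project's implementation), the snippet below computes the discrete form of semantic entropy over sampled model answers that have already been grouped into meaning clusters; the clustering step (e.g., via bidirectional entailment) is assumed to happen elsewhere, and the cluster labels are placeholders.

```python
# Minimal sketch: entropy over semantic clusters of sampled answers.
# Assumes answers were already clustered by meaning elsewhere; the
# labels below are illustrative placeholders, not project data.
import math
from collections import Counter

def semantic_entropy(cluster_labels):
    """Entropy (in nats) of the empirical distribution over semantic clusters."""
    counts = Counter(cluster_labels)
    total = sum(counts.values())
    probs = [c / total for c in counts.values()]
    return -sum(p * math.log(p) for p in probs)

# Example: 5 sampled answers falling into 2 meaning clusters.
print(semantic_entropy(["A", "A", "A", "B", "B"]))  # ~0.673 nats
```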

Quote

Through SuperUROP, I want to apply my machine learning knowledge from past coursework and projects to complete a longer research project. I'm excited to expand my understanding of machine learning and large language models, and to learn more effective communication for presenting my work. I hope to keep up with this fast-moving field, and to contribute a meaningful piece.