Daria Kryvosheieva
MIT EECS | Nadar Foundation Undergraduate Research and Innovation Scholar
Task-Level Performance Prediction in Agentic Coding Benchmarks
2025–2026
Electrical Engineering and Computer Science
- AI and Machine Learning
Yoon Kim
As the focus in LLM-based coding shifts from static single-step code generation to multi-step agentic interaction with tools and environments, there is a growing need to understand what evaluations of coding agents actually tell us. At present, agent performance is typically measured by aggregate pass rates on benchmarks, but single-number metrics obscure the diversity of tasks within a benchmark. We present a framework, tailored to the agentic coding regime, for predicting success or failure on individual tasks. Our approach augments Item Response Theory (IRT) with rich features extracted from tasks, including issue statements, repository contexts, solutions, and test cases, and introduces a novel decomposition of agent ability into LLM and scaffold ability components. This parameterization enables us to aggregate evaluation data across heterogeneous leaderboards and accurately predict task-level performance for unseen benchmarks, as well as for unseen LLM-scaffold combinations. Our methods have practical utility for benchmark designers, who can better calibrate the difficulty of new tasks without running computationally expensive agent evaluations.
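The ability decomposition described above can be illustrated with a minimal Rasch-style (1PL) IRT sketch, where agent ability is the sum of an LLM component and a scaffold component, and task difficulty is a linear function of extracted task features. All function names, parameters, and the linear-difficulty form here are illustrative assumptions, not the project's actual model:

```python
import math

def predict_success(theta_llm, theta_scaffold, task_features, feature_weights, bias=0.0):
    """Sketch of a 1PL IRT success probability with decomposed ability.

    theta_llm, theta_scaffold: latent ability components (assumed additive).
    task_features: numeric features extracted from the task (e.g., derived
        from the issue statement, repository context, solution, tests).
    feature_weights, bias: parameterize task difficulty as a linear model.
    """
    ability = theta_llm + theta_scaffold
    difficulty = bias + sum(w * f for w, f in zip(feature_weights, task_features))
    # Logistic link: higher ability relative to difficulty -> higher pass probability.
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))
```

Under this parameterization, evaluation runs from different leaderboards constrain the shared LLM and scaffold ability terms, which is what allows predictions for LLM-scaffold pairings never evaluated together.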
I am participating in this SuperUROP because I am interested in crafting better evaluations for coding agents. I aim to develop efficient methods for (1) estimating the capabilities of agentic systems and their underlying components (LLMs and scaffolds), and (2) estimating the difficulties of novel coding tasks. I hope that these methods will prove useful in practice, for example in designing benchmarks or scaffolds, or in selecting training tasks of appropriate difficulty for RL rollouts.
