Abdel Kareem Abdallah Dabbas

Research Title

Infrastructure for Multimodal LLM Training

Cohort

2025–2026

Department

Electrical Engineering and Computer Science

Research Areas

  • AI and Machine Learning

Supervisor

Liang, Paul

Abstract

Multimodal models are rapidly becoming the way people use AI, bringing together language, vision, audio, and video. Yet most training stacks are siloed and brittle, and each new pairing of modality and model requires substantial custom engineering. This project aims to build a fast, modular infrastructure for any-to-any multimodal training. It will offer a generic interface that lets researchers attach new modalities to models built on any backbone, handling heterogeneous data types and sizes consistently, with a focus on performance, scalability, and ease of use for large-scale training. The goal is an open-source toolkit with clear documentation and examples that reduces boilerplate, improves throughput, and makes multimodal experimentation straightforward.
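As a rough illustration of what such an attach-a-modality interface could look like, here is a minimal sketch. All names (ModalitySpec, MultimodalWrapper, attach) are hypothetical and not APIs from this project; it assumes a PyTorch-style backbone that consumes a shared token embedding space.

```python
# Hypothetical sketch of a generic "attach any modality to any backbone" interface.
# Names and structure are illustrative only, not part of the project described above.
from dataclasses import dataclass
from typing import Dict

import torch
import torch.nn as nn


@dataclass
class ModalitySpec:
    """Describes how raw inputs of one modality map into feature vectors."""
    name: str
    encoder: nn.Module   # raw input -> per-token features
    hidden_dim: int      # feature size produced by the encoder


class MultimodalWrapper(nn.Module):
    """Projects each registered modality into a shared embedding space and
    concatenates the resulting token sequences before the backbone."""

    def __init__(self, backbone: nn.Module, embed_dim: int):
        super().__init__()
        self.backbone = backbone
        self.embed_dim = embed_dim
        self.encoders = nn.ModuleDict()
        self.projections = nn.ModuleDict()

    def attach(self, spec: ModalitySpec) -> None:
        # Register a new modality without modifying the backbone itself.
        self.encoders[spec.name] = spec.encoder
        self.projections[spec.name] = nn.Linear(spec.hidden_dim, self.embed_dim)

    def forward(self, inputs: Dict[str, torch.Tensor]) -> torch.Tensor:
        tokens = []
        for name, batch in inputs.items():
            feats = self.encoders[name](batch)            # (B, T_m, hidden_dim)
            tokens.append(self.projections[name](feats))  # (B, T_m, embed_dim)
        fused = torch.cat(tokens, dim=1)  # modalities may contribute different lengths
        return self.backbone(fused)


if __name__ == "__main__":
    # Toy usage: a tiny transformer "backbone" with two attached modalities.
    backbone = nn.TransformerEncoder(
        nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True),
        num_layers=1,
    )
    model = MultimodalWrapper(backbone, embed_dim=64)
    model.attach(ModalitySpec("text", nn.Embedding(1000, 32), hidden_dim=32))
    model.attach(ModalitySpec("image", nn.Linear(128, 48), hidden_dim=48))

    out = model({
        "text": torch.randint(0, 1000, (2, 16)),  # (batch, seq_len) token ids
        "image": torch.randn(2, 49, 128),         # (batch, patches, patch_features)
    })
    print(out.shape)  # torch.Size([2, 65, 64])
```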

Quote

Through this SuperUROP, I plan to draw on my background in computer science and mathematics to build practical, high-performance tools. I am especially excited to apply these skills to develop infrastructure that makes any-to-any multimodal LLM training fast, consistent, and scalable. More broadly, I enjoy tackling problems that span disciplines and turning solid theory into systems that work in practice.
