
Department of Computer Science and Data Science Institute Present: Weijia Shi
2:30–3:30 pm, DSI 105
Weijia Shi
PhD Candidate
University of Washington
Title: Breaking the Language Model Monolith
Abstract: Language models (LMs) are typically monolithic: a single model stores all knowledge and serves every use case. This design presents significant challenges: monolithic LMs often generate factually incorrect statements, require costly retraining to add or remove information, and raise serious privacy and copyright concerns. In this talk, I will discuss how to break this monolith by introducing modular architectures and training algorithms that separate capabilities across composable components. I’ll cover two forms of modularity: (1) external modularity, which augments LMs with external tools such as retrievers to improve factuality and reasoning; and (2) internal modularity, which builds inherently modular LMs from components trained in a decentralized manner, enabling flexible composition and an unprecedented level of control.