Joint Statistics and DSI Colloquium: Jingfeng Wu

4:00–5:00 pm, DSI 105

5460 S University Ave
Chicago IL 60615

Jingfeng Wu
Postdoctoral Fellow
University of California, Berkeley

Title: Towards a Less Conservative Theory of Machine Learning: Unstable Optimization and Implicit Regularization

Abstract: Deep learning’s empirical success challenges the “conservative” nature of classical optimization and statistical learning theories. Classical theory mandates small stepsizes for training stability and explicit regularization for complexity control. Yet deep learning leverages mechanisms that thrive beyond these traditional boundaries. In this talk, I present a research program dedicated to building a less conservative theoretical foundation by demystifying two such mechanisms:

1. Unstable Optimization: I show that large stepsizes, despite causing local oscillations, accelerate the global convergence of gradient descent (GD) in overparameterized logistic regression.  

2. Implicit Regularization: I show that the implicit regularization of early-stopped GD statistically dominates explicit $\ell_2$-regularization across all linear regression problem instances.

I further showcase how these theoretical principles lead to practically relevant algorithmic designs (such as Seesaw for reducing serial steps in large language model pretraining). I conclude by outlining a path towards a rigorous understanding of modern learning paradigms.
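
As a rough illustration of the first result above, the sketch below (not the speaker's code; the dataset, stepsizes, and step count are hypothetical choices) runs gradient descent on an overparameterized logistic regression problem with a small and a large stepsize and compares the final training losses.

```python
# Illustrative sketch only: hypothetical data and stepsizes, not the speaker's setup.
import numpy as np
from scipy.special import expit  # numerically stable sigmoid

rng = np.random.default_rng(0)
n, d = 20, 100                      # n < d: overparameterized, so the data are linearly separable
X = rng.standard_normal((n, d))
y = rng.choice([-1.0, 1.0], size=n)

def logistic_loss(w):
    # mean log(1 + exp(-y * <x, w>)), computed stably
    return np.mean(np.logaddexp(0.0, -y * (X @ w)))

def gradient(w):
    p = expit(-y * (X @ w))         # sigmoid of the negative margin
    return -(X.T @ (p * y)) / n

def run_gd(stepsize, steps=500):
    w = np.zeros(d)
    for _ in range(steps):
        w = w - stepsize * gradient(w)
    return logistic_loss(w)

print("small stepsize (0.1), final loss:", run_gd(0.1))
print("large stepsize (50),  final loss:", run_gd(50.0))
# The large-stepsize run typically makes the loss oscillate in early iterations,
# yet it usually ends with a much smaller loss after the same number of steps.
```
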

Event Type

Statistics Colloquium, Seminars, Lectures

Feb 5