Events: Statistics Colloquium

Bahadur Memorial Lectures: Nancy Reid (Day 2)

3:00–4:00 pm DSI 105

Title: “All Models are Wrong”

Abstract: This talk will consider the assessment of semiparametric and other highly parametrized models from the perspective of foundational principles of parametric statistical inference. It is cast as a generalised version of the Fisherian sufficiency/co-sufficiency separation, replacing out-of-sample prediction error by a type of within-sample prediction error. The theory is illustrated through several examples, including a post-reduction inference approach to confidence sets for sparse regression models. This is joint work with Heather Battey.

May 6

Statistics Colloquium: Aravindan Vijayaraghavan

11:30 am–12:30 pm Jones 303

Aravindan Vijayaraghavan
Department of Computer Science
Northwestern University

Title: Finding Small Confidence Sets in High Dimensions

Abstract: Constructing confidence sets is a fundamental problem in statistics: given samples from an arbitrary distribution and a target coverage $1-\alpha$ (e.g., 0.90), the goal is to find a set that covers $1-\alpha$ probability mass while having as small a volume as possible. This task underlies a wide range of applications, including uncertainty quantification and support estimation. Even when restricted to simple geometric families such as Euclidean balls, finding small confidence sets is computationally challenging in high dimensions.

This raises a key question: can we design computationally efficient methods that find these sets with provably near-optimal size?
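For intuition, a naive baseline for the ball-shaped case is to center a ball at the sample mean and take the empirical (1−α) quantile of distances to the center as the radius. This attains the target coverage on the sample but is generally far from volume-optimal, which is exactly the gap the talk's algorithms address. A minimal Python sketch of that baseline (illustrative only, not the speaker's method):

```python
import numpy as np

def confidence_ball(samples, alpha=0.10):
    """Naive baseline: center at the sample mean; radius is the
    empirical (1 - alpha) quantile of distances to the center."""
    center = samples.mean(axis=0)
    dists = np.linalg.norm(samples - center, axis=1)
    radius = np.quantile(dists, 1 - alpha)
    return center, radius

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 10))  # 5000 points in 10 dimensions
center, radius = confidence_ball(X, alpha=0.10)

# By construction, roughly 90% of the sample lies inside the ball.
inside = np.linalg.norm(X - center, axis=1) <= radius
```

The mean-centered ball can have a much larger volume than the smallest ball covering the same probability mass, especially for skewed or multimodal distributions in high dimensions.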

In this talk, I will present new algorithms that learn confidence balls and confidence ellipsoids with rigorous guarantees of coverage and approximate volume optimality. The algorithms use new connections to robust statistics, convex optimization duality, and the Brascamp-Lieb inequality. Time permitting, I will discuss discrete variants of the problem and their applications to conformal prediction.

Based mostly on joint work with Chao Gao, Liren Shan, and Vaidehi Srinivas.

May 11

Statistics Colloquium: David Blei

11:30 am–12:30 pm Jones 303

David Blei
Departments of Statistics and Computer Science
Columbia University

Title: A Fresh Look at Empirical Bayes

Abstract: Empirical Bayes improves simultaneous inference by learning from related data. In this talk, I will present three recent directions in empirical Bayes. First, I will discuss a general method based on probabilistic symmetries, which extends empirical Bayes beyond exchangeable settings to structured problems such as arrays, graphs, conditional data, and spatial models. Second, I will discuss empirical Bayes for implicit likelihoods, where the model is available only through a simulator, and show how simulation-based inference can be used to produce empirical Bayes estimates without evaluating a density. Third, I will discuss an empirical Bayes approach to combining randomized experiments and observational studies, where calibration studies make it possible to learn the distribution of observational bias and use observational data in a principled way. These three ideas illustrate new roles for empirical Bayes in modern statistics and machine learning.
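As background for how empirical Bayes "learns from related data," the classical Normal-Normal shrinkage estimator fits a prior from the marginal distribution of the observations and shrinks each noisy estimate toward the fitted prior mean. A minimal Python sketch of that textbook construction (background only, not the methods of the talk):

```python
import numpy as np

def eb_shrink(x, sigma2=1.0):
    """Normal-Normal empirical Bayes with known noise variance sigma2:
    fit the prior N(mu, tau^2) by method of moments from the marginal
    of x, then return the posterior means, which shrink toward mu."""
    mu = x.mean()
    tau2 = max(x.var() - sigma2, 0.0)   # marginal variance = tau^2 + sigma^2
    w = tau2 / (tau2 + sigma2)          # shrinkage weight in [0, 1)
    return mu + w * (x - mu)

rng = np.random.default_rng(1)
theta = rng.normal(0.0, 0.5, size=2000)       # true, related effects
x = theta + rng.normal(0.0, 1.0, size=2000)   # one noisy observation each
est = eb_shrink(x)

# Shrinkage borrows strength across the 2000 related problems,
# reducing total squared error relative to the raw observations.
mse_raw = np.mean((x - theta) ** 2)
mse_eb = np.mean((est - theta) ** 2)
```

Because the prior variance here (0.25) is small relative to the noise variance (1.0), the shrinkage weight is heavy and the aggregate error drops substantially, which is the "simultaneous inference" gain the abstract refers to.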

This is joint work with Diana Cai, Don Green, Sebastian Salazar, Xinwei Shen, Sebastian Wagner-Carena, Bohan Wu, and Cheng Zhang.

May 18