STAT991: Regularization Methods in Learning

Spring 2009

Time & Location: MW 1:30-3:00, F88 JMHH

Course Description

In this course, we survey regularization (penalization) methods in machine learning and statistics. As we go back and forth between statistical estimation and optimization, we address both the statistical convergence properties and the computational issues that arise from minimizing a regularized objective. We closely study an apparent connection between statistics and optimization through the lens of minimax duality. One goal of the course is to gain some understanding of the trade-off between computational and statistical costs -- a largely unexplored area.

We will touch upon the following topics: online convex optimization, Fenchel duality, Tikhonov regularization, SVMs for classification and regression, limited-feedback (bandit) problems, random perturbation and random projection methods, aggregation methods, multitask learning and matrix regularization, Lasso and other L1-penalization methods, model selection and sieves, Rademacher complexity, regularization via early stopping, information-based complexity of convex programming, and more. We aim to develop a general framework and tools that cover many of these methods. Open questions and potential research topics will be given in most lectures.
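As a minimal sketch of the kind of objective we have in mind (the loss $\ell$, penalty $\Omega$, weight $\lambda$, and function class $\mathcal{F}$ below are illustrative notation, not the course's fixed notation), a regularized estimator solves

\[
\hat{f}_{\lambda} \;\in\; \operatorname*{arg\,min}_{f \in \mathcal{F}}
\ \frac{1}{n}\sum_{i=1}^{n} \ell\bigl(f(x_i), y_i\bigr) \;+\; \lambda\, \Omega(f),
\]

where different penalties recover the methods listed above: $\Omega(f) = \|f\|_{\mathcal{F}}^{2}$ gives Tikhonov regularization (ridge regression for linear $f$ under squared loss), while $\Omega(w) = \|w\|_{1}$ for linear $f_w$ gives the Lasso. The parameter $\lambda \geq 0$ governs the trade-off between fitting the data and controlling complexity.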

Tentative Schedule -- this will change with probability 1

Suggested Articles (this list will be greatly expanded)

Useful Books