Tutorial on online supervised learning with applications to node classification.

Book draft (currently lecture notes from STAT928: Statistical Learning and Sequential Prediction) is getting a major overhaul in 2016.

Slides for Spring School "Structural Inference in Statistics," Sylt, Germany, 2015.

Slides for the Machine Learning Summer School in Kyoto, 2012.

Lecture notes on online convex optimization, written mostly in 2008 at UC Berkeley (latest revision April 2009).

On Detection and Structural Reconstruction of Small-World Random Networks (with T. Cai and T. Liang). Submitted.

Inference via Message Passing on Partially Labeled Stochastic Block Models (with T. Cai and T. Liang). Submitted.

On Equivalence of Martingale Tail Bounds and Deterministic Regret Inequalities (with K. Sridharan). [slides]

Online Nonparametric Regression with General Loss Functions (with K. Sridharan).

Sequential Probability Assignment with Binary Alphabets and Large Classes of Experts (with K. Sridharan).

On Zeroth-Order Stochastic Convex Optimization via Random Walks (with T. Liang and H. Narayanan). [The paper is now mostly superseded by ``Escaping the Local Minima via Simulated Annealing: Optimization of Approximately Convex Functions'']

Computational and Statistical Boundaries for Submatrix Localization in a Large Noisy Matrix (with T. Cai and T. Liang). The Annals of Statistics. Accepted, 2016.

BISTRO: An Efficient Relaxation-Based Method for Contextual Bandits (with K. Sridharan). ICML 2016, to appear.

Adaptive Online Learning (with D. Foster and K. Sridharan). NIPS 2015.

Our framework recovers and improves a wide variety of adaptive bounds, including quantile bounds, second-order data-dependent bounds, and small-loss bounds. In addition, we derive a new type of adaptive bound for online linear optimization based on the spectral norm, as well as a new online PAC-Bayes theorem that holds for countably infinite sets.

Escaping the Local Minima via Simulated Annealing: Optimization of Approximately Convex Functions (with A. Belloni, H. Narayanan, and T. Liang). COLT 2015.

Hierarchies of Relaxations for Online Prediction Problems with Evolving Constraints (with K. Sridharan). COLT 2015.

Learning with Square Loss: Localization through Offset Rademacher Complexity (with T. Liang and K. Sridharan). COLT 2015.

Geometric Inference for General High-Dimensional Linear Inverse Problems (with T. Cai and T. Liang). The Annals of Statistics. Accepted, 2015.

Efficient Sampling from Time-Varying Distributions (with H. Narayanan). Journal of Machine Learning Research. Accepted, 2015.

Distributed Detection: Finite-time Analysis and Impact of Network Topology (with S. Shahrampour and A. Jadbabaie). IEEE Transactions on Automatic Control. Accepted, 2015.

On Online Optimization: Competing with Dynamic Comparators (with A. Jadbabaie, S. Shahrampour, and K. Sridharan). Accepted, 2015.

On Martingale Extensions of Vapnik-Chervonenkis Theory with Applications to Online Learning (with K. Sridharan). To appear in Festschrift in honor of A. Chervonenkis, 2014.

Online Nonparametric Regression (with K. Sridharan). COLT 2014.

In addition to a non-algorithmic study of minimax regret, we exhibit a generic forecaster that enjoys the established optimal rates. We also provide a recipe for designing online regression algorithms that can be computationally efficient. We illustrate the techniques by deriving existing and new forecasters for the case of finite experts and for online linear regression.
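As a rough illustration of the finite-experts case mentioned above (this is a textbook construction, not code from the paper), the classical exponentially-weighted average forecaster aggregates expert predictions under square loss; the learning rate `eta` and all variable names here are illustrative choices:

```python
import math

def exponential_weights(expert_preds, outcomes, eta=0.5):
    """Exponentially-weighted average forecaster for finite experts
    under square loss. expert_preds: list of T rounds, each a list of
    K expert predictions; outcomes: list of T observed values."""
    K = len(expert_preds[0])
    log_w = [0.0] * K  # log-weights, kept in log space for stability
    forecasts = []
    for preds, y in zip(expert_preds, outcomes):
        m = max(log_w)
        w = [math.exp(lw - m) for lw in log_w]
        total = sum(w)
        # predict with the weighted average of expert predictions
        forecasts.append(sum(wi * p for wi, p in zip(w, preds)) / total)
        # exponential update: down-weight experts with large square loss
        log_w = [lw - eta * (p - y) ** 2 for lw, p in zip(log_w, preds)]
    return forecasts
```

With two experts, one always correct and one always wrong, the forecast starts at their average and concentrates on the better expert as rounds accumulate.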

Entropy, Minimax Regret and Minimax Risk (with K. Sridharan and A. Tsybakov). Accepted to Bernoulli Journal, 2014.

Sequential Complexities and Uniform Martingale Laws of Large Numbers (with K. Sridharan and A. Tewari). Probability Theory and Related Fields, 2014.

We demonstrate the utility of these results in two domains. First, we consider the problem of sequential prediction. Analogous to the role of classical empirical process theory in statistical learning (with i.i.d. data), the developed theory is shown to yield precise learning guarantees for the problem of sequential prediction. In particular, the minimax learning rate is shown to be tightly controlled by the universal uniform convergence rates for empirical processes. As a second (direct) application of our results, we provide exponential inequalities for sums of martingale difference sequences in Banach spaces.

Online Learning via Sequential Complexities (with K. Sridharan and A. Tewari). To appear in the Journal of Machine Learning Research, 2014.

Partial monitoring -- classification, regret bounds, and algorithms (with G. Bartók, D. Foster, D. Pál, and C. Szepesvári). Mathematics of Operations Research, vol. 39, no. 4, pp. 967--997, 2014.

On Semi-Probabilistic Universal Prediction (with K. Sridharan). Proceedings of IEEE Information Theory Workshop, 2013. Invited paper.

Optimization, Learning, and Games with Predictable Sequences (with K. Sridharan). NIPS 2013.

Online Learning of Dynamic Parameters in Social Networks (with S. Shahrampour and A. Jadbabaie). NIPS 2013.

Competing with Strategies (with W. Han and K. Sridharan). COLT 2013.

Online Learning with Predictable Sequences (with K. Sridharan). COLT 2013.

We further extend our methods and results to include competing with a set of possible predictable processes (models), that is ``learning'' the predictable process itself concurrently with using it to obtain better regret guarantees. We show that such model selection is possible under various assumptions on the available feedback. Our results suggest a promising direction of further research with potential applications to stock market and time series prediction.
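A minimal one-dimensional sketch of the optimistic (predictable-sequence) gradient update in the unconstrained Euclidean case may help fix ideas; the step size and variable names are illustrative, and this is a simplification rather than the paper's general mirror-descent formulation:

```python
def optimistic_gd(grads, hints, lr=0.1):
    """Optimistic gradient sketch (scalar, unconstrained): play a point
    computed from the hint M_t (a prediction of the upcoming gradient),
    then update the secondary iterate with the observed gradient g_t."""
    y = 0.0          # secondary iterate carried between rounds
    plays = []
    for g, m in zip(grads, hints):
        x = y - lr * m       # step using the predictable part (hint)
        plays.append(x)
        y = y - lr * g       # correct with the actual gradient
    return plays
```

When the hints are exact (`hints == grads`), each played point coincides with the corrected iterate, reflecting the intuition that accurate predictions of the gradient sequence yield smaller regret.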

Localization and Adaptation in Online Learning (with O. Shamir and K. Sridharan). AISTATS 2013.

Stochastic Convex Optimization with Bandit Feedback (with A. Agarwal, D. Foster, D. Hsu, and S. Kakade). SIAM Journal on Optimization, 23-1 (2013), pp. 188--212.

Relax and Randomize: From Value to Algorithms (with O. Shamir and K. Sridharan). NIPS 2012. Full arXiv version.

We present a number of new algorithms, including a family of randomized methods that use the idea of a ``random playout''. Several new versions of the Follow-the-Perturbed-Leader algorithm are presented, as well as methods based on Littlestone's dimension, efficient methods for matrix completion with the trace norm, and algorithms for the problems of transductive learning and prediction with static experts.
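For readers unfamiliar with the Follow-the-Perturbed-Leader idea mentioned above, here is a generic sketch for finite experts (not code from the paper; the exponential perturbation and its scale are illustrative choices):

```python
import random

def ftpl_choices(loss_matrix, scale=1.0, seed=0):
    """Follow-the-Perturbed-Leader sketch for K experts: each round,
    pick the expert minimizing cumulative loss minus a fresh random
    perturbation. loss_matrix[t][k] is expert k's loss at round t."""
    rng = random.Random(seed)
    K = len(loss_matrix[0])
    cum = [0.0] * K              # cumulative losses seen so far
    choices = []
    for losses in loss_matrix:
        # perturb cumulative losses, then follow the (perturbed) leader
        perturbed = [cum[k] - scale * rng.expovariate(1.0) for k in range(K)]
        choices.append(min(range(K), key=lambda k: perturbed[k]))
        for k in range(K):
            cum[k] += losses[k]
    return choices
```

The fresh perturbation each round is what stabilizes the leader and yields sublinear regret against the best fixed expert; the randomized "playout" methods in the paper push this randomization idea further.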

Making Stochastic Gradient Descent Optimal for Strongly Convex Problems (with O. Shamir and K. Sridharan). ICML 2012.

Interior-Point Methods for Full-Information and Bandit Online Learning (with J. Abernethy and E. Hazan). IEEE Transactions on Information Theory, vol 58, issue 7, pp. 4164--4175, 2012.

No Internal Regret via Neighborhood Watch (with D. Foster). AISTATS 2012.

Lower Bounds for Passive and Active Learning (with M. Raginsky). NIPS 2011.

Complexity-Based Approach to Calibration with Checking Rules (with D. Foster, K. Sridharan and A. Tewari). COLT 2011.

Online Learning: Stochastic and Constrained Adversaries (with K. Sridharan and A. Tewari). NIPS 2011.

Information-Based Complexity, Feedback, and Dynamics in Convex Programming (with M. Raginsky). IEEE Transactions on Information Theory, 2011.

Online Learning: Beyond Regret (with K. Sridharan and A. Tewari). COLT 2011.

Online Learning: Random Averages, Combinatorial Parameters, and Learnability (with K. Sridharan and A. Tewari). NIPS 2010.

Random Walk Approach to Regret Minimization (with H. Narayanan). NIPS 2010.

Online Convex Programming in Adaptive Control (with M. Raginsky and S. Yüksel), IEEE Conference on Decision and Control, 2010.

Quantitative Analysis of Systems Using Game-Theoretic Learning (with S. Seshia). ACM Transactions on Embedded Computing Systems, 2010.

Information Complexity of Black-Box Convex Optimization: A New Look via Feedback Information Theory (with M. Raginsky). Allerton Conference on Communication, Control, and Computing, 2009.

A Stochastic View of Optimal Regret through Minimax Duality (with J. Abernethy, A. Agarwal, and P. Bartlett). COLT 2009.

Beating the Adaptive Bandit with High Probability (with J. Abernethy). Information Theory and Applications Workshop, 2009; COLT 2009.

The tech report version with all the proofs is here.

Matrix Regularization Techniques for Online Multitask Learning (with A. Agarwal and P. Bartlett). Technical Report, 2008.

Game-Theoretic Timing Analysis (with S. Seshia). IEEE/ACM Conference on Computer-Aided Design (ICCAD), 2008.

Competing in the Dark: An Efficient Algorithm for Bandit Linear Optimization (with J. Abernethy and E. Hazan). COLT 2008.

High probability regret bounds for online optimization (with P. Bartlett, V. Dani, T. Hayes, S. Kakade, and A. Tewari). COLT 2008.

Optimal Strategies and Minimax Lower Bounds for Online Convex Games (with Jacob Abernethy, Peter Bartlett, and Ambuj Tewari). COLT 2008.

Closing the Gap between Bandit and Full-Information Online Optimization: High-Probability Regret Bound (with Peter Bartlett and Ambuj Tewari), 2007.

Adaptive Online Gradient Descent (with Peter Bartlett and Elad Hazan), NIPS 2007. Technical report version available here.

Online Discovery of Similarity Mappings (with Jacob Abernethy and Peter Bartlett), ICML 2007.

Multitask Learning with Expert Advice (with Jacob Abernethy and Peter Bartlett), COLT 2007. Technical report version available here.

Stability of K-means Clustering (with Andrea Caponnetto). NIPS, 2006.

Stability Properties of Empirical Risk Minimization over Donsker Classes (with Andrea Caponnetto). Journal of Machine Learning Research. Vol. 7 (Dec), 2565--2583, 2006.

(Older version as a technical report: AI Memo 2005-018. May 2005)

Risk Bounds for Mixture Density Estimation (with Dmitry Panchenko and Sayan Mukherjee). ESAIM Probability and Statistics. Vol. 9, 220--229, June 2005.

(Older version as a technical report: AI Memo 2004-001. Jan 2004)

Stability Results in Learning Theory (with Sayan Mukherjee and Tomaso Poggio). Analysis and Applications, Special Issue on Learning Theory. Vol. 3, No. 4, 397--419. October 2005.

On Stability and Concentration of Measure (with Sayan Mukherjee and Tomaso Poggio). CBCL Paper, Massachusetts Institute of Technology, Cambridge, MA, June 2004.

B (with Poggio, T., S. Mukherjee, R. Rifkin and A. Verri). In: Uncertainty in Geometric Computations, J. Winkler and M. Niranjan (eds.), Kluwer Academic Publishers, 131-141, 2002.

Bagging Regularizes (with T. Poggio, R. Rifkin, S. Mukherjee). AI Memo 2002-003, Massachusetts Institute of Technology, Cambridge, MA, February 2002.

Extra-label Information: Experiments with View-based Classification (with G. Yeo and T. Poggio). Proceedings of the Sixth International Conference on Knowledge-Based Intelligent Information & Engineering Systems (KES'2002), Podere d'Ombriano, Crema, Italy, September 16-18, 2002.