Tuesday, January 19
Course outline. Probabilistic formulations of prediction problems.
01-notes.pdf
Thursday, January 21
Plug-in estimators. Empirical risk minimization. Linear threshold functions. Perceptron algorithm.
02-notes.pdf
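A minimal sketch of the perceptron algorithm from this lecture (toy data and function names are illustrative, not from the course notes):

```python
import numpy as np

def perceptron(X, y, max_epochs=100):
    """Perceptron on linearly separable data with labels y in {-1, +1}.
    Cycles through the sample, updating w <- w + y_i x_i on each mistake,
    until a full pass makes no mistakes."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(max_epochs):
        mistakes = 0
        for i in range(n):
            if y[i] * (X[i] @ w) <= 0:   # mistake (or on the boundary)
                w = w + y[i] * X[i]
                mistakes += 1
        if mistakes == 0:
            break
    return w

# Toy separable data: label is the sign of the first coordinate.
X = np.array([[2.0, 1.0], [1.0, -1.0], [-2.0, 0.5], [-1.0, -2.0]])
y = np.array([1, 1, -1, -1])
w = perceptron(X, y)
print(np.sign(X @ w))  # matches y once the algorithm has converged
```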
Tuesday, January 26
Minimax risk bounds. Uniform convergence. Concentration.
03-notes.pdf
Thursday, January 28
Hoeffding and Bernstein inequalities. Bounded differences inequality.
04-notes.pdf
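A quick numerical sanity check of Hoeffding's inequality for bounded variables (the simulation setup is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hoeffding: for i.i.d. X_i in [0, 1], P(|mean - E[X]| >= t) <= 2 exp(-2 n t^2).
n, t, trials = 200, 0.1, 10_000
X = rng.uniform(0.0, 1.0, size=(trials, n))      # E[X] = 0.5
deviations = np.abs(X.mean(axis=1) - 0.5)
empirical = (deviations >= t).mean()
bound = 2 * np.exp(-2 * n * t**2)
print(empirical <= bound)  # True: the bound holds (with room to spare)
```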
Tuesday, February 2
Uniform laws of large numbers. Glivenko-Cantelli theorem. Rademacher complexity.
05-notes.pdf
Thursday, February 4
Growth function. Vapnik-Chervonenkis dimension. Sauer's Lemma.
06-notes.pdf
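Sauer's Lemma bounds the growth function of a VC class by a polynomial in the sample size; a small sketch (the threshold-function example is a standard one, stated here for illustration):

```python
from math import comb

def sauer_bound(n, d):
    """Sauer's Lemma: a class of VC dimension d satisfies
    Pi_H(n) <= sum_{i=0}^{d} C(n, i)."""
    return sum(comb(n, i) for i in range(d + 1))

# One-dimensional thresholds x -> sign(x - a) have VC dimension 1 and
# realize exactly n + 1 labelings on n distinct points, so the bound
# is tight in that case.
n = 10
print(sauer_bound(n, 1))        # 11 == n + 1
print(sauer_bound(n, 3), 2**n)  # polynomial in n, far below 2^n once n > d
```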
Tuesday, February 9
VC-dimension of neural networks.
07-notes.pdf
Thursday, February 11
Covering numbers, packing numbers.
08-notes.pdf
Tuesday, February 16
Chaining. Dudley's entropy integral. Sudakov's theorem.
09-notes.pdf
Thursday, February 18
Pollard's pseudodimension. Convex losses for classification.
10-notes.pdf
Tuesday, February 23
Kernel methods. Reproducing kernel Hilbert spaces. Mercer's theorem.
11-notes.pdf
Thursday, February 25
Support vector machines. Convex optimization.
12-notes.pdf
Tuesday, March 1
Soft margin SVMs. Representer theorem.
13-notes.pdf
Thursday, March 3
Risk bounds for SVMs: Rademacher averages. Kernel ridge regression. Gaussian process regression.
14-notes.pdf
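A minimal sketch of kernel ridge regression with a Gaussian (RBF) kernel. Conventions differ on whether the ridge term is scaled by the sample size; the version below scales it by n, and the data and parameter values are illustrative:

```python
import numpy as np

def kernel_ridge_fit(X, y, lam, gamma=1.0):
    """Kernel ridge regression with RBF kernel k(x, x') = exp(-gamma ||x - x'||^2):
    alpha = (K + lam n I)^{-1} y, predictor f(x) = sum_i alpha_i k(x_i, x)."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * sq)
    n = len(y)
    alpha = np.linalg.solve(K + lam * n * np.eye(n), y)
    return alpha

def kernel_ridge_predict(X_train, alpha, X_test, gamma=1.0):
    sq = ((X_test[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq) @ alpha

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(50, 1))
y = np.sin(3 * X[:, 0])
alpha = kernel_ridge_fit(X, y, lam=1e-3)
pred = kernel_ridge_predict(X, alpha, X)
print(np.max(np.abs(pred - y)))  # training residual, equal to lam * n * |alpha|
```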
Tuesday, March 8
Online learning. Prediction with expert advice. Exponential weights.
15-notes.pdf
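A minimal sketch of the exponential weights algorithm for prediction with expert advice, checked against the standard regret bound sqrt((T/2) ln N) for losses in [0, 1] (the loss sequence is synthetic and illustrative):

```python
import numpy as np

def exponential_weights(losses, eta):
    """Prediction with expert advice: losses is (T, N) with entries in [0, 1].
    Maintains weights w_i proportional to exp(-eta * cumulative loss of i)
    and plays the normalized mixture at each round."""
    T, N = losses.shape
    log_w = np.zeros(N)
    total = 0.0
    for t in range(T):
        p = np.exp(log_w - log_w.max())   # stable softmax of -eta * losses so far
        p /= p.sum()
        total += p @ losses[t]            # expected loss of the mixture
        log_w -= eta * losses[t]
    return total

rng = np.random.default_rng(0)
T, N = 500, 10
losses = rng.uniform(size=(T, N))
losses[:, 0] *= 0.1                       # expert 0 is clearly best
eta = np.sqrt(8 * np.log(N) / T)          # the standard tuning
alg = exponential_weights(losses, eta)
best = losses.sum(axis=0).min()
regret = alg - best
print(regret <= np.sqrt(T * np.log(N) / 2))  # True: the regret bound holds
```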
Thursday, March 10
Exponential weights as Bayesian prediction. Online-to-batch conversion. Optimal regret.
16-notes.pdf
Tuesday, March 15
The boosting problem. AdaBoost as a greedy variational algorithm. Weak learning, strong learning, and large margins.
17-notes.pdf
Thursday, March 17
AdaBoost: other losses, multi-class, dual, iterative projection, and convergence.
18-notes.pdf
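A minimal sketch of AdaBoost with decision stumps on one-dimensional data (the toy dataset and helper names are illustrative): a few rounds of boosting fit a labeling that no single stump can realize.

```python
import numpy as np

def adaboost_stumps(x, y, rounds=20):
    """AdaBoost with threshold stumps on 1-D data, labels y in {-1, +1}.
    Each round picks the stump minimizing weighted error, sets its weight
    alpha = 0.5 log((1 - err) / err), and reweights the sample."""
    n = len(x)
    D = np.full(n, 1.0 / n)
    thresholds = np.concatenate([x - 1e-9, x + 1e-9])  # avoid sign(0) = 0
    stumps = []  # (threshold, sign, alpha)
    for _ in range(rounds):
        best = None
        for th in thresholds:
            for s in (1, -1):
                pred = s * np.sign(x - th)
                err = D[pred != y].sum()
                if best is None or err < best[0]:
                    best = (err, th, s)
        err, th, s = best
        err = min(max(err, 1e-12), 1 - 1e-12)
        alpha = 0.5 * np.log((1 - err) / err)
        pred = s * np.sign(x - th)
        D *= np.exp(-alpha * y * pred)   # exponential reweighting
        D /= D.sum()
        stumps.append((th, s, alpha))
    def predict(xq):
        return np.sign(sum(a * s * np.sign(xq - th) for th, s, a in stumps))
    return predict

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1, 1, -1, -1, 1, 1])   # not separable by any single stump
f = adaboost_stumps(x, y, rounds=3)
print((f(x) == y).mean())  # 1.0: three stumps suffice
```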
Tuesday, March 22
Spring break (no class)
Thursday, March 24
Spring break (no class)
Tuesday, March 29
Model selection. Universal consistency of AdaBoost. Optimal regret and the dual game.
19-notes.pdf
Thursday, March 31
Sequential Rademacher averages.
20-notes.pdf
Tuesday, April 5
Optimal regret for linear games. Online convex optimization.
21-notes.pdf
Thursday, April 7
Online convex optimization: gradient method; regularized minimization; Bregman divergence.
22-notes.pdf
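A minimal sketch of projected online gradient descent on a Euclidean ball, checked against Zinkevich's O(DG sqrt(T)) regret bound with the step sizes eta_t = D/(G sqrt(t)); the loss sequence and constants are illustrative:

```python
import numpy as np

def projected_ogd(z_seq, eta_fn, radius=1.0):
    """Online gradient descent on the Euclidean ball of the given radius,
    for losses f_t(x) = 0.5 * ||x - z_t||^2 (so the gradient is x - z_t):
    x_{t+1} = Pi_K(x_t - eta_t * grad f_t(x_t)). Returns the total loss."""
    d = z_seq.shape[1]
    x = np.zeros(d)
    total = 0.0
    for t, z in enumerate(z_seq, start=1):
        total += 0.5 * np.sum((x - z) ** 2)
        x = x - eta_fn(t) * (x - z)
        norm = np.linalg.norm(x)
        if norm > radius:
            x *= radius / norm          # project back onto the ball
    return total

rng = np.random.default_rng(0)
T, d = 1000, 2
Z = rng.uniform(-0.5, 0.5, size=(T, d))
D, G = 2.0, 1.75                        # diameter and a valid gradient bound
total = projected_ogd(Z, lambda t: D / (G * np.sqrt(t)))
u = Z.mean(axis=0)                      # best fixed point in hindsight
best = 0.5 * np.sum((Z - u) ** 2)
regret = total - best
print(regret <= 1.5 * D * G * np.sqrt(T))  # True: the regret bound holds
```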
Tuesday, April 12
Online convex optimization: mirror descent.
23-notes.pdf
Thursday, April 14
Online convex optimization: regret bounds for mirror descent.
24-notes.pdf
Tuesday, April 19
Logarithmic regret with strongly convex losses.
25-notes.pdf
Thursday, April 21
Adaptive online optimization: adapting to strong convexity. AdaGrad.
26-notes.pdf
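A minimal sketch of the diagonal AdaGrad update, with a sanity check on a fixed quadratic (the objective and parameter values are illustrative):

```python
import numpy as np

def adagrad(grad, x0, steps, eta=1.0, eps=1e-8):
    """Diagonal AdaGrad: each coordinate's step size shrinks with its own
    accumulated squared gradients, x <- x - eta * g / (sqrt(G) + eps)."""
    x = np.asarray(x0, dtype=float).copy()
    G = np.zeros_like(x)                # running sum of squared gradients
    for _ in range(steps):
        g = grad(x)
        G += g ** 2
        x -= eta * g / (np.sqrt(G) + eps)
    return x

# Sanity check on f(x) = 0.5 * ||x - target||^2, whose gradient is x - target.
target = np.array([1.0, -2.0])
x = adagrad(lambda x: x - target, np.zeros(2), steps=2000)
print(np.round(x, 2))  # close to the target [1., -2.]
```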
Tuesday, April 26
Final project presentations
Thursday, April 28
Final project presentations