CS 395T, Spring 2024
Description
Algorithm design has traditionally been studied from a discrete perspective. Increasingly, however, modern algorithm design has benefited from adopting a continuous perspective and using tools developed in the study of continuous mathematics. This course provides an introduction to the continuous algorithmic toolkit. In the first part, we survey topics in optimization theory (e.g., minimax and stochastic optimization, semidefinite programming, interior-point methods, and provable methods for structured nonconvex problems). In the second part, we develop methods for sampling from continuous spaces, from general-purpose logconcave samplers to recent analysis techniques based on stochastic differential equations and optimal transport. In the third part, we cover algorithms for high-dimensional statistics (e.g., PCA, learning mixture models, and robust parameter estimation), concluding with an overview of the interplay between differential privacy and algorithm design.
Course information: Here.
Time and location: 3:30 to 5:00 PM, Tuesdays and Thursdays (GDC 6.202).
Instructor:
Kevin Tian, kjtian (at) cs (dot) utexas (dot) edu.
Office hours: 3:30 to 5:30 PM, Mondays (GDC 4.720).
TA:
Shourya Pandey, shouryap (at) utexas (dot) edu.
Office hours: 2:00 to 3:00 PM, Wednesdays (Station 1, GDC 1.302), and 12:00 to 1:00 PM, Fridays (Zoom).
Assignments
Assignments (due Thursdays at the start of class) will be posted below as they become available. Unless we have discussed it and I have specified otherwise, homework must be turned in by hand at the start of class or submitted electronically on Canvas by then; late homework is not accepted. Send me an email to discuss any exceptions.
Homework I
Homework II
Homework III
Homework IV
Homework V
Homework VI
Syllabus
| Dates | Topic | Notes | Announcements |
| --- | --- | --- | --- |
| | Notation | | |
| 1/16, 1/18 | Convexity, logconcavity, and continuous algorithms | | HW I out |
| 1/23, 1/25 | Gradient descent | | |
| 1/30, 2/1 | Mirror descent | | HW I due, HW II out |
| 2/6 | Minimax optimization | | |
| 2/8 | Matrix analysis and concentration | | |
| 2/13 | Polynomial approximations | | |
| 2/15 | Matrix multiplicative weights | | HW II due, HW III out |
| 2/20 | Linear regression | | |
| 2/22 | Sparse recovery | | |
| 2/27 | Interior-point methods | | |
| 2/29 | Spectral graph theory | | |
| 3/5, 3/7, 3/19 | Ball walk | PDF, Reference | HW III due, HW IV out |
| 3/21 | Isotropic rounding | Reference | |
| 3/26, 3/28, 4/2 | Stochastic calculus | | HW IV due, HW V out |
| 4/4 | Langevin algorithms | | |
| 4/9 | Principal component analysis | Reference | |
| 4/11 | Robust statistics | Reference, Reference | HW V due, HW VI out |
| 4/16 | Mixture models | Reference, Reference | |
| 4/18 | Generalized linear models | Reference, Reference | |
| 4/23 | Differential privacy | | |
| 4/25 | Adaptive data analysis | Reference | HW VI due |
Resources
Concentration Inequalities: A Nonasymptotic Theory of Independence, Boucheron, Lugosi, and Massart.
Convex Optimization: Algorithms and Complexity, Bubeck.
Log-Concave Sampling, Chewi.
Recent Advances in Algorithmic High-Dimensional Statistics, Diakonikolas and Kane.
Lecture Notes on Statistics and Information Theory, Duchi.
Probability: Theory and Examples, Durrett.
The Algorithmic Foundations of Differential Privacy, Dwork and Roth.
Algorithms for Private Data Analysis, Kamath.
Spectral Algorithms, Kannan and Vempala.
The Art and Science of Positive Definite Matrices, Lee.
Techniques in Optimization and Sampling, Lee and Vempala.
Markov Chains and Mixing Times, Levin, Peres, and Wilmer.
Robustness in Machine Learning, Li.
Algorithmic Aspects of Machine Learning, Moitra.
Problem Complexity and Method Efficiency in Optimization, Nemirovski and Yudin.
Lectures on Convex Optimization, Nesterov.
Interior-Point Polynomial Algorithms in Convex Programming, Nesterov and Nemirovskii.
Stochastic Differential Equations: An Introduction with Applications, Øksendal.
Sublinear Algorithms, Price.
Faster Algorithms via Approximation Theory, Sachdeva and Vishnoi.
Online Learning and Online Convex Optimization, Shalev-Shwartz.
Optimization Algorithms, Sidford.
Spectral and Algebraic Graph Theory, Spielman.
Convex Optimization, Tibshirani.
Approximation Theory and Approximation Practice, Trefethen.
An Introduction to Matrix Concentration Inequalities, Tropp.
Algorithmic Convex Geometry, Vempala.
Dynamic Algebraic Algorithms, van den Brand.
Probability in High Dimension, van Handel.
High-Dimensional Probability: An Introduction with Applications in Data Science, Vershynin.
Optimal Transport: Old and New, Villani.
Sketching as a Tool for Numerical Linear Algebra, Woodruff.