CS 395T, Spring 2025
Description
Algorithm design has traditionally been studied from a discrete perspective. Increasingly, however, modern algorithm design has benefited from adopting a continuous perspective and using tools developed in the study of continuous mathematics. This course provides an introduction to the continuous algorithmic toolkit. In the first part, we survey topics in optimization theory (e.g., minimax and stochastic optimization, interior-point methods, and provable methods for structured nonconvex problems). We also highlight connections to approximation theory and numerical linear algebra. In the second part, we develop methods for sampling from continuous spaces, ranging from general-purpose logconcave samplers to recent analysis techniques based on stochastic differential equations and optimal transport, with applications to modern diffusion models.
Course information: Here.
Time and location: 3:30 to 5:00 PM, Tuesdays and Thursdays (GDC 6.202).
Instructor:
Kevin Tian, kjtian (at) cs (dot) utexas (dot) edu.
Office hours: 3:30 to 5:00 PM, Mondays (GDC 4.720).
TA:
Chutong Yang, cyang98 (at) utexas (dot) edu.
Office hours: 1:00 to 2:00 PM, Tuesdays and Thursdays (Station 1, GDC 1.302).
Assignments
Assignments (due Thursdays at the start of class) will be posted below as they become available. Unless we have discussed an exception and I have specified otherwise, homework must be turned in by hand at the start of class or submitted electronically on Canvas by that time; otherwise it will not be accepted. Send me an email to discuss any exceptions.
Homework I
Syllabus
Dates | Topic | Notes | Announcements |
---|---|---|---|
Notation | | | |
1/14, 1/16 | Convexity, logconcavity, and continuous algorithms | | HW I out |
1/21, 1/23 | Gradient descent | | |
1/28, 1/30 | Mirror descent | | HW I due, HW II out |
2/4 | Minimax optimization | | |
2/6 | High-order methods | | |
2/11, 2/13 | Matrix analysis and concentration | | |
2/18 | Polynomial approximations | | |
2/20 | Matrix multiplicative weights | | HW II due, HW III out |
2/25 | Linear regression | | |
2/27 | Low-rank approximation | | |
3/4 | Sparse recovery | | |
3/6 | Generalized linear models | | |
3/11 | Interior-point methods | | |
3/13 | Inverse maintenance | | HW III due, HW IV out |
3/25 | Spectral graph theory | | |
3/27, 4/1 | Continuous random walks | | |
4/3, 4/8 | Stochastic calculus | | |
4/10 | Langevin algorithms | | HW IV due, HW V out |
4/15 | Restricted Gaussian dynamics | | |
4/17 | General logconcave sampling | | |
4/22 | Diffusion models | | |
4/24 | Stochastic localization | | HW V due |
Resources
Concentration Inequalities: A Nonasymptotic Theory of Independence, Boucheron, Lugosi, and Massart.
Convex Optimization: Algorithms and Complexity, Bubeck.
Log-Concave Sampling, Chewi.
Recent Advances in Algorithmic High-Dimensional Statistics, Diakonikolas and Kane.
Lecture Notes on Statistics and Information Theory, Duchi.
Probability: Theory and Examples, Durrett.
The Algorithmic Foundations of Differential Privacy, Dwork and Roth.
Algorithms for Private Data Analysis, Kamath.
Spectral Algorithms, Kannan and Vempala.
The Art and Science of Positive Definite Matrices, Lee.
Techniques in Optimization and Sampling, Lee and Vempala.
Markov Chains and Mixing Times, Levin, Peres, and Wilmer.
Robustness in Machine Learning, Li.
Algorithmic Aspects of Machine Learning, Moitra.
Interior-Point Polynomial Algorithms in Convex Programming, Nemirovski and Nesterov.
Problem Complexity and Method Efficiency in Optimization, Nemirovski and Yudin.
Lectures on Convex Optimization, Nesterov.
Stochastic Differential Equations: An Introduction with Applications, Øksendal.
Sublinear Algorithms, Price.
Faster Algorithms via Approximation Theory, Sachdeva and Vishnoi.
Online Learning and Online Convex Optimization, Shalev-Shwartz.
Optimization Algorithms, Sidford.
Spectral and Algebraic Graph Theory, Spielman.
Convex Optimization, Tibshirani.
Approximation Theory and Approximation Practice, Trefethen.
An Introduction to Matrix Concentration Inequalities, Tropp.
Algorithmic Convex Geometry, Vempala.
Dynamic Algebraic Algorithms, van den Brand.
Probability in High Dimension, van Handel.
High-Dimensional Probability: An Introduction with Applications in Data Science, Vershynin.
Optimal Transport: Old and New, Villani.
Sketching as a Tool for Numerical Linear Algebra, Woodruff.