Kevin Tian
he/him/his ‖ kjtian (at) cs (dot) utexas (dot) edu
I am an Assistant Professor of Computer Science at UT Austin. I study algorithmic problems in modern data science, often spanning continuous optimization and high-dimensional statistics. I also have broad interests in trustworthy machine learning, e.g. robustness, privacy, and fairness.
From 2022-2023, I was a Postdoctoral Researcher in the Machine Learning Foundations group at Microsoft Research. I completed my Ph.D. in Computer Science at Stanford from 2016-2022 (where I was fortunate to be advised by Aaron Sidford), and my B.S. in Mathematics and Computer Science at MIT from 2012-2015.
My work has received generous funding from an NSF Graduate Research Fellowship, a Google Ph.D. Fellowship for Algorithms, Optimizations, and Markets, and a VMware Research Fellowship for the Simons Institute Geometric Methods in Optimization and Sampling program.
Please feel free to get in touch if we share interests. Website designed by Sunny Tian (check out some of her cool work here). Photo credits go to the wonderfully talented Amy Zhang.
My awesome collaborators (roughly chronological): Weihao Kong, Gregory Valiant, Teng Zhang, James Zou, Michael Cohen, Yair Carmon, John Duchi, Arun Jambulapati, Yujia Jin, Yin Tat Lee, Jerry Li, Swati Padmanabhan, Ruoqi Shen, Qijia Jiang, Ilias Diakonikolas, Daniel Kane, Daniel Kongsgaard, Tselil Schramm, Sepehr Assadi, Christopher Musco, Allen Liu, Jonathan Kelner, Nima Anari, Thuy-Duong Vuong, Sivakanth Gopi, Daogao Liu, Adam Bouland, Ewin Tang, Victor Reis, Kiran Shiragur, Sushant Sachdeva, Yibin Zhao, Lunjia Hu, Ankit Pensia, Shourya Pandey, Hilal Asi, Vasilis Kontonis, Konstantinos Stavropoulos, Gautam Chandrasekaran
Team
- Syamantak Kumar, Ph.D. student, co-advised with Purnamrita Sarkar.
- Chutong Yang, Ph.D. student.
- Yusong Zhu, Ph.D. student, co-advised with Eric Price.
- Jonathan Li, M.S. student.
- Jennifer Mickel, undergraduate alumna.
- Saloni Modi, undergraduate.
Teaching
At Stanford, I was a graduate teaching assistant for MS&E 213/CS 269O: Introduction to Optimization Theory, MS&E 313/CS 269G: Almost Linear Time Graph Algorithms, and CS 168: The Modern Algorithmic Toolbox. At MIT, I was a teaching assistant for 6.006: Introduction to Algorithms.
Thesis
- Iterative Methods for Structured Algorithmic Data Science
Stanford University, 2022.
Arthur Samuel Award for Best Doctoral Thesis in Computer Science.
Preprints
- Omnipredicting Single-Index Models with Multi-Index Models
with Lunjia Hu and Chutong Yang
arXiv, 2024.
Publications
- Eulerian Graph Sparsification by Effective Resistance Decomposition
with Arun Jambulapati, Sushant Sachdeva, Aaron Sidford, and Yibin Zhao
Symposium on Discrete Algorithms (SODA), 2025.
- Learning Noisy Halfspaces with a Margin: Massart is No Harder than Random
with Gautam Chandrasekaran, Vasilis Kontonis, and Konstantinos Stavropoulos
Neural Information Processing Systems (NeurIPS), 2024.
Spotlight presentation.
- Semi-Random Matrix Completion via Flow-Based Adaptive Reweighting
with Jonathan A. Kelner, Jerry Li, Allen Liu, and Aaron Sidford
Neural Information Processing Systems (NeurIPS), 2024.
- Testing Calibration in Nearly-Linear Time
with Lunjia Hu, Arun Jambulapati, and Chutong Yang
Neural Information Processing Systems (NeurIPS), 2024.
- Private Stochastic Convex Optimization with Heavy Tails: Near-Optimality from Simple Reductions
with Hilal Asi and Daogao Liu
Neural Information Processing Systems (NeurIPS), 2024.
- Closing the Computational-Query Depth Gap in Parallel Stochastic Convex Optimization
with Arun Jambulapati and Aaron Sidford
Conference on Learning Theory (COLT), 2024.
- Black-Box k-to-1 PCA Reductions: Theory and Applications
with Arun Jambulapati, Syamantak Kumar, Jerry Li, Shourya Pandey, and Ankit Pensia
Conference on Learning Theory (COLT), 2024.
- Linear-Sized Sparsifiers via Near-Linear Time Discrepancy Theory
with Arun Jambulapati and Victor Reis
Symposium on Discrete Algorithms (SODA), 2024.
- A CS guide to the quantum singular value transformation
with Ewin Tang
Symposium on Simplicity in Algorithms (SOSA), 2024.
- Revisiting Area Convexity: Faster Box-Simplex Games and Spectrahedral Generalizations
with Arun Jambulapati
Neural Information Processing Systems (NeurIPS), 2023.
- Structured Semidefinite Programming for Recovering Structured Preconditioners
with Arun Jambulapati, Jerry Li, Christopher Musco, Kirankumar Shiragur, and Aaron Sidford
Neural Information Processing Systems (NeurIPS), 2023.
- Matrix Completion in Almost-Verification Time
with Jonathan A. Kelner, Jerry Li, Allen Liu, and Aaron Sidford
Foundations of Computer Science (FOCS), 2023.
- ReSQueing Parallel and Private Stochastic Convex Optimization
with Yair Carmon, Arun Jambulapati, Yujia Jin, Yin Tat Lee, Daogao Liu, and Aaron Sidford
Foundations of Computer Science (FOCS), 2023.
- Algorithmic Aspects of the Log-Laplace Transform and a Non-Euclidean Proximal Sampler
with Sivakanth Gopi, Yin Tat Lee, Daogao Liu, and Ruoqi Shen
Conference on Learning Theory (COLT), 2023.
- Semi-Random Sparse Recovery in Nearly-Linear Time
with Jonathan A. Kelner, Jerry Li, Allen Liu, and Aaron Sidford
Conference on Learning Theory (COLT), 2023.
- Quadratic Speedups in Parallel Sampling from Determinantal Distributions
with Nima Anari, Callum Burgess, and Thuy-Duong Vuong
Symposium on Parallelism in Algorithms and Architectures (SPAA), 2023.
- Quantum Speedups for Zero-Sum Games via Improved Dynamic Gibbs Sampling
with Adam Bouland, Yosheb Getachew, Yujia Jin, and Aaron Sidford
International Conference on Machine Learning (ICML), 2023.
- Private Convex Optimization in General Norms
with Sivakanth Gopi, Yin Tat Lee, Daogao Liu, and Ruoqi Shen
Symposium on Discrete Algorithms (SODA), 2023.
- Sharper Rates for Separable Minimax and Finite Sum Optimization via Primal-Dual Extragradient Methods
with Yujia Jin and Aaron Sidford
Conference on Learning Theory (COLT), 2022.
- Regularized Box-Simplex Games and Dynamic Decremental Bipartite Matching
with Arun Jambulapati, Yujia Jin, and Aaron Sidford
International Colloquium on Automata, Languages, and Programming (ICALP), 2022.
- Clustering Mixture Models in Almost-Linear Time via List-Decodable Mean Estimation
with Ilias Diakonikolas, Daniel M. Kane, Daniel Kongsgaard, and Jerry Li
Symposium on Theory of Computing (STOC), 2022.
- Semi-Streaming Bipartite Matching in Fewer Passes and Optimal Space
with Sepehr Assadi, Arun Jambulapati, Yujia Jin, and Aaron Sidford
Symposium on Discrete Algorithms (SODA), 2022.
- Lower Bounds on Metropolized Sampling Methods for Well-Conditioned Distributions
with Yin Tat Lee and Ruoqi Shen
Neural Information Processing Systems (NeurIPS), 2021.
Oral presentation (top 1% of submissions).
- List-Decodable Mean Estimation in Nearly-PCA Time
with Ilias Diakonikolas, Daniel M. Kane, Daniel Kongsgaard, and Jerry Li
Neural Information Processing Systems (NeurIPS), 2021.
Spotlight presentation (top 3% of submissions).
- Robust Regression Revisited: Acceleration and Improved Estimation Rates
with Arun Jambulapati, Jerry Li, and Tselil Schramm
Neural Information Processing Systems (NeurIPS), 2021.
- Structured Logconcave Sampling with a Restricted Gaussian Oracle
with Yin Tat Lee and Ruoqi Shen
Conference on Learning Theory (COLT), 2021.
- Relative Lipschitzness in Extragradient Methods and a Direct Recipe for Acceleration
with Michael B. Cohen and Aaron Sidford
Innovations in Theoretical Computer Science (ITCS), 2021.
- Acceleration with a Ball Optimization Oracle
with Yair Carmon, Arun Jambulapati, Qijia Jiang, Yujia Jin, Yin Tat Lee, and Aaron Sidford
Neural Information Processing Systems (NeurIPS), 2020.
Oral presentation (top 1% of submissions).
- Robust Sub-Gaussian Principal Component Analysis and Width-Independent Schatten Packing
with Arun Jambulapati and Jerry Li
Neural Information Processing Systems (NeurIPS), 2020.
Spotlight presentation (top 3% of submissions).
- Coordinate Methods for Matrix Games
with Yair Carmon, Yujia Jin, and Aaron Sidford
Foundations of Computer Science (FOCS), 2020.
- Logsmooth Gradient Concentration and Tighter Runtimes for Metropolized Hamiltonian Monte Carlo
with Yin Tat Lee and Ruoqi Shen
Conference on Learning Theory (COLT), 2020.
- Variance Reduction for Matrix Games
with Yair Carmon, Yujia Jin, and Aaron Sidford
Neural Information Processing Systems (NeurIPS), 2019.
Oral presentation (top 0.5% of submissions).
- A Direct Õ(1/ϵ) Iteration Parallel Algorithm for Optimal Transport
with Arun Jambulapati and Aaron Sidford
Neural Information Processing Systems (NeurIPS), 2019.
- A Rank-1 Sketch for Matrix Multiplicative Weights
with Yair Carmon, John C. Duchi, and Aaron Sidford
Conference on Learning Theory (COLT), 2019.
- Coordinate Methods for Accelerating ℓ∞ Regression and Faster Approximate Maximum Flow
with Aaron Sidford
Foundations of Computer Science (FOCS), 2018.
Invited to the FOCS SICOMP Special Issue.
- CoVeR: Learning Covariate-Specific Vector Representations with Tensor Decompositions
with Teng Zhang and James Zou
International Conference on Machine Learning (ICML), 2018.
- Learning Populations of Parameters
with Weihao Kong and Gregory Valiant
Neural Information Processing Systems (NeurIPS), 2017.
Here is a humorous informational video about the paper.
Earlier published work
- A novel k-mer set memory (KSM) motif representation improves regulatory variant prediction
with Yuchun Guo, Haoyang Zeng, Xiaoyun Guo, and David K. Gifford
Genome Research, 2018.
- Predicting gene expression in massively parallel reporter assays: a comparative study
with Anat Kreimer, Haoyang Zeng, et al.
Human Mutation, 2017.
- K-mer Set Memory (KSM) Motif Representation Enables Accurate Prediction of the Impact of Regulatory Variants
with Yuchun Guo, Haoyang Zeng, and David K. Gifford
International Conference on Research in Computational Molecular Biology (RECOMB), 2017.
- On the Power Dominating Sets of Hypercubes
with Nathaniel Dean, Alexandra Ilic, Ignacio Ramirez, and Jian Shen
International Conference on Computational Science and Engineering (CSE), 2011.
Other
- Positive Semidefinite Programming: Mixed, Parallel, and Width-Independent
with Arun Jambulapati, Yin Tat Lee, Jerry Li, and Swati Padmanabhan
Manuscript.
A preliminary version was presented at the Symposium on Theory of Computing (STOC), 2020. We later discovered an error in Section 5, which we currently do not know how to fix.
Professional service
In 2018-19, I ran the Stanford Optimization for Algorithm Design reading group. I was the Stanford Theory Lunch organizer for Spring 2020. Together with Sinho Chewi, I co-organized the Complexity of Sampling reading group at the Simons Institute in Fall 2021.
Program committee member: COLT 2023, STOC 2024, COLT 2024, ALT 2025
Workshop organizer: SIAM Conference on Optimization 2023, INFORMS Optimization Society Conference 2024
Conference reviewer: STOC, COLT, ICALP, ICML, NeurIPS, SODA, STACS, FOCS, SOSA, ITCS
Journal reviewer: Bernoulli, Mathematical Programming, Journal of Machine Learning Research, Transactions on Information Theory, SIAM Journal on Optimization, Journal of the ACM, The Computer Journal, Journal of Optimization Theory and Applications, Transactions on Quantum Computing, Probability Theory and Related Fields, Information and Computation, Mathematics of Operations Research