Rahul Kidambi

I am a graduate student advised by Prof. Sham M. Kakade, studying machine learning at the University of Washington, Seattle.

[CV] [GitHub]

contact: rkidambi AT uw DOT edu


Research:

I am interested in the design and analysis of practical algorithms for large-scale machine learning, viewed through the lens of computation, statistics, and optimization. I am currently working on practically effective learning algorithms for deep learning and non-convex optimization.

Previously, I spent time at Microsoft Research, India, working on problems at the intersection of Structured Prediction, Semi-Supervised Learning and Active Learning.


Publications:

Asterisk [*] indicates alphabetical ordering of authors.

  • On the insufficiency of existing Momentum schemes for Stochastic Optimization,
    Rahul Kidambi, Praneeth Netrapalli, Prateek Jain and Sham M. Kakade.
    In International Conference on Learning Representations (ICLR), 2018.
    Oral presentation (23 of 1002 submissions, ≈2% acceptance rate).
    ArXiv manuscript, abs/1803.05591, March 2018.
    [Open Review] [Code] [Slides (pptx)] [Poster (pdf)]

  • Leverage Score Sampling for Faster Accelerated Regression and ERM, [*]
    Naman Agarwal, Sham M. Kakade, Rahul Kidambi, Yin Tat Lee, Praneeth Netrapalli and Aaron Sidford.
    ArXiv manuscript, abs/1711.08426, November 2017.

  • A Markov Chain Theory Approach to Characterizing the Minimax Optimality of Stochastic Gradient Descent (for Least Squares), [*]
    Prateek Jain, Sham M. Kakade, Rahul Kidambi, Praneeth Netrapalli, Venkata Krishna Pillutla and Aaron Sidford.
    Invited paper at FSTTCS 2017.
    ArXiv manuscript, abs/1710.09430, October 2017.

  • Accelerating Stochastic Gradient Descent, [*]
    Prateek Jain, Sham M. Kakade, Rahul Kidambi, Praneeth Netrapalli and Aaron Sidford.
    In Conference on Learning Theory (COLT), 2018.
    ArXiv manuscript, abs/1704.08227, April 2017.
    [COLT proceedings (PMLR vol. 75)] [Prateek's Slides (pptx)] [Poster (pdf)] [Video (Sham at MSR)]

  • Parallelizing Stochastic Gradient Descent for Least Squares Regression: mini-batching, averaging, and model misspecification [1], [*]
    Prateek Jain, Sham M. Kakade, Rahul Kidambi, Praneeth Netrapalli and Aaron Sidford.
    Appeared in Journal of Machine Learning Research (JMLR), Vol. 18 (223), July 2018.
    ArXiv manuscript, abs/1610.03774, October 2016. Updated, April 2018.
    [JMLR page]

The dblp listing provides a complete set of my papers.


[1] Previously titled "Parallelizing Stochastic Approximation Through Mini-Batching and Tail Averaging."

Academic Service:

  • Conference reviewing/sub-reviewing: ISMB 2012, NIPS 2016, COLT 2017, COLT 2018, NIPS 2018.
  • Journal refereeing: Journal of Machine Learning Research (JMLR), 2015 and 2018; Electronic Journal of Statistics (EJS), 2017.


Teaching:

I have been a teaching assistant for the following classes:

  • CSE 547/STAT 548: Machine Learning for Big Data. (Spring 2018).
  • EE 514a: Information Theory-I (Autumn 2015).
  • EE 215: Fundamentals of Electrical Engineering (Autumn 2014, Winter 2015).


Miscellaneous:

Football • Basketball • Travel • Music • Running.