Rahul Kidambi

I am a graduate student at the University of Washington, Seattle, where I study Machine Learning, advised by Prof. Sham M. Kakade.

CV | GitHub | Scholar

contact: rkidambi AT uw DOT edu


Research:

I am interested in the design and analysis of practical algorithms for large-scale machine learning, viewed through the lens of computation, statistics, and optimization.

Previously, I spent time at Microsoft Research India, working on problems at the intersection of structured prediction, semi-supervised learning, and active learning.


Publications:

Asterisk [*] indicates alphabetical ordering of authors.

Recent Preprints:

  • Leverage Score Sampling for Faster Accelerated Regression and ERM, [*]
    Naman Agarwal, Sham M. Kakade, Rahul Kidambi, Yin Tat Lee, Praneeth Netrapalli and Aaron Sidford.
    ArXiv manuscript, abs/1711.08426, November 2017.

Papers:

  • On the insufficiency of existing Momentum schemes for Stochastic Optimization,
    Rahul Kidambi, Praneeth Netrapalli, Prateek Jain and Sham M. Kakade.
    In International Conference on Learning Representations (ICLR), 2018.
    Oral Presentation; 23/1002 submissions ≈ 2% Acceptance Rate.
    Also an invited paper at the Information Theory and Applications (ITA) workshop, San Diego, February 2018.
    ArXiv manuscript, abs/1803.05591, March 2018.
    [Open Review] [ITA version] [Code] [Slides (pptx)] [Poster (pdf)]

  • A Markov Chain Theory Approach to Characterizing the Minimax Optimality of Stochastic Gradient Descent (for Least Squares), [*]
    Prateek Jain, Sham M. Kakade, Rahul Kidambi, Praneeth Netrapalli, Venkata Krishna Pillutla and Aaron Sidford.
    Invited paper at FSTTCS 2017.
    ArXiv manuscript, abs/1710.09430, October 2017.

  • Accelerating Stochastic Gradient Descent for Least Squares Regression², [*]
    Prateek Jain, Sham M. Kakade, Rahul Kidambi, Praneeth Netrapalli and Aaron Sidford.
    In Conference on Learning Theory (COLT), 2018.
    ArXiv manuscript, abs/1704.08227, April 2017.
    [COLT proceedings] [Prateek's Slides (pptx)] [Poster (pdf)] [Video (Sham at MSR)]

  • Parallelizing Stochastic Gradient Descent for Least Squares Regression: mini-batching, averaging, and model misspecification¹, [*]
    Prateek Jain, Sham M. Kakade, Rahul Kidambi, Praneeth Netrapalli and Aaron Sidford.
    Appeared in Journal of Machine Learning Research (JMLR), Vol. 18 (223), July 2018.
    ArXiv manuscript, abs/1610.03774, October 2016. Updated, April 2018.
    [JMLR proceedings]

  • Submodular Hamming Metrics,
    Jennifer Gillenwater, Rishabh K. Iyer, Bethany Lusch, Rahul Kidambi, Jeff A. Bilmes.
    Appeared in Neural Information Processing Systems (NeurIPS), December 2015.
    Spotlight presentation.
    ArXiv manuscript, abs/1511.02163, November 2015.
    [NeurIPS proceedings]

  • On Shannon capacity and causal estimation,
    Rahul Kidambi and Sreeram Kannan.
    Invited paper at Allerton Conference on Communication, Control, and Computing, 2015.
    [Allerton proceedings]

  • Deformable trellises on factor graphs for robust microtubule tracking in clutter,
    Rahul Kidambi, Min-Chi Shih, Kenneth Rose.
    Appeared in International Symposium on Biomedical Imaging (ISBI), May 2012.
    [ISBI proceedings]

(Selected) Past Work:

  • A Structured Prediction Approach for Missing Value Imputation,
    Rahul Kidambi, Vinod Nair, Sundararajan Sellamanickam, S. Sathiya Keerthi.
    ArXiv manuscript, abs/1311.2137, November 2013.

  • A Quantitative Evaluation Framework for Missing Value Imputation Algorithms,
    Vinod Nair, Rahul Kidambi, Sundararajan Sellamanickam, S. Sathiya Keerthi, Johannes Gehrke, Vijay Narayanan.
    ArXiv manuscript, abs/1311.2276, November 2013.

dblp maintains a listing of my papers.


    1. Previously titled "Parallelizing Stochastic Approximation Through Mini-Batching and Tail Averaging."
    2. Previously titled "Accelerating Stochastic Gradient Descent."

Academic Service:

  • Conference Reviewing/Sub-Reviewing: ISMB 2012, NeurIPS 2016, COLT 2017, COLT 2018, NeurIPS 2018, AISTATS 2019, ICLR 2019, ICML 2019.
  • Journal Refereeing: Journal of Machine Learning Research (JMLR), 2015 and 2018; Electronic Journal of Statistics (EJS), 2017; IEEE Transactions on Information Theory, 2018.


Teaching:

    I have been a Teaching Assistant for the following classes:

  • CSE 547/STAT 548: Machine Learning for Big Data (Spring 2018).
  • EE 514a: Information Theory I (Autumn 2015).
  • EE 215: Fundamentals of Electrical Engineering (Autumn 2014, Winter 2015).


Miscellaneous:

    • Football • Basketball • Travel • Music • Running.