Activity

  • Principal Investigator: Jan 2020 - present

Preprints

  • An Empirical Investigation of the Role of Pre-training in Lifelong Learning
    Sanket Vaibhav Mehta, Darshan Patil, Sarath Chandar, and Emma Strubell
    In arXiv, 2021.
    [arXiv]

  • Maximum Reward Formulation In Reinforcement Learning
    Sai Krishna Gottipati, Yashaswi Pathak, Rohan Nuttall, Raviteja Chunduru, Ahmed Touati, Sriram Ganapathi Subramanian, Matthew E Taylor, and Sarath Chandar
    In arXiv, 2020.
    [arXiv]

Conference and Journal Papers

2022

  1. TAG: Task-based Accumulated Gradients for Lifelong Learning
    Pranshu Malviya, Balaraman Ravindran, and Sarath Chandar
    Conference on Lifelong Learning Agents (CoLLAs), 2022.
    [arXiv], [code]

  2. Combining Reinforcement Learning and Constraint Programming for Sequence-Generation Tasks with Hard Constraints
    Daphné Lafleur, Sarath Chandar, and Gilles Pesant
    Principles and Practice of Constraint Programming (CP), 2022.

  3. Towards Evaluating Adaptivity of Model-Based Reinforcement Learning Methods
    Yi Wan*, Ali Rahimi-Kalahroudi*, Janarthanan Rajendran, Ida Momennejad, Sarath Chandar, and Harm van Seijen
    International Conference on Machine Learning (ICML), 2022.
    [arXiv], [code]

  4. Post-hoc Interpretability for Neural NLP: A Survey
    Andreas Madsen, Siva Reddy, and Sarath Chandar
    ACM Computing Surveys, 2022.
    [arXiv]

  5. Local Structure Matters Most: Perturbation Study in NLU
    Louis Clouâtre, Prasanna Parthasarathi, Amal Zouaq, and Sarath Chandar
    Findings of ACL, 2022.
    [arXiv]

  6. Memory Augmented Optimizers for Deep Learning
    Paul-Aymeric McRae, Prasanna Parthasarathi, Mido Assran, and Sarath Chandar
    International Conference on Learning Representations (ICLR), 2022.
    [OpenReview], [code]

  7. PatchUp: A Feature-Space Block-Level Regularization Technique for Convolutional Neural Networks
    Mojtaba Faramarzi, Mohammad Amini, Akilesh Badrinaaraayanan, Vikas Verma, and Sarath Chandar
    AAAI Conference on Artificial Intelligence (AAAI), 2022.
    [arXiv], [code]

2021

  1. MLMLM: Link Prediction with Mean Likelihood Masked Language Model
    Louis Clouâtre, Philippe Trempe, Amal Zouaq, and Sarath Chandar
    Findings of ACL, 2021.
    [arXiv]

  2. Do Encoder Representations of Generative Dialogue Models Encode Sufficient Information about the Task?
    Prasanna Parthasarathi, Sarath Chandar, and Joelle Pineau
    Proceedings of the 22nd Annual SIGdial Meeting on Discourse and Dialogue, 2021.

  3. A Brief Study on the Effects of Training Generative Dialogue Models with a Semantic Loss
    Prasanna Parthasarathi, Mohamed Abdelsalam, Joelle Pineau, and Sarath Chandar
    Proceedings of the 22nd Annual SIGdial Meeting on Discourse and Dialogue, 2021.

  4. Continuous Coordination As a Realistic Scenario for Lifelong Learning
    Hadi Nekoei, Akilesh Badrinaaraayanan, Aaron Courville, and Sarath Chandar
    International Conference on Machine Learning (ICML), 2021.
    [arXiv], [code]

  5. A Survey of Data Augmentation Approaches for NLP
    Steven Y. Feng, Varun Gangal, Jason Wei, Sarath Chandar, Soroush Vosoughi, Teruko Mitamura, and Eduard Hovy
    Findings of ACL, 2021.
    [arXiv]

  6. Towered Actor Critic for Handling Multiple Action Types in Reinforcement Learning For Drug Discovery
    Sai Krishna Gottipati, Yashaswi Pathak, Boris Sattarov, Sahir, Rohan Nuttall, Mohammad Amini, Matthew E. Taylor, and Sarath Chandar
    AAAI Conference on Artificial Intelligence (AAAI), 2021.

  7. IIRC: Incremental Implicitly-Refined Classification
    Mohamed Abdelsalam, Mojtaba Faramarzi, Shagun Sodhani, and Sarath Chandar
    Conference on Computer Vision and Pattern Recognition (CVPR), 2021.
    [arXiv], [code], [website], [PyPI], [docs]

2020

  1. The LoCA Regret: A Consistent Metric to Evaluate Model-Based Behavior in Reinforcement Learning
    Harm van Seijen, Hadi Nekoei, Evan Racah, and Sarath Chandar
    Neural Information Processing Systems (NeurIPS), 2020.
    [arXiv], [code]

  2. Learning To Navigate The Synthetically Accessible Chemical Space Using Reinforcement Learning
    Sai Krishna Gottipati*, Boris Sattarov*, Sufeng Niu, Yashaswi Pathak, Haoran Wei, Shengchao Liu, Karam MJ Thomas, Simon Blackburn, Connor W Coley, Jian Tang, Sarath Chandar, and Yoshua Bengio
    International Conference on Machine Learning (ICML), 2020.
    [arXiv]