Preprints

  • Are self-explanations from Large Language Models faithful?
    Andreas Madsen, Sarath Chandar, and Siva Reddy
    In arXiv, 2024.
    #NLP
    [arXiv], [code]

  • Faithfulness Measurable Masked Language Models
    Andreas Madsen, Siva Reddy, and Sarath Chandar
    In arXiv, 2023.
    #NLP
    [arXiv], [code], [YouTube]

  • Lookbehind Optimizer: k steps back, 1 step forward
    Gonçalo Mordido, Pranshu Malviya, Aristide Baratin, and Sarath Chandar
    In arXiv, 2023.
    #DL
    [arXiv]

  • Promoting Exploration in Memory-Augmented Adam using Critical Momenta
    Pranshu Malviya, Gonçalo Mordido, Aristide Baratin, Reza Babanezhad Harikandeh, Jerry Huang, Simon Lacoste-Julien, Razvan Pascanu, and Sarath Chandar
    In arXiv, 2023.
    #DL
    [arXiv]

  • Should We Attend More or Less? Modulating Attention for Fairness
    Abdelrahman Zayed, Gonçalo Mordido, Samira Shabanian, and Sarath Chandar
    In arXiv, 2023.
    #NLP
    [arXiv]

  • Segmentation of Multiple Sclerosis Lesions across Hospitals: Learn Continually or Train from Scratch?
    Naga Karthik Enamundram, Anne Kerbrat, Pierre Labauge, Tobias Granberg, Jason Talbott, Daniel S. Reich, Massimo Filippi, Rohit Bakshi, Virginie Callot, Sarath Chandar, and Julien Cohen-Adad
    In arXiv, 2022.
    #DL
    [arXiv], [code]

  • Sharpness-Aware Training for Accurate Inference on Noisy DNN Accelerators
    Gonçalo Mordido, Sarath Chandar, and François Leduc-Primeau
    Conference on Lifelong Learning Agents (CoLLAs) Workshop, 2022.
    [Edge Intelligence Workshop (EIW), 2022]
    #DL
    [arXiv]

  • An Introduction to Lifelong Supervised Learning
    Shagun Sodhani, Mojtaba Faramarzi, Sanket Vaibhav Mehta, Pranshu Malviya, Mohamed Abdelsalam, Janarthanan Rajendran, and Sarath Chandar
    In arXiv, 2022.
    #DL
    [arXiv]

  • RECOVER: Sequential Model Optimization Platform for Combination Drug Repurposing Identifies Novel Synergistic Compounds in vitro
    Paul Bertin, Jarrid Rector-Brooks, Deepak Sharma, Thomas Gaudelet, Andrew Anighoro, Torsten Gross, Francisco Martínez-Peña, Eileen L. Tang, Suraj M S, Cristian Regep, Jeremy Hayter, Maksym Korablyov, Nicholas Valiante, Almer van der Sloot, Mike Tyers, Charles Roberts, Michael M. Bronstein, Luke L. Lairson, Jake P. Taylor-King, and Yoshua Bengio
    In arXiv, 2022.
    #DL
    [arXiv], [code]

  • Maximum Reward Formulation In Reinforcement Learning
    Sai Krishna Gottipati, Yashaswi Pathak, Rohan Nuttall, Raviteja Chunduru, Ahmed Touati, Sriram Ganapathi Subramanian, Matthew E. Taylor, and Sarath Chandar
    In arXiv, 2020.
    #RL
    [arXiv]

Conference and Journal Papers

2024

  1. Mastering Memory Tasks with World Models
    Mohammad Reza Samsami*, Artem Zholus*, Janarthanan Rajendran, and Sarath Chandar
    International Conference on Learning Representations (ICLR), 2024. [Oral presentation.]
    #RL, #DL
    [openreview]

  2. Intelligent Switching for Reset-Free RL
    Darshan Patil, Janarthanan Rajendran, Glen Berseth, and Sarath Chandar
    International Conference on Learning Representations (ICLR), 2024.
    #RL
    [openreview]

  3. Fast and Accurate Output Error Estimation for Memristor-Based Deep Neural Networks
    Jonathan Kern, Sébastien Henwood, Gonçalo Mordido, Elsa Dupraz, Abdeldjalil Aïssa-El-Bey, Yvon Savaria, and François Leduc-Primeau
    IEEE Transactions on Signal Processing, 2024.
    #DL
    [paper]

  4. Fairness-Aware Structured Pruning in Transformers
    Abdelrahman Zayed, Gonçalo Mordido, Samira Shabanian, Ioana Baldini, and Sarath Chandar
    AAAI Conference on Artificial Intelligence (AAAI), 2024.
    #NLP
    [arXiv]

  5. Learning Conditional Policies for Crystal Design Using Offline Reinforcement Learning
    Prashant Govindarajan, Santiago Miret, Jarrid Rector-Brooks, Mariano Phielipp, Janarthanan Rajendran, and Sarath Chandar
    Digital Discovery, 2024.
    #RL
    [openreview]

2023

  1. Self-Influence Guided Data Reweighting for Language Model Pre-training
    Megh Thakkar, Tolga Bolukbasi, Sriram Ganapathy, Shikhar Vashishth, Sarath Chandar, and Partha Talukdar
    Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023.
    #NLP
    [arXiv]

  2. EpiK-Eval: Evaluation for Language Models as Epistemic Models
    Gabriele Prato, Jerry Huang, Prasanna Parthasarathi, Shagun Sodhani, and Sarath Chandar
    Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023.
    #NLP
    [arXiv], [code]

  3. Measuring the Knowledge Acquisition-Utilization Gap in Pretrained Language Models
    Amirhossein Kazemnejad, Mehdi Rezagholizadeh, Prasanna Parthasarathi, and Sarath Chandar
    Findings of Empirical Methods in Natural Language Processing (EMNLP), 2023.
    #NLP
    [arXiv]

  4. Training DNNs Resilient to Adversarial and Random Bit-Flips by Learning Quantization Ranges
    Kamran Chitsaz, Gonçalo Mordido, Jean Pierre David, and François Leduc-Primeau
    Transactions on Machine Learning Research (TMLR), 2023.
    #DL
    [openreview], [code]

  5. Replay Buffer with Local Forgetting for Adapting to Local Environment Changes in Deep Model-Based Reinforcement Learning
    Ali Rahimi-Kalahroudi, Janarthanan Rajendran, Ida Momennejad, Harm van Seijen, and Sarath Chandar
    Conference on Lifelong Learning Agents (CoLLAs), 2023.
    [Deep Reinforcement Learning Workshop, NeurIPS, 2022]
    #RL
    [arXiv]

  6. Towards Few-shot Coordination: Revisiting Ad-hoc Teamplay Challenge In the Game of Hanabi
    Hadi Nekoei, Xutong Zhao, Janarthanan Rajendran, Miao Liu, and Sarath Chandar
    Conference on Lifelong Learning Agents (CoLLAs), 2023.
    #RL
    [paper]

  7. Dealing With Non-stationarity in Decentralized Cooperative Multi-Agent Deep Reinforcement Learning via Multi-Timescale Learning
    Hadi Nekoei, Akilesh Badrinaaraayanan, Amit Sinha, Mohammad Amini, Janarthanan Rajendran, Aditya Mahajan, and Sarath Chandar
    Conference on Lifelong Learning Agents (CoLLAs), 2023.
    #RL
    [arXiv]

  8. Conditionally Optimistic Exploration for Cooperative Deep Multi-Agent Reinforcement Learning
    Xutong Zhao, Yangchen Pan, Chenjun Xiao, Sarath Chandar, and Janarthanan Rajendran
    Conference on Uncertainty in Artificial Intelligence (UAI), 2023.
    #RL
    [arXiv]

  9. An Empirical Investigation of the Role of Pre-training in Lifelong Learning
    Sanket Vaibhav Mehta, Darshan Patil, Sarath Chandar, and Emma Strubell
    Journal of Machine Learning Research (JMLR), 2023.
    #DL
    [arXiv]

  10. Multi-Agent Reinforcement Learning for Fast-Timescale Demand Response of Residential Loads
    Vincent Mai, Philippe Maisonneuve, Tianyu Zhang, Hadi Nekoei, Liam Paull, and Antoine Lesage-Landry
    International Conference on Autonomous Agents and Multiagent Systems (AAMAS), 2023.
    #RL
    [arXiv]

  11. Deep Learning on a Healthy Data Diet: Finding Important Examples for Fairness
    Abdelrahman Zayed, Prasanna Parthasarathi, Gonçalo Mordido, Hamid Palangi, Samira Shabanian, and Sarath Chandar
    AAAI Conference on Artificial Intelligence (AAAI), 2023.
    #NLP
    [arXiv]

  12. DEUP: Direct Epistemic Uncertainty Prediction
    Moksh Jain, Salem Lahlou, Hadi Nekoei, Victor Butoi, Paul Bertin, Jarrid Rector-Brooks, Maksym Korablyov, and Yoshua Bengio
    Transactions on Machine Learning Research (TMLR), 2023.
    #DL
    [arXiv], [code]

  13. Label Fusion and Training Methods for Reliable Representation of Inter-rater Uncertainty
    Andreanne Lemay, Charley Gros, Naga Karthik Enamundram, and Julien Cohen-Adad
    The Journal of Machine Learning for Biomedical Imaging (MELBA), 2023.
    #DL
    [paper]

2022

  1. Evaluating the Faithfulness of Importance Measures in NLP by Recursively Masking Allegedly Important Tokens and Retraining
    Andreas Madsen, Nicholas Meade, Vaibhav Adlakha, and Siva Reddy
    Findings of Empirical Methods in Natural Language Processing (EMNLP), 2022.
    [BlackboxNLP Workshop, 2022]
    #NLP
    [arXiv], [code]

  2. Detecting Languages Unintelligible to Multilingual Models through Local Structure Probes
    Louis Clouâtre, Prasanna Parthasarathi, Amal Zouaq, and Sarath Chandar
    Findings of Empirical Methods in Natural Language Processing (EMNLP), 2022.
    #NLP

  3. Local Structure Matters Most in Most Languages
    Louis Clouâtre, Prasanna Parthasarathi, Amal Zouaq, and Sarath Chandar
    Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the International Joint Conference on Natural Language Processing (AACL-IJCNLP), 2022.
    #NLP

  4. TAG: Task-based Accumulated Gradients for Lifelong Learning
    Pranshu Malviya, Balaraman Ravindran, and Sarath Chandar
    Conference on Lifelong Learning Agents (CoLLAs), 2022.
    [Workshop on Theory and Foundation of Continual Learning, ICML, 2021]
    #DL
    [arXiv], [code]

  5. Improving Meta-Learning Generalization with Activation-Based Early-Stopping
    Simon Guiroy, Christopher Pal, Gonçalo Mordido, and Sarath Chandar
    Conference on Lifelong Learning Agents (CoLLAs), 2022.
    #DL
    [arXiv], [code], [YouTube]

  6. Combining Reinforcement Learning and Constraint Programming for Sequence-Generation Tasks with Hard Constraints
    Daphné Lafleur, Sarath Chandar, and Gilles Pesant
    Principles and Practice of Constraint Programming (CP), 2022.
    #RL

  7. Biological Sequence Design with GFlowNets
    Moksh Jain, Emmanuel Bengio, Alex Hernandez-Garcia, Jarrid Rector-Brooks, Bonaventure F. P. Dossou, Chanakya Ekbote, Jie Fu, Tianyu Zhang, Michael Kilgour, Dinghuai Zhang, Lena Simine, Payel Das, and Yoshua Bengio
    International Conference on Machine Learning (ICML), 2022.
    #DL
    [arXiv], [code]

  8. Towards Evaluating Adaptivity of Model-Based Reinforcement Learning Methods
    Yi Wan*, Ali Rahimi-Kalahroudi*, Janarthanan Rajendran, Ida Momennejad, Sarath Chandar, and Harm van Seijen
    International Conference on Machine Learning (ICML), 2022.
    #RL
    [arXiv], [code]

  9. Post-hoc Interpretability for Neural NLP: A Survey
    Andreas Madsen, Siva Reddy, and Sarath Chandar
    ACM Computing Surveys, 2022.
    #NLP
    [arXiv]

  10. Local Structure Matters Most: Perturbation Study in NLU
    Louis Clouâtre, Prasanna Parthasarathi, Amal Zouaq, and Sarath Chandar
    Findings of the Annual Meeting of the Association for Computational Linguistics (ACL), 2022.
    #NLP
    [arXiv]

  11. Towards Language-independent Brown Build Detection
    Doriane Olewicki, Mathieu Nayrolles, and Bram Adams
    International Conference on Software Engineering (ICSE), 2022.
    #DL

  12. Memory Augmented Optimizers for Deep Learning
    Paul-Aymeric McRae, Prasanna Parthasarathi, Mido Assran, and Sarath Chandar
    International Conference on Learning Representations (ICLR), 2022.
    #DL
    [openreview], [code]

  13. PatchUp: A Feature-Space Block-Level Regularization Technique for Convolutional Neural Networks
    Mojtaba Faramarzi, Mohammad Amini, Akilesh Badrinaaraayanan, Vikas Verma, and Sarath Chandar
    AAAI Conference on Artificial Intelligence (AAAI), 2022.
    #DL
    [arXiv], [code]

2021

  1. MLMLM: Link Prediction with Mean Likelihood Masked Language Model
    Louis Clouâtre, Philippe Trempe, Amal Zouaq, and Sarath Chandar
    Findings of the Annual Meeting of the Association for Computational Linguistics (ACL), 2021.
    #NLP
    [arXiv]

  2. Benchmarking Bias Mitigation Algorithms in Representation Learning through Fairness Metrics
    Charan Reddy, Deepak Sharma, Soroush Mehri, Adriana Romero, Samira Shabanian, and Sina Honari
    Proceedings of the Neural Information Processing Systems (NeurIPS) Track on Datasets and Benchmarks, 2021.
    #NLP
    [openreview], [code]

  3. A Brief Study on the Effects of Training Generative Dialogue Models with a Semantic Loss
    Prasanna Parthasarathi, Mohamed Abdelsalam, Joelle Pineau, and Sarath Chandar
    Proceedings of the 22nd Annual SIGdial Meeting on Discourse and Dialogue, 2021.
    #NLP

  4. Do Encoder Representations of Generative Dialogue Models Encode Sufficient Information about the Task?
    Prasanna Parthasarathi, Sarath Chandar, and Joelle Pineau
    Proceedings of the 22nd Annual SIGdial Meeting on Discourse and Dialogue, 2021.
    #NLP

  5. Continuous Coordination As a Realistic Scenario for Lifelong Learning
    Hadi Nekoei, Akilesh Badrinaaraayanan, Aaron Courville, and Sarath Chandar
    International Conference on Machine Learning (ICML), 2021.
    #RL
    [arXiv], [code]

  6. A Survey of Data Augmentation Approaches for NLP
    Steven Y. Feng, Varun Gangal, Jason Wei, Sarath Chandar, Soroush Vosoughi, Teruko Mitamura, and Eduard Hovy
    Findings of the Annual Meeting of the Association for Computational Linguistics (ACL), 2021.
    #NLP
    [arXiv]

  7. Towered Actor Critic for Handling Multiple Action Types in Reinforcement Learning for Drug Discovery
    Sai Krishna Gottipati, Yashaswi Pathak, Boris Sattarov, Sahir, Rohan Nuttall, Mohammad Amini, Matthew E. Taylor, and Sarath Chandar
    AAAI Conference on Artificial Intelligence (AAAI), 2021.
    #RL

  8. IIRC: Incremental Implicitly-Refined Classification
    Mohamed Abdelsalam, Mojtaba Faramarzi, Shagun Sodhani, and Sarath Chandar
    Conference on Computer Vision and Pattern Recognition (CVPR), 2021.
    #DL
    [arXiv], [code], [website], [PyPI], [docs]

2020

  1. The LoCA Regret: A Consistent Metric to Evaluate Model-Based Behavior in Reinforcement Learning
    Harm van Seijen, Hadi Nekoei, Evan Racah, and Sarath Chandar
    Neural Information Processing Systems (NeurIPS), 2020.
    #RL
    [arXiv], [code]

  2. Learning To Navigate The Synthetically Accessible Chemical Space Using Reinforcement Learning
    Sai Krishna Gottipati*, Boris Sattarov*, Sufeng Niu, Yashaswi Pathak, Haoran Wei, Shengchao Liu, Karam MJ Thomas, Simon Blackburn, Connor W Coley, Jian Tang, Sarath Chandar, and Yoshua Bengio
    International Conference on Machine Learning (ICML), 2020.
    #RL
    [arXiv]