Activity

  • Principal Researcher: Jan. 2020 - present

Preprints

Conference and Journal Articles

2024

  1. Exploring Quantization for Efficient Pre-Training of Transformer Language Models
    , , and
    Findings of the Association for Computational Linguistics (EMNLP), 2024.
    #NLP, #DL
    [arXiv]

  2. Context-Aware Assistant Selection for Improved Inference Acceleration with Large Language Models
    , Prasanna Parthasarathi, Mehdi Rezagholizadeh and
    Conference on Empirical Methods in Natural Language Processing (EMNLP), 2024.
    #NLP
    [arXiv]

  3. Do Large Language Models Know How Much They Know?
    , , Prasanna Parthasarathi, Shagun Sodhani and
    Conference on Empirical Methods in Natural Language Processing (EMNLP), 2024.
    #NLP

  4. Should We Attend More or Less? Modulating Attention for Fairness
    , , Samira Shabanian and
    Conference on Language Modeling (COLM), 2024.
    #NLP
    [arXiv]

  5. Are self-explanations from Large Language Models faithful?
    , and Siva Reddy
    Findings of the Association for Computational Linguistics (ACL), 2024.
    #NLP
    [arXiv], [code]

  6. A deep-dive into the tradeoffs of preference alignment with PEFT
    , , Matthew Riemer, Pin-Yu Chen, Amal Zouaq, Payel Das and
    Annual Meeting of the Association for Computational Linguistics (ACL), 2024.
    #NLP
    [arXiv]

  7. Why Don’t Prompt-Based Fairness Metrics Correlate?
    , , Ioana Baldini and
    Annual Meeting of the Association for Computational Linguistics (ACL), 2024.
    #NLP
    [arXiv], [YouTube]

  8. Sub-goal Distillation: A Method to Improve Small Language Agents
    , Elias Stengel-Eskin, and Marc-Alexandre Côté
    Conference on Lifelong Learning Agents (CoLLAs), 2024.
    #RL, #NLP
    [arXiv]

  9. Lookbehind-SAM: k steps back, 1 step forward
    , , Aristide Baratin and
    International Conference on Machine Learning (ICML), 2024.
    #DL
    [arXiv], [code], [YouTube]

  10. Faithfulness Measurable Masked Language Models
    , Siva Reddy and
    International Conference on Machine Learning (ICML), 2024. [Spotlight award - top 3.5%]
    #NLP
    [arXiv], [code], [YouTube]

  11. Promoting Exploration in Memory-Augmented Adam using Critical Momenta
    , , Aristide Baratin, Reza Babanezhad Harikandeh, , Simon Lacoste-Julien, Razvan Pascanu and
    Transactions on Machine Learning Research (TMLR), 2024.
    #DL
    [arXiv]

  12. MVP: Minimal Viable Phrase for Long Text Understanding
    , Amal Zouaq and
    Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING), 2024.
    #NLP

  13. Mastering Memory Tasks with World Models
    , , and
    International Conference on Learning Representations (ICLR), 2024. [Oral presentation.]
    #RL, #DL
    [openreview]

  14. Intelligent Switching for Reset-Free RL
    , , Glen Berseth and
    International Conference on Learning Representations (ICLR), 2024.
    #RL
    [openreview]

  15. On the Costs and Benefits of Adopting Lifelong Learning for Software Analytics - Empirical Study on Brown Build and Risk Prediction
    Doriane Olewicki, Sarra Habchi, Mathieu Nayrolles, , and Bram Adams
    International Conference on Software Engineering (ICSE) - Software Engineering in Practice Track, 2024. [ICSE24 SEIP Distinguished Paper Award.]
    #DL
    [arXiv]

  16. Fairness-Aware Structured Pruning in Transformers
    , , Samira Shabanian, Ioana Baldini and
    AAAI Conference on Artificial Intelligence (AAAI), 2024.
    #NLP
    [arXiv], [YouTube]

  17. Learning Conditional Policies for Crystal Design Using Offline Reinforcement Learning
    , Santiago Miret, , Mariano Phielipp, and
    Digital Discovery Journal, 2024.
    #RL
    [openreview]

2023

  1. Self-Influence Guided Data Reweighting for Language Model Pre-training
    , Tolga Bolukbasi, Sriram Ganapathy, Shikhar Vashishth, and Partha Talukdar
    Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023.
    #NLP
    [arXiv]

  2. EpiK-Eval: Evaluation for Language Models as Epistemic Models
    , , Prasanna Parthasarathi, Shagun Sodhani and
    Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023.
    #NLP
    [arXiv], [code]

  3. Measuring the Knowledge Acquisition-Utilization Gap in Pretrained Language Models
    Amirhossein Kazemnejad, Mehdi Rezagholizadeh, Prasanna Parthasarathi and
    Findings of the Association for Computational Linguistics (EMNLP), 2023.
    #NLP
    [arXiv]

  4. Replay Buffer with Local Forgetting for Adapting to Local Environment Changes in Deep Model-Based Reinforcement Learning
    , , Ida Momennejad, Harm van Seijen and
    Conference on Lifelong Learning Agents (CoLLAs), 2023.
    [Deep Reinforcement Learning Workshop, NeurIPS, 2022]
    #RL
    [arXiv]

  5. Towards Few-shot Coordination: Revisiting Ad-hoc Teamplay Challenge In the Game of Hanabi
    , , , Miao Liu and
    Conference on Lifelong Learning Agents (CoLLAs), 2023.
    #RL
    [paper]

  6. Dealing With Non-stationarity in Decentralized Cooperative Multi-Agent Deep Reinforcement Learning via Multi-Timescale Learning
    , , Amit Sinha, Mohammad Amini, , Aditya Mahajan and
    Conference on Lifelong Learning Agents (CoLLAs), 2023.
    #RL
    [arXiv]

  7. Conditionally Optimistic Exploration for Cooperative Deep Multi-Agent Reinforcement Learning
    , Yangchen Pan, Chenjun Xiao, and
    Conference on Uncertainty in Artificial Intelligence (UAI), 2023.
    #RL
    [arXiv]

  8. An Empirical Investigation of the Role of Pre-training in Lifelong Learning
    Sanket Vaibhav Mehta, , and Emma Strubell
    Journal of Machine Learning Research, 2023.
    #DL
    [arXiv]

  9. Deep Learning on a Healthy Data Diet: Finding Important Examples for Fairness
    , Prasanna Parthasarathi, , Hamid Palangi, Samira Shabanian and
    AAAI Conference on Artificial Intelligence (AAAI), 2023.
    #NLP
    [arXiv], [YouTube]

2022

  1. Detecting Languages Unintelligible to Multilingual Models through Local Structure Probes
    , Prasanna Parthasarathi, Amal Zouaq and
    Findings of the Association for Computational Linguistics (EMNLP), 2022.
    #NLP

  2. Local Structure Matters Most in Most Languages
    , Prasanna Parthasarathi, Amal Zouaq and
    Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the International Joint Conference on Natural Language Processing (AACL-IJCNLP), 2022.
    #NLP

  3. TAG: Task-based Accumulated Gradients for Lifelong Learning
    , Balaraman Ravindran and
    Conference on Lifelong Learning Agents (CoLLAs), 2022.
    [Workshop on Theory and Foundation of Continual Learning, ICML, 2021]
    #DL
    [arXiv], [code]

  4. Improving Meta-Learning Generalization with Activation-Based Early-Stopping
    , Christopher Pal, and
    Conference on Lifelong Learning Agents (CoLLAs), 2022.
    #DL
    [arXiv], [code], [YouTube]

  5. Combining Reinforcement Learning and Constraint Programming for Sequence-Generation Tasks with Hard Constraints
    , and Gilles Pesant
    Principles and Practice of Constraint Programming (CP), 2022.
    #RL

  6. Towards Evaluating Adaptivity of Model-Based Reinforcement Learning Methods
    , , , Ida Momennejad, and Harm van Seijen
    International Conference on Machine Learning (ICML), 2022.
    #RL
    [arXiv], [code]

  7. Post-hoc Interpretability for Neural NLP: A Survey
    , Siva Reddy and
    ACM Computing Surveys, 2022.
    #NLP
    [arXiv]

  8. Local Structure Matters Most: Perturbation Study in NLU
    , Prasanna Parthasarathi, Amal Zouaq and
    Findings of the Association for Computational Linguistics (ACL), 2022.
    #NLP
    [arXiv]

  9. Memory Augmented Optimizers for Deep Learning
    , Prasanna Parthasarathi, Mido Assran and
    International Conference on Learning Representations (ICLR), 2022.
    #DL
    [openreview], [code]

  10. PatchUp: A Feature-Space Block-Level Regularization Technique for Convolutional Neural Networks
    , Mohammad Amini, , Vikas Verma and
    AAAI Conference on Artificial Intelligence (AAAI), 2022.
    #DL
    [arXiv], [code]

2021

  1. MLMLM: Link Prediction with Mean Likelihood Masked Language Model
    , Philippe Trempe, Amal Zouaq and
    Findings of the Association for Computational Linguistics (ACL-IJCNLP), 2021.
    #NLP
    [arXiv]

  2. A Brief Study on the Effects of Training Generative Dialogue Models with a Semantic loss
    Prasanna Parthasarathi, , Joelle Pineau and
    Proceedings of the 22nd Annual SIGdial Meeting on Discourse and Dialogue, 2021.
    #NLP

    Do Encoder Representations of Generative Dialogue Models Encode Sufficient Information about the Task?
    Prasanna Parthasarathi, and Joelle Pineau
    Proceedings of the 22nd Annual SIGdial Meeting on Discourse and Dialogue, 2021.
    #NLP

  4. Continuous Coordination As a Realistic Scenario for Lifelong Learning
    , , Aaron Courville and
    International Conference on Machine Learning (ICML), 2021.
    #RL
    [arXiv], [code]

  5. A Survey of Data Augmentation Approaches for NLP
    Steven Y. Feng, Varun Gangal, Jason Wei, , Soroush Vosoughi, Teruko Mitamura and Eduard Hovy
    Findings of the Association for Computational Linguistics (ACL-IJCNLP), 2021.
    #NLP
    [arXiv]

  6. Towered Actor Critic for Handling Multiple Action Types in Reinforcement Learning For Drug Discovery
    Sai Krishna Gottipati, Yashaswi Pathak, Boris Sattarov, Sahir, Rohan Nuttall, Mohammad Amini, Matthew E. Taylor and
    AAAI Conference on Artificial Intelligence (AAAI), 2021.
    #RL

  7. IIRC: Incremental Implicitly-Refined Classification
    , , Shagun Sodhani and
    Conference on Computer Vision and Pattern Recognition (CVPR), 2021.
    #DL
    [arXiv], [code], [website], [PyPI], [docs]

2020

  1. The LoCA Regret: A Consistent Metric to Evaluate Model-Based Behavior in Reinforcement Learning
    Harm van Seijen, , and
    Neural Information Processing Systems (NeurIPS), 2020.
    #RL
    [arXiv], [code]

  2. Learning To Navigate The Synthetically Accessible Chemical Space Using Reinforcement Learning
    Sai Krishna Gottipati*, Boris Sattarov*, Sufeng Niu, Yashaswi Pathak, Haoran Wei, Shengchao Liu, Karam MJ Thomas, Simon Blackburn, Connor W Coley, Jian Tang, and Yoshua Bengio
    International Conference on Machine Learning (ICML), 2020.
    #RL
    [arXiv]