Preprints

  • Did I Faithfully Say What I Thought? Bridging the Gap Between Neural Activity and Self-Explanations in Large Language Models
    , Jean-Noel Vittaut, Nicolas Chesneau, , and Marie-Jeanne Lesot
arXiv preprint, 2025.
    #NLP
    [arXiv]

  • Monitoring morphometric drift in lifelong learning segmentation of the spinal cord
    Enamundram Naga Karthik, Sandrine Bédard, Jan Valošek, Christoph S. Aigner, Elise Bannier, Josef Bednařík, Virginie Callot, Anna Combes, Armin Curt, Gergely David, Falk Eippert, Lynn Farner, Michael G Fehlings, Patrick Freund, Tobias Granberg, Cristina Granziera, RHSCIR Network Imaging Group, Ulrike Horn, Tomáš Horák, Suzanne Humphreys, Markus Hupp, Anne Kerbrat, Nawal Kinany, Shannon Kolind, Petr Kudlička, Anna Lebret, Lisa Eunyoung Lee, Caterina Mainero, Allan R. Martin, Megan McGrath, Govind Nair, Kristin P. O'Grady, Jiwon Oh, Russell Ouellette, Nikolai Pfender, Dario Pfyffer, Pierre-François Pradat, Alexandre Prat, Emanuele Pravatà, Daniel S. Reich, Ilaria Ricchi, Naama Rotem-Kohavi, Simon Schading-Sassenhausen, Maryam Seif, Andrew Smith, Seth A Smith, Grace Sweeney, Roger Tam, Anthony Traboulsee, Constantina Andrada Treaba, Charidimos Tsagkas, Zachary Vavasour, Dimitri Van De Ville, Kenneth Arnold Weber II, , and Julien Cohen-Adad
arXiv preprint, 2025.
#DL
    [arXiv]

  • Too Big to Fool: Resisting Deception in Language Models
    , Mats Leon Richter, Juan Rodriguez, , , and Maxime Gasse
arXiv preprint, 2024.
    #NLP
    [arXiv]

  • Interpretability Needs a New Paradigm
    , Himabindu Lakkaraju, Siva Reddy, and
arXiv preprint, 2024.
    #NLP, #DL
    [arXiv]

Conference and Journal Papers

2025

  1. Steering Large Language Model Activations in Sparse Spaces
    Reza Bayat*, , Mohammad Pezeshki, , and Pascal Vincent
    Conference on Language Modeling (COLM), 2025.
    #NLP, #DL
    [arXiv]

  2. Boosting LLM Reasoning via Spontaneous Self-Correction
    , Tengyu Xu, Xuewei Wang, Zhengxing Chen, Di Jin, Liang Tan, Yen-Ting, Zishun Yu, Zhuokai Zhao, Yun He, Sinong Wang, Han Fang, , and Chen Zhu
    Conference on Language Modeling (COLM), 2025.
    #NLP, #RL
    [openreview], [arXiv]

  3. Revisiting Replay and Gradient Alignment for Continual Pre-Training of Large Language Models
    , Gopeshh Subbaraj, Matthew Riemer, Nizar Islah, Tsuguchika Tabaru, Hiroaki Kingetsu, , and Irina Rish
    Conference on Lifelong Learning Agents (CoLLAs), 2025.
    #NLP, #DL

  4. Combining Domain and Alignment Vectors Provides Better Knowledge-Safety Trade-offs in LLMs
    , , Matthew Riemer, Pin-Yu Chen, Amal Zouaq, Payel Das, and
    Annual Meeting of the Association for Computational Linguistics (ACL), 2025.
    #NLP
    [acl], [arXiv]

  5. Small Encoders Can Rival Large Decoders in Detecting Groundedness
    , , , Fernando Rodriguez, Alaa Boukhary, Adam Elwood, and
    Findings of the Association for Computational Linguistics (ACL), 2025.
    #NLP
    [acl], [arXiv]

  6. Do Robot Snakes Dream like Electric Sheep? Investigating the Effects of Architectural Inductive Biases on Hallucination
    , Prasanna Parthasarathi, Mehdi Rezagholizadeh, Boxing Chen, and
    Findings of the Association for Computational Linguistics (ACL), 2025.
    #NLP
    [acl], [arXiv]

  7. IDAT: A Multi-Modal Dataset and Toolkit for Building and Evaluating Interactive Task-Solving Agents
    Shrestha Mohanty, Negar Arabzadeh, Andrea Tupini, Yuxuan Sun, Alexey Skrynnik, , Marc-Alexandre Côté, and Julia Kiseleva
    ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR), 2025.
    #NLP
    [arXiv]

  8. NeoBERT: A Next Generation BERT
    , , Mariam El Mezouar, and
    Transactions on Machine Learning Research (TMLR), 2025.
    #NLP
    [openreview]

  9. ChartGemma: Visual Instruction-tuning for Chart Reasoning in the Wild
    Ahmed Masry*, , Aayush Bajaj, Aaryaman Kartha, Enamul Hoque, and Shafiq Joty
    International Conference on Computational Linguistics (COLING) Industry Track, 2025.
    #NLP
    [acl], [arXiv], [code]

2024

  1. WorkArena++: Towards Compositional Planning and Reasoning-based Common Knowledge Work Tasks
    Leo Boisvert*, , Maxime Gasse, Massimo Caccia, Thibault Le Sellier De Chezelles, Quentin Cappart, Nicolas Chapados, Alexandre Lacoste, and Alexandre Drouin
    Conference on Neural Information Processing Systems (NeurIPS) Datasets and Benchmarks Track, 2024.
    #NLP
    [neurips], [openreview], [arXiv], [code]

  2. Exploring Quantization for Efficient Pre-Training of Transformer Language Models
    , , , and
    Findings of the Association for Computational Linguistics (EMNLP), 2024.
    #NLP, #DL
    [acl], [arXiv]

  3. Context-Aware Assistant Selection for Improved Inference Acceleration with Large Language Models
    , Prasanna Parthasarathi, Mehdi Rezagholizadeh, and
    Conference on Empirical Methods in Natural Language Processing (EMNLP), 2024.
    #NLP
    [acl], [arXiv]

  4. Do Large Language Models Know How Much They Know?
    , , Prasanna Parthasarathi, Shagun Sodhani, and
    Conference on Empirical Methods in Natural Language Processing (EMNLP), 2024.
    #NLP
    [acl], [arXiv]

  5. Should We Attend More or Less? Modulating Attention for Fairness
    , , Samira Shabanian, and
    Conference on Language Modeling (COLM), 2024.
    #NLP
    [openreview], [arXiv]

  6. Are self-explanations from Large Language Models faithful?
    , , and Siva Reddy
    Findings of the Association for Computational Linguistics (ACL), 2024.
    #NLP
    [acl], [arXiv], [code], [YouTube]

  7. A deep-dive into the tradeoffs of preference alignment with PEFT
    , , Matthew Riemer, Pin-Yu Chen, Amal Zouaq, Payel Das, and
    Annual Meeting of the Association for Computational Linguistics (ACL), 2024.
    #NLP
    [acl], [arXiv]

  8. Why Don’t Prompt-Based Fairness Metrics Correlate?
    , , Ioana Baldini, and
    Annual Meeting of the Association for Computational Linguistics (ACL), 2024.
    #NLP
    [acl], [arXiv], [YouTube]

  9. Sub-goal Distillation: A Method to Improve Small Language Agents
, Elias Stengel-Eskin, , and Marc-Alexandre Côté
    Conference on Lifelong Learning Agents (CoLLAs), 2024. [Oral presentation.]
    #RL, #NLP
    [arXiv]

  10. Faithfulness Measurable Masked Language Models
    , Siva Reddy, and
    International Conference on Machine Learning (ICML), 2024. [Spotlight award - top 3.5%]
    #NLP
    [pmlr], [arXiv], [code], [YouTube], [blogpost]

  11. MVP: Minimal Viable Phrase for Long Text Understanding
    , Amal Zouaq, and
Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING), 2024.
    #NLP
    [acl]

  12. Fairness-Aware Structured Pruning in Transformers
    , , Samira Shabanian, Ioana Baldini, and
    AAAI Conference on Artificial Intelligence (AAAI), 2024.
    #NLP
    [aaai], [arXiv], [YouTube]

2023

  1. Self-Influence Guided Data Reweighting for Language Model Pre-training
    , Tolga Bolukbasi, Sriram Ganapathy, Shikhar Vashishth, , and Partha Talukdar
    Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023.
    #NLP
    [acl], [openreview], [arXiv]

  2. EpiK-Eval: Evaluation for Language Models as Epistemic Models
    , , Prasanna Parthasarathi, Shagun Sodhani, and
    Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023.
    #NLP
    [acl], [openreview], [arXiv], [code]

  3. Measuring the Knowledge Acquisition-Utilization Gap in Pretrained Language Models
    Amirhossein Kazemnejad, Mehdi Rezagholizadeh, Prasanna Parthasarathi, and
    Findings of the Association for Computational Linguistics (EMNLP), 2023.
    #NLP
    [acl], [openreview], [arXiv]

  4. Deep Learning on a Healthy Data Diet: Finding Important Examples for Fairness
    , Prasanna Parthasarathi, , Hamid Palangi, Samira Shabanian, and
    AAAI Conference on Artificial Intelligence (AAAI), 2023.
    #NLP
    [aaai], [arXiv], [YouTube]

2022

  1. Evaluating the Faithfulness of Importance Measures in NLP by Recursively Masking Allegedly Important Tokens and Retraining
    , Nicholas Meade, Vaibhav Adlakha, and Siva Reddy
    Findings of the Association for Computational Linguistics (EMNLP), 2022.
    [BlackboxNLP, 2022]
    #NLP
    [acl], [arXiv], [code]

  2. Detecting Languages Unintelligible to Multilingual Models through Local Structure Probes
    , Prasanna Parthasarathi, Amal Zouaq, and
    Findings of the Association for Computational Linguistics (EMNLP), 2022.
    #NLP
    [acl]

  3. Local Structure Matters Most in Most Languages
    , Prasanna Parthasarathi, Amal Zouaq, and
    Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the International Joint Conference on Natural Language Processing (AACL-IJCNLP), 2022.
    #NLP
    [acl]

  4. Post-hoc Interpretability for Neural NLP: A Survey
    , Siva Reddy, and
    ACM Computing Surveys, 2022.
    #NLP
    [acm], [arXiv]

  5. Local Structure Matters Most: Perturbation Study in NLU
    , Prasanna Parthasarathi, Amal Zouaq, and
    Findings of the Association for Computational Linguistics (ACL), 2022.
    #NLP
    [acl], [arXiv]

2021

  1. Benchmarking Bias Mitigation Algorithms in Representation Learning through Fairness Metrics
    , Deepak Sharma, Soroush Mehri, Adriana Romero, Samira Shabanian, and Sina Honari
    Conference on Neural Information Processing Systems (NeurIPS) Datasets and Benchmarks Track, 2021.
    #NLP
    [neurips], [openreview], [code]

  2. A Survey of Data Augmentation Approaches for NLP
    Steven Y. Feng, Varun Gangal, Jason Wei, , Soroush Vosoughi, Teruko Mitamura, and Eduard Hovy
    Findings of the Association for Computational Linguistics (ACL-IJCNLP), 2021.
    #NLP
    [acl], [arXiv]

  3. MLMLM: Link Prediction with Mean Likelihood Masked Language Model
    , Philippe Trempe, Amal Zouaq, and
    Findings of the Association for Computational Linguistics (ACL-IJCNLP), 2021.
    #NLP
    [acl], [arXiv]

  4. A Brief Study on the Effects of Training Generative Dialogue Models with a Semantic loss
    Prasanna Parthasarathi, , Joelle Pineau, and
    Proceedings of the 22nd Annual SIGdial Meeting on Discourse and Dialogue, 2021.
    #NLP
    [acl]

3. Do Encoder Representations of Generative Dialogue Models Encode Sufficient Information about the Task?
    Prasanna Parthasarathi, , and Joelle Pineau
    Proceedings of the 22nd Annual SIGdial Meeting on Discourse and Dialogue, 2021.
    #NLP
    [acl]