Publications | Natural Language Processing
Preprints
-
ChartGemma: Visual Instruction-tuning for Chart Reasoning in the Wild
Ahmed Masry*, Megh Thakkar*, Aayush Bajaj, Aaryaman Kartha, Enamul Hoque and Shafiq Joty
In arXiv, 2024.
#NLP
[arXiv], [code]
-
Exploring the Plasticity of Neural Network for NLP Tasks in Continual Learning
Maryam Hashemzadeh, Pranshu Malviya*, Darshan Patil* and Sarath Chandar
Conference on Lifelong Learning Agents (CoLLAs) Workshop Track, 2024.
#DL, #NLP
-
IDAT: A Multi-Modal Dataset and Toolkit for Building and Evaluating Interactive Task-Solving Agents
Shrestha Mohanty, Negar Arabzadeh, Andrea Tupini, Yuxuan Sun, Alexey Skrynnik, Artem Zholus, Marc-Alexandre Côté and Julia Kiseleva
In arXiv, 2024.
#NLP
[arXiv]
-
Interpretability Needs a New Paradigm
Andreas Madsen, Himabindu Lakkaraju, Siva Reddy and Sarath Chandar
In arXiv, 2024.
#NLP, #DL, #Other
[arXiv]
Conference and journal articles
2024
-
WorkArena++: Towards Compositional Planning and Reasoning-based Common Knowledge Work Tasks
Leo Boisvert*, Megh Thakkar*, Maxime Gasse, Massimo Caccia, Thibault Le Sellier De Chezelles, Quentin Cappart, Nicolas Chapados, Alexandre Lacoste and Alexandre Drouin
Conference on Neural Information Processing Systems (NeurIPS) Datasets and Benchmarks Track, 2024.
#NLP
[openreview], [arXiv], [code]
-
Exploring Quantization for Efficient Pre-Training of Transformer Language Models
Kamran Chitsaz, Quentin Fournier, Gonçalo Mordido and Sarath Chandar
Findings of the Association for Computational Linguistics (EMNLP), 2024.
#NLP, #DL
[acl], [arXiv]
-
Context-Aware Assistant Selection for Improved Inference Acceleration with Large Language Models
Jerry Huang, Prasanna Parthasarathi, Mehdi Rezagholizadeh and Sarath Chandar
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2024.
#NLP
[acl], [arXiv]
-
Do Large Language Models Know How Much They Know?
Gabriele Prato, Jerry Huang, Prasanna Parthasarathi, Shagun Sodhani and Sarath Chandar
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2024.
#NLP
[acl]
-
Should We Attend More or Less? Modulating Attention for Fairness
Abdelrahman Zayed, Gonçalo Mordido, Samira Shabanian and Sarath Chandar
Conference on Language Modeling (COLM), 2024.
#NLP
[openreview], [arXiv]
-
Are self-explanations from Large Language Models faithful?
Andreas Madsen, Sarath Chandar and Siva Reddy
Findings of the Association for Computational Linguistics (ACL), 2024.
#NLP
[acl], [arXiv], [code], [YouTube]
-
A Deep Dive into the Trade-Offs of Parameter-Efficient Preference Alignment Techniques
Megh Thakkar, Quentin Fournier, Matthew Riemer, Pin-Yu Chen, Amal Zouaq, Payel Das and Sarath Chandar
Annual Meeting of the Association for Computational Linguistics (ACL), 2024.
#NLP
[acl], [arXiv]
-
Why Don’t Prompt-Based Fairness Metrics Correlate?
Abdelrahman Zayed, Gonçalo Mordido, Ioana Baldini and Sarath Chandar
Annual Meeting of the Association for Computational Linguistics (ACL), 2024.
#NLP
[acl], [arXiv], [YouTube]
-
Sub-goal Distillation: A Method to Improve Small Language Agents
Maryam Hashemzadeh, Elias Stengel-Eskin, Sarath Chandar and Marc-Alexandre Côté
Conference on Lifelong Learning Agents (CoLLAs), 2024. [Oral presentation.]
#RL, #NLP
[arXiv]
-
Faithfulness Measurable Masked Language Models
Andreas Madsen, Siva Reddy and Sarath Chandar
International Conference on Machine Learning (ICML), 2024. [Spotlight award - top 3.5%]
#NLP
[pmlr], [arXiv], [code], [YouTube], [blogpost]
-
MVP: Minimal Viable Phrase for Long Text Understanding
Louis Clouâtre, Amal Zouaq and Sarath Chandar
Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING), 2024.
#NLP
[acl]
-
Fairness-Aware Structured Pruning in Transformers
Abdelrahman Zayed, Gonçalo Mordido, Samira Shabanian, Ioana Baldini and Sarath Chandar
AAAI Conference on Artificial Intelligence (AAAI), 2024.
#NLP
[aaai], [arXiv], [YouTube]
2023
-
Self-Influence Guided Data Reweighting for Language Model Pre-training
Megh Thakkar, Tolga Bolukbasi, Sriram Ganapathy, Shikhar Vashishth, Sarath Chandar and Partha Talukdar
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023.
#NLP
[acl], [openreview], [arXiv]
-
EpiK-Eval: Evaluation for Language Models as Epistemic Models
Gabriele Prato, Jerry Huang, Prasanna Parthasarathi, Shagun Sodhani and Sarath Chandar
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023.
#NLP
[acl], [openreview], [arXiv], [code]
-
Measuring the Knowledge Acquisition-Utilization Gap in Pretrained Language Models
Amirhossein Kazemnejad, Mehdi Rezagholizadeh, Prasanna Parthasarathi and Sarath Chandar
Findings of the Association for Computational Linguistics (EMNLP), 2023.
#NLP
[acl], [openreview], [arXiv]
-
Deep Learning on a Healthy Data Diet: Finding Important Examples for Fairness
Abdelrahman Zayed, Prasanna Parthasarathi, Gonçalo Mordido, Hamid Palangi, Samira Shabanian and Sarath Chandar
AAAI Conference on Artificial Intelligence (AAAI), 2023.
#NLP
[aaai], [arXiv], [YouTube]
2022
-
Evaluating the Faithfulness of Importance Measures in NLP by Recursively Masking Allegedly Important Tokens and Retraining
Andreas Madsen, Nicholas Meade, Vaibhav Adlakha and Siva Reddy
Findings of the Association for Computational Linguistics (EMNLP), 2022.
[BlackboxNLP Workshop, 2022]
#NLP
[acl], [arXiv], [code]
-
Detecting Languages Unintelligible to Multilingual Models through Local Structure Probes
Louis Clouâtre, Prasanna Parthasarathi, Amal Zouaq and Sarath Chandar
Findings of the Association for Computational Linguistics (EMNLP), 2022.
#NLP
[acl]
-
Local Structure Matters Most in Most Languages
Louis Clouâtre, Prasanna Parthasarathi, Amal Zouaq and Sarath Chandar
Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the International Joint Conference on Natural Language Processing (AACL-IJCNLP), 2022.
#NLP
[acl]
-
Post-hoc Interpretability for Neural NLP: A Survey
Andreas Madsen, Siva Reddy and Sarath Chandar
ACM Computing Surveys, 2022.
#NLP
[acm], [arXiv]
-
Local Structure Matters Most: Perturbation Study in NLU
Louis Clouâtre, Prasanna Parthasarathi, Amal Zouaq and Sarath Chandar
Findings of the Association for Computational Linguistics (ACL), 2022.
#NLP
[acl], [arXiv]
2021
-
Benchmarking Bias Mitigation Algorithms in Representation Learning through Fairness Metrics
Charan Reddy, Deepak Sharma, Soroush Mehri, Adriana Romero, Samira Shabanian and Sina Honari
Conference on Neural Information Processing Systems (NeurIPS) Datasets and Benchmarks Track, 2021.
#NLP
[neurips], [openreview], [code]
-
A Survey of Data Augmentation Approaches for NLP
Steven Y. Feng, Varun Gangal, Jason Wei, Sarath Chandar, Soroush Vosoughi, Teruko Mitamura and Eduard Hovy
Findings of the Association for Computational Linguistics (ACL-IJCNLP), 2021.
#NLP
[acl], [arXiv]
-
MLMLM: Link Prediction with Mean Likelihood Masked Language Model
Louis Clouâtre, Philippe Trempe, Amal Zouaq and Sarath Chandar
Findings of the Association for Computational Linguistics (ACL-IJCNLP), 2021.
#NLP
[acl], [arXiv]
-
A Brief Study on the Effects of Training Generative Dialogue Models with a Semantic loss
Prasanna Parthasarathi, Mohamed Abdelsalam, Joelle Pineau and Sarath Chandar
Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL), 2021.
#NLP
[acl]
-
Do Encoder Representations of Generative Dialogue Models Encode Sufficient Information about the Task?
Prasanna Parthasarathi, Sarath Chandar and Joelle Pineau
Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL), 2021.
#NLP
[acl]