Conference and Journal Papers

2024

  1. Fairness-Aware Structured Pruning in Transformers
    Abdelrahman Zayed, Gonçalo Mordido, Samira Shabanian, Ioana Baldini, and Sarath Chandar
    AAAI Conference on Artificial Intelligence (AAAI), 2024.
    #NLP
    [arXiv]

2023

  1. Self-Influence Guided Data Reweighting for Language Model Pre-training
    Megh Thakkar, Tolga Bolukbasi, Sriram Ganapathy, Shikhar Vashishth, Sarath Chandar, and Partha Talukdar
    Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023.
    #NLP
    [arXiv]

  2. EpiK-Eval: Evaluation for Language Models as Epistemic Models
    Gabriele Prato, Jerry Huang, Prasanna Parthasarathi, Shagun Sodhani, and Sarath Chandar
    Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023.
    #NLP
    [arXiv], [code]

  3. Measuring the Knowledge Acquisition-Utilization Gap in Pretrained Language Models
    Amirhossein Kazemnejad, Mehdi Rezagholizadeh, Prasanna Parthasarathi, and Sarath Chandar
    Findings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023.
    #NLP
    [arXiv]

  4. Deep Learning on a Healthy Data Diet: Finding Important Examples for Fairness
    Abdelrahman Zayed, Prasanna Parthasarathi, Gonçalo Mordido, Hamid Palangi, Samira Shabanian, and Sarath Chandar
    AAAI Conference on Artificial Intelligence (AAAI), 2023.
    #NLP
    [arXiv]

2022

  1. Evaluating the Faithfulness of Importance Measures in NLP by Recursively Masking Allegedly Important Tokens and Retraining
    Andreas Madsen, Nicholas Meade, Vaibhav Adlakha, and Siva Reddy
    Findings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022.
    [BlackboxNLP Workshop, 2022]
    #NLP
    [arXiv], [code]

  2. Detecting Languages Unintelligible to Multilingual Models through Local Structure Probes
    Louis Clouâtre, Prasanna Parthasarathi, Amal Zouaq, and Sarath Chandar
    Findings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022.
    #NLP

  3. Local Structure Matters Most in Most Languages
    Louis Clouâtre, Prasanna Parthasarathi, Amal Zouaq, and Sarath Chandar
    Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the International Joint Conference on Natural Language Processing (AACL-IJCNLP), 2022.
    #NLP

  4. Post-hoc Interpretability for Neural NLP: A Survey
    Andreas Madsen, Siva Reddy, and Sarath Chandar
    ACM Computing Surveys, 2022.
    #NLP
    [arXiv]

  5. Local Structure Matters Most: Perturbation Study in NLU
    Louis Clouâtre, Prasanna Parthasarathi, Amal Zouaq, and Sarath Chandar
    Findings of the Annual Meeting of the Association for Computational Linguistics (ACL), 2022.
    #NLP
    [arXiv]

2021

  1. MLMLM: Link Prediction with Mean Likelihood Masked Language Model
    Louis Clouâtre, Philippe Trempe, Amal Zouaq, and Sarath Chandar
    Findings of the Annual Meeting of the Association for Computational Linguistics (ACL), 2021.
    #NLP
    [arXiv]

  2. Benchmarking Bias Mitigation Algorithms in Representation Learning through Fairness Metrics
    Charan Reddy, Deepak Sharma, Soroush Mehri, Adriana Romero, Samira Shabanian, and Sina Honari
    Proceedings of the Neural Information Processing Systems (NeurIPS) Track on Datasets and Benchmarks, 2021.
    #NLP
    [openreview], [code]

  3. A Brief Study on the Effects of Training Generative Dialogue Models with a Semantic Loss
    Prasanna Parthasarathi, Mohamed Abdelsalam, Joelle Pineau, and Sarath Chandar
    Proceedings of the 22nd Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL), 2021.
    #NLP

  4. Do Encoder Representations of Generative Dialogue Models Encode Sufficient Information about the Task?
    Prasanna Parthasarathi, Sarath Chandar, and Joelle Pineau
    Proceedings of the 22nd Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL), 2021.
    #NLP

  5. A Survey of Data Augmentation Approaches for NLP
    Steven Y. Feng, Varun Gangal, Jason Wei, Sarath Chandar, Soroush Vosoughi, Teruko Mitamura, and Eduard Hovy
    Findings of the Annual Meeting of the Association for Computational Linguistics (ACL), 2021.
    #NLP
    [arXiv]