Preprints

Conference and Journal Articles

2024

  1. Exploring Quantization for Efficient Pre-Training of Transformer Language Models
    , , and
    Findings of the Association for Computational Linguistics (EMNLP), 2024.
    #NLP, #DL
    [arXiv]

  2. Context-Aware Assistant Selection for Improved Inference Acceleration with Large Language Models
    , Prasanna Parthasarathi, Mehdi Rezagholizadeh and
    Conference on Empirical Methods in Natural Language Processing (EMNLP), 2024.
    #NLP
    [arXiv]

  3. Do Large Language Models Know How Much They Know?
    , , Prasanna Parthasarathi, Shagun Sodhani and
    Conference on Empirical Methods in Natural Language Processing (EMNLP), 2024.
    #NLP

  4. Should We Attend More or Less? Modulating Attention for Fairness
    , , Samira Shabanian and
    Conference on Language Modeling (COLM), 2024.
    #NLP
    [arXiv]

  5. Are self-explanations from Large Language Models faithful?
    , and Siva Reddy
    Findings of the Association for Computational Linguistics (ACL), 2024.
    #NLP
    [arXiv], [code]

  6. A deep-dive into the tradeoffs of preference alignment with PEFT
    , , Matthew Riemer, Pin-Yu Chen, Amal Zouaq, Payel Das and
    Annual Meeting of the Association for Computational Linguistics (ACL), 2024.
    #NLP
    [arXiv]

  7. Why Don’t Prompt-Based Fairness Metrics Correlate?
    , , Ioana Baldini and
    Annual Meeting of the Association for Computational Linguistics (ACL), 2024.
    #NLP
    [arXiv], [YouTube]

  8. Sub-goal Distillation: A Method to Improve Small Language Agents
    , Elias Stengel-Eskin, and Marc-Alexandre Côté
    Conference on Lifelong Learning Agents (CoLLAs), 2024.
    #RL, #NLP
    [arXiv]

  9. Faithfulness Measurable Masked Language Models
    , Siva Reddy and
    International Conference on Machine Learning (ICML), 2024. [Spotlight award - top 3.5%]
    #NLP
    [arXiv], [code], [YouTube]

  10. MVP: Minimal Viable Phrase for Long Text Understanding
    , Amal Zouaq and
    Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING), 2024.
    #NLP

  11. Fairness-Aware Structured Pruning in Transformers
    , , Samira Shabanian, Ioana Baldini and
    AAAI Conference on Artificial Intelligence (AAAI), 2024.
    #NLP
    [arXiv], [YouTube]

2023

  1. Self-Influence Guided Data Reweighting for Language Model Pre-training
    , Tolga Bolukbasi, Sriram Ganapathy, Shikhar Vashishth, and Partha Talukdar
    Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023.
    #NLP
    [arXiv]

  2. EpiK-Eval: Evaluation for Language Models as Epistemic Models
    , , Prasanna Parthasarathi, Shagun Sodhani and
    Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023.
    #NLP
    [arXiv], [code]

  3. Measuring the Knowledge Acquisition-Utilization Gap in Pretrained Language Models
    Amirhossein Kazemnejad, Mehdi Rezagholizadeh, Prasanna Parthasarathi and
    Findings of the Association for Computational Linguistics (EMNLP), 2023.
    #NLP
    [arXiv]

  4. Deep Learning on a Healthy Data Diet: Finding Important Examples for Fairness
    , Prasanna Parthasarathi, , Hamid Palangi, Samira Shabanian and
    AAAI Conference on Artificial Intelligence (AAAI), 2023.
    #NLP
    [arXiv], [YouTube]

2022

  1. Evaluating the Faithfulness of Importance Measures in NLP by Recursively Masking Allegedly Important Tokens and Retraining
    , Nicholas Meade, Vaibhav Adlakha and Siva Reddy
    Findings of the Association for Computational Linguistics (EMNLP), 2022.
    [BlackboxNLP Workshop, 2022]
    #NLP
    [arXiv], [code]

  2. Detecting Languages Unintelligible to Multilingual Models through Local Structure Probes
    , Prasanna Parthasarathi, Amal Zouaq and
    Findings of the Association for Computational Linguistics (EMNLP), 2022.
    #NLP

  3. Local Structure Matters Most in Most Languages
    , Prasanna Parthasarathi, Amal Zouaq and
    Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the International Joint Conference on Natural Language Processing (AACL-IJCNLP), 2022.
    #NLP

  4. Post-hoc Interpretability for Neural NLP: A Survey
    , Siva Reddy and
    ACM Computing Surveys, 2022.
    #NLP
    [arXiv]

  5. Local Structure Matters Most: Perturbation Study in NLU
    , Prasanna Parthasarathi, Amal Zouaq and
    Findings of the Association for Computational Linguistics (ACL), 2022.
    #NLP
    [arXiv]

2021

  1. MLMLM: Link Prediction with Mean Likelihood Masked Language Model
    , Philippe Trempe, Amal Zouaq and
    Findings of the Association for Computational Linguistics (ACL-IJCNLP), 2021.
    #NLP
    [arXiv]

  2. Benchmarking Bias Mitigation Algorithms in Representation Learning through Fairness Metrics
    , Deepak Sharma, Soroush Mehri, Adriana Romero, Samira Shabanian and Sina Honari
    Proceedings of the Neural Information Processing Systems (NeurIPS) Track on Datasets and Benchmarks, 2021.
    #NLP
    [OpenReview], [code]

  3. A Brief Study on the Effects of Training Generative Dialogue Models with a Semantic loss
    Prasanna Parthasarathi, , Joelle Pineau and
    Proceedings of the 22nd Annual SIGdial Meeting on Discourse and Dialogue, 2021.
    #NLP

  4. Do Encoder Representations of Generative Dialogue Models Encode Sufficient Information about the Task?
    Prasanna Parthasarathi, and Joelle Pineau
    Proceedings of the 22nd Annual SIGdial Meeting on Discourse and Dialogue, 2021.
    #NLP

  5. A Survey of Data Augmentation Approaches for NLP
    Steven Y. Feng, Varun Gangal, Jason Wei, , Soroush Vosoughi, Teruko Mitamura and Eduard Hovy
    Findings of the Association for Computational Linguistics (ACL-IJCNLP), 2021.
    #NLP
    [arXiv]