Publications of Quentin Fournier
-
Quentin Fournier
Research interests: Lifelong Learning, Natural Language Processing
Activity
- Research Fellow: Jan. 2024 - present
- Postdoctoral Researcher: Jan. 2023 - Jan. 2024
Preprints
-
Protein Language Models: Is Scaling Necessary?
Quentin Fournier, Robert M. Vernon, Almer van der Sloot, Benjamin Schulz, Sarath Chandar and Christopher James Langmead
bioRxiv, 2024.
#DL, #Other
[bioRxiv], [code]
Conference and Journal Papers
2025
-
Predicting the Impact of Model Expansion through the Minima Manifold: A Loss Landscape Perspective
Pranshu Malviya, Jerry Huang, Aristide Baratin, Quentin Fournier and Sarath Chandar
Conference on Lifelong Learning Agents (CoLLAs), 2025.
#DL
[arXiv]
-
Combining Domain and Alignment Vectors Provides Better Knowledge-Safety Trade-offs in LLMs
Megh Thakkar, Quentin Fournier, Matthew Riemer, Pin-Yu Chen, Amal Zouaq, Payel Das and Sarath Chandar
Annual Meeting of the Association for Computational Linguistics (ACL), 2025.
#NLP
-
Small Encoders Can Rival Large Decoders in Detecting Groundedness
Istabrak Abbes, Gabriele Prato, Quentin Fournier, Fernando Rodriguez, Alaa Boukhary, Adam Elwood and Sarath Chandar
Findings of the Association for Computational Linguistics (ACL), 2025.
#NLP
2024
-
Exploring Quantization for Efficient Pre-Training of Transformer Language Models
Kamran Chitsaz, Quentin Fournier, Gonçalo Mordido and Sarath Chandar
Findings of the Association for Computational Linguistics (EMNLP), 2024.
#NLP, #DL
[acl], [arXiv]
-
A Deep-Dive into the Tradeoffs of Preference Alignment with PEFT
Megh Thakkar, Quentin Fournier, Matthew Riemer, Pin-Yu Chen, Amal Zouaq, Payel Das and Sarath Chandar
Annual Meeting of the Association for Computational Linguistics (ACL), 2024.
#NLP
[acl], [arXiv]