Publications of Quentin Fournier
Quentin Fournier
Research areas: Lifelong Learning, Natural Language Processing
Activity
- Research Fellow: Jan. 2024 - present
- Postdoctoral Researcher: Jan. 2023 - Jan. 2024
Preprints
Protein Language Models: Is Scaling Necessary?
Quentin Fournier, Robert M. Vernon, Almer van der Sloot, Benjamin Schulz, Sarath Chandar et Christopher James Langmead
bioRxiv, 2024.
#DL, #Other
[bioRxiv], [code]
Predicting the Impact of Model Expansion through the Minima Manifold: A Loss Landscape Perspective
Pranshu Malviya, Jerry Huang, Quentin Fournier et Sarath Chandar
arXiv, 2024.
#DL
[arXiv]
Conference and journal articles
2024
Exploring Quantization for Efficient Pre-Training of Transformer Language Models
Kamran Chitsaz, Quentin Fournier, Gonçalo Mordido et Sarath Chandar
Findings of the Association for Computational Linguistics (EMNLP), 2024.
#NLP, #DL
[acl], [arXiv]
A deep-dive into the tradeoffs of preference alignment with PEFT
Megh Thakkar, Quentin Fournier, Matthew Riemer, Pin-Yu Chen, Amal Zouaq, Payel Das et Sarath Chandar
Annual Meeting of the Association for Computational Linguistics (ACL), 2024.
#NLP
[acl], [arXiv]