Publications of Quentin Fournier
Activity
- Postdoctoral Researcher: Feb. 2023 - Feb. 2024
Preprints
- NovoMolGen: Rethinking Molecular Language Model Pretraining
  Kamran Chitsaz*, Roshan Balaji*, Quentin Fournier, Nirav Pravinbhai Bhatt, and Sarath Chandar
  In arXiv, 2025.
  #NLP, #Other
  [arXiv], [huggingface], [code]
- CADmium: Fine-Tuning Code Language Models for Text-Driven Sequential CAD Design
  Prashant Govindarajan*, Davide Baldelli*, Jay Pathak, Quentin Fournier, and Sarath Chandar
  In arXiv, 2025.
  #NLP
  [arXiv], [code], [huggingface]
- Structure-Aligned Protein Language Model
  Can Chen, David Heurtel-Depeiges, Robert M. Vernon, Christopher James Langmead, Yoshua Bengio, and Quentin Fournier
  In arXiv, 2025.
  #NLP, #Other
  [arXiv], [huggingface]
- Protein Language Models: Is Scaling Necessary?
  Quentin Fournier, Robert M. Vernon, Almer van der Sloot, Benjamin Schulz, Sarath Chandar, and Christopher James Langmead
  In bioRxiv, 2024.
  #NLP, #Other
  [bioRxiv], [code], [huggingface]
Conference and Journal Articles
2025
- Manifold Metric: A Loss Landscape Approach for Predicting Model Performance
  Pranshu Malviya, Jerry Huang, Aristide Baratin, Quentin Fournier, and Sarath Chandar
  Conference on Lifelong Learning Agents (CoLLAs), 2025.
  #DL
  [arXiv]
- Combining Domain and Alignment Vectors Provides Better Knowledge-Safety Trade-offs in LLMs
  Megh Thakkar, Quentin Fournier, Matthew Riemer, Pin-Yu Chen, Amal Zouaq, Payel Das, and Sarath Chandar
  Annual Meeting of the Association for Computational Linguistics (ACL), 2025.
  #NLP
  [acl], [arXiv]
- Small Encoders Can Rival Large Decoders in Detecting Groundedness
  Istabrak Abbes, Gabriele Prato, Quentin Fournier, Fernando Rodriguez, Alaa Boukhary, Adam Elwood, and Sarath Chandar
  Findings of the Association for Computational Linguistics (ACL), 2025.
  #NLP
  [acl], [arXiv]
- NeoBERT: A Next-Generation BERT
  Lola Le Breton, Quentin Fournier, Mariam El Mezouar, and Sarath Chandar
  Transactions on Machine Learning Research (TMLR), 2025.
  #NLP
  [openreview], [arXiv], [code], [huggingface]
2024
- Exploring Quantization for Efficient Pre-Training of Transformer Language Models
  Kamran Chitsaz, Quentin Fournier, Gonçalo Mordido, and Sarath Chandar
  Findings of the Association for Computational Linguistics (EMNLP), 2024.
  #NLP, #DL
  [acl], [arXiv]
- A deep-dive into the tradeoffs of preference alignment with PEFT
  Megh Thakkar, Quentin Fournier, Matthew Riemer, Pin-Yu Chen, Amal Zouaq, Payel Das, and Sarath Chandar
  Annual Meeting of the Association for Computational Linguistics (ACL), 2024.
  #NLP
  [acl], [arXiv]