Publications by Quentin Fournier
Preprints

- NovoMolGen: Rethinking Molecular Language Model Pretraining
  Kamran Chitsaz*, Roshan Balaji*, Quentin Fournier, Nirav Pravinbhai Bhatt, and Sarath Chandar
  arXiv, 2025.
  #NLP, #Other
  [arXiv], [huggingface], [code]

- CADmium: Fine-Tuning Code Language Models for Text-Driven Sequential CAD Design
  Prashant Govindarajan*, Davide Baldelli*, Jay Pathak, Quentin Fournier, and Sarath Chandar
  arXiv, 2025.
  #NLP
  [arXiv], [code], [huggingface]

- Structure-Aligned Protein Language Model
  Can Chen, David Heurtel-Depeiges, Robert M. Vernon, Christopher James Langmead, Yoshua Bengio, and Quentin Fournier
  arXiv, 2025.
  #NLP, #Other
  [arXiv], [huggingface]

- Protein Language Models: Is Scaling Necessary?
  Quentin Fournier, Robert M. Vernon, Almer van der Sloot, Benjamin Schulz, Sarath Chandar, and Christopher James Langmead
  bioRxiv, 2024.
  #NLP, #Other
  [bioRxiv], [code], [huggingface]
Conference and Journal Papers

2025

- Manifold Metric: A Loss Landscape Approach for Predicting Model Performance
  Pranshu Malviya, Jerry Huang, Aristide Baratin, Quentin Fournier, and Sarath Chandar
  Conference on Lifelong Learning Agents (CoLLAs), 2025.
  #DL
  [arXiv]

- Combining Domain and Alignment Vectors Provides Better Knowledge-Safety Trade-offs in LLMs
  Megh Thakkar, Quentin Fournier, Matthew Riemer, Pin-Yu Chen, Amal Zouaq, Payel Das, and Sarath Chandar
  Annual Meeting of the Association for Computational Linguistics (ACL), 2025.
  #NLP
  [acl], [arXiv]

- Small Encoders Can Rival Large Decoders in Detecting Groundedness
  Istabrak Abbes, Gabriele Prato, Quentin Fournier, Fernando Rodriguez, Alaa Boukhary, Adam Elwood, and Sarath Chandar
  Findings of the Association for Computational Linguistics (ACL), 2025.
  #NLP
  [acl], [arXiv]

- NeoBERT: A Next Generation BERT
  Lola Le Breton, Quentin Fournier, Mariam El Mezouar, and Sarath Chandar
  Transactions on Machine Learning Research (TMLR), 2025.
  #NLP
  [openreview], [arXiv], [code], [huggingface]
2024

- Exploring Quantization for Efficient Pre-Training of Transformer Language Models
  Kamran Chitsaz, Quentin Fournier, Gonçalo Mordido, and Sarath Chandar
  Findings of the Association for Computational Linguistics (EMNLP), 2024.
  #NLP, #DL
  [acl], [arXiv]

- A Deep-Dive into the Tradeoffs of Preference Alignment with PEFT
  Megh Thakkar, Quentin Fournier, Matthew Riemer, Pin-Yu Chen, Amal Zouaq, Payel Das, and Sarath Chandar
  Annual Meeting of the Association for Computational Linguistics (ACL), 2024.
  #NLP
  [acl], [arXiv]