Preprints

  • The Markovian Thinker
    Milad Aghajohari*, , Amirhossein Kazemnejad*, , Alessandro Sordoni, Aaron Courville and Siva Reddy
    In arXiv, 2025.
    #NLP, #RL
    [arXiv]

  • Just-in-time Episodic Feedback Hinter: Leveraging Offline Knowledge to Improve LLM Agents Adaptation
    , Aman Jaiswal, Patrice Bechard, Oleh Shliazhko, Orlando Marquez Ayala, , Massimo Caccia, Alexandre Drouin, and Alexandre Lacoste
    In arXiv, 2025.
    #NLP, #RL
    [arXiv]

  • GRPO-λ: Credit Assignment Improves LLM Reasoning
    Prasanna Parthasarathi*, , Boxing Chen, Yufei Cui and
    In arXiv, 2025.
    #RL, #NLP
    [arXiv]

  • CrystalGym: A New Benchmark for Materials Discovery Using Reinforcement Learning
    , , , Mariano Phielipp, Santiago Miret and
    In arXiv, 2025.
    #RL, #Other
    [arXiv], [code]

  • NovoMolGen: Rethinking Molecular Language Model Pretraining
    , , Quentin Fournier, Nirav Pravinbhai Bhatt and
    In arXiv, 2025.
    #NLP, #Other
    [arXiv], [huggingface], [code]

  • CADmium: Fine-Tuning Code Language Models for Text-Driven Sequential CAD Design
    , , Jay Pathak, Quentin Fournier and
    In arXiv, 2025.
    #NLP
    [arXiv], [code], [huggingface]

  • Optimizers Qualitatively Alter Solutions And We Should Leverage This
    Razvan Pascanu, Clare Lyle, Ionut-Vlad Modoranu, Naima Elosegui Borras, Dan Alistarh, Petar Velickovic, , Soham De and James Martens
    In arXiv, 2025.
    #DL
    [arXiv]

  • Did I Faithfully Say What I Thought? Bridging the Gap Between Neural Activity and Self-Explanations in Large Language Models
    , Jean-Noel Vittaut, Nicolas Chesneau, and Marie-Jeanne Lesot
    In arXiv, 2025.
    #NLP
    [arXiv]

  • V-JEPA 2: Self-Supervised Video Models Enable Understanding, Prediction and Planning
    Mido Assran*, Adrien Bardes*, David Fan*, Quentin Garrido*, Russell Howes*, Mojtaba Komeili*, Matthew Muckley*, Ammar Rizvi*, Claire Roberts*, Koustuv Sinha*, , Sergio Arnaud*, Abha Gejji*, Ada Martin*, Francois Robert Hogan*, Daniel Dugas*, Piotr Bojanowski, Vasil Khalidov, Patrick Labatut, Francisco Massa, Marc Szafraniec, Kapil Krishnakumar, Yong Li, Xiaodong Ma, , Franziska Meier*, Yann LeCun*, Michael Rabbat* and Nicolas Ballas*
    Technical Report, 2025.
    #DL
    [website], [arXiv], [code], [huggingface], [blogpost]

  • Structure-Aligned Protein Language Model
    Can Chen, , Robert M. Vernon, Christopher James Langmead, Yoshua Bengio and Quentin Fournier
    In arXiv, 2025.
    #NLP, #Other
    [arXiv], [huggingface]

  • Monitoring morphometric drift in lifelong learning segmentation of the spinal cord
    Enamundram Naga Karthik, Sandrine Bédard, Jan Valošek, Christoph S. Aigner, Elise Bannier, Josef Bednařík, Virginie Callot, Anna Combes, Armin Curt, Gergely David, Falk Eippert, Lynn Farner, Michael G Fehlings, Patrick Freund, Tobias Granberg, Cristina Granziera, RHSCIR Network Imaging Group, Ulrike Horn, Tomáš Horák, Suzanne Humphreys, Markus Hupp, Anne Kerbrat, Nawal Kinany, Shannon Kolind, Petr Kudlička, Anna Lebret, Lisa Eunyoung Lee, Caterina Mainero, Allan R. Martin, Megan McGrath, Govind Nair, Kristin P. O'Grady, Jiwon Oh, Russell Ouellette, Nikolai Pfender, Dario Pfyffer, Pierre-François Pradat, Alexandre Prat, Emanuele Pravatà, Daniel S. Reich, Ilaria Ricchi, Naama Rotem-Kohavi, Simon Schading-Sassenhausen, Maryam Seif, Andrew Smith, Seth A Smith, Grace Sweeney, Roger Tam, Anthony Traboulsee, Constantina Andrada Treaba, Charidimos Tsagkas, Zachary Vavasour, Dimitri Van De Ville, Kenneth Arnold Weber II, and Julien Cohen-Adad
    In arXiv, 2025.
    #DL, #Other
    [arXiv]

  • Torque-Aware Momentum
    , , Aristide Baratin, Reza Babanezhad Harikandeh, Gintare Karolina Dziugaite, Razvan Pascanu and
    In arXiv, 2024.
    #DL
    [arXiv]

  • TRecViT: A Recurrent Video Transformer
    Viorica Pătrăucean, Xu Owen He, Joseph Heyward, Chuhan Zhang, Mehdi S. M. Sajjadi, George-Cristian Muraru, , Mahdi Karami, Ross Goroshin, Yutian Chen, Simon Osindero, João Carreira and Razvan Pascanu
    In arXiv, 2024.
    #DL
    [arXiv], [code]

  • Too Big to Fool: Resisting Deception in Language Models
    , Mats Leon Richter, Juan Rodriguez, , and Maxime Gasse
    In arXiv, 2024.
    #NLP
    [arXiv]

  • Interpretability Needs a New Paradigm
    , Himabindu Lakkaraju, Siva Reddy and
    In arXiv, 2024.
    #NLP, #DL
    [arXiv]

  • Protein Language Models: Is Scaling Necessary?
    Quentin Fournier, Robert M. Vernon, Almer van der Sloot, Benjamin Schulz, and Christopher James Langmead
    In bioRxiv, 2024.
    #DL, #Other
    [bioRxiv], [code], [huggingface]

  • Interpretability in Action: Exploratory Analysis of VPT, a Minecraft Agent
    Karolis Jucys, George Adamopoulos, Mehrab Hamidi, Stephanie Milani, , , Sonia Joseph, Blake Richards, Irina Rish and Özgür Şimşek
    Workshop on Mechanistic Interpretability @ ICML, 2024.
    #Other
    [arXiv]

  • Segmentation of Multiple Sclerosis Lesions across Hospitals: Learn Continually or Train from Scratch?
    , Anne Kerbrat, Pierre Labauge, Tobias Granberg, Jason Talbott, Daniel S. Reich, Massimo Filippi, Rohit Bakshi, Virginie Callot, and Julien Cohen-Adad
    In arXiv, 2022.
    [Medical Imaging meets NeurIPS, 2022]
    #DL, #Other
    [arXiv], [code]

  • Feature diversity in self-supervised learning
    and
    Conference on Lifelong Learning Agents (CoLLAs) Workshop Track, 2022.
    #DL
    [arXiv]

  • An Introduction to Lifelong Supervised Learning
    Shagun Sodhani, , Sanket Vaibhav Mehta, , , and
    In arXiv, 2022.
    #DL
    [arXiv]

  • Maximum Reward Formulation In Reinforcement Learning
    Sai Krishna Gottipati, Yashaswi Pathak, Rohan Nuttall, Raviteja Chunduru, Ahmed Touati, Sriram Ganapathi Subramanian, Matthew E Taylor and
    In arXiv, 2020.
    #RL
    [arXiv]