Publications
Preprints
Did I Faithfully Say What I Thought? Bridging the Gap Between Neural Activity and Self-Explanations in Large Language Models
Milan Bhan, Jean-Noël Vittaut, Nicolas Chesneau, Sarath Chandar, and Marie-Jeanne Lesot
In arXiv, 2025.
#NLP
[arXiv]
V-JEPA 2: Self-Supervised Video Models Enable Understanding, Prediction and Planning
Mido Assran*, Adrien Bardes*, David Fan*, Quentin Garrido*, Russell Howes*, Mojtaba Komeili*, Matthew Muckley*, Ammar Rizvi*, Claire Roberts*, Koustuv Sinha*, Artem Zholus*, Sergio Arnaud*, Abha Gejji*, Ada Martin*, Francois Robert Hogan*, Daniel Dugas*, Piotr Bojanowski, Vasil Khalidov, Patrick Labatut, Francisco Massa, Marc Szafraniec, Kapil Krishnakumar, Yong Li, Xiaodong Ma, Sarath Chandar, Franziska Meier*, Yann LeCun*, Michael Rabbat*, and Nicolas Ballas*
Technical Report, 2025.
#DL
[website], [arXiv], [code], [huggingface], [blogpost]
Monitoring morphometric drift in lifelong learning segmentation of the spinal cord
Enamundram Naga Karthik, Sandrine Bédard, Jan Valošek, Christoph S. Aigner, Elise Bannier, Josef Bednařík, Virginie Callot, Anna Combes, Armin Curt, Gergely David, Falk Eippert, Lynn Farner, Michael G Fehlings, Patrick Freund, Tobias Granberg, Cristina Granziera, RHSCIR Network Imaging Group, Ulrike Horn, Tomáš Horák, Suzanne Humphreys, Markus Hupp, Anne Kerbrat, Nawal Kinany, Shannon Kolind, Petr Kudlička, Anna Lebret, Lisa Eunyoung Lee, Caterina Mainero, Allan R. Martin, Megan McGrath, Govind Nair, Kristin P. O'Grady, Jiwon Oh, Russell Ouellette, Nikolai Pfender, Dario Pfyffer, Pierre-François Pradat, Alexandre Prat, Emanuele Pravatà, Daniel S. Reich, Ilaria Ricchi, Naama Rotem-Kohavi, Simon Schading-Sassenhausen, Maryam Seif, Andrew Smith, Seth A Smith, Grace Sweeney, Roger Tam, Anthony Traboulsee, Constantina Andrada Treaba, Charidimos Tsagkas, Zachary Vavasour, Dimitri Van De Ville, Kenneth Arnold Weber II, Sarath Chandar, and Julien Cohen-Adad
In arXiv, 2025.
#DL, #Other
[arXiv]
Torque-Aware Momentum
Pranshu Malviya, Gonçalo Mordido, Aristide Baratin, Reza Babanezhad Harikandeh, Gintare Karolina Dziugaite, Razvan Pascanu, and Sarath Chandar
In arXiv, 2024.
#DL
[arXiv]
TRecViT: A Recurrent Video Transformer
Viorica Pătrăucean, Xu Owen He, Joseph Heyward, Chuhan Zhang, Mehdi S. M. Sajjadi, George-Cristian Muraru, Artem Zholus, Mahdi Karami, Ross Goroshin, Yutian Chen, Simon Osindero, João Carreira, and Razvan Pascanu
In arXiv, 2024.
#DL
[arXiv], [code]
Too Big to Fool: Resisting Deception in Language Models
Mohammad Reza Samsami, Mats Leon Richter, Juan Rodriguez, Megh Thakkar, Sarath Chandar, and Maxime Gasse
In arXiv, 2024.
#NLP
[arXiv]
Unraveling the Complexity of Memory in RL Agents: An Approach for Classification and Evaluation
Egor Cherepanov, Nikita Kachaev, Artem Zholus, Alexey K. Kovalev, and Aleksandr I. Panov
In arXiv, 2024.
#RL
[arXiv]
Interpretability Needs a New Paradigm
Andreas Madsen, Himabindu Lakkaraju, Siva Reddy, and Sarath Chandar
In arXiv, 2024.
#NLP, #DL
[arXiv]
Protein Language Models: Is Scaling Necessary?
Quentin Fournier, Robert M. Vernon, Almer van der Sloot, Benjamin Schulz, Sarath Chandar, and Christopher James Langmead
In bioRxiv, 2024.
#DL, #Other
[bioRxiv], [code]
Interpretability in Action: Exploratory Analysis of VPT, a Minecraft Agent
Karolis Jucys, George Adamopoulos, Mehrab Hamidi, Stephanie Milani, Mohammad Reza Samsami, Artem Zholus, Sonia Joseph, Blake Richards, Irina Rish, and Özgür Şimşek
Workshop on Mechanistic Interpretability @ ICML, 2024.
#Other
[arXiv]
Segmentation of Multiple Sclerosis Lesions across Hospitals: Learn Continually or Train from Scratch?
Naga Karthik Enamundram, Anne Kerbrat, Pierre Labauge, Tobias Granberg, Jason Talbott, Daniel S. Reich, Massimo Filippi, Rohit Bakshi, Virginie Callot, Sarath Chandar, and Julien Cohen-Adad
In arXiv, 2022.
[Medical Imaging meets NeurIPS, 2022]
#DL, #Other
[arXiv], [code]
Feature diversity in self-supervised learning
Pranshu Malviya* and Arjun Vaithilingam Sudhakar*
Conference on Lifelong Learning Agents (CoLLAs) Workshop Track, 2022.
#DL
[arXiv]
An Introduction to Lifelong Supervised Learning
Shagun Sodhani, Mojtaba Faramarzi, Sanket Vaibhav Mehta, Pranshu Malviya, Mohamed Abdelsalam, Janarthanan Rajendran, and Sarath Chandar
In arXiv, 2022.
#DL
[arXiv]
Maximum Reward Formulation In Reinforcement Learning
Sai Krishna Gottipati, Yashaswi Pathak, Rohan Nuttall, Raviteja Chunduru, Ahmed Touati, Sriram Ganapathi Subramanian, Matthew E Taylor, and Sarath Chandar
In arXiv, 2020.
#RL
[arXiv]