Publications | Deep Learning
Preprints
-
Protein Language Models: Is Scaling Necessary?
Quentin Fournier, Robert M. Vernon, Almer van der Sloot, Benjamin Schulz, Sarath Chandar, and Christopher James Langmead
In bioRxiv, 2024.
#DL, #Other
[bioRxiv], [code]
-
Exploring the Plasticity of Neural Network for NLP Tasks in Continual Learning
Maryam Hashemzadeh, Pranshu Malviya*, Darshan Patil*, and Sarath Chandar
Conference on Lifelong Learning Agents (CoLLAs) workshop, 2024.
#DL, #NLP
-
BindGPT: A Scalable Framework for 3D Molecular Design via Language Modeling and Reinforcement Learning
Artem Zholus, Maksim Kuznetsov, Roman Schutski, Rim Shayakhmetov, Daniil Polykovskiy, Sarath Chandar, and Alex Zhavoronkov
In arXiv, 2024.
#DL, #RL
[arXiv], [website]
-
Predicting the Impact of Model Expansion through the Minima Manifold: A Loss Landscape Perspective
Pranshu Malviya, Jerry Huang, Quentin Fournier, and Sarath Chandar
In arXiv, 2024.
#DL
[arXiv]
-
Interpretability Needs a New Paradigm
Andreas Madsen, Himabindu Lakkaraju, Siva Reddy, and Sarath Chandar
In arXiv, 2024.
#NLP, #DL, #Other
[arXiv]
-
Segmentation of Multiple Sclerosis Lesions across Hospitals: Learn Continually or Train from Scratch?
Naga Karthik Enamundram, Anne Kerbrat, Pierre Labauge, Tobias Granberg, Jason Talbott, Daniel S. Reich, Massimo Filippi, Rohit Bakshi, Virginie Callot, Sarath Chandar, and Julien Cohen-Adad
In arXiv, 2022.
#DL
[arXiv], [code]
-
Feature diversity in self-supervised learning
Pranshu Malviya* and Arjun Vaithilingam Sudhakar*
Conference on Lifelong Learning Agents (CoLLAs) workshop, 2022.
#DL
[arXiv]
-
Sharpness-Aware Training for Accurate Inference on Noisy DNN Accelerators
Gonçalo Mordido, Sarath Chandar, and François Leduc-Primeau
Conference on Lifelong Learning Agents (CoLLAs) workshop, 2022.
[Edge Intelligence Workshop (EIW), 2022]
#DL
[arXiv]
-
An Introduction to Lifelong Supervised Learning
Shagun Sodhani, Mojtaba Faramarzi, Sanket Vaibhav Mehta, Pranshu Malviya, Mohamed Abdelsalam, Janarthanan Rajendran, and Sarath Chandar
In arXiv, 2022.
#DL
[arXiv]
-
RECOVER: Sequential Model Optimization Platform for Combination Drug Repurposing Identifies Novel Synergistic Compounds in vitro
Paul Bertin, Jarrid Rector-Brooks, Deepak Sharma, Thomas Gaudelet, Andrew Anighoro, Torsten Gross, Francisco Martínez-Peña, Eileen L. Tang, Suraj M S, Cristian Regep, Jeremy Hayter, Maksym Korablyov, Nicholas Valiante, Almer van der Sloot, Mike Tyers, Charles Roberts, Michael M. Bronstein, Luke L. Lairson, Jake P. Taylor-King, and Yoshua Bengio
In arXiv, 2022.
#DL
[arXiv], [code]
Conference and Journal Papers
2024
-
Exploring Quantization for Efficient Pre-Training of Transformer Language Models
Kamran Chitsaz, Quentin Fournier, Gonçalo Mordido, and Sarath Chandar
Findings of the Association for Computational Linguistics: EMNLP, 2024.
#NLP, #DL
[arXiv]
-
Lookbehind-SAM: k steps back, 1 step forward
Gonçalo Mordido, Pranshu Malviya, Aristide Baratin, and Sarath Chandar
International Conference on Machine Learning (ICML), 2024.
#DL
[arXiv], [code], [YouTube]
-
Promoting Exploration in Memory-Augmented Adam using Critical Momenta
Pranshu Malviya, Gonçalo Mordido, Aristide Baratin, Reza Babanezhad Harikandeh, Jerry Huang, Simon Lacoste-Julien, Razvan Pascanu, and Sarath Chandar
Transactions on Machine Learning Research (TMLR), 2024.
#DL
[arXiv]
-
A Responsible Framework for Applying Artificial Intelligence on Medical Images and Signals at the Point-of-care: the PACS-AI Platform
Pascal Theriault-Lauzier, Denis Cobin, Olivier Tastet, Elodie Labrecque Langlais, Bahareh Taji, Guson Kang, Aun-Yeong Chong, Derek So, An Tang, Judy Wawira Gichoya, Sarath Chandar, Pierre-Luc Déziel, Julie G Hussin, Samuel Kadoury, and Robert Avram
Canadian Journal of Cardiology, 2024.
#DL, #Other
-
Mastering Memory Tasks with World Models
Mohammad Reza Samsami*, Artem Zholus*, Janarthanan Rajendran, and Sarath Chandar
International Conference on Learning Representations (ICLR), 2024. [Oral presentation.]
#RL, #DL
[openreview]
-
On the Costs and Benefits of Adopting Lifelong Learning for Software Analytics - Empirical Study on Brown Build and Risk Prediction
Doriane Olewicki, Sarra Habchi, Mathieu Nayrolles, Mojtaba Faramarzi, Sarath Chandar, and Bram Adams
International Conference on Software Engineering (ICSE) - Software Engineering in Practice Track, 2024. [ICSE24 SEIP Distinguished Paper Award.]
#DL
[arXiv]
-
Fast and Accurate Output Error Estimation for Memristor-Based Deep Neural Networks
Jonathan Kern, Sébastien Henwood, Gonçalo Mordido, Elsa Dupraz, Abdeldjalil Aïssa-El-Bey, Yvon Savaria, and François Leduc-Primeau
IEEE Transactions on Signal Processing, 2024.
#DL
[paper]
2023
-
Training DNNs Resilient to Adversarial and Random Bit-Flips by Learning Quantization Ranges
Kamran Chitsaz, Gonçalo Mordido, Jean Pierre David, and François Leduc-Primeau
Transactions on Machine Learning Research (TMLR), 2023.
#DL
[openreview], [code]
-
An Empirical Investigation of the Role of Pre-training in Lifelong Learning
Sanket Vaibhav Mehta, Darshan Patil, Sarath Chandar, and Emma Strubell
Journal of Machine Learning Research (JMLR), 2023.
#DL
[arXiv]
-
DEUP: Direct Epistemic Uncertainty Prediction
Moksh Jain, Salem Lahlou, Hadi Nekoei, Victor Butoi, Paul Bertin, Jarrid Rector-Brooks, Maksym Korablyov, and Yoshua Bengio
Transactions on Machine Learning Research (TMLR), 2023.
#DL
[arXiv], [code]
-
Label fusion and training methods for reliable representation of inter-rater uncertainty
Andreanne Lemay, Charley Gros, Naga Karthik Enamundram, and Julien Cohen-Adad
The Journal of Machine Learning for Biomedical Imaging (MELBA), 2023.
#DL
[paper]
2022
-
TAG: Task-based Accumulated Gradients for Lifelong Learning
Pranshu Malviya, Balaraman Ravindran, and Sarath Chandar
Conference on Lifelong Learning Agents (CoLLAs), 2022.
[Workshop on Theory and Foundation of Continual Learning, ICML, 2021]
#DL
[arXiv], [code]
-
Improving Meta-Learning Generalization with Activation-Based Early-Stopping
Simon Guiroy, Christopher Pal, Gonçalo Mordido, and Sarath Chandar
Conference on Lifelong Learning Agents (CoLLAs), 2022.
#DL
[arXiv], [code], [YouTube]
-
Biological Sequence Design with GFlowNets
Moksh Jain, Emmanuel Bengio, Alex Hernandez-Garcia, Jarrid Rector-Brooks, Bonaventure F. P. Dossou, Chanakya Ekbote, Jie Fu, Tianyu Zhang, Michael Kilgour, Dinghuai Zhang, Lena Simine, Payel Das, and Yoshua Bengio
International Conference on Machine Learning (ICML), 2022.
#DL
[arXiv], [code]
-
Memory Augmented Optimizers for Deep Learning
Paul-Aymeric McRae, Prasanna Parthasarathi, Mido Assran, and Sarath Chandar
International Conference on Learning Representations (ICLR), 2022.
#DL
[openreview], [code]
-
PatchUp: A Feature-Space Block-Level Regularization Technique for Convolutional Neural Networks
Mojtaba Faramarzi, Mohammad Amini, Akilesh Badrinaaraayanan, Vikas Verma, and Sarath Chandar
AAAI Conference on Artificial Intelligence (AAAI), 2022.
#DL
[arXiv], [code]
2021
-
IIRC: Incremental Implicitly-Refined Classification
Mohamed Abdelsalam, Mojtaba Faramarzi, Shagun Sodhani, and Sarath Chandar
Conference on Computer Vision and Pattern Recognition (CVPR), 2021.
#DL
[arXiv], [code], [website], [PyPI], [docs]