Publications by Pranshu Malviya
Pranshu Malviya
Research areas: Lifelong Learning, Causal Inference, Optimization
Activity
- Ph.D. student: Jan. 2022 - present
- Master's student: Jan. 2021 - Dec. 2021
Master's Thesis
- TAG: Task-based Accumulated Gradients for Lifelong Learning
by Pranshu Malviya, supervised by Balaraman Ravindran and Sarath Chandar.
Indian Institute of Technology, Madras, January 2022.
[thesis]
Preprints
- Exploring the Plasticity of Neural Network for NLP Tasks in Continual Learning
Maryam Hashemzadeh, Pranshu Malviya*, Darshan Patil* and Sarath Chandar
Conference on Lifelong Learning Agents (CoLLAs) Workshop Track, 2024.
#DL, #NLP
- Predicting the Impact of Model Expansion through the Minima Manifold: A Loss Landscape Perspective
Pranshu Malviya, Jerry Huang, Quentin Fournier and Sarath Chandar
arXiv preprint, 2024.
#DL
[arXiv]
- Feature diversity in self-supervised learning
Pranshu Malviya* and Arjun Vaithilingam Sudhakar*
Conference on Lifelong Learning Agents (CoLLAs) Workshop Track, 2022.
#DL
[arXiv]
- An Introduction to Lifelong Supervised Learning
Shagun Sodhani, Mojtaba Faramarzi, Sanket Vaibhav Mehta, Pranshu Malviya, Mohamed Abdelsalam, Janarthanan Rajendran and Sarath Chandar
arXiv preprint, 2022.
#DL
[arXiv]
Conference and Journal Papers
2024
- Lookbehind-SAM: k steps back, 1 step forward
Gonçalo Mordido, Pranshu Malviya, Aristide Baratin and Sarath Chandar
International Conference on Machine Learning (ICML), 2024.
#DL
[pmlr], [arXiv], [code], [YouTube]
- Promoting Exploration in Memory-Augmented Adam using Critical Momenta
Pranshu Malviya, Gonçalo Mordido, Aristide Baratin, Reza Babanezhad Harikandeh, Jerry Huang, Simon Lacoste-Julien, Razvan Pascanu and Sarath Chandar
Transactions on Machine Learning Research (TMLR), 2024.
#DL
[openreview], [arXiv]
2022
- TAG: Task-based Accumulated Gradients for Lifelong Learning
Pranshu Malviya, Balaraman Ravindran and Sarath Chandar
Conference on Lifelong Learning Agents (CoLLAs), 2022.
[Workshop on Theory and Foundation of Continual Learning, ICML, 2021]
#DL
[pmlr], [arXiv], [code]