Publications | Deep Reinforcement Learning
Preprints
- Unraveling the Complexity of Memory in RL Agents: An Approach for Classification and Evaluation
Egor Cherepanov, Nikita Kachaev, Artem Zholus, Alexey K. Kovalev and Aleksandr I. Panov
In arXiv, 2024.
#RL
[arXiv]
- Maximum Reward Formulation In Reinforcement Learning
Sai Krishna Gottipati, Yashaswi Pathak, Rohan Nuttall, Raviteja Chunduru, Ahmed Touati, Sriram Ganapathi Subramanian, Matthew E Taylor and Sarath Chandar
In arXiv, 2020.
#RL
[arXiv]
Conference and Journal Articles
2025
- BindGPT: A Scalable Framework for 3D Molecular Design via Language Modeling and Reinforcement Learning
Artem Zholus, Maksim Kuznetsov, Roman Schutski, Rim Shayakhmetov, Daniil Polykovskiy, Sarath Chandar and Alex Zhavoronkov
AAAI Conference on Artificial Intelligence (AAAI), 2025.
#DL, #RL
[arXiv], [website]
2024
- Balancing Context Length and Mixing Times for Reinforcement Learning at Scale
Matthew Riemer, Khimya Khetarpal, Janarthanan Rajendran and Sarath Chandar
Conference on Neural Information Processing Systems (NeurIPS), 2024.
#RL
[openreview]
- Toward Debugging Deep Reinforcement Learning Programs with RLExplorer
Rached Bouchoucha, Ahmed Haj Yahmed, Darshan Patil, Janarthanan Rajendran, Amin Nikanjam, Sarath Chandar and Foutse Khomh
International Conference on Software Maintenance and Evolution (ICSME), 2024.
#RL
[arXiv]
- Sub-goal Distillation: A Method to Improve Small Language Agents
Maryam Hashemzadeh, Elias Stengel-Eskin, Sarath Chandar and Marc-Alexandre Cote
Conference on Lifelong Learning Agents (CoLLAs), 2024. [Oral presentation.]
#RL, #NLP
[arXiv]
- Partial Models for Building Adaptive Model-Based Reinforcement Learning Agents
Safa Alver, Ali Rahimi-Kalahroudi and Doina Precup
Conference on Lifelong Learning Agents (CoLLAs), 2024.
#RL
[arXiv]
- Mastering Memory Tasks with World Models
Mohammad Reza Samsami*, Artem Zholus*, Janarthanan Rajendran and Sarath Chandar
International Conference on Learning Representations (ICLR), 2024. [Oral presentation.]
#RL, #DL
[openreview], [arXiv]
- Intelligent Switching for Reset-Free RL
Darshan Patil, Janarthanan Rajendran, Glen Berseth and Sarath Chandar
International Conference on Learning Representations (ICLR), 2024.
#RL
[openreview], [arXiv]
- Learning Conditional Policies for Crystal Design Using Offline Reinforcement Learning
Prashant Govindarajan, Santiago Miret, Jarrid Rector-Brooks, Mariano Phielipp, Janarthanan Rajendran and Sarath Chandar
Digital Discovery Journal, 2024.
#RL
[paper]
2023
- Replay Buffer with Local Forgetting for Adapting to Local Environment Changes in Deep Model-Based Reinforcement Learning
Ali Rahimi-Kalahroudi, Janarthanan Rajendran, Ida Momennejad, Harm van Seijen and Sarath Chandar
Conference on Lifelong Learning Agents (CoLLAs), 2023.
[Deep Reinforcement Learning Workshop, NeurIPS, 2022]
#RL
[pmlr], [arXiv]
- Towards Few-shot Coordination: Revisiting Ad-hoc Teamplay Challenge In the Game of Hanabi
Hadi Nekoei, Xutong Zhao, Janarthanan Rajendran, Miao Liu and Sarath Chandar
Conference on Lifelong Learning Agents (CoLLAs), 2023.
#RL
[pmlr], [arXiv]
- Dealing With Non-stationarity in Decentralized Cooperative Multi-Agent Deep Reinforcement Learning via Multi-Timescale Learning
Hadi Nekoei, Akilesh Badrinaaraayanan, Amit Sinha, Mohammad Amini, Janarthanan Rajendran, Aditya Mahajan and Sarath Chandar
Conference on Lifelong Learning Agents (CoLLAs), 2023.
#RL
[pmlr], [arXiv]
- Conditionally Optimistic Exploration for Cooperative Deep Multi-Agent Reinforcement Learning
Xutong Zhao, Yangchen Pan, Chenjun Xiao, Sarath Chandar and Janarthanan Rajendran
Conference on Uncertainty in Artificial Intelligence (UAI), 2023.
#RL
[pmlr], [arXiv]
- Multi-Agent Reinforcement Learning for Fast-Timescale Demand Response of Residential Loads
Vincent Mai, Philippe Maisonneuve, Tianyu Zhang, Hadi Nekoei, Liam Paull and Antoine Lesage-Landry
International Conference on Autonomous Agents and Multiagent Systems (AAMAS), 2023.
#RL
[arXiv]
2022
- Combining Reinforcement Learning and Constraint Programming for Sequence-Generation Tasks with Hard Constraints
Daphné Lafleur, Sarath Chandar and Gilles Pesant
International Conference on Principles and Practice of Constraint Programming (CP), 2022.
#RL
[paper]
- Towards Evaluating Adaptivity of Model-Based Reinforcement Learning Methods
Yi Wan*, Ali Rahimi-Kalahroudi*, Janarthanan Rajendran, Ida Momennejad, Sarath Chandar and Harm van Seijen
International Conference on Machine Learning (ICML), 2022.
#RL
[pmlr], [arXiv], [code]
2021
- Continuous Coordination As a Realistic Scenario for Lifelong Learning
Hadi Nekoei, Akilesh Badrinaaraayanan, Aaron Courville and Sarath Chandar
International Conference on Machine Learning (ICML), 2021.
#RL
[pmlr], [arXiv], [code]
- Towered Actor Critic for Handling Multiple Action Types in Reinforcement Learning For Drug Discovery
Sai Krishna Gottipati, Yashaswi Pathak, Boris Sattarov, Sahir, Rohan Nuttall, Mohammad Amini, Matthew E. Taylor and Sarath Chandar
AAAI Conference on Artificial Intelligence (AAAI), 2021.
#RL
[aaai]
2020
- The LoCA Regret: A Consistent Metric to Evaluate Model-Based Behavior in Reinforcement Learning
Harm van Seijen, Hadi Nekoei, Evan Racah and Sarath Chandar
Conference on Neural Information Processing Systems (NeurIPS), 2020.
#RL
[neurips], [arXiv], [code]
- Learning To Navigate The Synthetically Accessible Chemical Space Using Reinforcement Learning
Sai Krishna Gottipati*, Boris Sattarov*, Sufeng Niu, Yashaswi Pathak, Haoran Wei, Shengchao Liu, Karam MJ Thomas, Simon Blackburn, Connor W Coley, Jian Tang, Sarath Chandar and Yoshua Bengio
International Conference on Machine Learning (ICML), 2020.
#RL
[pmlr], [arXiv]