This is the official code for our paper, "Learning Successor Features the Simple Way," accepted at NeurIPS 2024.
The authors are Raymond Chua, Arna Ghosh, Christos Kaplanis, Blake Richards, and Doina Precup.
This repository is a work in progress. More details on the repo will be added soon. (Last updated: 4 Nov 2024)
This repository contains the code for the experiments in the paper. The code is written in PyTorch and is adapted from the Unsupervised Reinforcement Learning Benchmark (URLB) repository.
In the paper, we present the architecture for the discrete-action setting. Here, we provide code for the continuous-action setting, which requires some modifications to that architecture. The figure below shows the continuous-action architecture.
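To make the core idea concrete, below is a minimal, hypothetical NumPy sketch of the successor-feature (SF) recursion the agents build on: the SF vector ψ(s, a) accumulates the discounted sum of future state features φ(s), and reward weights w then give Q-values via Q(s, a) = ψ(s, a)·w. The names (`sf_td_update`, the tabular setup, and the one-hot features) are illustrative assumptions, not the repository's actual API.

```python
import numpy as np

# Illustrative tabular setup (not the repo's actual code):
# psi[s, a] estimates the discounted sum of future features phi.
n_states, n_actions, feat_dim = 3, 2, 3
gamma, alpha = 0.9, 0.5

# One-hot state features, so psi converges to discounted state occupancies.
phi = np.eye(n_states)

# Successor features, initialized to zero.
psi = np.zeros((n_states, n_actions, feat_dim))

def sf_td_update(psi, s, a, s_next, a_next):
    """One TD(0) update of the successor features for the pair (s, a)."""
    target = phi[s] + gamma * psi[s_next, a_next]
    psi[s, a] += alpha * (target - psi[s, a])
    return psi

# Given reward weights w with r(s) = phi(s) @ w, Q-values follow directly:
#   Q(s, a) = psi[s, a] @ w
```

In the continuous-action setting, the tabular ψ above is replaced by a neural network conditioned on the action, but the TD target has the same form.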
The repository is structured as follows:
| Folder | Description |
|---|---|
| agent | Implementations of the agents |
| custom_dmc_tasks | Custom DeepMind Control (DMC) tasks |
TBD