diff --git a/README.md b/README.md index 8354d14..2e34468 100644 --- a/README.md +++ b/README.md @@ -15,6 +15,8 @@ RLzoo is a collection of the most practical reinforcement learning algorithms, frameworks and applications. It is implemented with Tensorflow 2.0 and the API of neural network layers in TensorLayer 2, to provide a hands-on fast-developing approach for reinforcement learning practices and benchmarks. It supports basic toy-tests like [OpenAI Gym](https://gym.openai.com/) and [DeepMind Control Suite](https://github.com/deepmind/dm_control) with very simple configurations. Moreover, RLzoo supports the robot learning benchmark environment [RLBench](https://github.com/stepjam/RLBench) based on the [Vrep](http://www.coppeliarobotics.com/)/[Pyrep](https://github.com/stepjam/PyRep) simulator. Other large-scale distributed training frameworks for more realistic scenarios with [Unity 3D](https://github.com/Unity-Technologies/ml-agents), [Mujoco](http://www.mujoco.org/), [Bullet Physics](https://github.com/bulletphysics/bullet3), etc., will be supported in the future. A [Springer textbook](https://deepreinforcementlearningbook.org) is also provided; you can get the free PDF if your institution has a Springer license. +Different from RLzoo, which provides simple usage through **high-level APIs**, we also have an [RL tutorial](https://github.com/tensorlayer/tensorlayer/tree/master/examples/reinforcement_learning) that aims to keep each algorithm implementation simple, transparent and straightforward with **low-level APIs**; this not only benefits newcomers to reinforcement learning, but also makes it convenient for experienced researchers to quickly test their new ideas. + @@ -25,9 +27,6 @@ RLzoo is a collection of the most practical reinforcement learning algorithms, f - - - We aim to make it easy to configure for all components within RL, including replacing the networks, optimizers, etc. We also provide automatically adaptive policies and value functions in the common functions: for the observation space, the vector state or the raw-pixel (image) state are supported automatically according to the shape of the space; for the action space, the discrete action or continuous action are supported automatically according to the shape of the space as well. The deterministic or stochastic property of policy needs to be chosen according to each algorithm. Some environments with raw-pixel based observation (e.g. Atari, RLBench) may be hard to train, be patient and play around with the hyperparameters! **Table of contents:** @@ -44,14 +43,13 @@ We aim to make it easy to configure for all components within RL, including repl - [Credits](#credits) - [Citing](#citing) -Please note that this repository using RL algorithms with **high-level API**. So if you want to get familiar with each algorithm more quickly, please look at our **[RL tutorials](https://github.com/tensorlayer/tensorlayer/tree/master/examples/reinforcement_learning)** where each algorithm is implemented individually in a more straightforward manner. ## Status: Release We are currently open to any suggestions or pull requests from the community to make RLzoo a better repository. Given the scope of this project, we expect there could be some issues over the coming months after initial release. We will keep improving the potential problems and commit when significant changes are made in the future.
Current default hyperparameters for each algorithm and each environment may not be optimal, so you can play around with those hyperparameters to achieve the best performance. We will release a version with optimal hyperparameters and benchmark results for all algorithms in the future. -## Contents: -### Algorithms: +## Contents +### Algorithms | Algorithms | Papers | | --------------- | -------| @@ -76,7 +74,7 @@ the coming months after initial release. We will keep improving the potential pr |Twin Delayed DDPG (TD3)|[Addressing function approximation error in actor-critic methods. Fujimoto et al. 2018.](https://arxiv.org/pdf/1802.09477.pdf)| |Soft Actor-Critic (SAC)|[Soft actor-critic algorithms and applications. Haarnoja et al. 2018.](https://arxiv.org/abs/1812.05905)| -### Environments: +### Environments * [**OpenAI Gym**](https://gym.openai.com/): @@ -126,7 +124,7 @@ The supported configurations for RL algorithms with corresponding environments i | TRPO | Discrete/Continuous | Stochastic | On-policy | All | -## Prerequisites: +## Prerequisites * python >=3.5 (python 3.6 is needed if using dm_control) * tensorflow >= 2.0.0 or tensorflow-gpu >= 2.0.0a0 @@ -136,7 +134,7 @@ The supported configurations for RL algorithms with corresponding environments i * [Mujoco 2.0](http://www.mujoco.org/), [dm_control](https://github.com/deepmind/dm_control), [dm2gym](https://github.com/zuoxingdong/dm2gym) (if using DeepMind Control Suite environments) * Vrep, PyRep, RLBench (if using RLBench environments, follow [here](http://www.coppeliarobotics.com/downloads.html), [here](https://github.com/stepjam/PyRep) and [here](https://github.com/stepjam/RLBench)) -## Installation: +## Installation To install RLzoo package with key requirements: @@ -144,7 +142,7 @@ To install RLzoo package with key requirements: pip install rlzoo ``` -## Usage: +## Usage + +For usage, please check our [online documentation](https://rlzoo.readthedocs.io). ### 0. Quick Start Choose whatever environment and RL algorithm supported in RLzoo, and enjoy the game by running the following example in the root folder of the installed package: @@ -187,7 +187,6 @@ alg.learn(env=env, mode='train', render=False, **learn_params) alg.learn(env=env, mode='test', render=True, **learn_params) ``` -#### To Run: ```python # in the root folder of rlzoo package python run_rlzoo.py ``` @@ -199,7 +198,7 @@ python run_rlzoo.py RLzoo with **explicit configurations** means the configurations for learning, including parameter values for the algorithm and the learning process, the network structures used in the algorithms and the optimizers, etc., are explicitly displayed in the main script for running. And the main scripts for demonstration are under the folder of each algorithm; for example, `./rlzoo/algorithms/sac/run_sac.py` can be called with `python algorithms/sac/run_sac.py` from the folder `./rlzoo` to run the same learning process as in the implicit configurations above. -#### A Quick Example: +#### A Quick Example ```python import gym @@ -264,8 +263,6 @@ render: if true, visualize the environment model.learn(env, test_episodes=100, max_steps=200, mode='test', render=True) ``` -#### To Run: - In the package folder, we provide examples with explicit configurations for each algorithm.
```python @@ -276,15 +273,15 @@ python algorithms/<algorithm_name>/run_<algorithm_name>.py python algorithms/ac/run_ac.py ``` -## Troubleshooting: +## Troubleshooting * If you meet the error *'AttributeError: module 'tensorflow' has no attribute 'contrib''* when running the code after installing tensorflow-probability, try: `pip install --upgrade tf-nightly-2.0-preview tfp-nightly` * When trying to use RLBench environments, *'No module named rlbench'* can be caused by the RLBench package not being installed locally or by a mistake in the Python path. You should add `export PYTHONPATH=/home/quantumiracle/research/vrep/PyRep/RLBench` every time you try to run the learning script with the RLBench environment, or add it to your `~/.bashrc` file once and for all. * If you meet the error that the Qt platform is not loaded correctly when using DeepMind Control Suite environments, it's probably caused by your Ubuntu system not being version 14.04 or 16.04. Check [here](https://github.com/deepmind/dm_control). -## Credits: -Our contributors include: +## Credits +Our core contributors include: [Zihan Ding](https://github.com/quantumiracle?tab=repositories), [Tianyang Yu](https://github.com/Tokarev-TT-33), @@ -292,7 +289,7 @@ Our contributors include: [Hongming Zhang](https://github.com/initial-h), [Hao Dong](https://github.com/zsdonghao) -## Citing: +## Citing ``` @misc{RLzoo, @@ -305,6 +302,16 @@ Our contributors include: } ``` +## Other Resources +
+ +
+ +
+ +
+
+
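The README hunks above show only the tail end of the implicit-configuration quick start (the two `alg.learn` calls). For orientation, a minimal end-to-end sketch of that workflow is given below; the `build_env` and `call_default_params` helpers, their module paths, and the `'TD3'`/`'Pendulum-v0'` choices are assumptions for illustration rather than part of this patch, so treat `run_rlzoo.py` in the installed package as the authoritative version.

```python
# Hypothetical quick-start sketch with implicit (default) configurations.
# Only the two alg.learn(...) calls appear verbatim in the README hunk above;
# the imports and helper names below are assumed for illustration.
from rlzoo.common.env_wrappers import build_env      # assumed environment-builder helper
from rlzoo.common.utils import call_default_params   # assumed default-hyperparameter helper
from rlzoo.algorithms import TD3                      # assumed algorithm class

EnvName, EnvType, AlgName = 'Pendulum-v0', 'classic_control', 'TD3'

# Build the environment and fetch default algorithm/learning parameters for it.
env = build_env(EnvName, EnvType)
alg_params, learn_params = call_default_params(env, EnvType, AlgName)

# Construct the agent with the default hyperparameters, then train and test it.
alg = TD3(**alg_params)
alg.learn(env=env, mode='train', render=False, **learn_params)
alg.learn(env=env, mode='test', render=True, **learn_params)
```

The explicit-configuration scripts referenced above (e.g. `./rlzoo/algorithms/sac/run_sac.py`) follow the same train/test pattern, but spell out the networks, optimizers and hyperparameters in the script itself, as the README describes.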
diff --git a/docs/guide/quickstart.rst b/docs/guide/quickstart.rst index 2003fa2..382cee1 100644 --- a/docs/guide/quickstart.rst +++ b/docs/guide/quickstart.rst @@ -30,6 +30,5 @@ Open ``./run_rlzoo.py``: Run the example: .. code-block:: bash - :linenos: python run_rlzoo.py diff --git a/docs/index.rst b/docs/index.rst index 52d9d5a..c4f22e6 100644 --- a/docs/index.rst +++ b/docs/index.rst @@ -6,8 +6,6 @@ Reinforcement Learning Zoo for Simple Usage ============================================ - - .. image:: img/logo.png :width: 50 % :align: center @@ -44,11 +42,11 @@ RLzoo is a collection of the most practical reinforcement learning algorithms, f common/common .. toctree:: - :maxdepth: 2 + :maxdepth: 1 :caption: Other Resources - other/drlbook - + other/drl_book + other/drl_tutorial Contributing ================== @@ -63,17 +61,6 @@ Citation * :ref:`search` -Other Resources -================== - - -.. image:: http://deep-reinforcement-learning-book.github.io/assets/images/cover_v1.png - :width: 30 % - :target: https://deepreinforcementlearningbook.org -.. image:: http://download.broadview.com.cn/ScreenShow/180371146440fada4ad2 - :width: 30 % - :target: http://www.broadview.com.cn/book/5059 - .. image:: img/logo.png :width: 70 % :align: center diff --git a/docs/other/drl_book.rst b/docs/other/drl_book.rst new file mode 100644 index 0000000..bcf78fd --- /dev/null +++ b/docs/other/drl_book.rst @@ -0,0 +1,42 @@ +DRL Book +========== + +.. image:: http://deep-reinforcement-learning-book.github.io/assets/images/cover_v1.png + :width: 30 % + :align: center + :target: https://deepreinforcementlearningbook.org + +- You can get the `free PDF <https://deepreinforcementlearningbook.org>`__ if your institution has a Springer license. + +Deep reinforcement learning (DRL) relies on the intersection of reinforcement learning (RL) and deep learning (DL). It has been able to solve a wide range of complex decision-making tasks that were previously out of reach for a machine and famously contributed to the success of AlphaGo. Furthermore, it opens up numerous new applications in domains such as healthcare, robotics, smart grids, and finance. + +Divided into three main parts, this book provides a comprehensive and self-contained introduction to DRL. The first part introduces the foundations of DL, RL and widely used DRL methods and discusses their implementation. The second part covers selected DRL research topics, which are useful for those wanting to specialize in DRL research. To help readers gain a deep understanding of DRL and quickly apply the techniques in practice, the third part presents mass applications, such as the intelligent transportation system and learning to run, with detailed explanations. + +The book is intended for computer science students, both undergraduate and postgraduate, who would like to learn DRL from scratch, practice its implementation, and explore the research topics. This book also appeals to engineers and practitioners who do not have a strong machine learning background, but want to quickly understand how DRL works and use the techniques in their applications. + +Editors +-------- +- Hao Dong - Peking University +- Zihan Ding - Princeton University +- Shanghang Zhang - University of California, Berkeley + +Authors +-------- +- Hao Dong - Peking University +- Zihan Ding - Princeton University +- Shanghang Zhang - University of California, Berkeley +- Hang Yuan - Oxford University +- Hongming Zhang - Peking University +- Jingqing Zhang - Imperial College London +- Yanhua Huang - Xiaohongshu Technology Co.
+- Tianyang Yu - Nanchang University +- Huaqing Zhang - Google +- Ruitong Huang - Borealis AI + + +.. image:: https://deep-generative-models.github.io/files/web/water-bottom-min.png + :width: 100 % + :align: center + :target: https://github.com/tensorlayer/tensorlayer/edit/master/examples/reinforcement_learning + + diff --git a/docs/other/drl_tutorial.rst b/docs/other/drl_tutorial.rst new file mode 100644 index 0000000..472c7f7 --- /dev/null +++ b/docs/other/drl_tutorial.rst @@ -0,0 +1,18 @@ +DRL Tutorial +================================= + + +.. image:: https://tensorlayer.readthedocs.io/en/latest/_images/tl_transparent_logo.png + :width: 30 % + :align: center + :target: https://github.com/tensorlayer/tensorlayer/edit/master/examples/reinforcement_learning + + +Different from RLzoo, which provides simple usage through **high-level APIs**, the `RL tutorial <https://github.com/tensorlayer/tensorlayer/tree/master/examples/reinforcement_learning>`__ aims to keep each algorithm implementation simple, transparent and straightforward with **low-level APIs**; this not only benefits newcomers to reinforcement learning, but also makes it convenient for experienced researchers to quickly test their new ideas. + +.. image:: https://deep-generative-models.github.io/files/web/water-bottom-min.png + :width: 100 % + :align: center + :target: https://github.com/tensorlayer/tensorlayer/edit/master/examples/reinforcement_learning + + diff --git a/docs/other/drlbook.rst b/docs/other/drlbook.rst deleted file mode 100644 index 445de87..0000000 --- a/docs/other/drlbook.rst +++ /dev/null @@ -1,2 +0,0 @@ -Deep Reinforcement Learning Book -===================================