EnvPool advertisement #164
Btw, could you please use the newest version (0.6.1.post1) to verify the final reward on Ant-v3 and Humanoid-v3? Some changes have been made, and I'm not sure whether they break consistency.
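Below is a minimal sketch, not from this thread, of what such a consistency check could look like with EnvPool's gym-compatible API: roll out a policy in Ant-v3 (or Humanoid-v3) and report the accumulated episode return. The random policy, env count, and loop length are placeholders for evaluating the trained rl_games agent.

```python
import numpy as np
import envpool

num_envs = 8
env = envpool.make("Ant-v3", env_type="gym", num_envs=num_envs, seed=0)

obs = env.reset()
returns = np.zeros(num_envs)
finished = np.zeros(num_envs, dtype=bool)
while not finished.all():
    # Placeholder for policy(obs): batched random actions from the env's action space.
    actions = np.stack([env.action_space.sample() for _ in range(num_envs)])
    obs, rew, done, info = env.step(actions)  # 4-tuple gym API, as in envpool 0.6.x
    returns += rew * ~finished                # stop accumulating once an env is done
    finished |= done
print("mean episode return:", returns.mean())
```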
@Trinkle23897 the Breakout-v3 typo was fixed. Btw, I got this MuJoCo Humanoid result by training on my laptop with an 11th Gen Intel® Core™ i9-11980HK @ 2.60GHz × 16 and an RTX 3080; it was not even a desktop. Training with envpool was extremely fast. I just started training MuJoCo Humanoid with the latest envpool. You updated it really fast to the newly open-sourced MuJoCo version!
Great! Would you like to be one of the authors of our paper?
Denys and I would be happy to be co-authors of the paper with you.
Btw, you can join our Discord too: https://discord.gg/hnYRq7DsQh
Another request: I'm trying to build envpool from the MuJoCo source code. However, there are some small precision issues (google-deepmind/mujoco#294). The corresponding wheels are at https://github.com/sail-sg/envpool/actions/runs/2381544251
@Trinkle23897 I can test Ant and Humanoid after finishing the ongoing experiments. Btw, do you plan to support the dm_control multi-agent envs: https://github.com/deepmind/dm_control/blob/main/dm_control/locomotion/soccer/README.md ? If yes, we can run self-play experiments with rl_games and envpool as well, starting with the simplest env.
@ViktorM Yes, envpool plans to support all tasks in dm_control.locomotion, and the multi-agent soccer task will be supported too. It can be one of the multi-agent envs that envpool supports.
@Benjamin-eecs thank you! Looking forward to soccer with envpool. We already have some interesting results with the simplest boxhead 1x1 version. With the envpool speedup we'll be able to train 2x2 and maybe even the ant version!
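For reference, a minimal sketch of loading the dm_control soccer task linked above, following its README; the team_size and time_limit values here are illustrative.

```python
from dm_control.locomotion import soccer as dm_soccer

env = dm_soccer.load(team_size=1, time_limit=10.0)  # 1x1 setting with the default walker
timestep = env.reset()
action_specs = env.action_spec()  # one action spec per player
print(len(action_specs), "players;", action_specs[0].shape, "action dims each")
```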
Hi, I just came across this repo. I'm quite surprised that you use envpool to achieve 2-min Pong and 20-min Breakout, nice work!
I'm wondering if you'd like to open a pull request at EnvPool to link to your result (like the CleanRL ones), and whether it is possible for us to include your experiment results in our upcoming arXiv paper. Also, it would be great if you could produce more amazing results based on the EnvPool MuJoCo tasks (which have been aligned with gym's implementation and also get a free speedup). Thanks!
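As an illustration of where that free speedup comes from, here is a hedged sketch (the task name, env counts, and random policy are placeholders) of EnvPool's asynchronous mode, where each iteration steps only the subset of environments that are ready:

```python
import numpy as np
import envpool

# Async mode: num_envs environments run in C++ worker threads; recv() returns a
# batch of batch_size envs that have finished their last step, and send()
# dispatches the next actions only to those envs.
env = envpool.make("HalfCheetah-v3", env_type="gym", num_envs=64, batch_size=16)
env.async_reset()
for _ in range(1000):
    obs, rew, done, info = env.recv()
    env_id = info["env_id"]
    # Placeholder policy: random actions for the envs in this batch.
    actions = np.stack([env.action_space.sample() for _ in env_id])
    env.send(actions, env_id)
```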
BTW, isn't it a typo?
https://github.com/Denys88/rl_games/blame/master/docs/ATARI_ENVPOOL.md#L9