Default hyperparameters for run_atari.py (using P-DDDQN) fail with Pong and Breakout (log files attached)
#431
I too got similar results for Breakout with the default parameters.
Thanks @vpj. Not sure if anyone on the team has been able to check this. Hopefully this will be addressed in their code refactor, which I think they are doing behind the scenes. In the meantime I'm using an older version of the code, which gets the DQN-based algorithms to the published literature results.
Hi @DanielTakeshi, I ran into the same problems. Can you tell me which version of the code you are using now?
Thanks @DanielTakeshi, does this version work well and reach the scores published in the paper?
Yes, that version works well. I've reproduced publishable scores on all 20 games I tried.
@DanielTakeshi Seeing that commit, I'm wondering if this should be updated to match the changes in that commit?
@DanielTakeshi Ok, I closed the issue. How did you solve this problem? The old version of the code does not seem to have this problem. Do you know what the problem is?
@Sudsakorn123 Unfortunately I do not know. I went through the current version of the code (the one I can't get to train) line by line, and also checked the preprocessing of images, but didn't find anything unusual.
Oh, I just saw an older issue where users were having some similar issues. Unfortunately it seems like nothing got resolved there.
I met the same issue and finally solved it via this pull request: Fix dtype for wrapper observation spaces.
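For context, the kind of change that pull request describes (making wrapper observation spaces declare the right dtype) might look roughly like the following sketch. The wrapper name and details here are hypothetical illustrations, not the actual baselines code:

```python
import numpy as np
import gym
from gym import spaces

class UInt8FrameWrapper(gym.ObservationWrapper):
    """Hypothetical wrapper: declare the observation space with the same
    dtype as the frames actually returned, so downstream code that inspects
    the space (e.g. replay buffers or /255 scaling) behaves consistently."""

    def __init__(self, env):
        super().__init__(env)
        old_space = env.observation_space
        # The fix is essentially to set dtype=np.uint8 here instead of
        # leaving it as a float type that doesn't match the real frames.
        self.observation_space = spaces.Box(
            low=0, high=255, shape=old_space.shape, dtype=np.uint8)

    def observation(self, obs):
        return np.asarray(obs, dtype=np.uint8)
```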
@skpenn Good news, looks like the pull request you linked is 'effectively merged'!
I believe the issue has not been solved yet. I tried training the deepq model on Breakout and Pong with the default hyperparameters, and even after 40M time steps the average episode return wouldn't be greater than 0.4. I tried tuning the hyperparameters, but it didn't really help.
@Michalos88 really? That's unfortunate. For hyperparameters I strongly suggest sticking with the defaults here (or with what the DeepMind paper did), since it's too expensive for us to keep tweaking them. The repository here will eventually, I think, get results standardized for Atari and DQN-based models. I'll run a few trials on my end as well (maybe next week) to see if the default DQN parameters can make progress.
Thanks, @DanielTakeshi. Yeah, let us know next week!
@Michalos88 @skpenn @vpj @uotter Unfortunately it looks like the refactored code still runs into the same issue. The refactoring is helpful for making the interface uniform, but I am guessing there are still some issues with the core DQN algorithm here. I'll split this into three parts.

First Attempt

Using commit 4402b8e of baselines and the same machine as described in my earlier message here, I ran the PDD-DQN command that the README tells us to run. Unfortunately I get -20.7. The logs: log.txt. Granted, these are with the default hyperparameters and just 1M time steps by default; I think one needs around 10M, for instance, and then to make the buffer size larger.

Second Attempt

I then tried hyperparameters similar to those I used with an older baselines commit (roughly one year ago), with which PDD-DQN easily gets at least +20 on Pong. The 5e7 time steps and 50k buffer size put it more in line with what I think the older baselines code used (and which the Nature paper may have used). The following morning (after running for about 12 hours) I noticed that after about 15M steps the scores were still stuck at -21; PDD-DQN still doesn't seem to learn anything. I killed the script to avoid having to run 35M more steps. Here are the logs I have: log.txt. Note that the learning seems to collapse: early on we get plenty of -20s and -19s, which I'd expect, and then later it's almost always -21.

Observing the Benchmarks

Note that the Atari benchmarks they use show that DQN gets a score of -7 on Pong, which is really bad but better than what I am getting here. (They also show Breakout with a score of just 1...) I am not sure what command line arguments they are using for these, but maybe it's hidden somewhere in the code which generates the benchmarks?

@pzhokhov Since this is a fairly critical issue, is there any chance the README can be adjusted with a note about this? I think it might help save some time for those who are hoping to use the DQN-based algorithms. In the meantime I can help try to figure out what the issue is, and I will also keep using my older version of baselines (from a year ago), which has the working DQN algorithms.
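For reference, a rough sketch of how the "second attempt" settings (5e7 time steps, 50k replay buffer) could be expressed as a call to deepq.learn. The exact command used above was not preserved in this thread, so the parameter names follow the baselines deepq API as I understand it, and the remaining values (learning rate, exploration schedule, etc.) are typical DQN settings rather than numbers taken from the thread:

```python
from baselines import deepq
from baselines.common.atari_wrappers import make_atari, wrap_deepmind

# Standard Atari preprocessing; scale=False so no extra /255 happens in the wrapper.
env = wrap_deepmind(make_atari("PongNoFrameskip-v4"), frame_stack=True, scale=False)

deepq.learn(
    env,
    network="conv_only",          # CNN model from baselines.common.models
    lr=1e-4,                      # assumed, not taken from the thread
    total_timesteps=int(5e7),     # ~50M steps, as in the second attempt
    buffer_size=50000,            # 50k replay buffer, as in the second attempt
    exploration_final_eps=0.01,   # assumed
    train_freq=4,                 # assumed
    learning_starts=10000,        # assumed
    target_network_update_freq=1000,  # assumed
    gamma=0.99,
    prioritized_replay=True,      # PDD-DQN: prioritized replay + dueling + double Q
    dueling=True,
)
```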
Added a note to the README.
Thanks @pzhokhov. In case it helps, you can see in an earlier message the commit I used which has DQN working well (4993286). More precisely, for the commit I listed earlier, I literally tried training 24 Atari games using PDD-DQN and got good scores on all of them with (I think) 5e7 time steps. The commit after that seemed to be when things changed, and it involved adjusting some processing of the game frames, so that could be one area to check. I checked the source code, but the core DQN seemed to be implemented correctly (at least as of June 2018, and I don't think it has changed since then), and I couldn't find any obvious errors (e.g., not scaling the frame pixels). Do you have any other suspicions about what could be happening? I have some spare cycles that I could spend on testing; for efficiency, I just want to make sure I don't duplicate tests that others are already running. I should also add that I ran some A2C tests on Pong as of today's commit and got good scores (20+) in less than 2e7 time steps with 2, 4, 8, and 16 environments, so that removes one source of uncertainty.
@DanielTakeshi just to be sure, does PPO2 on current master get good scores?
Thanks @DanielTakeshi! My strongest suspect would be hyperparameters, but your investigation shows that's not the case... Another possible area of failure is wrong datatype casts, for example if we accidentally convert to int after dividing by 255 somewhere. I'll look into the diff between commits shortly (today/tomorrow); if nothing jumps out, then we'll have to go through the exceedingly painstaking exercise of starting with the same weights and ensuring the updates are the same. It is really not fun, so hopefully it does not come to that :)
So, nothing in the commit changes jumped out at me as an obvious source of error; however, I narrowed down the commits between which the breaking change happened.
I think it is the scale=True option passed to wrap_deepmind, which leads to dividing inputs by 255*255 instead of 255... running tests now.
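To make the suspected failure mode concrete (an illustrative sketch, not the actual baselines code): if the wrapper already scales frames to [0, 1] and the network input pipeline divides by 255 again on the assumption that it receives raw uint8 frames, the observations end up roughly 255 times smaller than intended:

```python
import numpy as np

frame = np.random.randint(0, 256, size=(84, 84, 4), dtype=np.uint8)

# With scale=True, the wrapper already converts pixels to floats in [0, 1].
scaled_by_wrapper = frame.astype(np.float32) / 255.0

# If the Q-network input pipeline then divides by 255 again (expecting raw
# uint8 frames), inputs are effectively divided by 255 * 255.
network_input = scaled_by_wrapper / 255.0

print(scaled_by_wrapper.max())  # ~1.0
print(network_input.max())      # ~0.004, which crushes the visual signal
```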
Confirmed. Here's a fixing PR: #632; I'll update the benchmark results shortly.
Whoa, this seems like great news! @pzhokhov Thanks for this; I am eager to see the benchmarks and to try it out myself. @andytwigg I haven't confirmed PPO2; have you run it yourself? If PPO2 is implemented in a similar manner to A2C, then it takes a few hours on a decent workstation, and A2C is getting reasonable results for me (I get scores similar to those in the appendix of Self-Imitation Learning, ICML 2018).
@andytwigg I can confirm that PPO2 produces the expected results. @DanielTakeshi @pzhokhov Thanks for handling this issue! :)
And by the way, if things are looking good, the note that was added to the README can be removed.
Was this solved? I'm still not able to obtain good scores on Pong using P-DDDQN with the default hyperparameters.
@JulianoLagana it was solved.
Thanks for the quick reply, @DanielTakeshi. Three days ago I ran train_pong.py (multiple times, with different seeds), and only one out of five runs actually managed to get a score (not an average score) higher than zero. In my free time I'll investigate further and try to post a minimal example here.
Unfortunately, with 6d1c6c7 I still can't reproduce the benchmark result on Breakout with the command I used.
I'm trying to run the same version used for the benchmark.
Enduro-v0 with 6d1c6c7 is good.
Enduro-v0 with the latest master version is good, too. Is the problem the Breakout env?
Have you resolved the problem with Breakout? I cannot reproduce the result; I only get a score of around 16.
The default hyperparameters of baselines/baselines/deepq/experiments/run_atari.py, which presumably is the script we should be using for DQN-based models, fail to gain any noticeable reward on both Breakout and Pong. I've attached log files and the steps to reproduce below; the main reason I'm filing this is that it probably makes sense for the provided scripts to have working default hyperparameters. Or, alternatively, perhaps list the ones that do work somewhere? Upon reading run_atari.py, it seems like the number of steps is a bit low and the replay buffer should be 10x larger, but I don't think that will fix the issue, since Pong should be able to learn quickly with this kind of setup.

I know this is probably not the top priority right now, but in theory this is easy to fix (just run it with the correct hyperparameters), and it would be great for users, since running even 10 million steps (the current default) can take over 10 hours on a decent personal workstation. If you're in the process of refactoring this code, is there any chance you can take this feedback into account? Thank you!

Steps to reproduce:

cd baselines/baselines/deepq/experiments/
python run_atari.py

with either PongNoFrameskip-v4 or BreakoutNoFrameskip-v4 as the --env argument. I kept all other parameters at their default values, so this was prioritized dueling double DQN (P-DDDQN).

By default, the logger in baselines will create log.txt, progress.csv, and monitor.csv files that contain information about the training runs. Here are the Breakout and Pong log files:

breakout_log.txt
pong_log.txt

Since GitHub doesn't upload csv files, here are the monitor.csv files for Breakout and then Pong:

https://www.dropbox.com/s/ibl8lvub2igr9kw/breakout_monitor.csv?dl=0
https://www.dropbox.com/s/yuf3din6yjb2swl/pong_monitor.csv?dl=0

Finally, here are the progress.csv files for Breakout and then Pong:

https://www.dropbox.com/s/79emijmnsdcjm37/breakout_progress.csv?dl=0
https://www.dropbox.com/s/b817wnlyyyriti9/pong_progress.csv?dl=0
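For anyone inspecting the attached files, here is a small sketch of one way to read a baselines monitor.csv and look at the per-episode rewards. It assumes the usual Monitor format (a one-line JSON header followed by columns 'r', 'l', 't' for reward, episode length, and time); the file name is simply the attachment above:

```python
import pandas as pd

# baselines Monitor files begin with a one-line JSON header, hence skiprows=1.
df = pd.read_csv("pong_monitor.csv", skiprows=1)

# 'r' is the per-episode reward; a rolling mean shows whether learning is progressing.
print(df[["r", "l"]].describe())
print(df["r"].rolling(100).mean().tail())
```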