Cannot reproduce the benchmark results of DQN on Breakout #672
I use the command below to train DQN on the Breakout environment:

```
python3 -m baselines.run --alg=deepq --env=BreakoutNoFrameskip-v4 --num_timesteps=10000000
```

At the end of training I only get a 100-episode mean reward of about 14-15. How can I reproduce the benchmark results?

Comments
To get DQN to work you need to adjust hyperparameters away from the Baselines defaults. I got Breakout to work several times with different random seeds, all within the past week from master. Here is one example of a training curve from a code base I'm testing with (the top-left panel is probably the one you want: the past-100-episode reward). Off the top of my head, the main things to change are the exploration schedule and the Adam epsilon (discussed below).
edit: this is PDD-DQN (prioritized, dueling, double DQN), just to be clear. I ran for 2.5e7 steps.
@DanielTakeshi I copied your hyperparameters except for the exploration method and the Adam epsilon, but I still cannot get the benchmark results; I only reach about 17.0 after 1e7 training steps. I don't think the Adam epsilon matters here, so maybe the problem is the default exploration schedule? Do you have any ideas, or can you point out which exploration schedule I should use? Thanks!
@bywbilly Did you use the exploration schedule I posted earlier? I probably should have made this clear: the exploration schedule is the one shown in the lower-left plot of my figure above. Here it is in my actual code:
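(The original snippet did not survive the page scrape. As a stand-in, here is a minimal sketch of an annealed epsilon-greedy schedule using Baselines' `PiecewiseSchedule`; the breakpoint values are assumptions for illustration, not the ones from the lost code.)

```python
# Reconstruction, not the verbatim original: an annealed epsilon-greedy
# schedule built with Baselines' PiecewiseSchedule. The breakpoints below
# are illustrative assumptions, not the exact values from the lost snippet.
from baselines.common.schedules import PiecewiseSchedule

num_steps = int(2.5e7)  # matches the 2.5e7 steps mentioned above
exploration = PiecewiseSchedule(
    endpoints=[
        (0, 1.0),                # start fully random
        (int(1e6), 0.1),         # anneal to 10% over the first 1M steps
        (num_steps // 2, 0.01),  # then to 1% by the halfway point
    ],
    outside_value=0.01,          # hold at 1% for the rest of training
)

# epsilon at a few points along training
for t in (0, 500_000, 1_000_000, num_steps // 2, num_steps):
    print(t, exploration.value(t))
```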
@DanielTakeshi Oh, I didn't notice that! Thanks for pointing it out; I am going to try that!
Hi @DanielTakeshi, I copied over the hyperparameters and the exploration schedule mentioned above, and I am running the experiments with this baselines commit. Here is a list of the hyperparameters being used (I modified defaults.py) for PDD-DQN. For the vanilla DQN agent I used the same hyperparameters but set dueling=False and prioritized_replay=False in defaults.py, and set double_q to False in build_graph.py (see the sketch after this comment). As mentioned in the readme, I also tried to reproduce the results with commit (7bfbcf1), without changing the hyperparameters, but I was not able to reproduce them. It would be really helpful if you could let me know whether I am doing anything wrong, and whether any other hyperparameter combination works better. Thanks! Some results with the changed hyperparameters and code commit are attached.
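(For reference, a sketch of what the modified defaults.py entry for the vanilla-DQN run above might look like. The values mirror the stock Atari defaults on recent master, but they have changed over time, so verify them against your own checkout.)

```python
# Sketch of baselines/deepq/defaults.py for the vanilla-DQN run described
# above, with the two flags flipped. Values mirror the stock Atari defaults
# on recent master; double-check them against your checkout.
def atari():
    return dict(
        network='conv_only',
        lr=1e-4,
        buffer_size=10000,
        exploration_fraction=0.1,
        exploration_final_eps=0.01,
        train_freq=4,
        learning_starts=10000,
        target_network_update_freq=1000,
        gamma=0.99,
        prioritized_replay=False,  # True for PDD-DQN
        dueling=False,             # True for PDD-DQN
    )

# double_q is not exposed here; as noted above it has to be set to False
# in build_graph.py (build_train's double_q argument) for vanilla DQN.
```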
Not sure why this is closed. I cannot reproduce DQN on Breakout either, and no resolution has been proposed in this thread. I also tried the new parameters proposed by @DanielTakeshi, including the new exploration schedule. None of these converge.
@DanielTakeshi - yes, but the point of a baseline is that it is reproducible by anyone :). We just need a simple script that runs all the baselines periodically, puts the results in a Markdown file, and pushes it to the repo.
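(Something like this hypothetical sketch would do it; the file name, table layout, and success criterion are all assumptions for illustration, not an existing Baselines tool.)

```python
# Hypothetical sketch of the script proposed above: run a fixed set of
# baselines, record whether each run finished, and append one Markdown
# table row per run. Nothing here is an existing Baselines utility.
import datetime
import subprocess

RUNS = [
    ("deepq", "BreakoutNoFrameskip-v4", int(1e7)),
    # ...add the other algorithm/environment pairs here
]

with open("benchmarks.md", "a") as out:
    out.write(f"\n## {datetime.date.today()}\n\n")
    out.write("| alg | env | steps | status |\n|---|---|---|---|\n")
    for alg, env, steps in RUNS:
        cmd = ["python3", "-m", "baselines.run",
               f"--alg={alg}", f"--env={env}", f"--num_timesteps={steps}"]
        status = "ok" if subprocess.run(cmd).returncode == 0 else "failed"
        out.write(f"| {alg} | {env} | {steps} | {status} |\n")
```

A real version would also parse the monitor logs for the final 100-episode mean reward and commit and push the updated file.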