Deepq algorithm can't converge when change to another atari game #234

Open
Ja1r0 opened this issue Dec 16, 2017 · 2 comments

Comments

Ja1r0 commented Dec 16, 2017

In the "deepq" document,three experiments are provided,including one atari game "Pong".The deep-Q algorithm works pretty good in "Pong",but when I change Pong to another atari pixel game "AirRaid",the algorithm can't converge.Specifically,I just change
env = gym.make("PongNoFrameskip-v4")
to
env = gym.make("AirRaidNoFrameskip-v4")
in "baselines/deepq/experiments/train_pong.py" without any other modification to the code and hyperparameters.And the algorithm can't converge even after 1400 episodes.I don't know why.

Ja1r0 changed the title from "Deepq algorithm can't converge when change another atari game" to "Deepq algorithm can't converge when change to another atari game" on Dec 16, 2017
cxxgtxy (Contributor) commented Jan 12, 2018

Good question. Indeed, you should use the training script in the experiments/atari directory instead.
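
For anyone hitting the same thing: the Atari experiment referenced above builds the environment with the full DQN preprocessing stack rather than a bare gym.make. A rough sketch of what that stack looks like, using the make_atari / wrap_deepmind helpers from baselines.common.atari_wrappers (the exact entry point and arguments of the experiments/atari training script may differ):

```python
from baselines.common.atari_wrappers import make_atari, wrap_deepmind

# make_atari applies the no-op reset and max-and-skip (frame skipping)
# wrappers; wrap_deepmind adds episodic-life handling, reward clipping,
# 84x84 grayscale frames, frame stacking and float scaling -- the
# preprocessing the published DQN results rely on.
env = make_atari("AirRaidNoFrameskip-v4")
env = wrap_deepmind(env, frame_stack=True, scale=True)
```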

zwfcrazy commented Mar 22, 2018

According to the original DQN paper, DQN does not work well on games with a very complex state space, such as Montezuma's Revenge. I'm not sure how complex Air Raid is; I didn't find that game in the paper. Maybe you can take a look at how complex this game actually is.
