In the "deepq" document,three experiments are provided,including one atari game "Pong".The deep-Q algorithm works pretty good in "Pong",but when I change Pong to another atari pixel game "AirRaid",the algorithm can't converge.Specifically,I just change env = gym.make("PongNoFrameskip-v4")
to env = gym.make("AirRaidNoFrameskip-v4")
in "baselines/deepq/experiments/train_pong.py" without any other modification to the code and hyperparameters.And the algorithm can't converge even after 1400 episodes.I don't know why.
Ja1r0 changed the title to "Deepq algorithm can't converge when change to another atari game" on Dec 16, 2017
According to the original DQN paper, DQN does not work well on games with a very complex state space, such as Montezuma's Revenge. I'm not sure how complex Air Raid is, since I couldn't find that game in the paper... Maybe you can take a look at how complex this game is.
In the "deepq" document,three experiments are provided,including one atari game "Pong".The deep-Q algorithm works pretty good in "Pong",but when I change Pong to another atari pixel game "AirRaid",the algorithm can't converge.Specifically,I just change
env = gym.make("PongNoFrameskip-v4")
to
env = gym.make("AirRaidNoFrameskip-v4")
in "baselines/deepq/experiments/train_pong.py" without any other modification to the code and hyperparameters.And the algorithm can't converge even after 1400 episodes.I don't know why.
The text was updated successfully, but these errors were encountered: