python==3.6 is required due to the unityagents dependency. Run Navigation.ipynb to train the model.
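For reference, here is a minimal sketch of how the notebook can load and reset the Banana environment with unityagents. The executable path inside Banana_Windows_x86_64/ is an assumption and may differ on your machine.

```python
from unityagents import UnityEnvironment

# Assumed location of the Unity executable inside Banana_Windows_x86_64/
env = UnityEnvironment(file_name="Banana_Windows_x86_64/Banana.exe")

# The Banana environment exposes a single "brain" that the agent controls
brain_name = env.brain_names[0]
brain = env.brains[brain_name]

# Reset in training mode and inspect the state and action spaces
env_info = env.reset(train_mode=True)[brain_name]
state = env_info.vector_observations[0]
print("State size:", len(state))                       # 37
print("Action size:", brain.vector_action_space_size)  # 4
```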
- Banana_Windows_x86_64/: directory that stores the Unity environment
- Navigation.ipynb: notebook used to train the agent
- model.py: neural network that outputs action Q-values for a given state vector
- agent.py: agent that interacts with the environment and implements Q-Learning
- checkpoint.pth: stores the trained neural network weights from training
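As an illustrative sketch of what model.py provides (the hidden-layer sizes here are assumptions, not necessarily the exact architecture used), the Q-network maps the 37-dimensional state to one Q-value per action, and the saved weights can be restored from checkpoint.pth:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class QNetwork(nn.Module):
    """Maps a 37-dimensional state vector to Q-values for the 4 actions."""

    def __init__(self, state_size=37, action_size=4, hidden=64):
        super().__init__()
        self.fc1 = nn.Linear(state_size, hidden)
        self.fc2 = nn.Linear(hidden, hidden)
        self.fc3 = nn.Linear(hidden, action_size)

    def forward(self, state):
        x = F.relu(self.fc1(state))
        x = F.relu(self.fc2(x))
        return self.fc3(x)  # one Q-value per action

# Restore trained weights (only works if the checkpoint matches this architecture)
qnetwork = QNetwork()
qnetwork.load_state_dict(torch.load("checkpoint.pth"))
qnetwork.eval()
```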
A reward of +1 is provided for collecting a yellow banana, and a reward of -1 is provided for collecting a blue banana. Thus, the goal of the agent is to collect as many yellow bananas as possible while avoiding blue bananas.
The state space has 37 dimensions and contains the agent's velocity, along with ray-based perception of objects around the agent's forward direction. Given this information, the agent has to learn how to best select actions. Four discrete actions are available, corresponding to:
- 0 - move forward.
- 1 - move backward.
- 2 - turn left.
- 3 - turn right.
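To make the action-selection step concrete, here is a hedged sketch of epsilon-greedy selection over the Q-values. The epsilon value and the qnetwork object (from the sketch above) are assumptions for illustration, not necessarily what agent.py does internally.

```python
import random
import torch

def act(qnetwork, state, eps=0.05):
    """Pick one of the 4 actions epsilon-greedily from the Q-network's estimates."""
    # With probability eps, explore by choosing a random action
    if random.random() < eps:
        return random.randrange(4)
    # Otherwise exploit: choose the action with the highest predicted Q-value
    state_t = torch.from_numpy(state).float().unsqueeze(0)  # state is a (37,) numpy array
    with torch.no_grad():
        q_values = qnetwork(state_t)
    return int(torch.argmax(q_values).item())
```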
The task is episodic; to solve the environment, the agent must achieve an average score of +13 over 100 consecutive episodes.
The agent solves the environment in fewer than 400 episodes.
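For illustration, a sketch of how the solve criterion can be checked during training, using a rolling window of the most recent 100 episode scores (the per-episode scores themselves come from the training loop in Navigation.ipynb):

```python
from collections import deque
import numpy as np

scores_window = deque(maxlen=100)  # scores of the most recent 100 episodes

def check_solved(episode, score):
    """Record an episode score and report whether the +13 criterion is met."""
    scores_window.append(score)
    solved = len(scores_window) == 100 and np.mean(scores_window) >= 13.0
    if solved:
        print(f"Environment solved in {episode} episodes; "
              f"average score: {np.mean(scores_window):.2f}")
    return solved
```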