
# DQN vs Actor-Critic in the CartPole Environment

A comparison between a DQN agent and an Actor-Critic reinforcement learning agent in the CartPole environment.

The Jupyter notebook `AAS_Project.ipynb` contains the agent definitions and their training procedures. Running `AAS_Project.ipynb` generates the model weights for the two agents:

- DQN agent: `dqn_model_nt.h5`
- Actor-Critic agent: `ac_policy_nt.h5`, `ac_value_nt.h5`
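The exact architectures and training loops live in the notebook; as a rough orientation, here is a minimal Keras sketch of how agents like these are typically set up. The layer sizes are assumptions, not the notebook's actual code; only the weight filenames come from this repository.

```python
import gym
from tensorflow import keras

env = gym.make("CartPole-v1")
state_dim = env.observation_space.shape[0]   # 4 state variables
n_actions = env.action_space.n               # 2 discrete actions

# DQN: a single network mapping a state to one Q-value per action.
q_net = keras.Sequential([
    keras.layers.Dense(64, activation="relu", input_shape=(state_dim,)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(n_actions),            # linear Q-value outputs
])

# Actor-Critic: separate policy (actor) and value (critic) networks,
# matching the two weight files the notebook saves.
policy_net = keras.Sequential([
    keras.layers.Dense(64, activation="relu", input_shape=(state_dim,)),
    keras.layers.Dense(n_actions, activation="softmax"),  # action probabilities
])
value_net = keras.Sequential([
    keras.layers.Dense(64, activation="relu", input_shape=(state_dim,)),
    keras.layers.Dense(1),                    # scalar state-value estimate
])

# After training, the weights are saved under the filenames listed above.
q_net.save_weights("dqn_model_nt.h5")
policy_net.save_weights("ac_policy_nt.h5")
value_net.save_weights("ac_value_nt.h5")
```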

The Jupyter notebook `CartPole_Models_Executor.ipynb` loads the trained agents and runs them to visualize gameplay.
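For reference, a minimal sketch of what replaying a trained agent looks like, assuming the DQN architecture sketched above and the classic (pre-0.26) Gym API; newer Gym/Gymnasium releases change the `reset`/`step` signatures:

```python
import gym
import numpy as np
from tensorflow import keras

env = gym.make("CartPole-v1")

# Rebuild the Q-network with the same (assumed) architecture, then load weights.
q_net = keras.Sequential([
    keras.layers.Dense(64, activation="relu", input_shape=(4,)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(env.action_space.n),
])
q_net.load_weights("dqn_model_nt.h5")

state = env.reset()
done, total_reward = False, 0.0
while not done:
    env.render()                                        # draw the current frame
    q_values = q_net.predict(state[None, :], verbose=0)
    action = int(np.argmax(q_values[0]))                # act greedily w.r.t. Q
    state, reward, done, _ = env.step(action)           # classic 4-tuple step API
    total_reward += reward

print(f"Episode return: {total_reward}")
env.close()
```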
