So, I was brushing up on RL, and so far most deep RL algorithms I have seen, such as actor-critic methods and DDPG, prefer MLPs / fully connected layers. Recently I came across the OpenAI Requests for Research, where they ask for an investigation of the effect of regularization on different RL algorithms. One possible reason regularization shows no benefit is that RL does not use complex models like ResNet.
So my question is: are you aware of any work where the network depth in reinforcement learning is comparable to famous deep neural nets like SSD or YOLO? If yes, could you please share those links?
Hi @sparshgarg23,
No, I am not aware of any work using such sophisticated architectures (though I am admittedly not a deep RL expert).
MLP / CNN / LSTM are definitely preferred in most papers.
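For concreteness, here is a minimal sketch (assuming PyTorch; the class name, layer sizes, and dimensions are illustrative, not from any specific paper) of the kind of shallow MLP policy network that algorithms like DDPG typically use. Note how far it is from a ResNet- or YOLO-scale model: just two fully connected hidden layers.

```python
import torch
import torch.nn as nn

class MLPActor(nn.Module):
    """Illustrative two-hidden-layer MLP policy for continuous control."""

    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 256):
        super().__init__()
        # Two hidden layers is a common default in deep RL implementations;
        # the hidden size (e.g. 256) varies by paper and codebase.
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, act_dim),
            nn.Tanh(),  # squash actions into [-1, 1] for continuous control
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)

# Example usage (dimensions are made up for illustration,
# roughly HalfCheetah-like: 17-dim observation, 6-dim action):
actor = MLPActor(obs_dim=17, act_dim=6)
action = actor(torch.randn(1, 17))
```

With only a handful of layers like this, overfitting in the supervised-learning sense is less of a concern, which may be one reason regularization techniques designed for very deep vision models see little use in RL.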