
[Feature Request] Adding transformer (self-attention) policies/torch layers #1432

Closed
1 task done
sbOogway opened this issue Apr 6, 2023 · 1 comment
Labels: duplicate (This issue or pull request already exists), enhancement (New feature or request)

Comments

sbOogway commented Apr 6, 2023

🚀 Feature

Given the recently demonstrated stability and reliability of transformers, I think we should implement them in Stable Baselines, perhaps by creating an actor-critic transformer policy.
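As a rough illustration of what such a policy component could look like (this is a hypothetical sketch, not an existing SB3 API: the class name, dimensions, and pooling choice are all assumptions), a self-attention features extractor could encode a stack of past observation frames and feed a pooled feature vector to the usual actor and critic heads. In SB3 this would typically subclass `stable_baselines3.common.torch_layers.BaseFeaturesExtractor` and be wired in via `policy_kwargs={"features_extractor_class": ...}`; the sketch below uses plain PyTorch to stay self-contained:

```python
import torch
import torch.nn as nn


class TransformerExtractor(nn.Module):
    """Hypothetical self-attention features extractor.

    Assumes observations arrive as a stack of past frames with shape
    (batch, seq_len, obs_dim), e.g. from a frame-stacking wrapper.
    """

    def __init__(self, obs_dim: int, features_dim: int = 64,
                 n_heads: int = 4, n_layers: int = 2):
        super().__init__()
        # Project raw observations into the transformer's model dimension
        self.proj = nn.Linear(obs_dim, features_dim)
        layer = nn.TransformerEncoderLayer(
            d_model=features_dim,
            nhead=n_heads,
            dim_feedforward=2 * features_dim,
            batch_first=True,
        )
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        # (batch, seq_len, obs_dim) -> (batch, seq_len, features_dim)
        h = self.encoder(self.proj(obs))
        # Mean-pool over time to get one feature vector per sample,
        # which the actor and critic heads would then consume
        return h.mean(dim=1)
```

Whether mean-pooling, a CLS-style token, or taking the last timestep works best would be an empirical question, as would positional encoding for longer frame stacks.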

Motivation

No response

Pitch

No response

Alternatives

No response

Additional context

No response

Checklist

  • I have checked that there is no similar issue in the repo
@sbOogway sbOogway added the enhancement New feature or request label Apr 6, 2023
@araffin araffin added the duplicate This issue or pull request already exists label Apr 6, 2023
araffin (Member) commented Apr 6, 2023

I have checked that there is no similar issue in the repo

Please check harder next time...

Duplicate of Stable-Baselines-Team/stable-baselines3-contrib#165, #177, #1387, #1407, #1077

@araffin closed this as not planned (won't fix, can't repro, duplicate, stale) on Apr 6, 2023

2 participants