
Broken compatibility of recent release (0.15.6): since 0.15.4 #1802

Closed
wookayin opened this issue Feb 4, 2020 · 1 comment

wookayin commented Feb 4, 2020

For example, many properties of EnvSpec have been gone since 0.15.4. This breaks a lot of existing third-party RL software (e.g. SAC implementations) and baselines.

As an example, EnvSpec.tags may never have been considered a public API, but its removal was never documented and there was no hint of deprecation: #1626 (e.g. commit a99e8d1). Code that touches it now fails with an error such as 'EnvSpec' object has no attribute 'tags'. The removal of FlattenDictWrapper in favor of FlattenObservation (along with changes to so many other wrapper classes) is another example. If you want to remove such APIs, you could have kept an alias so people can switch to the new APIs gradually.
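For illustration, a compatibility alias of the kind suggested here could look roughly like the sketch below (the warning text is my own, and note that FlattenObservation flattens the whole observation rather than selected dict keys, so it is not an exact drop-in for every FlattenDictWrapper use):

```python
import warnings

import gym
from gym.wrappers import FlattenObservation


def FlattenDictWrapper(env, dict_keys=None):
    """Deprecated alias that forwards to FlattenObservation.

    Unlike the old wrapper, FlattenObservation flattens the entire
    observation, not only the keys listed in ``dict_keys``.
    """
    warnings.warn(
        "FlattenDictWrapper has been replaced by FlattenObservation; "
        "please migrate.",
        DeprecationWarning,
        stacklevel=2,
    )
    return FlattenObservation(env)


# Downstream code can likewise guard against the removed EnvSpec.tags
# attribute instead of crashing with AttributeError:
env = gym.make("CartPole-v1")
tags = getattr(env.spec, "tags", {})  # {} on gym >= 0.15.4, real tags before
```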

But why? With all due respect, I don't think this package maintains its tests and version semantics very well. I understand that backward-incompatible changes are unavoidable in many situations, but in the gym package similar things have happened too many times. If they cannot be avoided, could you please consider bumping the version appropriately (using a better versioning scheme such as "Semantic Versioning"), or at least mention breaking changes in the RELEASE NOTES? Otherwise, specifying a version does not mean much.
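Until that happens, downstream projects can only protect themselves by pinning gym tightly, e.g. in requirements.txt (the version bounds below are just an example):

```
# Pin gym exactly so a routine dependency upgrade cannot pull in a
# breaking 0.x release:
gym==0.15.3
# or, if patch releases must be allowed, exclude the breaking ones:
# gym>=0.15.0,<0.15.4
```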

Other references (to name only a few):

openai/baselines#977
openai/baselines#1051
openai/baselines#1034
openai/baselines#1014

Would you please pay more careful attention to compatibility and versioning, so that newer versions of gym do not break so many existing codebases? (I agree and understand that this is definitely not easy.) This matters especially given that gym is considered a de facto standard API in the RL community. I would like to ask the OpenAI developers to take these maintenance issues more seriously.


pzhokhov commented Feb 10, 2020

Very reasonable request. I apologize for jumping the gun on that PR, and will try not to repeat this mistake in the future. In the meantime, I have released version 0.16.0 with a release note describing the EnvSpec changes.
That being said, we are technically adhering to semantic versioning, as https://semver.org/#spec-item-4 specifies:

Major version zero (0.y.z) is for initial development. Anything MAY change at any time. The public API SHOULD NOT be considered stable.

(Admittedly, that's a bit of a silly excuse in this particular situation.)
Speaking constructively: even though we'll put our best effort into not repeating similar failures in the future, the parts of the API that can be considered reliable are the things in gym/core.py (Env, Wrapper); the rest is less stable and may change. However, you are right that we could and should have mentioned the changes in the release notes and bumped the minor version.
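For what it's worth, here is a sketch of what relying only on that stable core looks like in practice: an environment built purely on gym.Env and the standard spaces, with no dependence on EnvSpec internals or the wrapper classes (the environment itself is made up for illustration):

```python
import gym
import numpy as np
from gym import spaces


class ConstantEnv(gym.Env):
    """Trivial single-step environment using only the core gym API."""

    def __init__(self):
        # Spaces come from gym.spaces, part of the stable surface.
        self.observation_space = spaces.Box(
            low=-1.0, high=1.0, shape=(1,), dtype=np.float32
        )
        self.action_space = spaces.Discrete(2)

    def reset(self):
        # Classic gym API: reset() returns only the initial observation.
        return np.zeros(1, dtype=np.float32)

    def step(self, action):
        # Classic gym API: step() returns (obs, reward, done, info).
        obs = np.zeros(1, dtype=np.float32)
        return obs, 1.0, True, {}
```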

AdamGleave added a commit to HumanCompatibleAI/evaluating-rewards that referenced this issue Feb 11, 2020
* Use benchmark_environments test code

* Force CI to regenerate cache

* Fix for Gym breaking change: openai/gym#1802