Improving code quality #400
Comments
Hi David! Thank you for pointing that out! All your concerns are perfectly valid; in fact, we have tripped over many of these problems internally as well. Recently, we started an effort to improve the code quality of baselines, specifically test and benchmark coverage and pep8 compliance (albeit with somewhat relaxed standards, at least at first, around line length, star imports, and maybe variable names, because researchers love to name variables "Q" or "Vf" :) ). Code comments and docs will follow, though with a bit of a fuzzy timeline. If there is any particular algorithm you'd like to receive high-priority love, please let us know. We are also open to pull requests with either new algorithms or improvements to the existing ones.
Hi @pzhokhov! Thank you for your quick reply, and good to know you are working on this. I think the concept of this repository is so great that it's worth spending the effort on it. I will see if I can spend some time within my next projects to add some documentation too. Have you chosen a docstring convention (i.e. layout) yet? One more thing: is there a common API layout? For example, in scikit-learn every classifier/regressor provides a fit and transform method.
Adding onto this - the Gym library provides a nice and clean interface for interacting with RL environments. It'd be nice if the same consistency could be applied to the algorithms and policies in baselines. For example, many algorithms implement their own versions of neural nets and policies, when a modular approach could be taken instead, where the implementation details of policies are abstracted away from the training algorithms, and vice versa. Ideally there would be just one train.py that takes a config file specifying the policy architecture and the algorithm, and loads the corresponding components. It'd also be nice to have a standard policy saving/loading interface, so it would be possible to load and collect data from policies separately from the learning algorithms that were used for training.
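To make the config-driven idea concrete, here is a minimal sketch of how a single entry point could look up components from a config. The registry names and config keys are invented for illustration and are not part of baselines:

```python
# Hypothetical component registries; real entries would construct actual
# policy networks and algorithm trainers.
POLICIES = {"mlp": lambda: {"type": "mlp", "layers": [64, 64]}}
ALGORITHMS = {"a2c": lambda policy: {"algo": "a2c", "policy": policy}}

def build_from_config(config):
    """Look up the policy and algorithm by name and wire them together."""
    policy = POLICIES[config["policy"]]()
    return ALGORITHMS[config["algorithm"]](policy)

trainer = build_from_config({"policy": "mlp", "algorithm": "a2c"})
```

The point of the registries is that adding a new algorithm or policy only requires registering it, not touching the training script.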
@david-woelfle API layout - there is no solid common API just yet (that's another part of the code quality effort). So far, the most stable convention seems to be that each algorithm provides a "learn" method that takes an environment, a policy, and algorithm hyperparameters, and returns the trained model.
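For readers unfamiliar with the convention, here is a toy sketch of that "learn" shape. The bodies are stubs and the parameter names are illustrative, not real baselines code:

```python
def learn(env_step, policy, total_timesteps=100, lr=7e-4):
    """Toy training loop following the learn(env, policy, **hyperparams)
    convention: run for total_timesteps and return the trained model."""
    updates = 0
    for t in range(total_timesteps):
        obs = env_step(t)       # collect one transition from the environment
        _action = policy(obs)   # act with the current policy
        updates += 1            # stand-in for a gradient update
    return {"policy": policy, "lr": lr, "updates": updates}

# A trivial "environment" and "policy" to exercise the interface.
model = learn(env_step=lambda t: float(t), policy=lambda obs: 0,
              total_timesteps=5)
```

Keeping this signature consistent across algorithms would make them interchangeable from a caller's point of view.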
Hi all, Not to pile on here, but I have a few more points I think are worth highlighting.
While I massively appreciate the code, cleaning things up a bit would be awesome.
The stable-baselines fork has done a lot of heavy lifting to refactor these baselines into a common API and add documentation. Would it be possible to merge their efforts upstream? Edit: this discussion is happening here 🙏.
* Add 'terminal_observation' info key to vecenvs (fixes openai#400)
* Review changes to terminal_observation fix
* Fix typo
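A self-contained sketch of how a consumer might use the 'terminal_observation' info key from those commits; the step result is faked here so the example runs without a real vecenv, and the helper name is invented:

```python
def extract_terminal_obs(obs, rewards, dones, infos):
    """For each env that just finished (and was auto-reset), pull the true
    final observation out of its info dict; None for running envs."""
    return [
        info.get("terminal_observation") if done else None
        for done, info in zip(dones, infos)
    ]

# Fake step result for two parallel envs: the first just terminated.
step_result = ([0.0, 0.3], [1.0, 0.0], [True, False],
               [{"terminal_observation": 0.5}, {}])
finals = extract_terminal_obs(*step_result)
```

This matters because vectorized envs auto-reset, so the observation returned on a `done` step is already the first observation of the next episode.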
Hi everybody,
first, many thanks for putting this repo online. When I found it, I thought something like: "High quality implementations of RL algorithms? That is pretty cool." However, after taking a more detailed look into the repository, I was actually quite disappointed: what I found does not appear to be anything close to high-quality code.
Here are some examples of what I mean:
Looking at baselines/baselines/a2c/a2c.py, there are nearly no comments explaining what is done and why. I get that large parts of the code can be understood by reading the corresponding paper, and that is fair enough for me. Nevertheless, not writing any docstrings at all seems very poor to me. See e.g. lines 18-31 of a2c.py:
While reading this code I ask myself many questions, e.g.:

* What is `ob_space`, and what object do you expect for it?
* Why `reuse=False` for `step_model` and `reuse=True` for `train_model`?
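For illustration, a docstring (here in numpydoc style) could answer both questions in place. The signature below is a guess at the shape of `Model.__init__` in a2c.py, not a verified copy of it:

```python
class Model:
    def __init__(self, policy, ob_space, ac_space, nenvs=1, nsteps=5):
        """A2C model holding a stepping network and a training network.

        Parameters
        ----------
        policy : callable
            Constructor for the policy network.
        ob_space : object
            Observation space of the environment (e.g. a gym.Space);
            used to size the network inputs.
        ac_space : object
            Action space of the environment.
        nenvs : int
            Number of parallel environments.
        nsteps : int
            Rollout length per update.

        Notes
        -----
        `step_model` is created with `reuse=False` so its variables are
        built first; `train_model` then uses `reuse=True` to share those
        same weights while accepting batched rollouts.
        """
        self.ob_space = ob_space
        self.ac_space = ac_space

m = Model(policy=None, ob_space="Box(4,)", ac_space="Discrete(2)")
```

Even a short docstring like this would remove most of the guesswork for a reader.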
Going on, here are some more points I would expect from high-quality code:
That's a lot of criticism, I know. So why should you care at all? Well, the idea of this repo is actually brilliant, and I am sure it works fine as long as I don't have to touch anything. But as soon as I want to change something, I would rather throw your code away and rewrite it myself, because that is less effort than understanding poorly documented code.
Finally, to leave you with a question: is there any chance that the above points will be improved in the future?
Kind regards,
David