Add more features from copy-paste downstreams #26

Open
bollwyvl opened this issue Dec 2, 2020 · 5 comments

@bollwyvl (Collaborator) commented Dec 2, 2020

In jupyterlab-lsp, and probably scattered in a few other places, there is a trove of lab-specific (but not extension-specific) options which would be nice to polish, document, and include in this repo:

  • more advanced CodeMirror behaviors
  • working with advanced settings
  • working with jupyter_notebook_config.json
  • better support for starting servers without a shell and its peril-fraught escaping
  • more encapsulated notebook/lab config and runtime files (e.g. workspaces)
  • "nasty" paths with non-ASCII characters and spaces, to catch more encoding/escaping errors (see the sketch below)
@jtpio (Contributor) commented Jan 25, 2021

Looks like some of:

could be moved to this repo, for example?

I'm trying to think of a minimal setup, with the fewest configuration files and the least boilerplate, that could for example be used to test a "traditional" JupyterLab extension. Maybe we could then add this setup as an opt-in to the cookiecutter, so end-to-end testing would be enabled out of the box for new extensions.
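
One possible shape for that, sketched only (the atest/ layout and option values are assumptions, not an existing cookiecutter feature): a single script that drives the suite through robot's Python API, so the extension needs no extra configuration files.

```python
# A sketch of a "least boilerplate" runner; paths and options are assumptions.
import sys

from robot import run  # robot.run accepts the same long options as the robot CLI


def main() -> int:
    return run(
        "atest",                  # one directory of .robot suites next to the extension source
        outputdir="build/robot",  # keep logs, reports, and screenshots out of the source tree
        xunit="xunit.xml",        # convenient for CI result reporting
        loglevel="DEBUG",
    )


if __name__ == "__main__":
    sys.exit(main())
```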

@bollwyvl (Collaborator, Author)

Yeah, wxyz would love for there to Just Be a standard library for @jupyter-widgets/controls core, for example. Just for documentation (there's a lot to chew on in wxyz), here are some quick thoughts from doing a couple of releases with robot:

Testing WXYZ Widgets with Robot

  • the docs/demo/test approach that I'm coming around to is:
    • write a notebook that demonstrates each new capability:
      • show a unique, minimal custom widget, usually just setting value in as few cells as possible
      • add more detail, using familiar widgets (sliders, etc.) as control surfaces (sometimes embedded inside it)
      • finally, build up a full application-style example with DockBox
    • lint/format all of the notebooks with black, isort, and prettier... should probably add pyflakes, etc.
    • ensure each document is importable with importnb
    • ensure all of them are importable together, in index.ipynb
    • exercise all of them with nbconvert (see the sketch below)
    • and then, finally, exercise all of them with robot

Lesson Learned: Testing code in the browser is the most expensive, highest-false-positive way to test Jupyter functionality. Do everything you can to find problems before the browser ever gets involved.
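
A rough sketch of the importnb / nbconvert steps above (paths and names are illustrative, and the notebooks are assumed to have importable, identifier-like filenames):

```python
# A sketch of the cheaper, pre-browser checks: every demo notebook should import
# cleanly and execute end-to-end under nbconvert before robot ever opens a browser.
import sys
from pathlib import Path

import nbformat
from importnb import Notebook
from nbconvert.preprocessors import ExecutePreprocessor

DOCS = Path("docs")  # assumed location of the demo notebooks


def check_importable(notebook: Path) -> None:
    """Import the .ipynb as a module via importnb; any top-level error fails fast."""
    sys.path.insert(0, str(notebook.parent))
    try:
        with Notebook():
            __import__(notebook.stem)
    finally:
        sys.path.pop(0)


def check_executes(notebook: Path) -> None:
    """Run every cell with nbconvert's ExecutePreprocessor."""
    nb = nbformat.read(str(notebook), as_version=4)
    ExecutePreprocessor(timeout=600).preprocess(
        nb, {"metadata": {"path": str(notebook.parent)}}
    )


if __name__ == "__main__":
    for nb_path in sorted(DOCS.glob("*.ipynb")):
        check_importable(nb_path)
        check_executes(nb_path)
```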

RobotOps: the nuts and bolts of robot testing

For the actual running of robot tasks, I think some of the higher-order boilerplate around robot invocation is important:

  • ensure tests always run with the lowest-level randomization, and have no coupling to each other
    • always clean up fixtures, etc.
  • run tests in parallel with robotframework-pabot
  • ensure it's always easy (or at least not inconvenient) to run just one or two tests
  • have a single-CLI way to retry just the failed tests (see the sketch below)

Lesson Learned: Robot/selenium browser tests can easily get very, very slow; a lot of the time can get lost just waiting for bad conditions. pabot at least lets you wait for a lot of things at the same time, and achieves a basically linear speedup. If tests do fail because they are flaky (or are flaky because they are running at an unprecedentedly high velocity), then re-running just the failures until they pass can get you by.
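
A sketch of that invocation layer (assuming pabot and robot are on PATH, and that suites live in atest/; process counts and paths are illustrative): run everything in parallel with pabot, then retry only the failures and merge the results back into one report.

```python
# A sketch of the boilerplate invocation layer; suite paths, output locations,
# and process counts are assumptions.
import subprocess
from pathlib import Path

from robot import rebot  # robot's Python API for merging/rewriting reports

OUT = Path("build/robot")


def run_all() -> int:
    """Run all suites in parallel; robot options like --randomize pass through pabot."""
    return subprocess.call(
        ["pabot", "--processes", "4", "--randomize", "all",
         "--outputdir", str(OUT), "atest"]
    )


def retry_failed() -> int:
    """Re-run only the failed tests, then merge the rerun into the original report."""
    rc = subprocess.call(
        ["robot", "--rerunfailed", str(OUT / "output.xml"),
         "--outputdir", str(OUT), "--output", "rerun.xml", "atest"]
    )
    rebot(str(OUT / "output.xml"), str(OUT / "rerun.xml"),
          outputdir=str(OUT), merge=True)
    return rc


if __name__ == "__main__":
    if run_all():  # non-zero return code means some tests failed
        retry_failed()
```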

RoboDocs

Using the output of a robot test suite run as the input of a docs build with e.g. myst-nb or nbsphinx lets you have always-up-to-date screenshots.

Lesson Learned: Eventually a human has to look at the screenshots, and needs a sensible way to compare them. Docs are a pretty good way to do this, especially if the text caption corresponds to what you should expect to see on the page! I'd really love to figure out how to do screencasts... I've done this at the heavyweight VM level, but perhaps Firefox has a way to do it semi-natively.
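
A sketch of that hand-off (both directory names are assumptions): copy the screenshots robot captured into the docs source tree before Sphinx runs, so the myst-nb / nbsphinx pages always embed images from the most recent run.

```python
# A sketch of the robot-output-to-docs hand-off; directory names are assumptions.
import shutil
from pathlib import Path

ROBOT_OUT = Path("build/robot")           # where robot wrote its logs and screenshots
DOCS_IMGS = Path("docs/_static/screens")  # where the docs expect to find them


def collect_screenshots() -> int:
    """Copy every captured .png into the docs tree, returning how many were copied."""
    DOCS_IMGS.mkdir(parents=True, exist_ok=True)
    copied = 0
    for png in sorted(ROBOT_OUT.rglob("*.png")):
        shutil.copy2(png, DOCS_IMGS / png.name)
        copied += 1
    return copied


if __name__ == "__main__":
    print(f"copied {collect_screenshots()} screenshots into {DOCS_IMGS}")
```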

Unless your readthedocs magic is very strong, it's all but pointless to try to do this on their hardware. The situation might improve with the MAMBA_FEATURE_FLAG.

@bollwyvl (Collaborator, Author)

> could be moved to this repo, for example?

For reference, a lot of that does already exist, with some polish, in this library. Which specific things are you looking for?

@jtpio (Contributor) commented Jan 26, 2021

Thanks for all the details 👍

> Robot/selenium browser tests can easily get very, very slow; a lot of the time can get lost just waiting for bad conditions

Have you been able to compare that with other tools such as cypress, puppeteer or playwright?

> For reference, a lot of that does already exist, with some polish, in this library. Which specific things are you looking for?

Ah nice! I thought they were slightly different compared to the one in wxyz.

@bollwyvl (Collaborator, Author) commented Jan 26, 2021 via email
