Add spike report #210

Open

nikochiko wants to merge 4 commits into master from add-spike-integration-tests

Conversation

nikochiko (Contributor)

Added a spike report, giving an analysis of the approaches that can be used to test integration with the EvalAI server, a time estimate, and some examples.

Ram81 (Member) left a comment

@nikochiko the images in the markdown are not loading. Can you please have a look?

nikochiko (Contributor, Author) commented Dec 17, 2019

@Ram81 They are being loaded in my branch: https://github.com/nikochiko/evalai-cli/blob/add-spike-integration-tests/tests/integration/SPIKE.md
[screenshot attachment: visible-image]

Can you give me some more details of the issue?

Comment on lines +33 to +35
However, setting up the environment this way can take a lot of time. On the Travis VM, setting up the server took about
8-10 minutes on average. This makes the method unsuitable for regular testing on the CI/CD pipeline. But this can be
implemented successfully on the Production/Staging branches, where updates are less frequent. For example, the build can
Member

@nikochiko
Can you check whether https://docs.travis-ci.com/user/build-stages/ is feasible, as we can try to run the unit tests and integration tests in parallel. Using this, we can get feedback from the unit tests and, maybe, the integration tests will only run on some branches.

nikochiko (Contributor, Author) commented Dec 17, 2019

Aha, right. I think that's a good choice. Since the unit tests take only 35-45 seconds while the integration tests take 8-10 minutes, running them at the same time in parallel wouldn't make much difference to the total build time. But we can set the integration tests to run only after the unit tests pass. That way, a lot of time is saved when the unit tests are already failing, and the integration tests can be enabled on more branches. I hope this is what you mean; please correct me if I'm wrong 😅. I will add this to the report 😄
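For illustration, a minimal `.travis.yml` sketch of such a build-stages setup; the stage names, test paths, and the branch condition on the integration stage are assumptions for the example, not the project's actual configuration:

```yaml
# Hypothetical build-stages layout: stages run one after another, so the slow
# integration stage only starts once the unit-test stage has passed.
language: python
python: "3.6"

jobs:
  include:
    - stage: unit tests
      install: pip install -r requirements.txt
      script: pytest tests/             # fast feedback (~35-45 s)
    - stage: integration tests
      # run the slow stage only on selected branches
      if: branch IN (master, staging)
      install: pip install -r requirements.txt
      script: pytest tests/integration  # slow (~8-10 min incl. server setup)
```

Jobs within a stage run in parallel, but the stages themselves run sequentially, which gives the "integration tests only after unit tests pass" behaviour described above.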


<h6> Testing with evalapi server </h6>

This is the more hassle-free approach. Direct tests can be written against the evalapi server. Users can be created for
Member

Using the Auth Token of a live project, whether Production or Staging, is not good practice from a security point of view. It would be better if you can try encrypting the Auth Token in an environment variable, as described in: https://docs.travis-ci.com/user/environment-variables/#defining-encrypted-variables-in-travisyml
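For reference, a sketch of what that could look like, assuming the token is exposed to the tests as an environment variable named AUTH_TOKEN (the variable name is only illustrative):

```yaml
# Added to .travis.yml by running the Travis CLI locally, e.g.:
#   travis encrypt AUTH_TOKEN=<test-user-token> --add env.global
# The value is decrypted only inside Travis builds and is hidden from build logs.
env:
  global:
    - secure: "ENCRYPTED-STRING"   # the encrypted AUTH_TOKEN=<...> pair
```

One caveat from the Travis docs: encrypted variables are not made available to pull request builds coming from forks, so fork PRs would have to skip the live-server tests.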

nikochiko (Contributor, Author)

Oh thank you so much for the resource. I didn't know this could be done 😄 Will add this in the report.

Comment on lines 112 to 156
<h5> The Conclusion: </h5>

* Testing the CLI with the EvalAI server will require a lot of setup, and possibly changes in the EvalAI server to enable testing.
* Among the setup approaches, the second one (testing against evalapi) is better in the short term if the credentials
don't become an issue. With this approach, there would be almost no extra work required. An example challenge can be
created as a tutorial for new users and can be used for testing as well.
* However, in the long term, the first approach (testing against a development environment) should be preferred, as it
allows for more complete testing with more control over the server. With this approach, the work on the setup would
take around 2-3 weeks.
* The tests will also be easier to write when testing against the live evalapi server, while writing them in the other
scenario would also include adding mock challenges, submissions, participant teams, etc. A rough estimate would be
around 8 weeks for writing complete tests in the first case and 10-12 weeks for the second case.
* Overall:

Taking the approach to write tests against the live evalapi server can take just over 8 weeks.

A summary of this approach is as follows:
* Lightweight, faster to implement
* Testing time will not increase much (currently it is around 40 seconds; with these tests added, it would be around 1 minute on
Travis).
* Can be run more frequently on the CI/CD pipeline as it is lightweight
* However, the approach is crude and more prone to errors
* Development on the CLI project can come to a halt when the server is down
* Less freedom while writing tests, as the data needs to be present on the live server
* Exposing the credentials of the test user could become a potential issue
* Currently, the API does not have functionality to create challenges/phases or load additional data into the database,
except for making submissions. Later, writing tests for such conditions can be problematic, as the database being
used for tests is the same as that being used for production.
* As a workaround, functionality can be added inside the EvalAI server to allow developer testing with temporary mock
databases.

Taking the approach to write tests for a developer environment setup on the Travis VM can take around 12-16 weeks for a
complete setup.

The summary for this:
* More complex, heavyweight
* Testing time will be greatly increased. Setting up the server takes around 8-10 minutes.
* Should only be used for final checks on the Production/Staging branches of the CI/CD pipeline, due to being slow.
* Mocking the database adds additional complexity. (As a workaround, a separate branch can be maintained on the main
EvalAI project -- such as `production-mockdb` -- where we allow for this functionality. Then, while setting up the
server, that branch can be cloned with `git clone --branch production-mockdb` and `docker-compose up` can be run for
it; a sketch of this follows the list.)
* Allows for more freedom on the type of tests to be written. More customizable
* Will not become a problem even as more functionality is added to the CLI
* More suitable for the long term
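A minimal sketch of how that setup step could look in `.travis.yml`, assuming the hypothetical `production-mockdb` branch exists on the main EvalAI repository (the branch name, repository URL, and wait time are illustrative):

```yaml
# Illustrative Travis steps: clone the mock-db branch of the EvalAI server and
# bring it up with docker-compose before the integration tests run.
before_install:
  - git clone --branch production-mockdb --depth 1 https://github.com/Cloud-CV/EvalAI.git
  - (cd EvalAI && docker-compose up -d)
  # crude wait so migrations and workers finish starting before tests run
  - sleep 120

script:
  - pytest tests/integration
```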
Member

Thanks a lot for writing the summary 🎉

nikochiko (Contributor, Author)

@vkartik97 I added some more information! Please have a look.

@nikochiko nikochiko requested review from Ram81 and krtkvrm December 18, 2019 06:33
krtkvrm (Member) left a comment

LGTM 💯
