Google Season of Docs Ideas 2021

Rishabh Jain edited this page Mar 26, 2021 · 27 revisions

About CloudCV

Welcome, and thank you for your interest in CloudCV/EvalAI!

CloudCV began in the summer of 2013 as a research project within the Machine Learning and Perception lab at Virginia Tech (now at Georgia Tech), with the ambitious goal of building platforms that make AI research more reproducible. We’re a young community working towards enabling developers, researchers, and fellow students to build, compare, and share state-of-the-art Artificial Intelligence algorithms. We have participated in the past seven editions (2013 - 2021) of Google Summer of Code, over the course of which our students built several excellent tools and features.

We are working on building an open-source platform, EvalAI, for evaluating and comparing machine learning (ML) and artificial intelligence (AI) algorithms at scale. EvalAI provides a scalable solution for the AI research community's critical need to evaluate machine learning models against static ground-truth data or with a human in the loop. This helps researchers, students, and data scientists create, collaborate on, and participate in AI challenges organized around the globe. By simplifying and standardizing the process of benchmarking these models, EvalAI seeks to lower the barrier to entry for participating in the global scientific effort to push the frontiers of machine learning and artificial intelligence, thereby increasing the rate of measurable progress in this domain.

About EvalAI

Progress on several important problems in Computer Vision (CV) and Artificial Intelligence (AI) has been driven by the introduction of bold new tasks coupled with the curation of large, realistic datasets. Not only do these tasks and datasets establish new problems and provide the data necessary to analyze them, but more importantly they also establish reliable benchmarks where proposed solutions and hypotheses can be tested – an essential part of the scientific process. In recent years, the development of centralized evaluation platforms has lowered the barrier to compete and share results on these problems. As a result, a thriving community of researchers has grown around these tasks, thereby increasing the pace of progress and technical dissemination. EvalAI is an open-source platform that helps simplify and standardize the process of benchmarking AI models. We have hosted 100+ AI challenges with 10,000+ users, who have created 100,000+ submissions. Several organizations from industry (such as Facebook, Google, IBM, and eBay) and academia (such as Stanford, CMU, MIT, and Georgia Tech) are using it, or forks of it, to host their internal challenges instead of reinventing the wheel.

EvalAI's Documentation

How is EvalAI's documentation built?

EvalAI’s documentation is written in Markdown and built with Sphinx. Sphinx generates static HTML files, which are hosted on Read the Docs. The latest documentation is available here.
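To work on the docs, it helps to build them locally before opening a pull request. A minimal sketch, assuming the standard Sphinx layout (a `docs/` directory with a Makefile) and the `recommonmark` extension for Markdown support — check the repository's own setup instructions for the exact requirements file:

```shell
# Install Sphinx and Markdown support (exact pinned versions
# live in the repo's requirements; these names are illustrative)
pip install sphinx recommonmark sphinx_rtd_theme

# Build the HTML docs locally; assumes a standard Sphinx Makefile
cd docs
make html

# Open the generated pages in a browser
# (output directory assumes Sphinx defaults)
open _build/html/index.html
```

Read the Docs runs essentially the same build on every push, so a clean local `make html` (no warnings) is a good proxy for what will be published.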

Project Ideas

Run a full-audit for the current documentation and add docs for challenge creation on EvalAI

The current documentation for EvalAI is outdated and inconsistent in multiple places due to the continuous development of the project. The main goal of this project is to go through the entire current documentation, check it for inconsistencies, and create a friction log for one of the most important use cases on EvalAI: challenge creation. Challenge creation is an end-to-end pipeline covering creating the challenge config, uploading it to EvalAI, and running the workers for evaluation. The main tasks in this project include:

  • Read the current documentation and check it for inconsistencies
  • Create a document listing the gaps in the challenge-creation documentation
  • Add the missing documentation for creating a challenge using a zip file and via GitHub
  • Add documentation for running the challenge workers locally and on EvalAI
  • Add an FAQ section for challenge hosts to address common errors
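At the center of the pipeline above is a challenge configuration file that hosts upload as a zip bundle (or push via GitHub). As a rough illustration only — the field names below are assumptions for this sketch, not the authoritative schema, which is exactly what the new documentation would need to pin down:

```yaml
# Hypothetical challenge_config.yaml sketch; field names are
# illustrative assumptions, not EvalAI's verified schema.
title: "My Example Challenge"
short_description: "One-line summary shown in challenge listings"
description: "templates/description.html"
evaluation_details: "templates/evaluation_details.html"
start_date: "2021-06-01 00:00:00"
end_date: "2021-12-31 23:59:59"
challenge_phases:
  - name: "Dev Phase"
    codename: "dev"
    max_submissions_per_day: 5
```

A worked example like this, with every field explained and validated against the real schema, is the kind of artifact the "missing documentation for challenge creation" task would produce.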

Mentors - Deshraj, Rishabh

Contact Info:

  • EvalAI Website: eval.ai
  • EvalAI GitHub repository: CloudCV/EvalAI
  • EvalAI Docs: http://evalai.readthedocs.io/en/latest
  • Gitter Channel: gitter.im/Cloud-CV
  • Mailing list: groups.google.com/forum/#!forum/cloudcv
  • Email: [email protected]

Required Skills

  • Good working knowledge of English
  • Familiarity with Git, GitHub, and Markdown
  • Familiarity with the EvalAI platform