From a57069d6e3c007b5a471a89ce24f69f343be6b48 Mon Sep 17 00:00:00 2001
From: Lyndon Maydwell
Date: Fri, 23 Feb 2024 17:15:08 +1000
Subject: [PATCH] Reduced scope RFC for milestone 1

---
 rfcs/0002-distribution-gh.md | 196 ++++++-----------------------------
 1 file changed, 30 insertions(+), 166 deletions(-)

diff --git a/rfcs/0002-distribution-gh.md b/rfcs/0002-distribution-gh.md
index a1ce66f5..b4e3c972 100644
--- a/rfcs/0002-distribution-gh.md
+++ b/rfcs/0002-distribution-gh.md
@@ -10,35 +10,20 @@ Connector API, definition and packaging are specified respectively by:
 * [NDC specification](http://hasura.github.io/ndc-spec/)
 * [Deployment Specification](https://github.com/hasura/ndc-hub/blob/main/rfcs/0000-deployment.md)
 * [Packaging Specification (WIP)](https://github.com/hasura/ndc-hub/pull/89/files)
+* [Umbrella Specification with the rest of the Roadmap](https://github.com/hasura/ndc-hub/pull/98)
 
-This new distribution specification details how connector packages are intended to be owned, stored, indexed, searched, fetched and automatically published.
+This new distribution specification details the extensions to the connector registry metadata that are required for distribution of the new package definitions.
 
-The intuition for this system is inspired by other package management systems such as NPM, Cabal, etc.
-
-There was a previous implementation of these concepts as described (TODO: Get docs links from Shraddha)
-
-This proposal intends to allow the existing system to be extended to support new functionality and not require a migration.
-
-
-### Items Outstanding in this Specification (TODO)
-
-The following items are intended to be fleshed-out in this specification prior to approval:
-
-* Data-formats
-* URI locations
-* Identifiers
-* Assignment of implementation
-* Dependencies
-* Revocation concerns
+This proposal intends to allow the existing system to be extended to support new functionality and not require any breaking changes or downtime. In addition, all "TODO" references should be replaced before finalization.
 
-### Delivery Roadmap
+### Umbrella Delivery Roadmap
 
-The delivery of the changes outlined in this RFC can be rolled out incrementally, and this can be paused or stopped at any stage without disruption to the current system.
+The delivery of a broader set of changes can be rolled out incrementally, and this can be paused or stopped at any stage without disruption to the current system.
 
-#### Milestone 1 - Definition Links:
+#### Milestone 1 (this RFC) - Definition Links:
 
 * Where there are currently git tag references:
   * Add link and checksum to package definition archive to DB Schema
@@ -105,58 +90,15 @@ We will establish conventions for this that make authoring as streamlined as pos
 * Create a new CLI plugin to manage interaction with the API
 
 
-### Follow-Up Work
-
-After implementation of this system, follow-up actions should be performed:
-
-* Publication of package definition archives for all connectors
-* Publication of new versions of all available connectors linking to archives
-
-
-### Out of Scope for this RFC
-
-The following are not described in this specification:
-
-* The format of the metadata used for indexing, etc
-* Authentication mechanisms
-* Verification policies and procedures
-
-
-## Motivation and Why and How this Differs from Existing Solutions
-
-First, where are the current and proposed system different?
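+
+As a purely illustrative sketch of the Milestone 1 item above (adding a package definition link and checksum alongside the existing git tag references), one possible shape for an extended registry version entry is shown below. The field names, values, and TypeScript representation are assumptions made for illustration only and are not part of this RFC's requirements.
+
+```typescript
+// Hypothetical shape only: the concrete DB schema / metadata field names are not
+// fixed by this RFC.
+type ConnectorVersionEntry = {
+  namespace: string;                 // e.g. "hasura"
+  name: string;                      // e.g. "postgres"
+  version: string;                   // e.g. "v0.2.0"
+  gitTag?: string;                   // existing git tag reference, kept as-is
+  packageDefinitionUrl?: string;     // new: link to the package definition archive (.tar.gz)
+  packageDefinitionSha256?: string;  // new: checksum used to verify the fetched archive
+};
+
+// Placeholder values only; the bucket, paths, and hashes are not real connector data.
+const example: ConnectorVersionEntry = {
+  namespace: "hasura",
+  name: "postgres",
+  version: "v0.2.0",
+  gitTag: "v0.2.0",
+  packageDefinitionUrl:
+    "https://storage.googleapis.com/<bucket>/hasura/postgres/v0.2.0/<sha>/hasura-postgres-v0.2.0-<sha>.tar.gz",
+  packageDefinitionSha256: "<sha256-of-archive>",
+};
+```
+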
-
-| Layer | Current System | Proposed System | Difference |
-| --- | --- | --- | --- |
-| Package Definition Storage | Github Tag | .tar.gz | Currently the package definitions are stored in a directory hierarchy on a github branch/tag |
-| Database | Postgres | Postgres | Same infrastructure with a new schema |
-| API | Hasura | Hasura | Same infrastructure with different roles and actions |
-| CLI | N/A | Hasura V3 CLI Plugin | No current CLI interactions available |
-| Registry | ndc-hub/registry | No central component in proposal | There is no central definition source outside of the DB/Storage layer in the proposal |
-| Third-party CI | No current component | Usage via CLI/API | Third-parties are able to integrate into their CI via API |
-| Topic Scraping | No current component | Scheduled/Webhook trigger of topic ingestion of community connectors | Arbitrary ingestion is possible via the API/CLI |
-
-The current system's GraphQL: https://data.pro.arusah.com/ - Looking at all the graphql queries starting with connector_ should give all the things that we have so far.
-
-Google cloud function (https://github.com/hasura/connectors-cloud-integration/tree/main/sync_connector_hub_data) runs every 24 hours and scrapes registry and inserts into DB.
-
-Issues with the current system:
-
-* Pull-based ingestion of registry - Changes should ideally propagate instantaneously
-* No ability to treat connectors independently - Need to PR to central registry
-* Artefacts are not distributable outside of Github references - Tied to Github
-* TODO
-
-
 ## Proposal
 
-While the precursor specifications outline the structure and mechanisms of packaging, this RFC details how the packages are owned, distributed, and indexed. A layered solution is outlined from storage up to user-applications and how they can be leveraged by CI in order to automatically publish updated versions and scrape topics for community contribution discovery.
+While the precursor specifications outline the structure and mechanisms of packaging, this RFC details how the packages are owned, distributed, and indexed. The solution is outlined from storage up to user-applications and how they can be leveraged by CI in order to automatically publish updated versions and scrape topics for community contribution discovery.
 
 This solution enables the following UX scenarios:
 
-* Authors are granted system credentials and API tokens
-* Packages are manually published via CLI by authors along with metadata
-* Packages are automatically published by authors via CI and API in connector repositories
+* Packages are published by authors via Hasura CI
+  * Deriving metadata and definitions from the hub registry
+  * Deriving metadata and definitions from Github topics
 * Packages browsed and searched for by Hasura V3 users
 * Packages are referenced in Hasura V3 projects
 * Packages are fetched for local usage in Hasura V3 projects
@@ -164,24 +106,7 @@ This solution enables the following UX scenarios:
 
 
 ### Ownership
 
-Ownership is granted on an Organisation -> Package -> Author -> Version hierarchy.
-
-Users will be granted roles that authorize operations withing this hierarchy.
-
-An initial draft of roles (from most, to least privileged) is:
-
-* Operations - Global Access for System Operations
-* Auditor - Global Read-Only Access
-* Admin - Organisation Administrator - Create Packages
-* Owner - Package Administrator
-* Author - Package Contributor via Releases
-* Public - General Public (default)
-
-All roles (except auditor) can grant lesser role privileges to users in their domains.
-
-Only Operations can grant the Operations role.
-
-All changes within the system are logged and can be viewed by the auditor role.
+For this milestone ownership is restricted to Hasura, but PRs can be made to the hub registry, and repositories can use pre-defined topics to allow community contributions.
 
 
 ### Storage
@@ -189,92 +114,27 @@ Package definitions take the form described in the packaging spec. These need to be stored. The storage mechanism can be described abstractly:
 
 * The storage system is philosophically idempotent and content addressable (hashes are included in assets).
-* Upload is available to the system internally, and able to be delegated to authors via pre-authorized URIs.
 * Stable read-only (fetch) URIs exist for the stored location of packages
-* Indexes are maintained outside of storage, but minimal metadata is maintained for system-administration purposes
 
-While this abstract definition is useful for system-requirements, in practice our initial implementation will use Google Cloud Buckets.
+While this abstract definition is useful for system-requirements, in practice our initial implementation will use Google Cloud Buckets for a centralised Hasura publication, and Github releases for independent publication.
 
 Storage conventions will be followed so that our system could initially predict the location of packages and we can incrementally transition to API based package access in service of rapid delivery.
 
-Storage location convention will initially be: `ORG/PACKAGE/VERSION/SHA/org-packge-version-sha.tar.gz`
+Storage location convention should initially be: `ORG/PACKAGE/VERSION/SHA/org-package-version-sha.tar.gz`, although this is not a system dependency.
 
 
 ### Database
 
 The database backing the API provides all of the APIs state management capabilities outside of package archive storage (as described in "Storage").
 
-The initial implementation of the Database will be Postgres.
+The initial implementation of the Database will be an extension of the existing hub registry Postgres instance.
 
 
 ### API
 
-The API provides the user-interaction layer that mediates the database and storage components. No direct user interaction should occur with either the database, or storage, except for the case when the API delegates a storage interaction - such as providing an author a pre-authorized URL for publication, or providing a Hasura V3 project user a public storage URL for a package definition.
-
-The API will be implemented via a Hasura V3 instance. (TODO: Check if V3 has the capabilities to implement this yet, or if we should start with a V2 instance for stability reasons)
-
-The various functions of the API are described as follows:
-
-#### Operation
-
-* System health monitoring
-* Restarts
-* Resource allocation
+The API provides the user-interaction layer that mediates the database and storage components. No direct user interaction should occur with either the database or storage, except for the case when the API delegates a storage interaction - such as providing a Hasura V3 project user a public storage URL for read access to a package definition.
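+
+As a purely illustrative sketch of this read path (the endpoint host, root field, and column names below are assumptions, not a defined schema), a client could resolve a package definition location and checksum through the registry's GraphQL API along these lines:
+
+```typescript
+// Sketch only: "connector_version", its filter arguments, and the selected fields are
+// hypothetical names used for illustration.
+async function fetchPackageDefinitionLocation(name: string, version: string) {
+  const query = `
+    query ($name: String!, $version: String!) {
+      connector_version(where: { name: { _eq: $name }, version: { _eq: $version } }) {
+        package_definition_url
+        package_definition_sha256
+      }
+    }`;
+  const response = await fetch("https://<registry-api-host>/v1/graphql", {
+    method: "POST",
+    headers: { "Content-Type": "application/json" },
+    body: JSON.stringify({ query, variables: { name, version } }),
+  });
+  const { data } = await response.json();
+  // Expected to contain the public storage URL and its checksum, if the version exists.
+  return data.connector_version[0];
+}
+```
+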
-#### Administration
-
-* Creation of organisations
-* Creation of users
-* Creation of packages
-* Assignment of roles
-* Verification
-* Revocation of content
-* Redirection of resources
-
-#### Authoring
-
-* Creation of packages
-* Publication of new versions of packages
-* Association of new metadata with organisation/package/versions
-* Request for verification
-* Revocation (requests?) of package versions
-
-#### Discoverability
-
-* Search for package by metadata filters
-
-#### Acquisition
-
-* Request for package download URI
-
-
-### Applications / CLI
-
-The Hasura V3 CLI will provide a consistent and convenient interface to interacting with the API.
-
-The CLI commands will include the following:
-
-* hasura3 package create
-* hasura3 package publish
-* hasura3 package revoke
-* hasura3 package search
-* hasura3 package fetch
-
-
-### CI
-
-Contributors may leverage the API or CLI in their CI workflows in order to automate the publication of new versions of their packages.
-
-
-### Community Topic Consumption
-
-Hasura may automatically crawl pre-defined locations (such as Github) in order to collect third-party community contributions without authors needing to explicitly create new pull-requests, etc.
-
-For example: Github Topics can be use to search for e.g. "#hasura-v3-packge" and if the repository contains valid package definitions, consume these and index them.
-
-Please see previous work: TODO: Previous topic collection proof-of-concept
-
-*Open-Question: How would a community crawled package transition to an explicitly managed package? How would ownership be established and transitioned, etc?*
+The API will be implemented via a Hasura V2 instance.
 
 
 ### Indexing
 
@@ -287,6 +147,7 @@ The technical considerations were described in the "API" section, however, users will want to index packages via many criteria:
 * Organisation
 * Author
 * Tags
+* Checksum
 * Category
 * Free-text description
 * Related packages
@@ -294,6 +155,14 @@ The technical considerations were described in the "API" section, however, users will want to index packages via many criteria:
 * Any metadata in the Package description
 * Verification status
 
+as well as all the existing registry metadata, and all metadata included in the package definition.
+
+
+### Checksums
+
+All package definition references should be accompanied by a checksum in order to verify that the definition hasn't been changed in storage. Any definition fetch operation should verify the checksum.
+
+
 ### Discoverability
 
 The discoverability component will simply hard-code various permutations of the "Indexing" criteria to provide the user browsable lists of packages.
 
@@ -305,22 +174,17 @@ These could include lists such as:
 
 * Recently published
 * Most popular
 * Verified packages
 * Etc.
 
+
 ### Access
 
-Roles should be deliberately narrow in their scope, with higher level roles being able to grant lower-level roles, but not able to perform their duties implicitly.
+Access is read-only by default, with only Hasura having write access to the registry, API, and database. Package authors may have write access to their definition storage such as Github releases.
 
 
 ### Publication
 
-The publication of packages should be performed via the API by users who have the Author role.
-
-The mechanism used is a three-step process:
+The publication of packages should be performed via PR to the ndc-hub registry.
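+
+Related to this, and to the "Checksums" section above, whichever tooling performs a fetch will need a verification step. The following is a minimal sketch assuming SHA-256 and a Node.js runtime; neither is mandated by this RFC.
+
+```typescript
+// Sketch only: the hash algorithm, runtime, and function name are assumptions.
+import { createHash } from "node:crypto";
+import { readFile } from "node:fs/promises";
+
+export async function verifyPackageArchive(
+  archivePath: string,
+  expectedChecksum: string,
+): Promise<void> {
+  const bytes = await readFile(archivePath);
+  const actual = createHash("sha256").update(bytes).digest("hex");
+  if (actual !== expectedChecksum.toLowerCase()) {
+    throw new Error(
+      `Checksum mismatch for ${archivePath}: expected ${expectedChecksum}, got ${actual}`
+    );
+  }
+}
+```
+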
-
-* Request pre-authorized storage URI for a package version via API
-* Upload the package definition archive via the storage URI
-* Set the metadata for the package version
-
-This can be done via a single user-interaction if an application (such as Hasura V3 CLI) abstracts these three steps.
+Convenience interfaces could be developed to assist with this workflow.
 
 
 ### Verification
@@ -346,4 +210,4 @@ Any publicly accessible APIs with publication capabilities have the potential to
 * Recycling of content
 * Unintentional mistakes
 * Spam / Reflection
-* Etc.
+