Merge develop onto main for 0.3.0 release #75
Open
amoeba wants to merge 188 commits into main from develop
Conversation
Test suite now passes on Python 3.9.1
Port the essential services from Docker to Kubernetes
Python 3 Upgrade
Add instructions for an authenticated SPARQL endpoint
(Just needed to run out for the day)
The old Interface and Graph classes had a lot of cruft and were really confusing to keep straight. I've done a refactor that uses a different class structure that's a lot easier for me to understand. Hopefully it's easier for you too. See the README for more info, including a fancy picture, but the Interface class is now wrapped into a top-level SlinkyClient class and the old Graph class is now tied into a SparqlTripleStore class. With the old setup, you had to instantiate an Interface and a Graph. Now you just instantiate a SlinkyClient and you're good to go. Here's some more detail, copied from the README:

- `SlinkyClient`: Entrypoint class that manages a connection to DataONE, a triple store, and Redis for short-term persistence and delayed jobs
- `FilteredCoordinatingNodeClient`: A view into a Coordinating Node that can limit what content appears to be available based on a Solr query, e.g., a CN client that can only see datasets that are part of a specific EML project or in a particular region
- `SparqlTripleStore`: Handles inserting into and querying a generic SPARQL-compliant RDF triplestore via SPARQL queries. Designed to be used with multiple triple stores.
- `Processor`: Set of classes that convert documents of various formats (e.g., XML, JSON-LD) into a set of RDF statements

The old package code is left in the legacy submodule (to be deleted in the future) and its tests are still alive and working via pytest.
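For illustration, here's a minimal sketch of what the new setup looks like compared to the old one. The class names come from the README text above; the import path and the no-argument constructor are assumptions, not the package's documented API.

```python
# Sketch only: the import path and constructor signature are assumptions.
from slinky import SlinkyClient

# Old setup (now in the legacy submodule): you instantiated an Interface and
# a Graph separately and had to keep them in sync yourself.
#
# New setup: a single client manages the DataONE connection, the triple
# store, and Redis for short-term persistence and delayed jobs.
client = SlinkyClient()
```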
This is pretty basic but you can run this container and hit /get?id=foo to get the Slinky RDF for a given DataONE PID.
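As a quick illustration of hitting that route, assuming the container is exposed on localhost (the port and example PID below are placeholders):

```python
# Illustrative only: the host, port, and PID are placeholders; the
# /get?id=<pid> route is the one described in the comment above.
import requests

response = requests.get(
    "http://localhost:8080/get",
    params={"id": "doi:10.5063/EXAMPLE"},  # hypothetical DataONE PID
)
print(response.text)  # Slinky RDF for the requested PID
```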
We decided we needed to change this all back because schema.org gave up on switching to HTTPS.
This sets us up for later, when we automatically build and push a `latest` tag from the tip of develop.
amoeba changed the title from [WIP] Merge develop onto main for 0.3.0 release to Merge develop onto main for 0.3.0 release on Aug 13, 2022
This probably doesn't need a code review, but I'd recommend testing it out by installing it to the dev cluster and reviewing the results from there.
Changes:
- `pytest` defaults to running just the unit tests, and `pytest --integration` runs all tests (a generic sketch of this flag pattern appears after the TODOs below).

Remaining TODOs before merge:
TODO after merge:
- slinkythegraph DockerHub account (@ThomasThelen)
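For reference, an opt-in `--integration` flag like the one mentioned in the changes above is commonly wired up with pytest hooks in `conftest.py`, as in the sketch below. This is a generic pattern and an assumption about the approach, not code copied from this PR.

```python
# conftest.py -- generic sketch of an opt-in --integration flag for pytest.
import pytest


def pytest_addoption(parser):
    parser.addoption(
        "--integration",
        action="store_true",
        default=False,
        help="also run tests marked with @pytest.mark.integration",
    )


def pytest_collection_modifyitems(config, items):
    if config.getoption("--integration"):
        return  # flag given: run unit and integration tests
    skip_integration = pytest.mark.skip(reason="needs --integration to run")
    for item in items:
        if "integration" in item.keywords:
            item.add_marker(skip_integration)
```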