Local LLM Execution Capabilities #35
Conversation
Codecov Report
Additional details and impacted files

```diff
@@            Coverage Diff             @@
##             main      #35       +/- ##
===========================================
- Coverage   67.96%   24.66%   -43.29%
===========================================
  Files          11       27       +16
  Lines         571     1225      +654
===========================================
- Hits          388      302       -86
- Misses        183      923      +740
```
Continue to review full report in Codecov by Sentry.
@PSchmiedmayer This PR is now ready for review! 🚀 Also keep in mind that the overall SpeziML API is just a first draft; there are definitely lots of improvements to be made there. I tried to design the API with multiple hosting platforms (local, fog, cloud, …) in mind, but there is still a lot of work to do there.
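A platform-agnostic design like the one described above could, for instance, center on a single protocol that local, fog, and cloud runners all conform to. The following is only a hypothetical sketch; none of these type or function names are taken from the actual SpeziML API:

```swift
import Foundation

// Hypothetical sketch only: these types are NOT part of the actual
// SpeziML/SpeziLLM API; they merely illustrate how a single protocol
// could abstract over local, fog, and cloud hosting platforms.
protocol LLMRunner {
    /// Produces a completion for the given prompt.
    func generate(prompt: String) -> String
}

struct LocalLLMRunner: LLMRunner {
    let modelPath: URL

    func generate(prompt: String) -> String {
        // A real implementation would run on-device inference here.
        "local completion for: \(prompt)"
    }
}

struct CloudLLMRunner: LLMRunner {
    let endpoint: URL

    func generate(prompt: String) -> String {
        // A real implementation would call a remote LLM API here.
        "cloud completion for: \(prompt)"
    }
}

// Callers only depend on the protocol, so the hosting platform can be
// swapped without touching application code.
func answer(question: String, using runner: any LLMRunner) -> String {
    runner.generate(prompt: question)
}
```

The design choice here is that application code holds an `any LLMRunner` and never names a concrete platform, which is what makes a later fog or cloud backend a drop-in replacement.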
Thank you for the great additions and first steps here @philippzagar!
Great job with the documentation and structure in this PR; it is very well structured and already in a high-quality state! I only had a few smaller comments here and there that it would be great to address before we merge the PR.
Once again: Thank you for all the time and effort that went into the PR. Looking forward to seeing the different elements applied in different Spezi applications, e.g., integrating it into LLM on FHIR and/or HealthGPT to demonstrate the applicability to digital health use cases 🎉
Resolved review threads (outdated):
- Tests/LocalLLMExecutionDemo/LocalLLMExecutionDemo/Localizable.xcstrings
- Tests/LocalLLMExecutionDemo/LocalLLMExecutionDemo/Onboarding/LocalLLMDownloadManager.swift
- Tests/LocalLLMExecutionDemo/LocalLLMExecutionDemo/LocalLLMChatView.swift
- Tests/LocalLLMExecutionDemo/LocalLLMExecutionDemo/LocalLLMExecutionDemoApp.swift
Thank you for the great additions!
I had a few follow-up comments based on the latest changes and playing around with the code myself.
This should bring the PR close to being merged. I would suggest that we move some of the suggestions in the review comments into separate issues so we can focus on merging this PR once we have resolved the compiler flag issue.
Resolved review threads (outdated):
- Tests/LocalLLMExecutionDemo/LocalLLMExecutionDemo/Onboarding/Welcome.swift
- Tests/LocalLLMExecutionDemo/LocalLLMExecutionDemo/LocalLLMExecutionDemoApp.swift
In preparation for our meeting, I took a look at the current diff and added some comments. Let's go through them in our meeting and see how we can bring this PR across the finish line 🎉👍
Resolved review threads (outdated):
- Sources/SpeziLLMLocalDownload/SpeziLLMLocalDownload.docc/SpeziLLMLocalDownload.md
- Sources/SpeziLLMLocalHelpers/SpeziLLMLocalHelpers.docc/SpeziLLMLocalHelpers.md
@PSchmiedmayer Feel free to give this PR another review! The overall local LLM execution functionality is definitely not in a perfect state yet, but we will address the remaining issues and other optimizations in follow-up PRs! 🚀 Regarding the failing Markdown Link Checker: This is intended, as we will rename the entire repo from SpeziML to SpeziLLM. How should we move forward with the repo renaming? First rename the repo, then go ahead and merge this PR?
@PSchmiedmayer The CodeCov upload is failing because we renamed the repo from SpeziML to SpeziLLM (it will only work again after the merge).
@philippzagar Just re-synced CodeCov; it seems to work again: https://github.com/StanfordSpezi/SpeziLLM/actions/runs/7066539482/job/19243527150. The latest builds seem to fail due to a compilation error in the UI tests. Let me know once those are resolved, and I can merge the PR despite the failing Markdown check 👍
Very weird build error indeed; it only occurs when building for the test target… Will need to take a closer look!
Local LLM Execution Capabilities
♻️ Current situation & Problem
As of now, the Spezi ecosystem (via `SpeziML`) provides an OpenAI integration for easy LLM execution. However, using a remote, opaque service like OpenAI comes with a multitude of challenges, especially in the health domain. These challenges include privacy, trust, and security, but also financial considerations.

⚙️ Release Notes
- Renamed the package from `SpeziML` to `SpeziLLM` to better reflect the current functionality of the package.
- The package now consists of the targets `SpeziLLM` (providing base LLM infrastructure), `SpeziLLMLocal` (providing local execution capabilities), `SpeziLLMLocalDownload` (providing download and local storage functionality), as well as `SpeziLLMOpenAI` (providing an OpenAI GPT integration). All of these targets are subject to change in the upcoming releases.

📚 Documentation
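As a consumer-side illustration, the targets listed above could be pulled into an app roughly like this. This is a hypothetical `Package.swift` sketch: the app name and version requirement are made up; only the repository URL and the target names come from this PR.

```swift
// swift-tools-version:5.9
// Hypothetical consumer manifest sketch — version and app name are
// illustrative, not taken from the actual package manifest.
import PackageDescription

let package = Package(
    name: "MyHealthApp",
    platforms: [.iOS(.v17)],
    dependencies: [
        .package(url: "https://github.com/StanfordSpezi/SpeziLLM", from: "0.1.0")
    ],
    targets: [
        .target(
            name: "MyHealthApp",
            dependencies: [
                .product(name: "SpeziLLM", package: "SpeziLLM"),              // base LLM infrastructure
                .product(name: "SpeziLLMLocal", package: "SpeziLLM"),         // on-device execution
                .product(name: "SpeziLLMLocalDownload", package: "SpeziLLM"), // model download & storage
                .product(name: "SpeziLLMOpenAI", package: "SpeziLLM")         // OpenAI GPT integration
            ]
        )
    ]
)
```

Splitting the package into separate products this way lets an app that only needs, say, the OpenAI integration avoid pulling in the local execution machinery.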
Documentation has been provided via inline DocC comments and code examples.
✅ Testing
Wrote appropriate UI tests that utilize an LLM mock.
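A minimal sketch of what such an LLM mock could look like (hypothetical; this is not the actual test code from the PR): a type that deterministically "streams" a canned response instead of running a real model, so UI tests stay fast and reproducible.

```swift
// Hypothetical LLM mock sketch, NOT the actual test code from this PR:
// it deterministically yields a canned response word by word, so UI
// tests can exercise the chat flow without a real model.
struct MockLLM {
    let cannedResponse: String

    /// Splits the canned response into word "tokens",
    /// mimicking the streaming behavior of a real LLM.
    func stream() -> [String] {
        cannedResponse.split(separator: " ").map(String.init)
    }
}
```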
📝 Code of Conduct & Contributing Guidelines
By submitting this pull request, you agree to follow our Code of Conduct and Contributing Guidelines: