
Local LLM Execution Capabilities #35

Merged — 48 commits merged into main from feat/local-llm-execution on Dec 4, 2023

Conversation
Conversation

philippzagar (Member) commented Oct 26, 2023

Local LLM Execution Capabilities

♻️ Current situation & Problem

As of now, the Spezi ecosystem (via SpeziML) provides an OpenAI integration for easy LLM execution. However, using a remote, opaque service like OpenAI comes with a multitude of challenges, especially in the health domain: privacy, trust, and security, as well as financial considerations.

⚙️ Release Notes

  • Renaming the SPM package (and the entire repo) from SpeziML to SpeziLLM to better reflect the current functionality of the package.
  • Complete restructuring of the package to support local and remote LLM execution. Introduced four separate targets: SpeziLLM (providing base LLM infrastructure), SpeziLLMLocal (providing local execution capabilities), SpeziLLMLocalDownload (providing download and local storage functionality), and SpeziLLMOpenAI (providing an OpenAI GPT integration). All of these targets are subject to change in upcoming releases.
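The four-target split above might be consumed in a downstream app roughly as sketched below. This is a hypothetical consumer manifest: the product names simply mirror the target names listed above, and the package URL and version are placeholder assumptions, not confirmed by this PR.

```swift
// swift-tools-version: 5.9
// Hypothetical Package.swift of an app depending on the restructured package.
// Product names mirror the four targets described above; URL and version
// are placeholder assumptions.
import PackageDescription

let package = Package(
    name: "MyHealthApp",
    platforms: [.iOS(.v17)],
    dependencies: [
        .package(url: "https://github.com/StanfordSpezi/SpeziLLM", from: "0.1.0")
    ],
    targets: [
        .target(
            name: "MyHealthApp",
            dependencies: [
                .product(name: "SpeziLLM", package: "SpeziLLM"),              // base LLM infrastructure
                .product(name: "SpeziLLMLocal", package: "SpeziLLM"),         // on-device execution
                .product(name: "SpeziLLMLocalDownload", package: "SpeziLLM"), // model download & storage
                .product(name: "SpeziLLMOpenAI", package: "SpeziLLM")         // OpenAI GPT integration
            ]
        )
    ]
)
```

An app that only needs remote execution could depend solely on SpeziLLM and SpeziLLMOpenAI, which is one benefit of the split.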

📚 Documentation

Documentation has been provided via inline DocC comments and code examples.

✅ Testing

Wrote appropriate UI tests that utilize an LLM mock.
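The mock-based testing idea can be sketched as follows: a mock session deterministically streams canned tokens instead of running inference, so UI tests can assert on a known output. The protocol and type names here are illustrative assumptions, not the actual SpeziLLM API.

```swift
// Sketch of an LLM mock for deterministic UI testing.
// LLMSession and MockLLMSession are hypothetical names, not SpeziLLM types.
protocol LLMSession {
    func generate(prompt: String) -> AsyncStream<String>
}

struct MockLLMSession: LLMSession {
    // Canned tokens replace real model inference.
    let cannedTokens = ["Mock ", "Message!"]

    func generate(prompt: String) -> AsyncStream<String> {
        AsyncStream { continuation in
            for token in cannedTokens {
                continuation.yield(token)
            }
            continuation.finish()
        }
    }
}
```

A test can then collect the stream and compare it against the fixed expected string, independent of any model weights or network access.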

📝 Code of Conduct & Contributing Guidelines

By creating this pull request, you agree to follow our Code of Conduct and Contributing Guidelines.

@philippzagar philippzagar added the enhancement New feature or request label Oct 26, 2023
@philippzagar philippzagar self-assigned this Oct 26, 2023
codecov bot commented Oct 26, 2023

Codecov Report

Merging #35 (aa0696a) into main (786dbba) will decrease coverage by 43.29%.
The diff coverage is 13.71%.


@@             Coverage Diff             @@
##             main      #35       +/-   ##
===========================================
- Coverage   67.96%   24.66%   -43.29%     
===========================================
  Files          11       27       +16     
  Lines         571     1225      +654     
===========================================
- Hits          388      302       -86     
- Misses        183      923      +740     
Files Coverage Δ
...peziLLM/Configuration/LLMRunnerConfiguration.swift 100.00% <100.00%> (ø)
Sources/SpeziLLM/Mock/LLMMock.swift 100.00% <100.00%> (ø)
Sources/SpeziLLM/Tasks/LLMTaskIdentifier.swift 100.00% <100.00%> (ø)
Sources/SpeziLLM/Views/LLMChatView.swift 100.00% <100.00%> (ø)
...es/SpeziLLMOpenAI/OpenAIAPIKeyOnboardingStep.swift 100.00% <100.00%> (ø)
...LLMOpenAI/OpenAIModelSelectionOnboardingStep.swift 100.00% <ø> (ø)
Sources/SpeziLLMOpenAI/OpenAIModule.swift 100.00% <ø> (ø)
Sources/SpeziLLM/LLMState+OperationState.swift 80.00% <80.00%> (ø)
...es/SpeziLLM/Helpers/BundleDescription+Bundle.swift 0.00% <0.00%> (ø)
.../SpeziLLM/Tasks/LLMRunnerSetupTaskCollection.swift 69.24% <69.24%> (ø)
... and 17 more

Continue to review full report in Codecov by Sentry.

Legend
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 786dbba...aa0696a. Read the comment docs.

philippzagar (Member, Author) commented Oct 26, 2023

@PSchmiedmayer This PR is now ready for review! 🚀
As discussed yesterday, I kept the demo application within the repository (to help with rapid API changes etc.); however, I opted to set it up as a separate project, kind of an example application for local LLM usage. I think this is the way to go here, as putting all the local LLM example code into the UITests application doesn't make much sense to me. (The additions won't even be tested for now, as there will be lots of changes in the future.)

Also keep in mind that the overall SpeziML API is just a first draft; there are definitely lots of improvements to be made there. I tried to design the API with multiple hosting platforms (local, fog, cloud, ...) in mind, but there is still a lot of work to do.
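The multi-platform design goal mentioned above could be captured by a single protocol with interchangeable execution targets, sketched below. All names (LLMHostingPlatform, LLM, EchoCloudLLM) are illustrative assumptions for this sketch, not the actual SpeziLLM API.

```swift
// Sketch of a hosting-platform abstraction: one protocol, multiple
// execution targets (local, fog, cloud). Names are hypothetical.
enum LLMHostingPlatform {
    case local   // on-device execution
    case fog     // nearby trusted node
    case cloud   // remote service, e.g. OpenAI
}

protocol LLM {
    var platform: LLMHostingPlatform { get }
    func generate(prompt: String) async -> String
}

// Toy cloud-backed conformance that simply echoes the prompt.
struct EchoCloudLLM: LLM {
    let platform = LLMHostingPlatform.cloud
    func generate(prompt: String) async -> String { "echo: \(prompt)" }
}
```

A runner could then schedule any conforming LLM uniformly, with the platform choice reduced to swapping the concrete type.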

@PSchmiedmayer (Member) left a comment:

Thank you for the great additions and first steps here @philippzagar!

Great job with the documentation and structure in this PR, very well structured and already in a high quality state! I only had a few smaller comments here and there that would be great to be addressed before we merge the PR.

Once again: Thank you for all the time and effort that went into the PR. Looking forward to seeing the different elements applied in different Spezi applications, e.g., integrating it into LLM on FHIR and/or HealthGPT to demonstrate the applicability to digital health use cases 🎉

Review comments (now resolved) on:
  • Package.swift
  • CONTRIBUTORS.md
  • Sources/SpeziLocalLLM/Configuration/LLMParameters.swift
  • Sources/SpeziLocalLLM/LLMError.swift
  • Sources/SpeziLocalLLM/LLMLlama+Generation.swift
  • Sources/SpeziLocalLLM/LLMState.swift
@PSchmiedmayer (Member) left a comment:

Thank you for the great additions!

I had a few follow-up comments based on the latest changes and on playing around with the code myself.

This should bring the PR close to being merged. I would suggest that we move some of the suggestions in the review comments into separate issues so we can focus on merging this PR once we have resolved the compiler flag issue.

Review comments (now resolved) on:
  • Package.swift
  • Sources/SpeziLLM/LLMError.swift
  • Sources/SpeziLLM/Tasks/LLMGenerationTask.swift
@PSchmiedmayer (Member) left a comment:

In preparation for our meeting, I took a look at the current diff and added some comments. Let's go through them in our meeting and see how we can bring this PR across the finish line 🎉👍

Review comments (now resolved) on:
  • Package.swift
  • CITATION.cff
  • Sources/SpeziLLMLocal/SpeziLLMLocal.docc/SpeziLLMLocal.md
philippzagar (Member, Author) commented:
@PSchmiedmayer Feel free to give this PR another review! The overall local LLM execution functionality is definitely not in a perfect state yet, but we will address the remaining issues / other optimizations in follow-up PRs! 🚀

Regarding the failing Markdown Link Checker: This is intended, as we will conduct a renaming of the entire repo from SpeziML to SpeziLLM and I already adjusted the respective links in the README and DocC files (as well as newly added documentation).

How should we move forward with the repo renaming? First, rename the repo, then go ahead and merge this PR?

philippzagar (Member, Author) commented Dec 2, 2023

@PSchmiedmayer The CodeCov upload is failing because we renamed the repo from SpeziML to SpeziLLM (it will only work again after the merge into the main branch). However, this shouldn't be too much of an issue, as coverage basically hasn't changed since the last successful CodeCov upload.

@PSchmiedmayer (Member) commented:
@philippzagar Just re-synced CodeCov, it seems to work again: https://github.com/StanfordSpezi/SpeziLLM/actions/runs/7066539482/job/19243527150. The latest builds seem to fail due to a compilation error in the UI Tests. Let me know once those are resolved and I can merge the PR despite the Markdown Check failing 👍

philippzagar (Member, Author) replied:

> @philippzagar Just re-synced CodeCov, it seems to work again: https://github.com/StanfordSpezi/SpeziLLM/actions/runs/7066539482/job/19243527150. The latest builds seem to fail due to a compilation error in the UI Tests. Let me know once those are resolved and I can merge the PR despite the Markdown Check failing 👍

Very weird build error indeed; it only occurs when building for the test target. Will need to take a closer look!

@PSchmiedmayer PSchmiedmayer merged commit 3dd6610 into main Dec 4, 2023
5 of 8 checks passed
@PSchmiedmayer PSchmiedmayer deleted the feat/local-llm-execution branch December 4, 2023 21:24