feat: Separate OpenAI compatible server support for "local server" to keep using OpenAI in parallel #3214
Comments
Hi there, if you're looking for alternative solutions for local servers with local models, you might want to check out Cortex (https://github.com/janhq/cortex) for headless AI operations.
@Van-QA thanks for looking into this. This feature request is not about how to run local models - in our setup the OpenAI-compatible serving is the standardized way to access models. The feature request was really just about having the ability to extend the Jan UI to allow us to run another OpenAI endpoint with a custom URL, so we don't have to manually swap out the URL of the "official" OpenAI model in Jan all the time.
Is your feature request related to a problem? Please describe it
I'm switching between using OpenAI and a local OpenAI-compatible endpoint a lot. Since swapping out the base URL whenever I switch is tedious, I was thinking of using one of the other endpoints, such as https://jan.ai/docs/remote-models/openrouter. But according to the manual at https://jan.ai/docs/remote-models/generic-openai, we should use the OpenAI server.
Describe the solution
Would it be possible to add another server to the configurations panel for one or more OpenAI-compatible endpoints? Ideally this would allow us to give each endpoint a name (so it's easier to tell in the chat what model is used), but even a generic name would be sufficient right now. This way the user can tell in the chats whether they are actually talking to OpenAI or to a local model.
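To make the idea concrete, here is a minimal sketch of what a registry of named, side-by-side OpenAI-compatible endpoints could look like. All names, URLs, and the structure here are hypothetical illustrations of the requested behavior, not Jan's actual configuration format or code:

```python
# Hypothetical sketch: keeping several OpenAI-compatible endpoints
# configured side by side, so no base URL has to be swapped manually.
# The names Endpoint, ENDPOINTS, and resolve are illustrative only.

from dataclasses import dataclass


@dataclass(frozen=True)
class Endpoint:
    name: str      # label shown in the chat UI, so the user knows which server answers
    base_url: str  # OpenAI-compatible API root
    api_key: str   # credential for this endpoint


ENDPOINTS = {
    "openai": Endpoint("OpenAI", "https://api.openai.com/v1", "sk-..."),
    # Assumed local server URL; adjust to your own setup.
    "local": Endpoint("Local Llama", "http://localhost:1337/v1", "none"),
}


def resolve(name: str) -> Endpoint:
    """Look up a configured endpoint by its short name."""
    try:
        return ENDPOINTS[name]
    except KeyError:
        raise ValueError(f"unknown endpoint: {name!r}") from None
```

The chat view could then display `resolve("local").name` next to each conversation, so the user sees at a glance whether a thread is talking to OpenAI or to a local model.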
Note: This might be related to #2840 but felt different enough to open another request
Teachability, documentation, adoption, migration strategy
No response
What is the motivation / use case for changing the behavior?
Reduce the manual steps needed to switch URLs frequently. Increase usability, since the chats will clearly show the user when a conversation is not with a real OpenAI model.