Compatibility Issues with SpeakGPT and Groq llama3 Endpoints #330
The library throws exceptions when known errors are returned by the OpenAI APIs.
Thank you for your prompt response, @aallam. Here are the specifics related to the endpoint and model that are causing the issue:
The error messages are triggered after every API response, even though the responses themselves are correct and as expected, yet the app still shows an error. This seems to be related to how exceptions are handled when using this particular endpoint and model with the library. Could we look into the exception-handling logic for responses from this specific API? It might help to understand why the library treats these correct responses as errors and throws exceptions. Thank you for assisting in improving the library's compatibility with various endpoints.
This may be related, I'm getting this exception while trying to use the completion endpoint with Groq:
Description
There are compatibility issues when using the openai-kotlin library with the SpeakGPT app, specifically with the Groq endpoint. The app functions correctly, but it consistently triggers error messages that appear to be related to how exceptions are handled within the library.
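For context, a setup along these lines reproduces the scenario described: pointing the openai-kotlin client at Groq's OpenAI-compatible endpoint. This is a minimal sketch, assuming the library's `OpenAIConfig`/`OpenAIHost` API; the Groq base URL, the `GROQ_API_KEY` environment variable, and the model id are placeholders taken from the report, not a confirmed configuration.

```kotlin
import com.aallam.openai.api.chat.ChatCompletionRequest
import com.aallam.openai.api.chat.ChatMessage
import com.aallam.openai.api.chat.ChatRole
import com.aallam.openai.api.model.ModelId
import com.aallam.openai.client.OpenAI
import com.aallam.openai.client.OpenAIConfig
import com.aallam.openai.client.OpenAIHost

suspend fun main() {
    // Point the client at Groq's OpenAI-compatible endpoint
    // instead of the default api.openai.com host.
    val config = OpenAIConfig(
        token = System.getenv("GROQ_API_KEY") ?: error("GROQ_API_KEY not set"),
        host = OpenAIHost(baseUrl = "https://api.groq.com/openai/v1/"),
    )
    val openAI = OpenAI(config)

    val request = ChatCompletionRequest(
        model = ModelId("llama3-8b-8192"),
        messages = listOf(ChatMessage(role = ChatRole.User, content = "Hello")),
    )
    // The response arrives correctly, yet the reported behavior is that
    // an error message is still surfaced to the user afterwards.
    val completion = openAI.chatCompletion(request)
    println(completion.choices.first().message.content)
}
```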
See original bug report here for more context.
Steps to Reproduce
Expected Behavior
The library should handle endpoint interactions without triggering unnecessary error messages, ensuring smooth operation across different configurations.
Actual Behavior
Error messages are triggered after every API response, despite the responses being correct and fully processed. This suggests an issue with the exception handling mechanism in the library when used with endpoints other than the default.
It looks like the speak-gpt app adds this error text itself, so it's not an issue with the configuration or the API; it's an issue with the error-handling logic inside the speak-gpt app, which fires when it shouldn't, I think.
Additional Information
Possible Solution
It would be beneficial to review the exception handling logic within the library to ensure it is robust across various endpoints. Alternatively, providing more detailed documentation or configuration options to handle such cases might help.
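One shape such a review could take on the calling side: only surface an error to the user when the library actually throws, rather than unconditionally after each response. This is a sketch under assumptions, using exception types I believe openai-kotlin exposes (`OpenAIAPIException`, `OpenAIHttpException`); the function name and logging are hypothetical, not SpeakGPT's actual code.

```kotlin
import com.aallam.openai.api.chat.ChatCompletionRequest
import com.aallam.openai.api.exception.OpenAIAPIException
import com.aallam.openai.api.exception.OpenAIHttpException
import com.aallam.openai.client.OpenAI

// Hypothetical wrapper: returns the completion text on success, null on failure.
suspend fun requestCompletion(openAI: OpenAI, request: ChatCompletionRequest): String? =
    try {
        // A non-exceptional return means the endpoint answered successfully;
        // no error should be shown to the user on this path.
        openAI.chatCompletion(request).choices.firstOrNull()?.message?.content
    } catch (e: OpenAIAPIException) {
        // The API returned an error payload (bad request, auth, rate limit, ...).
        println("API error: ${e.message}")
        null
    } catch (e: OpenAIHttpException) {
        // Transport-level failure (timeout, connectivity).
        println("HTTP error: ${e.message}")
        null
    }
```

The design point is that error reporting is tied to the catch branches, so a correct response from a non-default endpoint such as Groq can never trigger an error message.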
Links
Thank you for looking into this matter. Your assistance will help improve the usability of the library in diverse applications.