Checked other resources

- I searched the LangChain documentation with the integrated search.
- I used the GitHub search to find a similar question and didn't find it.
- I am sure that this is a bug in LangChain rather than my code.
- The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
Example Code
Say, for example, we have a user-defined model, and we want the same model object to be runnable in different environments; some environments require an additional request header for API-key auth.
Now imagine we receive this model object, and for it to run in a dedicated environment it needs an additional request header.
First attempt: set default_headers on the model. This doesn't work because the OpenAI clients are already initialized at model object init time (code), so any later change to default_headers is never reflected in the OpenAI clients actually used.
model.default_headers={"apikey": "xxx"}
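To make the failure mode concrete, here is a minimal stand-in (these are NOT the real langchain/openai classes, just hypothetical analogues) showing why mutating `default_headers` after construction is a no-op: the client snapshots the headers when the model object is built.

```python
class FakeClient:
    """Stand-in for the underlying OpenAI client."""
    def __init__(self, headers):
        # The client copies the headers at construction time.
        self.default_headers = dict(headers or {})

class FakeModel:
    """Stand-in for a ChatOpenAI-like model object."""
    def __init__(self, default_headers=None):
        self.default_headers = default_headers
        # The client is built eagerly in the constructor, analogous to
        # ChatOpenAI building its OpenAI client at model init time.
        self.client = FakeClient(default_headers)

model = FakeModel()
model.default_headers = {"apikey": "xxx"}  # mutate after init
print(model.client.default_headers)        # → {} : the client never sees it
```

The attribute on the model changes, but the already-constructed client keeps its original (empty) header dict.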
Second attempt: bind the header onto the model. This fails on model.invoke with TypeError: parse() got an unexpected keyword argument 'default_headers'.
model.bind(default_headers={"apikey": "xxx"})
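The reason bind() fails is, plausibly, that bound kwargs are forwarded into the completion call itself rather than into client construction, and that call has no `default_headers` parameter. A hedged stdlib-only sketch of that mechanism (the function below is a hypothetical stand-in, not the real API):

```python
def fake_completion_call(messages, model, temperature=1.0):
    """Stand-in for the underlying completion call; like the real call,
    its signature has no `default_headers` parameter."""
    return "ok"

# bind()-style kwargs get merged into the call, producing a TypeError:
try:
    fake_completion_call(messages=[], model="gpt-4",
                         default_headers={"apikey": "xxx"})
    raised = False
except TypeError as err:
    raised = True
    print(err)  # unexpected keyword argument 'default_headers'
```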
There are probably hacky workarounds, such as updating the internal OpenAI clients directly via model.client._client.default_headers and the like, but that doesn't feel like a robust pattern: _client is a private object and could change without notice. Any idea on how to better support this use case in a robust way?
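One pattern that avoids touching private state is "rebuild, don't mutate": construct a fresh model whose client is initialized with the merged headers, so the headers are present at init time where the client actually reads them. A minimal sketch with hypothetical stand-in classes (with the real ChatOpenAI this would mean calling the constructor again with the merged default_headers):

```python
class Client:
    """Stand-in for the underlying OpenAI client."""
    def __init__(self, headers):
        self.default_headers = dict(headers or {})

class Model:
    """Stand-in for a ChatOpenAI-like model object."""
    def __init__(self, default_headers=None):
        self.default_headers = dict(default_headers or {})
        self.client = Client(self.default_headers)

def with_extra_headers(model, extra):
    """Hypothetical helper: merge old and new headers, then re-run normal
    construction so the client sees them at init time."""
    return Model(default_headers={**model.default_headers, **extra})

base = Model()
patched = with_extra_headers(base, {"apikey": "xxx"})
print(patched.client.default_headers)  # → {'apikey': 'xxx'}
```

The original object stays untouched, and no private attribute is reached into.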
Error Message and Stack Trace (if applicable)
No response
Description
I'm trying to attach an additional header to an LLM model after its initialization, but couldn't find an easy and robust way to do it. See the example code for details.
System Info
System Information
OS: Linux
OS Version: #1 SMP Wed Jul 17 15:10:20 UTC 2024
Python Version: 3.9.13 (main, Aug 23 2022, 09:14:58)
[GCC 10.2.1 20210110]
Labels added by dosubot on Jan 11, 2025: Ɑ: models (Related to LLMs or chat model modules), 🤖:bug (Related to a bug, vulnerability, unexpected error with an existing feature)