-
The same applies to some data where it would be lovely if the API were model-independent. For example, to use or simply log the token usage, I understand that the only way is to look at the _raw_response. For OpenAI there's …
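A minimal sketch of that approach, assuming Instructor attaches the provider completion as _raw_response on the returned model (as mentioned above); the usage field names shown are OpenAI-specific and would differ for Anthropic:

```python
# Sketch: logging token usage via the raw provider response.
# Assumes instructor exposes the provider completion as `_raw_response`
# on the returned model; the `usage` attributes are OpenAI-specific.
import instructor
from openai import OpenAI
from pydantic import BaseModel

class User(BaseModel):
    name: str
    age: int

client = instructor.from_openai(OpenAI())

user = client.chat.completions.create(
    model="gpt-4o-mini",
    response_model=User,
    messages=[{"role": "user", "content": "Jason is 25 years old."}],
)

usage = user._raw_response.usage  # provider-specific shape
print(usage.prompt_tokens, usage.completion_tokens, usage.total_tokens)
```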
-
We've updated this to handle consecutive messages.
-
could have room for some …
-
Instructor does a beautiful job of abstracting model API specifics, and allows you to use OpenAI or Anthropic with an almost identical setup once the client and the create call have been constructed.
However, the APIs sometimes behave differently in ways that break things. For example, you can have a set of messages that works for OpenAI where you alternate the user and assistant roles, and nothing prevents you from sending consecutive messages from the user (for example, when you conditionally add messages to the conversation). Claude, however, doesn't accept this and fails with an error complaining that user and assistant messages must alternate.
We can of course refactor our code to concatenate consecutive "user" messages, which will then work for both models (see the sketch below), but I'd like to open a discussion on whether Instructor would be the right place to abstract the client code away from these kinds of provider-specific behaviors.
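For reference, a minimal sketch of that workaround, assuming plain string contents; the helper name merge_consecutive_roles is mine, not part of Instructor:

```python
# Merge consecutive messages that share a role before sending them,
# so the list satisfies Anthropic's strict user/assistant alternation.
def merge_consecutive_roles(messages: list[dict]) -> list[dict]:
    merged: list[dict] = []
    for msg in messages:
        if merged and merged[-1]["role"] == msg["role"]:
            # Join adjacent same-role messages into a single turn.
            merged[-1]["content"] += "\n" + msg["content"]
        else:
            merged.append({"role": msg["role"], "content": msg["content"]})
    return merged

messages = [
    {"role": "user", "content": "Summarize this report."},
    {"role": "user", "content": "Focus on the financial section."},  # consecutive user turn
]
print(merge_consecutive_roles(messages))
# -> one merged user message, acceptable to both OpenAI and Claude
```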
This is especially relevant when testing the performance of several models with the same code, or when dispatching the same request in parallel to several models to compare responses, where ideally the code that generates the prompts is reused (see the sketch after this paragraph). I would not like to have to scatter "if Claude"-style statements all over my code.
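A hedged sketch of that multi-model case, reusing one set of messages and one response model across providers (shown sequentially for brevity); the model names and the Answer schema are illustrative, and it assumes both Instructor clients accept the unified chat.completions.create call:

```python
# Reuse the same prompt-building code across OpenAI and Anthropic,
# letting instructor normalize the provider-specific client calls.
import instructor
from openai import OpenAI
from anthropic import Anthropic
from pydantic import BaseModel

class Answer(BaseModel):
    summary: str

clients = [
    ("openai", instructor.from_openai(OpenAI()), "gpt-4o-mini"),
    ("anthropic", instructor.from_anthropic(Anthropic()), "claude-3-5-sonnet-20240620"),
]

messages = [{"role": "user", "content": "Summarize: the sky is blue."}]

for name, client, model in clients:
    result = client.chat.completions.create(
        model=model,
        response_model=Answer,
        messages=messages,  # identical prompt code for both providers
        max_tokens=256,     # required by Anthropic's API
    )
    print(name, result.summary)
```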
Thoughts?