-
Hey, for me it worked this way:

```python
from openai import OpenAI
from pydantic import BaseModel, Field
from typing import List
import instructor


class Character(BaseModel):
    name: str
    age: int
    fact: List[str] = Field(..., description="5 facts about the character")


# Patching the client enables `response_model` in the create call
client = instructor.from_openai(
    OpenAI(
        base_url="http://localhost:11434/v1",
        api_key="ollama",  # required, but unused
    ),
    # mode=instructor.Mode.JSON,
)

# create_with_completion returns both the validated model and the raw completion
resp, completion = client.chat.completions.create_with_completion(
    model="llama3",
    messages=[
        {
            "role": "user",
            "content": "Tell me about Harry Potter",
        }
    ],
    response_model=Character,
)
```

I got the code from here: https://python.useinstructor.com/examples/ollama/#ollama
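If it helps anyone: after this call, `resp` is already a validated `Character` instance (and `completion` is the raw response), so a quick sanity check looks like this, assuming the Ollama server is running and `llama3` has been pulled:

```python
print(resp.name, resp.age)
for f in resp.fact:
    print("-", f)
```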
-
@Zasha01 thank you! If I'm understanding correctly, you are running this via Ollama, correct? Not via one of the cloud providers? Sadly, my local machine isn't powerful enough to handle the processing volume I need in a reasonable amount of time. Maybe I can try it on a provider such as Modal, where I set it up myself.
-
@wasauce I think the easiest way is to host it via an OpenAI-compatible server. Have you tried llama-cpp-python? I think getting it to work on Modal should be relatively simple (https://github.com/abetlen/llama-cpp-python).
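Something along these lines should work once the server is up (a rough sketch, untested; the model path is a placeholder and 8000 is the server's default port):

```python
# Install and start the OpenAI-compatible server first, e.g.:
#   pip install "llama-cpp-python[server]"
#   python -m llama_cpp.server --model ./llama-3-8b-instruct.Q4_K_M.gguf
import instructor
from openai import OpenAI

client = instructor.from_openai(
    OpenAI(
        base_url="http://localhost:8000/v1",  # llama-cpp-python's default port
        api_key="not-needed",  # the server doesn't check the key by default
    ),
    mode=instructor.Mode.JSON,
)
```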
-
Is your feature request related to a problem? Please describe.
I am trying to use instructor with one of the Llama 3 8B Instruct models.
I have tried accessing Together AI and https://openrouter.ai/ via instructor, pointing at the Llama 3 8B Instruct models.
Together AI doesn't support JSON mode on Llama 3 8B Instruct, so the examples I found for accessing Together AI don't work. Is there a workaround?
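For reference, this is roughly the setup I've been running against Together (the base URL is from their docs and the model name from their catalog; the exact snippet I copied may have differed slightly):

```python
import instructor
from openai import OpenAI
from pydantic import BaseModel


class Character(BaseModel):
    name: str
    age: int


client = instructor.from_openai(
    OpenAI(
        base_url="https://api.together.xyz/v1",
        api_key="TOGETHER_API_KEY",  # placeholder
    ),
    mode=instructor.Mode.JSON,  # the part that appears unsupported for this model
)

resp = client.chat.completions.create(
    model="meta-llama/Llama-3-8b-chat-hf",  # illustrative model ID
    messages=[{"role": "user", "content": "Tell me about Harry Potter"}],
    response_model=Character,
)
```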
Describe the solution you'd like
To be able to use instructor with Llama 3 8B Instruct.