Update openai to v1 #694

Open · wants to merge 5 commits into `main`
6 changes: 3 additions & 3 deletions README.md
@@ -13,7 +13,7 @@

**🟢 Gorilla is Apache 2.0** With Gorilla being fine-tuned on MPT and Falcon, you can use Gorilla commercially with no obligations! :golf:

-**:rocket: Try Gorilla in 60s** [![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1DEBPsccVLF_aUnmD0FwPeHFrtdC0QIUP?usp=sharing)
+**:rocket: Try Gorilla in 60s** [![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1q7T7g3wmRNEUwsWujLcgjHLK7YNPjQxh?usp=sharing)

:computer: Use [Gorilla in your CLI](https://github.com/gorilla-llm/gorilla-cli) with `pip install gorilla-cli`

@@ -36,12 +36,12 @@
- 🟢 [06/06] Released Commercially usable, Apache 2.0 licensed Gorilla models
- :rocket: [05/30] Provided the [CLI interface](inference/README.md) to chat with Gorilla!
- :rocket: [05/28] Released Torch Hub and TensorFlow Hub Models!
-- :rocket: [05/27] Released the first Gorilla model! [![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1DEBPsccVLF_aUnmD0FwPeHFrtdC0QIUP?usp=sharing) or [:hugs:](https://huggingface.co/gorilla-llm/gorilla-7b-hf-delta-v0)!
+- :rocket: [05/27] Released the first Gorilla model! [![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1q7T7g3wmRNEUwsWujLcgjHLK7YNPjQxh?usp=sharing) or [:hugs:](https://huggingface.co/gorilla-llm/gorilla-7b-hf-delta-v0)!
- :fire: [05/27] We released the APIZoo contribution guide for community API contributions!
- :fire: [05/25] We released the APIBench dataset and the evaluation code of Gorilla!

## Gorilla Gradio
-**Try Gorilla LLM models in [HF Spaces](https://huggingface.co/spaces/gorilla-llm/gorilla-demo/) or [![Gradio Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1ktnVWPJOgqTC9hLW8lJPVZszuIddMy7y?usp=sharing)**
+**Try Gorilla LLM models in [HF Spaces](https://huggingface.co/spaces/gorilla-llm/gorilla-demo/) or [![Gradio Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/18ru6QUVVegJGTTa9TzbLAXVok42KDkWx?usp=sharing)**
![gorilla_webUI_2](https://github.com/TanmayDoesAI/gorilla/assets/85993243/f30645bf-6798-4bd2-ac6e-6943840ae095)


10 changes: 6 additions & 4 deletions eval/get_llm_responses.py
@@ -58,14 +58,16 @@ def get_response(get_response_input, api_key):

    try:
        if "gpt" in model:
-            openai.api_key = api_key
-            responses = openai.ChatCompletion.create(
+            client = openai.OpenAI(
+                api_key=api_key,
+            )
+            responses = client.chat.completions.create(
                model=model,
                messages=question,
-                n=1,
+                n = 1,
                temperature=0,
            )
-            response = responses['choices'][0]['message']['content']
+            response = responses.choices[0].message.content
        elif "claude" in model:
            client = anthropic.Anthropic(api_key=api_key)
            responses = client.completions.create(
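The hunk above is the core of the openai-python v1 migration: module-level configuration and `openai.ChatCompletion.create` give way to an explicit client object, and responses become typed objects instead of dicts. A minimal, self-contained sketch of the new call shape (the model name and message are placeholders, not taken from this PR):

```python
import openai

# v1 style: configuration lives on a client instance, not module globals.
client = openai.OpenAI(api_key="sk-...")  # placeholder key

responses = client.chat.completions.create(
    model="gpt-4",  # placeholder model name
    messages=[{"role": "user", "content": "Hello"}],
    n=1,
    temperature=0,
)

# v1 returns typed objects, so attribute access replaces dict indexing.
response = responses.choices[0].message.content
```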
10 changes: 6 additions & 4 deletions eval/get_llm_responses_retriever.py
@@ -59,14 +59,16 @@ def get_response(get_response_input, api_key):

    try:
        if "gpt" in model:
-            openai.api_key = api_key
-            responses = openai.ChatCompletion.create(
+            client = openai.OpenAI(
+                api_key=api_key,
+            )
+            responses = client.chat.completions.create(
                model=model,
                messages=question,
-                n=1,
+                n = 1,
                temperature=0,
            )
-            response = responses['choices'][0]['message']['content']
+            response = responses.choices[0].message.content
        elif "claude" in model:
            client = anthropic.Anthropic(api_key=api_key)
            responses = client.completions.create(
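One related v1 change this diff does not touch: exception classes moved from `openai.error.*` to the top level of the package, so any `except` clauses guarding these calls may need the same update. A hedged sketch of v1 error handling (the original file's `except` blocks fall outside this hunk, so what they catch is an assumption):

```python
import openai

client = openai.OpenAI(api_key="sk-...")  # placeholder key

try:
    responses = client.chat.completions.create(
        model="gpt-4",  # placeholder model name
        messages=[{"role": "user", "content": "Hello"}],
    )
except openai.RateLimitError:
    # was openai.error.RateLimitError before v1
    raise
except openai.APIError as e:
    # openai.APIError is the v1 base class for API failures
    print(f"OpenAI API error: {e}")
```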
8 changes: 4 additions & 4 deletions eval/retrievers/gpt.py
@@ -36,10 +36,10 @@ def get_embeddings(
    ) -> List[List[float]]:
        assert len(list_of_text) <= 2048, "The number of docs should be <= 2048"
        list_of_text = [text.replace("\n", " ") for text in list_of_text]
-        openai.api_key = os.environ["OPENAI_API_KEY"]
-        data = openai.Embedding.create(input=list_of_text, engine="text-embedding-ada-002").data
-        data = sorted(data, key=lambda x: x["index"])  # maintain the same order as input.
-        return [d["embedding"] for d in data]
+        client = openai.OpenAI()  # reads OPENAI_API_KEY from the environment by default
+        data = client.embeddings.create(input=list_of_text, model="text-embedding-ada-002").data
+        data = sorted(data, key=lambda x: x.index)  # maintain the same order as input.
+        return [d.embedding for d in data]

    def from_documents(self, documents: List):
        contents = [document.page_content for document in documents]
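The embeddings endpoint changes the same way: `engine=` becomes `model=`, and each response item is an object with `.index` and `.embedding` attributes rather than a dict. A minimal sketch of the v1 pattern in isolation (the input strings are illustrative):

```python
import openai

client = openai.OpenAI()  # reads OPENAI_API_KEY from the environment

resp = client.embeddings.create(
    input=["first document", "second document"],  # placeholder inputs
    model="text-embedding-ada-002",
)

# Each item records the index of its input, so sorting restores input order.
data = sorted(resp.data, key=lambda d: d.index)
vectors = [d.embedding for d in data]
print(len(vectors), len(vectors[0]))  # 2 vectors, 1536 dimensions for ada-002
```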
2 changes: 1 addition & 1 deletion inference/README.md
@@ -4,7 +4,7 @@

## Get Started

-You can either run Gorilla through our hosted [![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1DEBPsccVLF_aUnmD0FwPeHFrtdC0QIUP?usp=sharing) or [chat with it using cli](#inference-using-cli). We also provide instructions for [evaluating batched prompts](#optional-batch-inference-on-a-prompt-file). Here, are the instructions to run it locally.
+You can either run Gorilla through our hosted [![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1q7T7g3wmRNEUwsWujLcgjHLK7YNPjQxh?usp=sharing) or [chat with it using the CLI](#inference-using-cli). We also provide instructions for [evaluating batched prompts](#optional-batch-inference-on-a-prompt-file). Here are the instructions to run it locally.

New: We release `gorilla-mpt-7b-hf-v0` and `gorilla-falcon-7b-hf-v0` - two Apache 2.0 licensed models (commercially usable).

37 changes: 21 additions & 16 deletions openfunctions/README.md
@@ -35,7 +35,7 @@ All of our models are hosted on our Huggingface UC Berkeley gorilla-llm org: [go
1. OpenFunctions is compatible with OpenAI Functions

```bash
-!pip install openai==0.28.1
+!pip install openai
```

2. Point to Gorilla hosted servers
@@ -44,14 +44,16 @@
import openai

def get_gorilla_response(prompt="Call me an Uber ride type \"Plus\" in Berkeley at zipcode 94704 in 10 minutes", model="gorilla-openfunctions-v0", functions=[]):
-    openai.api_key = "EMPTY"
-    openai.api_base = "http://luigi.millennium.berkeley.edu:8000/v1"
    try:
-        completion = openai.ChatCompletion.create(
+        client = openai.OpenAI(
+            api_key="EMPTY",
+            base_url="http://luigi.millennium.berkeley.edu:8000/v1",
+        )
+        completion = client.chat.completions.create(
            model="gorilla-openfunctions-v2",
            temperature=0.0,
            messages=[{"role": "user", "content": prompt}],
-            functions=functions,
+            tools=functions,
        )
        return completion.choices[0]
    except Exception as e:
@@ -64,19 +66,22 @@ def get_gorilla_response(prompt="Call me an Uber ride type \"Plus\" in Berkeley
query = "What's the weather like in the two cities of Boston and San Francisco?"
functions = [
    {
-        "name": "get_current_weather",
-        "description": "Get the current weather in a given location",
-        "parameters": {
-            "type": "object",
-            "properties": {
-                "location": {
-                    "type": "string",
-                    "description": "The city and state, e.g. San Francisco, CA",
-                },
-                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
-            },
-            "required": ["location"],
-        },
+        "type": "function",
+        "function": {
+            "name": "get_current_weather",
+            "description": "Get the current weather in a given location",
+            "parameters": {
+                "type": "object",
+                "properties": {
+                    "location": {
+                        "type": "string",
+                        "description": "The city and state, e.g. San Francisco, CA",
+                    },
+                    "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
+                },
+                "required": ["location"],
+            },
+        },
    }
]
get_gorilla_response(query, functions=functions)
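With the v1 client, `completion.choices[0]` is a typed `Choice` object rather than a dict, so downstream code reads it by attribute. A hedged usage sketch (whether the Gorilla endpoint populates `tool_calls` depends on its OpenAI compatibility; that part is an assumption, not in the PR):

```python
choice = get_gorilla_response(query, functions=functions)

# Plain-text replies arrive in message.content; structured calls,
# if the server supports the tools API, arrive in message.tool_calls.
print(choice.message.content)
if choice.message.tool_calls:
    call = choice.message.tool_calls[0]
    print(call.function.name, call.function.arguments)
```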
38 changes: 21 additions & 17 deletions openfunctions/inference_hosted.py
@@ -1,9 +1,6 @@
import openai
import json

-openai.api_key = "EMPTY"
-openai.api_base = "http://luigi.millennium.berkeley.edu:8000/v1"
-
# Example dummy function hard coded to return the same weather
# In production, this could be your backend API or an external API
def get_current_weather(location, unit="fahrenheit"):
@@ -21,26 +18,33 @@ def run_conversation():
    messages = [{"role": "user", "content": "What's the weather like in the two cities of Boston and San Francisco?"}]
    functions = [
        {
-            "name": "get_current_weather",
-            "description": "Get the current weather in a given location",
-            "parameters": {
-                "type": "object",
-                "properties": {
-                    "location": {
-                        "type": "string",
-                        "description": "The city and state, e.g. San Francisco, CA",
-                    },
-                    "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
-                },
-                "required": ["location"],
-            },
+            "type": "function",
+            "function": {
+                "name": "get_current_weather",
+                "description": "Get the current weather in a given location",
+                "parameters": {
+                    "type": "object",
+                    "properties": {
+                        "location": {
+                            "type": "string",
+                            "description": "The city and state, e.g. San Francisco, CA",
+                        },
+                        "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
+                    },
+                    "required": ["location"],
+                },
+            },
        }
    ]
-    completion = openai.ChatCompletion.create(
+    client = openai.OpenAI(
+        api_key="EMPTY",
+        base_url="http://luigi.millennium.berkeley.edu:8000/v1",
+    )
+    completion = client.chat.completions.create(
        model='gorilla-openfunctions-v2',
        messages=messages,
-        functions=functions,
-        function_call="auto",  # auto is default, but we'll be explicit
+        tools=functions,
+        tool_choice="auto",  # auto is default, but we'll be explicit
    )

    print("--------------------")
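After this change the structured output lives in `message.tool_calls` (a list, since the model may emit several calls) instead of the deprecated `message.function_call`. A sketch of dispatching the result, assuming the hosted endpoint returns v1-shaped responses; the dispatch table is illustrative:

```python
import json

message = completion.choices[0].message
available = {"get_current_weather": get_current_weather}  # dispatch table

if message.tool_calls:
    for tool_call in message.tool_calls:
        fn = available[tool_call.function.name]
        args = json.loads(tool_call.function.arguments)  # arguments arrive as a JSON string
        print(fn(**args))
```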
44 changes: 32 additions & 12 deletions openfunctions/inference_local.py
@@ -79,26 +79,46 @@ def format_response(response: str):
query_1: str = "What's the weather like in the two cities of Boston and San Francisco?"
functions_1 = [
    {
-        "name": "get_current_weather",
-        "description": "Get the current weather in a given location",
-        "parameters": {
-            "type": "object",
-            "properties": {
-                "location": {
-                    "type": "string",
-                    "description": "The city and state, e.g. San Francisco, CA",
-                },
-                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
-            },
-            "required": ["location"],
-        },
+        "type": "function",
+        "function": {
+            "name": "get_current_weather",
+            "description": "Get the current weather in a given location",
+            "parameters": {
+                "type": "object",
+                "properties": {
+                    "location": {
+                        "type": "string",
+                        "description": "The city and state, e.g. San Francisco, CA",
+                    },
+                    "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
+                },
+                "required": ["location"],
+            },
+        },
    }
]

# Example usage 2
# This should return an error since the function can't help with the prompt
query_2: str = "What is the freezing point of water at a pressure of 10 kPa?"
-functions_2 = [{"name": "thermodynamics.calculate_boiling_point", "description": "Calculate the boiling point of a given substance at a specific pressure.", "parameters": {"type": "object", "properties": {"substance": {"type": "string", "description": "The substance for which to calculate the boiling point."}, "pressure": {"type": "number", "description": "The pressure at which to calculate the boiling point."}, "unit": {"type": "string", "description": "The unit of the pressure. Default is 'kPa'."}}, "required": ["substance", "pressure"]}}]
+functions_2 = [
+    {
+        "type": "function",
+        "function": {
+            "name": "thermodynamics.calculate_boiling_point",
+            "description": "Calculate the boiling point of a given substance at a specific pressure.",
+            "parameters": {
+                "type": "object",
+                "properties": {
+                    "substance": {"type": "string", "description": "The substance for which to calculate the boiling point."},
+                    "pressure": {"type": "number", "description": "The pressure at which to calculate the boiling point."},
+                    "unit": {"type": "string", "description": "The unit of the pressure. Default is 'kPa'."}
+                },
+                "required": ["substance", "pressure"]
+            }
+        }
+    }
+]

# Generate prompt and obtain model output
prompt_1 = get_prompt(query_1, functions=functions_1)