
Commit

rename model_id to config_name
pan-x-c committed Feb 19, 2024
1 parent 5334330 commit ab89f05
Showing 33 changed files with 156 additions and 168 deletions.
8 changes: 4 additions & 4 deletions README.md
@@ -110,9 +110,9 @@ For OpenAI APIs, you need to prepare a dict of model config with the following fields:

```
{
"model_id": "{model id}", # To identify the model instance
"config_name": "{config name}", # The name to identify the config
"model_type": "openai" | "openai_dall_e" | "openai_embedding",
"model": "{model name, e.g. gpt-4}", # The used model in openai API
"model_name": "{model name, e.g. gpt-4}", # The model in openai API
# Optional
"api_key": "xxx", # The API key for OpenAI API. If not set, env
@@ -128,7 +128,7 @@ For post request APIs, the config contains the following fields.

```
{
"model_id": "{model id}", # To identify the model instance
"config_name": "{config name}", # The name to identify the config
"model_type": "post_api",
"api_url": "https://xxx", # The target url
"headers": { # Required headers
@@ -152,7 +152,7 @@ import agentscope
agentscope.init(model_configs="./model_configs.json")

# Create a dialog agent and a user agent
dialog_agent = DialogAgent(name="assistant", model="gpt-4")
dialog_agent = DialogAgent(name="assistant", model_config_name="your_config_name")
user_agent = UserAgent()
```
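When an agent is constructed, the `model_config_name` it receives must match the `config_name` of one of the loaded model configs. The lookup step can be pictured with a minimal stand-in (an illustration only, not AgentScope's actual internals; the config values are placeholders mirroring the README):

```python
# Illustration only: resolve a model_config_name against loaded configs.
# Config shapes mirror the README examples; the lookup logic is a stand-in.

MODEL_CONFIGS = [
    {"config_name": "your_config_name", "model_type": "openai", "model_name": "gpt-4"},
    {"config_name": "my_post_api", "model_type": "post_api", "api_url": "https://xxx"},
]

def resolve_config(config_name: str, configs: list) -> dict:
    """Return the config whose config_name matches, or raise KeyError."""
    for cfg in configs:
        if cfg["config_name"] == config_name:
            return cfg
    raise KeyError(f"No model config named {config_name!r}")

cfg = resolve_config("your_config_name", MODEL_CONFIGS)
print(cfg["model_name"])  # -> gpt-4
```

If the name passed to the agent does not appear among the loaded configs, the lookup fails loudly, which is the behavior you want after a rename like `model_id` → `config_name`.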

8 changes: 4 additions & 4 deletions docs/sphinx_doc/source/tutorial/103-example.md
@@ -19,9 +19,9 @@ Each API has its specific configuration requirements. For example, to configure

```python
model_config = {
"model_id": "{your_model_id}", # A unique identifier for the model instance
"model_type": "openai", # Choose from "openai", "openai_dall_e", or "openai_embedding"
"model": "{model_name}", # The model identifier used in the OpenAI API, such as "gpt-3.5-turbo", "gpt-4", or "text-embedding-ada-002"
"config_name": "{config_name}", # A unique name for the model config.
"model_type": "openai", # Choose from "openai", "openai_dall_e", or "openai_embedding".
"model_name": "{model_name}", # The model identifier used in the OpenAI API, such as "gpt-3.5-turbo", "gpt-4", or "text-embedding-ada-002".
"api_key": "xxx", # Your OpenAI API key. If unset, the environment variable OPENAI_API_KEY is used.
"organization": "xxx", # Your OpenAI organization ID. If unset, the environment variable OPENAI_ORGANIZATION is used.
}
@@ -52,7 +52,7 @@ from agentscope.agents import DialogAgent, UserAgent
agentscope.init(model_configs="./openai_model_configs.json")

# Create a dialog agent and a user agent
dialogAgent = DialogAgent(name="assistant", model_id="gpt-4")
dialogAgent = DialogAgent(name="assistant", model_config_name="gpt-4")
userAgent = UserAgent()
```
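The `./openai_model_configs.json` passed to `agentscope.init` above is a JSON file holding a list of such config dicts. A sketch of writing one out and reading it back (file handling only; the config values are the tutorial's placeholders, and the temp-file path stands in for the real file name):

```python
import json
import os
import tempfile

# Placeholder config mirroring the tutorial; replace the values before use.
model_config = {
    "config_name": "{config_name}",
    "model_type": "openai",
    "model_name": "{model_name}",
    "api_key": "xxx",
    "organization": "xxx",
}

# agentscope.init(model_configs=...) expects a path to a JSON *list* of configs.
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump([model_config], f, indent=2)
    path = f.name

with open(path) as f:
    loaded = json.load(f)
print(loaded[0]["config_name"])

os.remove(path)  # clean up the temporary file
```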

8 changes: 4 additions & 4 deletions docs/sphinx_doc/source/tutorial/104-usecase.md
@@ -35,9 +35,9 @@ As we discussed in the last tutorial, you need to prepare your model configuration
```json
[
{
"config_name": "gpt-4-temperature-0.0",
"model_type": "openai",
"model_id": "gpt-4-temperature-0.0",
"model": "gpt-4",
"model_name": "gpt-4",
"api_key": "xxx",
"organization": "xxx",
"generate_args": {
@@ -76,13 +76,13 @@ AgentScope provides several out-of-the-box agent implementations and organizes them
"args": {
"name": "Player1",
"sys_prompt": "Act as a player in a werewolf game. You are Player1 and\nthere are totally 6 players, named Player1, Player2, Player3, Player4, Player5 and Player6.\n\nPLAYER ROLES:\nIn werewolf game, players are divided into two werewolves, two villagers, one seer, and one witch. Note only werewolves know who are their teammates.\nWerewolves: They know their teammates' identities and attempt to eliminate a villager each night while trying to remain undetected.\nVillagers: They do not know who the werewolves are and must work together during the day to deduce who the werewolves might be and vote to eliminate them.\nSeer: A villager with the ability to learn the true identity of one player each night. This role is crucial for the villagers to gain information.\nWitch: A character who has a one-time ability to save a player from being eliminated at night (sometimes this is a potion of life) and a one-time ability to eliminate a player at night (a potion of death).\n\nGAME RULE:\nThe game consists of two phases: night phase and day phase. The two phases are repeated until werewolf or villager wins the game.\n1. Night Phase: During the night, the werewolves discuss and vote for a player to eliminate. Special roles also perform their actions at this time (e.g., the Seer chooses a player to learn their role, the witch chooses a decide if save the player).\n2. Day Phase: During the day, all surviving players discuss who they suspect might be a werewolf. No one reveals their role unless it serves a strategic purpose. After the discussion, a vote is taken, and the player with the most votes is \"lynched\" or eliminated from the game.\n\nVICTORY CONDITION:\nFor werewolves, they win the game if the number of werewolves is equal to or greater than the number of remaining villagers.\nFor villagers, they win if they identify and eliminate all of the werewolves in the group.\n\nCONSTRAINTS:\n1. Your response should be in the first person.\n2. This is a conversational game. 
You should respond only based on the conversation history and your strategy.\n\nYou are playing werewolf in this game.\n",
"model_id": "gpt-3.5-turbo",
"model_config_name": "gpt-3.5-turbo",
"use_memory": true
}
}
```
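A config like this can be turned into a live agent by looking up the `class` field in a registry and passing `args` as keyword arguments. The sketch below uses a trivial stand-in class rather than the real `DictDialogAgent` from `agentscope.agents`, and a shortened placeholder prompt:

```python
# Stand-in for DictDialogAgent; the real class lives in agentscope.agents.
class DictDialogAgent:
    def __init__(self, name, sys_prompt, model_config_name, use_memory):
        self.name = name
        self.sys_prompt = sys_prompt
        self.model_config_name = model_config_name
        self.use_memory = use_memory

# Map the "class" string from the config to an actual agent class.
AGENT_CLASSES = {"DictDialogAgent": DictDialogAgent}

agent_config = {
    "class": "DictDialogAgent",
    "args": {
        "name": "Player1",
        "sys_prompt": "Act as a player in a werewolf game.",  # shortened placeholder
        "model_config_name": "gpt-3.5-turbo",
        "use_memory": True,
    },
}

cls = AGENT_CLASSES[agent_config["class"]]
agent = cls(**agent_config["args"])
print(agent.name, agent.model_config_name)  # -> Player1 gpt-3.5-turbo
```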

In this configuration, `Player1` is designated as a `DictDialogAgent`. The parameters include a system prompt (`sys_prompt`) that can guide the agent's behavior, the model (`model`) that determines the type of language model of the agent, and a flag (`use_memory`) indicating whether the agent should remember past interactions.
In this configuration, `Player1` is designated as a `DictDialogAgent`. The parameters include a system prompt (`sys_prompt`) that can guide the agent's behavior, a model config name (`model_config_name`) that specifies which model configuration the agent uses, and a flag (`use_memory`) indicating whether the agent should remember past interactions.

For other players, configurations can be customized based on their roles. Each role may have different prompts, models, or memory settings. You can refer to the JSON file located at `examples/werewolf/configs/agent_configs.json` within the AgentScope examples directory.

5 changes: 2 additions & 3 deletions docs/sphinx_doc/source/tutorial/201-agent.md
@@ -30,9 +30,8 @@ class AgentBase(Operator):
def __init__(
self,
name: str,
config: Optional[dict] = None,
sys_prompt: Optional[str] = None,
model_id: str = None,
model_config_name: str = None,
use_memory: bool = True,
memory_config: Optional[dict] = None,
) -> None:
@@ -109,7 +108,7 @@ from agentscope.agents import DialogAgent
# Configuration for the DialogAgent
dialog_agent_config = {
"name": "ServiceBot",
"model_id": "gpt-3.5", # Specify the model used for dialogue generation
"model_config_name": "gpt-3.5", # Specify the model used for dialogue generation
"sys_prompt": "Act as AI assistant to interact with the others. Try to "
"reponse on one line.\n", # Custom prompt for the agent
# Other configurations specific to the DialogAgent
16 changes: 8 additions & 8 deletions docs/sphinx_doc/source/tutorial/203-model.md
@@ -15,7 +15,7 @@ where the model configs could be a list of dicts:
```json
[
{
"model_id": "gpt-4-temperature-0.0",
"config_name": "gpt-4-temperature-0.0",
"model_type": "openai",
"model": "gpt-4",
"api_key": "xxx",
@@ -25,7 +25,7 @@ where the model configs could be a list of dicts:
}
},
{
"model_id": "dall-e-3-size-1024x1024",
"config_name": "dall-e-3-size-1024x1024",
"model_type": "openai_dall_e",
"model": "dall-e-3",
"api_key": "xxx",
@@ -89,7 +89,7 @@ In AgentScope, you can load the model with the following model configs: `./flask
```json
{
"model_type": "post_api",
"model_id": "flask_llama2-7b-chat",
"config_name": "flask_llama2-7b-chat",
"api_url": "http://127.0.0.1:8000/llm/",
"json_args": {
"max_length": 4096,
@@ -130,7 +130,7 @@ In AgentScope, you can load the model with the following model configs: `flask_m
```json
{
"model_type": "post_api",
"model_id": "flask_llama2-7b-ms",
"config_name": "flask_llama2-7b-ms",
"api_url": "http://127.0.0.1:8000/llm/",
"json_args": {
"max_length": 4096,
@@ -171,7 +171,7 @@ Now you can load the model in AgentScope by the following model config: `fastcha

```json
{
"model_id": "meta-llama/Llama-2-7b-chat-hf",
"config_name": "meta-llama/Llama-2-7b-chat-hf",
"model_type": "openai",
"api_key": "EMPTY",
"client_args": {
@@ -211,7 +211,7 @@ Now you can load the model in AgentScope by the following model config: `vllm_sc

```json
{
"model_id": "meta-llama/Llama-2-7b-chat-hf",
"config_name": "meta-llama/Llama-2-7b-chat-hf",
"model_type": "openai",
"api_key": "EMPTY",
"client_args": {
@@ -230,7 +230,7 @@ Taking `gpt2` in HuggingFace inference API as an example, you can use the follow

```json
{
"model_id": "gpt2",
"config_name": "gpt2",
"model_type": "post_api",
"headers": {
"Authorization": "Bearer {YOUR_API_TOKEN}"
@@ -250,7 +250,7 @@ model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model.eval()
# Do remember to re-implement the `reply` method to tokenize *message*!
agent = YourAgent(name='agent', model_id=model_id, tokenizer=tokenizer)
agent = YourAgent(name='agent', model_config_name=config_name, tokenizer=tokenizer)
```
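The `reply` override the comment above asks for needs to tokenize the incoming message before calling the model, then decode the result. A schematic version, with trivial stand-ins in place of the HuggingFace tokenizer and model loaded above (the class and method shapes here are assumptions for illustration, not AgentScope's exact interface):

```python
# Schematic reply(): tokenize the message, run the model, decode the output.
# ToyTokenizer/ToyModel are stand-ins for the HuggingFace objects.

class ToyTokenizer:
    def encode(self, text):
        return text.split()

    def decode(self, tokens):
        return " ".join(tokens)

class ToyModel:
    def generate(self, tokens):
        return tokens + ["(reply)"]

class YourAgent:
    def __init__(self, name, model_config_name, tokenizer, model):
        self.name = name
        self.model_config_name = model_config_name
        self.tokenizer = tokenizer
        self.model = model

    def reply(self, message: str) -> str:
        tokens = self.tokenizer.encode(message)  # *message* must be tokenized
        output = self.model.generate(tokens)
        return self.tokenizer.decode(output)

agent = YourAgent("agent", "config_name", ToyTokenizer(), ToyModel())
print(agent.reply("hello there"))  # -> hello there (reply)
```

With a real checkpoint, `ToyTokenizer`/`ToyModel` would be the `AutoTokenizer` and `AutoModelForCausalLM` instances from the snippet above.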

[[Return to the top]](#using-different-model-sources-with-model-api)
6 changes: 3 additions & 3 deletions examples/conversation/conversation.py
@@ -9,7 +9,7 @@
model_configs=[
{
"model_type": "openai",
"model_id": "gpt-3.5-turbo",
"config_name": "gpt-3.5-turbo",
"model": "gpt-3.5-turbo",
"api_key": "xxx", # Load from env if not provided
"organization": "xxx", # Load from env if not provided
@@ -19,7 +19,7 @@
},
{
"model_type": "post_api_chat",
"model_id": "my_post_api",
"config_name": "my_post_api",
"api_url": "https://xxx",
"headers": {},
},
@@ -30,7 +30,7 @@
dialog_agent = DialogAgent(
name="Assistant",
sys_prompt="You're a helpful assistant.",
model_id="gpt-3.5-turbo", # replace by your model config name
model_config_name="gpt-3.5-turbo", # replace by your model config name
)
user_agent = UserAgent()

6 changes: 3 additions & 3 deletions examples/distributed/configs/debate_agent_configs.json
@@ -4,7 +4,7 @@
"args": {
"name": "Pro",
"sys_prompt": "Assume the role of a debater who is arguing in favor of the proposition that AGI (Artificial General Intelligence) can be achieved using the GPT model framework. Construct a coherent and persuasive argument, including scientific, technological, and theoretical evidence, to support the statement that GPT models are a viable path to AGI. Highlight the advancements in language understanding, adaptability, and scalability of GPT models as key factors in progressing towards AGI.",
"model_id": "gpt-3.5-turbo",
"model_config_name": "gpt-3.5-turbo",
"use_memory": true
}
},
@@ -13,7 +13,7 @@
"args": {
"name": "Con",
"sys_prompt": "Assume the role of a debater who is arguing against the proposition that AGI can be achieved using the GPT model framework. Construct a coherent and persuasive argument, including scientific, technological, and theoretical evidence, to support the statement that GPT models, while impressive, are insufficient for reaching AGI. Discuss the limitations of GPT models such as lack of understanding, consciousness, ethical reasoning, and general problem-solving abilities that are essential for true AGI.",
"model_id": "gpt-3.5-turbo",
"model_config_name": "gpt-3.5-turbo",
"use_memory": true
}
},
@@ -22,7 +22,7 @@
"args": {
"name": "Judge",
"sys_prompt": "Assume the role of an impartial judge in a debate where the affirmative side argues that AGI can be achieved using the GPT model framework, and the negative side contests this. Listen to both sides' arguments and provide an analytical judgment on which side presented a more compelling and reasonable case. Consider the strength of the evidence, the persuasiveness of the reasoning, and the overall coherence of the arguments presented by each side.",
"model_id": "gpt-3.5-turbo",
"model_config_name": "gpt-3.5-turbo",
"use_memory": true
}
}
8 changes: 4 additions & 4 deletions examples/distributed/configs/model_configs.json
@@ -1,18 +1,18 @@
[
{
"model_id": "gpt-3.5-turbo",
"config_name": "gpt-3.5-turbo",
"model_type": "openai",
"model": "gpt-3.5-turbo",
"model_name": "gpt-3.5-turbo",
"api_key": "xxx",
"organization": "xxx",
"generate_args": {
"temperature": 0.0
}
},
{
"model_id": "gpt-4",
"config_name": "gpt-4",
"model_type": "openai",
"model": "gpt-4",
"model_name": "gpt-4",
"api_key": "xxx",
"organization": "xxx",
"generate_args": {
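After a rename like `model_id` → `config_name`, a quick consistency check over a config file helps catch entries that still use the old key or collide on names. A stdlib-only sketch (the required field names are taken from this diff; the check itself is our addition, not part of AgentScope):

```python
# Validate model configs: every entry needs "config_name" (not the old
# "model_id"), and config_name values must be unique across the list.

def check_configs(configs):
    errors = []
    seen = set()
    for i, cfg in enumerate(configs):
        if "model_id" in cfg:
            errors.append(f"config {i}: uses old key 'model_id'")
        if "config_name" not in cfg:
            errors.append(f"config {i}: missing 'config_name'")
        else:
            name = cfg["config_name"]
            if name in seen:
                errors.append(f"config {i}: duplicate config_name {name!r}")
            seen.add(name)
    return errors

configs = [
    {"config_name": "gpt-3.5-turbo", "model_type": "openai", "model_name": "gpt-3.5-turbo"},
    {"config_name": "gpt-4", "model_type": "openai", "model_name": "gpt-4"},
]
print(check_configs(configs))  # -> []
```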
4 changes: 2 additions & 2 deletions examples/distributed/distributed_dialog.py
@@ -41,7 +41,7 @@ def setup_assistant_server(assistant_host: str, assistant_port: int) -> None:
agent_kwargs={
"name": "Assitant",
"sys_prompt": "You are a helpful assistant.",
"model_id": "gpt-3.5-turbo",
"model_config_name": "gpt-3.5-turbo",
"use_memory": True,
},
host=assistant_host,
@@ -59,7 +59,7 @@ def run_main_process(assistant_host: str, assistant_port: int) -> None:
assistant_agent = DialogAgent(
name="Assistant",
sys_prompt="You are a helpful assistant.",
model_id="gpt-3.5-turbo",
model_config_name="gpt-3.5-turbo",
use_memory=True,
).to_dist(
host=assistant_host,
2 changes: 1 addition & 1 deletion examples/werewolf/README.md
@@ -41,7 +41,7 @@ is as follows
"args": {
"name": "Player1",
"sys_prompt": "Act as a player in a werewolf game. You are Player1 and\nthere are totally 6 players, named Player1, Player2, Player3, Player4, Player5 and Player6.\n\nPLAYER ROLES:\nIn werewolf game, players are divided into two werewolves, two villagers, one seer and one witch. Note only werewolves know who are their teammates.\nWerewolves: They know their teammates' identities and attempt to eliminate a villager each night while trying to remain undetected.\nVillagers: They do not know who the werewolves are and must work together during the day to deduce who the werewolves might be and vote to eliminate them.\nSeer: A villager with the ability to learn the true identity of one player each night. This role is crucial for the villagers to gain information.\nWitch: A character who has a one-time ability to save a player from being eliminated at night (sometimes this is a potion of life) and a one-time ability to eliminate a player at night (a potion of death).\n\nGAME RULE:\nThe game is consisted of two phases: night phase and day phase. The two phases are repeated until werewolf or villager win the game.\n1. Night Phase: During the night, the werewolves discuss and vote for a player to eliminate. Special roles also perform their actions at this time (e.g., the Seer chooses a player to learn their role, the witch chooses a decide if save the player).\n2. Day Phase: During the day, all surviving players discuss who they suspect might be a werewolf. No one reveals their role unless it serves a strategic purpose. After the discussion, a vote is taken, and the player with the most votes is \"lynched\" or eliminated from the game.\n\nVICTORY CONDITION:\nFor werewolves, they win the game if the number of werewolves is equal to or greater than the number of remaining villagers.\nFor villagers, they win if they identify and eliminate all of the werewolves in the group.\n\nCONSTRAINTS:\n1. Your response should be in the first person.\n2. This is a conversational game. 
You should response only based on the conversation history and your strategy.\n\nYou are playing werewolf in this game.\n",
"model_id": "gpt-3.5-turbo",
"model_config_name": "gpt-3.5-turbo",
"use_memory": true
}
}