modelscope_agent_servers/README.md
```Shell
cd modelscope-agent

# start the assistant server
sh scripts/run_assistant_server.sh

# start the assistant server with specified backend
sh scripts/run_assistant_server.sh dashscope
```
### Use case
#### Chat
We provide compatibility with part of the OpenAI API `chat/completions`, in particular function calling. Developers can use the `OpenAI` SDK with the specified local URL. Currently the supported model servers include `dashscope`, `openai` and `ollama`.
Here is a code snippet using the `OpenAI` SDK with the `dashscope` model server:
```Python
from openai import OpenAI

api_base = "http://localhost:31512/v1/"
model = 'qwen-max'

tools = [{
    "type": "function",
    "function": {
        "name": "amap_weather",
        "description": "amap weather tool",
        "parameters": [{
            "name": "location",
            "type": "string",
            "description": "城市/区具体名称,如`北京市海淀区`请描述为`海淀区`",
            "required": True
        }]
    }
}]

tool_choice = 'auto'

# point the OpenAI SDK at the local assistant server
client = OpenAI(
    api_key="YOUR_DASHSCOPE_API_KEY",
    base_url=api_base,
)

chat_completion = client.chat.completions.create(
    messages=[{
        "role": "user",
        "content": "海淀区天气是什么?"
    }],
    model=model,
    tools=tools,
    tool_choice=tool_choice
)
```
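If the server mirrors the OpenAI response format (an assumption here, based on the compatibility note above), the tool call proposed by the model can be read back through the same SDK objects. A minimal sketch:

```Python
# Sketch only: assumes the local server returns an OpenAI-style
# chat completion with `tool_calls` populated for function calls.
choice = chat_completion.choices[0]
if choice.message.tool_calls:
    tool_call = choice.message.tool_calls[0]
    print(tool_call.function.name)       # e.g. "amap_weather"
    print(tool_call.function.arguments)  # JSON string of tool arguments
else:
    print(choice.message.content)
```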
You can also use `curl` to request this API.
```Shell
curl -X POST 'http://localhost:31512/v1/chat/completions' \
-H 'Content-Type: application/json' \
-H "Authorization: Bearer $DASHSCOPE_API_KEY" \
-d '{
    "tools": [{
        "type": "function",
        "function": {
            "name": "amap_weather",
            "description": "amap weather tool",
            "parameters": [{
                "name": "location",
                "type": "string",
                "description": "城市/区具体名称,如`北京市海淀区`请描述为`海淀区`",
                "required": true
            }]
        }
    }],
    "tool_choice": "auto",
    "model": "qwen-max",
    "messages": [
        {"content": "海淀区天气", "role": "user"}
    ]
}'
```
With the above examples, the output should look like this:

To interact with the chat API, you should construct an object like `AgentRequest` on the client side, and then use the `requests` library to send it as the request body.
#### Knowledge retrieval
In the `assistants/lite` API, to enable knowledge retrieval, you'll need to include `use_knowledge` and `files` in your configuration settings.
- `use_knowledge`: Specifies whether knowledge retrieval should be activated.
- `files`: the file(s) you wish to use during the conversation. By default, all previously uploaded files will be used.
```Shell
curl -X POST 'http://localhost:31512/v1/chat/completions' \
-H 'Content-Type: application/json' \
-d '{
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "amap_weather",
                "description": "amap weather tool",
                "parameters": [{
                    "name": "location",
                    "type": "string",
                    "description": "城市/区具体名称,如`北京市海淀区`请描述为`海淀区`",
                    "required": true
                }]
            }
        }],
    "llm_config": {
        "model": "qwen-max",
        "model_server": "dashscope",
        "api_key": "YOUR DASHSCOPE API KEY"
    },
    "messages": [
        {"content": "高德天气api申请", "role": "user"}
    ],
    "uuid_str": "test",
    "stream": false,
    "use_knowledge": true,
    "files": ["QA.pdf"]
}'
```
With the above examples, the output should look like this:

```JSON
{
    ...
    "choices": [{
        "message": {
            "content": "Information based on knowledge retrieval."
        },
        "finish_reason": "stop"
    }]
}
```
#### Assistant
Like the `v1/chat/completions` API, you should construct a `ChatRequest` object when using `v1/assistants/lite`. Here is an example using the Python `requests` library.
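A minimal sketch of such a call, assuming the request body mirrors the fields used in the curl examples above (`llm_config`, `messages`, `uuid_str`, `stream`, `use_knowledge`, `files`) and that the endpoint lives on the same local server; the exact `ChatRequest` schema may differ:

```Python
import requests

url = "http://localhost:31512/v1/assistants/lite"

# Hypothetical request body, modeled on the curl examples above.
request_body = {
    "llm_config": {
        "model": "qwen-max",
        "model_server": "dashscope",
        "api_key": "YOUR DASHSCOPE API KEY"
    },
    "messages": [
        {"content": "海淀区天气是什么?", "role": "user"}
    ],
    "uuid_str": "test",
    "stream": False,
    "use_knowledge": True,
    "files": ["QA.pdf"]
}

response = requests.post(url, json=request_body)
print(response.json())
```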