
Commit 781ab5a

chore: model template for glm-4.5

Add a model template for GLM-4.5 and the corresponding company-authorized icon.

1 parent 08d1f6b

2 files changed: +170 additions, -0 deletions
Lines changed: 170 additions & 0 deletions

@@ -0,0 +1,170 @@

id: 10002
name: GLM-4.5
icon_uri: default_icon/z_ai.png
icon_url: ""
description:
  zh: GLM-4.5 系列模型是专为智能体设计的基座模型。GLM-4.5 拥有 3550 亿总参数与 320 亿激活参数,而 GLM-4.5-Air 采用更紧凑的设计,总参数达 1060 亿,激活参数为 120 亿。该系列模型统一了推理、编程与智能体能力,可满足智能体应用的复杂需求。
  en: The GLM-4.5 series models are foundation models designed for intelligent agents. GLM-4.5 has 355 billion total parameters with 32 billion active parameters, while GLM-4.5-Air adopts a more compact design with 106 billion total parameters and 12 billion active parameters. GLM-4.5 models unify reasoning, coding, and intelligent agent capabilities to meet the complex demands of intelligent agent applications.
default_parameters:
  - name: temperature
    label:
      zh: 生成随机性
      en: Temperature
    desc:
      zh: '- **temperature**: 调高温度会使得模型的输出更多样性和创新性,反之,降低温度会使输出内容更加遵循指令要求但减少多样性。建议不要与“Top p”同时调整。'
      en: '**Temperature**:\n\n- When you increase this value, the model outputs more diverse and innovative content; when you decrease it, the model outputs less diverse content that strictly follows the given instructions.\n- It is recommended not to adjust this value with \"Top p\" at the same time.'
    type: float
    min: "0"
    max: "1"
    default_val:
      balance: "0.8"
      creative: "1"
      default_val: "1.0"
      precise: "0.3"
    precision: 1
    options: []
    style:
      widget: slider
      label:
        zh: 生成多样性
        en: Generation diversity
  - name: max_tokens
    label:
      zh: 最大回复长度
      en: Response max length
    desc:
      zh: 控制模型输出的 Tokens 长度上限。通常 100 Tokens 约等于 150 个中文汉字。
      en: You can specify the maximum length of the tokens output through this value. Typically, 100 tokens are approximately equal to 150 Chinese characters.
    type: int
    min: "1"
    max: "4096"
    default_val:
      default_val: "4096"
    options: []
    style:
      widget: slider
      label:
        zh: 输入及输出设置
        en: Input and output settings
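The `default_val` map above carries per-preset defaults (`balance`, `creative`, `precise`) alongside a fallback `default_val`. As a minimal sketch of how a client could resolve the effective default for a chosen generation style (the resolver function and fallback order are assumptions for illustration, not part of the template):

```python
# Hypothetical resolver for the per-preset defaults shown above.
# The template stores all values as strings, so we cast by the
# parameter's declared type before returning.

def resolve_default(param: dict, preset: str):
    """Return the default for `preset`, falling back to `default_val`."""
    defaults = param["default_val"]
    raw = defaults.get(preset, defaults["default_val"])
    return float(raw) if param["type"] == "float" else int(raw)

# The temperature parameter as declared in the template above.
temperature = {
    "name": "temperature",
    "type": "float",
    "default_val": {
        "balance": "0.8",
        "creative": "1",
        "default_val": "1.0",
        "precise": "0.3",
    },
}

print(resolve_default(temperature, "precise"))  # 0.3
print(resolve_default(temperature, "unknown"))  # falls back to 1.0
```

Note that a parameter like `max_tokens` above defines only the fallback key, so every preset resolves to the same value.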
  - name: top_p
    label:
      zh: Top P
      en: Top P
    desc:
      zh: '- **Top p 为累计概率**: 模型在生成输出时会从概率最高的词汇开始选择,直到这些词汇的总概率累积达到 Top p 值。这样可以限制模型只选择这些高概率的词汇,从而控制输出内容的多样性。建议不要与“生成随机性”同时调整。'
      en: '**Top P**:\n\n- An alternative to sampling with temperature, where only tokens within the top p probability mass are considered. For example, 0.1 means only the top 10% probability mass tokens are considered.\n- We recommend altering this or temperature, but not both.'
    type: float
    min: "0"
    max: "1"
    default_val:
      default_val: "0.7"
    precision: 2
    options: []
    style:
      widget: slider
      label:
        zh: 生成多样性
        en: Generation diversity
  - name: frequency_penalty
    label:
      zh: 重复语句惩罚
      en: Frequency penalty
    desc:
      zh: '- **frequency penalty**: 当该值为正时,会阻止模型频繁使用相同的词汇和短语,从而增加输出内容的多样性。'
      en: '**Frequency Penalty**: When positive, it discourages the model from repeating the same words and phrases, thereby increasing the diversity of the output.'
    type: float
    min: "-2"
    max: "2"
    default_val:
      default_val: "0"
    precision: 2
    options: []
    style:
      widget: slider
      label:
        zh: 生成多样性
        en: Generation diversity
  - name: presence_penalty
    label:
      zh: 重复主题惩罚
      en: Presence penalty
    desc:
      zh: '- **presence penalty**: 当该值为正时,会阻止模型频繁讨论相同的主题,从而增加输出内容的多样性。'
      en: '**Presence Penalty**: When positive, it prevents the model from discussing the same topics repeatedly, thereby increasing the diversity of the output.'
    type: float
    min: "-2"
    max: "2"
    default_val:
      default_val: "0"
    precision: 2
    options: []
    style:
      widget: slider
      label:
        zh: 生成多样性
        en: Generation diversity
  - name: response_format
    label:
      zh: 输出格式
      en: Response format
    desc:
      zh: '- **文本**: 使用普通文本格式回复\n- **Markdown**: 将引导模型使用 Markdown 格式输出回复\n- **JSON**: 将引导模型使用 JSON 格式输出'
      en: '**Response Format**:\n\n- **Text**: Replies in plain text format\n- **Markdown**: Uses Markdown format for replies\n- **JSON**: Uses JSON format for replies'
    type: int
    min: ""
    max: ""
    default_val:
      default_val: "0"
    options:
      - label: Text
        value: "0"
      - label: Markdown
        value: "1"
      - label: JSON
        value: "2"
    style:
      widget: radio_buttons
      label:
        zh: 输入及输出设置
        en: Input and output settings
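The `response_format` parameter stores its options as string-encoded integers (Text = 0, Markdown = 1, JSON = 2). A sketch of how a caller might translate the selected option into the body of an OpenAI-style chat-completions request — the mapping and helper below are assumptions for illustration; in particular, whether the backend honors a server-side JSON mode depends on the model's capability flags:

```python
# Hypothetical translation from the template's option values to an
# OpenAI-style request body. Text and Markdown are typically steered
# via prompting, so only the JSON option sets an explicit field here.

OPTION_TO_FORMAT = {
    0: None,                     # Text: no response_format field sent
    1: None,                     # Markdown: steered via prompt, not an API field
    2: {"type": "json_object"},  # JSON: OpenAI-style JSON mode, if supported
}

def apply_response_format(body: dict, option: int) -> dict:
    """Attach a response_format field to `body` when the option needs one."""
    fmt = OPTION_TO_FORMAT[option]
    if fmt is not None:
        body["response_format"] = fmt
    return body

body = apply_response_format({"model": "glm-4.5"}, 2)
print(body["response_format"])  # {'type': 'json_object'}
```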
meta:
  name: glm-4.5
  protocol: openai
  capability:
    function_call: true
    input_modal:
      - text
    input_tokens: 128000
    json_mode: false
    max_tokens: 128000
    output_modal:
      - text
    output_tokens: 16384
    prefix_caching: true
    reasoning: true
    prefill_response: false
  conn_config:
    base_url: "https://open.bigmodel.cn/api/paas/v4"
    api_key: ""
    timeout: 0s
    model: "glm-4.5"
    temperature: 0.7
    frequency_penalty: 0
    presence_penalty: 0
    max_tokens: 4096
    top_p: 1
    top_k: 0
    stop: []
    openai:
      by_azure: false
      api_version: ""
    response_format:
      type: text
      jsonschema: null
    claude: null
    ark: null
    deepseek: null
    qwen: null
    gemini: null
    custom: {}
  status: 0
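The `conn_config` block supplies the connection defaults for the `openai` protocol. As a minimal sketch of how those defaults could be assembled into a chat-completions payload — the helper and the choice of which fields to forward are assumptions; a real client would also attach `api_key` and `base_url` to the HTTP transport rather than the body:

```python
# Hypothetical assembly of an OpenAI-style chat-completions payload from
# the conn_config defaults above. An empty stop list is treated as
# "unset" and omitted, mirroring common client behavior.

conn_config = {
    "base_url": "https://open.bigmodel.cn/api/paas/v4",
    "model": "glm-4.5",
    "temperature": 0.7,
    "frequency_penalty": 0,
    "presence_penalty": 0,
    "max_tokens": 4096,
    "top_p": 1,
    "top_k": 0,
    "stop": [],
}

SAMPLING_KEYS = ("temperature", "top_p", "frequency_penalty",
                 "presence_penalty", "max_tokens")

def build_payload(cfg: dict, messages: list) -> dict:
    """Build a request body from the template's connection defaults."""
    payload = {"model": cfg["model"], "messages": messages}
    for key in SAMPLING_KEYS:
        payload[key] = cfg[key]
    if cfg["stop"]:
        payload["stop"] = cfg["stop"]
    return payload

payload = build_payload(conn_config, [{"role": "user", "content": "Hello"}])
print(payload["model"], payload["max_tokens"])  # glm-4.5 4096
```

Note the connection default of 4096 output tokens sits well under the capability ceiling of 16384 `output_tokens`, so the slider's `max: "4096"` is a UI bound, not a model limit.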
The second changed file is a 279 KB image (preview not rendered here).
