Authorizations
Use the following format for authentication: Bearer <your api key>
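As a minimal sketch, the header can be assembled like this; the SILICONFLOW_API_KEY environment variable name is a hypothetical choice, not part of the API:

```python
import os

def auth_headers(api_key: str) -> dict:
    # The API key travels as a Bearer token in the Authorization header.
    return {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }

# Reading the key from the environment keeps it out of source code.
headers = auth_headers(os.environ.get("SILICONFLOW_API_KEY", ""))
```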
Body
- LLM
- VLM
Corresponding model name. To improve service quality, we make periodic changes to the models provided by this service, including but not limited to taking models online or offline and adjusting model service capabilities. Where feasible, we will notify you of such changes through announcements, message pushes, or other appropriate means.
deepseek-ai/DeepSeek-V3.2-Exp, Pro/deepseek-ai/DeepSeek-V3.2-Exp, inclusionAI/Ling-1T, zai-org/GLM-4.6, moonshotai/Kimi-K2-Instruct-0905, Pro/deepseek-ai/DeepSeek-V3.1-Terminus, Qwen/Qwen3-Next-80B-A3B-Instruct, Qwen/Qwen3-Next-80B-A3B-Thinking, inclusionAI/Ring-flash-2.0, inclusionAI/Ling-flash-2.0, inclusionAI/Ling-mini-2.0, ByteDance-Seed/Seed-OSS-36B-Instruct, deepseek-ai/DeepSeek-V3.1, Pro/deepseek-ai/DeepSeek-V3.1, stepfun-ai/step3, Qwen/Qwen3-Coder-30B-A3B-Instruct, Qwen/Qwen3-Coder-480B-A35B-Instruct, Qwen/Qwen3-30B-A3B-Thinking-2507, Qwen/Qwen3-30B-A3B-Instruct-2507, Qwen/Qwen3-235B-A22B-Thinking-2507, Qwen/Qwen3-235B-A22B-Instruct-2507, zai-org/GLM-4.5-Air, zai-org/GLM-4.5, baidu/ERNIE-4.5-300B-A47B, ascend-tribe/pangu-pro-moe, tencent/Hunyuan-A13B-Instruct, MiniMaxAI/MiniMax-M1-80k, Tongyi-Zhiwen/QwenLong-L1-32B, Qwen/Qwen3-30B-A3B, Qwen/Qwen3-32B, Qwen/Qwen3-14B, Qwen/Qwen3-8B, Qwen/Qwen3-235B-A22B, THUDM/GLM-Z1-32B-0414, THUDM/GLM-4-32B-0414, THUDM/GLM-Z1-Rumination-32B-0414, THUDM/GLM-4-9B-0414, Qwen/QwQ-32B, Pro/deepseek-ai/DeepSeek-R1, Pro/deepseek-ai/DeepSeek-V3, deepseek-ai/DeepSeek-R1, deepseek-ai/DeepSeek-V3, deepseek-ai/DeepSeek-R1-0528-Qwen3-8B, deepseek-ai/DeepSeek-R1-Distill-Qwen-32B, deepseek-ai/DeepSeek-R1-Distill-Qwen-14B, deepseek-ai/DeepSeek-R1-Distill-Qwen-7B, Pro/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B, deepseek-ai/DeepSeek-V2.5, Qwen/Qwen2.5-72B-Instruct-128K, Qwen/Qwen2.5-72B-Instruct, Qwen/Qwen2.5-32B-Instruct, Qwen/Qwen2.5-14B-Instruct, Qwen/Qwen2.5-7B-Instruct, Qwen/Qwen2.5-Coder-32B-Instruct, Qwen/Qwen2.5-Coder-7B-Instruct, Qwen/Qwen2-7B-Instruct, THUDM/glm-4-9b-chat, internlm/internlm2_5-7b-chat, Pro/Qwen/Qwen2.5-7B-Instruct, Pro/Qwen/Qwen2-7B-Instruct, Pro/THUDM/glm-4-9b-chat
Example: "Qwen/QwQ-32B"
messages
A list of messages comprising the conversation so far.
1 - 10 elements

stream
If set, tokens are returned as Server-Sent Events as they become available. The stream terminates with data: [DONE].
Default: false
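One way to consume the stream is sketched below: read Server-Sent Events line by line until the data: [DONE] sentinel. The endpoint URL and the delta chunk shape follow the OpenAI-compatible convention and are assumptions, not taken from this page:

```python
import json
import urllib.request

API_URL = "https://api.siliconflow.cn/v1/chat/completions"  # assumed endpoint

def parse_sse_data(line: str):
    """Return the payload of an SSE `data:` line, or None for any other line."""
    line = line.strip()
    if not line.startswith("data:"):
        return None
    return line[len("data:"):].strip()

def stream_chat(api_key: str, model: str, messages: list):
    """Yield content deltas until the stream terminates with `data: [DONE]`."""
    body = json.dumps({"model": model, "messages": messages, "stream": True})
    req = urllib.request.Request(
        API_URL,
        data=body.encode("utf-8"),
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        for raw in resp:
            data = parse_sse_data(raw.decode("utf-8"))
            if data is None:
                continue
            if data == "[DONE]":  # sentinel marking the end of the stream
                break
            chunk = json.loads(data)
            delta = chunk["choices"][0].get("delta", {}).get("content")
            if delta:
                yield delta
```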
max_tokens
The maximum number of tokens to generate. Ensure that input tokens + max_tokens do not exceed the model's context window. Because some services are still being updated, avoid setting max_tokens to the window's upper bound; reserve roughly 10k tokens as a buffer for input and system overhead. See Models (https://cloud.siliconflow.cn/models) for details.
Default: 4096
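The sizing advice above can be captured in a hypothetical helper that reserves the suggested ~10k-token buffer:

```python
def safe_max_tokens(context_window: int, prompt_tokens: int,
                    buffer: int = 10_000) -> int:
    """Pick a max_tokens value that keeps prompt + completion inside the
    context window, reserving `buffer` tokens for input and system overhead."""
    return max(1, context_window - prompt_tokens - buffer)

# A 131,072-token window with a 2,000-token prompt leaves 119,072 tokens
# for the completion after the 10k buffer is set aside.
```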
enable_thinking
Switches between thinking and non-thinking modes. Defaults to true. This field supports the following models:
- zai-org/GLM-4.6
- Qwen/Qwen3-8B
- Qwen/Qwen3-14B
- Qwen/Qwen3-32B
- Qwen/Qwen3-30B-A3B
- Qwen/Qwen3-235B-A22B
- tencent/Hunyuan-A13B-Instruct
- zai-org/GLM-4.5V
- deepseek-ai/DeepSeek-V3.1
- Pro/deepseek-ai/DeepSeek-V3.1

To use the function call feature with deepseek-ai/DeepSeek-V3.1 or Pro/deepseek-ai/DeepSeek-V3.1, you must set enable_thinking to false.
Example: false
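The DeepSeek-V3.1 restriction above can be enforced in client code; build_payload is a hypothetical helper, not part of the API:

```python
# Models that require enable_thinking=false when tools are supplied.
_THINKING_INCOMPATIBLE_WITH_TOOLS = (
    "deepseek-ai/DeepSeek-V3.1",
    "Pro/deepseek-ai/DeepSeek-V3.1",
)

def build_payload(model: str, messages: list, tools=None,
                  enable_thinking: bool = True) -> dict:
    """Assemble a request body, forcing enable_thinking off for models
    that do not support thinking mode together with function calling."""
    if tools and model in _THINKING_INCOMPATIBLE_WITH_TOOLS:
        enable_thinking = False
    body = {"model": model, "messages": messages,
            "enable_thinking": enable_thinking}
    if tools:
        body["tools"] = tools
    return body
```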
thinking_budget
Maximum number of tokens for chain-of-thought output. This field applies to all reasoning models.
128 <= x <= 32768
Default: 4096
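A hypothetical guard that keeps the requested value inside the documented range:

```python
def clamp_thinking_budget(requested: int) -> int:
    # Valid range per the docs: 128 <= thinking_budget <= 32768 (default 4096).
    return min(max(requested, 128), 32768)
```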
min_p
Dynamic filtering threshold that adapts based on token probabilities. This field only applies to Qwen3 models.
0 <= x <= 1
Default: 0.05
stop
Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence.
Default: null
temperature
Determines the degree of randomness in the response; lower values produce more deterministic output, higher values more varied output.
Default: 0.7
top_p
The top_p (nucleus sampling) parameter dynamically adjusts the number of candidate tokens considered for each prediction based on their cumulative probability.
Default: 0.7

top_k
Default: 50

frequency_penalty
Default: 0.5
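The sampling controls can be combined in one request body. In the sketch below, the top_k and frequency_penalty field names follow the OpenAI-compatible convention and are assumptions; the values mirror the defaults listed in this section:

```python
sampling = {
    "temperature": 0.7,        # degree of randomness in the response
    "top_p": 0.7,              # nucleus-sampling cumulative-probability cutoff
    "top_k": 50,               # consider only the 50 most likely tokens
    "frequency_penalty": 0.5,  # discourage verbatim repetition
}

payload = {
    "model": "Qwen/QwQ-32B",
    "messages": [{"role": "user", "content": "Hello"}],
    **sampling,
}
```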
n
Number of generations to return.
Default: 1
response_format
An object specifying the format that the model must output.
tools
A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A maximum of 128 functions is supported.
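A sketch of a tools array with a single entry; the get_weather function is a hypothetical example, and the parameters block follows the OpenAI-compatible function-calling convention (a JSON Schema describing the function's arguments):

```python
MAX_TOOLS = 128  # the API accepts at most 128 function definitions

tools = [
    {
        "type": "function",  # currently the only supported tool type
        "function": {
            "name": "get_weather",  # hypothetical example function
            "description": "Get the current weather for a city.",
            "parameters": {  # JSON Schema for the function's arguments
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

assert len(tools) <= MAX_TOOLS
```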