Creates a model response for the given chat conversation.
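As a quick orientation, a minimal request body for this endpoint can be sketched as below. The endpoint URL and header shapes are assumptions based on typical OpenAI-compatible APIs; the model name and max_tokens value are the defaults shown in this reference. The code only sends the request when an API key is configured, so it is safe to run as-is.

```python
import json
import os
import urllib.request

# Assumed endpoint path; verify against the SiliconFlow reference.
API_URL = "https://api.siliconflow.cn/v1/chat/completions"

def build_payload(user_message: str) -> dict:
    """Assemble a minimal chat completion request body."""
    return {
        "model": "Pro/zai-org/GLM-4.7",  # default model from this reference
        "messages": [
            {"role": "user", "content": user_message},
        ],
        "max_tokens": 4096,              # default shown in this reference
    }

payload = build_payload("Hello!")

# Only send when a key is present; otherwise just show the body.
api_key = os.environ.get("SILICONFLOW_API_KEY")
if api_key:
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["choices"][0]["message"]["content"])
else:
    print(json.dumps(payload, indent=2))
```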
Corresponding model name. We periodically update our models to improve service quality; changes may include taking models on/offline or adjusting capabilities. We will strive to notify you via announcements or push messages. For the complete list of available models, see the Models page.
"Pro/zai-org/GLM-4.7"
A list of messages comprising the conversation so far.
1 - 10 elements
If set, tokens are returned as Server-Sent Events as they are made available. The stream terminates with data: [DONE].
false
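When stream is enabled, the client reads "data:" lines until the "[DONE]" sentinel. A minimal parser can be sketched as below; the chunk shape (choices/delta/content) is an assumption based on OpenAI-compatible streaming responses.

```python
import json

def parse_sse_chunks(lines):
    """Yield decoded JSON payloads from Server-Sent Event lines,
    stopping at the 'data: [DONE]' sentinel."""
    for line in lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip blank lines and comments
        data = line[len("data:"):].strip()
        if data == "[DONE]":
            break
        yield json.loads(data)

# Canned events standing in for a live stream:
events = [
    'data: {"choices": [{"delta": {"content": "Hel"}}]}',
    'data: {"choices": [{"delta": {"content": "lo"}}]}',
    "data: [DONE]",
]
text = "".join(e["choices"][0]["delta"]["content"] for e in parse_sse_chunks(events))
print(text)  # -> Hello
```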
The maximum number of tokens to generate. Ensure that input tokens + max_tokens do not exceed the model's context window. As some services are still being updated, avoid setting max_tokens to the window's upper bound; reserve roughly 10k tokens as a buffer for input and system overhead. See Models (https://cloud.siliconflow.cn/models) for details.
4096
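The budgeting rule above (input + max_tokens under the window, with a ~10k buffer) can be sketched as a small helper; the context window size used in the example is illustrative, not a value from this reference.

```python
def safe_max_tokens(context_window: int, input_tokens: int,
                    buffer: int = 10_000) -> int:
    """Cap max_tokens so input + output stays within the context window,
    reserving a buffer for input growth and system overhead."""
    return max(0, context_window - input_tokens - buffer)

# Illustrative 131,072-token window with an 8,000-token prompt:
print(safe_max_tokens(131_072, 8_000))  # -> 113072
```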
Switches between thinking and non-thinking modes. This field supports the following models:
- Pro/zai-org/GLM-5
- Pro/zai-org/GLM-4.7
- deepseek-ai/DeepSeek-V3.2
- Pro/deepseek-ai/DeepSeek-V3.2
- zai-org/GLM-4.6
- Qwen/Qwen3-8B
- Qwen/Qwen3-14B
- Qwen/Qwen3-32B
- Qwen/Qwen3-30B-A3B
- tencent/Hunyuan-A13B-Instruct
- zai-org/GLM-4.5V
- deepseek-ai/DeepSeek-V3.1-Terminus
- Pro/deepseek-ai/DeepSeek-V3.1-Terminus
- Qwen/Qwen3.5-397B-A17B
- Qwen/Qwen3.5-122B-A10B
- Qwen/Qwen3.5-35B-A3B
- Qwen/Qwen3.5-27B
- Qwen/Qwen3.5-9B
- Qwen/Qwen3.5-4B
false
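A request toggling thinking mode for one of the supported models can be sketched as below. The field names `enable_thinking` and `thinking_budget` are assumptions inferred from this reference's descriptions; confirm them against the request schema.

```python
def build_thinking_payload(prompt: str, thinking: bool,
                           budget: int = 4096) -> dict:
    """Request body toggling thinking mode.

    `enable_thinking` / `thinking_budget` are assumed field names;
    check the API schema before relying on them.
    """
    return {
        "model": "Qwen/Qwen3-32B",  # one of the supported models listed above
        "messages": [{"role": "user", "content": prompt}],
        "enable_thinking": thinking,   # default false per this reference
        "thinking_budget": budget,     # constrained to 128 <= budget <= 32768
    }

print(build_thinking_payload("Prove that 17 is prime.", thinking=True))
```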
Maximum number of tokens for chain-of-thought output. This field applies to most Reasoning models.
128 <= x <= 32768
4096
This field applies only to deepseek-ai/DeepSeek-V4-Flash. In thinking mode, the default effort for regular requests is high; for certain complex agent-style requests (such as Claude Code or OpenCode), the effort is automatically set to max. Also in thinking mode, for compatibility, low and medium are mapped to high, and xhigh is mapped to max.
high, max
"high"
Dynamic filtering threshold that adapts based on token probabilities. This field applies only to Qwen3 models.
0 <= x <= 1
0.05
Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence.
null
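Since the API accepts at most 4 stop sequences, a client-side helper can validate before sending; this helper and its name are illustrative, not part of the API.

```python
def with_stop(payload: dict, stop) -> dict:
    """Attach up to 4 stop sequences to a request body.

    The API stops generating before emitting any of these sequences,
    and the returned text does not contain them.
    """
    stop = list(stop)
    if len(stop) > 4:
        raise ValueError("at most 4 stop sequences are allowed")
    return {**payload, "stop": stop}

print(with_stop({"model": "Pro/zai-org/GLM-4.7"}, ["\n\n", "END"]))
```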
Determines the degree of randomness in the response.
0.7
The top_p (nucleus) parameter is used to dynamically adjust the number of choices for each predicted token based on the cumulative probabilities.
0.7
50
0.5
Number of generations to return
1
An object specifying the format that the model must output.
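Requesting structured output can be sketched as below; the `{"type": "json_object"}` value is an assumption based on OpenAI-compatible conventions, so verify the accepted values against the schema.

```python
def force_json(payload: dict) -> dict:
    """Ask the model to emit a JSON object.

    The {"type": "json_object"} value is assumed from
    OpenAI-compatible APIs; check the accepted format values.
    """
    return {**payload, "response_format": {"type": "json_object"}}

print(force_json({"model": "Pro/zai-org/GLM-4.7"}))
```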
A list of tools the model may call. Currently, only functions are supported as tools. Use this to provide a list of functions the model may generate JSON inputs for. A maximum of 128 functions is supported.
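A tool definition following the function-calling shape described above might look like the following sketch. The `get_weather` function is hypothetical, and the nested schema layout is assumed from OpenAI-compatible tool definitions.

```python
# Hypothetical single-function tool list; parameters use JSON Schema.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical example function
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

# The API caps the list at 128 functions.
assert len(tools) <= 128
print(tools[0]["function"]["name"])  # -> get_weather
```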
The response from the model. The response header contains the x-siliconcloud-trace-id field, which serves as a unique identifier for tracing requests, facilitating log queries and issue troubleshooting.
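When logging requests for troubleshooting, the trace id can be pulled from the response headers; this sketch works on a plain header dict so it runs without a live request, and the case-insensitive lookup reflects how HTTP headers behave.

```python
from typing import Optional

def extract_trace_id(headers: dict) -> Optional[str]:
    """Return the x-siliconcloud-trace-id header value for logging,
    matching header names case-insensitively."""
    for name, value in headers.items():
        if name.lower() == "x-siliconcloud-trace-id":
            return value
    return None

# Simulated response headers:
print(extract_trace_id({"X-Siliconcloud-Trace-Id": "abc123"}))  # -> abc123
```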