Text generation
Large Language Model (LLM) User Manual
1. Model Core Capabilities
1.1 Basic Functions
Text Generation: Generate coherent natural language text based on context, supporting various styles and genres.
Semantic Understanding: Deeply parse user intent, supporting multi-round dialogue management to ensure the coherence and accuracy of conversations.
Knowledge Q&A: Cover a wide range of knowledge domains, including science, technology, culture, history, etc., providing accurate knowledge answers.
Code Assistance: Support code generation, explanation, and debugging for multiple mainstream programming languages (e.g., Python, Java, and C++).
1.2 Advanced Capabilities
Long Text Processing: Support context windows of 4k to 64k tokens, suitable for long document generation and complex dialogue scenarios.
Instruction Following: Precisely understand complex task instructions, such as “compare A/B schemes using a Markdown table.”
Style Control: Adjust output style through system prompts, supporting styles such as academic, conversational, and poetic.
Multimodal Support: In addition to text generation, support tasks such as image description and speech-to-text.
2. API Call Specifications
2.1 Basic Request Structure
You can make end-to-end API requests using the OpenAI SDK, for example:
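Below is a minimal sketch of such a request with the OpenAI Python SDK. The API key, base_url, and model name are placeholders; substitute your own credentials, the platform's endpoint, and a model you selected on the Model Square.

```python
from openai import OpenAI

# Placeholders: replace the API key, base_url, and model name with the values
# for your account and the model you selected on the Model Square.
client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://api.example.com/v1",
)

response = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-R1",
    messages=[
        {"role": "system", "content": "You are a pediatrician with 10 years of experience"},
        {"role": "user", "content": "How should I handle a child with a persistent low fever?"},
    ],
)

print(response.choices[0].message.content)
```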
2.2 Message Body Structure Description
| Message Type | Description | Example Content |
| --- | --- | --- |
| system | Model instruction: sets the AI's role and describes how the model should generally behave and respond | "You are a pediatrician with 10 years of experience" |
| user | User input: passes the end user's message to the model | "How should I handle a child with a persistent low fever?" |
| assistant | Model-generated historical responses: provide examples that show the model how it should respond to the current request | "I would suggest first taking the child's temperature…" |
When you want the model to follow layered instructions, message roles can help you get better outputs. However, they are not deterministic, so the best approach is to try different methods and see which one gives you the desired results.
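For example, to carry layered instructions and earlier turns into a new request, you can pass the previous assistant reply back in the message list. The sketch below reuses the `client` object constructed in the example in section 2.1; the message contents are illustrative.

```python
# Illustrative multi-round message list: the earlier assistant reply is passed
# back so the model can keep the dialogue coherent. `client` is the OpenAI
# client constructed in the section 2.1 example.
messages = [
    {"role": "system", "content": "You are a pediatrician with 10 years of experience"},
    {"role": "user", "content": "How should I handle a child with a persistent low fever?"},
    {"role": "assistant", "content": "I would suggest first taking the child's temperature..."},
    {"role": "user", "content": "The temperature is 37.8 °C. What should I watch for next?"},
]

response = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-R1",
    messages=messages,
)
print(response.choices[0].message.content)
```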
3. Model Series Selection Guide
You can enter the Model Square and use the filters on the left to find language models that support different functionalities. The model descriptions show each model's pricing, parameter size, maximum supported context length, and other details.
You can try the models in the playground. Note that the playground is for trying models only and does not keep a conversation history; if you want to keep your conversation records, save the session content yourself. For more usage instructions, refer to the API Documentation.
4. Detailed Explanation of Core Parameters
4.1 Creativity Control
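The creativity-related sampling parameters are temperature and top_p. The sketch below shows where they go in a request; it reuses the `client` object from section 2.1, and the values are illustrative rather than platform defaults.

```python
# Lower temperature / top_p makes output more deterministic; higher values make
# it more varied and creative. The values below are illustrative only.
response = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-R1",
    messages=[{"role": "user", "content": "Write a four-line poem about autumn"}],
    temperature=0.9,
    top_p=0.95,
)
```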
4.2 Output Constraints
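Output length is constrained mainly through max_tokens (see the max_tokens explanation below); many OpenAI-compatible endpoints also accept stop sequences. A sketch with illustrative values, again reusing the `client` from section 2.1:

```python
# max_tokens caps the number of generated tokens; if the reply would be longer,
# it is truncated. stop (if supported) ends generation when a listed string appears.
response = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-R1",
    messages=[{"role": "user", "content": "List three tips for debugging Python code"}],
    max_tokens=512,
    stop=["###"],
)
```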
4.3 Summary of Language Model Scenarios
1. Model Output Encoding Issues
Currently, some models are prone to encoding issues (garbled output) when sampling parameters are not set. If you encounter this, try setting the temperature, top_k, top_p, and frequency_penalty parameters explicitly.
Modify the payload as follows, adjusting the values as needed for different languages:
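A sketch of such a payload, sent as a raw HTTP request, is shown below. The endpoint URL, API key, and parameter values are illustrative placeholders; tune the values for the language you are generating.

```python
import requests

# Placeholders: replace the endpoint URL and API key with your own values.
url = "https://api.example.com/v1/chat/completions"
headers = {
    "Authorization": "Bearer YOUR_API_KEY",
    "Content-Type": "application/json",
}

# Illustrative sampling values; adjust them for the target language if the
# output still contains garbled characters.
payload = {
    "model": "deepseek-ai/DeepSeek-R1",
    "messages": [{"role": "user", "content": "Introduce yourself briefly."}],
    "temperature": 0.7,
    "top_p": 0.7,
    "top_k": 50,
    "frequency_penalty": 0.5,
}

response = requests.post(url, headers=headers, json=payload)
print(response.json()["choices"][0]["message"]["content"])
```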
2. Explanation of max_tokens
For the LLM models provided by the platform:
- Models with a max_tokens limit of 16384:
  - Pro/deepseek-ai/DeepSeek-R1
  - Qwen/QVQ-72B-Preview
  - deepseek-ai/DeepSeek-R1-Distill-Llama-70B
  - deepseek-ai/DeepSeek-R1-Distill-Qwen-32B
  - deepseek-ai/DeepSeek-R1-Distill-Qwen-14B
  - deepseek-ai/DeepSeek-R1-Distill-Llama-8B
  - deepseek-ai/DeepSeek-R1-Distill-Qwen-7B
  - deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
  - Pro/deepseek-ai/DeepSeek-R1-Distill-Llama-8B
  - Pro/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B
  - Pro/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
- Models with a max_tokens limit of 8192:
  - Qwen/QwQ-32B-Preview
  - AIDC-AI/Marco-o1
  - deepseek-ai/DeepSeek-R1
- Models with a max_tokens limit of 4096:
  - All other LLM models not listed above
3. Explanation of context_length
The context_length varies for different LLM models. You can search for the specific model on the Model Square to view the model details.
4. Output Truncation Issues in Model Inference
Here are several aspects to check when troubleshooting truncated output:
- When encountering output truncation through API requests:
  - Max Tokens Setting: Set max_tokens to an appropriate value. If the output exceeds max_tokens, it will be truncated. For the DeepSeek-R1 series, max_tokens can be set up to 16,384.
  - Stream Request Setting: Non-streaming requests are prone to 504 timeouts when the output is long, so use streaming requests for long generations (see the sketch after this list).
  - Client Timeout Setting: Increase the client timeout so the connection is not cut off before the output is fully generated.
- When encountering output truncation through third-party client requests:
  - Cherry Studio has a default max_tokens of 4,096. Enable the "Enable Message Length Limit" switch and set max_tokens to an appropriate value.
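As a sketch of the API-side settings above, the example below enables streaming and raises the client timeout with the OpenAI Python SDK; the endpoint, API key, model name, and timeout value are placeholders.

```python
from openai import OpenAI

# Placeholders: replace the API key, base_url, and model name with your own values.
client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://api.example.com/v1",
    timeout=300,  # raise the client-side timeout so long generations are not cut off
)

# Streaming returns tokens incrementally, which avoids gateway timeouts (e.g. 504)
# on long non-streaming responses.
stream = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-R1",
    messages=[{"role": "user", "content": "Write a detailed project plan"}],
    max_tokens=16384,  # R1-series models allow up to 16,384 output tokens
    stream=True,
)

for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
```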
5. Error Code Handling
| Error Code | Common Cause | Solution |
| --- | --- | --- |
| 400 | Incorrect parameter format | Check the value range of parameters such as temperature. |
| 401 | API key not set correctly | Check the API key. |
| 403 | Insufficient permissions | Most commonly the model requires real-name authentication; refer to the error message for other cases. |
| 429 | Request rate limit exceeded | Implement an exponential backoff retry mechanism (see the sketch below the table). |
| 503/504 | Model overload | Switch to a backup model node. |
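For 429 responses, a minimal exponential backoff sketch with the OpenAI Python SDK might look like the following; the retry count and wait times are illustrative.

```python
import random
import time

from openai import OpenAI, RateLimitError

def create_with_backoff(client: OpenAI, max_retries: int = 5, **kwargs):
    """Retry a chat completion with exponential backoff when the API returns 429."""
    for attempt in range(max_retries):
        try:
            return client.chat.completions.create(**kwargs)
        except RateLimitError:
            # Wait 1s, 2s, 4s, ... plus a little jitter before retrying.
            time.sleep(2 ** attempt + random.random())
    raise RuntimeError("Request still rate-limited after retries")
```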
5. Billing and Quota Management
5.1 Billing Formula
Total Cost = (Input tokens × Input price) + (Output tokens × Output price)
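For example, assuming a hypothetical price of ¥2 per million input tokens and ¥8 per million output tokens, a request that consumes 1,000 input tokens and 500 output tokens would cost (1,000 ÷ 1,000,000 × 2) + (500 ÷ 1,000,000 × 8) = ¥0.006. These prices are illustrative only; see the Model Details Page for actual pricing.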
5.2 Example Pricing for Each Series
The specific pricing for each model can be viewed on the Model Details Page in the Model Square.