Currently, some models are prone to encoding issues (garbled output) when sampling parameters are not set. In such cases, you can try setting parameters such as temperature, top_k, top_p, and frequency_penalty.
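As a minimal sketch, assuming an OpenAI-compatible chat/completions payload (the model name and parameter values here are illustrative assumptions, not platform requirements), the sampling parameters can be set like this:

```python
# Sketch of a chat request payload with sampling parameters set
# explicitly to mitigate garbled (encoding) output. The model name
# and values are illustrative; adjust them for your language and model.
payload = {
    "model": "deepseek-ai/DeepSeek-V3",  # example model identifier
    "messages": [
        {"role": "user", "content": "Hello"}
    ],
    # Setting these explicitly often stabilizes decoding:
    "temperature": 0.7,
    "top_k": 50,
    "top_p": 0.7,
    "frequency_penalty": 0.5,
}
```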
Modify the payload as follows, adjusting as needed for different languages. For the LLM models provided by the platform:
- Models with a max_tokens limit of 16384:
- Models with a max_tokens limit of 8192:
- Models with a max_tokens limit of 4096:
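The tiers above can be sketched as a small helper that clamps the requested max_tokens to a model's documented limit. The model names in the mapping are placeholders (assumptions); look up each model's actual limit on the model details page.

```python
# Hypothetical tier mapping: placeholder model names -> documented
# max_tokens limits (16384, 8192, or 4096 depending on the model).
MODEL_MAX_TOKENS = {
    "model-16k": 16384,  # placeholder names; check the real limit
    "model-8k": 8192,    # on the model details page
    "model-4k": 4096,
}

def build_payload(model: str, prompt: str, max_tokens: int) -> dict:
    """Build a chat payload, clamping max_tokens to the model's limit."""
    limit = MODEL_MAX_TOKENS.get(model, 4096)  # conservative default
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": min(max_tokens, limit),
    }
```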
If you have special requirements, please provide feedback through the SiliconCloud MaaS Online Requirement Collection Form.
The context_length varies across LLM models. You can search for a specific model on the Models page to view its details.
For some models, the platform provides both a free version and a paid version. The free version keeps the original model name, while the paid version is prefixed with “Pro/” to distinguish it. The free version has fixed Rate Limits, whereas the paid version has variable Rate Limits. For the specific rules, please refer to: Rate Limits.
For the DeepSeek R1 and DeepSeek V3 models, the platform distinguishes and names them based on the payment method. The Pro version only supports payment with recharged balance, while the non-Pro version supports payment with both granted balance and recharged balance.
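To illustrate the naming convention, switching between the free and Pro versions is only a change to the model field of the request. The identifier below is an example (confirm the exact names on the Models page):

```python
# The free version keeps the plain model name; the paid version adds
# the "Pro/" prefix. The identifier below is an example, not a
# guaranteed model name on the platform.
FREE_MODEL = "deepseek-ai/DeepSeek-V3"
PRO_MODEL = "Pro/" + FREE_MODEL

def is_pro(model: str) -> bool:
    """Pro versions are billed only against recharged balance."""
    return model.startswith("Pro/")
```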
To ensure the quality of the generated voice, it is recommended that users upload a voice sample that is 8 to 10 seconds long, with clear pronunciation and no background noise or interference.
Here are several aspects to troubleshoot the issue: