FAQ
Streaming Output
1. Using streaming output in Python
1.1 Streaming output with the openai library
In most scenarios, we recommend using the openai library for streaming output.
from openai import OpenAI

client = OpenAI(
    base_url='https://api.siliconflow.cn/v1',
    api_key='your-api-key'
)

# Send a request with streaming enabled
response = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-V2.5",
    messages=[
        {"role": "user", "content": "SiliconCloud is now in public beta and gives every user 300 million tokens to unlock the innovation power of open-source large models. What changes will this bring to the large-model application field?"}
    ],
    stream=True  # Enable streaming output
)

# Receive and process the response chunk by chunk
for chunk in response:
    if not chunk.choices:
        continue
    delta = chunk.choices[0].delta
    if delta.content:
        print(delta.content, end="", flush=True)
    # reasoning_content is only returned by reasoning models, so read it defensively
    if getattr(delta, 'reasoning_content', None):
        print(delta.reasoning_content, end="", flush=True)
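If you also need the complete text after the stream finishes, you can accumulate the deltas into local buffers while iterating. The following is a minimal sketch, assuming the same client and model as above; the full_content and full_reasoning_content variable names are only illustrative:

# Minimal sketch: accumulate streamed deltas into full strings
# (assumes the `client` defined in the example above;
#  `full_content` / `full_reasoning_content` are illustrative names)
full_content = ""
full_reasoning_content = ""

response = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-V2.5",
    messages=[{"role": "user", "content": "Give a one-sentence introduction to SiliconCloud."}],
    stream=True
)

for chunk in response:
    if not chunk.choices:
        continue
    delta = chunk.choices[0].delta
    if delta.content:
        full_content += delta.content
    if getattr(delta, 'reasoning_content', None):
        full_reasoning_content += delta.reasoning_content

print(full_content)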
1.2 Streaming output with the requests library
If you are not using the openai SDK, for example when calling the SiliconCloud API with the requests library, note that besides setting stream in the payload, the requests call itself must also pass stream=True; otherwise the response will not be returned in streaming mode.
import json
import requests

url = "https://api.siliconflow.cn/v1/chat/completions"

payload = {
    "model": "deepseek-ai/DeepSeek-V2.5",  # Replace with your model
    "messages": [
        {
            "role": "user",
            "content": "SiliconCloud is now in public beta and gives every user 300 million tokens to unlock the innovation power of open-source large models. What changes will this bring to the large-model application field?"
        }
    ],
    "stream": True  # Enable streaming mode in the payload
}
headers = {
    "accept": "application/json",
    "content-type": "application/json",
    "authorization": "Bearer your-api-key"
}

response = requests.post(url, json=payload, headers=headers, stream=True)  # The requests call itself must also enable streaming

# Print the streamed response
if response.status_code == 200:
    full_content = ""
    full_reasoning_content = ""
    for chunk in response.iter_lines():
        if chunk:
            # Each SSE line looks like "data: {...}"; strip the prefix before parsing
            chunk_str = chunk.decode('utf-8').replace('data: ', '')
            if chunk_str != "[DONE]":
                chunk_data = json.loads(chunk_str)
                delta = chunk_data['choices'][0].get('delta', {})
                content = delta.get('content', '')
                reasoning_content = delta.get('reasoning_content', '')
                if content:
                    print(content, end="", flush=True)
                    full_content += content
                if reasoning_content:
                    print(reasoning_content, end="", flush=True)
                    full_reasoning_content += reasoning_content
else:
    print(f"Request failed with status code: {response.status_code}")