One API for seamless access to every mainstream AI service
GPT-4o / GPT-4.1 / o3 / GPT-4o-mini
Claude 3.5 Sonnet / Haiku / Opus
Gemini 2.0 / Pro / Flash / Ultra
DeepSeek-V3 / DeepSeek-R1
Mistral Large / Nemo / Codestral
Llama 3 / Llama 3.1
Llama 3 / Llama Guard
Qwen2.5 / Qwen-Max
Yi-1.5 / Yi-Large
Grok 2
Doubao-pro / Turbo
OpenAI Video Generation
Multi-model intelligent routing
Kimi / moonshot-v1
GLM-4 / GLM-4-9B
ABAB 6.5
360GPT / Zhinao LLM
Baichuan 3 / 3-Turbo
SparkDesk 4.0
Pangu / Hunyuan
PPLX-70B / Online Model
Command R / R+
Reka Core / Flash
Phi-3 / Phi-3.5
More models being added...
Multiple channels and model groups to fit different business needs
Most models available; grouped calls are recommended
Includes gemini and several other AI models; a great-value bonus channel
Official relay of mini models, a cost-effective choice
Supports the Responses API and the Codex client
Low price, saturates easily; suited to non-critical workloads
Moderate price, decent stability; a balanced choice
Pricier but highly stable; suited to production environments
Reverse-engineered endpoint for the Gemini website
Supports the native Gemini API format
Supports the native Gemini API format
Gemini AI Studio channel
Gemini Vertex AI channel, enterprise-grade stability
Reverse-engineered endpoint for the Grok website
All models on the Grok website are supported
Reverse-engineered endpoint for the Claude website
All Claude website models are supported
Reverse-engineered Cursor endpoint, supports code generation
All models on the AWS site are supported
OpenAI website models are supported (rare models excepted)
Works with the Claude Code client, custom service available
Metered cost = group ratio × model ratio × (prompt tokens + completion tokens × completion ratio) / 500000 (in USD)
Multiple lines keep the service stable
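As a minimal sketch, the metered-billing formula above can be computed like this. The ratio values used here are illustrative placeholders; the real ratios for each group and model are set by the platform.

```python
def metered_cost(prompt_tokens: int, completion_tokens: int,
                 group_ratio: float = 1.0,
                 model_ratio: float = 2.5,
                 completion_ratio: float = 4.0) -> float:
    """Cost in USD per the documented formula:
    group_ratio * model_ratio * (prompt + completion * completion_ratio) / 500000.
    The ratio defaults here are placeholder values for illustration only."""
    return (group_ratio * model_ratio
            * (prompt_tokens + completion_tokens * completion_ratio)
            / 500_000)

# e.g. 1000 prompt tokens and 500 completion tokens at these placeholder ratios
print(f"${metered_cost(1000, 500):.4f}")  # $0.0150
```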
http://api.aiskt.com:16888
Recommended as the primary choice; stability and speed are optimized
http://new.aiskt.com
Switch to the backup line if you run into connection problems
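The two lines above can be combined in a client-side fallback. This is a sketch only; the retry policy and helper name are illustrative, and the paths follow the API reference below.

```python
import requests

# Both base URLs are the lines documented above.
ENDPOINTS = [
    "http://api.aiskt.com:16888",  # primary line (recommended)
    "http://new.aiskt.com",        # backup line
]

def post_with_fallback(path, payload, api_key, timeout=30):
    """POST to each line in order and return the first successful response."""
    last_error = None
    for base in ENDPOINTS:
        try:
            resp = requests.post(
                base + path,
                headers={"Authorization": f"Bearer {api_key}"},
                json=payload,
                timeout=timeout,
            )
            resp.raise_for_status()
            return resp
        except requests.RequestException as exc:
            last_error = exc  # connection problem: fall through to the next line
    raise last_error
```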
Complete API reference with code examples
Authorization: Bearer YOUR_API_KEY
Compatible with the official OpenAI API format; supports GPT, Claude, Gemini, and other models
/v1/chat/completions
Create chat completion - conversational interaction with GPT, Claude, Gemini, and other models
| model | Required - model ID, e.g. gpt-4o, claude-3-5-sonnet-20241022 |
| messages | Required - array of messages, each with role and content |
| temperature | Optional - between 0 and 2, controls randomness; default 1 |
| max_tokens | Optional - maximum number of tokens to generate |
| stream | Optional - whether to stream the response; default false |
from openai import OpenAI
client = OpenAI(
api_key="your-api-key",
base_url="https://new.aiskt.com/v1"
)
response = client.chat.completions.create(
model="gpt-4o",
messages=[
{"role": "system", "content": "你是一个有帮助的助手"},
{"role": "user", "content": "介绍一下人工智能"}
],
temperature=0.7,
max_tokens=1000
)
print(response.choices[0].message.content)
import OpenAI from 'openai';
const client = new OpenAI({
apiKey: 'your-api-key',
baseURL: 'https://new.aiskt.com/v1'
});
const response = await client.chat.completions.create({
model: 'gpt-4o',
messages: [
{role: 'system', content: '你是一个有帮助的助手'},
{role: 'user', content: '介绍一下人工智能'}
],
temperature: 0.7,
max_tokens: 1000
});
console.log(response.choices[0].message.content);
curl https://new.aiskt.com/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer YOUR_API_KEY" \
-d '{
"model": "gpt-4o",
"messages": [
{"role": "system", "content": "你是一个有帮助的助手"},
{"role": "user", "content": "介绍一下人工智能"}
],
"temperature": 0.7,
"max_tokens": 1000
}'
/v1/chat/completions (stream)
Streaming chat completion - returns generated content in real time for a better user experience
from openai import OpenAI
client = OpenAI(
api_key="your-api-key",
base_url="https://new.aiskt.com/v1"
)
stream = client.chat.completions.create(
model="gpt-4o",
messages=[{"role": "user", "content": "写一首诗"}],
stream=True
)
for chunk in stream:
if chunk.choices[0].delta.content:
print(chunk.choices[0].delta.content, end="")
import OpenAI from 'openai';
const client = new OpenAI({
apiKey: 'your-api-key',
baseURL: 'https://new.aiskt.com/v1'
});
const stream = await client.chat.completions.create({
model: 'gpt-4o',
messages: [{role: 'user', content: '写一首诗'}],
stream: true
});
for await (const chunk of stream) {
process.stdout.write(chunk.choices[0]?.delta?.content || '');
}
Call Claude models using the official Anthropic Messages API format
/v1/messages
Anthropic Messages API - converse with Claude in its native format
| model | Required - Claude model, e.g. claude-3-5-sonnet-20241022 |
| messages | Required - array of messages; role must be user or assistant |
| system | Optional - system prompt, passed as a separate string parameter |
| max_tokens | Required - maximum tokens to generate (Claude requires this) |
| temperature | Optional - between 0 and 1, controls randomness |
import anthropic
client = anthropic.Anthropic(
api_key="your-api-key",
base_url="https://new.aiskt.com"
)
message = client.messages.create(
model="claude-3-5-sonnet-20241022",
max_tokens=1024,
system="你是一个有帮助的 AI 助手",
messages=[
{"role": "user", "content": "介绍一下人工智能"}
]
)
print(message.content[0].text)
curl https://new.aiskt.com/v1/messages \
-H "Content-Type: application/json" \
-H "x-api-key: YOUR_API_KEY" \
-H "anthropic-version: 2023-06-01" \
-d '{
"model": "claude-3-5-sonnet-20241022",
"max_tokens": 1024,
"system": "你是一个有帮助的 AI 助手",
"messages": [
{"role": "user", "content": "介绍一下人工智能"}
]
}'
Chain-of-thought output from DeepSeek reasoning models
/v1/chat/completions
DeepSeek Reasoning - get the model's reasoning process along with the final answer
DeepSeek reasoning models return two parts: reasoning_content (the reasoning process) and content (the final answer).
The reasoning trace shows the model's thinking steps and helps explain how it reached its conclusion.
from openai import OpenAI
client = OpenAI(
api_key="your-api-key",
base_url="https://new.aiskt.com/v1"
)
response = client.chat.completions.create(
model="deepseek-reasoner",
messages=[
{"role": "user", "content": "9.11 和 9.9 哪个更大?"}
]
)
# Reasoning process
reasoning = response.choices[0].message.reasoning_content
print("推理过程:", reasoning)
# Final answer
answer = response.choices[0].message.content
print("最终答案:", answer)
curl https://new.aiskt.com/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer YOUR_API_KEY" \
-d '{
"model": "deepseek-reasoner",
"messages": [
{"role": "user", "content": "9.11 和 9.9 哪个更大?"}
]
}'
Call Gemini using the native Google Gemini API format
/v1beta/models/{model}:generateContent
Gemini Generate Content - call Gemini models in Google's native format
| contents | Required - array of content objects with role and parts |
| generationConfig | Optional - generation settings such as temperature and maxOutputTokens |
| safetySettings | Optional - safety configuration |
import google.generativeai as genai
genai.configure(
api_key="your-api-key",
transport="rest",
client_options={"api_endpoint": "one.aiskt.com"}
)
model = genai.GenerativeModel('gemini-pro')
response = model.generate_content(
"介绍一下人工智能",
generation_config={
"temperature": 0.7,
"max_output_tokens": 1000
}
)
print(response.text)
curl https://new.aiskt.com/v1beta/models/gemini-pro:generateContent \
-H "Content-Type: application/json" \
-H "x-goog-api-key: YOUR_API_KEY" \
-d '{
"contents": [{
"parts": [{"text": "介绍一下人工智能"}]
}],
"generationConfig": {
"temperature": 0.7,
"maxOutputTokens": 1000
}
}'
Convert text into vector representations for semantic search, similarity computation, clustering, and more
/v1/embeddings
OpenAI Embeddings - generate vector representations of text
| model | Required - embedding model, e.g. text-embedding-3-small, text-embedding-3-large |
| input | Required - input text, a string or an array of strings |
| encoding_format | Optional - return format, float or base64; default float |
| dimensions | Optional - output vector dimensions (v3 models only) |
from openai import OpenAI
client = OpenAI(
api_key="your-api-key",
base_url="https://new.aiskt.com/v1"
)
# Single text
response = client.embeddings.create(
model="text-embedding-3-small",
input="人工智能正在改变世界"
)
print(response.data[0].embedding)
# Batch of texts
response = client.embeddings.create(
model="text-embedding-3-small",
input=["文本1", "文本2", "文本3"]
)
for item in response.data:
print(f"索引 {item.index}: {len(item.embedding)} 维向量")
import OpenAI from 'openai';
const client = new OpenAI({
apiKey: 'your-api-key',
baseURL: 'https://new.aiskt.com/v1'
});
const response = await client.embeddings.create({
model: 'text-embedding-3-small',
input: '人工智能正在改变世界'
});
console.log(response.data[0].embedding);
curl https://new.aiskt.com/v1/embeddings \
-H "Content-Type: application/json" \
-H "Authorization: Bearer YOUR_API_KEY" \
-d '{
"model": "text-embedding-3-small",
"input": "人工智能正在改变世界"
}'
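The semantic-search use case mentioned above boils down to comparing embedding vectors, typically with cosine similarity. A self-contained sketch with toy 3-dimensional vectors (real text-embedding-3-small vectors have 1536 dimensions):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Rank toy "document" vectors against a "query" vector; in practice these
# would come from client.embeddings.create() as shown above.
query = [0.1, 0.3, 0.5]
docs = {"doc_a": [0.1, 0.3, 0.5], "doc_b": [0.9, -0.2, 0.0]}
ranked = sorted(docs, key=lambda name: cosine_similarity(query, docs[name]),
                reverse=True)
print(ranked)  # doc_a (identical to the query) ranks first
```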
Re-rank search results to improve retrieval relevance
/v1/rerank
Jina AI Rerank - optimize search results with a Jina reranking model
| model | Required - reranking model, e.g. jina-reranker-v2-base-multilingual |
| query | Required - query text |
| documents | Required - array of documents to rank |
| top_n | Optional - return the top N results |
import requests
response = requests.post(
"https://new.aiskt.com/v1/rerank",
headers={
"Authorization": "Bearer your-api-key",
"Content-Type": "application/json"
},
json={
"model": "jina-reranker-v2-base-multilingual",
"query": "什么是人工智能?",
"documents": [
"人工智能是计算机科学的一个分支",
"今天天气很好",
"机器学习是人工智能的核心技术",
"我喜欢吃苹果"
],
"top_n": 2
}
)
for result in response.json()["results"]:
print(f"文档 {result['index']}: 分数 {result['relevance_score']}")
curl https://new.aiskt.com/v1/rerank \
-H "Content-Type: application/json" \
-H "Authorization: Bearer YOUR_API_KEY" \
-d '{
"model": "jina-reranker-v2-base-multilingual",
"query": "什么是人工智能?",
"documents": [
"人工智能是计算机科学的一个分支",
"今天天气很好",
"机器学习是人工智能的核心技术"
],
"top_n": 2
}'
/v1/rerank
Cohere Rerank - use a Cohere reranking model
import cohere
co = cohere.Client(
api_key="your-api-key",
base_url="https://new.aiskt.com"
)
results = co.rerank(
model="rerank-multilingual-v3.0",
query="什么是人工智能?",
documents=[
"人工智能是计算机科学的一个分支",
"今天天气很好",
"机器学习是人工智能的核心技术"
],
top_n=2
)
for result in results.results:
print(f"文档 {result.index}: 分数 {result.relevance_score}")
curl https://new.aiskt.com/v1/rerank \
-H "Content-Type: application/json" \
-H "Authorization: Bearer YOUR_API_KEY" \
-d '{
"model": "rerank-multilingual-v3.0",
"query": "什么是人工智能?",
"documents": [
"人工智能是计算机科学的一个分支",
"机器学习是人工智能的核心技术"
],
"top_n": 2
}'
/v1/rerank
Xinference Rerank - use reranking models served by the Xinference framework
Xinference supports a range of open-source reranking models, such as bge-reranker and bce-reranker. It suits scenarios that need local deployment or custom models.
import requests
response = requests.post(
"https://new.aiskt.com/v1/rerank",
headers={
"Authorization": "Bearer your-api-key",
"Content-Type": "application/json"
},
json={
"model": "bge-reranker-v2-m3",
"query": "什么是人工智能?",
"documents": [
"人工智能是计算机科学的一个分支",
"机器学习是人工智能的核心技术"
],
"return_documents": True
}
)
print(response.json())
curl https://new.aiskt.com/v1/rerank \
-H "Content-Type: application/json" \
-H "Authorization: Bearer YOUR_API_KEY" \
-d '{
"model": "bge-reranker-v2-m3",
"query": "什么是人工智能?",
"documents": [
"人工智能是计算机科学的一个分支",
"机器学习是人工智能的核心技术"
]
}'
Real-time voice and text conversation over WebSocket
/v1/realtime
OpenAI Realtime API - conversational interface with real-time voice input and output
| model | Required - realtime model, e.g. gpt-4o-realtime-preview |
| modalities | Optional - input/output modes, e.g. ["text", "audio"] |
| voice | Optional - voice, e.g. alloy, echo, shimmer |
| instructions | Optional - system instructions |
import asyncio
import websockets
import json
async def realtime_chat():
url = "wss://one.aiskt.com/v1/realtime?model=gpt-4o-realtime-preview"
headers = {
"Authorization": "Bearer your-api-key",
"OpenAI-Beta": "realtime=v1"
}
async with websockets.connect(url, extra_headers=headers) as ws:
# Send session configuration
await ws.send(json.dumps({
"type": "session.update",
"session": {
"modalities": ["text", "audio"],
"voice": "alloy",
"instructions": "你是一个有帮助的助手"
}
}))
# Send a text message
await ws.send(json.dumps({
"type": "conversation.item.create",
"item": {
"type": "message",
"role": "user",
"content": [{"type": "input_text", "text": "你好"}]
}
}))
# Receive responses
async for message in ws:
data = json.loads(message)
print(f"收到: {data['type']}")
asyncio.run(realtime_chat())
const WebSocket = require('ws');
const ws = new WebSocket(
'wss://one.aiskt.com/v1/realtime?model=gpt-4o-realtime-preview',
{
headers: {
'Authorization': 'Bearer your-api-key',
'OpenAI-Beta': 'realtime=v1'
}
}
);
ws.on('open', () => {
// Send session configuration
ws.send(JSON.stringify({
type: 'session.update',
session: {
modalities: ['text', 'audio'],
voice: 'alloy',
instructions: '你是一个有帮助的助手'
}
}));
// Send a message
ws.send(JSON.stringify({
type: 'conversation.item.create',
item: {
type: 'message',
role: 'user',
content: [{type: 'input_text', text: '你好'}]
}
}));
});
ws.on('message', (data) => {
const message = JSON.parse(data);
console.log('收到:', message.type);
});
AI image generation, supporting DALL-E, Midjourney, and more
/v1/images/generations
OpenAI DALL-E - generate high-quality images from text descriptions
| model | Required - dall-e-2 or dall-e-3 |
| prompt | Required - text description of the image |
| size | Optional - image size, e.g. 1024x1024, 1792x1024 |
| quality | Optional - standard or hd (DALL-E 3 only) |
| n | Optional - number of images to generate; default 1 |
from openai import OpenAI
client = OpenAI(
api_key="your-api-key",
base_url="https://new.aiskt.com/v1"
)
response = client.images.generate(
model="dall-e-3",
prompt="一只可爱的猫咪在太空中漂浮,赛博朋克风格",
size="1024x1024",
quality="hd",
n=1
)
print(response.data[0].url)
curl https://new.aiskt.com/v1/images/generations \
-H "Content-Type: application/json" \
-H "Authorization: Bearer YOUR_API_KEY" \
-d '{
"model": "dall-e-3",
"prompt": "一只可爱的猫咪在太空中漂浮,赛博朋克风格",
"size": "1024x1024",
"quality": "hd",
"n": 1
}'
/mj/submit/imagine
Midjourney - professional-grade AI art generation (via Midjourney Proxy)
| prompt | Required - image description (Midjourney parameters supported) |
| notifyHook | Optional - callback URL |
| state | Optional - custom state parameter |
import requests
# Submit the task
response = requests.post(
"https://new.aiskt.com/mj/submit/imagine",
headers={"Authorization": "Bearer your-api-key"},
json={
"prompt": "a cute cat in space, cyberpunk style --ar 16:9 --v 6"
}
)
task_id = response.json()["result"]
# Fetch the result
result = requests.get(
f"https://new.aiskt.com/mj/task/{task_id}/fetch",
headers={"Authorization": "Bearer your-api-key"}
)
print(result.json())
# Submit the task
curl https://new.aiskt.com/mj/submit/imagine \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"prompt": "a cute cat in space, cyberpunk style --ar 16:9 --v 6"
}'
# Fetch the result (using the returned task_id)
curl https://new.aiskt.com/mj/task/{task_id}/fetch \
-H "Authorization: Bearer YOUR_API_KEY"
Speech-to-text (Whisper) and text-to-speech (TTS)
/v1/audio/transcriptions
Speech-to-text - transcribe audio files into text (Whisper)
from openai import OpenAI
client = OpenAI(
api_key="your-api-key",
base_url="https://new.aiskt.com/v1"
)
with open("audio.mp3", "rb") as audio_file:
transcript = client.audio.transcriptions.create(
model="whisper-1",
file=audio_file,
language="zh"
)
print(transcript.text)
curl https://new.aiskt.com/v1/audio/transcriptions \
-H "Authorization: Bearer YOUR_API_KEY" \
-F "file=@audio.mp3" \
-F "model=whisper-1" \
-F "language=zh"
/v1/audio/speech
Text-to-speech - convert text into natural-sounding speech (TTS)
from openai import OpenAI
client = OpenAI(
api_key="your-api-key",
base_url="https://new.aiskt.com/v1"
)
response = client.audio.speech.create(
model="tts-1",
voice="alloy",
input="你好,欢迎使用 AI 语音服务!"
)
response.stream_to_file("output.mp3")
curl https://new.aiskt.com/v1/audio/speech \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "tts-1",
"voice": "alloy",
"input": "你好,欢迎使用 AI 语音服务!"
}' \
--output output.mp3
AI music generation (Suno AI)
/v1/music/generations
Suno AI - generate complete musical works from a description
| prompt | Required - music description or lyrics |
| make_instrumental | Optional - whether to generate instrumental music (no vocals) |
| tags | Optional - music style tags |
import requests
response = requests.post(
"https://new.aiskt.com/v1/music/generations",
headers={"Authorization": "Bearer your-api-key"},
json={
"prompt": "一首轻快的流行歌曲,关于夏天和海滩",
"tags": "pop, upbeat, summer",
"make_instrumental": False
}
)
task_id = response.json()["id"]
print(f"任务 ID: {task_id}")
curl https://new.aiskt.com/v1/music/generations \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"prompt": "一首轻快的流行歌曲,关于夏天和海滩",
"tags": "pop, upbeat, summer",
"make_instrumental": false
}'
AI video generation (Kling, Jimeng, Sora)
/v1/video/generations
Kling AI / Jimeng - text-to-video and image-to-video
| model | Required - kling-v1 or jimeng-v1 |
| prompt | Required - video description |
| image_url | Optional - reference image URL (image-to-video) |
| duration | Optional - video length in seconds |
import requests
response = requests.post(
"https://new.aiskt.com/v1/video/generations",
headers={"Authorization": "Bearer your-api-key"},
json={
"model": "kling-v1",
"prompt": "一只猫咪在草地上奔跑,阳光明媚",
"duration": 5
}
)
task_id = response.json()["id"]
print(f"任务 ID: {task_id}")
curl https://new.aiskt.com/v1/video/generations \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "kling-v1",
"prompt": "一只猫咪在草地上奔跑,阳光明媚",
"duration": 5
}'
/v1/video/generations
OpenAI Sora - high-quality text-to-video (preview)
import requests
response = requests.post(
"https://new.aiskt.com/v1/video/generations",
headers={"Authorization": "Bearer your-api-key"},
json={
"model": "sora-1.0",
"prompt": "一个女孩在东京街头漫步,霓虹灯闪烁",
"size": "1920x1080",
"duration": 10
}
)
print(response.json())
curl https://new.aiskt.com/v1/video/generations \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "sora-1.0",
"prompt": "一个女孩在东京街头漫步,霓虹灯闪烁",
"size": "1920x1080",
"duration": 10
}'
List every AI model the current account can access
/v1/models
List available models - view all available AI models and their details
from openai import OpenAI
client = OpenAI(
api_key="your-api-key",
base_url="https://new.aiskt.com/v1"
)
models = client.models.list()
for model in models.data:
print(f"{model.id} - {model.owned_by}")
curl https://new.aiskt.com/v1/models \
-H "Authorization: Bearer YOUR_API_KEY"
Create, query, update, and delete API tokens
/api/token
List tokens - view all API tokens under the account
curl https://new.aiskt.com/api/token \
-H "Authorization: Bearer YOUR_SESSION_TOKEN"
import requests
response = requests.get(
"https://new.aiskt.com/api/token",
headers={"Authorization": "Bearer your-session-token"}
)
tokens = response.json()
for token in tokens["data"]:
print(f"{token['name']}: {token['key']}")
/api/token
Create a token - generate a new API key
| name | Required - token name |
| remain_quota | Optional - token quota limit |
| expired_time | Optional - expiry time (Unix timestamp) |
| unlimited_quota | Optional - whether the quota is unlimited; default false |
curl https://new.aiskt.com/api/token \
-H "Authorization: Bearer YOUR_SESSION_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"name": "我的 API 密钥",
"remain_quota": 1000000,
"unlimited_quota": false
}'
import requests
response = requests.post(
"https://new.aiskt.com/api/token",
headers={
"Authorization": "Bearer your-session-token",
"Content-Type": "application/json"
},
json={
"name": "我的 API 密钥",
"remain_quota": 1000000,
"unlimited_quota": False
}
)
token = response.json()
print(f"新令牌: {token['data']['key']}")
/api/token
Update a token - change a token's name, quota, and other details
curl -X PUT https://new.aiskt.com/api/token \
-H "Authorization: Bearer YOUR_SESSION_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"id": 123,
"name": "更新后的名称",
"remain_quota": 2000000
}'
import requests
response = requests.put(
"https://new.aiskt.com/api/token",
headers={
"Authorization": "Bearer your-session-token",
"Content-Type": "application/json"
},
json={
"id": 123,
"name": "更新后的名称",
"remain_quota": 2000000
}
)
print(response.json())
/api/token/{id}
Delete a token - permanently remove the specified API token
curl -X DELETE https://new.aiskt.com/api/token/123 \
-H "Authorization: Bearer YOUR_SESSION_TOKEN"
import requests
response = requests.delete(
"https://new.aiskt.com/api/token/123",
headers={"Authorization": "Bearer your-session-token"}
)
print(response.json())
Query the account's token usage and spending statistics
/v1/usage
Query token usage - get detailed consumption records for the account
import requests
response = requests.get(
"https://new.aiskt.com/v1/usage",
headers={"Authorization": "Bearer your-api-key"},
params={
"start_date": "2024-01-01",
"end_date": "2024-01-31"
}
)
usage = response.json()
print(f"总消费: ${usage['total_cost']}")
print(f"总 Token: {usage['total_tokens']}")
curl "https://new.aiskt.com/v1/usage?start_date=2024-01-01&end_date=2024-01-31" \
-H "Authorization: Bearer YOUR_API_KEY"
Detailed API response structures and field descriptions
ChatCompletion object
Chat completion response format - the standard OpenAI ChatCompletion object structure
| id | Unique identifier, e.g. chatcmpl-xxx |
| object | Object type, always "chat.completion" |
| created | Unix timestamp |
| model | ID of the model used |
| choices | Array of generated replies, each with message, finish_reason, etc. |
| usage | Token usage statistics: prompt_tokens, completion_tokens, total_tokens |
{
"id": "chatcmpl-8xYz1234567890",
"object": "chat.completion",
"created": 1704067200,
"model": "gpt-4o",
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": "人工智能是计算机科学的一个分支..."
},
"finish_reason": "stop"
}
],
"usage": {
"prompt_tokens": 20,
"completion_tokens": 150,
"total_tokens": 170
}
}
Stream (SSE)
Streaming response format - incremental responses as Server-Sent Events (SSE)
data: {"id":"chatcmpl-xxx","object":"chat.completion.chunk","created":1704067200,"model":"gpt-4o","choices":[{"index":0,"delta":{"role":"assistant","content":""},"finish_reason":null}]}
data: {"id":"chatcmpl-xxx","object":"chat.completion.chunk","created":1704067200,"model":"gpt-4o","choices":[{"index":0,"delta":{"content":"人工"},"finish_reason":null}]}
data: {"id":"chatcmpl-xxx","object":"chat.completion.chunk","created":1704067200,"model":"gpt-4o","choices":[{"index":0,"delta":{"content":"智能"},"finish_reason":null}]}
data: {"id":"chatcmpl-xxx","object":"chat.completion.chunk","created":1704067200,"model":"gpt-4o","choices":[{"index":0,"delta":{},"finish_reason":"stop"}]}
data: [DONE]
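When consuming this raw SSE stream without an SDK, each data: line is parsed as JSON and the delta.content fragments are joined until the [DONE] sentinel. A minimal sketch (the helper name is illustrative):

```python
import json

def accumulate_stream(sse_lines):
    """Join delta.content from chat.completion.chunk events until [DONE]."""
    parts = []
    for line in sse_lines:
        if not line.startswith("data: "):
            continue  # skip blank keep-alive lines
        payload = line[len("data: "):]
        if payload == "[DONE]":
            break  # end-of-stream sentinel
        chunk = json.loads(payload)
        delta = chunk["choices"][0]["delta"]
        if delta.get("content"):
            parts.append(delta["content"])
    return "".join(parts)
```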
Embeddings object
Embedding response format - contains the vector representation of the text
{
"object": "list",
"data": [
{
"object": "embedding",
"index": 0,
"embedding": [
0.0023064255,
-0.009327292,
-0.0028842222,
...
]
}
],
"model": "text-embedding-3-small",
"usage": {
"prompt_tokens": 8,
"total_tokens": 8
}
}
Images object
Image generation response format - contains the generated image URL or Base64 data
{
"created": 1704067200,
"data": [
{
"url": "https://example.com/image.png",
"revised_prompt": "一只可爱的猫咪在太空中漂浮,赛博朋克风格"
}
]
}
The delta field carries the incremental content; the stream ends with data: [DONE]
API error response format and common error codes
Error response object
Standard error response format - every error returns the same JSON structure
{
"error": {
"message": "Invalid API key provided",
"type": "invalid_request_error",
"param": null,
"code": "invalid_api_key"
}
}
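A minimal sketch of handling this error object in a client: surface the message and code on failure, and retry with exponential backoff on rate-limit and server errors. The helper name, retry count, and backoff schedule are illustrative choices, not part of the API.

```python
import time
import requests

RETRYABLE = {429, 500, 502, 503}  # worth retrying per the codes listed below

def request_with_retry(url, payload, api_key, max_retries=3):
    """POST and parse the standard error object; back off and retry on
    retryable status codes."""
    for attempt in range(max_retries + 1):
        resp = requests.post(
            url,
            headers={"Authorization": f"Bearer {api_key}"},
            json=payload,
        )
        if resp.status_code == 200:
            return resp.json()
        error = resp.json().get("error", {})
        if resp.status_code in RETRYABLE and attempt < max_retries:
            time.sleep(2 ** attempt)  # exponential backoff: 1s, 2s, 4s...
            continue
        raise RuntimeError(
            f"{resp.status_code}: {error.get('message')} (code={error.get('code')})"
        )
```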
400 Bad Request - invalid request parameters or malformed request
401 Unauthorized - API key invalid, missing, or expired
403 Forbidden - no permission to access the resource or model
404 Not Found - the requested resource does not exist
429 Too Many Requests - rate limit exceeded; slow down or retry later
500 Internal Server Error - server-side error; retry later
502 Bad Gateway - upstream service error, usually from the model provider
503 Service Unavailable - service temporarily unavailable; check the status page
Please read the following carefully before use
If anything goes wrong, check the service status page promptly!
Use the service lawfully; accounts found violating the rules will be banned without refund.
To reduce load on the servers and database, consumption logs are kept for roughly 3-7 days.
Given the nature of API products, all sales are final. If force majeure prevents the platform from continuing service, a plan will be worked out to compensate users for their losses.