Qwen3-Next-80B-A3B

Model ID: qwen3-next-80b-a3b
Vendor: Qwen

Low-cost Qwen route for chat, batch generation, and API consolidation

Featured · Public model detail · Available · MoE Transformer

Params

80B / 3B active

Context

256K

Max Output

32K

License

Apache-2.0

TTFT

160ms

Throughput

120 tok/s
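The TTFT and throughput figures above give a rough end-to-end latency estimate: total time ≈ TTFT + output tokens / throughput. A minimal back-of-the-envelope sketch using the catalog numbers (real latency varies with load, prompt size, and region):

```python
# Rough latency estimate from the catalog figures above.
# Real-world numbers vary with load, prompt length, and region.
TTFT_S = 0.160        # time to first token, seconds (160 ms)
THROUGHPUT_TPS = 120  # decode throughput, tokens/second

def estimated_latency_s(output_tokens: int) -> float:
    """Approximate wall-clock time to stream `output_tokens` tokens."""
    return TTFT_S + output_tokens / THROUGHPUT_TPS

# A 600-token answer works out to roughly 5.2 seconds end to end.
print(f"{estimated_latency_s(600):.1f}s")
```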

Why pick it

  • Realtime $0.13/M with a $0.09/M batch lane (see the Pricing table below)
  • Useful as a broad default route for chat and batch generation

Pricing

Tier | Public (input / output, $/M) | Cached | Price source | Note
Realtime | $0.13 / $0.13 | N/A | SiliconFlow lane | Public price reflects the runtime catalog without claimed savings comparisons
Batch | $0.09 / $0.09 | N/A | SiliconFlow lane | Batch public pricing follows the same runtime source
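As a quick sanity check on the lanes above, per-request cost is simply tokens / 1,000,000 × the lane rate. A minimal sketch using the table's public rates (verify against the live catalog before budgeting, since prices can change):

```python
# Public rates from the pricing table above, in USD per 1M tokens.
# Verify against the live catalog before relying on these numbers.
PRICE_PER_M = {"realtime": 0.13, "batch": 0.09}

def cost_usd(tokens: int, lane: str = "realtime") -> float:
    """Cost of `tokens` tokens (input and output are priced the same)."""
    return tokens / 1_000_000 * PRICE_PER_M[lane]

# 10M tokens: about $1.30 realtime vs $0.90 on the batch lane
print(cost_usd(10_000_000, "realtime"), cost_usd(10_000_000, "batch"))
```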

Quick start

The API exposes an OpenAI-compatible surface: swap the base URL and ship.

Python
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://api.luminapath.tech/v1",
    # Read the key from the environment rather than hard-coding it
    api_key=os.environ["BATCHIN_API_KEY"],
)

resp = client.chat.completions.create(
    model="qwen3-next-80b-a3b",
    messages=[{"role": "user", "content": "Summarize why this model is a fit for my workload"}]
)

print(resp.choices[0].message.content)
JavaScript
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://api.luminapath.tech/v1",
  apiKey: process.env.BATCHIN_API_KEY,
});

const resp = await client.chat.completions.create({
  model: "qwen3-next-80b-a3b",
  messages: [{ role: "user", content: "Summarize why this model is a fit for my workload" }],
});

console.log(resp.choices[0]?.message?.content);
cURL
curl https://api.luminapath.tech/v1/chat/completions \
  -H "Authorization: Bearer ***" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "qwen3-next-80b-a3b",
    "messages": [{"role":"user","content":"Summarize why this model is a fit for my workload"}]
  }'
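To take advantage of the batch lane, the usual OpenAI-compatible pattern is a JSONL file of request lines submitted to a Batch endpoint. Whether this provider exposes the Batch API is an assumption to verify in its docs; the sketch below only builds request lines locally, in the standard OpenAI batch shape:

```python
import json

# Build OpenAI-style batch request lines (JSONL) locally.
# NOTE: assumes the provider supports the OpenAI-compatible Batch API;
# check the provider docs before uploading.
def batch_line(custom_id: str, prompt: str) -> str:
    """One JSONL request line targeting the chat completions endpoint."""
    return json.dumps({
        "custom_id": custom_id,
        "method": "POST",
        "url": "/v1/chat/completions",
        "body": {
            "model": "qwen3-next-80b-a3b",
            "messages": [{"role": "user", "content": prompt}],
        },
    })

prompts = ["Summarize doc A", "Summarize doc B"]
jsonl = "\n".join(batch_line(f"req-{i}", p) for i, p in enumerate(prompts))
print(jsonl)
```

Each line carries its own `custom_id`, so results can be matched back to inputs when the batch completes.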

Specs

Architecture

MoE Transformer

Vendor group

Qwen

Context window

256K

Max output

32K

Best for

featured, qwen

Related models

  • Qwen3-Coder-30B-A3B (qwen3-coder-30b-a3b, Qwen): Lower-cost Qwen coder lane for tool-using apps, copilots, and internal agents
  • DeepSeek V3.2 (deepseek-v3-2, DeepSeek): General-purpose DeepSeek flagship with strong price-performance for managed and batch workloads
  • GPT-OSS-120B (gpt-oss-120b, OpenAI OSS): Open-weight GPT-OSS route for low-cost general inference and experimentation
  • DeepSeek V4 Flash (deepseek-v4-flash, DeepSeek): Fast production DeepSeek route with standard, Asia, and batch pricing lanes