Why pick it
- Standard pricing $0.19/M tokens, $0.14/M on the batch lane
- Good fit for bulk automation and routing fallback
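At those rates, a back-of-envelope cost check is straightforward. A minimal sketch, assuming the quoted figures are USD per million tokens (the helper name and the 500M-token job size are illustrative, not from this page):

```python
def batch_savings(tokens: int, standard_per_m: float = 0.19, batch_per_m: float = 0.14) -> dict:
    """Estimate job cost at standard vs. batch pricing (rates in USD per million tokens)."""
    standard = tokens / 1_000_000 * standard_per_m
    batch = tokens / 1_000_000 * batch_per_m
    return {"standard": standard, "batch": batch, "saved": standard - batch}

# 500M tokens of bulk work:
r = batch_savings(500_000_000)
print(f"${r['standard']:.2f} standard vs ${r['batch']:.2f} batch")  # → $95.00 standard vs $70.00 batch
```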
Xiaomi
mimo-v2-flash
Entry MiMo route for cost-sensitive high-volume tasks
- Params: V2 Flash
- Context: 128K
- Max Output: 32K
- License: Apache-2.0
- TTFT: 160ms
- Throughput: 120 tok/s
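The TTFT and throughput figures above translate into a rough wall-clock estimate for a single generation. A sketch under the stated numbers (real latency varies with load, prompt size, and region; the helper is illustrative):

```python
def estimated_latency_s(output_tokens: int, ttft_ms: float = 160.0, tok_per_s: float = 120.0) -> float:
    """Rough end-to-end time: time-to-first-token plus decode time at steady throughput."""
    return ttft_ms / 1000.0 + output_tokens / tok_per_s

# A 600-token answer at 160ms TTFT and 120 tok/s:
print(round(estimated_latency_s(600), 2))  # → 5.16
```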
Quick start
OpenAI-compatible surface: swap the base URL and ship.
Python

```python
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.luminapath.tech/v1",
    api_key=os.environ["BATCHIN_API_KEY"],
)
resp = client.chat.completions.create(
    model="mimo-v2-flash",
    messages=[{"role": "user", "content": "Summarize why this model is a fit for my workload"}],
)
print(resp.choices[0].message.content)
```

TypeScript

```typescript
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://api.luminapath.tech/v1",
  apiKey: process.env.BATCHIN_API_KEY,
});
const resp = await client.chat.completions.create({
  model: "mimo-v2-flash",
  messages: [{ role: "user", content: "Summarize why this model is a fit for my workload" }],
});
console.log(resp.choices[0]?.message?.content);
```

curl

```bash
curl https://api.luminapath.tech/v1/chat/completions \
  -H "Authorization: Bearer $BATCHIN_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "mimo-v2-flash",
    "messages": [{"role":"user","content":"Summarize why this model is a fit for my workload"}]
  }'
```

Specs
- Architecture: Dense Transformer
- Vendor group: Xiaomi
- Context window: 128K
- Max output: 32K
Best for
Related models
- Xiaomi mimo-v2-omni: Balanced MiMo route for multimodal-capable enterprise assistants
- Qwen qwen3-next-80b-a3b: Low-cost Qwen route for chat, batch generation, and API consolidation
- OpenAI OSS gpt-oss-20b: Small GPT-OSS lane for cheap assistants, workers, and fallback routes
- DeepSeek deepseek-v4-flash: Fast production DeepSeek route with standard, Asia, and batch pricing lanes
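The fallback routing mentioned above can be wired up client-side with the same OpenAI-compatible client. A minimal sketch, assuming the model names from this page; the helper and its retry policy are illustrative, not a platform feature:

```python
def complete_with_fallback(client, messages, models=("mimo-v2-flash", "gpt-oss-20b")):
    """Try each model in order, falling back to the next on any API error."""
    last_err = None
    for model in models:
        try:
            resp = client.chat.completions.create(model=model, messages=messages)
            return resp.choices[0].message.content
        except Exception as err:  # narrow to openai.APIError in real code
            last_err = err
    raise RuntimeError("all fallback models failed") from last_err

# Usage with the client from the quick start:
# complete_with_fallback(client, [{"role": "user", "content": "ping"}])
```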