A large catalog of online large models, compatible with the OpenAI API

MiniMax: MiniMax M1

Input: $0.0012 / 1K tokens
Output: $0.0066 / 1K tokens
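At these rates, per-request cost scales linearly with token counts. A minimal cost estimate, assuming the lower rate applies to input tokens and the higher to output tokens:

```python
# Estimate request cost for minimax/minimax-m1 at the listed rates.
# Assumption: $0.0012/1K applies to input tokens, $0.0066/1K to output.
INPUT_PRICE_PER_1K = 0.0012
OUTPUT_PRICE_PER_1K = 0.0066

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of a single request."""
    return (input_tokens / 1000) * INPUT_PRICE_PER_1K \
         + (output_tokens / 1000) * OUTPUT_PRICE_PER_1K

# Example: a 100K-token prompt with a 4K-token reply.
print(f"${estimate_cost(100_000, 4_000):.4f}")  # $0.1464
```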
minimax/minimax-m1
Context length: 1,000,000 · Modality: text->text · Tokenizer: Other · Updated 2025-06-18
MiniMax-M1 is a large-scale, open-weight reasoning model designed for extended context and high-efficiency inference. It pairs a hybrid Mixture-of-Experts (MoE) architecture with a custom "lightning attention" mechanism, allowing it to process sequences of up to 1 million tokens while maintaining competitive FLOP efficiency. With 456 billion total parameters and 45.9B active per token, the model is optimized for complex, multi-step reasoning tasks. Trained via a custom reinforcement-learning pipeline (CISPO), M1 excels at long-context understanding, software engineering, agentic tool use, and mathematical reasoning. Benchmarks show strong performance across FullStackBench, SWE-bench, MATH, GPQA, and TAU-Bench, often outperforming other open models such as DeepSeek R1 and Qwen3-235B.
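Since the service exposes an OpenAI-compatible API, the model can be called with the standard `openai` Python client. A minimal sketch, assuming a placeholder base URL and an API key in a hypothetical `MODEL_API_KEY` environment variable (substitute the provider's actual values):

```python
import os

from openai import OpenAI

# Hypothetical endpoint and key; replace with the provider's real values.
client = OpenAI(
    base_url="https://api.example.com/v1",
    api_key=os.environ["MODEL_API_KEY"],
)

response = client.chat.completions.create(
    model="minimax/minimax-m1",
    messages=[
        {"role": "user", "content": "Summarize the CISPO training approach."},
    ],
    max_tokens=4000,  # must stay within the 40,000-token reply limit
)
print(response.choices[0].message.content)
```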

Model Parameters

Architecture

Modality: text->text
Tokenizer: Other

Limits

Context length: 1,000,000 tokens
Max completion length: 40,000 tokens
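Requests have to respect both limits. A small client-side sketch, assuming the context length caps prompt and completion tokens combined and that token counts are obtained elsewhere (tokenization is provider-specific):

```python
CONTEXT_LIMIT = 1_000_000   # assumed to cover prompt + completion combined
MAX_COMPLETION = 40_000     # maximum completion tokens per reply

def clamp_max_tokens(prompt_tokens: int, requested: int) -> int:
    """Clamp a requested completion budget to the model's limits."""
    budget = min(requested, MAX_COMPLETION, CONTEXT_LIMIT - prompt_tokens)
    if budget <= 0:
        raise ValueError("Prompt already exhausts the context window.")
    return budget

print(clamp_max_tokens(prompt_tokens=950_000, requested=60_000))  # 40000
```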