Massive catalog of online large models, compatible with the OpenAI API

All models

326 models · Updated 2025-09-17
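Since every model in this catalog is exposed through an OpenAI-compatible API, any OpenAI-style client can be used by pointing it at the provider's endpoint and passing one of the model IDs listed below. A minimal sketch with the official openai Python SDK; the base URL and API key are placeholders (the real endpoint is not shown on this page), and qwen/qwen-turbo is simply one ID taken from the list.

```python
# Minimal sketch: calling a catalog model through an OpenAI-compatible endpoint.
# The base URL and API key are placeholders, not values from this page.
from openai import OpenAI

client = OpenAI(
    base_url="https://example-provider.com/v1",  # assumed endpoint; replace with the provider's URL
    api_key="YOUR_API_KEY",                      # placeholder credential
)

response = client.chat.completions.create(
    model="qwen/qwen-turbo",  # any model ID from the list below
    messages=[{"role": "user", "content": "Give a one-sentence summary of what a context window is."}],
)
print(response.choices[0].message.content)
```

The same call shape works for every text->text model below; only the model string changes.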
qwen/qwen2.5-vl-32b-instruct:free
Qwen2.5-VL-32B is a multimodal vision-language model fine-tuned through reinforcement learning for enhanced mathematical reasoning, structured outputs, and visual problem-solving capabilities. It excels at visual analysis tasks, including object recognition, textual interpretation within images, and precise event localization in extended videos. Qwen2.5-VL-32B demonstrates state-of-the-art performance across multimodal benchmarks such as MMMU, MathVista, and VideoMME, while maintaining strong reasoning and clarity in text-based tasks like MMLU, mathematical problem-solving, and code generation.
Released 2025-03-25 · Context 8,192 tokens · text+image->text · Qwen
Qwen: Qwen2.5 VL 32B Instruct
Input $0.0002/1k · Output $0.0006/1k
qwen/qwen2.5-vl-32b-instruct
Qwen2.5-VL-32B is a multimodal vision-language model fine-tuned through reinforcement learning for enhanced mathematical reasoning, structured outputs, and visual problem-solving capabilities. It excels at visual analysis tasks, including object recognition, textual interpretation within images, and precise event localization in extended videos. Qwen2.5-VL-32B demonstrates state-of-the-art performance across multimodal benchmarks such as MMMU, MathVista, and VideoMME, while maintaining strong reasoning and clarity in text-based tasks like MMLU, mathematical problem-solving, and code generation.
Released 2025-03-25 · Context 16,384 tokens · text+image->text · Qwen
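For the text+image->text entries such as qwen/qwen2.5-vl-32b-instruct, the OpenAI chat format accepts mixed content parts, so an image can be attached alongside the text prompt. A sketch under the same placeholder endpoint/key assumptions as above; the image URL is illustrative only, and whether this provider also accepts base64 data URLs is not stated on this page.

```python
# Sketch: sending an image plus a text question to a vision-language model.
# Endpoint, key, and image URL are placeholders.
from openai import OpenAI

client = OpenAI(base_url="https://example-provider.com/v1", api_key="YOUR_API_KEY")

response = client.chat.completions.create(
    model="qwen/qwen2.5-vl-32b-instruct",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What objects are visible in this image?"},
            {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
        ],
    }],
)
print(response.choices[0].message.content)
```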
Qwen: Qwen-Turbo
Input $0.0002/1k · Output $0.0008/1k
qwen/qwen-turbo
Qwen-Turbo, based on Qwen2.5, is a 1M-token-context model that offers fast responses at low cost, suitable for simple tasks.
Released 2025-02-01 · Context 1,000,000 tokens · text->text · Qwen
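The listed prices appear to be quoted per 1,000 tokens, so a request's cost is roughly prompt_tokens/1000 times the input price plus completion_tokens/1000 times the output price. A small sketch using Qwen-Turbo's rates from the entry above ($0.0002/1k in, $0.0008/1k out); in practice the token counts come from the usage field of the API response.

```python
# Sketch: estimating the dollar cost of one request from its token counts,
# using Qwen-Turbo's listed prices (assumed to be USD per 1,000 tokens).
INPUT_PRICE_PER_1K = 0.0002   # from the listing above
OUTPUT_PRICE_PER_1K = 0.0008  # from the listing above

def estimate_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Estimated cost in dollars for a single request."""
    return (prompt_tokens / 1000) * INPUT_PRICE_PER_1K \
         + (completion_tokens / 1000) * OUTPUT_PRICE_PER_1K

# Example: a 2,000-token prompt with a 500-token completion
# 2.0 * 0.0002 + 0.5 * 0.0008 = 0.0008 dollars
print(f"${estimate_cost(2000, 500):.4f}")
```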
Qwen: Qwen-Plus
Input $0.0016/1k · Output $0.0048/1k
qwen/qwen-plus
Qwen-Plus, based on the Qwen2.5 foundation model, is a 131K-context model that balances performance, speed, and cost.
Released 2025-02-01 · Context 131,072 tokens · text->text · Qwen
Qwen: Qwen-Max
Input $0.0064/1k · Output $0.026/1k
qwen/qwen-max
Qwen-Max, based on Qwen2.5, provides the best inference performance among Qwen models, especially for complex multi-step tasks. It's a large-scale MoE model that has been pretrained on over 20 trillion tokens and further post-trained with curated Supervised Fine-Tuning (SFT) and Reinforcement Learning from Human Feedback (RLHF) methodologies. The parameter count is unknown.
Released 2025-02-01 · Context 32,768 tokens · text->text · Qwen
Qwen: Qwen VL Plus
Input $0.0008/1k · Output $0.0025/1k
qwen/qwen-vl-plus
Qwen's Enhanced Large Visual Language Model. It is significantly upgraded for detailed recognition and text recognition, supporting ultra-high resolutions of millions of pixels and extreme aspect ratios for image input, and delivers strong performance across a broad range of visual tasks.
Released 2025-02-05 · Context 7,500 tokens · text+image->text · Qwen
Qwen: Qwen VL Max
Input $0.0032/1k · Output $0.013/1k
qwen/qwen-vl-max
Qwen VL Max is a visual understanding model with a 7,500-token context length. It delivers optimal performance on a broad spectrum of complex tasks.
Released 2025-02-02 · Context 7,500 tokens · text+image->text · Qwen
Qwen: QwQ 32B Preview
Input $0.0008/1k · Output $0.0008/1k
qwen/qwq-32b-preview
QwQ-32B-Preview is an experimental research model from the Qwen Team focused on AI reasoning capabilities. As a preview release, it demonstrates promising analytical abilities but has several important limitations: language mixing and code-switching (it may mix or switch languages unexpectedly, affecting response clarity); recursive reasoning loops (it may enter circular reasoning patterns, producing lengthy responses without a conclusive answer); safety and ethical considerations (it requires enhanced safety measures, and users should exercise caution when deploying it); and performance and benchmark limitations (it excels at math and coding but has room for improvement in areas such as common-sense reasoning and nuanced language understanding).
Released 2024-11-28 · Context 32,768 tokens · text->text · Qwen
qwen/qwq-32b:free
QwQ is the reasoning model of the Qwen series. Compared with conventional instruction-tuned models, QwQ can think and reason, which yields significantly better performance on downstream tasks, especially hard problems. QwQ-32B is the medium-sized reasoning model, achieving performance competitive with state-of-the-art reasoning models such as DeepSeek-R1 and o1-mini.
Released 2025-03-06 · Context 32,768 tokens · text->text · Qwen
Qwen: QwQ 32B
Input $0.0006/1k · Output $0.0016/1k
qwen/qwq-32b
QwQ is the reasoning model of the Qwen series. Compared with conventional instruction-tuned models, QwQ can think and reason, which yields significantly better performance on downstream tasks, especially hard problems. QwQ-32B is the medium-sized reasoning model, achieving performance competitive with state-of-the-art reasoning models such as DeepSeek-R1 and o1-mini.
Released 2025-03-06 · Context 32,768 tokens · text->text · Qwen
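Because QwQ works through an explicit chain of thought before answering, its responses can be long; streaming the output token by token is usually more practical than waiting for the full completion. A minimal sketch, again assuming a placeholder OpenAI-compatible endpoint; some providers return the reasoning trace in a separate field, which this sketch does not handle.

```python
# Sketch: streaming a long reasoning response from QwQ-32B.
# Endpoint and key are placeholders, not values from this page.
from openai import OpenAI

client = OpenAI(base_url="https://example-provider.com/v1", api_key="YOUR_API_KEY")

stream = client.chat.completions.create(
    model="qwen/qwq-32b",
    messages=[{"role": "user", "content": "How many positive integers below 1000 are divisible by 7 but not by 11?"}],
    stream=True,
)
for chunk in stream:
    # Some chunks carry no content (e.g. role-only or final usage chunks).
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print()
```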
qwen/qwen-2.5-coder-32b-instruct:free
Qwen2.5-Coder is the latest series of code-specific Qwen large language models (formerly known as CodeQwen). Compared with CodeQwen1.5, it brings significant improvements in code generation, code reasoning, and code fixing; a more comprehensive foundation for real-world applications such as code agents; and stronger coding capabilities while maintaining its strengths in mathematics and general competencies. For evaluation results, see the Qwen2.5 Coder blog.
Released 2024-11-12 · Context 32,768 tokens · text->text · Qwen
Qwen2.5 Coder 32B Instruct
Input $0.0002/1k · Output $0.0006/1k
qwen/qwen-2.5-coder-32b-instruct
Qwen2.5-Coder is the latest series of code-specific Qwen large language models (formerly known as CodeQwen). Compared with CodeQwen1.5, it brings significant improvements in code generation, code reasoning, and code fixing; a more comprehensive foundation for real-world applications such as code agents; and stronger coding capabilities while maintaining its strengths in mathematics and general competencies. For evaluation results, see the Qwen2.5 Coder blog.
Released 2024-11-12 · Context 32,768 tokens · text->text · Qwen