A large catalog of hosted LLMs, compatible with the OpenAI API

All models

320 models · updated 2025-07-23
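Every model below is served through a single OpenAI-compatible endpoint, so any OpenAI SDK can reach it by swapping the base URL and model ID. A minimal sketch in Python; the base URL and environment-variable name are placeholders, not the service's real values:

```python
import os
from openai import OpenAI

# Hypothetical endpoint and key variable; substitute your account's values.
client = OpenAI(
    base_url="https://api.example-aggregator.com/v1",
    api_key=os.environ["AGGREGATOR_API_KEY"],
)

# Any model ID from the list below works here.
resp = client.chat.completions.create(
    model="mistralai/mixtral-8x22b-instruct",
    messages=[{"role": "user", "content": "Say hello in French."}],
)
print(resp.choices[0].message.content)
```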
Mistral: Mixtral 8x22B Instruct
Input: $0.0036/1k
Output: $0.0036/1k
mistralai/mixtral-8x22b-instruct
Mistral's official instruct fine-tuned version of Mixtral 8x22B. It uses 39B active parameters out of 141B, offering unparalleled cost efficiency for its size. Its strengths include:
- strong math, coding, and reasoning
- large context length (64k)
- fluency in English, French, Italian, German, and Spanish

See benchmarks in the launch announcement. Tags: moe
Released: 2024-04-17 · Context: 65,536 tokens · Modality: text->text · Provider: Mistral
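Prices are quoted per 1,000 tokens, billed separately for input and output. A small sketch of turning the usage block an OpenAI-compatible API returns into a dollar cost, at the Mixtral 8x22B rates listed above:

```python
INPUT_PRICE_PER_1K = 0.0036   # $ per 1k input (prompt) tokens
OUTPUT_PRICE_PER_1K = 0.0036  # $ per 1k output (completion) tokens

def request_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Dollar cost of one request at the listed per-1k rates."""
    return (prompt_tokens / 1000) * INPUT_PRICE_PER_1K \
         + (completion_tokens / 1000) * OUTPUT_PRICE_PER_1K

# e.g. resp.usage.prompt_tokens == 2000, resp.usage.completion_tokens == 500:
print(f"${request_cost(2000, 500):.4f}")  # -> $0.0090
```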
Mistral: Mistral Small 3.2 24B (free)
Input: $0.0000/1k
Output: $0.0000/1k
mistralai/mistral-small-3.2-24b-instruct:free
Mistral-Small-3.2-24B-Instruct-2506 is an updated 24B parameter model from Mistral optimized for instruction following, repetition reduction, and improved function calling. Compared to the 3.1 release, version 3.2 significantly improves accuracy on WildBench and Arena Hard, reduces infinite generations, and delivers gains in tool use and structured output tasks. It supports image and text inputs with structured outputs, function/tool calling, and strong performance across coding (HumanEval+, MBPP), STEM (MMLU, MATH, GPQA), and vision benchmarks (ChartQA, DocVQA).
Released: 2025-06-21 · Context: 96,000 tokens · Modality: text+image->text · Provider: Mistral
Mistral: Mistral Small 3.2 24B
Input: $0.0002/1k
Output: $0.0004/1k
mistralai/mistral-small-3.2-24b-instruct
Mistral-Small-3.2-24B-Instruct-2506 is an updated 24B parameter model from Mistral optimized for instruction following, repetition reduction, and improved function calling. Compared to the 3.1 release, version 3.2 significantly improves accuracy on WildBench and Arena Hard, reduces infinite generations, and delivers gains in tool use and structured output tasks. It supports image and text inputs with structured outputs, function/tool calling, and strong performance across coding (HumanEval+, MBPP), STEM (MMLU, MATH, GPQA), and vision benchmarks (ChartQA, DocVQA).
Released: 2025-06-21 · Context: 128,000 tokens · Modality: text+image->text · Provider: Mistral
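Since Mistral Small 3.2 accepts text+image input, images can be passed using the standard OpenAI content-parts message format. A minimal sketch reusing the `client` from the first example; the image URL is a placeholder:

```python
resp = client.chat.completions.create(
    model="mistralai/mistral-small-3.2-24b-instruct",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Summarize this chart in one sentence."},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/chart.png"}},
        ],
    }],
)
print(resp.choices[0].message.content)
```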
Mistral: Mistral Small 3.1 24B (free)
Input: $0.0000/1k
Output: $0.0000/1k
mistralai/mistral-small-3.1-24b-instruct:free
Mistral Small 3.1 24B Instruct is an upgraded variant of Mistral Small 3 (2501), featuring 24 billion parameters with advanced multimodal capabilities. It provides state-of-the-art performance in text-based reasoning and vision tasks, including image analysis, programming, mathematical reasoning, and multilingual support across dozens of languages. Equipped with an extensive 128k token context window and optimized for efficient local inference, it supports use cases such as conversational agents, function calling, long-document comprehension, and privacy-sensitive deployments. The updated version is Mistral Small 3.2.
Released: 2025-03-18 · Context: 128,000 tokens · Modality: text+image->text · Provider: Mistral
Mistral: Mistral Small 3.1 24B
Input: $0.0001/1k
Output: $0.0001/1k
mistralai/mistral-small-3.1-24b-instruct
Mistral Small 3.1 24B Instruct is an upgraded variant of Mistral Small 3 (2501), featuring 24 billion parameters with advanced multimodal capabilities. It provides state-of-the-art performance in text-based reasoning and vision tasks, including image analysis, programming, mathematical reasoning, and multilingual support across dozens of languages. Equipped with an extensive 128k token context window and optimized for efficient local inference, it supports use cases such as conversational agents, function calling, long-document comprehension, and privacy-sensitive deployments. The updated version is Mistral Small 3.2.
Released: 2025-03-18 · Context: 96,000 tokens · Modality: text+image->text · Provider: Mistral
Mistral: Mistral Small 3 (free)
Input: $0.0000/1k
Output: $0.0000/1k
mistralai/mistral-small-24b-instruct-2501:free
Mistral Small 3 is a 24B-parameter language model optimized for low-latency performance across common AI tasks. Released under the Apache 2.0 license, it features both pre-trained and instruction-tuned versions designed for efficient local deployment. The model achieves 81% accuracy on the MMLU benchmark and performs competitively with larger models like Llama 3.3 70B and Qwen 32B, while operating at three times the speed on equivalent hardware. See Mistral's launch blog post for details.
Released: 2025-01-31 · Context: 32,768 tokens · Modality: text->text · Provider: Mistral
Mistral: Mistral Small 3
Input: $0.0001/1k
Output: $0.0001/1k
mistralai/mistral-small-24b-instruct-2501
Mistral Small 3 is a 24B-parameter language model optimized for low-latency performance across common AI tasks. Released under the Apache 2.0 license, it features both pre-trained and instruction-tuned versions designed for efficient local deployment. The model achieves 81% accuracy on the MMLU benchmark and performs competitively with larger models like Llama 3.3 70B and Qwen 32B, while operating at three times the speed on equivalent hardware. See Mistral's launch blog post for details.
Released: 2025-01-31 · Context: 32,768 tokens · Modality: text->text · Provider: Mistral
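Because Small 3 is positioned for low-latency use, streaming is the natural way to consume it; an OpenAI-compatible endpoint should accept the standard `stream=True` flag. A sketch, again reusing the `client` from the first example:

```python
stream = client.chat.completions.create(
    model="mistralai/mistral-small-24b-instruct-2501",
    messages=[{"role": "user", "content": "List three uses for a 24B model."}],
    stream=True,
)
# Chunks arrive incrementally; print tokens as they come in.
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
print()
```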
Mistral: Mistral Nemo (free)
Input: $0.0000/1k
Output: $0.0000/1k
mistralai/mistral-nemo:free
A 12B parameter model with a 128k token context length built by Mistral in collaboration with NVIDIA. The model is multilingual, supporting English, French, German, Spanish, Italian, Portuguese, Chinese, Japanese, Korean, Arabic, and Hindi. It supports function calling and is released under the Apache 2.0 license.
Released: 2024-07-19 · Context: 131,072 tokens · Modality: text->text · Provider: Mistral
Mistral: Mistral Nemo
Input: $0.0000/1k
Output: $0.0002/1k
mistralai/mistral-nemo
A 12B parameter model with a 128k token context length built by Mistral in collaboration with NVIDIA. The model is multilingual, supporting English, French, German, Spanish, Italian, Portuguese, Chinese, Japanese, Korean, Arabic, and Hindi. It supports function calling and is released under the Apache 2.0 license.
Released: 2024-07-19 · Context: 32,000 tokens · Modality: text->text · Provider: Mistral
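Mistral Nemo advertises function calling, which maps onto the OpenAI `tools` parameter. A hedged sketch with a made-up `get_weather` tool, reusing the `client` from the first example:

```python
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool, for illustration only
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = client.chat.completions.create(
    model="mistralai/mistral-nemo",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
)

# If the model chose to call the tool, arguments arrive as a JSON string.
for call in resp.choices[0].message.tool_calls or []:
    print(call.function.name, call.function.arguments)
```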
Mistral: Mistral Medium 3
Input: $0.0016/1k
Output: $0.0080/1k
mistralai/mistral-medium-3
Mistral Medium 3 is a high-performance enterprise-grade language model designed to deliver frontier-level capabilities at significantly reduced operational cost. It balances state-of-the-art reasoning and multimodal performance with 8× lower cost compared to traditional large models, making it suitable for scalable deployments across professional and industrial use cases. The model excels in domains such as coding, STEM reasoning, and enterprise adaptation. It supports hybrid, on-prem, and in-VPC deployments and is optimized for integration into custom workflows. Mistral Medium 3 offers competitive accuracy relative to larger models like Claude Sonnet 3.5/3.7, Llama 4 Maverick, and Command R+, while maintaining broad compatibility across cloud environments.
Released: 2025-05-07 · Context: 131,072 tokens · Modality: text+image->text · Provider: Mistral
Mistral: Mistral 7B Instruct v0.3
mistralai/mistral-7b-instruct-v0.3
A high-performing, industry-standard 7.3B parameter model, with optimizations for speed and context length. An improved version of Mistral 7B Instruct v0.2, with the following changes:
- Extended vocabulary to 32768
- Supports v3 Tokenizer
- Supports function calling

NOTE: Support for function calling depends on the provider.
Released: 2024-05-27 · Context: 32,768 tokens · Modality: text->text · Provider: Mistral
Mistral: Mistral 7B Instruct v0.2
mistralai/mistral-7b-instruct-v0.2
A high-performing, industry-standard 7.3B parameter model, with optimizations for speed and context length. An improved version of Mistral 7B Instruct, with the following changes:
- 32k context window (vs 8k context in v0.1)
- Rope-theta = 1e6
- No Sliding-Window Attention

Released: 2023-12-28 · Context: 32,768 tokens · Modality: text->text · Provider: Mistral