A large catalog of hosted large language models, compatible with the OpenAI API

All models

349 models · Updated 2025-12-17
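Because the service is OpenAI-API compatible, any model id from the catalog below can be dropped into a standard `/v1/chat/completions` request. A minimal sketch of building such a request body — the `BASE_URL` and `API_KEY` values are placeholders, not real endpoints:

```python
import json

# Placeholders -- substitute your own endpoint and key.
BASE_URL = "https://api.example.com/v1"
API_KEY = "sk-..."

def build_chat_request(model: str, user_message: str) -> dict:
    """Build an OpenAI-compatible /chat/completions request body."""
    return {
        "model": model,  # any model id from the catalog, e.g. "mistralai/mistral-nemo"
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_message},
        ],
        "temperature": 0.7,
    }

payload = build_chat_request("mistralai/mistral-nemo", "Hello!")
print(json.dumps(payload, indent=2))
```

POST this JSON to `{BASE_URL}/chat/completions` with an `Authorization: Bearer {API_KEY}` header, exactly as with the OpenAI API.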
Mistral: Mistral Nemo
Input: $0.0001/1k tokens
Output: $0.0002/1k tokens
mistralai/mistral-nemo
A 12B parameter model with a 128k token context length built by Mistral in collaboration with NVIDIA. The model is multilingual, supporting English, French, German, Spanish, Italian, Portuguese, Chinese, Japanese, Korean, Arabic, and Hindi. It supports function calling and is released under the Apache 2.0 license.
Released 2024-07-19 · Context: 131,072 tokens · text->text · Mistral
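The two per-1k-token prices listed with each model can be turned into a simple cost estimate. This sketch assumes the first figure is the input (prompt) price and the second the output (completion) price, following the common two-price layout:

```python
# USD per 1k tokens, as listed in this catalog:
# (input/prompt price, output/completion price) -- an assumption
# based on the usual ordering of the two figures.
PRICES = {
    "mistralai/mistral-nemo": (0.0001, 0.0002),
    "mistralai/mistral-medium-3.1": (0.0016, 0.0080),
}

def estimate_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Estimate request cost in USD from per-1k-token prices."""
    in_price, out_price = PRICES[model]
    return prompt_tokens / 1000 * in_price + completion_tokens / 1000 * out_price

# e.g. 2,000 prompt tokens + 500 completion tokens on Mistral Nemo
# comes to about $0.0003.
print(estimate_cost("mistralai/mistral-nemo", 2000, 500))
```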
Mistral: Mistral Medium 3.1
Input: $0.0016/1k tokens
Output: $0.0080/1k tokens
mistralai/mistral-medium-3.1
Mistral Medium 3.1 is an updated version of Mistral Medium 3, a high-performance enterprise-grade language model designed to deliver frontier-level capabilities at significantly reduced operational cost. It balances state-of-the-art reasoning and multimodal performance with 8× lower cost compared to traditional large models, making it suitable for scalable deployments across professional and industrial use cases. The model excels in domains such as coding, STEM reasoning, and enterprise adaptation. It supports hybrid, on-prem, and in-VPC deployments and is optimized for integration into custom workflows. Mistral Medium 3.1 offers competitive accuracy relative to larger models like Claude Sonnet 3.5/3.7, Llama 4 Maverick, and Command R+, while maintaining broad compatibility across cloud environments.
Released 2025-08-13 · Context: 131,072 tokens · text+image->text · Mistral
Mistral: Mistral Medium 3
Input: $0.0016/1k tokens
Output: $0.0080/1k tokens
mistralai/mistral-medium-3
Mistral Medium 3 is a high-performance enterprise-grade language model designed to deliver frontier-level capabilities at significantly reduced operational cost. It balances state-of-the-art reasoning and multimodal performance with 8× lower cost compared to traditional large models, making it suitable for scalable deployments across professional and industrial use cases. The model excels in domains such as coding, STEM reasoning, and enterprise adaptation. It supports hybrid, on-prem, and in-VPC deployments and is optimized for integration into custom workflows. Mistral Medium 3 offers competitive accuracy relative to larger models like Claude Sonnet 3.5/3.7, Llama 4 Maverick, and Command R+, while maintaining broad compatibility across cloud environments.
Released 2025-05-07 · Context: 131,072 tokens · text+image->text · Mistral
Mistral: Mistral Large 3 2512
Input: $0.0020/1k tokens
Output: $0.0060/1k tokens
mistralai/mistral-large-2512
Mistral Large 3 2512 is Mistral’s most capable model to date, featuring a sparse mixture-of-experts architecture with 41B active parameters (675B total), and released under the Apache 2.0 license.
Released 2025-12-02 · Context: 262,144 tokens · text+image->text · Mistral
mistralai/mistral-7b-instruct-v0.3
A high-performing, industry-standard 7.3B parameter model, with optimizations for speed and context length. An improved version of Mistral 7B Instruct v0.2, with the following changes: extended vocabulary to 32,768; support for the v3 tokenizer; support for function calling. Note: support for function calling depends on the provider.
Released 2024-05-27 · Context: 32,768 tokens · text->text · Mistral
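Mistral 7B Instruct v0.3 advertises function calling, which in the OpenAI-compatible API is expressed through the `tools` field of a chat-completions request. A minimal sketch — the `get_weather` tool is a made-up illustration, not a real API:

```python
import json

def build_tool_call_request(model: str, user_message: str) -> dict:
    """Build an OpenAI-compatible request advertising one tool.

    The get_weather tool below is hypothetical, shown only to
    illustrate the tools/function schema shape.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "tools": [
            {
                "type": "function",
                "function": {
                    "name": "get_weather",
                    "description": "Look up current weather for a city.",
                    "parameters": {
                        "type": "object",
                        "properties": {"city": {"type": "string"}},
                        "required": ["city"],
                    },
                },
            }
        ],
        "tool_choice": "auto",  # let the model decide whether to call the tool
    }

req = build_tool_call_request("mistralai/mistral-7b-instruct-v0.3",
                              "What's the weather in Paris?")
print(json.dumps(req, indent=2))
```

If the model chooses to call the tool, the response contains a `tool_calls` entry with the function name and JSON-encoded arguments; whether this works at all depends on the provider, as the listing notes.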
mistralai/mistral-7b-instruct-v0.2
A high-performing, industry-standard 7.3B parameter model, with optimizations for speed and context length. An improved version of Mistral 7B Instruct, with the following changes: 32k context window (vs. 8k in v0.1); rope-theta = 1e6; no sliding-window attention.
Released 2023-12-28 · Context: 32,768 tokens · text->text · Mistral
mistralai/mistral-7b-instruct-v0.1
A 7.3B parameter model that outperforms Llama 2 13B on all benchmarks, with optimizations for speed and context length.
Released 2023-09-28 · Context: 2,824 tokens · text->text · Mistral
mistralai/mistral-7b-instruct:free
A high-performing, industry-standard 7.3B parameter model, with optimizations for speed and context length. Mistral 7B Instruct has multiple version variants, and this is intended to be the latest version.
Released 2024-05-27 · Context: 32,768 tokens · text->text · Mistral
Mistral: Mistral 7B Instruct
Input: $0.0001/1k tokens
Output: $0.0002/1k tokens
mistralai/mistral-7b-instruct
A high-performing, industry-standard 7.3B parameter model, with optimizations for speed and context length. Mistral 7B Instruct has multiple version variants, and this is intended to be the latest version.
Released 2024-05-27 · Context: 32,768 tokens · text->text · Mistral
Mistral: Ministral 8B
Input: $0.0004/1k tokens
Output: $0.0004/1k tokens
mistralai/ministral-8b
Ministral 8B is an 8B parameter model featuring a unique interleaved sliding-window attention pattern for faster, memory-efficient inference. Designed for edge use cases, it supports up to 128k context length and excels in knowledge and reasoning tasks. It outperforms peers in the sub-10B category, making it perfect for low-latency, privacy-first applications.
Released 2024-10-17 · Context: 131,072 tokens · text->text · Mistral
Mistral: Ministral 3B
Input: $0.0002/1k tokens
Output: $0.0002/1k tokens
mistralai/ministral-3b
Ministral 3B is a 3B parameter model optimized for on-device and edge computing. It excels in knowledge, commonsense reasoning, and function-calling, outperforming larger models like Mistral 7B on most benchmarks. Supporting up to 128k context length, it’s ideal for orchestrating agentic workflows and specialist tasks with efficient inference.
Released 2024-10-17 · Context: 131,072 tokens · text->text · Mistral
Mistral: Ministral 3 8B 2512
Input: $0.0006/1k tokens
Output: $0.0006/1k tokens
mistralai/ministral-8b-2512
A balanced model in the Ministral 3 family, Ministral 3 8B is a powerful, efficient small language model with vision capabilities.
Released 2025-12-02 · Context: 262,144 tokens · text+image->text · Mistral