Massive selection of online large models, compatible with the OpenAI API

All models

228 models · Updated 2025-02-09
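Every model below can be called through the same OpenAI-compatible chat completions endpoint by passing its listed ID as the `model` parameter. The snippet below is a minimal sketch using the official `openai` Python SDK; the base URL and API key are placeholders, not values from this listing.

```python
from openai import OpenAI

# Point the OpenAI SDK at the OpenAI-compatible gateway.
# Base URL and API key are placeholders -- substitute your own.
client = OpenAI(
    base_url="https://your-gateway.example.com/v1",
    api_key="YOUR_API_KEY",
)

# Any model ID from this list works here, e.g. Mistral Small 3.
response = client.chat.completions.create(
    model="mistralai/mistral-small-24b-instruct-2501",
    messages=[
        {"role": "user", "content": "Summarize the Apache 2.0 license in one sentence."}
    ],
    max_tokens=256,
)

print(response.choices[0].message.content)
```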
Mistral: Mixtral 8x22B Instruct
$0.0036/1k
$0.0036/1k
mistralai/mixtral-8x22b-instruct
Mistral’s official instruct fine-tuned version of Mixtral 8x22B. It uses 39B active parameters out of 141B, offering unparalleled cost efficiency for its size. Its strengths include:
- strong math, coding, and reasoning
- large context length (64k)
- fluency in English, French, Italian, German, and Spanish
See benchmarks on the launch announcement here. Tag: moe
2024-04-17 65,536 text->text Mistral
Mistral: Mistral Small 3
$0.0003/1k
$0.0006/1k
mistralai/mistral-small-24b-instruct-2501
Mistral Small 3 is a 24B-parameter language model optimized for low-latency performance across common AI tasks. Released under the Apache 2.0 license, it features both pre-trained and instruction-tuned versions designed for efficient local deployment. The model achieves 81% accuracy on the MMLU benchmark and performs competitively with larger models like Llama 3.3 70B and Qwen 32B, while operating at three times the speed on equivalent hardware. Read the blog post about the model here.
2025-01-31 32,768 text->text Mistral
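Prices in this list are per 1k tokens, billed separately for input (first figure) and output (second figure). As a worked example under the Mistral Small 3 rates above ($0.0003/1k input, $0.0006/1k output), the sketch below estimates the cost of a single request; the token counts are made up for illustration.

```python
# Hypothetical request: 2,000 prompt tokens, 500 completion tokens.
prompt_tokens = 2_000
completion_tokens = 500

input_price_per_1k = 0.0003   # $ per 1k input tokens (Mistral Small 3)
output_price_per_1k = 0.0006  # $ per 1k output tokens (Mistral Small 3)

cost = (prompt_tokens / 1000) * input_price_per_1k \
     + (completion_tokens / 1000) * output_price_per_1k

print(f"Estimated cost: ${cost:.6f}")  # $0.000900
```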
Mistral: Mistral Nemo
$0.0001/1k
$0.0003/1k
mistralai/mistral-nemo
A 12B parameter model with a 128k token context length built by Mistral in collaboration with NVIDIA. The model is multilingual, supporting English, French, German, Spanish, Italian, Portuguese, Chinese, Japanese, Korean, Arabic, and Hindi. It supports function calling and is released under the Apache 2.0 license.
2024-07-19 131,072 text->text Mistral
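Mistral Nemo's function-calling support can be exercised through the standard OpenAI tools interface. The sketch below is illustrative only: the gateway base URL, API key, and the get_weather tool are assumptions, and whether tool calls are honored depends on the provider serving the model.

```python
from openai import OpenAI

client = OpenAI(base_url="https://your-gateway.example.com/v1", api_key="YOUR_API_KEY")

# Hypothetical tool definition in the OpenAI "tools" format.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="mistralai/mistral-nemo",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
)

# If the model decides to call the tool, the call shows up here.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, call.function.arguments)
```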
mistralai/mistral-7b-instruct-v0.3
A high-performing, industry-standard 7.3B parameter model, with optimizations for speed and context length. An improved version of Mistral 7B Instruct v0.2, with the following changes:
- Extended vocabulary to 32768
- Supports v3 Tokenizer
- Supports function calling
NOTE: Support for function calling depends on the provider.
2024-05-27 32,768 text->text Mistral
mistralai/mistral-7b-instruct-v0.1
A 7.3B parameter model that outperforms Llama 2 13B on all benchmarks, with optimizations for speed and context length.
2023-09-28 32,768 text->text Mistral
mistralai/mistral-7b-instruct:free
A high-performing, industry-standard 7.3B parameter model, with optimizations for speed and context length. Mistral 7B Instruct has multiple version variants, and this is intended to be the latest version.
2024-05-27 8,192 text->text Mistral
Mistral: Mistral 7B Instruct
$0.0001/1k
$0.0002/1k
mistralai/mistral-7b-instruct
A high-performing, industry-standard 7.3B parameter model, with optimizations for speed and context length. Mistral 7B Instruct has multiple version variants, and this is intended to be the latest version.
2024-05-27 32,768 text->text Mistral
Mistral: Ministral 8B
$0.0004/1k
$0.0004/1k
mistralai/ministral-8b
Ministral 8B is an 8B parameter model featuring a unique interleaved sliding-window attention pattern for faster, memory-efficient inference. Designed for edge use cases, it supports up to 128k context length and excels in knowledge and reasoning tasks. It outperforms peers in the sub-10B category, making it perfect for low-latency, privacy-first applications.
2024-10-17 128,000 text->text Mistral
Mistral: Ministral 3B
$0.0002/1k
$0.0002/1k
mistralai/ministral-3b
Ministral 3B is a 3B parameter model optimized for on-device and edge computing. It excels in knowledge, commonsense reasoning, and function-calling, outperforming larger models like Mistral 7B on most benchmarks. Supporting up to 128k context length, it’s ideal for orchestrating agentic workflows and specialist tasks with efficient inference.
2024-10-17 128,000 text->text Mistral
Mistral: Codestral Mamba
$0.0010/1k
$0.0010/1k
mistralai/codestral-mamba
A 7.3B parameter Mamba-based model designed for code and reasoning tasks.
- Linear-time inference, allowing for theoretically infinite sequence lengths
- 256k token context window
- Optimized for quick responses, especially beneficial for code productivity
- Performs comparably to state-of-the-art transformer models in code and reasoning tasks
- Available under the Apache 2.0 license for free use, modification, and distribution
2024-07-19 256,000 text->text Mistral
Mistral: Codestral 2501
$0.0012/1k
$0.0036/1k
mistralai/codestral-2501
Mistral’s cutting-edge language model for coding. Codestral specializes in low-latency, high-frequency tasks such as fill-in-the-middle (FIM), code correction and test generation. Learn more on their blog post: https://mistral.ai/news/codestral-2501/
2025-01-15 256,000 text->text Mistral
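Fill-in-the-middle (FIM) means the model completes the code between a given prefix and suffix. One common way to express this through an OpenAI-compatible gateway is the legacy completions endpoint with a `suffix` parameter, as sketched below; whether that route (rather than chat completions) is exposed for Codestral is an assumption about the gateway, not something this listing states.

```python
from openai import OpenAI

client = OpenAI(base_url="https://your-gateway.example.com/v1", api_key="YOUR_API_KEY")

prefix = "def fibonacci(n: int) -> int:\n    "
suffix = "\n\nprint(fibonacci(10))"

# Assumed FIM route: legacy completions with the code before the gap as the
# prompt and the code after the gap as the suffix.
response = client.completions.create(
    model="mistralai/codestral-2501",
    prompt=prefix,
    suffix=suffix,
    max_tokens=128,
    temperature=0.0,
)

print(prefix + response.choices[0].text + suffix)
```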
Mistral Tiny
$0.0010/1k
$0.0010/1k
mistralai/mistral-tiny
This model is currently powered by Mistral-7B-v0.2 and incorporates an improved fine-tune over the original Mistral 7B, inspired by community work. It is best used for large-batch processing tasks where cost is a significant factor but reasoning capabilities are not crucial.
2024-01-10 32,000 text->text Mistral