Massive selection of online large models, compatible with the OpenAI API

All models

228 models · updated 2025-02-09
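Every model below is served behind an OpenAI-compatible endpoint, so the standard OpenAI SDK works with only a base URL change. A minimal sketch, assuming a hypothetical endpoint and placeholder API key (substitute your provider's real values):

```python
# Minimal sketch of calling a listed model through an OpenAI-compatible
# endpoint. The base_url and api_key are placeholders (assumptions);
# the model slug is taken from the catalog below.
from openai import OpenAI

client = OpenAI(
    base_url="https://example-provider.com/v1",  # hypothetical endpoint
    api_key="YOUR_API_KEY",                      # placeholder
)

response = client.chat.completions.create(
    model="mistralai/mixtral-8x7b-instruct",     # slug as listed below
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```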
Unslopnemo 12B
$0.0020/1k
$0.0020/1k
thedrummer/unslopnemo-12b
UnslopNemo v4.1 is the latest addition from the creator of Rocinante, designed for adventure writing and role-play scenarios.
2024-11-09 32,000 text->text Mistral
Toppy M 7B (free)
Free to use
undi95/toppy-m-7b:free
A wild 7B parameter model that merges several models using the new task_arithmetic merge method from mergekit. List of merged models:
- NousResearch/Nous-Capybara-7B-V1.9
- HuggingFaceH4/zephyr-7b-beta
- lemonilia/AshhLimaRP-Mistral-7B
- Vulkane/120-Days-of-Sodom-LoRA-Mistral-7b
- Undi95/Mistral-pippa-sharegpt-7b-qlora

#merge #uncensored
2023-11-10 4,096 text->text Mistral
Toppy M 7B
$0.0003/1k
$0.0003/1k
undi95/toppy-m-7b
A wild 7B parameter model that merges several models using the new task_arithmetic merge method from mergekit. List of merged models:
- NousResearch/Nous-Capybara-7B-V1.9
- HuggingFaceH4/zephyr-7b-beta
- lemonilia/AshhLimaRP-Mistral-7B
- Vulkane/120-Days-of-Sodom-LoRA-Mistral-7b
- Undi95/Mistral-pippa-sharegpt-7b-qlora

#merge #uncensored
2023-11-10 4,096 text->text Mistral
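Toppy's description names mergekit's task_arithmetic merge method. A hedged sketch of what such a merge configuration can look like, driven from Python via mergekit's mergekit-yaml CLI; the base model and weights below are illustrative assumptions, not Toppy's actual recipe:

```python
# Sketch of a mergekit task_arithmetic merge in the spirit of Toppy M 7B.
# The base model and per-model weights are illustrative guesses.
import subprocess
import textwrap

config = textwrap.dedent("""\
    merge_method: task_arithmetic
    base_model: mistralai/Mistral-7B-v0.1   # assumed base; not confirmed
    models:
      - model: NousResearch/Nous-Capybara-7B-V1.9
        parameters:
          weight: 0.3                       # illustrative weight
      - model: HuggingFaceH4/zephyr-7b-beta
        parameters:
          weight: 0.3                       # illustrative weight
      - model: lemonilia/AshhLimaRP-Mistral-7B
        parameters:
          weight: 0.4                       # illustrative weight
    dtype: float16
""")

with open("merge-config.yml", "w") as f:
    f.write(config)

# mergekit-yaml is mergekit's CLI entry point: config in, merged model out.
subprocess.run(["mergekit-yaml", "merge-config.yml", "./merged-model"], check=True)
```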
SorcererLM 8x22B
$0.018/1k
$0.018/1k
raifle/sorcererlm-8x22b
SorcererLM is an advanced RP and storytelling model, built as a low-rank 16-bit LoRA fine-tune of WizardLM-2 8x22B.
- Advanced reasoning and emotional intelligence for engaging and immersive interactions
- Vivid writing capabilities enriched with spatial and contextual awareness
- Enhanced narrative depth, promoting creative and dynamic storytelling
2024-11-09 16,000 text->text Mistral
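For readers unfamiliar with the LoRA fine-tuning style mentioned above, here is a minimal sketch using Hugging Face PEFT. The hub ID, rank, alpha, and target modules are illustrative assumptions, not SorcererLM's recipe:

```python
# Attach a LoRA adapter to a base model so that only small low-rank
# matrices train, not the full 8x22B weights.
import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained(
    "alpindale/WizardLM-2-8x22B",        # assumed community mirror of the base
    torch_dtype=torch.bfloat16,          # "16-bit" training as described
)

lora = LoraConfig(
    r=16,                                # low-rank dimension (illustrative)
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"], # common choice; actual set unknown
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora)
model.print_trainable_parameters()       # only the LoRA weights train
```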
OpenHermes 2.5 Mistral 7B
$0.0007/1k
$0.0007/1k
teknium/openhermes-2.5-mistral-7b
A continuation of the OpenHermes 2 model, trained on additional code datasets. Perhaps the most interesting finding from training on a good ratio of code instruction data (estimated at around 7-14% of the total dataset) is that it boosted several non-code benchmarks, including TruthfulQA, AGIEval, and the GPT4All suite. It did, however, reduce the BigBench score, but the net gain overall is significant.
2023-11-20 4,096 text->text Mistral
OpenChat 3.5 7B (free)
Free to use
openchat/openchat-7b:free
OpenChat 7B is a library of open-source language models, fine-tuned with C-RLFT (Conditioned Reinforcement Learning Fine-Tuning), a strategy inspired by offline reinforcement learning. It has been trained on mixed-quality data without preference labels. For OpenChat fine-tuned on Mistral 7B, check out OpenChat 7B. For OpenChat fine-tuned on Llama 8B, check out OpenChat 8B. #open-source
2023-11-28 8,192 text->text Mistral
OpenChat 3.5 7B
$0.0002/1k
$0.0002/1k
openchat/openchat-7b
OpenChat 7B is a library of open-source language models, fine-tuned with C-RLFT (Conditioned Reinforcement Learning Fine-Tuning), a strategy inspired by offline reinforcement learning. It has been trained on mixed-quality data without preference labels. For OpenChat fine-tuned on Mistral 7B, check out OpenChat 7B. For OpenChat fine-tuned on Llama 8B, check out OpenChat 8B. #open-source
2023-11-28 8,192 text->text Mistral
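The C-RLFT idea, roughly: treat each sample's data source as a coarse reward class, condition the model on that class, and weight the fine-tuning loss accordingly, so no per-sample preference labels are needed. A hedged sketch of the weighted-loss part (the class weights and source tags here are illustrative assumptions):

```python
# Class-conditioned, source-weighted SFT loss in the spirit of C-RLFT.
import torch
import torch.nn.functional as F

SOURCE_WEIGHT = {"gpt-4": 1.0, "gpt-3.5": 0.5}     # illustrative quality classes

def crlft_loss(model, batch):
    # batch["input_ids"] are assumed to already carry a conditioning tag
    # marking each sample's quality class; batch["source"] names the class.
    logits = model(batch["input_ids"]).logits      # (B, T, V)
    per_token = F.cross_entropy(
        logits[:, :-1].transpose(1, 2),            # (B, V, T-1)
        batch["input_ids"][:, 1:],                 # next-token targets
        reduction="none",
    )                                              # (B, T-1)
    weights = torch.tensor([SOURCE_WEIGHT[s] for s in batch["source"]])
    return (per_token.mean(dim=1) * weights).mean()
```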
Nous: Hermes 2 Mixtral 8x7B DPO
$0.0024/1k
$0.0024/1k
nousresearch/nous-hermes-2-mixtral-8x7b-dpo
Nous Hermes 2 Mixtral 8x7B DPO is the new flagship Nous Research model trained over the Mixtral 8x7B MoE LLM. The model was trained on over 1,000,000 entries of primarily GPT-4 generated data, as well as other high-quality data from open datasets across the AI landscape, achieving state-of-the-art performance on a variety of tasks. #moe
2024-01-16 32,768 text->text Mistral
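The "DPO" in the name refers to Direct Preference Optimization, which trains directly on chosen/rejected completion pairs instead of fitting a separate reward model. A minimal sketch of the objective (symbols follow the DPO paper; the beta value is an illustrative assumption):

```python
# DPO loss: maximize the margin between the policy's log-probability gain
# over a frozen reference model on chosen vs. rejected completions.
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    # Log-ratios of the policy against the frozen reference model.
    chosen_ratio = policy_chosen_logps - ref_chosen_logps
    rejected_ratio = policy_rejected_logps - ref_rejected_logps
    # Sigmoid margin loss; beta controls deviation from the reference.
    return -F.logsigmoid(beta * (chosen_ratio - rejected_ratio)).mean()
```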
Mistral: Pixtral Large 2411
$0.0080/1k
$0.024/1k
mistralai/pixtral-large-2411
Pixtral Large is a 124B parameter, open-weight, multimodal model built on top of Mistral Large 2. The model is able to understand documents, charts and natural images. The model is available under the Mistral Research License (MRL) for research and educational use, and the Mistral Commercial License for experimentation, testing, and production for commercial purposes.
2024-11-19 128,000 text+image->text Mistral
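Because Pixtral Large is a text+image->text model, image inputs go in the standard OpenAI vision message format. A sketch reusing the same placeholder client setup as the first example (endpoint, key, and image URL are placeholders):

```python
# Send text plus an image to a multimodal model via the OpenAI-compatible API.
from openai import OpenAI

client = OpenAI(base_url="https://example-provider.com/v1",  # placeholder
                api_key="YOUR_API_KEY")                      # placeholder

response = client.chat.completions.create(
    model="mistralai/pixtral-large-2411",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Summarize this chart."},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/chart.png"}},  # placeholder
        ],
    }],
)
print(response.choices[0].message.content)
```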
Mistral: Pixtral 12B
$0.0004/1k
$0.0004/1k
mistralai/pixtral-12b
The first multi-modal, text+image-to-text model from Mistral AI. Its weights were launched via torrent: https://x.com/mistralai/status/1833758285167722836.
2024-09-10 4,096 text+image->text Mistral
Mistral: Mixtral 8x7B Instruct
$0.0010/1k
$0.0010/1k
mistralai/mixtral-8x7b-instruct
Mixtral 8x7B Instruct is a pretrained generative Sparse Mixture of Experts, by Mistral AI, for chat and instruction use. Incorporates 8 experts (feed-forward networks) for a total of 47 billion parameters. Instruct model fine-tuned by Mistral. #moe
2023-12-10 32,768 text->text Mistral
Mistral: Mixtral 8x7B (base)
$0.0024/1k
$0.0024/1k
mistralai/mixtral-8x7b
Mixtral 8x7B is a pretrained generative Sparse Mixture of Experts, by Mistral AI. Incorporates 8 experts (feed-forward networks) for a total of 47B parameters. Base model (not fine-tuned for instructions) - see Mixtral 8x7B Instruct for an instruct-tuned model. #moe
2023-12-10 32,768 text->text Mistral
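Both Mixtral entries describe the same sparse Mixture-of-Experts design: each layer holds 8 expert FFNs but a router sends every token to only the top 2, which is why the 47B total parameters cost far less compute per token than a dense model of that size. A minimal sketch of that routing (dimensions follow Mixtral's published config; the FFN here is simplified, as Mixtral actually uses a gated SwiGLU variant):

```python
# Top-2 expert routing over 8 feed-forward experts, Mixtral-style.
import torch
import torch.nn as nn

class MoELayer(nn.Module):
    def __init__(self, d_model=4096, d_ff=14336, n_experts=8, top_k=2):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.SiLU(),
                          nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )
        self.top_k = top_k

    def forward(self, x):                       # x: (tokens, d_model)
        gates = self.router(x).softmax(dim=-1)  # routing probabilities
        weights, idx = gates.topk(self.top_k, dim=-1)
        weights = weights / weights.sum(dim=-1, keepdim=True)
        out = torch.zeros_like(x)
        for k in range(self.top_k):             # only top_k experts run per token
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e
                if mask.any():
                    out[mask] += weights[mask, k, None] * expert(x[mask])
        return out
```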