A huge catalog of online large models, compatible with the OpenAI API

All models

320 models · updated 2025-07-23
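Since the gateway is OpenAI-API compatible, any listed model ID can be used as the `model` field of a standard chat-completion request. A minimal sketch, assuming a hypothetical base URL and API key (`BASE_URL` and `API_KEY` are placeholders, not real endpoints):

```python
import json

# Placeholders: substitute your gateway's actual base URL and key.
BASE_URL = "https://api.example.com/v1"  # hypothetical
API_KEY = "sk-..."                        # hypothetical


def build_chat_request(model: str, user_message: str) -> dict:
    """Build an OpenAI-style chat-completion request body."""
    return {
        "model": model,  # any model ID from the catalog below
        "messages": [{"role": "user", "content": user_message}],
    }


payload = build_chat_request("gryphe/mythomax-l2-13b", "Hello!")
print(json.dumps(payload, ensure_ascii=False))

# Send with any HTTP client, e.g.:
#   requests.post(f"{BASE_URL}/chat/completions",
#                 headers={"Authorization": f"Bearer {API_KEY}"},
#                 json=payload)
```

The same request shape works for every model in the list; only the `model` string changes.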
ReMM SLERP 13B
$0.0028/1k
$0.0040/1k
undi95/remm-slerp-l2-13b
A recreation trial of the original MythoMax-L2-13B, but with updated models. #merge
2023-07-22 6,144 text->text Llama2
Pygmalion: Mythalion 13B
$0.0032/1k
$0.0048/1k
pygmalionai/mythalion-13b
A blend of the new Pygmalion-13b and MythoMax. #merge
2023-09-02 4,096 text->text Llama2
Noromaid 20B
$0.0050/1k
$0.0080/1k
neversleep/noromaid-20b
A collab between IkariDev and Undi. This merge is suitable for RP, ERP, and general knowledge. #merge #uncensored
2023-11-26 8,192 text->text Llama2
MythoMax 13B
$0.0002/1k
$0.0002/1k
gryphe/mythomax-l2-13b
One of the highest performing and most popular fine-tunes of Llama 2 13B, with rich descriptions and roleplay. #merge
2023-07-02 4,096 text->text Llama2
Midnight Rose 70B
$0.0032/1k
$0.0032/1k
sophosympatheia/midnight-rose-70b
A merge with a complex family tree, this model was crafted for roleplaying and storytelling. Midnight Rose is a successor to Rogue Rose and Aurora Nights and improves upon them both. It tends to produce lengthy output by default and is the best creative-writing merge produced so far by sophosympatheia. Descending from earlier versions of Midnight Rose and Wizard Tulu Dolphin 70B, it inherits the best qualities of each.
2024-03-22 4,096 text->text Llama2
Mancer: Weaver (alpha)
$0.0060/1k
$0.0060/1k
mancer/weaver
An attempt to recreate Claude-style verbosity, but don't expect the same level of coherence or memory. Meant for use in roleplay/narrative situations.
2023-08-02 8,000 text->text Llama2
Goliath 120B
$0.036/1k
$0.044/1k
alpindale/goliath-120b
A large LLM created by combining two fine-tuned Llama 70B models into one 120B model. Combines Xwin and Euryale. Credits to @chargoddard for developing the framework used to merge the model (mergekit), and to @Undi95 for helping with the merge ratios. #merge
2023-11-10 6,144 text->text Llama2
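The two prices on each entry are per 1,000 input and output tokens respectively, so the cost of a request is a simple weighted sum. A worked example using the Goliath 120B rates above ($0.036/1k input, $0.044/1k output):

```python
def estimate_cost(prompt_tokens: int, completion_tokens: int,
                  in_price_per_1k: float, out_price_per_1k: float) -> float:
    """Estimate request cost in dollars from per-1k-token prices."""
    return (prompt_tokens / 1000 * in_price_per_1k
            + completion_tokens / 1000 * out_price_per_1k)


# 2,000 prompt tokens and 500 completion tokens at Goliath 120B rates:
# 2.0 * 0.036 + 0.5 * 0.044 = 0.072 + 0.022 = $0.094
cost = estimate_cost(2000, 500, 0.036, 0.044)
```

The same function applies to any entry in the catalog; cheaper models such as MythoMax 13B ($0.0002/1k both ways) come out two orders of magnitude less per request.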
Fimbulvetr 11B v2
$0.0032/1k
$0.0048/1k
sao10k/fimbulvetr-11b-v2
Creative writing model, routed with permission. It's fast, it keeps the conversation going, and it stays in character. If you submit a raw prompt, you can use Alpaca or Vicuna formats.
2024-04-21 4,096 text->text Llama2
Typhoon2 70B Instruct
$0.0035/1k
$0.0035/1k
scb10x/llama3.1-typhoon2-70b-instruct
Llama3.1-Typhoon2-70B-Instruct is a Thai-English instruction-tuned language model with 70 billion parameters, built on Llama 3.1. It demonstrates strong performance across general instruction-following, math, coding, and tool-use tasks, with state-of-the-art results in Thai-specific benchmarks such as IFEval, MT-Bench, and Thai-English code-switching. The model excels in bilingual reasoning and function-calling scenarios, offering high accuracy across diverse domains. Comparative evaluations show consistent improvements over prior Thai LLMs and other Llama-based baselines. Full results and methodology are available in the technical report.
2025-03-29 8,192 text->text Llama3
TheDrummer: Anubis 70B V1.1
$0.0020/1k
$0.0032/1k
thedrummer/anubis-70b-v1.1
TheDrummer's Anubis v1.1 is an unaligned, creative Llama 3.3 70B model focused on providing character-driven roleplay & stories. It excels at gritty, visceral prose, unique character adherence, and coherent narratives, while maintaining the instruction following Llama 3.3 70B is known for.
2025-06-29 131,072 text->text Llama3
Shisa V2 Llama 3.3 70B (free)
shisa-ai/shisa-v2-llama3.3-70b:free
Shisa V2 Llama 3.3 70B is a bilingual Japanese-English chat model fine-tuned by Shisa.AI on Meta’s Llama-3.3-70B-Instruct base. It prioritizes Japanese language performance while retaining strong English capabilities. The model was optimized entirely through post-training, using a refined mix of supervised fine-tuning (SFT) and DPO datasets including regenerated ShareGPT-style data, translation tasks, roleplaying conversations, and instruction-following prompts. Unlike earlier Shisa releases, this version avoids tokenizer modifications or extended pretraining. Shisa V2 70B achieves leading Japanese task performance across a wide range of custom and public benchmarks, including JA MT Bench, ELYZA 100, and Rakuda. It supports a 128K token context length and integrates smoothly with inference frameworks like vLLM and SGLang. While it inherits safety characteristics from its base model, no additional alignment was applied. The model is intended for high-performance bilingual chat, instruction following, and translation tasks across JA/EN.
2025-04-16 32,768 text->text Llama3
Shisa V2 Llama 3.3 70B
shisa-ai/shisa-v2-llama3.3-70b
Shisa V2 Llama 3.3 70B is a bilingual Japanese-English chat model fine-tuned by Shisa.AI on Meta’s Llama-3.3-70B-Instruct base. It prioritizes Japanese language performance while retaining strong English capabilities. The model was optimized entirely through post-training, using a refined mix of supervised fine-tuning (SFT) and DPO datasets including regenerated ShareGPT-style data, translation tasks, roleplaying conversations, and instruction-following prompts. Unlike earlier Shisa releases, this version avoids tokenizer modifications or extended pretraining. Shisa V2 70B achieves leading Japanese task performance across a wide range of custom and public benchmarks, including JA MT Bench, ELYZA 100, and Rakuda. It supports a 128K token context length and integrates smoothly with inference frameworks like vLLM and SGLang. While it inherits safety characteristics from its base model, no additional alignment was applied. The model is intended for high-performance bilingual chat, instruction following, and translation tasks across JA/EN.
2025-04-16 32,768 text->text Llama3