A vast catalog of online LLMs, compatible with the OpenAI API

Dolphin 2.6 Mixtral 8x7B 🐬

Pricing: $0.0020/1k input · $0.0020/1k output
cognitivecomputations/dolphin-mixtral-8x7b
Context length: 32,768 · Modality: text->text · Tokenizer: Mistral · Updated: 2023-12-21
This is a 16k context fine-tune of Mixtral-8x7b. It excels at coding tasks thanks to extensive training on coding data and is known for its obedience, although it lacks DPO tuning. The model is uncensored and stripped of alignment and bias, so it requires an external alignment layer for ethical use. Users are cautioned to use this highly compliant model responsibly, as detailed in the blog post on uncensored models at erichartford.com/uncensored-models. #moe #uncensored
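
Because the service advertises OpenAI API compatibility, a request to this model would look roughly like the sketch below. The base_url and API key are placeholder assumptions, not values from this page; only the model ID comes from the listing above.

```python
from openai import OpenAI

# Hypothetical endpoint and key; substitute the provider's real values.
client = OpenAI(
    base_url="https://api.example.com/v1",  # assumed OpenAI-compatible endpoint
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="cognitivecomputations/dolphin-mixtral-8x7b",  # model ID from this listing
    messages=[
        # Dolphin is stripped of alignment, so supply your own system prompt.
        {"role": "system", "content": "You are a helpful, ethical coding assistant."},
        {"role": "user", "content": "Write a Python function that reverses a string."},
    ],
)
print(response.choices[0].message.content)
```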

Model Parameters

Architecture

Modality: text->text
Tokenizer: Mistral
Instruct type: chatml (see the sketch below)
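
The chatml instruct type refers to the ChatML turn format. The sketch below shows how a single exchange is typically serialized when sending a raw prompt instead of structured chat messages; the <|im_start|>/<|im_end|> delimiters are the standard ChatML tokens, not values quoted from this page.

```python
# Minimal sketch of a ChatML prompt (hypothetical helper, standard delimiters).
def chatml_prompt(system: str, user: str) -> str:
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        "<|im_start|>assistant\n"  # left open for the model to complete
    )

print(chatml_prompt("You are Dolphin, a helpful assistant.",
                    "Explain mixture-of-experts in one sentence."))
```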

Limits

Context length: 32,768
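
Putting the limit and the listed price together: prompt and completion tokens must fit inside the 32,768-token window, and cost scales linearly with total tokens. The sketch below assumes the $0.0020/1k rate applies to both directions and uses a hypothetical 512-token safety reserve.

```python
CONTEXT_LIMIT = 32_768   # context length from this listing
PRICE_PER_1K = 0.0020    # USD per 1k tokens, per the pricing above (assumed for both directions)

def completion_budget(prompt_tokens: int, reserve: int = 512) -> int:
    """Largest max_tokens that keeps prompt + completion inside the window."""
    return max(0, CONTEXT_LIMIT - prompt_tokens - reserve)

def estimated_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Estimated USD cost at the listed per-1k rate."""
    return (prompt_tokens + completion_tokens) / 1000 * PRICE_PER_1K

print(completion_budget(30_000))                 # 2256 tokens left for the reply
print(f"${estimated_cost(30_000, 2_256):.4f}")   # $0.0645
```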