📝 Tlacuilo-12B 📝
Creative writing model that holds up for RP/adventure
Base Model: LatitudeGames/Muse-12B (from the mistralai/Mistral-Nemo-Base-2407 family)
by Toasty Pigeon

Description: A creative writing model tuned for more varied prose while preserving (and improving) roleplay/adventure performance. Built by starting from Muse-12B and applying staged training: books → RP → a small instruct phase.

Use Cases:
• Creative writing
• Roleplay
• Adventure / interactive fiction

Links:
Hugging Face (Full Weights)

Usage:
• Chat template: ChatML (trained using Muse-12B formatting)
• Suggested sampler range:
- Temperature 1.0 / min_p 0.05
- Up to Temperature 1.3 / min_p 0.02 if you like it hotter
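
A minimal loading sketch with Hugging Face transformers, putting the settings above into practice. The repo id ToastyPigeon/Tlacuilo-12B is an assumption, and min_p sampling requires a reasonably recent transformers release:

```python
# Minimal sketch: load the model and generate with the suggested samplers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ToastyPigeon/Tlacuilo-12B"  # assumed repo id; adjust as needed
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a narrator for an interactive adventure."},
    {"role": "user", "content": "The door creaks open. What do I see?"},
]
# The tokenizer's chat template should emit ChatML-formatted turns.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

out = model.generate(
    inputs,
    max_new_tokens=300,
    do_sample=True,
    temperature=1.0,  # up to 1.3 if you like it hotter
    min_p=0.05,       # drop toward 0.02 when raising the temperature
)
print(tokenizer.decode(out[0, inputs.shape[-1]:], skip_special_tokens=True))
```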

Training Notes:
• Stage 1: books (~28M tokens/epoch), QKV-only QLoRA, 32k context, 2 epochs, LR 1e-5
• Stage 2: RP (~4M tokens), QLoRA on o_proj + down_proj, 16k context, 1 epoch, LR 5e-6
• Stage 3: small instruct (koto-instruct-sft subset), all linear modules, 4k context, 1 epoch, LR 2e-6
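
A hedged peft/bitsandbytes sketch of what the per-stage adapter configs might look like. Only the target modules come from the notes above; rank, alpha, dropout, and the 4-bit settings are illustrative assumptions, not the author's actual values:

```python
# Hypothetical per-stage QLoRA configs; hyperparameters are assumed.
import torch
from peft import LoraConfig
from transformers import BitsAndBytesConfig

# QLoRA: the frozen base model is loaded in 4-bit (nf4) while adapters train.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

stage1_adapter = LoraConfig(  # books: QKV-only
    r=64, lora_alpha=64, lora_dropout=0.05,  # assumed values
    bias="none", task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj"],
)

stage2_adapter = LoraConfig(  # RP: attention-output + MLP down-projection
    r=64, lora_alpha=64, lora_dropout=0.05,
    bias="none", task_type="CAUSAL_LM",
    target_modules=["o_proj", "down_proj"],
)

stage3_adapter = LoraConfig(  # instruct: all linear modules
    r=64, lora_alpha=64, lora_dropout=0.05,
    bias="none", task_type="CAUSAL_LM",
    target_modules="all-linear",
)
```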

Chat Template:
• ChatML
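
For reference, ChatML wraps each turn in <|im_start|> / <|im_end|> markers, with the final assistant turn left open for generation:

```
<|im_start|>system
You are a narrator for an interactive adventure.<|im_end|>
<|im_start|>user
The door creaks open. What do I see?<|im_end|>
<|im_start|>assistant
```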

