Llama-3.3+(3.1v3.3)-70B-Cakrawala

Creative Model



Performance Metrics

  • Avg. Total Time: 70.97 s
  • Avg. TTFT (time to first token): 65.61 s
  • Avg. Prefill TPS (tokens/s): 73.92
  • Avg. Gen TPS (tokens/s): 19.06

Model Information

  • Context Size: 32768 tokens
  • Quantization: r64
  • Engine: aphrodite
  • Creation Method: LoRA Finetune
  • Model Type: Llama70B
  • Chat Template: Llama 3
  • Reasoning: No
  • Vision: No
  • Parameters: 70B
  • Added At: 12/26/2024
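The card lists Llama 3 as the chat template. As a minimal sketch of what that format looks like on the wire (the function name and the roleplay strings are illustrative; in practice you would let `tokenizer.apply_chat_template` from transformers build this for you):

```python
def format_llama3_prompt(system: str, user: str) -> str:
    """Hand-build a Llama 3 style prompt string.

    Uses the published Llama 3 special tokens: each message is wrapped in
    header tokens naming its role and terminated with <|eot_id|>; the prompt
    ends with an open assistant header so the model continues from there.
    """
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = format_llama3_prompt(
    "You are Kara, a stoic ranger guarding the northern pass.",
    "*approaches the campfire* Mind if I sit?",
)
print(prompt)
```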


license: mit
language:
  - en
base_model:
  - meta-llama/Llama-3.1-70B-Instruct
tags:
  - axolotl
datasets:
  - NarrativAI/CakrawalaRP

🎭 Cakrawala-Llama-3.1-70B

Where Worlds Converge and Adventures Begin!

🌟 What's Special About This Model?

Cakrawala-Llama-3.1-70B is a fine-tuned variant of the Llama-3.1-70B-Instruct model, specifically optimised for generating rich roleplaying conversations and character interactions. The model has been trained to excel at producing detailed, contextually appropriate character dialogues with rich descriptions of physical actions, expressions, and emotional states while maintaining consistent character voices and perspectives throughout extended interactions.

🧪 The Secret Sauce

Training Diet:

  • Fed with 13,000 conversation pairs
  • Each conversation is at least 12–13 turns long
  • Heavy focus on details such as facial expressions, environmental descriptions, and character reactions that keep the model in character

Tech Wizardry:

  • Trained on Llama-3.1-70B-Instruct
  • Fine-tuned using QLoRA
  • Trained over 2 epochs
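As a back-of-the-envelope illustration of why a QLoRA finetune of a 70B model is tractable: the base weights stay frozen (in 4-bit), and each targeted weight matrix gets a low-rank update B·A, so a d×d projection needs only 2·d·r trainable parameters instead of d². The layer size and rank below are assumptions for illustration, not values published on this card:

```python
d = 8192  # hidden size of one hypothetical attention projection
r = 64    # LoRA rank (assumed for illustration)

full_params = d * d      # parameters in the frozen weight matrix
lora_params = 2 * d * r  # A (r x d) plus B (d x r) adapter matrices

print(full_params)                 # 67108864
print(lora_params)                 # 1048576
print(full_params // lora_params)  # the adapter is 64x smaller
```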

Training Parameters

  • Gradient Accumulation Steps: 1
  • Micro Batch Size: 4
  • Learning Rate: 0.0002
  • Optimizer: AdamW
  • Scheduler: Cosine
  • Mixed Precision: BF16 & FP16 with TF32 support
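The exact training config has not been released; a minimal axolotl-style sketch consistent with the parameters listed above might look like the following (the `lora_r`, `lora_alpha`, and dataset `type` values are assumptions):

```yaml
base_model: meta-llama/Llama-3.1-70B-Instruct
load_in_4bit: true        # QLoRA: 4-bit frozen base weights
adapter: qlora
lora_r: 64                # assumption, not stated on the card
lora_alpha: 16            # assumption

datasets:
  - path: NarrativAI/CakrawalaRP
    type: chat_template   # assumption; depends on the dataset schema

num_epochs: 2
gradient_accumulation_steps: 1
micro_batch_size: 4
learning_rate: 0.0002
optimizer: adamw_torch
lr_scheduler: cosine
bf16: auto
tf32: true
```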

🔧 Under the Hood

  • Trained on 8 x H100 SXM GPUs

🎬 License & Credits

  • Licensed under MIT
  • Based on meta-llama/Llama-3.1-70B-Instruct

GGUF Quants


Built with โค๏ธ for roleplayers, by roleplayers