Avg. Total Time: 12.37s
Avg. TTFT (Time to First Token): 5.71s
Avg. Prefill TPS (tokens/sec): 1565.10
Avg. Gen TPS (tokens/sec): 21.27
Context Size: 32768
Quantization: r64
Engine: aphrodite
Creation Method: Merge
Model Type: Llama70B
Chat Template: Llama 3
Reasoning: No
Vision: No
Parameters: 70B
Added At: 8/7/2025
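Since the card lists aphrodite as the serving engine, here is a minimal sketch of querying such a deployment through aphrodite's OpenAI-compatible API. The port, served model name, and token budget are illustrative assumptions, not values taken from the card:

```python
# Minimal sketch: query a locally running aphrodite-engine server via its
# OpenAI-compatible endpoint. Port, model name, and max_tokens are assumptions.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:2242/v1",  # assumed local endpoint; adjust to your server's port
    api_key="not-needed-locally",         # placeholder; local servers typically ignore the key
)

response = client.chat.completions.create(
    model="L3.3-70B-Unslop-v2.0",  # hypothetical served model name
    messages=[
        # The server applies the model's Llama 3 chat template to these
        # messages (the <|start_header_id|>...<|eot_id|> framing) server-side.
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Write a two-sentence scene opener."},
    ],
    max_tokens=256,
)
print(response.choices[0].message.content)
```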
L3.3-70B-Unslop-v2.0
License: llama3.3
This evolution of The-Omega-Directive delivers unprecedented coherence without the LLM slop.
Key Training Details:
Recommended Settings: LLam@ception
Notes (static GGUF quants): Q4_K_S/Q4_K_M recommended for a speed/quality balance; Q6_K for high quality; Q8_0 for best quality.
Notes (imatrix GGUF quants): Q4_K_S/Q4_K_M recommended; IQ1_S/IQ1_M for extremely low VRAM; Q6_K for near-original quality.
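For running one of the quants above locally, here is a minimal llama-cpp-python sketch loading a Q4_K_M build at the card's 32768 context. The file path and GPU offload value are placeholders for your own setup:

```python
# Minimal sketch: load a Q4_K_M GGUF of this model with llama-cpp-python.
# The model path is a placeholder; pick the quant that fits your hardware
# (Q4_K_S/Q4_K_M for balance, Q6_K for quality, IQ1_S/IQ1_M for tiny VRAM).
from llama_cpp import Llama

llm = Llama(
    model_path="./L3.3-70B-Unslop-v2.0-Q4_K_M.gguf",  # hypothetical local file name
    n_ctx=32768,      # matches the context size listed on the card
    n_gpu_layers=-1,  # offload all layers to GPU if they fit; lower this otherwise
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello!"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```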
This model enhances The-Omega-Directive's unalignment.
By using this model, you agree to its terms of use.