Avg. Total Time: 23.43s
Avg. TTFT: 8.07s
Avg. Prefill TPS: 1591.33
Avg. Gen TPS: 20.63
Context Size: 32768
Quantization: r64
Engine: aphrodite
Creation Method: Merge
Model Type: Llama70B
Chat Template: Llama 3
Reasoning: No
Vision: No
Parameters: 70B
Added At: 8/7/2025
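As a rough sanity check, the benchmark figures above are internally consistent: subtracting TTFT from total time gives the generation phase, and multiplying by generation TPS estimates tokens produced per run. A minimal sketch, assuming TTFT approximates prefill time (only roughly true in practice):

```python
# Rough sanity check on the benchmark figures listed above.
# Assumption: TTFT ~ prefill time, which is only an approximation.
avg_total_time = 23.43   # seconds
avg_ttft = 8.07          # seconds (time to first token)
prefill_tps = 1591.33    # prompt tokens / second
gen_tps = 20.63          # generated tokens / second

gen_time = avg_total_time - avg_ttft   # time spent in the generation phase
est_generated = gen_tps * gen_time     # estimated tokens generated per run
est_prompt = prefill_tps * avg_ttft    # estimated prompt tokens processed

print(f"generation phase: {gen_time:.2f}s")
print(f"estimated generated tokens: {est_generated:.0f}")
print(f"estimated prompt tokens: {est_prompt:.0f} (fits in the 32768 context)")
```

The estimated prompt size (~12.8k tokens) falling well inside the 32768 context is consistent with the listed settings.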
License: llama3.3
L3.3-70B-Unslop-v2.1
This evolution of The-Omega-Directive delivers unprecedented coherence without the LLM slop.
Key Training Details:
Recommended Settings: Pending ¯\_(ツ)_/¯. Base model presets should also work.
Notes: Q4_K_M recommended for a speed/quality balance; Q6_K for very high quality; Q8_0 for near-original quality; IQ1_S/IQ1_M only if desperate. Prefer imatrix quants where possible.
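To make the quant trade-off concrete, here is a back-of-envelope file-size estimate for a 70B-parameter model at the quant levels mentioned. The bits-per-weight figures are approximate community numbers for llama.cpp quant types, not values measured from this repo, so treat them as assumptions:

```python
# Back-of-envelope GGUF file-size estimates for a 70B-parameter model.
# Bits-per-weight (bpw) values are approximate community figures for
# llama.cpp quant types -- assumptions, not measured from this model.
PARAMS = 70e9

approx_bpw = {
    "IQ1_S": 1.56,   # extreme low-bit, "desperate" tier
    "IQ1_M": 1.75,
    "Q4_K_M": 4.85,  # recommended speed/quality balance
    "Q6_K": 6.56,    # very high quality
    "Q8_0": 8.50,    # near-original quality
}

# bytes = params * bpw / 8; divide by 1e9 for decimal GB
sizes_gb = {q: PARAMS * bpw / 8 / 1e9 for q, bpw in approx_bpw.items()}
for quant, gb in sizes_gb.items():
    print(f"{quant:>7}: ~{gb:.0f} GB")
```

This puts Q4_K_M around the low 40s of GB versus roughly 75 GB for Q8_0, which is why Q4_K_M is the usual speed/quality pick on constrained hardware.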
This model enhances The-Omega-Directive's unalignment.
By using this model, you agree: