Avg. Total Time: 39.27s
Avg. TTFT: 13.30s
Avg. Prefill TPS: 396.87
Avg. Gen TPS: 11.18
Context Size: 32768
Quantization: r64
Engine: aphrodite
Creation Method: Merge
Model Type: Llama70B
Chat Template: Llama 3
Reasoning: No
Vision: No
Parameters: 70B
Added At: 1/16/2025
I enjoyed SicariusSicariiStuff/Negative_LLAMA_70B, but its prose was too dry for my taste, so I merged it with TheDrummer/Anubis-70B-v1 for verbosity. Anubis leans toward a positivity bias, which Negative can balance out.
This is a merge of pre-trained language models created using mergekit.
GGUF Quants:
This model was merged using the SLERP merge method.
The following models were included in the merge:

- SicariusSicariiStuff/Negative_LLAMA_70B
- TheDrummer/Anubis-70B-v1
The following YAML configuration was used to produce this model:

```yaml
models:
  - model: SicariusSicariiStuff/Negative_LLAMA_70B
  - model: TheDrummer/Anubis-70B-v1
merge_method: slerp
base_model: TheDrummer/Anubis-70B-v1
parameters:
  t: [0.1, 0.55, 1, 0.55, 0.1]
dtype: bfloat16
```
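For readers unfamiliar with SLERP: instead of a straight linear blend, spherical linear interpolation moves along the great-circle arc between two weight tensors, preserving their norm geometry. The sketch below is a minimal illustration of the formula, not mergekit's actual implementation; the `t` list in the YAML is (as I understand mergekit's convention) a gradient over layer depth, interpolated so the first and last layers stay close to the base model (`t ≈ 0.1`) while the middle layers weight the non-base model fully (`t = 1`).

```python
import numpy as np

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two flattened weight tensors.

    t = 0 returns v0 (the base model's tensor), t = 1 returns v1.
    """
    v0_n = v0 / (np.linalg.norm(v0) + eps)
    v1_n = v1 / (np.linalg.norm(v1) + eps)
    # Angle between the two tensors on the unit sphere.
    dot = np.clip(np.dot(v0_n, v1_n), -1.0, 1.0)
    omega = np.arccos(dot)
    if omega < eps:
        # Nearly parallel tensors: fall back to plain linear interpolation.
        return (1 - t) * v0 + t * v1
    so = np.sin(omega)
    return (np.sin((1 - t) * omega) / so) * v0 + (np.sin(t * omega) / so) * v1

# Per-layer interpolation schedule from the YAML above: light blending at the
# ends of the network, full weight on the non-base model in the middle.
t_schedule = [0.1, 0.55, 1, 0.55, 0.1]
```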