Avg. Total Time: 21.20s
Avg. TTFT: 9.23s
Avg. Prefill TPS: 181.08
Avg. Gen TPS: 21.89
Context Size: 32768
Quantization: r64
Engine: aphrodite
Creation Method: Merge
Model Type: Llama70B
Chat Template: Llama 3
Reasoning: No
Vision: No
Parameters: 70B
Added At: 2/9/2025
After a lot of positive feedback on Progenitor V1.1, I got some advice on a couple of settings I could tune for hopefully better results: mainly changing the tokenizer source and letting the merge compute at full float32 before scaling down to bfloat16 (shout out to kromeurus). V2.1 didn't quite meet the standard set by 1.1, so with a few more tweaks I made 2.2, which I feel slightly improves on the outstanding 1.1 and is therefore its true successor.
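A quick, hypothetical illustration of why the float32-then-bfloat16 change matters (plain PyTorch, not mergekit's internal code): bfloat16 carries only about 8 bits of mantissa, so accumulating weighted tensors directly in bfloat16 rounds at every addition, while accumulating in float32 and casting once at the end rounds only at the final step.

```python
import torch

torch.manual_seed(0)
n = 1_000_000

# Five hypothetical "model tensors" stored in bfloat16, like merge inputs.
tensors = [torch.randn(n).to(torch.bfloat16) for _ in range(5)]

# Naive: accumulate directly in bfloat16 (rounding error at every addition).
acc_bf16 = torch.zeros(n, dtype=torch.bfloat16)
for t in tensors:
    acc_bf16 += t * 0.20  # each model weighted 0.20, as in the config below

# The dtype: float32 / out_dtype: bfloat16 approach: accumulate in float32,
# cast to bfloat16 once at the end.
acc_f32 = torch.zeros(n, dtype=torch.float32)
for t in tensors:
    acc_f32 += t.to(torch.float32) * 0.20
out = acc_f32.to(torch.bfloat16)

# Full-float32 reference.
ref = sum(t.to(torch.float32) * 0.20 for t in tensors)
print("bf16 accumulation error:", (acc_bf16.float() - ref).abs().mean().item())
print("f32 accumulation error: ", (out.float() - ref).abs().mean().item())
```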
This is a merge of pre-trained language models created using mergekit.
This model was merged using the Linear DELLA merge method using nbeerbower/Llama-3.1-Nemotron-lorablated-70B as a base.
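For intuition, here is a rough sketch of what a linear DELLA step does with the density, epsilon, and lambda parameters used below. It follows the DELLA paper's description rather than mergekit's exact implementation, so treat the details (rank normalization, probability spread) as assumptions: each model's delta from the base is stochastically pruned with magnitude-dependent drop probabilities centered on 1 - density and spread by epsilon, rescaled DARE-style so the expected delta is preserved, then linearly combined with the per-model weights and scaled by lambda.

```python
import torch

def della_linear_sketch(base, models, weights, density=0.7, epsilon=0.2, lam=1.1):
    """Hypothetical single-tensor sketch of a linear DELLA merge.

    Not mergekit's actual code; the magnitude ranking and the
    drop-probability window follow the DELLA paper's description.
    """
    merged_delta = torch.zeros_like(base)
    for m, w in zip(models, weights):
        delta = m - base
        # Rank parameters by |delta|: larger deltas get lower drop probability.
        ranks = delta.abs().flatten().argsort().argsort().float()
        ranks = ranks / max(ranks.numel() - 1, 1)          # normalize to [0, 1]
        p_drop = (1 - density) + epsilon * (0.5 - ranks)   # window around 1 - density
        p_drop = p_drop.clamp(0.0, 1.0).reshape(delta.shape)
        keep = torch.bernoulli(1.0 - p_drop)
        # DARE-style rescaling keeps the expected value of each delta unchanged.
        merged_delta += w * delta * keep / (1.0 - p_drop).clamp_min(1e-8)
    return base + lam * merged_delta
```

With the values below, each delta keeps roughly 70% of its parameters, the keep rate swings with magnitude inside an epsilon-wide window, and lambda: 1.1 slightly amplifies the combined delta before it is added back to the base.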
The following models were included in the merge:
- Sao10K/L3.1-70B-Hanami-x1
- Sao10K/70B-L3.3-Cirrus-x1
- SicariusSicariiStuff/Negative_LLAMA_70B
- TheDrummer/Anubis-70B-v1
- EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: Sao10K/L3.1-70B-Hanami-x1
    parameters:
      weight: 0.20
      density: 0.7
  - model: Sao10K/70B-L3.3-Cirrus-x1
    parameters:
      weight: 0.20
      density: 0.7
  - model: SicariusSicariiStuff/Negative_LLAMA_70B
    parameters:
      weight: 0.20
      density: 0.7
  - model: TheDrummer/Anubis-70B-v1
    parameters:
      weight: 0.20
      density: 0.7
  - model: EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1
    parameters:
      weight: 0.20
      density: 0.7
merge_method: della_linear
base_model: nbeerbower/Llama-3.1-Nemotron-lorablated-70B
parameters:
  epsilon: 0.2
  lambda: 1.1
dtype: float32
out_dtype: bfloat16
tokenizer:
  source: union
```
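To actually run a config like this, mergekit exposes both a CLI (mergekit-yaml) and a Python entry point. The sketch below uses the Python API as shown in mergekit's README; the config filename and output path are hypothetical, a 70B merge at float32 needs substantial RAM and disk, and you should check your installed mergekit version for the exact options.

```python
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# "progenitor-v2.2.yml" is a hypothetical file holding the YAML above.
with open("progenitor-v2.2.yml", "r", encoding="utf-8") as fp:
    config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    config,
    out_path="./Progenitor-V2.2-LLaMa-70B",  # hypothetical output directory
    options=MergeOptions(cuda=True, lazy_unpickle=True),
)
```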