Avg. Total Time: 14.81s
Avg. TTFT: 10.99s
Avg. Prefill TPS: 88.21
Avg. Gen TPS: 22.55
Context Size: 32768
Quantization: r64
Engine: aphrodite
Creation Method: Merge
Model Type: Llama70B
Chat Template: Llama 3
Reasoning: No
Vision: No
Parameters: 70B
Added At: 6/23/2025
Scripturient is the culmination of my ongoing experiments with merging specialized, curated models. It is designed to keep creativity high without sacrificing stability.
As for samplers, the model doesn't need much to rein it in at all. My recommendation is:
Temp: 1
Min P: 0.01
That being said, it can handle even higher temperatures, and Nsigma works well too.
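For reference, here is a minimal sketch of passing these settings to an OpenAI-compatible endpoint such as the one Aphrodite serves. The base URL, API key, and served model name are placeholders for your own deployment, and the min_p pass-through via extra_body is an engine-specific assumption, not part of the model card:

```python
# Minimal sketch: querying an OpenAI-compatible server (e.g. Aphrodite)
# with the recommended samplers. Endpoint, key, and model id below are
# placeholders; adjust them to match your deployment.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:2242/v1",  # assumed local Aphrodite endpoint
    api_key="not-needed-locally",         # placeholder; local servers often ignore it
)

response = client.chat.completions.create(
    model="TareksLab/Scripturient-LLaMa-70B",  # hypothetical served-model id
    messages=[{"role": "user", "content": "Write a one-line story hook."}],
    temperature=1.0,                  # Temp: 1, per the recommendation above
    extra_body={"min_p": 0.01},       # Min P: 0.01; non-standard, engine-specific field
)
print(response.choices[0].message.content)
```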
Because of the nature of this sort of 'Hyper Multi Model Merge', my recommendation is not to run this on anything lower than a Q5 quant.
If you enjoy my work, please consider supporting me; it helps me make more models like this! Support on KO-FI <3
I want to say a special thank you to everyone at the BeaverAI community who supports me, be that with testing, feedback, advice or donations! Special shoutouts to (forgive me if I left someone out!): @Artus | @Geechan | @Kromeurus | @NarpasSword | @Thana Alt | @FrenzyBiscuit | @Saintonan | @Lightning_missile | @Inasity | @Amp | @madison 🦋 @ IQ3_XS | @zerofata
The following YAML configuration was used to produce this model:
models:
  - model: TareksLab/Diamond-DL-V1-LLaMa-70B
    parameters:
      weight: 0.10
      density: 0.7
      epsilon: 0.20
  - model: TareksLab/Citrine-MS-V3-LLaMa-70B
    parameters:
      weight: [0.5, 0.2, 0.1, 0.1, 0.1]
      density: 0.7
      epsilon: 0.20
  - model: TareksLab/Amethyst-SCE-V4-LLaMa-70B
    parameters:
      weight: [0.2, 0.4, 0.2, 0.1, 0.1]
      density: 0.7
      epsilon: 0.20
  - model: TareksLab/Ruby-D-V3-LLaMa-70B
    parameters:
      weight: [0.1, 0.2, 0.4, 0.2, 0.1]
      density: 0.7
      epsilon: 0.20
  - model: TareksLab/Carnelian-SCE-V4-LLaMa-70B
    parameters:
      weight: [0.1, 0.1, 0.2, 0.4, 0.2]
      density: 0.7
      epsilon: 0.20
  - model: TareksLab/Emerald-SCE-V3-LLaMa-70B
    parameters:
      weight: [0.1, 0.1, 0.1, 0.2, 0.5]
      density: 0.7
      epsilon: 0.20
merge_method: della_linear
base_model: TareksLab/Diamond-DL-V1-LLaMa-70B
parameters:
  lambda: 1.1
  normalize: false
dtype: float32
out_dtype: bfloat16
chat_template: llama3
tokenizer:
  source: TareksLab/Ruby-D-V3-LLaMa-70B
  pad_to_multiple_of: 8
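To reproduce the merge, this config can be fed to mergekit, either via its mergekit-yaml CLI (mergekit-yaml config.yaml ./output-dir) or from Python. The sketch below follows mergekit's documented Python entry points; the config and output paths are placeholders, and the exact MergeOptions flags may vary between mergekit versions:

```python
# Hedged sketch of running the merge with mergekit's Python API,
# assuming the YAML above is saved as scripturient.yaml.
import yaml
import torch
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Parse the merge recipe into mergekit's config object.
with open("scripturient.yaml", "r", encoding="utf-8") as fp:
    config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Execute the della_linear merge and write the result to disk.
run_merge(
    config,
    out_path="./Scripturient-LLaMa-70B",  # placeholder output directory
    options=MergeOptions(
        cuda=torch.cuda.is_available(),
        copy_tokenizer=True,
        lazy_unpickle=True,   # reduces peak memory while loading checkpoints
        low_cpu_memory=True,
    ),
)
```

Note that merging six 70B models at dtype: float32 is memory-hungry; options like lazy_unpickle and low_cpu_memory exist to keep peak usage down, but expect the run to need substantial RAM and disk regardless.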