Avg. Total Time: 170.78s
Avg. TTFT: 120.97s
Avg. Prefill TPS: 474.02
Avg. Gen TPS: N/A
Context Size: 262144
Quantization: r64
Engine: vllm
Creation Method: LoRA Finetune
Model Type: Gemma31B
Chat Template: Gemma4
Reasoning: Yes
Vision: Yes
Parameters: 31B
Added At: 5/2/2026
Completely untested, YMMV, but Fuck It We Ball; hopefully it's better at conversation than v0 was.
This is a merge of pre-trained language models created using mergekit.
This model was merged using the TIES merge method, with google/gemma-4-31B-it as the base.

The following models were included in the merge:

- trashpanda-org/gemma-4-31b-larkspur-v0
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: google/gemma-4-31B-it
  - model: trashpanda-org/gemma-4-31b-larkspur-v0
    parameters:
      density: 1
      weight: 1
merge_method: ties
base_model: google/gemma-4-31B-it
parameters:
  normalize: true
dtype: bfloat16
```
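For readers unfamiliar with TIES, the idea behind the config above can be sketched in a few lines of NumPy: compute task vectors (fine-tuned minus base weights), trim each to its highest-magnitude entries according to `density`, elect a majority sign per parameter, and merge only the entries that agree with that sign. This is a minimal illustrative sketch, not mergekit's actual implementation; `ties_merge` and its signature are hypothetical.

```python
import numpy as np

def ties_merge(base, finetuned, density=1.0, weight=1.0, normalize=True):
    # Hypothetical helper sketching the TIES merge for a single
    # parameter tensor; not mergekit's real code.
    # 1. Task vectors: scaled difference between fine-tuned and base weights.
    deltas = [(ft - base) * weight for ft in finetuned]
    # 2. Trim: keep only the top `density` fraction of each delta by magnitude
    #    (density=1, as in the config above, keeps everything).
    trimmed = []
    for d in deltas:
        if density < 1.0:
            k = int(np.ceil(density * d.size))
            thresh = np.sort(np.abs(d).ravel())[-k]
            d = np.where(np.abs(d) >= thresh, d, 0.0)
        trimmed.append(d)
    stacked = np.stack(trimmed)
    # 3. Elect signs: majority sign per parameter across all task vectors.
    sign = np.sign(stacked.sum(axis=0))
    # 4. Disjoint merge: combine only entries agreeing with the elected sign.
    mask = (np.sign(stacked) == sign) & (stacked != 0)
    counts = np.maximum(mask.sum(axis=0), 1)
    merged = (stacked * mask).sum(axis=0)
    if normalize:
        merged = merged / counts  # average instead of sum, per normalize: true
    return base + merged
```

With `density: 1` and `weight: 1` as in the YAML, step 2 is a no-op and the merge reduces to sign election plus a normalized sum of the agreeing deltas.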