Llama-3.3-70B-GeneticLemonade-Opus

Creative model

Performance Metrics

| Metric | Value |
| --- | --- |
| Avg. Total Time | 68.97 s |
| Avg. TTFT (time to first token) | 53.93 s |
| Avg. Prefill TPS (tokens/s) | 1234.74 |
| Avg. Gen TPS (tokens/s) | 18.88 |

Model Information

| Field | Value |
| --- | --- |
| Context Size | 32768 |
| Quantization | r64 |
| Engine | aphrodite |
| Creation Method | Merge |
| Model Type | Llama70B |
| Chat Template | Llama 3 |
| Reasoning | No |
| Vision | No |
| Parameters | 70B |
| Added At | 9/9/2025 |


---
library_name: transformers
base_model_relation: merge
license: llama3
tags:
  - mergekit
  - merge
base_model:
  - shisa-ai/shisa-v2-llama3.3-70b
  - zerofata/L3.3-GeneticLemonade-Unleashed-v3-70B
  - TheDrummer/Anubis-70B-v1.1
  - Delta-Vector/Plesio-70B
---

GENETIC LEMONADE

Opus

01 // OVERVIEW

Felt like making a merge.

This model combines three individually solid, stable, and distinctly different RP models:

zerofata/GeneticLemonade-Unleashed-v3: creative, generalist RP / ERP model.

Delta-Vector/Plesio-70B: RP / ERP model with unique prose and unique dialogue.

TheDrummer/Anubis-70B-v1.1: character-portrayal-focused, neutrally aligned RP / ERP model.

02 // SILLYTAVERN SETTINGS

Play with these; they are not the 'best' settings, just a stable baseline.

Recommended Samplers

> Temp: 0.9 - 1.2
> MinP: 0.03 - 0.04
> TopP: 0.9 - 1.0
> Dry: 0.8, 1.75, 4
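
If you call the model through an OpenAI-compatible endpoint (aphrodite-engine exposes one) instead of SillyTavern, the same baseline can be passed per request. The sketch below is only an illustration under assumptions: the base URL, port, API key, model id, and the exact extra_body field names for MinP and DRY depend on your server and its version, so check its sampling-parameter docs.

```python
from openai import OpenAI

# Placeholder endpoint and model id; point these at your own deployment.
client = OpenAI(base_url="http://localhost:2242/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="Llama-3.3-70B-GeneticLemonade-Opus",
    messages=[
        {"role": "system", "content": "You are a roleplay partner. Stay in character."},
        {"role": "user", "content": "The tavern door creaks open and a stranger walks in."},
    ],
    temperature=1.0,   # recommended range: 0.9 - 1.2
    top_p=0.95,        # recommended range: 0.9 - 1.0
    max_tokens=512,
    # Samplers outside the OpenAI schema go through extra_body. Whether these
    # exact names are honored depends on the backend build, so verify them.
    extra_body={
        "min_p": 0.03,            # recommended range: 0.03 - 0.04
        "dry_multiplier": 0.8,    # DRY: 0.8, 1.75, 4 as recommended above
        "dry_base": 1.75,
        "dry_allowed_length": 4,
    },
)

print(response.choices[0].message.content)
```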

Instruct

Llama-3-Instruct-Names, but you will need to uncheck "System same as user".

03 // QUANTIZATIONS

04 // MERGE CONFIG

```yaml
models:
  - model: zerofata/L3.3-GeneticLemonade-Unleashed-v3-70B
  - model: Delta-Vector/Plesio-70B
  - model: TheDrummer/Anubis-70B-v1.1
base_model: shisa-ai/shisa-v2-llama3.3-70b
merge_method: sce
parameters:
  select_topk: 0.16
out_dtype: bfloat16
tokenizer:
  source: base
```
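
To reproduce the merge locally, mergekit can consume this config directly. The snippet below is a rough sketch based on the Python entry point shown in mergekit's README; the config path, output directory, and option values are assumptions, option names can differ between mergekit versions, and downloading four 70B checkpoints needs a lot of disk space.

```python
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Placeholder paths: save the YAML above as merge.yml next to this script.
CONFIG_YML = "./merge.yml"
OUTPUT_PATH = "./GeneticLemonade-Opus-70B"

with open(CONFIG_YML, "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path=OUTPUT_PATH,
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # SCE also runs on CPU, just slower
        copy_tokenizer=True,             # keep tokenizer files with the merged weights
        lazy_unpickle=True,              # lower peak RAM while reading shards
        low_cpu_memory=False,
    ),
)
```

The same run should also work through the CLI (mergekit-yaml merge.yml ./GeneticLemonade-Opus-70B) if you prefer not to script it.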