Llama-3.3-70B-Sapphira-0.2

Creative model

Performance Metrics

Avg. Total Time: 44.18s
Avg. TTFT: 11.48s
Avg. Prefill TPS: 537.82
Avg. Gen TPS: 20.23

Model Information

Context Size: 32768
Quantization: r64
Engine: aphrodite
Creation Method: Merge
Model Type: Llama70B
Chat Template: Llama 3
Reasoning: No
Vision: No
Parameters: 70B
Added At: 9/9/2025


base_model: []
library_name: transformers
tags:
  - mergekit
  - merge
  - unaligned
  - not-for-all-audiences

Sapphira-L3.3-70b-0.2

Storytelling and RP model similar to BruhzWater/Sapphira-L3.3-70b-0.1, but a little spicier.

I prefer the prose of this one over the original; it has a bit more of BruhzWater/Serpents-Tongue-L3.3-70b-0.3 in the mix.

Static quants: https://huggingface.co/mradermacher/Sapphira-L3.3-70b-0.2-GGUF

iMatrix quants: https://huggingface.co/mradermacher/Sapphira-L3.3-70b-0.2-i1-GGUF
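To pull one of these quants locally, something like the sketch below should work; the exact quant filename is a guess and should be checked against the repo's file list.

# Minimal sketch, assuming the huggingface_hub package is installed.
# The quant filename below is hypothetical; verify it in the GGUF repo before use.
from huggingface_hub import hf_hub_download

gguf_path = hf_hub_download(
    repo_id="mradermacher/Sapphira-L3.3-70b-0.2-GGUF",
    filename="Sapphira-L3.3-70b-0.2.Q4_K_M.gguf",  # hypothetical filename
)
print(gguf_path)  # local path to the downloaded GGUF file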

Chat Template: Llama3

Instruction Template: Deep Cogito (Llama3)

Sampler Settings

Starter:

Temp: 1
Min_P: 0.02
Top_P: 1

Experimental 1:

Temp: 0.95 - 1.1
Min_P: 0.015 - 0.03
Top_P: 0.97 - 1
XTC_Threshold: 0.11
XTC_Probability: 0.15

Experimental 2:

Temp: 0.95 - 1.1
Min_P: 0.015 - 0.03
Top_P: 1
Typical_P: 0.99
XTC_Threshold: 0.11
XTC_Probability: 0.15
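As a rough sketch of applying the Starter preset through an OpenAI-compatible endpoint (such as the one the aphrodite engine exposes): the base URL is a placeholder, and fields like min_p and the XTC settings are assumptions about what the backend accepts rather than part of this card.

# Minimal sketch, assuming the `openai` Python client and an OpenAI-compatible
# server (e.g. aphrodite) running at the placeholder URL below.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:2242/v1", api_key="not-needed")  # placeholder endpoint

response = client.chat.completions.create(
    model="Llama-3.3-70B-Sapphira-0.2",
    messages=[{"role": "user", "content": "Open a gothic mystery in one paragraph."}],
    temperature=1.0,  # Starter preset
    top_p=1.0,
    extra_body={
        "min_p": 0.02,             # assumed backend-specific sampling field
        # "xtc_threshold": 0.11,   # Experimental presets; XTC support is backend-dependent
        # "xtc_probability": 0.15,
    },
)
print(response.choices[0].message.content)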

Merge Method

This model was merged using the Multi-SLERP merge method, with deepcogito/cogito-v2-preview-llama-70B as the base model.
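For intuition only (this is the standard pairwise SLERP that Multi-SLERP generalizes, not a transcription of mergekit's implementation): spherical linear interpolation blends two weight tensors $\theta_a$ and $\theta_b$ along the arc between them,

$$
\mathrm{slerp}(t;\,\theta_a,\theta_b)=\frac{\sin\big((1-t)\,\Omega\big)}{\sin\Omega}\,\theta_a+\frac{\sin\big(t\,\Omega\big)}{\sin\Omega}\,\theta_b,
\qquad
\Omega=\arccos\frac{\langle\theta_a,\theta_b\rangle}{\lVert\theta_a\rVert\,\lVert\theta_b\rVert}.
$$

Multi-SLERP extends this to more than two donor models, interpolating them on the hypersphere (roughly speaking, relative to the base model) using the per-model weights in the configuration below, here 0.5 and 0.7 left unnormalized via normalize_weights: false.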

Models Merged

The following models were included in the merge:

  • BruhzWater/Apocrypha-L3.3-70b-0.3
  • BruhzWater/Serpents-Tongue-L3.3-70b-0.3

Configuration

The following YAML configuration was used to produce this model:

models:
  - model: /workspace/cache/models--BruhzWater--Apocrypha-L3.3-70b-0.3/snapshots/3facb4c0a7b953ff34a5caa90976830bf82a84c2
    parameters:
      weight: [0.5]
  - model: /workspace/cache/models--BruhzWater--Serpents-Tongue-L3.3-70b-0.3/snapshots/d007a7bcc7047d712abb2dfb6ad940fe03cd2047
    parameters:
      weight: [0.7]
base_model: /workspace/cache/models--deepcogito--cogito-v2-preview-llama-70B/snapshots/1e1d12e8eaebd6084a8dcf45ecdeaa2f4b8879ce
merge_method: multislerp
tokenizer:
  source: base
chat_template: llama3
parameters:
  normalize_weights: false
  eps: 1e-8
pad_to_multiple_of: 8
int8_mask: true
dtype: bfloat16
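For reference, a minimal sketch of running a config like the one above with mergekit's Python API; the config filename and output path are placeholders, and the options shown are illustrative rather than the ones used to build this model.

# Minimal sketch, assuming mergekit is installed (pip install mergekit).
# "sapphira-0.2.yml" is the YAML above saved to disk; the output path is a placeholder.
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("sapphira-0.2.yml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    "./Sapphira-L3.3-70b-0.2",  # output directory (placeholder)
    options=MergeOptions(
        cuda=True,              # merge on GPU if one is available
        copy_tokenizer=True,    # write a tokenizer into the output directory
        lazy_unpickle=True,     # lower peak memory while loading shards
    ),
)

The same configuration can also be run with mergekit's mergekit-yaml command-line entry point instead of the Python API.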