Llama-3.3-70B-Grandiloquence

Creative Model

Performance Metrics

Avg. Total Time: 2.60 s
Avg. TTFT (time to first token): 7.58 s
Avg. Prefill TPS: 563.78 tok/s
Avg. Gen TPS: 18.50 tok/s

Model Information

Context Size: 32,768 tokens
Quantization: r64
Engine: aphrodite
Creation Method: Merge
Model Type: Llama70B
Chat Template: Llama 3
Reasoning: No
Vision: No
Parameters: 70B
Added At: 2/17/2025
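
The card lists the Llama 3 chat template and a 32,768-token context. A minimal prompting sketch, assuming the transformers library and a repo id guessed from the card title (the actual Hugging Face id may differ):

```python
from transformers import AutoTokenizer

# Repo id assumed from the card title; the real Hugging Face id may differ.
tok = AutoTokenizer.from_pretrained("Llama-3.3-70B-Grandiloquence")

messages = [
    {"role": "system", "content": "You are a grandiloquent storyteller."},
    {"role": "user", "content": "Describe a thunderstorm at sea."},
]

# The card lists the Llama 3 chat template, so apply_chat_template renders
# the <|start_header_id|>role<|end_header_id|> turn format the model expects.
prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```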


base_model:
  - Nohobby/L3.3-Prikol-70B-v0.5
  - Doctor-Shotgun/L3.3-70B-Magnum-v4-SE
  - meta-llama/Llama-3.3-70B-Instruct
  - NeverSleep/Lumimaid-v0.2-70B
  - ArliAI/Llama-3.3-70B-ArliAI-RPMax-v1.4
  - Sao10K/70B-L3.3-mhnnn-x1
library_name: transformers
tags:
  - mergekit
  - merge
license: llama3.3

EXPERIMENT

merge

This is a merge of pre-trained language models created using mergekit.

Merge Details

Merge Method

This model was merged using the Linear DELLA (della_linear) merge method, with meta-llama/Llama-3.3-70B-Instruct as the base.
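
In rough terms, DELLA forms each model's task vector (its parameter delta from the base), stochastically drops entries with probabilities that decrease as an entry's magnitude grows (density sets the average keep rate, epsilon the spread of drop probabilities around it), rescales the survivors so the expected delta is preserved, and adds the weighted linear combination of the pruned deltas, scaled by lambda, back onto the base. Below is a toy sketch of that drop-and-rescale idea, assuming PyTorch; it illustrates the mechanism only and is not mergekit's implementation.

```python
import torch

def della_linear_merge(base, tuned, weights, density=0.7, epsilon=0.2, lam=1.1):
    """Toy drop-and-rescale merge over tensors (illustration only)."""
    merged_delta = torch.zeros_like(base)
    for w, ft in zip(weights, tuned):
        delta = ft - base                                   # task vector
        # Rank entries by magnitude: 0 = smallest, 1 = largest.
        ranks = delta.abs().flatten().argsort().argsort().float()
        ranks = ranks / max(ranks.numel() - 1, 1)
        # Spread drop probabilities across an epsilon-wide band around
        # (1 - density); larger-magnitude entries are dropped less often.
        p_drop = ((1.0 - density) + epsilon * (0.5 - ranks)).clamp(0.0, 1.0)
        keep = (torch.rand_like(p_drop) >= p_drop).float()
        # Rescale survivors so the expected value of the delta is unchanged.
        pruned = delta.flatten() * keep / (1.0 - p_drop).clamp(min=1e-8)
        merged_delta += w * pruned.view_as(delta)
    return base + lam * merged_delta                        # lambda rescales the sum

# Example: five equally weighted contributors, mirroring the config below.
base = torch.randn(4096)
tuned = [base + 0.1 * torch.randn(4096) for _ in range(5)]
merged = della_linear_merge(base, tuned, weights=[0.20] * 5)
```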

Models Merged

The following models were included in the merge:

  • NeverSleep/Lumimaid-v0.2-70B
  • ArliAI/Llama-3.3-70B-ArliAI-RPMax-v1.4
  • Doctor-Shotgun/L3.3-70B-Magnum-v4-SE
  • Sao10K/70B-L3.3-mhnnn-x1
  • Nohobby/L3.3-Prikol-70B-v0.5

Configuration

The following YAML configuration was used to produce this model:

models:
  - model: NeverSleep/Lumimaid-v0.2-70B
    parameters:
      weight: 0.20
      density: 0.7
  - model: ArliAI/Llama-3.3-70B-ArliAI-RPMax-v1.4
    parameters:
      weight: 0.20
      density: 0.7
  - model: Doctor-Shotgun/L3.3-70B-Magnum-v4-SE
    parameters:
      weight: 0.20
      density: 0.7
  - model: Sao10K/70B-L3.3-mhnnn-x1
    parameters:
      weight: 0.20
      density: 0.7
  - model: Nohobby/L3.3-Prikol-70B-v0.5
    parameters:
      weight: 0.20
      density: 0.7
merge_method: della_linear
base_model: meta-llama/Llama-3.3-70B-Instruct
parameters:
  epsilon: 0.2
  lambda: 1.1
dtype: float32
out_dtype: bfloat16
tokenizer:
  source: union
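
Saved to a file (e.g. config.yaml; the filename and output path here are illustrative), a configuration like this would typically be run with mergekit's CLI, along the lines of `mergekit-yaml config.yaml ./output-model-directory --cuda`. This is an assumed, typical invocation, not a record of how this particular merge was produced.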