Llama-3.3-70B-Progenitor-V3.3

Creative Model


Performance Metrics

Avg. Total Time: 28.09s
Avg. TTFT (time to first token): 12.93s
Avg. Prefill TPS (tokens/s): 718.02
Avg. Gen TPS (tokens/s): 20.87

Model Information

Context Size: 32768 tokens
Quantization: r64
Engine: aphrodite
Creation Method: Merge
Model Type: Llama70B
Chat Template: Llama 3
Reasoning: No
Vision: No
Parameters: 70B
Added At: 2/17/2025
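
Given the Llama 3 chat template and 32768-token context listed above, the model should load through the standard transformers API. A minimal sketch, assuming a placeholder repo id (the real Hugging Face path is the one this card links to, not the id below) and enough GPU memory for a 70B model in bfloat16; device_map="auto" additionally requires the accelerate package:

# Minimal loading sketch; the repo id is a placeholder, not the model's
# actual Hugging Face path.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "your-org/Llama-3.3-70B-Progenitor-V3.3"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # matches the merge's out_dtype below
    device_map="auto",           # shard across available GPUs
)

# The card specifies the Llama 3 chat template, so apply_chat_template
# produces correctly formatted prompts.
messages = [{"role": "user", "content": "Write the opening line of a story."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))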


base_model:
  - SicariusSicariiStuff/Negative_LLAMA_70B
  - TheDrummer/Anubis-70B-v1
  - meta-llama/Llama-3.3-70B-Instruct
  - Sao10K/70B-L3.3-Cirrus-x1
  - Sao10K/L3.1-70B-Hanami-x1
  - EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1
library_name: transformers
tags:
  - mergekit
  - merge
license: llama3.3


Had to make another test model after getting a recommendation to try Llama 3.3 Instruct as the base, and here it is. I am really having fun with this one. I think it beats versions 1.1 and 2.2.

merge

This is a merge of pre-trained language models created using mergekit.

Merge Details

Merge Method

This model was merged using the Linear DELLA (della_linear) merge method, with meta-llama/Llama-3.3-70B-Instruct as the base.

Models Merged

The following models were included in the merge:

  • Sao10K/L3.1-70B-Hanami-x1
  • Sao10K/70B-L3.3-Cirrus-x1
  • SicariusSicariiStuff/Negative_LLAMA_70B
  • TheDrummer/Anubis-70B-v1
  • EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1

Configuration

The following YAML configuration was used to produce this model:

models:
  - model: Sao10K/L3.1-70B-Hanami-x1
    parameters:
      weight: 0.20
      density: 0.7
  - model: Sao10K/70B-L3.3-Cirrus-x1
    parameters:
      weight: 0.20
      density: 0.7
  - model: SicariusSicariiStuff/Negative_LLAMA_70B
    parameters:
      weight: 0.20
      density: 0.7
  - model: TheDrummer/Anubis-70B-v1
    parameters:
      weight: 0.20
      density: 0.7
  - model: EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1
    parameters:
      weight: 0.20
      density: 0.7
merge_method: della_linear
base_model: meta-llama/Llama-3.3-70B-Instruct
parameters:
  epsilon: 0.2
  lambda: 1.1
dtype: float32
out_dtype: bfloat16
tokenizer:
  source: union
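
To reproduce the merge, the configuration above can be passed to mergekit's mergekit-yaml command-line entry point. A minimal sketch, run from Python for consistency with the loading snippet above; it assumes mergekit is installed (pip install mergekit), that config.yaml contains the YAML above, and that enough disk space and RAM are available for a float32 merge of six 70B models:

# Reproduction sketch: shells out to mergekit's documented CLI.
import subprocess

subprocess.run(
    [
        "mergekit-yaml",
        "config.yaml",        # the della_linear configuration above
        "./Progenitor-V3.3",  # placeholder output directory
        "--cuda",             # optional: do the merge arithmetic on GPU
    ],
    check=True,
)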

Support on KO-FI <3