Llama-3.3-70B-Dazzling-Star-Aurora-v0.0

Creative Model


Performance Metrics

  • Avg. Total Time: 17.02s
  • Avg. TTFT: 7.13s
  • Avg. Prefill TPS: 587.37
  • Avg. Gen TPS: 21.19
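
These four averages are related by simple arithmetic, which makes them easy to sanity-check. A back-of-the-envelope sketch in Python, assuming TTFT covers queueing plus prefill and the rest of the total time is decoding (the dashboard's exact metric definitions are an assumption here):

avg_total_s = 17.02    # Avg. Total Time
avg_ttft_s = 7.13      # Avg. TTFT (time to first token)
prefill_tps = 587.37   # prompt tokens processed per second
gen_tps = 21.19        # new tokens generated per second

decode_s = avg_total_s - avg_ttft_s              # ~9.89 s spent decoding
approx_gen_tokens = decode_s * gen_tps           # ~210 tokens generated per request
# Upper bound: TTFT may include queueing, not just prefill compute.
approx_prompt_tokens = avg_ttft_s * prefill_tps  # ~4188 prompt tokens

print(f"~{approx_gen_tokens:.0f} generated / ~{approx_prompt_tokens:.0f} prompt tokens per request")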

Model Information

  • Context Size: 32768
  • Quantization: r64
  • Engine: aphrodite
  • Creation Method: Merge
  • Model Type: Llama 70B
  • Chat Template: Llama 3
  • Reasoning: No
  • Vision: No
  • Parameters: 70B
  • Added At: 1/25/2025


base_model:
  • EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1
  • ArliAI/Llama-3.1-70B-ArliAI-RPMax-v1.3
  • unsloth/Meta-Llama-3.1-70B

library_name: transformers

tags:
  • mergekit
  • merge

license: other
license_name: eva-llama3.3

Dazzling-Star-Aurora-70b-v0.0

If somewhere amid that aimlessly drifting sky, There was a planet where our wishes could flow free... would we try to make it there? I wonder what we'd wish for if we did...~

Listen to the song on YouTube: https://www.youtube.com/watch?v=e1EExQiRhC0

The 70B version of Dazzling Star Aurora, merging EVA LLaMA 3.3 70B with RPMax Llama 3.1 70B.

Models:

  • EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1
  • ArliAI/Llama-3.1-70B-ArliAI-RPMax-v1.3
  • unsloth/Meta-Llama-3.1-70B

Instruct Format: Llama 3
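
Since the instruct format is Llama 3, the tokenizer's bundled chat template is the safest way to build prompts. A minimal sketch with transformers; the repo id is a placeholder for this model's Hugging Face path, and it assumes the tokenizer ships the Llama 3 chat template:

from transformers import AutoTokenizer

MODEL_ID = "path/to/Dazzling-Star-Aurora-70b-v0.0"  # placeholder: use the repo linked above

# apply_chat_template renders the <|start_header_id|>...<|eot_id|> markup
# for you, so you never hand-write Llama 3 special tokens.
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

messages = [
    {"role": "system", "content": "You are a creative roleplay partner."},
    {"role": "user", "content": "Describe a city under an aurora."},
]

prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,  # append the assistant header so the model continues
)
print(prompt)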

merge

This is a merge of pre-trained language models created using mergekit.

Merge Details

Merge Method

This model was merged using the TIES merge method, with unsloth/Meta-Llama-3.1-70B as the base.
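
In TIES, each finetune contributes a task vector (finetune minus base); the vector is trimmed to its highest-magnitude fraction (the per-model density in the config below, 0.7 and 0.8), a sign is elected per parameter, and only entries agreeing with that sign are averaged back onto the base. A toy single-tensor sketch of that idea, not mergekit's actual implementation:

import torch

def ties_merge_tensor(base, finetunes, weights, densities):
    # Toy TIES merge for one parameter tensor; illustrative only.
    deltas = []
    for ft, w, d in zip(finetunes, weights, densities):
        delta = ft - base                         # task vector
        k = max(1, int(d * delta.numel()))        # keep the top-d fraction by magnitude
        cutoff = delta.abs().flatten().kthvalue(delta.numel() - k + 1).values
        delta = torch.where(delta.abs() >= cutoff, delta, torch.zeros_like(delta))
        deltas.append(w * delta)                  # apply the per-model weight

    stacked = torch.stack(deltas)
    elected = stacked.sum(dim=0).sign()           # elect a sign per parameter
    agree = (stacked.sign() == elected) & (stacked != 0)
    merged = torch.where(agree, stacked, torch.zeros_like(stacked)).sum(dim=0)
    count = agree.sum(dim=0).clamp(min=1)         # mean over agreeing models only
    return base + merged / count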

Models Merged

The following models were included in the merge:

  • EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1
  • ArliAI/Llama-3.1-70B-ArliAI-RPMax-v1.3

Configuration

The following YAML configuration was used to produce this model:

models:
  - model: EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1
    parameters:
      weight: 0.3
      density: 0.7
  - model: ArliAI/Llama-3.1-70B-ArliAI-RPMax-v1.3
    parameters:
      weight: 0.4
      density: 0.8
base_model: unsloth/Meta-Llama-3.1-70B
parameters:
  epsilon: 0.05
  lambda: 1
  normalize: true
  int8_mask: true
merge_method: ties
dtype: bfloat16
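
Saved as config.yaml, the block above can be fed straight to mergekit's mergekit-yaml command. A hedged sketch of the invocation (the output path is a placeholder, and a full-weight merge of three 70B checkpoints needs substantial disk and RAM):

import subprocess

# mergekit-yaml <config> <output-dir>; --lazy-unpickle lowers peak memory,
# and --cuda can be added when a GPU is available.
subprocess.run(
    [
        "mergekit-yaml",
        "config.yaml",                   # the configuration shown above
        "./dazzling-star-aurora-70b",    # output directory (placeholder)
        "--lazy-unpickle",
    ],
    check=True,
)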