Llama-3.3-70B-Apocrypha-0.4a

Creative model

Performance Metrics

| Metric | Value |
|---|---|
| Avg. Total Time | 17.29 s |
| Avg. TTFT (time to first token) | 8.85 s |
| Avg. Prefill TPS (tokens/s) | 821.80 |
| Avg. Gen TPS (tokens/s) | 20.87 |
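For context, these figures follow the usual streaming-latency definitions: TTFT is the delay before the first output token arrives, prefill TPS is prompt tokens processed per second, and gen TPS is output tokens produced per second after the first. A minimal sketch of how such numbers are derived, assuming hypothetical per-request timing fields (this is not the dashboard's actual code):

```python
from dataclasses import dataclass

@dataclass
class RequestTiming:
    # Hypothetical timing record for a single generation request.
    start: float         # wall-clock time the request was received (s)
    first_token: float   # wall-clock time the first output token arrived (s)
    end: float           # wall-clock time generation finished (s)
    prompt_tokens: int   # tokens in the prompt (prefill)
    output_tokens: int   # tokens generated

def metrics(t: RequestTiming) -> dict:
    ttft = t.first_token - t.start        # time to first token
    total = t.end - t.start               # total request time
    # Common approximation: all prompt tokens are processed during the TTFT window.
    prefill_tps = t.prompt_tokens / ttft
    # Generation speed, measured from first token to completion.
    gen_tps = t.output_tokens / (t.end - t.first_token)
    return {"total_s": total, "ttft_s": ttft,
            "prefill_tps": prefill_tps, "gen_tps": gen_tps}
```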

Model Information

| Field | Value |
|---|---|
| Context Size | 32768 tokens |
| Quantization | r64 |
| Engine | aphrodite |
| Creation Method | Merge |
| Model Type | Llama70B |
| Chat Template | Llama 3 |
| Reasoning | No |
| Vision | No |
| Parameters | 70B |
| Added At | 9/25/2025 |
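Since the serving engine is Aphrodite, which exposes an OpenAI-compatible API, a deployment like this one can usually be queried as sketched below. The base URL, API key, and model name are placeholder assumptions for illustration, not this host's actual endpoint:

```python
from openai import OpenAI

# Hypothetical local Aphrodite Engine endpoint; adjust the URL and key
# to match your own deployment.
client = OpenAI(base_url="http://localhost:2242/v1", api_key="sk-empty")

response = client.chat.completions.create(
    model="Llama-3.3-70B-Apocrypha-0.4a",  # placeholder model name
    messages=[
        {"role": "system", "content": "You are a creative storytelling assistant."},
        {"role": "user", "content": "Open a scene in a rain-soaked port city."},
    ],
    max_tokens=512,
    temperature=1.0,
)
print(response.choices[0].message.content)
```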


---
base_model: []
library_name: transformers
tags:
  - mergekit
  - merge
  - unaligned
  - not-for-all-audiences
---

Apocrypha-L3.3-70b-0.4a

Storytelling and Creative Writing model. (Work in progress)

My most stable merge yet /s. Llama-3-70B-Instruct-Storywriter is tough to work with, but I like it too much to exclude it. It'd be sick if it had an L3.3 version.

This iteration of Apocrypha uses Wayfarer-Large-70B in place of EVA-LLaMA-3.33-70B-v0.0. Wayfarer seems to help the model actually end its responses instead of going on forever, and I find it a nice addition.

If you like this model, go support the original creators! (summon this guy https://huggingface.co/tdrussell)

Chat and Instruct Template:

Llama3
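The tokenizer ships with this template, so prompts can be rendered with transformers' `apply_chat_template`. A minimal sketch; the repo id below is a hypothetical path for this model, so substitute the real one:

```python
from transformers import AutoTokenizer

# Hypothetical repo id; replace with the model's actual Hugging Face path.
tokenizer = AutoTokenizer.from_pretrained("BruhzWater/Apocrypha-L3.3-70b-0.4a")

messages = [
    {"role": "system", "content": "You are a storyteller."},
    {"role": "user", "content": "Continue the tale."},
]
# Renders the Llama 3 header/eot special-token format the model expects.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```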

Merge Details

This model was merged using the SCE merge method, with BruhzWater/Eden-L3.3-70b-0.4a as the base.

Models Merged

The following models were included in the merge:

  • Doctor-Shotgun/L3.3-70B-Magnum-Diamond
  • nbeerbower/Llama3.1-Gutenberg-Doppel-70B
  • tdrussell/Llama-3-70B-Instruct-Storywriter
  • LatitudeGames/Wayfarer-Large-70B-Llama-3.3
  • TheDrummer/Fallen-Llama-3.3-70B-v1

Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: /workspace/cache/models--Doctor-Shotgun--L3.3-70B-Magnum-Diamond/snapshots/a7dfb66b4469a4c9ca07ff28bccc73a44797e76c
  - model: /workspace/cache/models--nbeerbower--Llama3.1-Gutenberg-Doppel-70B/snapshots/f083f3a89b8275e7e5329bb0668ada189f80b507
  - model: /workspace/cache/models--tdrussell--Llama-3-70B-Instruct-Storywriter/snapshots/19be2a7c6382a9150e126cf144e2b2964e700d3c
  - model: /workspace/cache/models--LatitudeGames--Wayfarer-Large-70B-Llama-3.3/snapshots/68cb7a33f692be64d4b146576838be85593a7459
  - model: /workspace/cache/models--TheDrummer--Fallen-Llama-3.3-70B-v1/snapshots/d46ef2629f1c3cd46789a55793c5ff0af60de3e8
base_model: /workspace/prototype-0.4x295
select_topk: 0.24
merge_method: sce
tokenizer:
  source: base
chat_template: llama3
pad_to_multiple_of: 8
int8_mask: true
dtype: float32
```
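For anyone wanting to reproduce a merge like this, mergekit's `mergekit-yaml` CLI consumes a config of this shape directly. A minimal sketch, assuming mergekit is installed; the config file name and output directory are placeholders:

```python
import subprocess

# Run mergekit's CLI on the config above. "apocrypha-0.4a.yml" and the
# output directory are placeholder paths; --cuda assumes a GPU host with
# enough memory for a 70B merge.
subprocess.run(
    ["mergekit-yaml", "apocrypha-0.4a.yml", "./Apocrypha-L3.3-70b-0.4a", "--cuda"],
    check=True,
)
```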