| Metric | Value |
| --- | --- |
| Avg. Total Time | 83.90s |
| Avg. TTFT | 46.98s |
| Avg. Prefill TPS | 690.06 |
| Avg. Gen TPS | 18.10 |

| Property | Value |
| --- | --- |
| Context Size | 32768 |
| Quantization | r64 |
| Engine | aphrodite |
| Creation Method | Merge |
| Model Type | Llama70B |
| Chat Template | Llama 3 |
| Reasoning | No |
| Vision | No |
| Parameters | 70B |
| Added At | 3/25/2025 |
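Taken together, these averages can be read with a back-of-the-envelope sketch (assuming avg. total time ≈ avg. TTFT + generation time; the variable names below are illustrative, not part of the benchmark):

```python
# Rough reading of the benchmark averages above (assumption: total ~= TTFT + generation).
avg_total_s, avg_ttft_s = 83.90, 46.98
prefill_tps, gen_tps = 690.06, 18.10

gen_time_s = avg_total_s - avg_ttft_s             # ~36.9 s spent generating
approx_output_tokens = gen_time_s * gen_tps       # ~668 output tokens per response
approx_prompt_tokens = avg_ttft_s * prefill_tps   # ~32,400 prompt tokens, near the 32768 context

print(f"~{approx_prompt_tokens:.0f} prompt tokens, ~{approx_output_tokens:.0f} output tokens")
```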
Hi hi! 🌟
This is a collaboration work between GradientPutri and Sao10K.
This is a passion project of mine spanning the past few weeks, so we hope you like it.
While there may be some minor issues, I think the final result is nice, and it produces good outputs, which was the main goal.
Model card made by GradientPutri.
This model is based on Meta's Llama 3.3 and is subject to the Llama 3.3 Community License Agreement and the Acceptable Use Policy.
While we are unable to disallow commercial usage, do note that this project was made with our own resources, time, and effort, and we'd rather not be discouraged from making future models. We kindly request that commercial users reach out before deployment to discuss usage and proper attribution. We appreciate users who help maintain transparency in the AI ecosystem by keeping us informed of how our work is being used. The same goes for any merges or derivatives, hopefully :)
Total token count: ~270M Tokens (210M Trainable), over 2 epochs.
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
{output}<|eot_id|>
---
Note that newlines are represented by the line breaks in the example above.
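A minimal sketch of assembling a single-turn prompt in this format with plain Python string formatting (the `build_prompt` helper and the example strings are illustrative; the `\n\n` after each header follows the standard Llama 3 chat template):

```python
def build_prompt(system_prompt: str, user_input: str) -> str:
    """Assemble a single-turn prompt in the Llama 3 format shown above."""
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
        f"{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
        f"{user_input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
    )


prompt = build_prompt("You are a helpful assistant.", "Hi hi!")
# The model's completion ends with its own <|eot_id|> token.
```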
temperature: 0.75
min_p: 0.1
repetition_penalty: 1.1
presence_penalty: 1.1
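As a sketch, these settings could be passed to an OpenAI-compatible endpoint such as the one aphrodite-engine exposes (the base URL and model name below are placeholders; `min_p` and `repetition_penalty` are not standard OpenAI parameters, but aphrodite/vLLM-style servers accept them via the request body):

```python
from openai import OpenAI

# Placeholder endpoint and model name for a local aphrodite-engine server.
client = OpenAI(base_url="http://localhost:2242/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="your-model-name",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hi hi!"},
    ],
    temperature=0.75,
    presence_penalty=1.1,
    # min_p and repetition_penalty are not part of the OpenAI API;
    # aphrodite/vLLM-style servers read them from extra_body.
    extra_body={"min_p": 0.1, "repetition_penalty": 1.1},
)
print(response.choices[0].message.content)
```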
# Iterations
num_epochs: 2
# Batching - global batch size = 4 GPUs × micro_batch_size 2 × gradient_accumulation_steps 4 = 32
gradient_accumulation_steps: 4
micro_batch_size: 2
# Optimizer
optimizer: paged_ademamix_8bit
lr_scheduler: cosine
learning_rate: 0.00002
max_grad_norm: 1
weight_decay: 0.01
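A quick sanity check of the batching comment above (the GPU count is taken from that comment; it is not an axolotl config key):

```python
# Effective (global) batch size implied by the training config above.
num_gpus = 4                         # "4 GPUs" from the batching comment
micro_batch_size = 2
gradient_accumulation_steps = 4

global_batch_size = num_gpus * micro_batch_size * gradient_accumulation_steps
print(global_batch_size)             # 32, matching the "= 32" in the comment
```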
🦊 Thank you for visiting! May the foxes bring you good fortune! 🌸