German-English specialized model
| Property | Value |
|---|---|
| Avg. Total Time | 44.63s |
| Avg. TTFT | 6.81s |
| Avg. Prefill TPS | 468.79 |
| Avg. Gen TPS | 17.50 |
| Context Size | 32768 |
| Quantization | r64 |
| Engine | aphrodite |
| Creation Method | LoRA Finetune |
| Model Type | Llama70B |
| Chat Template | Llama 3 |
| Reasoning | No |
| Vision | No |
| Parameters | 70B |
| Added At | 12/22/2024 |
| License | llama3.1 |

Fine-tuned model showcasing the potential of resource-efficient fine-tuning of large language models using Spectrum Fine-Tuning.
Introducing Llama-3.1-SauerkrautLM-70b-Instruct – our Sauerkraut version of the powerful meta-llama/Meta-Llama-3.1-70B-Instruct!
| Model | HF | EXL2 | GGUF | AWQ |
|---|---|---|---|---|
| Llama-3.1-SauerkrautLM-70b-Instruct | Link | coming soon | coming soon | coming soon |
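Since the card lists the Llama 3 chat template and a 32768-token context, a minimal usage sketch with Hugging Face transformers might look like the following. The repository ID is taken from the table above; the dtype, device placement, and generation settings are illustrative assumptions rather than settings from the model card:

```python
# Minimal usage sketch (assumptions: a multi-GPU or offloaded setup with
# enough memory for a 70B model; bf16 and the sampling settings are
# illustrative choices, not recommendations from the model card).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "VAGOsolutions/Llama-3.1-SauerkrautLM-70b-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# The model uses the Llama 3 chat template, so apply_chat_template inserts
# the correct special tokens for us.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Erkläre kurz, was Transfer Learning ist."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```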
Llama-3.1-SauerkrautLM-70b-Instruct
This model showcases the potential of resource-efficient fine-tuning of large language models using Spectrum Fine-Tuning. Here is a brief overview of the procedure:
- Fine-tuning on German-English data
- Cross-lingual transfer learning using the Sauerkraut Mix v2 dataset
The primary goal of this training was twofold:
1. To demonstrate that Spectrum Fine-Tuning, targeting just 15% of the layers, can significantly enhance the capabilities of a 70-billion-parameter model while using only a fraction of the resources required by classic fine-tuning approaches (a minimal sketch of the layer-targeting idea follows this list).
2. To showcase the effectiveness of cross-lingual transfer learning using the Sauerkraut Mix v2 dataset, enabling multilingual improvement without extensive language-specific training data.
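To make the layer-targeting idea concrete, here is a minimal sketch of Spectrum-style selective unfreezing. This is not the authors' implementation: real Spectrum ranks modules by a signal-to-noise-ratio analysis of their weights, which the hypothetical `snr_score` below only crudely approximates:

```python
# Sketch of Spectrum-style targeted fine-tuning: freeze everything, then
# unfreeze only the top ~15% of 2-D weight matrices by a per-module score.
# NOTE: real Spectrum ranks modules via a signal-to-noise-ratio analysis of
# their weights; snr_score below is a simplified stand-in for illustration.
import torch


def snr_score(weight: torch.Tensor) -> float:
    """Crude SNR proxy: mean absolute weight over weight standard deviation."""
    return (weight.abs().mean() / (weight.std() + 1e-8)).item()


@torch.no_grad()
def apply_spectrum_freeze(model: torch.nn.Module, target_fraction: float = 0.15) -> None:
    # Score every 2-D weight matrix (attention/MLP projections, embeddings).
    scored = [(name, snr_score(p)) for name, p in model.named_parameters() if p.dim() == 2]
    scored.sort(key=lambda item: item[1], reverse=True)
    keep = {name for name, _ in scored[: max(1, int(len(scored) * target_fraction))]}

    # Freeze all parameters, then re-enable gradients only for targeted ones.
    for name, p in model.named_parameters():
        p.requires_grad = name in keep

    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    total = sum(p.numel() for p in model.parameters())
    print(f"Training {trainable / total:.1%} of parameters")
```

The resource savings in this style of training come from only keeping gradients and optimizer states for the targeted subset of weights, rather than for all 70B parameters as in classic full fine-tuning.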
The results have been remarkable:
Key Findings: benchmark results are reported for AGIEval, GPT4All, TruthfulQA, BBH-HF, and MMLU-Multilingual (see the charts on the model page).
Despite our best efforts in data cleansing, we cannot entirely rule out the possibility of uncensored content slipping through, nor can we guarantee consistently appropriate behavior. If you encounter any issues or come across inappropriate content, we kindly request that you inform us through the contact information provided. Please also note that the licensing of these models does not constitute legal advice, and we are not responsible for the actions of third parties who use our models.
If you are interested in customized LLMs for business applications, please get in touch with us via our website. We are also grateful for your feedback and suggestions.
We are also keenly seeking support and investment for our startup, VAGO solutions, where we continuously advance the development of robust language models designed to address a diverse range of purposes and requirements. If the prospect of collaboratively navigating future challenges excites you, we warmly invite you to reach out to us at VAGO solutions.
Many thanks to meta-llama for providing such a valuable model to the Open-Source community.