Avg. Total Time: 30.16s
Avg. TTFT: 20.68s
Avg. Prefill TPS: 852.30
Avg. Gen TPS: 10.54
Context Size: 32768
Quantization: r64
Engine: aphrodite
Creation Method: LoRA Finetune
Model Type: Llama70B
Chat Template: Llama 3
Reasoning: No
Vision: No
Parameters: 70B
Added At: 12/22/2024
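The benchmark averages above fit together with a little arithmetic: TTFT is roughly prompt tokens divided by prefill throughput, and the remaining time is generated tokens divided by decode throughput. A minimal sketch of the implied token counts, assuming TTFT is dominated by prefill:

```python
# Back-of-the-envelope check of the benchmark averages above.
# Assumes total time = TTFT + decode time, and that TTFT is all prefill.
avg_total = 30.16     # s
avg_ttft = 20.68      # s
prefill_tps = 852.30  # tokens/s
gen_tps = 10.54       # tokens/s

prompt_tokens = avg_ttft * prefill_tps       # implied average prompt length
decode_time = avg_total - avg_ttft           # time spent generating
generated_tokens = decode_time * gen_tps     # implied average completion length

print(f"~{prompt_tokens:.0f} prompt tokens, ~{generated_tokens:.0f} generated tokens")
```

By this estimate the averages correspond to prompts of roughly 17.6k tokens (consistent with the 32768 context size) and completions of about 100 tokens.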
We introduce the latest in the Smaug series, the Dracarys family of finetunes targeting coding performance improvements across a variety of base models.
This variant is a finetune of meta-llama/Meta-Llama-3.1-70B-Instruct.
Compared to meta-llama/Meta-Llama-3.1-70B-Instruct, Dracarys achieves better LiveCodeBench scores (see the evaluation results below).
The prompt format is unchanged from Llama 3 70B Instruct; see the evaluations for the prompts used for LiveCodeBench (LCB).
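For reference, the Llama 3 chat format wraps each message in header and end-of-turn tokens. The sketch below hand-rolls that layout to make it visible; in practice you should rely on the tokenizer's `apply_chat_template` (as the snippet below does) rather than this approximation:

```python
def llama3_prompt(messages):
    """Approximate the Llama 3 chat format (illustrative only; prefer
    tokenizer.apply_chat_template for real use)."""
    parts = ["<|begin_of_text|>"]
    for m in messages:
        parts.append(
            f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n{m['content']}<|eot_id|>"
        )
    # add_generation_prompt=True appends an open assistant turn:
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

print(llama3_prompt([{"role": "user", "content": "Hi"}]))
```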
See the snippet below for usage with Transformers:

```python
import transformers
import torch

# This document describes the Llama 3.1 70B finetune; the original snippet
# pointed at the 72B variant by mistake.
model_id = "abacusai/Dracarys2-Llama-3.1-70B-Instruct"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a data science coding assistant that generates Python code using Pandas and Numpy."},
    {"role": "user", "content": "Write code to select rows from the dataframe `df` having the maximum `temp` for each `city`"},
]

prompt = pipeline.tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)

terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>"),
    pipeline.tokenizer.convert_tokens_to_ids("<|end_of_text|>"),
]

outputs = pipeline(
    prompt,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
print(outputs[0]["generated_text"][len(prompt):])
```
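For the sample prompt above, a typical correct answer uses `groupby` with `idxmax`. This is a sketch of the kind of code the model is expected to produce, not its actual output:

```python
import pandas as pd

# Toy dataframe standing in for the user's `df` (hypothetical data).
df = pd.DataFrame({
    "city": ["A", "A", "B", "B"],
    "temp": [30, 35, 20, 25],
})

# Select the row with the maximum `temp` for each `city`.
# idxmax returns the index label of the first maximum per group.
result = df.loc[df.groupby("city")["temp"].idxmax()]
print(result)
```

Note that `idxmax` keeps only the first row on ties; use a boolean mask against `groupby(...).transform("max")` if all tied rows are wanted.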
LiveCodeBench (overall):

| Model | Code Generation | Code Execution | Test Output Prediction |
|---|---|---|---|
| Dracarys2-Llama-3.1-70B-Instruct | 33.44 | 48.26 | 52.10 |
| Meta-Llama-3.1-70B-Instruct | 32.23 | 48.768 | 41.40 |
Code Generation, by difficulty:

| Model | Easy | Medium | Hard |
|---|---|---|---|
| Dracarys2-Llama-3.1-70B-Instruct | 71.29 | 18.48 | 3.57 |
| Meta-Llama-3.1-70B-Instruct | 68.4 | 17.99 | 3.57 |
Code Execution, chain-of-thought (COT) vs. direct:

| Model | COT | Non-COT |
|---|---|---|
| Dracarys2-Llama-3.1-70B-Instruct | 75.55 | 48.26 |
| Meta-Llama-3.1-70B-Instruct | 70.14 | 48.768 |
Test Output Prediction, by difficulty:

| Model | Easy | Medium | Hard |
|---|---|---|---|
| Dracarys2-Llama-3.1-70B-Instruct | 63.53 | 47.30 | 43.61 |
| Meta-Llama-3.1-70B-Instruct | 51.22 | 35.91 | 34.30 |
LiveBench:

| Model | Global Average | Coding Average | Reasoning Average | Mathematics Average | Data Analysis Average | Language Average | IF Average |
|---|---|---|---|---|---|---|---|
| Dracarys2-Llama-3.1-70B-Instruct | 47.8 | 36.3 | 47.3 | 38.9 | 46.1 | 41.5 | 76.6 |
| Meta-Llama-3.1-70B-Instruct | 45.1 | 30.7 | 35.3 | 37.0 | 48.4 | 42.1 | 77.2 |