Qwen3.5-27B-Musica-v1

Creative model

Performance Metrics

| Metric | Value |
| --- | --- |
| Avg. Total Time | 47.81 s |
| Avg. TTFT | 7.32 s |
| Avg. Prefill TPS | 1359.19 |
| Avg. Gen TPS | 27.59 |

Model Information

| Field | Value |
| --- | --- |
| Context Size | 262144 |
| Quantization | r64 |
| Engine | vllm |
| Creation Method | LoRA |
| Model Type | Qwen35 |
| Chat Template | Qwen3.5 |
| Reasoning | Yes |
| Vision | Yes |
| Parameters | 27B |
| Added At | 3/27/2026 |


license: apache-2.0
datasets:
  • EVA-UNIT-01/Lilith-v0.3
  • zerofata/Gemini-3.1-Pro-GLM5-Characters
  • zerofata/Instruct-Anime
  • zerofata/Anime-AMA-Prose
  • allura-forge/mimo-v2-pro-claude-distill-hs3
  • allura-forge/doubao-seed2.0-distill-multiturn-expr-rp
  • Delta-Vector/Orion-Deepseek-V3-RP-Filtered
  • Delta-Vector/Orion-Deepseek-R1-RP-Filtered
  • Gryphe/ChatGPT-4o-Writing-Prompts
  • Gryphe/Sonnet3.5-Charcard-Roleplay
  • ToastyPigeon/kimi-stories-instruct
  • ToastyPigeon/kimi-rp-v3
  • ToastyPigeon/fujin-filtered-instruct
  • Dxniz/Novelist-CoT
language:
  • en
base_model:
  • ArliAI/Qwen3.5-27B-Derestricted
pipeline_tag: image-text-to-text

Qwen3.5-27B Musica v1

RP/storygen/conversational tune of Qwen3.5-27B. Stylewise it looked pretty nice to me and seems decently steerable; it should also push the refusal rate even lower than the Derestricted version it was trained on. Both reasoning and non-reasoning modes are supported, and reasoning mode even has several styles of reasoning, reroll to see them (perhaps I should mark them so they can be invoked manually in the next iter?). Might or might not have slightly better world knowledge than base, lol.
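The reasoning/non-reasoning switch usually lives in the chat template. Below is a minimal sketch of an OpenAI-compatible request body for a vLLM server hosting this model; the served model name is a placeholder, and the `chat_template_kwargs` pass-through with `enable_thinking` is an assumption based on how Qwen-family templates expose the toggle:

```python
import json

# Build an OpenAI-compatible /v1/chat/completions payload. vLLM forwards
# chat_template_kwargs into the chat template; Qwen-style templates read
# enable_thinking there (assumed to carry over to Qwen3.5).
payload = {
    "model": "Qwen3.5-27B-Musica-v1",  # placeholder served-model name
    "messages": [
        {"role": "user", "content": "Write a short scene set in a rainy city."}
    ],
    "chat_template_kwargs": {"enable_thinking": False},  # non-reasoning mode
}

body = json.dumps(payload)
print(body)
```

Setting `enable_thinking` to `True` (or omitting it, depending on the template default) should get you the reasoning mode instead.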

This training run was sponsored by ArliAI

Wishlist for the next iter: more conversational reasoning data (and more reasoning data in general), and perhaps something multiturn for creative writing. Perhaps also train Qwen3.5-9B and Nemotron-3-Super-120B-A12B before iterating on the dataset.

Training notes

Rank 64, alpha 64 LoRA on top of ArliAI's Derestricted version, trained for two epochs with a constant LR scheduler. Training took ~17 hours on OwenArli's 2x RTX Pro 6000 Blackwell.
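For scale, here is a back-of-envelope sketch of the effective batch implied by the config below; treating the two GPUs as a plain data-parallel factor is an assumption:

```python
# Effective batch for the run, from the Axolotl config: micro_batch_size 1,
# gradient_accumulation_steps 8, sequence_len 16384, with sample_packing so
# each 16384-token sequence is densely packed. num_gpus = 2 assumes FSDP over
# both cards behaves data-parallel for throughput purposes.
micro_batch_size = 1
grad_accum_steps = 8
num_gpus = 2
sequence_len = 16384

sequences_per_step = micro_batch_size * grad_accum_steps * num_gpus
tokens_per_step = sequences_per_step * sequence_len
print(sequences_per_step, tokens_per_step)  # 16 262144
```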

The run's graphs are on Comet (don't worry about that API key in the config, I deactivated it before sharing this lol)

LoRA adapter

Recommended samplers

  • Temperature: 1
  • NSigma: 2
  • Min-P: 0.02
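Min-P keeps only tokens whose probability is at least `min_p` times the top token's probability, then renormalizes. A toy sketch of that filter (an illustration, not vLLM's actual implementation):

```python
def min_p_filter(probs, min_p=0.02):
    """Keep tokens with prob >= min_p * max(prob); renormalize the survivors."""
    cutoff = min_p * max(probs.values())
    kept = {tok: p for tok, p in probs.items() if p >= cutoff}
    total = sum(kept.values())
    return {tok: p / total for tok, p in kept.items()}

# Toy next-token distribution: cutoff = 0.02 * 0.50 = 0.01,
# so only "zzz" (0.005) falls below it and gets dropped.
probs = {"the": 0.50, "a": 0.30, "and": 0.15, "zzz": 0.005}
filtered = min_p_filter(probs, min_p=0.02)
print(sorted(filtered))
```

At temperature 1 this leaves the shape of the distribution alone and just trims the long tail; how the NSigma cutoff composes with it is backend-dependent.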

Axolotl config

base_model: /home/arli/models/Qwen3.5-27B-Derestricted

plugins:
  - axolotl.integrations.liger.LigerPlugin
  - axolotl.integrations.cut_cross_entropy.CutCrossEntropyPlugin
cut_cross_entropy: true
liger_rope: true
liger_rms_norm: true
liger_layer_norm: true
liger_glu_activation: true

load_in_8bit: false
load_in_4bit: false

shuffle_merged_datasets: true
datasets:
  - path: ./musica-nonreasoning-sft-megafix.jsonl
    type: chat_template
    field_messages: conversations
    message_property_mappings:
      role: from
      content: value
  - path: ./musica-reasoning-sft-fix.jsonl
    type: chat_template
    field_messages: conversations
    message_property_mappings:
      role: from
      content: value

dataset_prepared_path: ./last_run_prepared
val_set_size: 0
output_dir: ./outputs/v1
adapter: lora
save_safetensors: true

sequence_len: 16384
sample_packing: true
pad_to_sequence_len: true

lora_r: 64
lora_alpha: 64
lora_dropout: 0.0
lora_target_modules:
  - q_proj
  - k_proj
  - v_proj
  - o_proj
  - down_proj
  - up_proj
  # Uncomment below to also target the linear attention projections.
  # These use separate in_proj_qkv / in_proj_z / out_proj (Qwen3.5-specific).
  # - linear_attn.in_proj_qkv
  # - linear_attn.in_proj_z
  # - linear_attn.out_proj

lora_mlp_kernel: false
lora_qkv_kernel: false
lora_o_kernel: false

gradient_accumulation_steps: 8
micro_batch_size: 1
num_epochs: 2
optimizer: adamw_torch_fused
lr_scheduler: constant
learning_rate: 4e-6
max_grad_norm: 0.5

bf16: auto

use_comet: true
comet_project_name: musica-27b

auto_resume_from_checkpoints: false
logging_steps: 1
flash_attention: true

warmup_ratio: 0
evals_per_epoch: 0
saves_per_epoch: 4
save_total_limit: 4

gradient_checkpointing: false
gradient_checkpointing_kwargs:
  use_reentrant: false

fsdp_config:
  fsdp_version: 2
  offload_params: false
  cpu_ram_efficient_loading: false
  auto_wrap_policy: TRANSFORMER_BASED_WRAP
  transformer_layer_cls_to_wrap: Qwen3_5DecoderLayer
  state_dict_type: FULL_STATE_DICT
  sharding_strategy: FULL_SHARD
  reshard_after_forward: true
  activation_checkpointing: true
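The `message_property_mappings` blocks in the datasets section above remap ShareGPT-style `from`/`value` keys onto the `role`/`content` keys chat templates expect. The same remap in plain Python, as an illustration rather than Axolotl's own code:

```python
# Mirror of message_property_mappings {role: from, content: value}:
# read "from" into "role" and "value" into "content" for each message.
def remap(messages):
    return [{"role": m["from"], "content": m["value"]} for m in messages]

sharegpt = [
    {"from": "human", "value": "Tell me a story."},
    {"from": "gpt", "value": "Once upon a time..."},
]
print(remap(sharegpt))
```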