| Field | Value |
|---|---|
| Avg. Total Time | 17.95s |
| Avg. TTFT | 8.77s |
| Avg. Prefill TPS | 1008.22 |
| Avg. Gen TPS | 17.04 |
| Context Size | 32768 |
| Quantization | r64 |
| Engine | aphrodite |
| Creation Method | FFT |
| Model Type | Gemma27B |
| Chat Template | Gemma 2 |
| Reasoning | No |
| Vision | Yes |
| Parameters | 27B |
| Added At | 8/8/2025 |
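As a sanity check on the benchmark averages above, the implied output length per request can be back-calculated; this is a rough sketch that assumes total time ≈ TTFT + generated tokens ÷ generation TPS (queueing and decode warm-up are ignored):

```python
# Back-of-the-envelope check on the benchmark averages above.
# Assumption: total time ≈ TTFT + generated_tokens / generation TPS
avg_total_s = 17.95   # Avg. Total Time
avg_ttft_s = 8.77     # Avg. TTFT
avg_gen_tps = 17.04   # Avg. Gen TPS

gen_tokens = (avg_total_s - avg_ttft_s) * avg_gen_tps
print(f"implied output length: ~{gen_tokens:.0f} tokens per request")
```

Under that assumption, the averages correspond to roughly 156 generated tokens per request.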
Welcome to Nidum's Gemma-3-27B Instruct Uncensored, a powerful and versatile model optimized for unrestricted interactions. It is designed for creators, researchers, and AI enthusiasts seeking innovative, boundary-pushing capabilities.
| Quantization | Bits per Weight | Ideal For | Link |
|---|---|---|---|
| Q8_0 | 8-bit | Best accuracy and performance | model-Q8_0.gguf |
| Q6_K | 6-bit | Strong accuracy and fast inference | model-Q6_K.gguf |
| Q5_K_M | 5-bit | Balance between accuracy and speed | model-Q5_K_M.gguf |
| Q3_K_M | 3-bit | Low memory usage, good performance | model-Q3_K_M.gguf |
| TQ2_0 | 2-bit (Tiny) | Maximum speed and minimal resources | model-TQ2_0.gguf |
| TQ1_0 | 1-bit (Tiny) | Minimal footprint and fastest inference | model-TQ1_0.gguf |
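For a rough sense of how large each quantized file will be, the bits-per-weight column above can be multiplied out against the 27B parameter count. This is an idealized estimate, not a measured size; real GGUF files run somewhat larger because K-quants mix precisions and the file carries metadata:

```python
# Rough GGUF size estimate: parameters * bits_per_weight / 8 bytes.
PARAMS = 27e9  # 27B parameters, from the model card above

def approx_size_gb(bits_per_weight: float) -> float:
    """Idealized file size in GB, ignoring metadata and mixed-precision layers."""
    return PARAMS * bits_per_weight / 8 / 1e9

for quant, bpw in [("Q8_0", 8), ("Q6_K", 6), ("Q5_K_M", 5),
                   ("Q3_K_M", 3), ("TQ2_0", 2), ("TQ1_0", 1)]:
    print(f"{quant}: ~{approx_size_gb(bpw):.1f} GB")
```

By this estimate Q8_0 lands around 27 GB while TQ1_0 is closer to 3.4 GB, which is the trade-off the "Ideal For" column describes.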
For the best accuracy, choose Q8_0 or Q6_K. Q5_K_M offers a balance between accuracy and speed. For constrained hardware, use Q3_K_M, TQ2_0, or TQ1_0.

To run the full-precision model with Transformers:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_name = "nidum/nidum-Gemma-3-27B-Instruct-Uncensored"

# Load the tokenizer and model; bfloat16 keeps the 27B weights manageable,
# and device_map="auto" spreads layers across available GPUs (requires the
# accelerate package).
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

prompt = "Tell me a futuristic story about space travel."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Generate up to 200 new tokens and decode the result
output = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
Unlock your creativity and innovation potential with Nidum Gemma-3-27B Instruct Uncensored. Experience the freedom to create, explore, and innovate without limits.