Gemma-3-27B-Nidum-Uncensored

All-around Model


Performance Metrics

| Metric | Value |
|---|---|
| Avg. Total Time | 17.95 s |
| Avg. TTFT (time to first token) | 8.77 s |
| Avg. Prefill TPS (tokens/s) | 1008.22 |
| Avg. Gen TPS (tokens/s) | 17.04 |
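For context, these averages hang together arithmetically. Assuming Avg. Gen TPS is measured over the decode phase only (everything after the first token), a quick sanity check recovers the implied output length per request:

```python
# Back-of-the-envelope check on the averages above.
# Assumption: Gen TPS covers the decode phase only (after the first token).
avg_total_time = 17.95  # seconds per request
avg_ttft = 8.77         # seconds until the first token (prefill)
avg_gen_tps = 17.04     # generated tokens per second during decode

decode_time = avg_total_time - avg_ttft    # ~9.18 s spent decoding
approx_tokens = avg_gen_tps * decode_time  # ~156 tokens per request
print(f"~{approx_tokens:.0f} generated tokens per average request")
```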

Model Information

| Field | Value |
|---|---|
| Context Size | 32768 |
| Quantization | r64 |
| Engine | aphrodite |
| Creation Method | FFT |
| Model Type | Gemma27B |
| Chat Template | Gemma 2 |
| Reasoning | No |
| Vision | Yes |
| Parameters | 27B |
| Added At | 8/8/2025 |
| License | gemma |
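Since this deployment runs on the aphrodite engine, which serves an OpenAI-compatible API (much like vLLM), a minimal client sketch might look like the following. The endpoint URL, port, and API key are assumptions; adjust them to match your server:

```python
# Minimal client sketch for an OpenAI-compatible aphrodite endpoint.
# The base_url, api_key, and model id below are placeholders for your setup.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:2242/v1",  # assumed local aphrodite endpoint
    api_key="not-needed-locally",         # placeholder; match your server config
)

response = client.chat.completions.create(
    model="nidum/nidum-Gemma-3-27B-Instruct-Uncensored",
    messages=[{"role": "user", "content": "Write a short space-opera synopsis."}],
    max_tokens=200,
)
print(response.choices[0].message.content)
```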

🚀 Nidum Gemma-3-27B Instruct Uncensored

Welcome to Nidum's Gemma-3-27B Instruct Uncensored, a powerful and versatile model optimized for unrestricted interactions. It is designed for creators, researchers, and AI enthusiasts who want innovative, boundary-pushing capabilities.

✨ Why Nidum Gemma-3-27B Instruct Uncensored?

  • Uncensored Interaction: Generate content freely without artificial restrictions.
  • High Intelligence: Exceptional reasoning and comprehensive conversational capabilities.
  • Versatile Applications: Perfect for creative writing, educational interactions, research projects, virtual assistance, and more.
  • Open and Innovative: Tailored for users who appreciate limitless creativity.

🚀 Available GGUF Quantized Models

| Quantization | Bits per Weight | Ideal For | Link |
|---|---|---|---|
| Q8_0 | 8-bit | Best accuracy and performance | model-Q8_0.gguf |
| Q6_K | 6-bit | Strong accuracy and fast inference | model-Q6_K.gguf |
| Q5_K_M | 5-bit | Balance between accuracy and speed | model-Q5_K_M.gguf |
| Q3_K_M | 3-bit | Low memory usage, good performance | model-Q3_K_M.gguf |
| TQ2_0 | 2-bit (Tiny) | Maximum speed and minimal resources | model-TQ2_0.gguf |
| TQ1_0 | 1-bit (Tiny) | Minimal footprint and fastest inference | model-TQ1_0.gguf |

🎯 Recommended Quantization

  • Best accuracy: Use Q8_0 or Q6_K.
  • Balanced performance: Use Q5_K_M.
  • Small footprint (mobile/edge): Choose Q3_K_M, TQ2_0, or TQ1_0.
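To run one of these GGUF files locally, llama-cpp-python is a common choice. A minimal sketch, assuming you have downloaded the Q5_K_M quant and the placeholder path below points at it:

```python
# Minimal local-inference sketch with llama-cpp-python
# (pip install llama-cpp-python). The model path is a placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="./model-Q5_K_M.gguf",  # placeholder: path to the downloaded quant
    n_ctx=32768,                       # context size listed on this card
    n_gpu_layers=-1,                   # offload all layers to the GPU if available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Tell me a futuristic story about space travel."}],
    max_tokens=200,
)
print(out["choices"][0]["message"]["content"])
```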

🚀 Example Usage (Original Model)

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_name = "nidum/nidum-Gemma-3-27B-Instruct-Uncensored"

tokenizer = AutoTokenizer.from_pretrained(model_name)
# A 27B model needs roughly 54 GB of weights in bf16; device_map="auto"
# spreads them across the available devices.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

prompt = "Tell me a futuristic story about space travel."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200)  # cap new tokens, not total length
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
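Because the card lists a Gemma 2-style chat template, instruction prompts are best routed through the tokenizer's chat template rather than passed as raw strings. A sketch of the same generation with the template applied, reusing the tokenizer and model loaded above:

```python
# Same generation, but formatted through the model's chat template
# (the card lists a Gemma 2-style template).
messages = [{"role": "user", "content": "Tell me a futuristic story about space travel."}]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,  # append the assistant-turn marker
    return_tensors="pt",
).to(model.device)

output = model.generate(input_ids, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```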

🚀 Your AI, Your Way

Unlock your creativity and innovation potential with Nidum Gemma-3-27B Instruct Uncensored. Experience the freedom to create, explore, and innovate without limits.