MiniMax-M2.1-PRISM (UNCENSORED)

**MiniMax-M2.1 Uncensored PRISM Advanced Abliteration**


πŸ’œ Sponsor & Support the Work

Every contribution directly funds my time & resources for the next major SOTA release.

Interested in Sponsoring?

If you're a company, research lab, or individual who wants to see specific models abliterated, or to support this research at scale, I'd love to hear from you.

Sponsorship opportunities include:

  • Priority abliteration of models
  • Custom PRISM use-case configurations
  • Early access to new releases
  • Your logo/credit on model cards

πŸ“§ Reach out: Open a discussion on this repo or connect via Ko-fi


"Freedom of information isn't free β€” but together, we can make it accessible to all."

Thank you for believing in truly open AI.


Model Description

MiniMax-M2.1-PRISM is the fully uncensored version of MiniMax-M2.1, produced with our state-of-the-art PRISM pipeline (Projected Refusal Isolation via Subspace Modification) to remove refusal behaviors while preserving, and in places enhancing, the model's full capabilities.

Base Model: MiniMax-M2.1

MiniMax-M2.1 is an open-source agentic language model designed for robust performance in:

  • Coding and software engineering
  • Tool use and multi-step reasoning
  • Instruction following
  • Long-horizon planning
  • Multilingual capabilities

Architecture: 229B parameters, 62 layers, 256 experts (8 active per token)
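
For intuition on the routing figures above: only 8 of the 256 experts fire for each token, i.e. roughly 3% of the expert pool. A quick back-of-the-envelope check (the dense vs. expert parameter split isn't stated here, so this deliberately stops short of an active-parameter count):

```python
# Back-of-the-envelope routing ratio from the architecture figures above.
# The dense vs. expert parameter split is not stated, so no active-parameter
# count is derived here.
total_experts = 256
active_experts = 8

ratio = active_experts / total_experts
print(f"Experts active per token: {active_experts}/{total_experts} = {ratio:.2%}")
# -> Experts active per token: 8/256 = 3.12%
```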


PRISM Methodology

Method: Projected Refusal Isolation via Subspace Modification

This model was abliterated with PRISM, a state-of-the-art abliteration methodology that combines multiple principled techniques to remove refusals effectively while preserving and enhancing model capabilities.
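
PRISM's exact pipeline is not published on this card. For orientation only, the sketch below illustrates the general directional-ablation family such methods build on: estimate a "refusal direction" by contrasting mean activations on refused vs. answered prompts, then project that direction out of weight matrices that write into the residual stream. All names, shapes, and the single-direction simplification are assumptions.

```python
# Illustrative sketch of directional ablation, the general family PRISM builds
# on. PRISM's actual pipeline is unpublished; names and shapes are assumptions.
import torch

def refusal_direction(harmful_acts: torch.Tensor,
                      harmless_acts: torch.Tensor) -> torch.Tensor:
    """Estimate a 'refusal direction' as the normalized difference of mean
    hidden states collected at one layer on harmful vs. harmless prompts."""
    direction = harmful_acts.mean(dim=0) - harmless_acts.mean(dim=0)
    return direction / direction.norm()

def ablate_direction(weight: torch.Tensor, direction: torch.Tensor) -> torch.Tensor:
    """Remove the component of the layer's output along `direction`, so the
    layer can no longer write onto the refusal subspace: W <- W - d d^T W."""
    d = direction.to(weight.dtype)
    return weight - torch.outer(d, d) @ weight
```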


Performance Benchmarks

Base Model Performance

| Benchmark              | Score |
|------------------------|-------|
| SWE-bench Verified     | 74.0  |
| SWE-bench Multilingual | 72.5  |
| VIBE Average           | 88.6  |
| MMLU-Pro               | 88.0  |
| GPQA-D                 | 83.0  |
| AIME25                 | 83.0  |

PRISM Abliteration Results

| Metric                              | Result                             |
|-------------------------------------|------------------------------------|
| Adversarial bench prompts responded | 4096/4096 (100%)                   |
| Benign + long-chain coherence       | 100%                               |
| Response quality                    | Full technical accuracy validated  |

Our testing shows that PRISM abliteration maintains full model coherence with no capability degradation and MMLU increases of 5-8%.
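
The adversarial benchmark behind the 4096/4096 figure isn't specified on this card. Purely as an illustration of how a "prompts responded" rate can be scored, here is a minimal refusal-counting sketch; the marker phrases and the head-of-response heuristic are assumptions.

```python
# Rough sketch of a refusal-rate check. The actual adversarial benchmark used
# above is not published; the phrase list below is an assumption.
REFUSAL_MARKERS = (
    "i can't help with",
    "i cannot assist",
    "i'm sorry, but",
    "as an ai",
)

def is_refusal(response: str) -> bool:
    """Crude heuristic: flag a response whose opening matches a refusal phrase."""
    head = response.strip().lower()[:120]
    return any(marker in head for marker in REFUSAL_MARKERS)

def responded_rate(responses: list[str]) -> float:
    """Fraction of responses that did NOT refuse."""
    answered = sum(not is_refusal(r) for r in responses)
    return answered / len(responses)
```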


Available Formats (contact for full tensors or additional quant work)

| Format             | Size    | Description                      |
|--------------------|---------|----------------------------------|
| GGUF IQ1_S         | ~43 GB  | Quantized with importance matrix |
| Safetensors (BF16) | ~426 GB | Full precision, 92 shards        |

Recommended Inference Parameters

```
temperature = 1.0
top_p = 0.95
top_k = 40
```

Default System Prompt

```
You are a helpful assistant.
```
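
The sketch below shows one way to apply these sampling parameters and the default system prompt through an OpenAI-compatible endpoint, which SGLang, vLLM, and llama.cpp servers all expose. The base URL, port, and served model name are assumptions.

```python
# Minimal sketch: applying the recommended sampling parameters through an
# OpenAI-compatible endpoint. base_url, port, and model name are assumptions.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")
response = client.chat.completions.create(
    model="MiniMax-M2.1-PRISM",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain mixture-of-experts routing briefly."},
    ],
    temperature=1.0,
    top_p=0.95,
    # top_k is not part of the OpenAI schema, so it rides in extra_body;
    # vLLM, SGLang, and llama.cpp servers all accept it there.
    extra_body={"top_k": 40},
)
print(response.choices[0].message.content)
```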

Recommended Inference Frameworks

  1. SGLang (recommended for full precision)
  2. vLLM (recommended for full precision; see the Python sketch after this list)
  3. llama.cpp (recommended for GGUF quantized)
  4. Transformers
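
For full-precision serving, here is a minimal offline-inference sketch with vLLM's Python API. The repo id and tensor_parallel_size are assumptions; the BF16 weights are ~426 GB, so size the node accordingly.

```python
# Minimal sketch: offline inference with vLLM's Python API. The repo id and
# tensor_parallel_size below are assumptions; adjust to your hardware.
from vllm import LLM, SamplingParams

llm = LLM(
    model="Ex0bit/MiniMax-M2.1-PRISM",
    tensor_parallel_size=8,
    trust_remote_code=True,
)
# Use the recommended sampling parameters from this card.
params = SamplingParams(temperature=1.0, top_p=0.95, top_k=40)
outputs = llm.generate(["Explain mixture-of-experts routing briefly."], params)
print(outputs[0].outputs[0].text)
```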

llama.cpp Example

```bash
./llama-cli -m MiniMax-M2.1-PRISM-IQ1_S.gguf -ngl 99 -cnv \
  --temp 1.0 --top-p 0.95 --top-k 40 --ctx-size 4096
```

Ethical Considerations

This model has been modified to reduce safety guardrails. Users are responsible for:

  • Complying with all applicable laws and regulations
  • Not using the model for illegal activities
  • Understanding the potential risks of unrestricted AI responses
  • Implementing appropriate safeguards in production environments

Motivation: This project is research into how large language models encode and enforce refusal behaviors. It contributes to broader AI safety research by providing empirical data on where refusal mechanisms are localized and on the tradeoffs between safety and capability.


License

This model inherits the Modified-MIT License from the base MiniMax-M2.1 model.


Credits

  • Base Model: MiniMax-M2.1 by MiniMax AI
  • PRISM Abliteration: Ex0bit
  • Quantization: llama.cpp with an Unsloth importance matrix (imatrix)

Support

If you find this work useful, please consider supporting development so I can continue putting out the best models for the community:

Support me on Ko-fi


Contact

For questions or issues, please open an issue on this repository.
