Nemo-Instruct-2407-MPOA-v3-12B

MPOA (Magnitude-Preserving Orthogonalized Ablation, a.k.a. norm-preserving biprojected abliteration) has been applied to layers 10-34 of this model, targeting both the mlp.down_proj.weight and self_attn.o_proj.weight streams.
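As a rough illustration of the core idea behind magnitude-preserving ablation (not the exact procedure used for this model), the sketch below projects a refusal direction out of a weight matrix and then rescales each column to restore its original norm. The function name, the use of NumPy, and the per-column norm convention are all illustrative assumptions.

```python
import numpy as np

def mpoa_ablate(W, d, eps=1e-8):
    """Illustrative sketch: remove the component of W along direction d,
    then rescale columns so their magnitudes are preserved.

    W: (d_out, d_in) weight matrix
    d: (d_out,) refusal direction in the output space (need not be unit-length)
    """
    d = d / np.linalg.norm(d)                # normalize the ablation direction
    orig_norms = np.linalg.norm(W, axis=0)   # per-column magnitudes to preserve
    # Orthogonalize: subtract the projection of each column onto d
    W_abl = W - np.outer(d, d @ W)
    # Restore per-column magnitudes (the "magnitude-preserving" step)
    new_norms = np.linalg.norm(W_abl, axis=0)
    W_abl = W_abl * (orig_norms / np.maximum(new_norms, eps))
    return W_abl
```

Because rescaling only multiplies each column by a scalar, the result stays orthogonal to the ablated direction while keeping the original column magnitudes, which is what distinguishes this from plain abliteration.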

Compliance was not maximized for this model. The model appears to sit near an edge of chaos with regard to some safety refusals, which should make it suitable for varied text completion.

The harmless/baseline set contained Chinese and English prompts. The harmful/contrast set contained Chinese, English, and French prompts. English text generation remains coherent.

More details pending.

Model size: 12B params (BF16, Safetensors)