wtf man

#2
by ItzPingCat - opened

seems very smart at most things...shits itself at function calling and hallucinates tools and tool outputs

[screenshot]
thinking trace looks fine...output not good

[screenshots]
base gpt-oss gets the correct answer, which is that web search is not available

> seems very smart at most things...shits itself at function calling and hallucinates tools and tool outputs

Same thing happens in Talemate, which is an agentic RP platform. After a lot of tweaking I got it down to 1 in 4 tool/agent calls going fully off the rails. Very unreliable for any agentic system / tool calling. In my experience this is true for all versions of GPT-OSS I've tried, except for the base version. Maybe there is a magic set of presets out there for this model, but I haven't found them yet.

But why doesn’t it work?

I'm no expert at this, but from my understanding it has something to do with the fine-tuning process. The process of ablation/derestriction is, right now, not a tiny surgical incision that removes or tweaks alignment. It's more akin to a lobotomy: unrestricted, but also less intelligent, or in this case less skilful. Depending on their use case, some people might never really notice it.

From a theoretical standpoint:

With some models, preserving weight norms/magnitudes is a good thing; with others, not so much. It's probably a good idea for MoE models, but the devil's in the details.

Projecting the harmless-direction component out of the refusal direction before orthogonalizing the weights against it reduces damage, by minimizing change along the harmless direction itself and near it. That's how that (sub)technique retains more baseline performance.
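
For concreteness, here's a minimal PyTorch sketch of that kind of directional ablation: make the refusal direction orthogonal to the harmless direction first, then remove it from a weight matrix, with an optional norm-preserving rescale. The function and argument names are illustrative rather than taken from any particular abliteration script, and a real pipeline would apply this per layer to the matrices that write into the residual stream.

```python
import torch

def ablate_refusal_direction(W: torch.Tensor,
                             refusal_dir: torch.Tensor,
                             harmless_dir: torch.Tensor | None = None,
                             preserve_norms: bool = False) -> torch.Tensor:
    """Orthogonalize W (d_model x d_in, nn.Linear convention) against a refusal direction.

    refusal_dir and harmless_dir are vectors in the d_model-dimensional
    residual-stream space, e.g. mean-activation differences between prompt sets.
    """
    r = refusal_dir / refusal_dir.norm()
    if harmless_dir is not None:
        h = harmless_dir / harmless_dir.norm()
        # Remove the harmless component from the refusal direction, so the
        # weight edit moves nothing along the harmless axis itself.
        r = r - (r @ h) * h
        r = r / r.norm()
    # Remove the component of every column of W that lies along r: W' = W - r (r^T W)
    W_ablated = W - torch.outer(r, r @ W)
    if preserve_norms:
        # One simple reading of "preserving magnitudes": rescale each column
        # back to its original norm. Rescaling does not reintroduce the r component.
        old_norms = W.norm(dim=0, keepdim=True)
        new_norms = W_ablated.norm(dim=0, keepdim=True).clamp_min(1e-8)
        W_ablated = W_ablated * (old_norms / new_norms)
    return W_ablated
```

Whether `preserve_norms` helps is exactly where the devil lives: it keeps per-column magnitudes intact, but it also changes the edit you actually end up applying, and whether that trade-off pays off seems to vary by model (the MoE point above).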

This model is heavily quantized in its floating-point representation, so it's possible that requantizing post-abliteration could induce additional quantization damage, even if intermediate calculations are performed in 32-bit floating point.

Isn't GPT-OSS natively MXFP4? So if you take the MXFP4 weights up to fp32, perform the ablation, and then downscale back to MXFP4, why should there be a major loss in quality?
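
To make that round trip concrete, here's a toy simulation. It uses a crude block-scaled symmetric 4-bit quantizer as a stand-in for MXFP4 (which is a block-scaled 4-bit floating-point format, so the numbers here are only illustrative) and a random unit vector as a fake refusal direction. The point it illustrates: the fp32 edit moves weights off the quantization grid, so the requantized matrix is not the matrix you computed; part of the edit gets rounded away and other weights pick up fresh rounding error.

```python
import torch

def fake_quant_4bit(W: torch.Tensor, block: int = 32) -> torch.Tensor:
    """Crude stand-in for a block-scaled 4-bit format (NOT real MXFP4):
    symmetric rounding to 15 levels with one scale per block of `block` weights."""
    flat = W.reshape(-1, block)
    scale = (flat.abs().amax(dim=1, keepdim=True) / 7.0).clamp_min(1e-12)
    q = torch.clamp(torch.round(flat / scale), -7, 7)
    return (q * scale).reshape(W.shape)

torch.manual_seed(0)
d = 512
W_fp32 = torch.randn(d, d)           # pretend full-precision weights
W_shipped = fake_quant_4bit(W_fp32)  # what the quantized release contains
r = torch.randn(d)
r = r / r.norm()                     # toy "refusal" direction

# Abliteration workflow: upcast to fp32, apply the edit, quantize again.
W_edited = W_shipped - torch.outer(r, r @ W_shipped)
W_requant = fake_quant_4bit(W_edited)

# Requantizing untouched weights is (nearly) lossless: they already sit on the grid.
print((fake_quant_4bit(W_shipped) - W_shipped).abs().max().item())

# The edited weights do not, so the final matrix differs from the one you computed.
print("intended edit size:        ", (W_edited - W_shipped).norm().item())
print("edit surviving requant:    ", (W_requant - W_shipped).norm().item())
print("deviation from intended W: ", (W_requant - W_edited).norm().item())
```

That last deviation is roughly the extra damage being described above; keeping it small is what a quant-aware procedure would aim for.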

I would venture that training of GPT-OSS was quant-aware, to reduce loss due to downscaling. Ablation is currently not.

@grimjim how do you propose quant-aware ablation?

This post from Nvidia lays out a possible method, except we've ablated instead of performing SFT for step 2.
https://developer.nvidia.com/blog/fine-tuning-gpt-oss-for-accuracy-and-performance-with-quantization-aware-training/
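
For anyone wondering what "quant-aware" means mechanically, below is a minimal sketch of the core QAT trick (fake-quantize the weights in the forward pass and let gradients flow through the rounding via a straight-through estimator). It is not the recipe from the NVIDIA post; the 4-bit quantizer is the same crude MXFP4 stand-in as above, and the class name is made up for illustration.

```python
import torch
import torch.nn.functional as F

class FakeQuantLinear(torch.nn.Linear):
    """Linear layer that simulates 4-bit weight quantization during the forward
    pass while the optimizer keeps updating full-precision master weights."""

    def fake_quant(self, w: torch.Tensor, block: int = 32) -> torch.Tensor:
        flat = w.reshape(-1, block)  # assumes w.numel() is divisible by `block`
        scale = (flat.abs().amax(dim=1, keepdim=True) / 7.0).clamp_min(1e-12)
        deq = (torch.clamp(torch.round(flat / scale), -7, 7) * scale).reshape(w.shape)
        # Straight-through estimator: forward sees the quantized weights,
        # backward treats the rounding as identity.
        return w + (deq - w).detach()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return F.linear(x, self.fake_quant(self.weight), self.bias)

# Usage sketch: gradients reach the fp32 master weights even though the
# forward pass only ever sees 4-bit-rounded values.
layer = FakeQuantLinear(256, 256)
loss = layer(torch.randn(8, 256)).pow(2).mean()
loss.backward()
print(layer.weight.grad.shape)
```

In a QAT run you'd swap layers like this into the model and train, so the master weights settle onto values that survive the final export to MXFP4; the suggestion above amounts to slotting the ablation edit in where the SFT step would normally go.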

I eternally wait for a QAT version of this
