The 3-Layer Hack That Triples LLM Reasoning (Without Retraining)
Discovering Hidden "Thinking Circuits" in Transformer Models
A new open-source discovery reveals that transformer models contain discrete "reasoning circuits": duplicating just 3 specific layers boosts logical-reasoning scores from 0.22 to 0.76 on standard benchmarks. No retraining. No weight changes. Just routing hidden states through duplicated layers.
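To make the idea concrete, here is a minimal sketch of what "duplicating layers without retraining" can look like, assuming a GPT-2-style model from Hugging Face Transformers. The model choice, the layer indices (4, 5, 6), and the prompt are placeholder assumptions for illustration, not the specific circuit locations or routing method the article describes.

```python
# Minimal sketch: route hidden states through selected transformer blocks
# twice by inserting the *same* module objects into the layer stack again.
# No weights are copied, changed, or retrained.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

model = GPT2LMHeadModel.from_pretrained("gpt2")
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")

DUPLICATE = {4, 5, 6}  # hypothetical indices, not the article's actual layers

expanded = []
for i, block in enumerate(model.transformer.h):
    expanded.append(block)
    if i in DUPLICATE:
        expanded.append(block)  # same object: shared weights, applied twice
model.transformer.h = torch.nn.ModuleList(expanded)

prompt = "If all birds fly and Tweety is a bird, then Tweety"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    # use_cache=False avoids KV-cache bookkeeping that assumes unique layers
    logits = model(**inputs, use_cache=False).logits
print(tokenizer.decode(logits[0, -1].argmax().item()))
```

Because the duplicated entries point at the identical module objects, the checkpoint on disk and the parameter tensors in memory are untouched; the only change is that the forward pass visits those blocks twice.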