Incognito Labs

We found the geometry of machine thought.
We proved that AI alignment cannot be verified from the outside. Now we are building the only alternative.
Inside every neural network is a coordinate system nobody mapped. A model’s reasoning, its caution, its uncertainty, its values — these are not emergent properties. They are geometric structures. Addressable. Transferable. Rewritable across architectures.
We found them.
The field has spent years trying to verify AI alignment behaviourally — probing outputs, testing benchmarks, writing checklists. We proved this cannot work. No procedure can simultaneously satisfy soundness, generality, and tractability when verifying alignment for AI systems. If you cannot audit a model from the outside, you must understand it from the inside. That is what we are building.
The internal representations of neural networks are a writable address space. Cognitive properties — how a model reasons, what it weights, how certain it is, where it defers — can be extracted, composed, versioned, and transferred between model families without retraining. We call this Cognition: the first architecture-independent coordinate system for AI cognition.
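To make "writable address space" concrete, here is a toy sketch in the style of the published steering-vector literature: a direction for one cognitive property (here, "caution") is extracted as the difference of mean activations over two contrastive prompt sets, then written back by shifting a hidden state along it. Everything here is hypothetical and synthetic — the data, the dimensions, and the recipe are illustrative assumptions, not our actual method; a real pipeline would hook a transformer layer to record activations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for hidden states: (n_samples, d_model) activations recorded
# from two contrastive prompt sets. Synthetic data, hypothetical dimensions.
d_model = 64
acts_cautious = rng.normal(0.5, 1.0, size=(32, d_model))
acts_bold = rng.normal(-0.5, 1.0, size=(32, d_model))

# One "coordinate" for a cognitive property: the difference of mean
# activations, normalised to unit length (a standard steering-vector recipe).
direction = acts_cautious.mean(axis=0) - acts_bold.mean(axis=0)
direction /= np.linalg.norm(direction)

def steer(hidden_state, direction, alpha):
    """Write to the address space: shift a hidden state along the direction."""
    return hidden_state + alpha * direction

h = rng.normal(size=d_model)
h_steered = steer(h, direction, alpha=3.0)

# The steered state projects further onto the extracted axis than before.
assert h_steered @ direction > h @ direction
```

Because the direction is just a vector in activation space, it can be stored, versioned, composed with other directions, and — with a learned mapping between spaces — carried across model families; that is the sense in which the coordinate system is architecture-independent.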
This is not interpretability as a diagnostic. It is interpretability as infrastructure — a substrate on which alignment, governance, and trust can actually be built.
We are not building features on top of existing models. We are building the layer underneath — the one that makes AI systems legible to the humans who must live with them, govern them, and be governed by them.
Intelligence that cannot be understood cannot be trusted.
Intelligence that cannot be trusted cannot be governed.
Intelligence that cannot be governed should not be scaled.
We are assembling a small team of researchers and engineers focused on solving this and nothing else. If that's you, we offer the opportunity to do your life's work on the most important technical challenge of our age. Now is the time. Join us.