The Standard Postulates
Conventional quantum mechanics rests on postulates that are mathematically precise but physically mysterious: states are vectors in a Hilbert space, observables are self-adjoint operators, and measurement collapses the state onto an eigenstate, yielding the corresponding eigenvalue with Born-rule probability |⟨φ|ψ⟩|². The postulates work to extraordinary experimental precision, but their origin is left unexplained: why this mathematical structure, why complex Hilbert spaces, why the Born rule?
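The standard postulates can be stated in a few lines of linear algebra. A minimal numerical illustration, using an arbitrary two-level system and the Pauli-Z observable (both chosen here only for concreteness, not taken from the GPP framework):

```python
import numpy as np

# A state is a unit vector in a complex Hilbert space (here C^2).
psi = np.array([1.0, 1.0j]) / np.sqrt(2)

# An observable is a self-adjoint operator; here the Pauli-Z matrix.
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
assert np.allclose(Z, Z.conj().T)  # self-adjointness

# Measurement outcomes are eigenvalues; probabilities follow the Born rule.
eigvals, eigvecs = np.linalg.eigh(Z)
probs = np.abs(eigvecs.conj().T @ psi) ** 2   # P(phi|psi) = |<phi|psi>|^2

print(dict(zip(eigvals, probs)))  # each outcome has probability 1/2
print(probs.sum())                # probabilities sum to 1
```

For this equal-weight superposition both outcomes ±1 occur with probability ½, and the probabilities sum to 1 for any state and any observable.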
The GPP framework derives these structures from geometry rather than postulating them.
Complex Hilbert Space from Cayley–Dickson
The first Cayley–Dickson doubling ℝ → ℂ introduces the complex unit i. The space L²(ℝ⁺, dω/ω), built on the Haar measure of the multiplicative group (ℝ⁺, ×), is the natural Hilbert space of quantum mechanics: its Fourier theory is the Mellin transform, and the principal series characters ω^{iν} of (ℝ⁺, ×) are precisely its generalized L² eigenstates. The imaginary unit i is not a choice; it is forced by the first algebraic doubling.
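The defining property of the Haar measure dω/ω is invariance under multiplicative shifts ω → aω. A quick numerical check of this, using the substitution t = log ω (which turns dω/ω into ordinary Lebesgue measure dt); the Gaussian test function is an arbitrary choice:

```python
import numpy as np

# Haar measure on (R+, x) is dw/w: substituting t = log(w) turns it into
# Lebesgue measure dt, and a rescaling w -> a*w into the shift t -> t + log(a).
t = np.linspace(-20, 20, 400001)
dt = t[1] - t[0]

def haar_integral(f, scale=1.0):
    # integral over R+ of f(scale * w) dw/w, computed in the variable t = log(w)
    return np.sum(f(scale * np.exp(t))) * dt

f = lambda w: np.exp(-np.log(w) ** 2)   # arbitrary test function on R+
I1 = haar_integral(f, scale=1.0)
I2 = haar_integral(f, scale=7.3)        # arbitrary rescaling
print(I1, I2)   # equal: the Haar measure is scale invariant
```

Both integrals equal √π to numerical precision regardless of the rescaling, which is exactly the invariance that makes (ℝ⁺, ×) with dω/ω the right stage for the Mellin transform.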
Unitarity from Haar Self-Duality
Unitarity of the S-matrix, the conservation of probability under time evolution, is the statement that the time-evolution operator U(t) preserves the Hilbert-space inner product. In the GPP framework, unitarity is a consequence of Haar self-duality: the involution ω ↦ ω⁻¹ preserves dω/ω, so the shadow transform preserves the L² norm. Physical states must lie on the principal series Re(Δ) = 1, the unique locus invariant under the shadow involution. This is not a postulate; it is the condition that the Haar measure be preserved.
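The key step, that ω ↦ ω⁻¹ preserves the L² norm with respect to dω/ω, can be checked directly. In the variable t = log ω the involution is the reflection t ↦ −t, which manifestly preserves dt. A sketch, with an arbitrary (deliberately asymmetric) test state:

```python
import numpy as np

# The involution w -> 1/w becomes t -> -t in the variable t = log(w),
# which manifestly preserves Lebesgue measure dt, i.e. the Haar measure dw/w.
t = np.linspace(-20, 20, 400001)
dt = t[1] - t[0]
w = np.exp(t)

# Arbitrary asymmetric test state on R+ (no symmetry under w -> 1/w).
f = lambda w: np.exp(-(np.log(w) - 1.0) ** 2) * np.cos(np.log(w))

norm_sq = np.sum(np.abs(f(w)) ** 2) * dt           # ||f||^2 w.r.t. dw/w
norm_sq_shadow = np.sum(np.abs(f(1.0 / w)) ** 2) * dt
print(norm_sq, norm_sq_shadow)   # equal: the involution acts unitarily
```

The two norms agree to machine precision even though f itself is not symmetric under the involution: it is the measure, not the state, that carries the invariance.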
The Born Rule
The Born rule P(φ|ψ) = |⟨φ|ψ⟩|² follows from Gleason's theorem (1957): any countably additive probability measure on the closed subspaces of a separable Hilbert space of dimension ≥ 3 arises from a density operator via the trace rule, which reduces to |⟨φ|ψ⟩|² for pure states. The GPP framework recovers this by noting that the Haar measure on the celestial sphere induces a natural probability measure on the space of conformal primary operators, and that this measure is the Born rule when restricted to the Hilbert space of physical states (those on the principal series).
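The content of Gleason's theorem is that the only consistent probability assignment is the trace rule, which is additive over orthogonal subspaces and normalized over any complete measurement. A minimal check in dimension 3 (the lowest dimension where the theorem applies), with a randomly chosen state and basis:

```python
import numpy as np

rng = np.random.default_rng(0)

# A random pure state in C^3 (Gleason's theorem needs dimension >= 3).
v = rng.normal(size=3) + 1j * rng.normal(size=3)
psi = v / np.linalg.norm(v)
rho = np.outer(psi, psi.conj())   # density operator of the pure state

# A random orthonormal basis, i.e. a complete family of orthogonal projectors.
q, _ = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))

# Trace rule: P(phi) = Tr(rho P_phi) = |<phi|psi>|^2 for a pure state.
probs = np.array([abs(q[:, k].conj() @ psi) ** 2 for k in range(3)])
print(probs.sum())   # 1.0 for every choice of complete basis
```

The probabilities are non-negative and sum to 1 no matter which orthonormal basis is drawn, which is the frame-function property at the heart of Gleason's argument.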
The Measurement Problem and the T Boundary
The measurement problem (why does quantum superposition give way to definite classical outcomes?) is addressed through the T-boundary structure. A measurement interaction entangles the system with a macroscopic apparatus. The apparatus, being massive, oscillates between M⁴+ and M⁴− at its Compton frequency, which is extremely high: ω ∼ Mc²/ħ ≈ 10⁴⁸ s⁻¹ for a gram-scale device. The decoherence time τ_D ∼ ħλ_C²/(Mc²Δx²) is the timescale on which the oscillation-averaged interference terms vanish. Classical outcomes are the T-averaged states; quantum superposition is the pre-averaging coherence.
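An order-of-magnitude check of these two scales for a gram-scale apparatus. The pointer displacement Δx = 1 μm is an assumed value chosen only for illustration; the conclusion is insensitive to it over many orders of magnitude:

```python
# Order-of-magnitude estimate of the Compton frequency and decoherence time
# for a gram-scale apparatus. dx = 1 micron is an assumed pointer
# displacement, chosen only for illustration.
hbar = 1.054_571_8e-34   # J s
c = 2.997_924_58e8       # m/s
M = 1e-3                 # kg (gram-scale device)
dx = 1e-6                # m (assumed pointer displacement)

omega = M * c**2 / hbar                  # Compton angular frequency, ~1e48 1/s
lam_C = hbar / (M * c)                   # Compton wavelength of the apparatus
tau_D = hbar * lam_C**2 / (M * c**2 * dx**2)   # tau_D ~ hbar lam_C^2 / (M c^2 dx^2)

print(f"omega ~ {omega:.1e} 1/s")
print(f"tau_D ~ {tau_D:.1e} s")   # fantastically short: coherence is unobservable
```

The Compton wavelength of a gram-scale mass is ~10⁻⁴⁰ m, so λ_C²/Δx² is a staggeringly small suppression factor and τ_D comes out far below any observationally accessible timescale.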
This does not resolve the measurement problem in the strong sense of deriving a unique outcome from unitary evolution alone; that remains an open problem. It does explain why macroscopic objects appear classical: their Compton frequency is so high that T-boundary oscillations average out all quantum coherence on any observationally accessible timescale.
Why Complex Numbers
The famous question — why does quantum mechanics use complex numbers rather than reals or quaternions? — has a precise answer in the GPP framework.
| Algebra | Doubling | Physical Role |
|---|---|---|
| ℝ | 0th (base) | Classical observables, Haar measure on (ℝ+, ×) |
| ℂ | 1st | Quantum amplitudes, Hilbert space, U(1) gauge symmetry |
| ℍ | 2nd | Spinors, SU(2) weak symmetry, spin-½ |
| 𝕆 | 3rd | Colour SU(3), three generations, Gr(2,4) |
Hurwitz's theorem stops the tower at the octonions: the 16-dimensional sedenions contain zero divisors and fail to be a normed division algebra. Three doublings. Three gauge factors. Three generations. This is not a coincidence; it is the algebraic structure of spacetime.
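The doubling itself is short enough to implement. A sketch of the Cayley–Dickson construction, using the standard convention (a, b)(c, d) = (ac − d̄b, da + bc̄); the random test elements are arbitrary, and the failure of norm multiplicativity at dimension 16 is generic rather than tied to any particular pair:

```python
import numpy as np

def conj(x):
    # Cayley-Dickson conjugation: negate every imaginary component.
    return np.concatenate(([x[0]], -x[1:]))

def mult(x, y):
    # One doubling step: (a,b)(c,d) = (ac - conj(d) b, da + b conj(c)).
    n = len(x)
    if n == 1:
        return x * y
    a, b = x[:n // 2], x[n // 2:]
    c, d = y[:n // 2], y[n // 2:]
    return np.concatenate((mult(a, c) - mult(conj(d), b),
                           mult(d, a) + mult(b, conj(c))))

norm = np.linalg.norm
rng = np.random.default_rng(1)

# Norm multiplicativity |xy| = |x||y| holds through the octonions (dim 8)...
for dim in (1, 2, 4, 8):
    x, y = rng.normal(size=dim), rng.normal(size=dim)
    assert abs(norm(mult(x, y)) - norm(x) * norm(y)) < 1e-9

# ...but fails for the sedenions (dim 16): Hurwitz's theorem stops the tower.
x, y = rng.normal(size=16), rng.normal(size=16)
print(norm(mult(x, y)), norm(x) * norm(y))   # generically unequal
```

Running this confirms the composition-algebra property |xy| = |x||y| at dimensions 1, 2, 4, and 8 and its breakdown at the first doubling beyond the octonions, which is the algebraic fact the table encodes.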