The order of work is explicit: practice, failure, pattern, stress testing, retrospective grounding, writing. The framework was extracted from the work, not applied to it. Existing intellectual traditions were used retrospectively, to test whether observed patterns were idiosyncratic or recurrent, to identify where similar insights had already been articulated, and to locate counterarguments, limits, and boundary conditions.
Evidence in this body of work includes repeated failure modes under comparable pressure, structural invariants across domains, observable effects of specific interventions, negative cases where the framework fails, and portability of capability across tools and contexts. Single success stories are treated as illustration, not as evidence. Intent is treated as context, not as evidence.
Each substantive claim is assessed against five questions. What kind of claim is this, and what evidence is appropriate to it? Does the claim over-generalise across regions, cultures, or timeframes? Does it smuggle in Western defaults, technological determinism, prestige assumptions, or moralised productivity narratives? Was this once true but now outdated? Does the claim reflect how actors inside a given ecosystem actually understand value, risk, and meaning, or is it projection? Claims that fail these checks are rewritten, narrowed, or removed.
Falsifiability
This work would be undermined by sustained evidence showing that responsibility remains stable under acceleration; that clarity reliably increases with speed and tooling; that authority and legitimacy transfer cleanly across contexts; that AI systems reduce ambiguity rather than amplify it; or that agency erosion becomes immediately visible rather than surfacing with delay. Readers operating in environments where these conditions do not hold should use the framework selectively or not at all.