The Argument Restated
Five propositions carry the weight. Each produces predictions that can be tested and, where the evidence warrants, rejected.
1) The thermodynamic floor.
Computation consumes energy. Energy has a price. Computation therefore carries an irreducible marginal cost. Optimization can compress that cost but cannot eliminate it. The constraint is not merely theoretical; it ultimately manifests as competitive discipline. If a workload persistently yields less value per unit of electricity than an alternative use available to the operator, it will be selected against.
A simple test falls out of this logic: In settings where power can be monetized through a credible outside option, do compute workloads persistently operate below the implied floor without being displaced? If they do, the floor does not bind and the framework fails at its foundation.
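The test above reduces to a single comparison. A minimal sketch, with hypothetical names and numbers chosen purely for illustration:

```python
# Illustrative sketch of the thermodynamic-floor test. The function name and
# all figures are hypothetical; the point is the shape of the comparison.

def clears_floor(value_per_kwh: float, outside_option_per_kwh: float) -> bool:
    """A workload clears the floor when the value it yields per kWh is at
    least what the operator could earn from the best credible alternative
    use of the same electricity (the outside option)."""
    return value_per_kwh >= outside_option_per_kwh

# Example: a workload earning $0.18 per kWh of compute, where the operator
# could instead monetize power at $0.12/kWh (e.g., grid resale or another
# workload). The first clears the floor; the second is predicted to be
# displaced if the gap persists.
print(clears_floor(0.18, 0.12))  # True
print(clears_floor(0.08, 0.12))  # False
```

The framework's falsification condition is then: workloads for which this returns False, yet which persist at scale despite a credible outside option, are evidence against the floor.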
2) The verification gradient.
Tasks automate not in the order of cognitive difficulty but in the order of verification cost: what it takes to confirm correct performance at scale. Capability may be necessary for substitution, but the sequence is set by selection pressure, and selection pressure, at equilibrium, is priced by verification cost.
This prediction can be evaluated ordinally. Holding capability constant, high value-per-verification tasks cross the substitution threshold earlier because correctness can be measured and enforced cheaply. Low value-per-verification tasks remain gated by review, auditing, and liability even when model performance is high. If adoption proceeds primarily according to capability benchmarks rather than verification economics, this principle is wrong.
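The ordinal prediction can be stated as a sorting rule. The sketch below uses invented task names and figures; only the ordering logic is claimed:

```python
# Sketch of the verification-gradient ordering: tasks cross the substitution
# threshold in descending order of value per unit of verification cost, not
# in order of the capability required. All entries are hypothetical.

tasks = [
    # (task, value_of_output, cost_to_verify_correctness_at_scale)
    ("unit-testable code change", 100.0, 1.0),   # correctness cheap to check
    ("translation with reference", 50.0, 2.0),
    ("medical diagnosis", 500.0, 200.0),         # gated by review and liability
    ("legal brief", 300.0, 150.0),
]

# Predicted automation sequence: highest value-per-verification first.
order = sorted(tasks, key=lambda t: t[1] / t[2], reverse=True)
for name, value, verify_cost in order:
    print(f"{name}: {value / verify_cost:.1f} value per verification unit")
```

Note that the medical diagnosis carries the highest raw value yet sorts near the bottom: the gradient, not the capability or the stakes, sets the sequence.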
3) The migration of scarcity.
When cognition becomes abundant, scarcity does not disappear; it relocates. As generation costs fall, value migrates to constraints that code cannot dissolve: physical infrastructure with long lead times, verification franchises that confer permission to act, and institutional capacity to attach enforceable consequence.
This proposition predicts where durable margins will reside. If model providers and generic platform operators sustain structurally rising margins while bottleneck owners and verifiers see compression, that would constitute grounds to reject the thesis.
4) The liability sink.
Autonomous configurations cannot be jailed, disbarred, or made to bear consequence the way a natural person can. Yet high-stakes systems require a locus for enforcement: someone must be exposed to claims, penalties, and sanction. Authorization routes consequence to a legible counterparty; the signature converts recommendation into commitment.
This prediction is predominantly institutional. As cognitive work migrates to configurations, authorization-bearing roles should increasingly price exposure rather than cognition. Compensation should track enforceability, insurance capacity, and tail-risk structure more than marginal analytic skill. The proposition fails if authorization requirements relax as capability improves, or if systems themselves acquire standing that substitutes for human and institutional consequence.
5) The coordination prerequisite.
Agents transacting with one another cannot rely on the enforcement mechanisms that underpin human commerce. Legal recourse presumes legal standing. Social sanction presumes continuous identity and reputation. Physical coercion presumes a body. A software runtime has none of these. Coordination therefore requires substitutes: pre-committed collateral, programmatic settlement, and dispute pathways that function without identity, jurisdiction, or banking relationships.
The prediction in this case is infrastructural. If agent-to-agent coordination scales broadly on traditional legal and banking rails without new forms of collateralization, settlement, and verification, this prerequisite is overstated. If scalable coordination does emerge, it should do so where enforcement is endogenous to the protocol rather than external to it. The SHEAF protocol provides one candidate mechanism: a diagnostic that determines whether heterogeneous agents can reach structural agreement at all, returning a verifiable impossibility certificate when they cannot and pricing the infrastructure corrections that would make agreement feasible.
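What "enforcement endogenous to the protocol" means can be made concrete with a toy model. The sketch below is not SHEAF or any real protocol; the class, stakes, and payout rules are invented to show the shape of the mechanism: both sides pre-commit collateral, and settlement depends only on a verifiable predicate, never on identity or courts:

```python
# Toy model of endogenous enforcement: the buyer escrows payment plus a
# bond, the seller escrows a bond, and a failed verification forfeits the
# seller's bond to the buyer. Hypothetical construction for illustration.

from dataclasses import dataclass

@dataclass
class BondedExchange:
    bond: float     # collateral each side pre-commits
    payment: float  # price of the service

    def settle(self, delivered: bool, verified: bool) -> dict:
        """Resolve the exchange from pre-committed stakes and a checkable
        predicate alone, with no external recourse."""
        if delivered and verified:
            # Normal completion: payment moves to the seller, bonds return.
            return {"seller": self.payment + self.bond, "buyer": self.bond}
        # Failure: buyer recovers the payment, their own bond, and the
        # seller's forfeited bond. Defection is priced up front.
        return {"seller": 0.0, "buyer": self.payment + 2 * self.bond}

deal = BondedExchange(bond=10.0, payment=25.0)
print(deal.settle(delivered=True, verified=True))   # seller 35.0, buyer 10.0
print(deal.settle(delivered=True, verified=False))  # seller 0.0, buyer 45.0
```

The design choice worth noticing is that the penalty is collected before the transaction, not litigated after it, which is exactly what distinguishes this class of mechanism from traditional legal rails.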
Together these propositions form a dependency chain. The floor establishes discipline. The gradient determines sequence. Scarcity migration identifies who captures surplus. The liability sink explains why human and institutional roles persist. The coordination prerequisite specifies what infrastructure the agentic economy must build to scale.
The derivation begins with a monetary precedent. This is not because it is the primary subject, but because it provides an existence proof. One conversion already operates at scale. Electricity is transformed into a bearer asset through computation and is settled without identity or intermediary. The question is what else follows if that thermodynamic foundation is taken seriously.
Predictions
Five predictions follow from the propositions above. Each is stated in terms that permit disconfirmation within two decades.
1. Authorization becomes the scarce input. In sectors where cognitive output is abundant — medicine, law, finance, engineering — the binding constraint will shift from "can we produce the analysis" to "who can authorize its use." The diagnosis, the brief, the risk assessment, the design: these will be generated faster than they can be signed. If this shift does not become visible within two decades, the "authorization as position" thesis overstates the transition.
2. Credential regimes tighten rather than loosen. Existing licensing bodies will respond to cognitive abundance by raising barriers to authorization, not lowering them. The argument will be quality assurance; the effect will be rent protection. If professional licensing liberalizes substantially in response to AI capability gains, the rent-seeking prediction fails.
3. Signing bandwidth becomes priced. In domains where human authorization is required, the right to sign will become an explicit cost center. Physicians will be valued less for diagnostic judgment and more for the capacity to approve. Engineers will be valued less for design skill and more for the authority to stamp. If authorization remains unpriced and unconcentrated, the economic logic of this volume is incomplete.
4. Liability evolves toward pre-commitment. Legal doctrine will increasingly favor bonds, collateral, and pre-committed penalties over post-adjudication suits. When the defendant cannot be identified, when the counterparty has no legal standing, when the transaction occurs faster than courts can process, recourse must be built into the transaction itself. If traditional tort remains the dominant liability mechanism, the "receipt regime" economic case is weaker than this volume claims.
5. The recursive share of AI work rises. Within ten years, more than half of the computational work involved in training frontier models will be performed by AI systems, not human researchers. If human labor remains the binding constraint in AI development, the "capital that improves its own deployment" thesis is overstated.
Disconfirmations
1. Human cognitive labor remains binding. If knowledge work continues to be constrained by human attention rather than by authorization and verification, this volume is wrong about Factor Prime. The transition described would be real but slower and narrower than claimed.
2. Authorization remains abundant. If signing rights proliferate rather than concentrate (professional licensing opens, liability distributes, credential barriers lower), then "the signature decides who eats" is rhetoric rather than structure.
3. Liability routes successfully to humans. If legal systems successfully hold human principals liable for AI actions without requiring pre-commitment, the urgency of bonded arbitration diminishes. Traditional accountability would suffice.
4. Energy constraints relax. If energy becomes cheap enough that thermodynamic framing is economically irrelevant — if computation is no longer bounded by power, if the "work" in unforgeable work becomes negligible — the materialist foundation of this volume matters less than claimed.
5. Credential regimes open. If professional licensing responds to AI abundance by lowering rather than raising barriers, the rent-seeking prediction fails and the political economy must be revised.
No single one of these conditions, if it obtains, destroys the thesis outright; each indicates where the thesis would need to be confined to a narrower scope or a longer timeline. A theory worth having is a theory that can be wrong. These are the conditions under which this volume's central claims would be wrong.