Part V

The Agentic Discontinuity

Tools extend human capability. Copilots augment human judgment. Agents act autonomously within delegated scope. The progression sounds continuous—a smooth gradient from hammer to calculator to assistant—but the third step introduces a discontinuity that the gradient metaphor obscures. When a computational configuration can initiate transactions, commit resources, and coordinate with other configurations without human approval at each step, something categorically new enters the economy. Not a better tool. A different kind of participant.

Three terms require precision before proceeding.

An agent is a runtime invocation—ephemeral, stateless, without persistent identity. The full ontology appears in V.D. For Part V, the operative concept is the constraint structure that governs deployment.

Liability is the capacity to bear enforceable consequence—to be the party that courts, regulators, or counterparties can reach when something goes wrong. Not merely financial exposure: legal and institutional accountability that persists through time.

Authorization is the act that converts recommendation into commitment. The signature matters because the signatory can be held accountable for what was signed.

What do autonomous agents require to function at scale? The requirements turn out to be less technical than institutional. Capability has advanced faster than the infrastructure to deploy it safely; the binding constraints are verification, liability, and coordination. Verification determines which tasks automate first—the ratio of value produced to verification cost predicts the sequence better than capability benchmarks do, because tasks where correctness can be checked cheaply automate before tasks where verification is expensive, regardless of how intelligent the underlying system appears. Liability determines who signs—agents cannot be sued or incarcerated or subjected to reputational sanction, so consequence must route somewhere, which means human authorization remains necessary for high-stakes actions even when machines perform the cognitive work. Coordination determines how agents transact with each other—without legal standing or persistent identity, they cannot rely on credit or reputation in the traditional sense, and require instead an enforcement mechanism based on collateral that can be algorithmically seized upon defection.
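The collateral mechanism described above can be made concrete with a toy escrow: an agent without legal standing posts funds before transacting, and settlement either refunds the deposit on performance or seizes it for the counterparty on defection. This is a minimal sketch under assumptions of my own; the names (`Escrow`, `post_collateral`, `settle`) are illustrative, not drawn from the text.

```python
from dataclasses import dataclass, field

@dataclass
class Escrow:
    """Toy collateral escrow: an agent that cannot be sued posts funds
    that can be seized algorithmically if it defects."""
    deposits: dict = field(default_factory=dict)

    def post_collateral(self, agent_id: str, amount: float) -> None:
        # The agent locks collateral before it is allowed to transact.
        self.deposits[agent_id] = self.deposits.get(agent_id, 0.0) + amount

    def settle(self, agent_id: str, performed: bool) -> tuple[float, float]:
        # Returns (refund_to_agent, payout_to_counterparty).
        # On performance the deposit comes back; on defection it is
        # seized for the counterparty: enforcement without courts.
        amount = self.deposits.pop(agent_id, 0.0)
        return (amount, 0.0) if performed else (0.0, amount)

escrow = Escrow()
escrow.post_collateral("agent-a", 100.0)
refund, seized = escrow.settle("agent-a", performed=False)
```

The point of the sketch is the substitution: where a human counterparty would be disciplined by lawsuits or reputation, the agent is disciplined by a deposit whose seizure requires no legal process at all.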

These constraints, not capability limits, determine the pace and shape of what comes next.

Chapter 15 established that verification infrastructure can operate at scale without trusted counterparties. The Bitcoin mechanism demonstrated the physical possibility: a thermodynamic floor that makes forgery expensive enough to deter and cheap enough to verify. What follows asks the inverse question. Given that verification can happen at scale, which tasks will agents actually perform?
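The asymmetry the Bitcoin mechanism exploits, forgery expensive enough to deter yet cheap enough to verify, can be sketched in a few lines of hash-based proof-of-work. This is a generic illustration of the principle, not the protocol's actual parameters: producing a valid nonce takes a brute-force search whose expected cost grows exponentially with difficulty, while checking one takes a single hash.

```python
import hashlib

def verify(message: bytes, nonce: int, difficulty: int) -> bool:
    # Verification is one hash computation, cheap at any difficulty.
    digest = hashlib.sha256(message + str(nonce).encode()).hexdigest()
    return digest.startswith("0" * difficulty)

def mine(message: bytes, difficulty: int) -> int:
    # Production is brute-force search: roughly 16**difficulty hashes
    # on average before a valid nonce is found.
    nonce = 0
    while not verify(message, nonce, difficulty):
        nonce += 1
    return nonce

nonce = mine(b"tx", 4)       # thousands of hashes to produce
assert verify(b"tx", nonce, 4)  # one hash to check
```

The gap between the cost of `mine` and the cost of `verify` is the thermodynamic floor: raising difficulty makes forgery arbitrarily expensive while leaving verification constant.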

The answer is not determined by capability. Frontier models can, at sufficient quality thresholds, produce legal briefs, clinical diagnoses, structural engineering assessments, and architectural plans. What determines deployment is which of those outputs the economy will accept without re-verification by a human expert. A domain where output correctness is cheap to check (arithmetic, code compilation, formal proof, standardized tax calculation) will automate rapidly regardless of the task's cognitive complexity. A domain where output correctness is expensive to check (medical judgment, building safety, litigation strategy) will resist automation even when the underlying capability is present. The cost of confirming correctness, not the cost of producing the output, governs the sequence.

The ordering is therefore not by what machines can do. It is by what machines can be caught doing wrong, cheaply enough to learn from the catch. The ratio of task value to verification cost, formalized in Chapter 16 as the V/C ratio, predicts which domains cross the substitution threshold first. The rest of Part V traces this ordering across labor markets, institutional structures, and the liability architecture that determines who bears the cost when verification fails.
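The ordering claim can be made concrete with a toy ranking: tasks sorted by the ratio of value to verification cost. The numbers below are purely illustrative placeholders, not figures from the text; only the comparison they produce matters.

```python
# Hypothetical (value, verification_cost) pairs -- illustrative only.
tasks = {
    "tax calculation":    (200.0, 1.0),     # checkable against fixed rules
    "code compilation":   (150.0, 0.5),     # the compiler is the verifier
    "legal brief":        (5000.0, 2000.0), # requires expert review
    "clinical diagnosis": (3000.0, 2500.0), # verification nearly as costly
}

def vc_ratio(task: str) -> float:
    value, verification_cost = tasks[task]
    return value / verification_cost

# Higher V/C crosses the substitution threshold first, regardless of
# how cognitively complex the task is.
ordering = sorted(tasks, key=vc_ratio, reverse=True)
print(ordering)
```

Note that the legal brief is the most valuable task in the table and still ranks near the bottom: the ordering tracks the ratio, not the value, which is the section's claim in miniature.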