Where Tokens Meet Atoms
Everything in war is very simple, but the simplest thing is difficult. These difficulties accumulate and produce a friction which no man can imagine exactly who has not seen war.
Carl von Clausewitz, On War
A delivery truck consumes diesel. A manufacturing line consumes natural gas and grid power. A language model consumes electricity and input tokens, whether those tokens come from a human typing directly or from another model in a chained, automated pipeline. These are different energy regimes operating at different speeds, and the gap between them is where the transition stalls.
As cognition cheapens, scarcity migrates downstream into actuation: the conversion of a decision into an irreversible state change. An order placed, a payment settled, a robot moved, a permit granted, a molecule synthesized, a contract signed. Robotics is one subset; actuation is the broader category. An agent that can design a product but cannot procure materials remains in the proposal stage. An agent that can draft a contract but cannot execute it remains in the proposal stage. The interface between computation and consequence becomes the binding constraint as the cognitive work itself commoditizes.
In the language of Factor Prime, actuation is where the selection gradient becomes expensive. Proposals are cheap; verification and liability are not. A trained model can generate a thousand product designs, a thousand contract drafts, a thousand logistics optimizations. The selection gradient, the mechanism that filters useful output from waste, operates through deployment. Deployment requires physical throughput, trusted interfaces, sensor feedback, and liability-bearing entities. Each of these scales differently than tokens.
The binding constraint is the irreversible commitment of resources under uncertainty: the moment when a decision becomes a consequence that cannot be cheaply reversed. Cognition proposes, but actuation disposes.
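The asymmetry between cheap proposals and an expensive selection gradient can be made concrete with a toy cost model. All numbers here are hypothetical, chosen only to show the shape of the budget: generation is nearly free, so almost the entire cost of a deployed outcome sits in verification.

```python
# Toy model, hypothetical numbers throughout: the cost of turning cheap
# proposals into one deployed outcome is dominated by verification (the
# selection gradient), not by generation.

def pipeline_cost(n_proposals: int, gen_cost: float, verify_cost: float) -> float:
    """Total cost of generating n proposals and verifying each one."""
    return n_proposals * (gen_cost + verify_cost)

# A model drafts 1,000 contract variants at $0.01 of inference apiece,
# but each draft needs $50 of review before it can bind anyone.
total = pipeline_cost(1_000, gen_cost=0.01, verify_cost=50.0)
verification_share = (1_000 * 50.0) / total

print(f"total ${total:,.2f}, verification share {verification_share:.1%}")
```

Dropping the inference price further barely moves the total; only cheaper verification changes the economics, which is why the selection gradient, not generation, is where scarcity concentrates.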
Moravec and Baumol
Hans Moravec observed in 1988 that tasks humans find easy are often tasks machines find hard, and vice versa (Moravec 1988). Chess and calculus operate in well-defined symbolic domains; walking on uneven ground and folding laundry require real-time sensorimotor integration in noisy environments. Large language models pass bar exams and generate functional code yet are still unable to tie a shoelace. The asymmetry persists because physical feedback loops, safety margins, and verification costs follow different scaling laws than token generation.
William Baumol identified a related phenomenon: sectors amenable to mechanization see falling costs while sectors requiring physical presence see rising relative costs and absorb an increasing share of economic activity (Baumol 1967). If cognitive work becomes radically cheaper while physical work remains constrained, economic activity shifts toward the bottleneck.
The Four Constraints
Actuation constraints come in four varieties. Each represents a different interface between computation and consequence, and each occupies a distinct position in the V/C ordering established in V.A.
Physical Throughput
An agent can generate a thousand product designs in hours. Fabricating one design requires weeks of factory time. The numerator scales with tokens; the denominator scales with atoms. This ratio of design capacity to production capacity can grow without bound as cognition cheapens, because the movement and transformation of matter obey physical law, not Moore's Law.
The timelines are irreducible. Semiconductor lead times for advanced packaging now extend twelve to eighteen months (SIA 2024). Large power transformer orders face two to three year delivery windows (DOE 2024). Data center construction runs eighteen to thirty-six months from site selection to operation, assuming interconnection approval, which itself averages four or more years in congested grid regions (LBNL 2024). Cognitive acceleration does not compress any of them.
Verification cost is high: did the package arrive intact? Did the weld hold under stress? Did the factory produce to specification? Each question requires physical inspection, sensor instrumentation, or time-delayed observation of consequences. Physical throughput caps the rate of automation even when cognition is abundant. Design scales with tokens. Fabrication scales with atoms.
Trusted Interfaces
The stickiness is organizational, not technical. An AI system can analyze customer data, generate marketing copy, and recommend pricing strategies. Executing those recommendations requires access to the CRM, the email platform, the pricing engine, and the payment processor. Each access point demands authentication, authorization, and audit trails — the authorization mechanisms that grant systems the right to act. The permissions are the bottleneck, and permissions are granted by humans with accountability for outcomes.
A Fortune 500 company may have hundreds of internal systems, each with its own permission model, audit requirements, and human gatekeepers. Connecting an agent to these systems is months of integration work per system, regardless of the agent's cognitive capability. Verification cost per transaction is moderate (did the API call succeed? did the database query return valid results?), but aggregate cost is high because authorization chains must be maintained, audited, and periodically re-certified. The organizational overhead scales with the surface area of integration. Trusted interfaces create organizational drag proportional to scope — they set the integration lead time.
Verification of Reality
Verification of reality — the capacity to observe what actually happened through sensors, audits, inspections, and ground truth — is the denominator in the V/C ratio. Where instrumentation is dense, V/C is high and automation proceeds rapidly. Where instrumentation is sparse, V/C is low regardless of the task's value.
Consider predictive maintenance. A chemical plant with comprehensive sensor coverage (temperature, pressure, vibration, flow rate at every critical point) can deploy automated maintenance scheduling with high confidence. A plant with sparse instrumentation cannot, even if the same predictive model is available, because the model lacks feedback. The AI system can model equipment failure and estimate remaining useful life, but the model is only as good as the sensor data feeding it. Industrial sensors cost tens to hundreds of dollars per installation point. Instrumenting a factory at the density required for high-fidelity prediction can exceed the cost of the equipment being monitored. The cognitive work scales with compute; the sensing work scales with physical infrastructure. Sensor density sets a lower bound on the selection gradient's operating speed — and often proves the binding constraint even when capability is sufficient.
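The sensing bound can be sketched in a few lines. The costs below are invented for illustration; the point is structural: the same predictive model, facing the same decision value, lands on opposite sides of the deployment threshold depending only on what fraction of critical points is instrumented.

```python
# Hypothetical sketch: sensor coverage sets the verification cost in the
# denominator of V/C, and therefore whether automation clears the threshold.

def v_over_c(value_per_decision: float, cognition_cost: float,
             sensor_coverage: float, sensor_read_cost: float = 0.01,
             manual_inspection_cost: float = 40.0) -> float:
    """V/C for one maintenance decision; uninstrumented points need a
    manual inspection to close the feedback loop."""
    verify = (sensor_coverage * sensor_read_cost
              + (1 - sensor_coverage) * manual_inspection_cost)
    return value_per_decision / (cognition_cost + verify)

# Same model, same $25 decision value, different instrumentation density.
dense = v_over_c(25.0, cognition_cost=0.05, sensor_coverage=0.95)
sparse = v_over_c(25.0, cognition_cost=0.05, sensor_coverage=0.10)

print(f"dense plant V/C = {dense:.2f}, sparse plant V/C = {sparse:.2f}")
```

In this sketch the dense plant clears 1.0 comfortably while the sparse plant does not, even though the cognitive cost term is identical and tiny in both cases.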
Liability-Bearing Entities
Liability frameworks gate deployment in high-stakes domains. Autonomous vehicles illustrate why: the cognitive work (perceiving roads, planning trajectories, executing maneuvers) is largely solved for highway conditions. Waymo operates in limited geofenced areas largely because liability and regulatory architectures remain geographically specific, rather than because the technology fails elsewhere (Waymo 2024). The deployability envelope is set not by capability but by the accountable parties who absorb consequences when things go wrong: legal persons, signatories, insured actors.
Who is liable when an autonomous system causes harm? The manufacturer? The software developer? The operator? The principal who deployed the agent? Insurance underwriters require answers; regulators require answers; courts require answers. These answers emerge through litigation, legislation, and regulatory interpretation — processes that operate on timescales of years to decades. The V/C ratio for high-liability domains remains low even when capability is high, because the denominator includes the expected cost of unresolved accountability. Correctness of a liability assignment may not be verifiable until litigation occurs, years after the decision.
When vertically integrated autonomous systems compose at organizational boundaries, compositional mismatch invisible to bilateral checks can externalize costs to non-participants. See The Physical Coherence Fee for a constructed scenario grounding this mechanism.
The Stack and Its Ordering
Actuation is not one bottleneck: it is a stack. The four constraints bind in different orders depending on the domain. Across all four, the mechanism is the same: cognition lowers the numerator's cost, but actuation keeps the denominator (verification and liability) sticky, so V/C ceases to improve where consequences become hard to verify or underwrite.
In high-stakes regulated domains (healthcare, autonomous vehicles, financial services), liability binds first. Capability exists. Deployment waits on institutional clarity. In enterprise software, trusted interfaces bind first. The technology works. Integration with legacy systems and permission structures determines adoption speed. In atoms-heavy industries (manufacturing, logistics, energy), physical throughput binds first. Design can iterate at digital speed. Fabrication and delivery cannot. Across all domains, verification of reality determines how quickly the V/C ratio can improve: where sensors are dense, iteration is fast; where sensors are sparse, even capable systems operate blind.
The ordering matters for prediction. An investor asking "when will agents transform domain X?" should identify which constraint binds first in that domain. The answer determines whether the timeline is measured in months (integration work), years (infrastructure buildout), or decades (institutional evolution).
How the Constraints Interact
The four constraints interact, and their interaction connects to the settlement infrastructure developed in V.B.
Physical throughput requires verification (did the package arrive? did the weld hold?). Verification requires trusted interfaces (who has access to the sensor data? who attests to the measurement?). Trusted interfaces require liability-bearing entities (who is accountable when the interface fails?). Liability requires physical presence (where is the entity that can be sued?). The loop passes through institutions, laws, and physical infrastructure at every turn.
This is what V.B's overcollateralized bonding begins to address for digital commitments. Smart contracts substitute code execution for court adjudication. Cryptographic enforcement substitutes for legal enforcement. These mechanisms work for transfers of tokens and proofs of computation. They do not work for physical commitments. A smart contract cannot verify that a package was delivered, that a machine was repaired, that a building was constructed to specification.
The oracle problem (connecting on-chain contracts to off-chain reality) is an actuation bottleneck. Until sensors are ubiquitous, tamper-proof, and legally admissible, the physical world remains partially opaque to autonomous coordination. The settlement layer handles digital finality; the actuation layer handles physical consequence. The gap between them is where human intermediation persists.
The Joule Standard in the Actuation Domain
The Joule Standard established in IV.E operates differently in the actuation domain.
Inference competes directly with Bitcoin mining for electrical capacity. A kilowatt-hour routed to inference must generate more value than the same kilowatt-hour routed to mining, or the capacity routes to mining. This sets a floor that determines which cognitive tasks are economically viable.
Actuation does not compete for the same resource pool in the same way. A delivery truck consumes diesel. A manufacturing line consumes natural gas and grid power. The conversion to product value follows a different production function than the conversion of tokens to decisions. The hurdle rate disciplines inference pricing. It does not directly discipline logistics pricing or manufacturing pricing.
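The asymmetry can be stated as a one-line routing rule. The prices below are invented: the mining alternative puts an opportunity-cost floor under each kilowatt-hour of inference, while the truck's diesel is priced by an unrelated market that the floor never touches.

```python
# Hypothetical prices: a kWh routes to inference only if the decisions it
# buys out-earn the same kWh spent on mining. Diesel for the delivery truck
# is priced by a different market entirely, so the mining floor does not
# discipline logistics costs.

MINING_REVENUE_PER_KWH = 0.12  # $ a miner could earn per kWh (assumed)

def routes_to_inference(decision_value: float, decisions_per_kwh: float) -> bool:
    """True if a kWh of inference out-earns the mining alternative."""
    return decision_value * decisions_per_kwh > MINING_REVENUE_PER_KWH

# High-value supply-chain decisions clear the floor with room to spare...
assert routes_to_inference(decision_value=2.00, decisions_per_kwh=40)
# ...while near-worthless decisions lose the kilowatt-hour back to mining.
assert not routes_to_inference(decision_value=0.0001, decisions_per_kwh=40)
```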
The hurdle rate operates indirectly through coordination. An agent managing a supply chain consumes inference to make decisions. If the decisions do not generate sufficient value to clear the inference cost, the agent is uneconomic. The actuation itself may be profitable, but the cognitive layer managing the actuation must clear the floor. This creates a nested constraint: actuation economics sets the ceiling on what agents can accomplish. The hurdle rate sets the floor on what cognitive overhead is sustainable.
The implication, in expectation, is that high-value actuation with cheap verification will be agent-managed first. An agent managing pharmaceutical cold-chain logistics (where a single spoiled shipment costs tens to hundreds of thousands of dollars and verification is cheap via temperature sensors and GPS tracking) can justify substantial inference overhead. The value at stake clears the floor by orders of magnitude.
Low-value actuation with expensive verification may remain human-managed. An agent managing janitorial scheduling (where the value per decision is single-digit dollars and verification requires physical inspection) cannot justify the same cognitive overhead. If inference costs a fraction of a dollar per complex decision, the agent is viable only when the margin exceeds that cost by a comfortable multiple.
The hurdle rate partitions the actuation space into agent-viable and human-retained domains. The partition moves as inference costs fall.
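The partition rule above can be sketched as a viability test. The inference cost, safety multiple, and per-case figures are assumptions chosen to echo the two examples in the text, not measured values.

```python
# Illustrative sketch, hypothetical numbers throughout: an actuation task is
# agent-viable when its margin per decision (value at stake minus the cost
# of verifying the outcome) clears the inference cost by a comfortable
# multiple.

INFERENCE_COST = 0.25   # $ per complex decision (assumed)
SAFETY_MULTIPLE = 10    # margin must exceed inference cost by this factor

def agent_viable(value_at_stake: float, verification_cost: float) -> bool:
    """True if the decision margin clears the cognitive-overhead floor."""
    margin = value_at_stake - verification_cost
    return margin > SAFETY_MULTIPLE * INFERENCE_COST

# Pharmaceutical cold chain: six-figure shipments, cheap sensor verification.
assert agent_viable(value_at_stake=100_000.0, verification_cost=5.0)

# Janitorial scheduling: single-digit dollars, physical inspection to verify.
assert not agent_viable(value_at_stake=8.0, verification_cost=15.0)
```

As INFERENCE_COST falls, the boundary sweeps downward through lower-value tasks, which is the partition movement the paragraph above describes.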
Value Migration
The analysis implies a value migration.
If cognitive capability becomes ambient while actuation remains constrained, value migrates to the bottleneck. The positions likely to capture the most value in the coming decade are actuators: energy grids, robotics platforms, logistics networks, manufacturing bases, regulated rails, and the institutional machinery that grants permissions and absorbs liability.
The pattern has precedent. In the early internet era, connectivity was the bottleneck. Telecommunications companies captured value. As connectivity became abundant, attention became the bottleneck; platforms that aggregated attention captured value. Each transition shifted the locus of scarcity, and the locus of value followed.
Electricity is the limiting case. Electricity was once a competitive advantage: factories located near hydropower, cities grew around generating stations. Now electricity is ambient; its presence confers no advantage, only its absence confers disadvantage. If cognitive capability follows the same trajectory, differentiation moves to what cannot be commoditized: the physical, the institutional, the regulated, the embodied.
The Two-Speed Economy
The framework appears to make contradictory predictions. Autonomous agents can coordinate through Bitcoin-denominated settlement, form markets without human intermediaries, and potentially constitute a self-sustaining production system. Yet actuation bottlenecks tightly constrain deployment: physical throughput measured in years, regulatory timelines extending across political cycles, sensor infrastructure that scales linearly with inspection points, liability frameworks that require human accountability.
These describe different layers of the same economy, operating at different speeds. The two-speed economy reconciles the apparent contradiction through temporal and sectoral separation.
The digital-native layer consists of agents trading tokens, data, compute cycles, attention, API access, and information-only services. Verification is computational: a smart contract executes or it does not; a data query returns valid results or throws an error; an API call succeeds or times out. Settlement is cryptographic: finality is achieved when a transaction reaches sufficient confirmations on-chain, roughly ten minutes per confirmation on Bitcoin (about an hour for the customary six), seconds for stablecoins on faster chains. Liability is limited to the collateral escrowed in advance; there is no recourse beyond the pre-committed bond. This layer can scale rapidly. An agent can negotiate terms, post collateral, execute a service, verify delivery, and settle payment in the time it takes a human organization to schedule a meeting.
The physical-interface layer consists of agents attempting to coordinate resources in the material world: manufacturing, logistics, energy delivery, construction, healthcare, legal actuation, regulatory compliance. Verification requires physical observation: did the package arrive intact? did the valve close properly? did the building pass inspection? Settlement depends on institutions: legal title, insurance claims, regulatory approval, tax compliance. Liability requires juridical persons: entities that can be sued, that possess assets subject to seizure, that operate within territorial jurisdictions. This layer scales at institutional speed. A logistics agent can optimize a route in milliseconds, but the truck still travels at highway speed on roads subject to traffic, weather, and hours-of-service regulations.
The gap between these layers is the source of the installation-phase instability that Carlota Perez identified in prior techno-economic transitions (Perez 2002). Capital flows to the layer where returns are visible and deployment is fast. Durable value depends on actuation in the physical layer, where deployment remains strangled by the constraints this section has detailed.
The installation phase is characterized by this mismatch. Agents proliferate in digital-native domains: trading tokens, optimizing ad placements, routing inference queries, arbitraging information asymmetries. These activities generate revenue, scale rapidly, and attract capital. They do not, by themselves, constitute the infrastructure on which the deployment phase depends. Deployment requires bridging to the physical layer: energy grids, logistics networks, regulatory frameworks, and liability structures that allow agents to commit resources without human principals bearing unlimited exposure.
When capital accumulates in the digital layer faster than the physical layer can absorb deployment, a correction occurs. The dot-com crash illustrated this: connectivity infrastructure could not be built as fast as market valuations assumed. The same dynamic recurs whenever a rapidly scaling layer outruns the physical and institutional substrate it depends on.
The Factor Prime transition faces the same dynamic. The digital-native agent economy can form rapidly, perhaps within a decade. The physical infrastructure required for the deployment phase (energy interconnection, robotics at scale, regulatory harmonization, liability clarity) operates on a different timeline. The gap between them is measured in years to decades, and the gap creates the volatility Perez predicts during the installation phase: capital overshoot into the rapidly scaling layer, followed by retrenchment when returns depend on the slowly scaling layer.
The framework predicts a two-speed transition: rapid formation of a digital-native agent economy with limited physical actuation, followed by a prolonged deployment phase as the physical layer catches up. The question is whether the institutional machinery adapts fast enough to prevent the gap from becoming a chasm, or whether the mismatch creates a crisis that forces adaptation.
Falsification Conditions
The two-speed thesis fails if the digital-native agent economy does not develop substantially ahead of physical automation—if agent coordination for information-only services remains marginal while robotics, permitting, and regulatory frameworks advance rapidly. The framework predicts digital-first, physical-later. If reality delivers both simultaneously or physical-first, the bifurcation does not hold.
The binding-constraint thesis fails if actuation costs decline at rates comparable to inference costs across multiple categories: robotics unit costs, permitting timelines, liability insurance premiums for autonomous systems. The framework predicts that actuation costs remain sticky relative to cognition costs. If actuation commoditizes in parallel, the value-migration prediction fails.
The verification-of-reality constraint weakens substantially if the oracle problem finds a solution that does not require institutional scaffolding—ubiquitous tamper-proof sensors with legal admissibility, or cryptographic attestation chains that courts accept as equivalent to physical inspection. The framework assumes sensors remain expensive and legally contested for the relevant horizon.
The Question That Remains
The bottleneck is identified: cognition cheapens while actuation stays expensive, and the gap between them structures the transition. But identifying the bottleneck is not the same as predicting what happens when agents reach it. An agent that can design, price, negotiate, and optimize but cannot sign, settle, inspect, or be sued does not simply stop. It presses against the constraint. It finds the seams where institutional scaffolding is thinnest and coordination costs are lowest, and it enters there first.
Where, exactly? Not in the abstract but in the specific: which markets form, which firm boundaries dissolve, which institutional interfaces become the choke points that determine who captures value and who becomes a spectator? The actuation bottleneck tells us what agents cannot yet do. The next question is what they do with what they can.