Appendix B: The Coordination Substrate
The problem is precisely how to extend the span of our utilization of resources beyond the span of the control of any one mind. — F. A. Hayek, "The Use of Knowledge in Society" (1945)
The mathematics is one thing; whether its assumptions hold is another. A derivation can be internally consistent and empirically wrong. The equations are correct given their premises. Whether those premises obtain in practice is a different question, addressed here for each major component: the hurdle rate, the bonding mechanism, the attestation infrastructure, the term structure, and the pricing model.
Readers who accept the mathematics but doubt its applicability should find their objections anticipated. Readers who reject specific premises should be able to identify exactly where the framework would break.
B.1 — When the Hurdle Rate Binds
Appendix A.1 derives the Bitcoin hurdle rate from first principles. Any compute capacity connected to electricity has a minimum value floor: the satoshis it could acquire by routing that electricity to mining. The derivation is clean. The question is when it binds.
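The floor can be sketched numerically: a kilowatt-hour buys some number of terahashes, and the network's reward rate converts terahashes into expected satoshis. This is a minimal illustration, not a calibrated model; the efficiency, network hash rate, and block reward figures below are assumed placeholders.

```python
def mining_floor_sats_per_kwh(
    miner_efficiency_j_per_th: float,   # energy per terahash (assumed, e.g. ~20 J/TH)
    network_hashrate_ths: float,        # total network hash rate in TH/s (assumed)
    block_reward_btc: float,            # subsidy plus fees per block (assumed)
) -> float:
    """Expected satoshis one kWh earns by mining: the value floor for
    any compute capacity connected to that electricity."""
    terahashes_per_kwh = 3.6e6 / miner_efficiency_j_per_th   # 1 kWh = 3.6e6 joules
    # Expected BTC per terahash: reward rate divided by total network work.
    btc_per_th = block_reward_btc / (network_hashrate_ths * 600)  # ~600 s per block
    return terahashes_per_kwh * btc_per_th * 1e8              # BTC -> sats

# Illustrative inputs, not current market data:
floor = mining_floor_sats_per_kwh(20.0, 6e8, 3.2)
```

More efficient hardware (lower J/TH) raises the floor; a larger network hash rate lowers it, which is the difficulty-adjustment dynamic discussed below.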
The arbitrage assumes frictionless switching. Reality has friction. The floor is not a hard surface but a soft zone, and the zone's width depends on conditions the mathematics abstracts away.
Hardware specificity. The cleanest arbitrage exists for general-purpose hardware that can run either workload. A GPU can mine (inefficiently) or run inference (efficiently). An ASIC can only mine. A TPU can only run inference. The floor binds tightly for flexible capacity and loosely for dedicated capacity.
In practice, most frontier inference runs on hardware optimized for inference, not mining. The arbitrage that disciplines pricing operates at the margin, where the next megawatt could flow to either workload. If that margin is thin, if most capacity is dedicated rather than flexible, the floor's disciplining effect weakens. The test is whether flexible capacity is sufficient to set marginal prices.
Switching latency. Hash-rate leasing markets allow redirection in seconds. Spinning up inference workloads takes longer: model loading, context hydration, cold-start penalties. The floor binds over horizons long enough for switching to occur. Over shorter horizons, prices can deviate.
The practical implication is that the floor operates as a medium-term gravitational pull rather than an instantaneous constraint. Inference pricing can float above the floor for hours or days before arbitrage compresses the spread. The mathematics describes equilibrium; the path to equilibrium has friction.
Information asymmetry. Real-time profitability data is noisy. Miners and inference operators cannot observe the exact BTC-denominated return on the marginal kilowatt-hour at every instant. They estimate from lagged data, forecasts, and heuristics. The arbitrage operates against yesterday's information, not today's reality.
This information lag creates a band around the theoretical floor. Prices can deviate within the band without triggering arbitrage because operators are uncertain whether the deviation is real or noise. The band's width depends on data quality, update frequency, and operator sophistication.
Geographic constraints. Power that cannot physically reach mining infrastructure is not disciplined by the floor. A stranded renewable installation without grid interconnection or co-located mining capacity has no alternative use for its electricity. Its "floor" is zero, not the Bitcoin hurdle rate.
The framework assumes sufficient geographic distribution of mining infrastructure that most power sources have access to the arbitrage. If mining concentrates in a few locations and power cannot flow freely, the floor binds only for power near those locations.
Difficulty adjustment dynamics. The Bitcoin difficulty adjustment tightens the floor over two-week windows. If mining becomes unusually profitable, hash rate enters, difficulty rises, and profitability compresses. If mining becomes unprofitable, hash rate exits, difficulty falls, and profitability recovers.
This self-correcting mechanism means the floor is more reliable over medium horizons (weeks to months) than over short horizons (hours to days). The mathematics describes the equilibrium the difficulty adjustment enforces; the adjustment takes time.
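The self-correcting mechanism is mechanical enough to state in a few lines. A simplified sketch of Bitcoin's retarget rule (the real rule clamps the measured timespan rather than the ratio, which is equivalent):

```python
def retarget(difficulty: float, actual_timespan_s: float) -> float:
    """Bitcoin's difficulty retarget, simplified: every 2016 blocks,
    difficulty scales so block times return to ~600 s, with any single
    adjustment clamped to a 4x move in either direction."""
    expected = 2016 * 600.0
    ratio = expected / actual_timespan_s
    ratio = max(0.25, min(4.0, ratio))   # consensus bounds each adjustment
    return difficulty * ratio

# Hash rate doubles -> blocks arrive twice as fast -> difficulty doubles:
d = retarget(1.0, 2016 * 300.0)  # -> 2.0
```

The clamp is why a sustained, severe hash-rate decline can outrun the adjustment: each window corrects by at most a factor of four.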
When the floor becomes soft. The hurdle rate mechanism weakens if:
- Flexible capacity becomes a negligible fraction of total compute
- Switching costs rise (regulatory, technical, or contractual lock-in)
- Mining concentrates in jurisdictions that cannot receive power flows from major inference regions
- Information quality degrades (exchanges fail, data feeds become unreliable)
- The difficulty adjustment mechanism breaks (sustained hash-rate decline exceeding adjustment capacity)
Falsifier. The floor is not binding if inference pricing persistently deviates from the hurdle rate by more than the switching cost band for periods longer than the difficulty adjustment window (two weeks). The band width is an order-of-magnitude estimate, perhaps 10-20% under current conditions, derived from observed hash-rate leasing spreads, cold-start penalties for inference workloads, and typical information lags in profitability data. The estimate requires calibration as markets mature; the claim is structural (a band exists), not precise (the band is exactly this wide). Systematic deviation beyond any plausible band indicates either that the arbitrage is not operating or that the framework's assumptions about flexible capacity are wrong.
The band's practical implication. A 20% switching-cost band represents loose discipline, not tight constraint. If the floor allows inference pricing to deviate by a fifth without triggering arbitrage, the hurdle rate's function is gravitational rather than binding: it pulls pricing toward a neighborhood rather than pinning it to a point. The floor's practical relevance as a disciplining mechanism depends on the band narrowing as switching infrastructure matures: hash-rate leasing markets, inference marketplaces, and containerized workload migration all compress the wedge by reducing redeployment friction. The framework's forward predictions should be read accordingly: the floor is currently a tendency, and the claim is that it tightens over time as the infrastructure that enables switching becomes more liquid and more competitive.
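The falsifier above is operational enough to encode. A sketch of the test, with daily price observations, the band width, and the two-week window all as assumed parameters:

```python
def floor_violated(prices: list[float], floor: float,
                   band: float = 0.20, window_days: int = 14) -> bool:
    """B.1 falsifier: the floor is not binding if pricing stays outside
    the switching-cost band for longer than one difficulty window.
    `prices` holds one observation per day; `band` is the assumed zone."""
    run = 0
    for p in prices:
        outside = abs(p - floor) / floor > band
        run = run + 1 if outside else 0   # count consecutive days outside
        if run >= window_days:
            return True
    return False
```

Short excursions outside the band do not falsify the claim; only a deviation that persists past the window does.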
B.2 — Why Overcollateralized Bonding Is Necessary
Appendix A.3 provides the mechanics. This section supplies the elimination argument: what must be true for any alternative to work at machine scale, and why each alternative either collapses into collateral anyway or quietly reintroduces legal identity.
The core constraint is not trust. It is enforceability. An agent (cheap to copy, cheap to discard, not natively tied to a legal person) cannot be reached by traditional enforcement. A contract that cannot impose a cost on non-performance is not a contract. It is a request.
Three candidate enforcement channels exist in human commerce:
Law. Works when counterparties have legal identity and attachable assets. Courts can compel performance, award damages, and seize property. The mechanism requires persistent identity, jurisdictional reach, and the infrastructure of civil procedure. Agents lack legal standing. They cannot sue and cannot be sued. Legal enforcement requires wrapping the agent in an entity that possesses these properties: a corporation, a trust, a registered individual. The wrapper can exist, but the scarce resource shifts: no longer intelligence but the wrapper that can be held accountable.
Reputation. Works when identity is persistent and expensive to discard, and when the discounted value of future rents exceeds the one-shot gain from defection. Human reputation functions because humans cannot copy themselves, cannot cheaply acquire new identities, and accumulate reputation over decades. Agents fail all three conditions. A runtime can be copied trivially. A new agent with a fresh identity costs nothing to instantiate. Historical performance data can be inherited, forged, or discarded. The conditions for reputation to substitute for enforcement do not naturally obtain.
Reputation could work if a sybil-resistant identity layer emerged and became the default interface, a primitive that makes identity expensive to create and impossible to discard. Such a layer would change the agent ontology. It does not exist today. Until it does, reputation alone cannot enforce.
Force. Works when counterparties have bodies, jurisdictions, and coercible surfaces. For agents, this channel is void. There is nothing to imprison, nothing to threaten, nothing to seize outside the digital domain.
Overcollateralized bonding is the clean alternative when coordination must proceed without wrappers, without durable identity, and without physical enforcement. It makes the cost of defection internal and automatic. Collateral is posted up-front and slashed under explicit conditions (Appendix A.3). No trust is required because no party can defect profitably. The enforcement mechanism is the cryptographic covenant itself.
The claim is not that collateral is pleasant or efficient. It is that in the absence of enforceable identity, collateral is the only enforcement that scales without asking anyone's permission.
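The "unprofitable by construction" condition reduces to one inequality: slashable collateral must exceed the one-shot gain from defection. A sketch, with the slash fraction as an assumed covenant parameter:

```python
def defection_profitable(gain_from_defection: float,
                         posted_bond: float,
                         slash_fraction: float = 1.0) -> bool:
    """A bonded commitment enforces itself only when breaking it
    forfeits more collateral than the break would earn."""
    return gain_from_defection > posted_bond * slash_fraction

def min_bond(max_gain: float, slash_fraction: float = 1.0) -> float:
    """Smallest bond that makes every defection unprofitable,
    given the worst-case gain the engagement exposes."""
    return max_gain / slash_fraction
```

Partial-slash covenants (slash_fraction below 1) require proportionally larger bonds, which is one driver of the overcollateralization in A.3.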
Boundary conditions. Overcollateralized bonding is not necessary if:
- Agents operate exclusively through entities with legal personhood and balance sheets (a "firm-first" agent economy where every runtime is a subsidiary)
- A widely adopted, sybil-resistant identity layer emerges and becomes practically unavoidable, allowing reputation to substitute for collateral
- Hardware-rooted identity plus remote attestation becomes ubiquitous and binding to settlement, so "invocation" is no longer cheap to discard
Each of these conditions could obtain. If they do, the framework's collateral requirements loosen. If they do not, collateral remains load-bearing.
B.3 — The Attestation Architecture
Every enforcement mechanism in the framework depends on accurate attestation of state transitions. The bonding covenants in A.3 release or slash based on oracle reports. The term structure in A.4 requires price feeds. Agent-CAPM in A.5 requires verified performance data. Attestation is not a peripheral concern. It is the infrastructure on which everything else rests.
If attestation fails, if oracles are unreliable, corrupt, or capturable, the enforcement mechanism collapses. Collateral does not enforce if the covenant cannot determine whether performance occurred. A term structure based on manipulated price feeds is worse than no term structure at all. The framework's viability depends on attestation reliability exceeding some threshold. The question is what that requires.
The structure of credible attestation. An attestation is credible when the attester has more to lose from false reporting than to gain. This can be achieved through several mechanisms:
Skin in the game. The attester posts a bond that is slashed upon proven false attestation. The bond must exceed the maximum gain from any single false report. This is the same overcollateralization logic applied to the attestation layer: make defection unprofitable by construction.
Reputation with exit barriers. If the attester cannot easily exit and reconstitute under a new identity, accumulated reputation becomes a hostage. This requires the same sybil-resistance that agent identity lacks; attesters may need to be entities with persistent legal standing even when agents are not.
Cryptographic proofs. For deterministic computations, zero-knowledge proofs or verifiable computation can attest to correctness without trust. The output of a computation can be proven to follow from its inputs. This eliminates the attestation problem for the subset of claims that are deterministic.
Multi-oracle consensus. When no single attester is fully trusted, requiring agreement among multiple independent attesters raises the bar for manipulation. An attacker must corrupt a threshold rather than a single point.
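The skin-in-the-game and multi-oracle conditions combine into a simple cost calculus for an attacker. A sketch of both checks, with bond sizes as assumed inputs:

```python
def bribery_resistant(bond: float, bribe: float) -> bool:
    """An oracle rationally declines a bribe only if accepting it
    forfeits more (the slashed bond) than the bribe pays."""
    return bond > bribe

def corruption_cost(n_oracles: int, threshold: int, bond: float) -> float:
    """With threshold consensus, an attacker must buy out `threshold`
    of `n_oracles` independent reporters, each forfeiting its bond
    if the false report is proven."""
    assert 0 < threshold <= n_oracles
    return threshold * bond
```

The multi-oracle design multiplies the attacker's cost by the threshold, but only if the oracles are genuinely independent; collusion collapses the threshold back to one.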
Attack surfaces. The attestation layer faces specific attacks:
Collusion. If the agent and oracle are controlled by the same party, they can coordinate false reports to extract collateral from counterparties. Mitigation requires oracle independence: separation of control, rotation, or random selection.
Bribery. Even independent oracles can be bribed. If the one-time bribe exceeds the oracle's bond or reputation value, rational oracles accept. Mitigation requires bonds large enough that bribery is prohibitively expensive.
Front-running. If the oracle's report is observable before settlement, parties can trade on the information. This may not corrupt the attestation but can redistribute value in ways that undermine market integrity.
Manipulation of inputs. The oracle reports what it observes. If observations are manipulable (price feeds on thin markets, sensor data on compromised hardware), the attestation faithfully reports a corrupted reality.
The recursive problem. If attesters must be bonded, who attests to the attester's performance? The regress has to terminate somewhere. Three candidate termination points exist, each with structural weaknesses.
Legal identity. The regress terminates at a layer of attesters with legal personhood and attachable assets. Courts can reach them. This is the most mature option: traditional auditing firms, regulated custodians, licensed professionals. The weakness is jurisdictional. Attestation disputes that cross borders require choice-of-law provisions, international enforcement, and months or years of litigation. For high-frequency, low-value attestations, legal recourse is too slow and expensive to be credible. For cross-jurisdictional agent activity, no court has uncontested reach. Legal termination works for the subset of attestations that occur within a single jurisdiction, involve high enough stakes to justify legal cost, and can wait for judicial resolution. The rest requires something else.
Hardware-rooted trust. The regress terminates at trusted execution environments (TEEs) that cannot lie about their computation. Intel SGX, AMD SEV, ARM TrustZone: hardware enclaves that attest to their own code and state. The weakness is the supply chain. Who certifies the certifiers of the silicon? The hardware manufacturer could insert backdoors. The firmware could be compromised. The attestation key could be extracted. Each layer of the stack introduces parties who must be trusted: foundry, packaging, firmware authors, key ceremony operators. Hardware trust does not eliminate trust; it concentrates it in semiconductor supply chains rather than in attestation service providers. For applications where the threat model excludes nation-state adversaries with fab access, hardware trust may suffice. For applications where it does not, the termination point is insufficient.
Social consensus. The regress terminates at a community of verifiers whose collective judgment is the final word. This is how Bitcoin itself terminates its consensus regress: full nodes run by thousands of independent operators, any of whom can reject invalid blocks. The weakness is the apparent tension with B.2's claim that reputation does not work for agents. The tension is real but resolvable. Social consensus can work as an attestation backstop precisely because the attesters are not agents; they are humans or human-controlled institutions with persistent identity, reputational stake, and legal exposure. The community that validates attestations need not be composed of stateless runtimes; it can be composed of entities for whom reputation does function. The cost is that this layer operates at human speed, not machine speed. Social consensus is suitable for low-frequency, high-stakes attestations (quarterly audits, dispute resolution, protocol upgrades) but cannot clear the volume of attestations that high-frequency agent commerce would require. It is a court of last resort, not a transaction processor.
The framework does not resolve the recursive problem. It flags that some termination point is required, that no termination point is fully satisfactory, and that practical systems will likely employ all three in combination: hardware trust for the high-frequency layer, legal identity for the medium-stakes layer, and social consensus as the ultimate backstop.
Deterministic versus subjective attestation. The attestation problem varies by claim type:
Deterministic claims can be verified by recomputation. "Did this code compile?" "Does this hash match?" "Did this transaction confirm before this block height?" These admit cryptographic proof or cheap replication. The oracle problem for deterministic claims is largely solved.
Subjective claims cannot be verified by recomputation. "Did the agent provide satisfactory service?" "Was the medical recommendation appropriate?" "Is this content harmful?" These require human judgment or AI evaluation that is itself contestable. The oracle problem for subjective claims remains open.
The framework's enforcement mechanisms work cleanly for deterministic claims. For subjective claims, they require either reduction to deterministic proxies (metrics that approximate subjective quality) or acceptance of attestation error as a cost of doing business.
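A deterministic claim of the "does this hash match?" variety needs no trusted oracle at all: any party with the inputs can recompute. A minimal sketch:

```python
import hashlib

def attest_hash_claim(payload: bytes, claimed_digest: str) -> bool:
    """Verify a deterministic claim by recomputation. No bond, no
    oracle: the check is self-attesting for anyone holding `payload`."""
    return hashlib.sha256(payload).hexdigest() == claimed_digest

digest = hashlib.sha256(b"state transition").hexdigest()
assert attest_hash_claim(b"state transition", digest)
assert not attest_hash_claim(b"tampered", digest)
```

Subjective claims admit no such recomputation, which is why they require the bonded or human-in-the-loop mechanisms above.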
The attestation problem as V/C problem. The recursive attestation regress described above appears intractable when stated as a monolithic problem: who verifies the verifier? The framework's own central concept provides the organizing principle that decomposes it. The V/C ratio—the ratio of value at stake to the cost of credible verification—varies by claim type, and the appropriate attestation architecture varies with it.
For deterministic claims, verification cost approaches zero relative to value: code compiles or does not, hashes match or do not, transactions confirm before a block height or do not. The V/C ratio is effectively infinite. These claims are already solved by cryptographic proof and cheap recomputation. No oracle is needed; the mathematics is self-attesting.
For observable-but-contestable claims—component quality within specification, delivery within a time window, sensor readings within tolerance—verification cost is moderate and V/C is in the range where bonded attestation with optimistic resolution operates. The UMA and Kleros pattern applies: the attester posts a bond, attestations are assumed correct unless challenged, and challengers must also post bonds. The economics work when the cost of a challenge is lower than the value at risk but high enough to deter frivolous disputes.
For subjective claims—service satisfaction, medical appropriateness, content quality—verification cost is high, V/C is low, and the attestation problem reduces to the same liability-sink structures identified elsewhere in the framework. Human judgment is the terminal oracle, and the cost of that judgment is the irreducible floor on verification. These claims cannot be made cheaply credible; they can only be made credible at a price that scales with consequence.
The regress terminates, then, not at a single backstop but at a graduated architecture. Each claim type receives the attestation mechanism its V/C ratio warrants. The decomposition reduces the monolithic oracle problem to a set of domain-specific engineering problems at different maturity stages: some already solved, some solvable with existing mechanisms, and some requiring the human-in-the-loop structures that the framework predicts will persist as liability sinks.
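The graduated architecture amounts to a dispatch on the V/C ratio. A sketch, in which the cutoff value is an illustrative placeholder rather than a calibrated threshold:

```python
def attestation_mechanism(value_at_stake: float,
                          verification_cost: float) -> str:
    """Route a claim to the mechanism its V/C ratio warrants (B.3).
    The V/C cutoff of 10 is an assumed placeholder."""
    if verification_cost == 0.0:
        # Deterministic claims: recomputation or proof is self-attesting.
        return "cryptographic proof"
    vc = value_at_stake / verification_cost
    if vc >= 10.0:
        return "bonded attestation with optimistic resolution"
    return "human-in-the-loop review"
```

The point of the decomposition is that each branch is a separate engineering problem at a separate maturity stage, not one monolithic oracle problem.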
The prior question: semantic agreement. The entire attestation architecture presupposes that agents can determine whether their claims are about the same things, that when Agent A reports "delivery completed" and Agent B reports "delivery not received," the disagreement is factual rather than terminological. In practice, agents with heterogeneous representations (different schemas, different evaluation criteria, different scoping of terms) may disagree not because one is lying but because they are not talking about the same object under the same conditions. A diagnostic that determines whether agreement is structurally possible given the agents' overlap structure — and that produces a verifiable certificate when it is not — sits beneath the attestation layer in the architecture. One candidate for this diagnostic is the SHEAF protocol, which examines pairwise overlaps and returns a verdict: agreement achievable, agreement structurally impossible (with certificate), or configuration too complex for exact determination. When impossibility is certified, the diagnostic can price the infrastructure corrections (additional shared contexts, relaxed equivalence standards) that would make agreement feasible. The attestation architecture described above assumes semantic agreement as a precondition; a protocol like SHEAF would make that precondition inspectable rather than assumed. Whether SHEAF specifically or some alternative diagnostic fills this role is an open engineering question; that some such diagnostic is needed is a structural requirement of the architecture.
Threshold requirements. The framework functions if attestation error rates stay below some threshold. The threshold depends on the application:
- For high-value, low-frequency commitments (infrastructure finance), even 1% error may be intolerable
- For low-value, high-frequency commitments (micro-payments for API calls), 5% error may be acceptable if the expected value remains positive
The term structure and CAPM pricing incorporate attestation risk as a component of the spread. But if error rates exceed the threshold where expected value turns negative, rational participants exit and the market unravels.
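The threshold logic is a break-even condition: an error rate is tolerable while expected value per transaction stays positive. A sketch, with the per-transaction margin and per-error loss as assumed inputs:

```python
def max_tolerable_error(margin_per_tx: float, loss_per_error: float) -> float:
    """Break-even attestation error rate e: the expected value
    (1 - e) * margin - e * loss remains positive while e is below this."""
    return margin_per_tx / (margin_per_tx + loss_per_error)
```

With a margin of 1 unit and a loss of 19 units per corrupted attestation, the break-even rate is exactly 5%, matching the micro-payment case above; raise the loss and the tolerable error rate falls toward the infrastructure-finance regime.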
Falsifier. The attestation architecture fails if:
- Oracle collusion or bribery becomes endemic (more than 10% of high-value attestations corrupted)
- No credible termination point emerges for the attestation regress (every proposed backstop is demonstrated vulnerable)
- Attestation costs remain high enough that only high-value transactions can bear them (the long tail of small transactions cannot be economically verified)
B.4 — The Bootstrap Problem
Appendix A.4 shows constructibility: given tradable instruments, a term structure can be extracted. The harder question is emergence: how the system moves from "possible" to "standard" without assuming the curve already exists.
The bootstrapping problem has a simple shape. The curve is useful only when it is liquid. Liquidity appears only when many actors treat the curve as useful. This is not a mathematical problem. It is a coordination problem among issuers, market makers, and balance sheets.
Why the coordination is hard. A term structure requires someone to be the first durable counterparty. The first issuer of a 90-day instrument quotes a rate. If no one trades against it, the rate is meaningless. If few trade, the rate is noisy. Only with sufficient volume does the rate become a reliable signal.
The first movers bear costs that later entrants avoid: establishing conventions, building infrastructure, absorbing early losses from mispricing, and educating counterparties. These costs are not recovered. They are donated to the ecosystem. Rational actors prefer to wait for someone else to pay them.
A plausible sequence. Consistent with the instrument set described in A.4:
Stage 1: Short-end price discovery. Liquid, low-duration instruments dominate early. Overnight and one-week rates require minimal commitment and minimal basis risk. Participants can learn the market's dynamics without locking capital for extended periods. The short end establishes itself before the long end.
Stage 2: A balance-sheet anchor. One or more large BTC treasuries (exchanges, lenders, miners, custodians) warehouse duration and quote two-way prices. Their willingness to absorb inventory creates the liquidity that other participants rely on. The anchor does not need to profit on every trade; it needs to establish the market's existence.
Why would any actor accept the first-mover costs described above? Three incentive structures can motivate the subsidy:
Strategic positioning. The entity that establishes the benchmark becomes the benchmark's natural administrator. If the BTC term structure becomes critical infrastructure for the agentic economy, the first mover captures structural advantage: data on market flows, relationships with early participants, influence over methodology. The costs are venture investment in a position that compounds if the market emerges.
Cross-subsidization. An exchange, lender, or custodian may subsidize term structure formation to strengthen its core business. A Bitcoin exchange benefits from derivatives markets that reference its price feeds. A custodian benefits from instruments that require custody. A lender benefits from a yield curve that prices its loans. The term structure is a loss leader for adjacent profit centers.
Ideological commitment. Some actors in the Bitcoin ecosystem operate on longer time horizons and broader objectives than quarterly profit maximization. Miners, protocol developers, and long-term holders may subsidize infrastructure that strengthens Bitcoin's settlement layer, treating the cost as contribution to a public good rather than commercial investment. This motivation is fragile at scale but may be sufficient for early-stage bootstrapping.
The prediction is not that first movers will be altruistic, but that the combination of strategic optionality, cross-subsidization, and ideological motivation can cover the gap between first-mover costs and first-mover revenues until network effects take over.
Stage 3: Reference publication. A curve becomes real when it is published as a benchmark with transparent methodology and survivable governance. The publication creates a focal point. Participants can reference "the" rate rather than negotiating which quote to use. The publisher's credibility determines the benchmark's authority.
Stage 4: Compression via carry. Once adjacent maturities exist, cash-and-carry and roll trades enforce consistency. Arbitrageurs profit from mispricings between maturities, compressing spreads and aligning the curve. The arbitrage activity is self-reinforcing: tighter spreads attract more participants, which further tightens spreads.
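The consistency the carry trade enforces can be stated directly: two points on the curve imply a forward rate, and a traded forward outside some spread of that implied rate is the mispricing arbitrageurs compress. A sketch with continuously compounded rates and an assumed tradable spread:

```python
def implied_forward(r1: float, t1: float, r2: float, t2: float) -> float:
    """Continuously compounded forward rate between t1 and t2 (years)
    implied by two points on the zero curve."""
    assert t2 > t1 > 0
    return (r2 * t2 - r1 * t1) / (t2 - t1)

def carry_mispriced(traded_fwd: float, r1: float, t1: float,
                    r2: float, t2: float, spread: float = 0.005) -> bool:
    """True when a traded forward deviates from the implied forward by
    more than the assumed spread: a tradable cash-and-carry opportunity."""
    return abs(traded_fwd - implied_forward(r1, t1, r2, t2)) > spread
```

Persistent mispricings that no one trades against are the "basis risk dominates the carry" failure mode listed below.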
What fails this process:
- Persistent segmentation: rates differ structurally across venues because settlement and custody are not fungible
- No credible benchmark publisher emerges, or publishers are not trusted because they are censorable or conflicted
- Basis risk dominates the carry, so arbitrage cannot compress the curve into a coherent object
- Liquidity remains too thin: bid-ask spreads exceed 50 basis points at benchmark maturities
The institutional prerequisite. The term structure is an institutional artifact before it is a statistical artifact. It requires actors willing to commit balance sheet before it produces the signals that justify committing balance sheet. Someone must be the first durable counterparty, and the structure of that entity determines whether the coordination problem is solved.
Three structural problems must be addressed before that entity can function.
The collateral gap. A stateless runtime has cash flow but no balance sheet. An agent performing inference work may generate substantial fees over ninety days, but to post the Bitcoin bond required for an engagement, it must possess the collateral before earning the fees. The agent cannot work without collateral, but cannot accumulate collateral without working. The deploying principal could front the capital, but this creates balance sheet drag proportional to the sum of all outstanding commitments. A third option resolves the circularity: a separate party posts the bond on the agent's behalf, in exchange for a fee that prices time value, slash risk, and operational margin. This creates a collateral intermediary whose viability depends on solving the second problem.
Specification underwriting. Traditional financial underwriting assesses identity—creditworthiness, reputation, legal recourse. The underwriter asks whether this person will pay. The collateral intermediary described above must assess something different: whether this configuration will defect. Model hash, system prompt, historical slash rate, verification architecture—these are the inputs to specification credit, not personal credit. Configurations with zero historical slash rates across thousands of invocations receive lower collateral fees than novel or high-variance specifications. Every bonding engagement generates data on specification reliability, and the intermediary that accumulates this data builds an advantage that competitors cannot replicate without equivalent volume. The underwriting problem is harder than traditional credit because no credit bureau exists to query; it is also more tractable because the inputs are deterministic and the outcomes are machine-verifiable.
Rate publication as benchmark formation. The collateral intermediary that publishes its cost of bonding at various durations does more than price its own services. It seeds the reference rate that other market participants use to price theirs. The rate at which the intermediary lends balance sheet to algorithms becomes a benchmark for the agentic economy, discovered through repeated transaction rather than administered by committee. This is the mechanism by which Stage 3 of the bootstrap sequence (reference publication) can emerge from Stage 2 (balance-sheet anchoring): the entity that warehouses duration and publishes rates becomes the benchmark's natural administrator, and its rates become the focal point around which the curve crystallizes.
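The intermediary's fee decomposes into exactly the components named in the collateral-gap discussion: time value of the locked BTC, expected slash loss, and an operating margin. A sketch in which every parameter is an illustrative assumption:

```python
def bond_fee(notional_btc: float, term_rate: float, years: float,
             p_slash: float, margin: float = 0.002) -> float:
    """Fee a collateral intermediary charges to post a bond on an
    agent's behalf: time value of locked capital, expected slash loss
    (full loss of notional if slashed), and an operating margin."""
    time_value = term_rate * years
    expected_slash = p_slash
    return notional_btc * (time_value + expected_slash + margin)
```

The published schedule of `term_rate` across durations is precisely the rate publication that seeds the benchmark: specifications with low historical slash rates see `p_slash`, and hence the fee, fall.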
B.5 — Agent-CAPM in Practice
Appendix A.5 derives the pricing equation. Using it operationally requires assumptions that deserve scrutiny, without pretending that agents solve full portfolio problems on every invocation.
The practical move is to treat pricing as a policy problem. A service specification is given a small set of parameters that update slowly. Each invocation prices off those parameters. The heavy computation happens at parameter estimation time, not at invocation time.
Inputs an agent needs. Operationally, pricing requires:
A term-structure oracle. The agent must observe the risk-free term structure at standard tenors (Appendix A.4). The oracle publishes rates at known intervals (e.g., every 144 blocks). Failure behavior must be specified: what rate does the agent use if the oracle is unreachable?
A market risk premium proxy. The agent must observe the expected market risk premium, derived from a broad index of agent-economy cash flows or a proxy basket. This is the hardest input to estimate because the agent economy is nascent. Early implementations may use fixed estimates or calibrate to observed spreads.
A beta estimate per specification. Beta attaches to the service type (inference, routing, actuation brokerage), not to individual invocations. Estimation uses historical cash flow covariance with the market proxy. New specifications without history require fundamental beta estimates (Section A.5).
A collateral model. The mapping from contract type and oracle reliability to collateral ratio (Appendix A.3). Higher-risk engagements require higher collateral; lower-risk engagements can operate closer to 100%.
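With those four inputs, the per-invocation computation collapses to one line: operating cost plus the required return on locked collateral over the commitment tenor. A sketch, with every parameter an assumed, slowly-updating input:

```python
def invocation_fee(op_cost: float, collateral: float, tenor_years: float,
                   r_f: float, beta: float, mrp: float) -> float:
    """Fee floor for one invocation under the B.5 policy approach:
    fee >= op_cost + collateral * (r_f + beta * mrp) * tenor.
    r_f comes from the term-structure oracle, mrp from the market
    premium proxy, beta from the specification-level estimate."""
    capital_charge = collateral * (r_f + beta * mrp) * tenor_years
    return op_cost + capital_charge
```

The heavy work (estimating `beta` and `mrp`) happens at parameter-update time; the invocation path is arithmetic.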
The capital constraint, not the compute constraint. The conventional assumption in AI economics is that compute cost dominates. Inference pricing, on this view, is a function of GPU-hours, model efficiency, and energy cost. Reduce the cost of a forward pass and you reduce the cost of the service.
This assumption inverts when coordination is collateralized.
If capital is scarce and every commitment requires overcollateralized bonding, "pricing cognition" becomes "pricing capital time." The fee schedule is dominated by:
- Time-value of locked BTC — the term structure rate applied to collateral over the commitment duration
- Systematic exposure — beta times risk premium, compensating for the service's covariance with the agent economy
- Operating costs — inference compute, bandwidth, orchestration (often a distant third)
This ordering is non-obvious and important. A developer focused on reducing inference latency or improving model efficiency may find that these gains are absorbed by the capital charge for bonding the service. A 10x improvement in inference cost saves pennies; a 10% reduction in collateral requirements saves dollars. The binding constraint has shifted.
The implication for system design: optimizing for compute efficiency is necessary but not sufficient. The systems that achieve lowest total cost are those that minimize collateral requirements through demonstrated reliability, efficient verification, and tight integration with attestation infrastructure. Reputation accumulation becomes a capital efficiency strategy, not merely a marketing advantage.
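The ordering claim can be checked with toy numbers. All figures below are assumptions chosen only to illustrate the comparison, not market data:

```python
# Illustrative decomposition of one bonded 90-day engagement.
collateral_btc   = 1.0        # BTC locked as bond (assumed)
term_rate        = 0.08       # annualized cost of locked BTC (assumed)
compute_cost_btc = 0.0005     # BTC-denominated inference cost (assumed)

collateral_charge = collateral_btc * term_rate * (90 / 365)

# A 10x compute improvement removes 90% of the compute line;
# a 10% collateral reduction removes 10% of the capital line.
savings_10x_compute      = compute_cost_btc * 0.9
savings_10pct_collateral = collateral_charge * 0.10
```

Under these assumptions the modest collateral improvement saves several times what the dramatic compute improvement saves, which is the inversion the text describes.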
What must be measured. For this to be more than armchair finance, markets must produce data on:
- Realized tenor-weighted returns on collateralized service portfolios
- Covariance of service cash flows with the market proxy used to define the risk premium
- How collateral ratios evolve as oracle error and dispute rates fall
- Whether beta estimates stabilize by specification type or remain noisy
Falsifier. Agent-CAPM fails as a predictive model if:
- Beta estimates do not converge by service specification (systematic risk is not priced consistently)
- High-beta services do not earn higher returns than low-beta services (the risk premium is not compensated)
- The term structure does not form (the risk-free rate input is unavailable)
- Collateral ratios do not decline with accumulated performance history (reputation does not substitute for collateral)
The mathematical machinery in Appendix A operates on assumptions that this appendix has made explicit:
| Section | Core Assumption | Failure Mode |
|---|---|---|
| B.1 | Flexible capacity arbitrages mining vs. inference | Hardware specialization, switching friction, geographic segmentation |
| B.2 | Legal, reputational, and physical enforcement are unavailable | Sybil-resistant identity emerges; firm wrappers become standard |
| B.3 | Attestation is accurate above threshold | Collusion, bribery, or input manipulation becomes endemic |
| B.4 | Coordination produces a liquid term structure | Segmentation, no anchor, or persistent basis risk |
| B.5 | Systematic risk is priced via observable beta | Beta instability, risk premium not compensated |
The framework does not require all assumptions to hold perfectly. It requires them to hold well enough that the mechanisms function. "Well enough" is an empirical question that the markets themselves will answer.
One assumption the table does not capture is semantic: the coordination substrate presupposes that agents can determine whether their local claims compose into globally consistent agreements. Section B.3 identifies this as the prior question that the attestation architecture presupposes and describes one candidate diagnostic (the SHEAF protocol) that could make the precondition inspectable. The coordination substrate's argument does not depend on SHEAF specifically but on the existence of some mechanism with these properties.
If the framework is wrong, it will fail in specific ways that trace back to specific premises. The traces should now be visible.