The Capture Battlefield
Quis custodiet ipsos custodes? (Who watches the watchmen?)
The liability sink is not merely a legal assignment. It is an economic asset. Whoever occupies it extracts rent from every deployment that must pass through its authorization.
Consider what the physician's signature is worth when diagnostic cognition commoditizes. The cognitive content of the diagnosis migrates to the model; the liability content remains with the physician. If every AI-assisted diagnosis requires physician sign-off, every deployment pays the physician's rent. The signature becomes a toll booth, and the toll is set not by the value the physician adds—which may approach zero in routine cases—but by the legal requirement that someone with credentials must authorize the output.
The same logic applies across authorization-gated domains, and it intensifies as the cognitive content of each domain migrates further. The engineer's stamp, the lawyer's opinion letter, the accountant's attestation, the regulator's certification: each is a position that can extract rent when the cognitive work that justifies it has departed. The fight over who occupies these positions is a fight over trillions of dollars in cumulative rent extraction across the transition.
This is not a prediction about corruption. It is a prediction about incentives. Stigler's insight was that regulatory capture occurs not because regulators are venal but because the concentrated interests of regulated industries face diffuse and unorganized consumers (George J. Stigler, "The Theory of Economic Regulation," Bell Journal of Economics and Management Science 2, no. 1 (1971): 3–21). The dynamic repeats: professional guilds, credentialing bodies, and regulatory agencies have concentrated interests in maintaining authorization requirements. Deployers and consumers have diffuse interests in reducing them. The default is capture.
Once established, regulatory rents are capitalized into asset values—licenses, credentials, franchise positions—creating constituencies whose opposition to reform appears as defense of property rather than defense of privilege (Gordon Tullock, "The Transitional Gains Trap," Bell Journal of Economics 6, no. 2 (1975): 671–678).
Legal document preparation illustrates the pattern in a domain where the stakes are lower than medicine but the dynamics are identical.
In 1999, the Texas Unauthorized Practice of Law Committee won a trial court ruling against Parsons Technology, maker of Quicken Family Lawyer, arguing that software providing legal forms constituted the practice of law. The Fifth Circuit vacated—not on the merits, but because the Texas legislature enacted H.B. 1507 during the appeal, carving out an exception for software that disclaims substitution for attorney advice (Unauthorized Practice of Law Committee v. Parsons Technology, Inc., 1999). The episode revealed the strategy: define "practice of law" broadly enough to encompass useful automation, then enforce that definition until political pressure forces legislative retreat. LegalZoom and its successors navigate this boundary daily—employing attorneys to review completed documents in some jurisdictions, disclaiming legal advice in others, structuring offerings to avoid the most aggressive enforcement. The cognitive work is performed by software. The authorization work is performed by humans, often perfunctorily.
The parallel is precise. An AI system can draft a contract more thoroughly than a junior associate billing $400 per hour. The system cannot sign the opinion letter. The signature's value is not the cognitive work it represents but the authorization it confers: the right to practice, the liability capacity to be sued, the institutional standing that courts recognize. As the cognitive gap widens, the signature becomes pure rent.
State bars understand this. Rules requiring attorney supervision of AI-generated work, mandatory disclosure of AI assistance, restrictions on AI in client communications—each proposal, framed as consumer protection, preserves attorney intermediation where the attorney adds no epistemic value. The concentrated interest of the bar faces the diffuse interest of legal consumers. The bar has organizational capacity that consumers lack.
The distinction that matters is between safety and incumbency protection.
Legitimate safety concerns exist. Autonomous systems in high-stakes domains can fail in ways that cause irreversible harm. Regulatory scrutiny of such deployments is not Luddism; it is the ordinary function of institutions designed to protect against concentrated downside risk. A regime that requires demonstrated reliability before deployment in life-critical applications is not capture; it is prudence.
The test is whether the proposed rule reduces harm or preserves incumbents. Consider three regulatory postures:
The first requires demonstrated performance before deployment. If the autonomous system must show lower error rates than human practitioners before authorization, the rule constrains capability in the service of safety. This is coherent. It may slow deployment, but it ties restriction to a measurable criterion that deployment can eventually satisfy.
The second requires human review of every output regardless of demonstrated performance. If the autonomous system outperforms human practitioners by an order of magnitude but still requires human sign-off on each decision, the rule preserves the human's position without corresponding safety benefit. The physician who rubber-stamps a thousand diagnoses per day adds no verification value. The requirement persists because it routes fees and liability, not because it improves outcomes.
The third prohibits deployment in categories defined by tradition rather than risk. If autonomous legal research is prohibited not because it causes harm but because it constitutes "unauthorized practice of law," the rule protects the bar's monopoly, not the client's interest. The category "practice of law" was defined before the capabilities existed; applying it to new capabilities is boundary manipulation in service of incumbency.
The regulatory battlefield is contested across all three postures. Incumbents prefer the second and third. Deployers prefer the first. The outcome in each domain will depend on the relative organization and political access of the contending parties. The prediction is not that capture will occur universally but that capture attempts will be universal. Some will succeed.
But the distinction between safety and incumbency protection admits a third category: regulatory delay that appears to reduce risk but actually increases it. If the technologies already deployed pose positive hazard—nuclear weapons, advanced biotechnology, current AI systems capable of misuse—then time itself is an enemy. The cumulative risk that matters is the integral of hazard over the duration of exposure, not the instantaneous rate at any moment. A regime that slows transition from a state with positive hazard accumulates more risk, not less, unless the technologies on the immediate horizon are substantially more dangerous than those already in existence. The precautionary principle implicitly assumes stagnation is safe. If any risky technology already exists, stagnation guarantees eventual catastrophe; the only question is the rate at which we pass through the period of elevated hazard. Delay that feels prudent may be the most dangerous choice available.
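The integral framing can be made concrete with a minimal numeric sketch. The hazard rates, horizon, and transition durations below are illustrative assumptions, not estimates of any real technology's risk: cumulative risk is the hazard rate integrated over exposure time, and if the baseline hazard during the transition exceeds the residual hazard after it, stretching the transition strictly increases total risk.

```python
# Illustrative sketch only: all rates and durations are assumed for
# exposition, not estimated from data.
import math

BASELINE_HAZARD = 0.01   # assumed hazard/year from already-deployed technologies
RESIDUAL_HAZARD = 0.001  # assumed hazard/year once the transition completes

def total_risk(transition_years: float, horizon_years: float = 100.0) -> float:
    """Cumulative risk = integral of the hazard rate over the horizon:
    elevated hazard during the transition, reduced hazard afterward."""
    during = BASELINE_HAZARD * transition_years
    after = RESIDUAL_HAZARD * (horizon_years - transition_years)
    return during + after

def catastrophe_probability(cumulative: float) -> float:
    """P(at least one catastrophe) under a constant-rate (Poisson) model."""
    return 1.0 - math.exp(-cumulative)

fast, slow = total_risk(10), total_risk(50)
assert slow > fast  # with positive baseline hazard, delay accumulates risk
print(f"10-year transition: P(catastrophe) over the century ~ "
      f"{catastrophe_probability(fast):.0%}")
print(f"50-year transition: P(catastrophe) over the century ~ "
      f"{catastrophe_probability(slow):.0%}")
```

The inequality reverses only when the post-transition hazard exceeds the baseline, which is precisely the paragraph's stated exception: delay reduces cumulative risk only if what is coming is substantially more dangerous than what already exists.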
Capture attempts are local; the economy is global. This asymmetry forces arbitrage.
Jurisdictional arbitrage is the market's response to regulatory divergence.
If the United States requires physician authorization for AI-assisted diagnostics and Singapore does not, medical AI companies will incorporate in Singapore, serve patients remotely, and route liability through Singaporean courts. If the European Union imposes strict liability on AI developers and Wyoming imposes liability only on deployers, development will migrate to Wyoming and deployment will occur from legal structures designed to limit EU exposure. If professional licensure requirements vary across states, services will be offered from the most permissive jurisdiction to customers in the most restrictive.
This is not speculation. It is the observed pattern in every domain where regulatory regimes diverge: corporate law (Delaware), financial services (Caymans, Luxembourg), online gambling (Malta, Gibraltar), cryptocurrency (various). The novelty is not the dynamic but the domain. Cognitive services, previously tied to local practitioners, become arbitrageable when the practitioner is a model and the jurisdiction is a legal structure.
The arbitrage creates pressure toward convergence, but the direction of convergence is contested. Restrictive jurisdictions will seek to block access from permissive ones; permissive jurisdictions will seek to attract deployers. The race can run in either direction. If restrictive jurisdictions succeed in blocking access—through data localization requirements, professional licensing enforcement, or liability extension to local operations—the arbitrage closes and local regimes prevail. If permissive jurisdictions succeed in maintaining access—through jurisdictional shielding, treaty structures, or practical unenforceability—the arbitrage persists and restrictive regimes become vestigial.
The observable test is where AI-native companies incorporate, where they locate their legal exposure, and whether restrictive jurisdictions succeed in reaching conduct that originates elsewhere.
Jurisdictional arbitrage is a market response to regulatory divergence. But the deeper question is what form regulation should take in the first place: whether authorization should gate capability or follow use.
The ex-ante model requires permission before deployment. An agency reviews the system, certifies its safety, and authorizes its use in defined contexts. This is the FDA model for drugs, the FAA model for aircraft, the approach that safety-focused advocates prefer. Its virtue is that it prevents harm before it occurs. Its vice is that it concentrates gatekeeping power, creates approval bottlenecks, and embeds the judgment of the moment into durable regulatory structures that may not adapt as capabilities evolve.
The ex-post model allows deployment by default and imposes liability for harm. Anyone can deploy; those whose deployments cause damage pay. This is the common-law model for most economic activity, the approach that deployment-focused advocates prefer. Its virtue is that it permits experimentation, distributes judgment across many actors, and adapts through case-by-case adjudication rather than categorical prohibition. Its vice is that it permits harm before correction, may under-deter when defendants are judgment-proof, and offers cold comfort to those injured by systems that should not have been deployed.
Neither model is obviously correct. The choice depends on the reversibility of harm, the detectability of error, the adequacy of compensation, and the costs of delay. Where harm is irreversible and error is difficult to detect, ex-ante review is justified despite its costs. Where harm is compensable and error is observable, ex-post liability is sufficient and ex-ante gatekeeping is rent extraction.
The theoretical literature on technology and risk typically models development as one-dimensional: society chooses speed along a fixed trajectory. This abstracts away from the fights that matter most. The capture battlefield is not primarily about pace; it is about direction. Which technologies develop depends on which authorization regimes obtain, which depends on who controls the liability sink. The path through capability space is itself being determined by the political economy of the present. Framing the choice as "faster or slower" obscures that the choice is also "toward what."
The political economy of this choice is asymmetric. Ex-ante gatekeeping creates a concentrated constituency—the gatekeepers—with strong incentives to preserve and expand their authority. Ex-post liability creates no such constituency. It merely empowers courts to adjudicate disputes as they arise. The default, absent countervailing organization, is the expansion of ex-ante regimes.
The regulatory capture battlefield is being contested now. The outcomes will shape who can deploy, who must authorize, and who captures the rent that flows through the authorization membrane. Every lobbying engagement, every rulemaking proceeding, every legislative drafting session is an allocation of claims on the surplus that Factor Prime will generate. Most participants do not understand the stakes—not because they are naive but because the framing of each specific battle (safety standards, consumer protection, professional quality) obscures the cumulative pattern. Frontier labs understand but are positioning for advantage rather than explaining the game. Professional guilds frame their interests as public welfare. Regulators understand their domain but not the cross-domain pattern. The legibility required to see the whole is rare, and those who possess it are not advertising their understanding. The fight, beneath the safety framing, is about who gets to be the toll booth.