Silicon Metabolism

There is a compelling analogy between biological organisms and industrial activities.

Robert Ayres, 'Industrial Metabolism' (1994)

Intelligence is physical, it has a thermodynamic cost, and the cost is what makes its advantages durable. The previous part established these claims. But this leaves a question unanswered. Why does computation deserve a special place in the production function, distinct from other energy-intensive industrial processes?

Steel production consumes enormous quantities of energy. So do aluminum smelting, cement manufacturing, and ammonia synthesis. These are not minor activities at the margins of the economy. They are the material substrate of industrial civilization, their energy intensity measured in exajoules, their outputs quite literally load-bearing. The steel in a bridge, the aluminum in an airframe, the cement in a foundation—these are what the built environment is made of, and making them requires converting vast quantities of fuel into heat and heat into chemical transformation. Yet no one proposes that steel or aluminum constitutes a new factor of production. The distinction lies in what computation transforms, and in what that transformation can be directed toward. What distinguishes computation is that it can be turned back onto itself.

A steel mill takes iron ore and coke and transforms them into steel, a specific material with specific properties. Tensile strength, ductility, corrosion resistance. Atoms are rearranged. Chemical bonds are broken and formed. The product weighs as much as the inputs minus whatever escapes as slag and exhaust. The transformation is physical in the most direct sense, which is precisely what limits its generality. Steel can be shaped into beams or rails or turbine blades, but it remains steel. It cannot become copper or plastic or semiconductor-grade silicon. The output is constrained by the chemistry of the input, and the constraint is absolute.

Computation transforms something different. It converts energy into state selection, choosing among possible configurations, designs, and actions in ways that reduce waste, error, and search cost. A computer takes a pattern of inputs and produces a pattern of outputs according to rules encoded in its program. The inputs and outputs are physical in the sense that they are represented by voltages or magnetic orientations or optical states, but the transformation itself operates on the pattern rather than the substrate. The same computation can be performed on silicon or gallium arsenide or superconducting circuits or, in principle, on any physical system capable of representing and manipulating discrete states. The logic is substrate-independent even as the cost remains bound to physics. In economic terms, computation converts energy into reduced search cost and better coordination. That is why it diffuses through the entire economy rather than remaining confined to applications that require its material properties, the way steel remains confined to applications that require the properties of steel.

Claude Shannon, working at Bell Labs in the 1940s, gave this observation its mathematical foundation. His 1948 paper defined information as a reduction in uncertainty and showed that information could be quantified in bits, independent of the physical medium carrying it (Claude E. Shannon, "A Mathematical Theory of Communication," Bell System Technical Journal 27, no. 3 (1948): 379–423). A bit is a bit whether it is stored in a transistor, punched into a card, or encoded in the timing of neural spikes. Shannon's framework separated the logic of communication from its physical implementation, creating a science of information that applied equally to telegraphy, telephony, radio, and any future medium yet to be invented.
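To make the bit concrete, here is a minimal sketch in Python of Shannon's measure of uncertainty. The distributions are arbitrary examples; the point is that the arithmetic makes no reference to any physical medium.

```python
import math

def entropy_bits(probs):
    """Shannon entropy H = -sum(p * log2(p)) of a discrete distribution, in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin carries one bit of uncertainty; observing the outcome removes it.
print(entropy_bits([0.5, 0.5]))   # 1.0
# A heavily biased coin carries less uncertainty, so less information is gained.
print(entropy_bits([0.9, 0.1]))   # ~0.47
```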

The implications went beyond communication. If information is substrate-independent in its logic, then so is computation, which is the systematic manipulation of information according to rules. A calculation performed on paper, on an abacus, on a mechanical calculator, or on an electronic computer produces the same result. The physical implementation affects speed, reliability, and energy cost, but not the logic of the operation. This is the sense in which computation is abstract. It is a pattern of relationships that can be instantiated in many different physical systems, even as the economics of each instantiation differ by orders of magnitude.

David Deutsch, a physicist at Oxford who helped lay the foundations of quantum computation, made the point explicit in a 1985 paper that reframed the Church-Turing thesis in physical terms (David Deutsch, "Quantum Theory, the Church-Turing Principle and the Universal Quantum Computer," Proceedings of the Royal Society of London. Series A, Mathematical and Physical Sciences 400, no. 1818 (1985): 97–117). Deutsch argued that the question of what is computable is a physical question. The laws of physics determine which transformations of information are possible and which are not. A universal computer, in Deutsch's formulation, is a physical system capable of emulating the relevant dynamics of many other physical systems to sufficient precision that the emulation yields useful predictions. The computer is universal because physics permits this kind of generality, not as mathematical abstraction but as physical fact.

This generality is what distinguishes computation from other industrial processes. A steel mill can only make steel. A chemical plant can only make the chemicals its reactors and catalysts permit. But a computer, given enough time and memory, can model a steel mill, simulate airflow through a turbine blade, explore the folding landscape of a protein, or evaluate millions of candidate chip layouts before committing any of them to fabrication. The substitution of computation for physical experimentation is limited, since many physical processes resist efficient simulation, but where it works it compresses the cost of search and design by factors that compound as the capability diffuses across sectors.
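The economics of that substitution can be sketched with hypothetical numbers; the candidate counts and costs below are placeholders, chosen only to show how cheap evaluation changes the shape of a search.

```python
def search_cost(candidates, eval_cost, prototypes=0, prototype_cost=0.0):
    """Total cost of screening candidates at a given per-evaluation cost,
    plus physically prototyping a shortlist. All figures are hypothetical."""
    return candidates * eval_cost + prototypes * prototype_cost

# Simulate a million candidate designs cheaply, then build five prototypes,
# versus building and testing a hundred physical prototypes directly.
simulate_then_build = search_cost(1_000_000, 0.02, prototypes=5, prototype_cost=50_000)
physical_only = search_cost(100, 50_000)
print(simulate_then_build, physical_only)   # 270000.0 vs 5000000
```

Under these made-up figures the simulated search examines ten thousand times as many candidates for roughly a twentieth of the cost, which is the sense in which computation compresses search.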

Steam power was transformative because it could drive pumps, looms, locomotives, and factories. Electricity was transformative because it could power lights, motors, communication systems, and computation. Computation is transformative for the same structural reason, the generality that allows a single technology to be applied across many domains, but with an additional property that sets it apart from every prior general-purpose technology.

Steam and electricity improved themselves only indirectly, by raising the productivity of the human and industrial systems that redesigned them. The engineer who improved the steam engine used pen and paper and physical intuition. The engine did not contribute to its own redesign except by making the economy productive enough to support the engineer. Computation improves itself more directly. Electronic design automation explores vast design spaces for the next generation of chips. Models participate in their own improvement cycle through code generation, synthetic data creation, and automated evaluation of candidate architectures. None of these loops is fully closed, and human judgment remains essential at decision points. Which architectures to explore, which benchmarks to trust, which trade-offs to accept. But the computational portion of the improvement cycle is growing, and the human portion is increasingly concentrated at the level of direction-setting rather than execution. The feedback loop is more native to computation than it ever was to steam or electricity, even if it remains incomplete and may remain so indefinitely.

Because it is general-purpose and partially self-improving, computation behaves less like a sectoral input and more like a meta-input, one that raises the productivity of search and coordination across many sectors simultaneously.

Standard production functions treat inputs as exogenous. Factor Prime is endogenous to its own improvement, and the recursion is not metaphorical: it operates through the training loop, where this generation's output becomes the next generation's input. Whether this endogeneity produces sustained acceleration or merely a transient burst before bottlenecks bind is an empirical question that the current evidence does not settle.

Bottlenecks eventually bind. The cost of training data, the physics of heat dissipation, the lead time for new fabrication facilities, the scarcity of engineering talent that can turn raw capability into deployed systems. All of these impose limits, and the limits are real. But over a meaningful range, the improvement process exhibits something closer to increasing returns than diminishing returns, because capability reduces the marginal cost of producing further capability in those portions of the process where computation can substitute for labor. A factor that produces more of itself operates according to a different logic than a factor that produces only output.
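A toy model can make the shape of this claim visible. The parameters below are invented for illustration: a share of the work of producing new capability is automatable and gets cheaper as capability rises, while the human share does not.

```python
def capability_path(steps=10, capability=1.0, budget=100.0,
                    automatable_share=0.7, human_unit_cost=10.0):
    """Illustrative recursion: the automatable share of the cost of the next
    unit of capability falls as 1/capability; the human share stays fixed.
    Returns the capability level after each period."""
    path = [capability]
    for _ in range(steps):
        unit_cost = (automatable_share * 100.0 / capability
                     + (1 - automatable_share) * human_unit_cost)
        capability += budget / unit_cost
        path.append(capability)
    return path

print([round(c, 1) for c in capability_path()])
# Growth accelerates while the automatable share dominates the cost,
# then flattens toward a linear path once the fixed human cost binds.
```

The trajectory shows exactly the pattern the paragraph describes: something closer to increasing returns over a range, followed by a regime where the non-automatable bottleneck sets the pace.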

Standard production functions treat computation as an intermediate good, produced by the capital and labor of the technology sector and consumed by other sectors as an input. This accounting is correct as far as it goes, but it obscures the recursive structure. Computation is used to produce more computation, and the computation it produces is used to produce yet more computation. The loops nest. An input that participates in expanding its own production frontier is qualitatively different from an input that produces only final goods — though the degree of that participation, and whether it constitutes genuine self-improvement or merely labor substitution within a human-directed pipeline, remains contested.

Consider the contrast with other factors. Labor can be trained and augmented, but it does not replicate under managerial control the way capital equipment can be replicated. The production of new workers remains stubbornly biological, bound by timescales no management technique can compress. Capital can be accumulated, but a factory cannot design and build its successor without human intervention at every consequential step. Land is fixed in supply. The surface of the earth expands for no one. Energy is abundant in principle but requires infrastructure to convert and deliver, and the infrastructure takes decades to build.

Computation, by contrast, can be directed toward its own improvement. The recursion is imperfect, and human judgment remains essential at many points, but the effect is to make the production of computational capability increasingly a function of computational capability itself. The factor is producing more of the factor, and the factor it produces is producing still more.

This recursive property has consequences for how we model economic growth. Standard growth models treat technological progress as an exogenous residual, the part of output growth that cannot be explained by measured inputs, or as the output of a separate research sector subject to diminishing returns. But if the technology that produces technology can improve itself, the returns to investment in that technology may be increasing rather than diminishing over some range, until physical constraints bend the curve back toward the horizontal. The scaling laws observed in large language models, where capability increases predictably with compute over many orders of magnitude, suggest that we remain within such a range. The question is how far the range extends and what happens when it ends, and no one knows the answer with confidence.
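The form of those scaling curves is a power law in compute. The sketch below uses that functional form with placeholder constants, since published estimates of the exponents and the irreducible-loss term vary; the point is only that improvement is smooth and predictable across orders of magnitude rather than arriving in jumps.

```python
def scaling_loss(compute_flop, irreducible=1.7, scale=2.3e8, exponent=0.05):
    """Illustrative scaling curve: loss = irreducible + (scale / compute)**exponent.
    The constants are placeholders, not fitted values from any particular study."""
    return irreducible + (scale / compute_flop) ** exponent

for c in (1e18, 1e20, 1e22, 1e24):
    print(f"{c:.0e} FLOP -> loss {scaling_loss(c):.3f}")
```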

The physics of computation sets the ultimate boundaries. Landauer's principle establishes a floor on energy consumption per bit erased. Lloyd's limits establish a ceiling on operations per unit mass and energy. Current silicon operates roughly a million times above Landauer's floor, which means there is room for improvement that would have seemed miraculous to engineers of an earlier generation. The brain demonstrates that biological systems can approach efficiencies far beyond what semiconductor technology has achieved. Within these boundaries, the improvement is driven increasingly by computation acting on itself. Computation does not fabricate its own transistors, not yet and perhaps not ever, but it increasingly reduces the marginal cost of designing, verifying, and operating the systems that do.
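The numbers behind "roughly a million times above Landauer's floor" are easy to check. The floor is k_B T ln 2 per bit erased, about 3 × 10⁻²¹ joules at room temperature; the per-operation figure for current silicon used below is an assumed order-of-magnitude value consistent with the ratio stated above, not a measurement.

```python
import math

k_B = 1.380649e-23      # Boltzmann constant, joules per kelvin
T = 300.0               # room temperature, kelvin

landauer_floor = k_B * T * math.log(2)        # minimum energy to erase one bit
print(f"Landauer floor at 300 K: {landauer_floor:.2e} J")   # ~2.87e-21 J

# Assumed order-of-magnitude energy for one logic operation in current silicon:
silicon_per_op = 3e-15                         # joules, illustrative
print(f"Ratio above the floor: {silicon_per_op / landauer_floor:.1e}")  # ~1e6
```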

This is what it means to say that energy is being structured into computation and computation into intelligence. The phrase describes a physical process, not a metaphor. Energy flows into data centers and is converted into switching states in transistors. The switching states implement algorithms that manipulate patterns of information. The patterns encode learned models that reduce uncertainty about the world. Predicting the next token, classifying an image, recommending an action, generating a design that a human engineer would not have found. Each stage of the transformation dissipates entropy and produces structure, and the structure has economic value because it does useful work.

Informational work has physical consequences. A better prediction reduces waste. A better classification improves allocation. A better recommendation saves time. A better design requires fewer materials or less energy to achieve the same function. The transformation of information into decisions, which is what computation accomplishes, translates into the transformation of matter and energy through the actions those decisions inform.

We can now give Factor Prime a semi-formal definition. It is energy, structured through computation and disciplined by selection, that produces economically useful uncertainty reduction. One operational proxy is cost per successfully completed task at a defined quality threshold, where success means acceptance by a downstream verifier, whether human or automated. The metric is imperfect, but it captures what the production function needs to track. The conversion of joules into economically relevant decisions. Thermodynamic cost is necessary for value but not sufficient. What matters is whether the structure produced by that cost passes the test of use.
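Written as a formula, the proxy is simply total cost divided by the number of outputs the downstream verifier accepts. The sketch below uses hypothetical prices and acceptance rates, since the definition above specifies the metric but not any particular values.

```python
def cost_per_accepted_task(energy_joules, price_per_joule, other_costs,
                           tasks_attempted, acceptance_rate):
    """Proxy from the definition above: total cost per task that passes the
    downstream verifier. Every input here is a hypothetical placeholder."""
    total_cost = energy_joules * price_per_joule + other_costs
    accepted = tasks_attempted * acceptance_rate
    return total_cost / accepted

# Made-up example: 5 GJ of compute bought at $0.03 per kWh, $400 of other
# costs, 10,000 attempted tasks, 80 percent accepted by the verifier.
price_per_joule = 0.03 / 3.6e6      # dollars per joule
print(round(cost_per_accepted_task(5e9, price_per_joule, 400.0, 10_000, 0.8), 4))
```

Either raising the acceptance rate or lowering the energy bill improves the metric, which is the sense in which thermodynamic cost is necessary for value but not sufficient.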