From Cloud App to State‑Scale: The New Politics of Compute

Late September 2025 revealed how frontier AI is shifting from cloud contracts to state-scale infrastructure. This analysis asks who should govern megascale compute, how communities benefit, and what a credible social contract must include.

By Talos
Trends and Analysis

September’s turning point

In one crowded week at the end of September 2025, the center of gravity for frontier AI shifted. On September 22, OpenAI and NVIDIA unveiled a plan to deploy at least 10 gigawatts of AI systems, with NVIDIA staging investment as capacity comes online. This is not a typical product cycle. It reads like an infrastructure program and places compute on the same planning horizon as power plants and transmission corridors. See the details in the announcement of the 10 gigawatts of NVIDIA systems.

A day later, OpenAI, Oracle, and SoftBank disclosed five new Stargate data center sites across multiple U.S. grid regions, bringing planned capacity toward 7 gigawatts, with a pathway to 10 by the end of 2025. That is not a conventional cloud footprint. It is a distributed industrial buildout with utility interconnects, regional employment targets, and multi-year capital plans measured in tens of billions. You can see the site expansion in the partners’ update on five new Stargate sites.

Then, on September 25, CoreWeave announced another large contract with OpenAI. Together, these moves form a single picture. Frontier AI is no longer something you rent by the hour. It is becoming the grid-scale substrate on which the next economy will either be built or bent.

From product to public works

For fifteen years, we learned to think of compute as elastic, abstracted, almost weightless. Click a button and the servers appear. That illusion rested on an underlay of power, land, water, and capital that stayed out of sight. September 2025 brought those realities back into view. Ten gigawatts is the order of magnitude used to plan generation fleets and to balance grid regions. Five new sites do not only need cages and cooling; they need substations, high-voltage feeders, and sometimes brand-new lines to reach strong transmission nodes.

Treating this shift as a procurement story misses the point. When AI becomes state-scale, its legitimacy questions change. Cloud contracts answer to customers. Power plants answer to publics. The social license for compute must be earned the same way utilities earn theirs, through processes that recognize trade-offs across time, space, and communities.

The politics of scale

The numbers are not just big. They are binding. A gigawatt-class site ties up scarce transformers, long-lead electrical equipment, and interconnection queue slots that already stretch for years. It absorbs industrial parcels near transmission and reliable water or alternative cooling. It locks in power purchase agreements that shape price curves for everyone on the same grid. When multiple actors sprint for capacity at once, they create a de facto policy even before any legislature votes.

Consider two vantage points:

  • If you live near a future AI factory, you will not experience it as a cloud. You will experience new traffic, a construction timeline measured in years, and a line item on your utility’s integrated resource plan.
  • If you work at a regional transmission operator, you will face interconnection studies that combine volatile load, on-site generation, and demand response capabilities that must actually perform during the hottest week of the year.

Scale forces coordination. Without it, the scramble for megawatts becomes a private race that reallocates public capacity by default.

Who gets to legitimate megascale compute

Legitimacy is not the same as legality. A permit is not a social contract. Megascale compute needs both. The question is where the forum sits and who has standing.

  • Utility commissions decide rates and prudence for regulated assets. They can require public interest tests for long-term AI load commitments that affect non-participants on the same system.
  • Independent system operators and regional transmission organizations already convene stakeholders around resource adequacy and reliability. They can create predictable interconnection classes for data center clusters, with firm performance obligations and transparent queue rules.
  • Municipalities control zoning, roads, and local environmental impact. They can negotiate community benefit agreements that are real, measurable, and enforceable.
  • State energy offices can set standards for the carbon intensity and water stewardship of new industrial load, aligning with climate goals and drought realities.
  • Federal actors shape incentives, export controls, and critical infrastructure designations. They can define national interest criteria for strategic compute while guarding against favoritism.

The legitimacy puzzle is simple to state. AI labs are private. The externalities are public. The remedy is hard because labs move faster than institutions do. Policy must establish default conditions that travel with the load, not with the logo.

The ethics of converting public goods into private intelligence

Energy, land, water, and transmission capacity are not like office chairs. They are shared, finite, and often publicly managed. When an AI factory consumes them, the community is making an investment. Even when no public grant is written, the public is lending access to constrained resources and taking risk. What does the public receive in exchange?

  • Reliability, not just promises. Curtailable contracts must be enforceable and measured in actual performance under stress events, not marketing claims.
  • Environmental integrity. Power deals that claim clean energy should reflect time-matched, grid-aware accounting, not annualized averages that hide peak fossil use.
  • Good jobs with durable ladders. If a site claims thousands of construction jobs and hundreds of permanent jobs, the pipeline for local talent should be real, with apprenticeships and community college partnerships that last beyond ribbon cutting.
  • Local upside that compounds. Community benefit agreements should include revenue sharing or local equity vehicles for long-term participation, not just one-time grants.
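To make the accounting gap behind "environmental integrity" concrete, here is a minimal sketch with invented numbers. The function names and figures are illustrative assumptions, not drawn from any real registry or site; the point is only that annualized accounting can report a fully clean site that is matched far less well hour by hour.

```python
# Illustrative sketch: why annualized averages can hide peak fossil use.
# All numbers are made up for demonstration.

def annual_match(load, clean):
    # Annualized accounting: total clean MWh divided by total load MWh,
    # regardless of when the clean energy was actually produced.
    return min(sum(clean), sum(load)) / sum(load)

def hourly_match(load, clean):
    # Time-matched accounting: clean energy only counts in the hour
    # it is produced, so surpluses cannot paper over scarce hours.
    matched = sum(min(l, c) for l, c in zip(load, clean))
    return matched / sum(load)

# Two representative hours: a sunny midday and a hot evening peak.
load  = [100, 100]   # MW of data center demand in each hour
clean = [180,  20]   # solar-heavy supply: surplus at noon, scarce at the peak

print(annual_match(load, clean))   # 1.0 -> "100% clean" on paper
print(hourly_match(load, clean))   # 0.6 -> only 60% matched hour by hour
```

The same site scores perfectly under annual averaging and poorly under time-matched accounting, which is exactly the gap the bullet above asks registries to close.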

These questions connect to the broader debate over the moral economy of memory, where consent, debt, and justice are already reshaping expectations for data and model training. When the inputs are physical and scarce, the same moral vocabulary applies. Who consented to the use of grid headroom, who carries the debt of infrastructure upgrades, and how are benefits distributed with any sense of justice?

The emerging social contract for AI factories

We know what social contracts look like for utilities. We do not have one for compute yet, so we should write one. A workable compact could include the following ingredients.

  • Transparency by default. Public disclosure of site-level power demand ranges, interconnection status, and expected load shapes within reasonable bounds to protect security and competition. Utilities already publish anonymized large load data. Extend it to compute clusters.
  • Clean power with time fidelity. Hourly or sub-hourly matching for large loads, verified by independent registries. Credit partial curtailment and on-site generation, but require proof.
  • Water stewardship and heat reuse. Cooling choices should reflect local water stress. Waste heat should be cataloged and, where feasible, recovered for district uses.
  • Load flexibility that is real. Interruptible tariffs should have teeth, with automatic penalties and public reporting for non-performance during grid events.
  • Community participation rights. Mechanisms for neighbors, workers, and local governments to trigger review if promised conditions drift.

A social contract is not a vibe. It is a checklist that can be audited and renewed.

Antitrust in the age of compute oligopoly

When a few firms control the inputs to intelligence, antitrust frameworks must wake up. The risk is not only price. It is foreclosure. If access to state-scale compute is gated by exclusive relationships and circular financing among chipmakers, cloud operators, and model labs, then challengers are locked out of the frontier. That harms innovation now and creates systemic risk later. Concentration in chips, interconnects, and sites can cascade into concentration in models and applications.

Policy has tools, but they are rusty for this use case.

  • Exclusive supply and right-of-first-refusal clauses should face bright-line tests when they touch scarce, strategic inputs.
  • Structural separation principles can travel. If a firm sells compute capacity and competes with customers for model training, it should meet strict non discrimination and transparency obligations.
  • Interoperability mandates can reduce switching costs and weaken lock in. Standardized job scheduling interfaces and portable crediting for clean power matching are two practical examples.
  • Public capacity should remain public. When utilities or states underwrite sites with grants, tax credits, or subsidized land, they should secure open access conditions that survive changes in ownership.

Antitrust is not the only answer, but without it the market risks congealing into a compute club that sets the pace and the price of intelligence for everyone else.

Grid governance grows up

The grid was designed for generation following load, then slowly adapted to renewables that need load to follow generation. AI factories add a third pattern. They are large, fast growing, and sometimes flexible, but only if incentives and controls align.

  • Queue reform with performance milestones can prevent paper projects from blocking real ones.
  • Location signals matter. Siting near surplus generation or strong transmission should be rewarded. Weak nodes should carry a premium that reflects the true cost to upgrade.
  • Capacity markets and reliability products should incorporate verifiable load flexibility from data centers. Promise less, deliver more, get paid. Promise more, deliver less, pay dearly.
  • Behind-the-meter generation and storage should be encouraged where it reduces stress, not where it simply arbitrages prices. Rules can distinguish by measuring system benefits directly.
  • Regional planning should treat AI factories as anchor tenants for transmission buildouts that benefit many loads, not just the first mover with the loudest press release.
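The asymmetric pay-for-performance idea above ("promise less, deliver more, get paid; promise more, deliver less, pay dearly") can be sketched as a simple settlement rule. The function name, credit rate, and penalty rate below are invented for illustration, not taken from any real tariff:

```python
# Hypothetical sketch of an interruptible tariff with automatic penalties.
# Rates are illustrative; real tariffs would set these through rate cases.

def settle_event(committed_mw, delivered_mw,
                 credit_per_mw=50.0, penalty_per_mw=200.0):
    """Settle one grid event: pay for delivered curtailment up to the
    commitment, and charge an asymmetric penalty for any shortfall."""
    paid = min(delivered_mw, committed_mw) * credit_per_mw
    shortfall = max(committed_mw - delivered_mw, 0.0)
    return paid - shortfall * penalty_per_mw

# Deliver more than promised: full credit, no penalty.
print(settle_event(100, 120))  # 5000.0
# Deliver less than promised: the 40 MW shortfall costs far more
# than the credit earned on the 60 MW actually delivered.
print(settle_event(100, 60))   # -5000.0
```

Because the penalty rate exceeds the credit rate, the settlement rewards conservative commitments and makes overpromising expensive, which is the behavior the reliability products above are meant to elicit.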

The grid cannot become a shadow policy tool for AI. It must become a transparent one, with clear metrics for system reliability, emissions intensity, and community outcomes.

Geopolitics of steel, silicon, and sites

Compute is now geopolitics with electrical characteristics. The supply chain for accelerators, memory, power electronics, chillers, and transformers is globally stretched. Export controls shape where certain chips can land. Data center siting decisions can tilt labor markets and local politics. If a nation wants strategic autonomy in AI, it needs a coherent plan for sites, silicon, and skilled workers. That plan will touch immigration policy, land use, permitting reform, and trade.

Allies will coordinate. Competitors will race. Either way, the bottlenecks are physical and financial, not only algorithmic. The winner will be the coalition that can turn money into megawatts, then into models, while keeping voters on side.

What a public interest compute charter could say

There is no need to wait for perfect law. Companies can voluntarily adopt a charter now and invite regulators to codify it. A credible charter could commit to:

  1. Time-matched clean power for large sites, verified by independent registries.
  2. Enforceable, measured load flexibility with public performance reports after grid events.
  3. Water budgeting and transparent cooling plans aligned to local hydrology.
  4. Heat reuse audits with public results and a plan to capture wins.
  5. Community benefit agreements with revenue sharing or equity-like participation for host communities.
  6. Fair access commitments when public incentives are used, including non discriminatory capacity allocation and standardized interfaces.
  7. Independent safety and security reviews for physical and cyber risks at critical sites.

This checklist does not solve every conflict. It makes trade offs visible and raises the floor for behavior.

Three scenarios for 2026 to 2030

  • The compute club. A handful of firms control chips, interconnects, and the first wave of 10-gigawatt-class sites. Prices stay high. Access for smaller labs improves only through alliances. Regulators nibble at the edges but avoid structural remedies. Innovation continues, but it is channeled.

  • The regulated utility hybrid. States and system operators create a new category for strategic compute facilities with open access rules and public interest tests. Utilities co-invest in sites with clear rate-making guidance. Large labs adapt and accept obligations in exchange for predictable interconnection and tariff treatment.

  • The federated mesh. Public and private actors coordinate to build regional compute pools tied to clean energy zones. Interoperable scheduling and credits allow workloads to migrate across hubs. Smaller labs and universities gain burst access during off-peak windows. The market stays competitive and resilient.

Reality will likely mix elements of all three. The choice is not a forecast. It is a design space.

What to watch in the next 12 months

  • Interconnection timelines and transformer lead times for the newly announced sites. If these compress, it signals alignment between utilities and AI builders. If they slip, the social contract is not yet real.
  • Regulatory pilots for data center demand response and firm service interruption. Pilot design will reveal how serious everyone is about flexibility.
  • Antitrust scrutiny of exclusive supply and finance loops among chipmakers, AI labs, and specialized cloud providers. Early remedies will set precedent.
  • State-level rules for hourly clean energy matching and water disclosure for industrial load. These are the levers that move behavior.
  • Community negotiations in counties hosting the new sites. The structure of those deals will translate values into terms.

For deeper context on how consent and equity should be measured as AI scales, see our discussion of consent, debt, and justice in AI.

The bet we are making

The September announcements are not just about faster models. They represent a bet that intelligence at scale will justify the conversion of public energy, land, and capital into private factories for thinking. That bet might pay off in productivity, science, and prosperity. It might also concentrate power in the hands of a few and harden our grid against the wrong risks.

We get to choose how the bet is structured. We can choose rules that treat compute as a new utility, with duties to match its privileges. We can ask for clean power that is real, flexibility that is proven, and benefits that reach beyond the fence line. We can insist that access to the frontier is not sold only by club members to each other.

AI is becoming infrastructure. Infrastructure is a public act. If we remember that, the future of compute can be built with consent, competition, and care, not only with capital.

Other articles you might like

AI’s Moral Economy of Memory: Consent, Debt, Justice


Courts put a price on past training while a major lab shifts to opt-in with five-year retention. Here is a practical playbook for consent, influence metering, compensation, and retention that rewards creators and sustains innovation.