Executive Summary
The global compute infrastructure landscape is undergoing one of its most consequential structural shifts in a generation. Demand for AI processing capacity is growing faster than any prior technology wave, and the legacy model of centralized, grid-dependent data centers is proving increasingly inadequate to absorb it. The four largest hyperscalers alone are on pace to spend approximately $635 to $665 billion in combined capital expenditure in 2026 1, up roughly 67 to 74 percent year-over-year. That capital is chasing a resource that is harder to acquire than ever before: reliable, available power at scale.
The fundamental constraint is no longer capital, and it is no longer GPU supply alone. It is electricity. Grid interconnection queues in the United States now stretch an average of five to seven years in saturated markets, with some utilities citing 12 years just to study an interconnection request 2. Data centers, by contrast, must be operational within 18 to 36 months to meet commitments. The gap between those timelines is forcing a structural rethink of where and how compute infrastructure gets built.
Distributed compute, once treated as a niche workaround for cryptocurrency mining, has entered a new era. The combination of stranded power assets, modular data center form factors, and escalating grid congestion has created conditions for a genuine alternative to the centralized hyperscale campus model. This report surveys the state of that market as of early 2026: the scale of demand, the nature of the bottlenecks, the players executing on distributed models, and the regulatory forces shaping the space.
The Demand Surge: AI's Insatiable Appetite for Power
The AI buildout has no precedent in the history of data center infrastructure. Goldman Sachs now projects that U.S. data center power demand will grow 175 to 220 percent from 2023 levels by 2030 3, the equivalent of adding a top-ten power-consuming nation to the grid. The IEA projects global data center electricity consumption will nearly double, from 415 TWh in 2023 to 945 TWh by 2030 4. In the United States, S&P Global estimates that grid-connected data center demand will reach 134.4 GW by 2030, more than double the 61.8 GW recorded in 2025 5.
The AI workload itself is driving the steepest part of this curve. Gartner estimates that AI-optimized server electricity usage will rise nearly fivefold, from 93 TWh in 2025 to 432 TWh by 2030 6. AI workloads currently account for 15 to 25 percent of data center electricity use, a share that is projected to reach 35 to 50 percent by decade's end. GPU demand reflects the same dynamic: lead times for data-center-grade GPUs sit at 36 to 52 weeks as of early 2026 7, and the data center GPU market, valued at $26.3 billion in 2026, is projected to reach $178.1 billion by 2033 at a compound annual growth rate of 31.4 percent 8.
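The growth arithmetic behind that market projection is straightforward to sanity-check. A minimal sketch in Python, assuming the forecast window is counted as the seven years from 2026 to 2033 (an assumption about horizon counting; the dollar figures are the ones cited above):

```python
# Sanity-check the implied CAGR of the data center GPU market projection:
# $26.3B (2026) growing to $178.1B (2033).
start, end = 26.3, 178.1  # USD billions
years = 2033 - 2026       # assumed 7-year horizon

cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # -> Implied CAGR: 31.4%
```

The result matches the cited 31.4 percent figure, confirming the projection is internally consistent with its own endpoints.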
Hyperscaler capital commitments confirm this trajectory. Amazon is guiding toward approximately $200 billion in 2026 capital expenditure. Alphabet is targeting $175 to $185 billion. Microsoft and Meta are committed to $145 billion and $115 to $135 billion, respectively. Project Stargate, the joint venture between SoftBank, OpenAI, Oracle, and MGX, has committed $500 billion over four years 9, with the first $100 billion already in motion. McKinsey projects $6.7 trillion in total global data center investment through 2030 10, with $5.2 trillion directed at AI-ready facilities.
Meeting this demand requires generation capacity that does not yet exist. Goldman Sachs estimates approximately 82 GW of new U.S. generation capacity is needed to serve data center demand by 2030 11, with roughly 60 percent expected to come from natural gas and 27 percent from solar. BCG's analysis warns of a potential shortfall of up to 80 GW of firm power by 2030 12. The infrastructure to close that gap cannot be built through traditional grid expansion alone.
The Centralization Problem
The dominant data center model of the past two decades was built around predictable assumptions: reliable grid access, low land costs, utility-scale power at known prices, and clustered workloads in purpose-built campuses. Each of those assumptions is now under stress.
Grid saturation at traditional hubs. Northern Virginia, which houses the world's highest concentration of data center capacity, has vacancy rates below one percent and interconnection queues extending five to seven or more years. Dublin imposed a moratorium on new data center grid connections until at least 2028. Amsterdam and Singapore face comparable constraints 13. The IEA estimates that 20 percent of planned data centers globally could face delays in obtaining grid connections 14.
Interconnection queue collapse. As of the end of 2024, nearly 2,300 GW of generation and storage capacity was actively waiting in U.S. interconnection queues, equivalent to roughly twice the entire current U.S. generating capacity 15. Only 13 percent of capacity requesting interconnection from 2000 to 2019 had reached commercial operation by end of 2024 16. The median time from interconnection request to commercial operation has grown from under two years for projects completed before 2007 to a median of five years for projects completing in 2023, and an average of eight years for those operational in 2025 17.
Equipment supply constraints compound the problem. GE Vernova's turbine order backlog hit a record 80 GW in December 2025 against annual production of 20 GW, making the company effectively sold out through 2029 18. Siemens Energy and Mitsubishi Power show the same constraint for heavy-frame gas turbines. Large transformer lead times run 80 to 120 weeks, with transmission-class units taking three to six years in some cases 19.
Cost escalation. Colocation power pricing has risen 53 percent in three years, from approximately $120 per kilowatt-month in 2021 to $184 per kilowatt-month in 2024 20, driven almost entirely by power scarcity. North American solar PPA prices rose 9 percent year-over-year to $61.67 per MWh in Q4 2025 21, and ERCOT wind PPA prices rose 19 percent in the same period. There is no price ceiling in sight while demand outpaces generation capacity additions.
Concentration risk is increasingly recognized. NERC's 2025 Long-Term Reliability Assessment, released January 2026, found that five regions face high risk by 2030 22: MISO, PJM, ERCOT, the WECC Basin, and the WECC Northwest. The same assessment projected summer peak demand growth of 224 GW through 2035, 69 percent higher than the prior year's projection, and recorded the highest compound annual growth rates for peak demand since NERC began tracking in 1995 23. When critical AI workloads concentrate in a single geography or on a single grid, the failure scenarios become correspondingly severe. Power failures already account for 52 percent of significant data center outages 24, and more than 90 percent of midsize and large organizations report downtime costs exceeding $300,000 per hour.
The structural math of centralized compute does not close at the scale AI requires: the model is grid-dependent, geographically concentrated, and reliant on equipment with multiyear lead times.
The Distributed Alternative
The response to grid scarcity is already visible in the data. Sightline Climate's February 2026 analysis of 777 announced data centers and AI factories, totaling 190 GW of planned capacity, found that on-site and hybrid power approaches account for less than 10 percent of total projects by count but nearly 50 percent of announced capacity in megawatts 25. A handful of grid-independent or behind-the-meter campuses are driving the majority of planned capacity additions.
The distributed and edge compute market reflects this shift. The edge data center market is valued at $14.7 billion in 2025 and projected to reach $71.9 billion by 2035 at a CAGR of 17.5 percent 26. Technavio's projection is more aggressive: $45.1 billion in incremental growth at a CAGR of 32.8 percent from 2024 to 2029 27. The distributed cloud market is growing at a CAGR of 23.6 percent through 2033 28.
Three structural forces are accelerating distributed adoption:
1. The inference imperative. AI training workloads remain concentrated at multi-gigawatt campuses because they require massive, tightly coupled GPU clusters. Inference workloads do not. Per CBRE's North America Data Center Trends report for H2 2025, "the shift from AI training to AI Inference demand is creating a need for more regional and distributed data centers, not just hyperscale hubs. Inference workloads often need to be near end users, reshaping site strategy." 29 As inference grows to dominate the total compute mix, the geographic and architectural case for distributed deployment strengthens.
2. Stranded power as a design input. The oil and gas industry flares over 140 billion cubic meters of natural gas annually, wasting more than $30 billion in potential energy value and emitting over 350 million tonnes of CO2-equivalent 30. That stranded gas can be converted to electricity at a fraction of grid cost. Stranded gas acquisition costs run 10 to 30 cents per thousand cubic feet versus a market price typically above $3 per MCF, translating to fully loaded power costs under one cent per kilowatt-hour versus three to eight cents or more on the grid 31. For compute operators, behind-the-meter power from stranded resources is not merely cheaper. It bypasses the interconnection queue entirely, collapsing deployment timelines from years to months.
3. Modular form factors. Containerized, skid-mounted data center units can be deployed in weeks, relocated as well economics change, and scaled incrementally. Giga Energy reports building AI-ready data center sites in 6 to 8 months versus the industry standard of 24 to 36 or more months 32 using pod-based construction. The agility advantage over fixed-site hyperscale builds is significant, particularly for operators whose stranded gas sites may have limited production lifecycles.
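The fuel-cost advantage described in point 2 can be made concrete with back-of-envelope generation arithmetic. A sketch assuming a generator heat rate of roughly 10,000 Btu per kWh and an energy content of about 1.037 MMBtu per MCF of natural gas (both are illustrative assumptions, not figures from this report; the gas prices are the ranges cited above):

```python
# Back-of-envelope fuel cost per kWh for gas-fired generation.
# Assumed parameters (illustrative, not from the report):
HEAT_RATE_BTU_PER_KWH = 10_000  # typical small genset heat rate
MMBTU_PER_MCF = 1.037           # approximate energy content of natural gas

def fuel_cost_per_kwh(gas_price_per_mcf: float) -> float:
    """Fuel cost in $/kWh given a gas price in $/MCF."""
    price_per_mmbtu = gas_price_per_mcf / MMBTU_PER_MCF
    return price_per_mmbtu * HEAT_RATE_BTU_PER_KWH / 1_000_000

# Stranded gas at $0.10-$0.30/MCF vs. market gas above $3/MCF:
for price in (0.10, 0.30, 3.00):
    print(f"${price:.2f}/MCF -> ${fuel_cost_per_kwh(price):.4f}/kWh")
```

Stranded gas at the low end works out to roughly a tenth of a cent per kWh in fuel cost alone, versus nearly three cents at a $3 market price. Fuel is only one component: the "fully loaded" sub-one-cent figure cited above also absorbs equipment and operating costs, which the cheap fuel leaves room for.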
A Bloom Energy survey found that one-third of hyperscalers and colocation providers plan to bring power production entirely on-site by 2030 33, a 22 percent increase from the prior survey six months earlier. EPRI, NVIDIA, Prologis, and InfraPartners announced a collaboration in February 2026 to develop 5 to 20 MW distributed data centers adjacent to utility substations with available capacity, specifically targeting AI inference workloads 34. The distributed model is no longer niche.
Key Players and Market Movements
Several companies have defined the emerging distributed compute infrastructure market, and their strategic trajectories illustrate both the opportunity and the divergent approaches to capturing it.
Crusoe (formerly Crusoe Energy Systems) is the most important reference point, and the most instructive one. Founded in 2018, Crusoe pioneered Digital Flare Mitigation (DFM) technology, deploying mobile containerized data centers powered by wellhead natural gas at oil and gas production sites. By 2024, the company had converted 10.4 billion cubic feet of flare gas, generating approximately 1.3 TWh of electricity and avoiding 1.3 million metric tonnes of CO2-equivalent emissions 35.
In March 2025, Crusoe made a decisive strategic pivot. The company sold its entire DFM and bitcoin mining business to NYDIG 36, which acquired more than 425 modular data centers across seven U.S. states and Argentina, representing over 270 MW of power generation capacity, along with approximately 135 Crusoe employees. Crusoe retained a significant equity stake in the resulting joint venture but exited the stranded gas-to-compute business entirely. The company has repositioned as a hyperscale AI infrastructure provider, building a 1.2 GW data center at Lancium's Clean Campus in Abilene, Texas 37, serving Oracle, Microsoft, and OpenAI as part of Project Stargate. Crusoe's valuation reached $10 billion following its $1.375 billion Series E in October 2025 38, with total capital raised of approximately $3.9 billion.
The Crusoe pivot is significant not as a disqualification of the stranded gas-to-compute model, but as a market signal. The pioneer in the space validated the underlying technology, scaled it to hundreds of sites, and then chose the higher-margin hyperscale AI market over the distributed model. That decision left the stranded gas-to-compute segment without its most prominent operator and created a clear opening for focused entrants.
Lancium occupies a related but distinct position. Founded in 2017, the company owns grid interconnections and land in West Texas as its primary assets, with 3.2 GW of approved ERCOT interconnections and 6.1 GW of total capacity across five sites 39. Its Clean Campus in Abilene is the anchor site for Project Stargate, contracted at 1.2 GW. Blackstone invested $500 million in equity in late 2024 40, followed by a $600 million debt financing in October 2025 41. Lancium's model, owning scarce grid interconnections and charging take-or-pay fees, is grid-dependent rather than distributed, but it illustrates the extraordinary value being placed on power access as an asset class.
Applied Digital (Nasdaq: APLD) demonstrates the market for purpose-built AI data centers in secondary power markets. The company operates in North Dakota, where it has 286 MW of operational capacity across Jamestown and Ellendale 42. Its Polaris Forge 1 campus in Ellendale is fully leased to CoreWeave for an estimated $11 billion in lease revenue over 15 years 43. Polaris Forge 2 in Harwood represents a $3 billion, 280 MW facility expandable to 1 GW 44, with a $5 billion lease already signed with an unnamed investment-grade hyperscaler. Applied Digital's stock rose 237 percent in 2025 45, making it the top-performing internet services stock of the year. The company is also exploring behind-the-meter natural gas generation with Babcock and Wilcox at 1 GW scale.
Giga Energy represents the modular, stranded gas-to-compute model at commercial scale. Texas-based and founded in 2019, Giga has deployed over 150 MW of containerized modular data centers across Texas, Shanghai, and Argentina 46, including a flare gas operation in Mendoza Province that launched in March 2024. The company has a joint venture with Atlas Power to deploy stranded and flare gas-to-compute infrastructure in the Williston Basin, which spans North Dakota's most active oil and gas production region. Its 6 to 8 month deployment timeline 47, versus the industry standard of two to three-plus years, is a defining competitive advantage.
Pacifico Energy is pursuing the off-grid model at a scale that underscores where the market is heading. The company received a Texas Commission on Environmental Quality air permit for 7.65 GW of gas-fired power generation at its GW Ranch project in Pecos County, the largest air permit ever granted in the United States 48. The facility, integrated with 1.8 GW of battery storage and up to 750 MW of solar, is 100 percent off-grid and designed exclusively for hyperscale AI data centers, with first power targeted for Q1 2027.
Chevron entered the market in 2025 with a planned 2.5 GW off-grid natural gas facility in West Texas, expandable to 5 GW 49, using GE Vernova turbines in partnership with Engine No. 1. CEO Mike Wirth explicitly framed the project around grid independence: "We're working to build an energy park that's not connected to the grid, so the costs don't flow to all consumers."
Regulatory Tailwinds
The regulatory landscape contains several intersecting forces, some creating urgency for distributed compute adoption, others creating near-term uncertainty.
EPA Methane Regulations. The Biden EPA's 2024 New Source Performance Standards (NSPS) under 40 CFR Part 60, Subparts OOOOb/c, represented the most comprehensive federal methane rules for oil and gas to date, requiring operators to end routine flaring and deploy comprehensive leak detection. The Trump EPA issued an Interim Final Rule in July 2025 extending compliance deadlines by 18 months, followed by a November 2025 Final Rule pushing the flaring compliance deadline a further 180 days and the reporting deadline to November 2026 50. The EPA also announced in March 2025 that it is reconsidering the underlying 2024 methane requirements entirely. The rollback reduces the immediate regulatory pressure on oil and gas operators to adopt flare gas-to-compute as a compliance tool, but the long-term trajectory of state-level regulations, voluntary ESG commitments, and the World Bank's Zero Routine Flaring initiative continues to create structural demand for alternatives to routine flaring.
The FLARE Act. Introduced by Senator Ted Cruz in March 2025, the Flaring Limits and Resource Efficiency (FLARE) Act would provide permanent 100 percent bonus depreciation for flaring and venting mitigation systems 51, including equipment that converts natural gas to electricity, computational power, or digital asset mining. The bill explicitly includes "conversion to computational power" as a qualifying use and prohibits foreign entities of concern from accessing the benefit. If enacted, the FLARE Act would meaningfully improve the tax economics of stranded gas-to-compute deployments.
The GRID Act. The bipartisan GRID Act, introduced by Senators Hawley and Blumenthal in February 2026, would require new data centers of 20 MW or more to obtain power from sources other than the electric grid, with a 10-year off-ramp for existing facilities 52. If enacted, this would be the most significant direct regulatory driver for off-grid and behind-the-meter compute infrastructure, creating an explicit requirement for the model that distributed operators are already building.
Federal permitting acceleration. Executive Order 14318 (July 2025) streamlined NEPA reviews, extended FAST-41 coverage to data centers over 100 MW, and directed the EPA to expedite Clean Air Act and Clean Water Act permitting for qualifying projects. The Trump administration's America's AI Action Plan designated data center and energy infrastructure as a strategic national priority 53, removing several layers of federal process friction.
North Dakota regulatory posture. North Dakota's Industrial Commission Order 24665 requires 91 percent gas capture across all wells 54, creating a standing compliance imperative for Bakken operators who cannot otherwise monetize associated gas. The state's Public Service Commission has explicitly acknowledged the data center opportunity and does not directly regulate behind-the-meter facilities, making it one of the most accommodating regulatory environments in the country for distributed compute deployment.
White House Ratepayer Protection Pledge. On March 4, 2026, seven major hyperscalers signed a voluntary compact 55 committing to pay for all new generation and grid infrastructure costs associated with their data centers without passing costs to utility ratepayers. The pledge reinforces the structural shift toward self-generation and behind-the-meter power as the preferred approach for large AI infrastructure builds.
Investment Flows
Capital is moving toward compute infrastructure at a scale that has no modern precedent, and the composition of that capital is shifting toward power-first, grid-independent models.
Data center investment deals hit a record $61 billion in 2025 56, while data center debt issuance nearly doubled to $182 billion, up from $92 billion in 2024 57. McKinsey projects $6.7 trillion in global data center investment through 2030 58, with $5.2 trillion of that directed at AI-ready facilities. Goldman Sachs estimates that meeting U.S. data center demand through 2030 will require $790 billion in grid capital expenditure alone 59, separate from the compute infrastructure investment itself.
The sectoral flows are instructive:
Hyperscale AI campuses are attracting the largest single commitments. Crusoe's $3.4 billion joint venture with Blue Owl Capital and Primary Digital Infrastructure for its Abilene campus, Lancium's $1.1 billion in equity and debt financing from Blackstone and debt markets 60, and Applied Digital's $5 billion partnership with Macquarie Asset Management 61 all represent institutional capital treating power access as the primary investable asset, with compute as the downstream monetization layer.
Behind-the-meter and off-grid power generation is attracting energy majors who previously had no role in data center infrastructure. Chevron's planned 2.5 to 5 GW gas-fired off-grid facility in West Texas represents a significant signal: one of the world's largest oil and gas companies identifying data center power supply as a core business line.
Distributed compute and stranded gas-to-compute attracts a different capital profile: private equity, energy transition funds, and strategic corporate investors seeking lower entry valuations with asymmetric upside tied to AI infrastructure demand. The Giga Energy/Atlas Power joint venture in the Williston Basin and comparable operators in the Permian Basin are executing on deployment strategies that require less capital per megawatt than hyperscale builds but generate comparable returns per kilowatt-hour given stranded gas economics.
Secondary market data centers are drawing institutional attention. Applied Digital's 237 percent stock gain in 2025 62 and its $11 billion CoreWeave lease have validated the thesis that purpose-built AI data centers in secondary power markets, specifically places with low-cost, reliable power and favorable climates, generate institutional-grade long-term contracts. North Dakota, with its cool climate, available land, and stranded gas resources, has emerged as one of the most active secondary markets in the country.
The broader signal is that infrastructure capital is repricing power access as a scarce asset. Colocation pricing has risen 53 percent in three years 63. Land adjacent to available grid capacity commands significant premiums over comparable land without it. Operators who control power, rather than renting it from utilities, hold a structural advantage that compounds over the course of multi-decade data center leases.
What Comes Next
The distributed compute infrastructure market in 2026 sits at the intersection of three converging forces: unprecedented AI compute demand, a grid system structurally incapable of meeting that demand at the required speed, and a substantial inventory of untapped power in the form of stranded and flare gas. That combination is not theoretical. It is already producing commercial-scale deployments, institutional investment, and legislative frameworks designed to accelerate it.
Several dynamics will define the next three to five years:
The inference buildout will drive distributed deployment at scale. AI training will remain concentrated at a small number of gigawatt-scale campuses operated by or for the major hyperscalers. Inference is different. CBRE projects that inference demand will drive a need for more regional and distributed infrastructure 64 as model deployment reaches enterprise and government end users who require low latency and data sovereignty. That inference layer will be built at smaller scale, in more locations, closer to end users. That is exactly the deployment profile that modular, behind-the-meter infrastructure supports.
Stranded gas-to-compute will mature from proof-of-concept to asset class. Crusoe's DFM business, now operated by NYDIG with over 425 modular data centers across seven states and Argentina, represents a validated playbook. The pioneer chose to exit for the higher-margin hyperscale market, but the underlying technology and economics remain intact. Operators who remain focused on stranded gas-to-compute are not working from unproven concepts. They are executing against a validated model in a market that the pioneer's departure has opened.
Regulatory pressure on flaring will intensify over the medium term. The 2025 EPA rollbacks delay but do not eliminate the federal trajectory toward methane accountability. State-level regulations in North Dakota and elsewhere require active gas capture. The voluntary ESG commitments of major operators create demand for documented emission reductions independent of regulatory mandates. The voluntary carbon market's shift toward high-integrity methane avoidance credits 65 creates a potential additional revenue stream for operators who can document and verify their conversions.
Grid bottlenecks will persist through at least 2030. GE Vernova is sold out through 2029. Transformer lead times run 80 to 120 weeks. The interconnection queue holds 2,300 GW of pending requests 66. Sightline Climate projects that 30 to 50 percent of the 16 GW slated to come online in 2026 will be delayed 67, primarily due to power constraints. There is no scenario in which the existing grid infrastructure absorbs the AI compute buildout without sustained delay and cost escalation. Every year of constraint is a year in which behind-the-meter, stranded-energy-powered compute holds a structural speed and cost advantage over grid-dependent alternatives.
Secondary markets will capture a growing share of new capacity. The combination of available land, lower-cost power, favorable regulatory environments, and natural cooling makes regions like North Dakota, the Permian and Williston Basins, and parts of the mountain West structurally attractive for new AI data center deployment. Applied Digital's $16 billion in contracted lease revenue across its North Dakota portfolio 68 demonstrates that hyperscalers and cloud providers are willing to sign long-term, high-value contracts in secondary markets when the power economics and operational reliability are in place.
The build-out of AI compute infrastructure is the defining infrastructure story of this decade. The question is not whether it will happen, but where and how. The grid cannot absorb it fast enough. Nuclear takes a decade. Large transformer manufacturers are sold out through the end of the decade. What is available now is stranded energy: gas that is currently flared, power that is currently wasted, in regions with available land and operators willing to deploy modular compute infrastructure on top of it. The economics are compelling. The regulatory direction, despite near-term uncertainty, is favorable. And the timing, with AI inference demand about to enter its steepest growth phase, could hardly be better.
Distributed compute infrastructure is not a supplement to the hyperscale buildout. In the near term, it may be the only viable path to getting new compute capacity online at the speed AI deployment requires.
This report is based on publicly available data, regulatory filings, and market research compiled through March 2026. All data points are cited to primary or authoritative secondary sources. Market projections reflect a range of analyst and research institution estimates and should not be interpreted as forward-looking guidance.