According to CNBC, HSBC CEO Georges Elhedery warned at the Global Financial Leaders’ Investment Summit in Hong Kong about a significant mismatch between AI investments and revenues, noting that while computing power is essential, current revenue profiles may not justify massive spending. Morgan Stanley estimates global data center capacity will grow six times over five years, with data centers and hardware alone costing $3 trillion by 2028, while McKinsey projects AI-capable data centers will require $5.2 trillion in capital expenditure by 2030. Elhedery emphasized that consumers aren’t ready to pay for AI services yet, and productivity benefits won’t materialize for five years or more, with General Atlantic CEO William Ford calling it a “10-, 20-year play” that could involve “misallocation of capital” and “irrational exuberance” in early stages. Big Tech firms now collectively expect capital expenditures to exceed $380 billion this year, while OpenAI has announced roughly $1 trillion in infrastructure deals with partners including Nvidia, Oracle and Broadcom.
The Technical Reality Behind the Numbers
The staggering infrastructure costs reflect fundamental technical requirements that differentiate AI computing from traditional IT. AI models require specialized tensor processing units (TPUs) and graphics processing units (GPUs) that consume significantly more power and generate more heat than conventional servers. McKinsey’s analysis reveals that AI data centers demand liquid cooling systems, advanced power distribution, and specialized networking infrastructure that can cost 3-5 times more per rack than traditional data centers. The computational density required for training large language models means companies aren’t just building more data centers—they’re building entirely different kinds of facilities with unprecedented power requirements, often exceeding 50 megawatts per facility compared to the 5-10 megawatts typical of traditional cloud data centers.
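The figures above imply a rough cost multiple that can be checked with back-of-envelope arithmetic. The sketch below uses only the numbers cited in this section (5-10 MW vs. 50+ MW facilities, 3-5x per-rack cost); the midpoint choice and the assumption that rack count scales with facility power are illustrative simplifications, not vendor data.

```python
# Back-of-envelope comparison of AI vs. traditional data center facilities,
# using the figures cited above. All inputs are illustrative assumptions.

TRADITIONAL_FACILITY_MW = 7.5   # midpoint of the 5-10 MW range cited
AI_FACILITY_MW = 50.0           # lower bound cited for AI-capable facilities
RACK_COST_MULTIPLE = (3, 5)     # AI racks cost 3-5x traditional racks per McKinsey

power_multiple = AI_FACILITY_MW / TRADITIONAL_FACILITY_MW
print(f"Facility power multiple: ~{power_multiple:.1f}x")

# If rack count scales roughly with power draw, total build cost scales as
# (power multiple) x (per-rack cost multiple):
low, high = (power_multiple * m for m in RACK_COST_MULTIPLE)
print(f"Implied facility cost multiple: ~{low:.0f}x to ~{high:.0f}x")
```

Even at the conservative end, a single AI-capable facility can represent roughly an order of magnitude more capital than the traditional data center it notionally replaces, which is why the aggregate projections climb into the trillions.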
The Capital Allocation Dilemma
What makes this spending cycle particularly challenging is the uncertainty around which AI applications will generate sustainable revenue streams. Unlike previous technology waves where use cases were clearer, AI’s transformative potential comes with implementation complexity that delays monetization. Companies face a prisoner’s dilemma: they must invest heavily to stay competitive, yet the path to recouping these investments remains unclear. Morgan Stanley’s assessment of six-fold capacity growth assumes widespread enterprise adoption, but many organizations are still struggling with basic data infrastructure and governance requirements needed to leverage AI effectively. The gap between infrastructure readiness and organizational capability creates a dangerous mismatch where capital is deployed years before the ecosystem can utilize it efficiently.
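Morgan Stanley's six-fold, five-year projection implies a specific compound annual growth rate, which puts the assumption in sharper relief. A minimal calculation (solving (1+g)^5 = 6 for g):

```python
# Implied compound annual growth rate behind a six-fold capacity
# increase over five years: solve (1 + g)^5 = 6 for g.
growth_factor = 6.0
years = 5
cagr = growth_factor ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 43% per year
```

Sustaining roughly 43% annual capacity growth for five consecutive years is the bar that enterprise adoption would have to clear for the projection to hold.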
Historical Precedents and Key Differences
While comparisons to railroads and electricity are valid—both required massive upfront investment with delayed returns—AI infrastructure faces unique challenges. Unlike physical infrastructure with predictable depreciation schedules, AI computing hardware faces rapid obsolescence as model architectures evolve. The current generation of AI chips may become obsolete within 3-4 years as new architectures emerge, creating a treadmill of continuous investment. Additionally, unlike railroads that served clear transportation needs or electricity that powered existing machinery, AI must create new markets and behaviors. The success of these investments depends not just on technological capability but on fundamental changes in how businesses operate and consumers behave—changes that historically take longer to materialize than infrastructure deployment.
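The "treadmill" effect can be made concrete with simple straight-line depreciation. The sketch below compares the annualized cost of the same capital outlay under a 3-4 year useful life (the chip-obsolescence range cited above) versus a multi-decade life typical of physical infrastructure; the dollar amount and the 25-year figure are hypothetical placeholders.

```python
# Annualized capital cost under straight-line depreciation, illustrating
# why a 3-4 year hardware lifespan forces continuous reinvestment compared
# to long-lived physical infrastructure. Figures are illustrative.
capex = 100.0  # arbitrary units of hardware spend

for useful_life_years in (3, 4, 25):  # AI chips vs. railroad-style assets
    annual_cost = capex / useful_life_years
    print(f"{useful_life_years:>2}-year life: {annual_cost:.1f}/year "
          f"({annual_cost / capex:.0%} of capex consumed annually)")
```

A 3-year replacement cycle consumes a third of the investment every year before any return is earned, whereas a railroad-style asset spreads the same outlay across decades.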
The Winners and Losers Equation
The concentration of spending among Big Tech companies creates both advantages and risks for the broader ecosystem. While their scale enables efficient infrastructure deployment, it also creates dependency relationships where smaller companies must build on platforms controlled by a few dominant players. The $1 trillion in OpenAI infrastructure deals represents a new model of capital formation where capability access becomes more valuable than ownership. However, this concentration also means that if any major player misjudges demand or technology shifts, the ripple effects could destabilize the entire supply chain from chip manufacturers to power providers. The current spending patterns assume continuous, exponential growth in AI adoption—an assumption that may not account for regulatory hurdles, talent shortages, or simply slower-than-expected enterprise adoption cycles.
The Path to Monetization
The critical question isn’t whether AI will transform industries—most experts agree it will—but whether the timing of returns aligns with investment cycles. Current infrastructure spending assumes that within 3-5 years, enterprises will be paying premium prices for AI services and consumers will adopt AI-powered products at scale. However, history suggests that transformative technologies typically follow an S-curve adoption pattern with a longer-than-expected initial phase. The internet bubble of the late 1990s saw similar infrastructure overbuilding, with many companies failing while the eventual winners emerged stronger. The difference this time is the scale of capital required: the projections above combine to roughly $8 trillion, nearly 8% of global GDP, creating systemic risk if returns don’t materialize within the expected timeframe.
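The S-curve argument can be sketched with a standard logistic function. The parameters below (a 10-year midpoint, the steepness constant) are purely illustrative assumptions to show the shape of the curve, not forecasts of AI adoption.

```python
import math

# Logistic S-curve sketch of technology adoption: a long, flat early
# phase before the steep ramp. Midpoint and steepness are illustrative
# assumptions, not forecasts.
def adoption(year, midpoint=10, steepness=0.5):
    """Fraction of eventual adopters reached by `year`."""
    return 1 / (1 + math.exp(-steepness * (year - midpoint)))

for year in (2, 5, 10, 15):
    print(f"Year {year:>2}: {adoption(year):.0%} of eventual adoption")
```

Under these assumptions, adoption sits in the low single digits for the first several years, which is precisely the window in which today's infrastructure spending must be financed.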
