
3Q25 Letter: Are we in an AI capex bubble?

Dear Investors and Friends,

​

You’ve no doubt seen that many well-known investment managers have drawn comparisons between today’s AI infrastructure cycle and the late-1990s telecom or dot-com bubbles. We humbly suggest that those analogies are reductionist and miss the key differences.

We accept that the sheer size of hyperscaler AI capital expenditure is extraordinary and that the return on that capital remains uncertain. The world’s largest technology platforms are committing hundreds of billions of dollars annually to AI data centres, model development, and inference capacity. Whether those investments ultimately earn a satisfactory return depends on factors that are still evolving: the economics of AI inference, the pricing of agentic AI services, and the degree to which AI replaces or augments existing workflows. In short, we do not know the eventual ROI, we do not know which platform will win, and we do not know which core franchises—search, e-commerce, social media or office productivity—may be disrupted. In this context, it is perplexing to see the hyperscalers’ share prices re-rate further even as rising capex compresses their free cash flows.

The picks-and-shovels ecosystem serving the AI build-out is experiencing the opposite dynamic. Power, memory, and thermal-management suppliers are already seeing earnings leverage as capacity tightens. Return on invested capital for these enablers is high today and rising, reflecting constrained supply, high utilisation, and pricing power. Within Variis’ EM strategy, we have a sizeable exposure to this second category, owning companies such as Delta Electronics, SK Hynix, TSMC, and Samsung that stand to benefit, regardless of which hyperscalers ultimately prevail.

 

But in calling AI infrastructure capex a bubble, commentators risk missing the strategic logic driving the behaviour. Three motivations help explain why hyperscalers are willing to invest so aggressively despite unclear near-term ROI:

1. Defensive compulsion. Each hyperscaler recognises that AI may threaten its core franchise—search, e-commerce, social media or office productivity—and that falling behind could permanently erode network advantages. This is a textbook prisoner’s-dilemma dynamic: no player can risk under-investing while others press ahead.

2. Visible, rising demand. Real usage is growing at an extraordinary rate. Token processing, the basic unit of compute consumption, has expanded more than thirty-fold in the past year and is currently doubling every two months, a pace that, if sustained, would compound to roughly 64x a year. This isn’t the fibre-optic overbuild of the late 1990s, when supply anticipated demand; here, demand is already ahead of capacity.

3. A vast addressable market. As AI systems evolve into general-purpose agentic platforms embedded across enterprise and consumer services, the eventual total addressable market may exceed the size of today’s entire cloud and search industries combined.

 

From an investor’s perspective, then, it is reasonable to conclude that while the economic case is speculative, the strategic case is coherent—and that the most attractive risk-adjusted opportunities lie in the infrastructure layer.

Power Constrains the Pace of the AI Build-Out

Even if hyperscalers wanted to overbuild, their ambitions face hard physical limits. Unlike the telecom bubble—where fibre could be laid almost without constraint—today’s AI data centre expansion is capped by power availability and grid interconnection lead times.


Across major U.S. markets, new power connections can take three to five years. Large hyperscalers report that utility bottlenecks, not financing, are now the gating factor for incremental capacity. Gartner estimates that by 2027, up to 40% of AI‑optimised data centres will face power‑related operating restrictions. In effect, the grid acts as a structural brake on overinvestment.

 

This matters for investors because it means the AI build-out is less likely to experience the same “capacity overshoot” that triggered the telecom collapse.

​

Implications for the Value Chain

 

Variis’ exposure is deliberately positioned one step removed from this strategic contest, in the infrastructure and component suppliers that benefit regardless of which hyperscalers ultimately prevail. We do not assume that hyperscaler ROI will be high; rather, our thesis is that the investment itself is durable—a multi-year process of capacity build-out that underwrites sustained earnings growth for firms such as Delta Electronics, SK Hynix, TSMC, and Samsung.

 

We view these companies as the rational beneficiaries of a global AI arms race that may be speculative at the platform level but is structurally supportive at the picks‑and‑shovels layer—where utilisation is high, backlogs are robust, and pricing power is improving as capacity remains tight.

​

Risk and Timing: Early Innings of the AI Build-Out

The principal risk to our picks‑and‑shovels investments is that hyperscaler AI capital expenditure slows. If the platforms were to curtail build‑out plans materially, suppliers across the value chain would be impacted.


A bear argument has emerged around the limits of capital available to fund the AI infrastructure build-out. Hyperscalers have already committed most of their cumulative free cash flow from the past five years—roughly USD 1.4 trillion—to AI data centre build-outs, while individual projects such as Stargate imply investment requirements exceeding USD 1.5 trillion over the next few years. The concern is that we are moving from an era of self-funded expansion to one reliant on structured or “creative” financing, introducing a degree of reflexivity and potential fragility into the ecosystem.

Our counterpoint is that this is not primarily a microeconomic issue but a macro-strategic one. AI has become a global arms race in which nations and corporations view compute capacity as a determinant of economic competitiveness and sovereignty. At that level, the aggregate investment—on the order of 0.5–1% of world GDP annually—is entirely affordable for the global economy, particularly when compared with prior industrial revolutions or energy-transition capex cycles. Capital may be raised across private, sovereign, and corporate balance sheets, but the strategic imperative to expand compute capacity remains overwhelming. We therefore see the constraint not as affordability but as execution bottlenecks: power, advanced packaging, and manufacturing throughput.


Era / Theme                                 Annual Capex (% of GDP)   Timeframe
U.S. Railroad Expansion                     1.0–1.2%                  1860s–1880s
Electrification & Power-Grid Build-Out      0.8–1.0%                  1920s–1930s
Highways & Post-War Infrastructure          0.6–0.8%                  1950s–1960s
Telecom & Internet Build-Out                0.4–0.6%                  1990s–early 2000s
Cloud & Hyperscaler Infrastructure          0.3–0.4%                  2010s–early 2020s
Energy Transition (Renewables, EV Infra)    1.2–1.5%                  2020s–present
AI Infrastructure (Projection)              0.5–1.0%                  2024–2030 (est.)
Sources: Fogel (1964), Atack et al. (2010, NBER), Maddison Project Database (2023); IEA (2017, 2024), Smil (2010); FHWA (2020), IMF WEO (2020); OECD (2003), World Bank (2004); Synergy Research (2023), Dell’Oro Group (2022); BloombergNEF (2024); NVIDIA Investor Day (Oct 2025), McKinsey (2024).

​​

While the near-term bear concerns around financing creativity are valid, we believe consensus may still be underestimating the scale and duration of the AI infrastructure investment cycle. NVIDIA recently reiterated its view that cumulative AI data centre capex could reach USD 3–4 trillion by 2030, roughly double prevailing sell-side estimates. As management notes, most of the demand growth to date has come from the migration from CPU to GPU workloads; the larger, transformative applications of AI—inference, agentic processes, and new categories of software—are only beginning. Supply-chain data reinforce this: TSMC’s 2026 CoWoS (Chip-on-Wafer-on-Substrate) capacity is still forecast to undersupply NVIDIA’s needs by roughly 20%, and downstream constraints in power and data centre real estate now dominate the bottleneck equation. The implication is that, even as financing becomes more creative, aggregate global AI capex may ultimately exceed current consensus trajectories, driven by geopolitical imperatives and the accelerating adoption of AI workloads across industries. In our view, this validates the long-duration earnings power embedded in the picks-and-shovels ecosystem and supports the argument that the constraint remains physical, not financial.


According to Morgan Stanley, by mid-2026 leading LLM developers are expected to apply roughly ten times the compute of previous training runs. If existing scaling laws hold, model intelligence could double, triggering a new wave of workload growth and data centre demand. Their analysis also highlights that AI infrastructure stocks able to de-bottleneck this growth—especially in semiconductors, packaging, power, and cooling—should be key beneficiaries.

• Power availability remains the primary governor of data centre expansion, constraining how quickly capacity can come online.
• TSMC’s CoWoS packaging throughput is fully utilised, with expansion lagging customer needs by several quarters.
• Thermal and power-conversion infrastructure suppliers report record backlogs and high utilisation rates.

These bottlenecks suggest that AI infrastructure investment remains in its early innings. Even if quarterly capex growth moderates, the multi-year trajectory of spend appears upward, as hyperscalers seek to close the gap between current data centre capability and demand for AI inference and agentic workloads. For the picks-and-shovels suppliers serving this ecosystem, this dynamic underpins a sustained runway of earnings growth and rising returns on invested capital.

​

Historical Context: Telecom Overbuild vs. AI Infrastructure


The late-1990s telecom capex bubble is often presented as a template for understanding AI infrastructure investment. A closer comparison of the two reveals important differences.

​

A detailed Federal Reserve study (Doms, 2004) documents how U.S. telecom service-provider capex rose from roughly $47bn in 1995 to $121bn in 2000, an increase of roughly 160%, while fibre route-miles more than doubled as multiple long-haul and CLEC entrants duplicated infrastructure, extrapolating from projections that Internet traffic would double every three months. When demand proved closer to doubling annually, utilisation and pricing collapsed; many operators failed, and capex retrenched sharply. A September 2002 WSJ article reported that only 2.7% of the installed fibre was being used at the time.


The lesson for today is not that capex intensity per se is a problem, but that unconstrained entry and falling unit costs in a commoditising network invite overbuild. In AI, the dynamic is different: AI data centre utilisation rates are high, investment is concentrated among a handful of cash-generative hyperscalers, and the binding constraints are physical—power, advanced packaging, and thermal limits. That structure lowers the probability of a classic overshoot-and-bust.


Demand–Supply Evidence: Why Capacity Remains Tight


Demand: Token processing growth has been extraordinary over the past year, with enterprise pilots maturing into production workloads and consumer AI adoption driving inference. The TAM for agentic AI—spanning productivity, code, design, search, commerce, and robotics—appears vast, and it is still early days.

Supply: Advanced packaging capacity (e.g., CoWoS), HBM availability, and power infrastructure are the practical governors of near-term build-out. Lead times remain long; utilisation at critical nodes is high; and suppliers across power, memory, and thermal management are scaling from elevated baselines. The balance of evidence is that compute supply, not demand, remains the bottleneck.


Beneficiaries of the AI Data Centre Build‑out

​​

We recognise the uncertainty around hyperscaler ROI and do not assume the platform layer will deliver particularly attractive incremental returns on capex. But the strategic logic for continued investment is clear, demand is visible, and the gating factors are physical, not financial. 


Variis’ positioning reflects this assessment: a significant allocation to the infrastructure enablers where returns on capital are attractive today and rising as capacity remains strained. In our view, we are still in the early innings of the AI data centre build‑out, and the most attractive risk‑adjusted opportunities remain in the picks‑and‑shovels of this new general‑purpose technology.
 

 



Risk Management and Diversification

​​

Our investment in AI-related picks-and-shovels businesses — including Delta Electronics, SK Hynix, TSMC, and Samsung — totals close to one-quarter of the Variis strategy. This is a material allocation, reflecting our belief that these companies are well-positioned to capture durable, high-return growth as AI data centre capacity expands and power, packaging, and thermal bottlenecks persist.

​

However, roughly three-quarters of the strategy is invested in other distinct secular themes largely unrelated to AI. One of these is our long-standing theme of investing in Emerging-Market businesses with Developed-Market analogues, which today represents about one-third of the strategy. These include online-to-offline companies such as Allegro, MercadoLibre, Coupang, and Didi, which benefit from replicating proven economic models of their developed-market peers while operating in structurally higher-growth environments.

 

Another theme is our investment in high-quality, stable compounders — for example, banks such as Bank Central Asia (BCA), OTP Bank, and HDFC Bank — that benefit from low credit penetration and strong competitive moats. These positions provide both stability and compounding power, balancing exposure to more innovation-driven themes like AI.

 

Thank you for your continued interest and support!

 

Leila, Eko, Rufus and Jamie

Disclaimer

FOR PROFESSIONAL INVESTORS AND ADVISORS ONLY
The contents of this document are communicated by, and the property of, Variis Partners LLP. The information and opinions contained in this document are subject to updating and verification and may be subject to amendment. No representation, warranty, or undertaking, express or implied, is given as to the accuracy or completeness of the information or opinions contained in this document by Variis Partners LLP or its directors. No liability is accepted by such persons for the accuracy or completeness of any information or opinions. As such, no reliance may be placed for any purpose on the information and opinions contained in this document. The information contained in this document is strictly confidential and is not intended to be advice or an offer or solicitation to invest. The value of investments and any income generated may go down as well as up and is not guaranteed.

 
