The AGI-le Investor
11 February 2025·3 min read

DeepSeek Changes the Equation

DeepSeek · AI Models · Digital Infrastructure · Investment Strategy
LN Sadani

Chief Executive Officer, Lensbridge Capital

On 27 January 2025, the release of DeepSeek-R1 — a Chinese open-source language model that reportedly matched the performance of OpenAI's o1 at a fraction of the training cost — triggered the largest single-day market-capitalisation loss in US stock market history, with Nvidia shedding approximately US$600 billion in value. The market's reaction reflected a straightforward concern: if frontier AI can be achieved with dramatically less compute, the entire infrastructure buildout thesis is called into question. That concern, while understandable, misreads the economics of AI adoption.

The history of technology is replete with examples of efficiency improvements that expanded rather than contracted markets — a dynamic economists know as the Jevons paradox. When the cost of computing fell by orders of magnitude in the 1990s and 2000s, demand for computing did not fall — it exploded, because lower costs made new applications economically viable. The same dynamic applies to AI. If the cost of training and running a capable AI model falls by 90%, the number of organisations that can afford to deploy AI increases dramatically. The aggregate demand for compute does not fall; it rises, because the addressable market has expanded.
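The arithmetic behind this argument can be made concrete. The figures below are purely illustrative assumptions (per-deployment cost, number of adopting organisations), not estimates from the article, but they show how a 90% cost reduction can still raise aggregate compute spend when it expands the pool of buyers faster than it lowers the price:

```python
# Toy model of the efficiency-expands-demand argument.
# All numbers are hypothetical and for illustration only.

def total_spend(cost_per_deployment: float, num_adopters: int) -> float:
    """Aggregate compute spend = per-deployment cost x number of adopters."""
    return cost_per_deployment * num_adopters

# Before: assume a frontier-scale AI deployment costs $10M per year and
# only 1,000 organisations worldwide can justify that budget.
before = total_spend(10_000_000, 1_000)   # $10B aggregate

# After a 90% cost reduction: assume 50,000 organisations now clear the
# affordability threshold — a 50x expansion of the addressable market.
after = total_spend(1_000_000, 50_000)    # $50B aggregate

print(f"Spend before: ${before / 1e9:.0f}B, after: ${after / 1e9:.0f}B")
# → Spend before: $10B, after: $50B
# Whenever adoption grows faster than cost falls, total spend rises.
```

The conclusion hinges on the assumed elasticity: aggregate demand rises only if the adopter base grows by more than the cost falls, which is precisely the historical pattern the paragraph above describes for computing.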

The more nuanced question is whether DeepSeek's efficiency gains change the nature of infrastructure demand. The answer is yes, in ways that matter for investors. Training-optimised infrastructure — the massive GPU clusters that have driven the first wave of data centre investment — may see slower growth than previously expected. Inference-optimised infrastructure — the distributed, lower-latency facilities needed to serve billions of AI queries — will likely see accelerated growth. The shift from training to inference as the dominant workload has significant implications for data centre design, location, and power requirements.

At Lensbridge, our infrastructure thesis has always been grounded in the demand for AI services at scale, not in any particular model architecture or training paradigm. DeepSeek reinforces rather than undermines that thesis — and it sharpens our focus on the inference infrastructure layer as the most durable part of the AI infrastructure stack.