In a landmark development that could shift the competitive dynamics of AI infrastructure globally, Amsterdam-based Nebius Group has signed a $17.4 billion, five-year GPU supply agreement with Microsoft. The deal, which can expand to $19.4 billion if Microsoft exercises optional capacity add-ons, makes Microsoft one of Nebius’ largest clients and signals to the hyperscaler and AI cloud markets that the Nvidia-powered challenger is scaling rapidly. Nebius’ shares surged more than 47% in after-hours trading on the announcement. The agreement gives Microsoft access to Nebius’ GPU clusters at its Vineland, New Jersey data center and ranks among the largest AI infrastructure contracts ever signed outside the Big Three cloud providers. Nebius CEO Arkady Volozh characterized the deal as transformative, both for its direct revenue implications and for the strategic acceleration it brings to the company’s broader AI cloud roadmap. With capacity scaling rapidly and year-end ARR guidance raised to as much as $1.1 billion, here are the four key drivers underpinning this transaction.
Strategic Scale Partnership With Microsoft
The $17.4 billion agreement with Microsoft marks a pivotal strategic milestone for Nebius, positioning it as a critical infrastructure partner to one of the world’s largest AI and cloud computing players. While Microsoft continues to diversify its AI infrastructure partnerships beyond incumbent suppliers like CoreWeave, this deal gives it dedicated access to Nebius’ Vineland GPU clusters built on Nvidia’s latest architectures, including Blackwell B200s and future Grace Blackwell systems. For Nebius, the scale of the partnership provides both operational leverage and strategic validation: it can ramp up capital-intensive data center deployments with the revenue visibility needed to underwrite its gigawatt-scale power build-out. Nebius has indicated it will serve as Microsoft’s secondary AI infrastructure supplier after CoreWeave, strengthening its position as a serious player in the expanding GPU-as-a-service ecosystem. The partnership also signals a diversification trend among hyperscalers seeking geographically dispersed, vendor-neutral AI capacity, especially as AI models grow in scale and latency sensitivity. The deal is back-end loaded, meaning a large portion of deployments and revenue will materialize in Q4 and 2026, aligning with Nebius’ infrastructure expansion across the U.S., U.K., Finland, and Israel. Microsoft may also increase its service consumption, potentially pushing the deal value to $19.4 billion. That optionality adds meaningful upside to Nebius’ midterm revenue outlook, which already targets “mid-single-digit billions” by 2027. Finally, the strategic nature of the partnership creates a halo effect for Nebius in discussions with other frontier AI labs and enterprises, customers that may be incentivized to follow Microsoft’s lead in seeking dedicated, vertically integrated GPU cloud providers.
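The headline deal terms lend themselves to some quick arithmetic. As a rough sketch (the contract is back-end loaded, so a straight-line annual average is illustrative only, not a revenue forecast):

```python
# Back-of-the-envelope arithmetic on the reported deal terms.
# Figures are from the announced contract; the straight-line average
# ignores the back-end-loaded deployment schedule.

BASE_VALUE_B = 17.4      # base contract value, $B
EXPANDED_VALUE_B = 19.4  # value if optional capacity add-ons are exercised, $B
TERM_YEARS = 5

avg_annual_base = BASE_VALUE_B / TERM_YEARS  # simple average, ~$3.48B/yr
option_upside_pct = (EXPANDED_VALUE_B - BASE_VALUE_B) / BASE_VALUE_B * 100  # ~11.5%

print(f"Average annual contract value (base): ${avg_annual_base:.2f}B")
print(f"Optional capacity upside: {option_upside_pct:.1f}%")
```

Even the un-weighted average of roughly $3.5 billion per year dwarfs Nebius’ current revenue run-rate, which is why the contract so dominates the company’s midterm outlook.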
Capital Markets Validation Via Hypergrowth & Profitability Milestones
Nebius’ financial performance leading into the Microsoft deal has shown accelerating momentum that underpins investor confidence in its scale thesis. In Q2, the company reported $105.1 million in revenue—a 625% year-over-year surge and 106% sequential growth. More notably, the core AI infrastructure business turned adjusted EBITDA positive, well ahead of internal projections. This milestone is particularly important for capital markets participants as it demonstrates that Nebius can scale while maintaining capital discipline in a segment known for heavy upfront CapEx. Its revised full-year ARR guidance of $900 million to $1.1 billion—up from $750 million to $1 billion—shows strong visibility driven by both closed contracts and late-year capacity onboarding. The company has already secured 220 megawatts of power for GPU deployment by year-end, with expansion pathways toward 1 gigawatt by 2026. Financing this build-out is supported by a diversified capital base, including $4 billion raised to date and significant equity stakes in non-core holdings like ClickHouse, Toloka, and Avride. These assets provide Nebius with billions in optional liquidity, should external fundraising become less favorable. Management has also highlighted flexibility in opportunistically raising additional capital based on market conditions. At the same time, early investments into software stack optimization and MLPerf benchmark achievements have enhanced the platform’s appeal to developers and enterprises. Collectively, these achievements offer capital markets validation that Nebius is evolving from a regional upstart into a systemically important player in the AI infrastructure value chain.
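The reported growth rates can be used to back out the implied prior-period revenue. The implied figures below are derived from the disclosed percentages, not separately reported:

```python
# Sketch: back out implied prior-period revenue from the reported growth rates
# (Q2 revenue of $105.1M, +625% year over year, +106% quarter over quarter).
# The implied prior-period figures are derived, not disclosed.

q2_revenue_m = 105.1
yoy_growth = 6.25   # +625% YoY
qoq_growth = 1.06   # +106% QoQ

implied_prior_year_q2 = q2_revenue_m / (1 + yoy_growth)  # ~$14.5M
implied_q1 = q2_revenue_m / (1 + qoq_growth)             # ~$51.0M

print(f"Implied Q2 revenue a year earlier: ${implied_prior_year_q2:.1f}M")
print(f"Implied Q1 revenue: ${implied_q1:.1f}M")
```

In other words, revenue roughly doubled sequentially from about $51 million to $105 million, which is the scale of compounding the raised ARR guidance depends on.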
Positioned In A Structural Growth Cycle For AI Compute
The AI infrastructure sector is currently experiencing a once-in-a-generation growth cycle driven by exponential increases in demand for high-performance GPUs, LLM training environments, and inference-at-scale deployments. Nebius is positioning itself at the center of this cycle with a strategy built on vertical integration, hyperscale infrastructure, and software enablement. The company is aggressively expanding its global footprint with new clusters in New Jersey, Finland, the U.K., and Israel, and has confirmed it is in late-stage negotiations for additional greenfield sites capable of delivering hundreds of megawatts by 2026. Unlike competitors that rely on build-to-suit models or colocation leases, Nebius prefers greenfield deployments, which offer cost advantages (roughly a 20% reduction in total cost of ownership) and better control over phasing, thermals, and hardware optimization. This approach aligns with demand from large AI-native startups and scale-ups like HeyGen, Photoroom, and Lightning.AI, which require flexible but dedicated environments. Moreover, Nebius has advanced its software offerings, doubling interconnect speeds and improving reliability via automated health checks. These enhancements have been validated through MLPerf benchmarks, showing linear scaling of Llama 3.1 training across cloud clusters, with performance comparable to bare-metal setups. The company also introduced an enterprise-grade inference-as-a-service platform targeting open-weight model deployments such as Llama, Qwen, and Flux. Finally, with governments worldwide investing in sovereign AI infrastructure, regions like the U.K. and Israel represent secular growth opportunities. Nebius is aligning its infrastructure timing with GPU availability, ensuring peak readiness for high-margin deployments in Q4 and beyond. Its positioning in this growth cycle reflects a well-capitalized, technically differentiated operator preparing for broader AI adoption at scale.
High Customer Concentration As A Risk & Catalyst
While the Microsoft deal offers unmatched revenue visibility, it also raises questions about customer concentration risk for Nebius. With deployments beginning in Q3 2025, Microsoft is set to become one of the company’s largest, if not the largest, revenue contributors over the next several years. While such relationships are common in GPU cloud markets, particularly given the volume purchasing power of hyperscalers, this dynamic introduces potential volatility should Microsoft alter its consumption, pricing, or vendor strategy. Notably, Microsoft is also CoreWeave’s largest customer and has previously denied reports of contract cancellations there, underscoring how competitive and performance-driven the supplier landscape is. Nebius, however, appears to be mitigating concentration risk through active diversification of its customer base. New enterprise logos such as Shopify, Cloudflare, and Prosus have joined its roster, with Shopify leveraging Nebius and Toloka for AI merchant journey optimization and Cloudflare powering inference at the edge via its Workers API. These strategic wins validate Nebius’ relevance beyond hyperscaler dependency. Furthermore, Nebius remains a go-to provider for AI-native startups seeking clean-slate infrastructure partners, and its ecosystem partnerships with Mistral, Lightning.AI, Anyscale, and Nvidia (as a DGX Cloud Lepton launch partner) indicate broad-based traction. Still, the sheer magnitude of the Microsoft contract will overshadow all other revenue lines in the near term. Nebius’ go-to-market strategy, now led by ex-Twilio CRO Marc Boroditsky, includes hiring regional general managers and ramping sales operations across strategic verticals. If successful, this execution will dilute concentration risk over time. In the interim, however, investors will need to monitor Nebius’ exposure to Microsoft’s GPU consumption cycles and their effect on midterm financial forecasts.
Key Takeaways
Nebius’ $17.4 billion agreement with Microsoft is a defining moment for the AI infrastructure market and a pivotal milestone in the company’s lifecycle. The contract validates its engineering capabilities, geographic positioning, and go-to-market readiness at hyperscale. However, the partnership also introduces customer concentration risk and commits Nebius to delivering back-end-loaded capacity during a period of macro uncertainty and supply chain constraints. Capital expenditure remains elevated, with $2 billion planned for 2025, and adjusted EBITDA remains negative at the group level for the year. From a valuation standpoint, Nebius’ trailing multiples paint a complex picture. As of September 8, 2025, its LTM EV/Revenue stands at 59.38x, LTM P/S at 61.34x, and LTM EV/EBITDA at –38.11x, reflecting a high-growth but still loss-making profile. Forward multiples imply gradual normalization, with EV/Revenue at 13.97x and EV/EBITDA at 88.69x. While the growth trajectory justifies premium metrics in the near term, the valuation assumes flawless execution, continued demand elasticity, and minimal disruption from competitive or regulatory forces. Investors will need to weigh these variables carefully as Nebius transitions from hypergrowth to scale.
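As a back-of-the-envelope check, the compression from the trailing to the forward EV/Revenue multiple implies how much revenue growth the market is pricing in. The sketch below holds enterprise value constant between the two multiples, which is our simplifying assumption, not a claim from the data provider:

```python
# Sketch: implied forward revenue growth from multiple compression.
# Multiples are as of September 8, 2025; holding EV constant is an assumption.

ltm_ev_revenue = 59.38  # trailing EV/Revenue
fwd_ev_revenue = 13.97  # forward EV/Revenue

# EV / ltm_rev = 59.38 and EV / fwd_rev = 13.97 together imply
# fwd_rev / ltm_rev = 59.38 / 13.97
implied_revenue_multiple = ltm_ev_revenue / fwd_ev_revenue  # ~4.25x
implied_growth_pct = (implied_revenue_multiple - 1) * 100   # ~325%

print(f"Implied forward revenue vs. LTM: {implied_revenue_multiple:.2f}x "
      f"({implied_growth_pct:.0f}% growth)")
```

A forward revenue base roughly 4.25 times the trailing one is the kind of growth the premium valuation is underwriting, which is why execution risk features so prominently in the outlook above.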