Nebius

Underwritten public offering of 10,810,811 Class A ordinary shares at an offering price of $92.50 per Class A share. The underwriters of such offering have a 30-day option (starting from September 10, 2025) to purchase up to an additional 1,621,621 Class A shares at the offering price

(10,810,811 + 1,621,621) × $92.50 = $1,149,999,960, i.e. roughly $1.15 billion in total gross proceeds if the option is exercised in full.

Convertible senior notes, in two series: 1.00% convertible notes due 2030 (the “2030 Notes”) and 2.75% convertible notes due 2032 (the “2032 Notes”, and together with the 2030 Notes, the “Notes”)

Accordingly, the aggregate principal amount of each series of Notes is approximately $1.58 billion, and the total aggregate original principal amount of the Notes is approximately $3.16 billion.

The initial conversion rate for each series of Notes is 7.2072 Class A shares per $1,000 original principal amount of Notes, which represents an initial conversion price of approximately $138.75 per Class A share. The initial conversion price represents a premium of 50% over the offering price of $92.50 per Class A share in the concurrent offering of our Class A shares referenced above.
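The conversion arithmetic above can be checked directly. All figures come from the filing text; the script below is just a sanity check:

```python
# Sanity-check the Notes' conversion math (figures from the filing).
principal = 1_000          # conversion rate is quoted per $1,000 of principal
conversion_rate = 7.2072   # Class A shares per $1,000 of Notes
offer_price = 92.50        # concurrent Class A share offering price

conversion_price = principal / conversion_rate
premium = conversion_price / offer_price - 1

print(f"conversion price ≈ ${conversion_price:.2f}")   # ≈ $138.75
print(f"premium over offering price ≈ {premium:.0%}")  # ≈ 50%
```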

Under this multi-year agreement, Nebius will deliver dedicated capacity to Microsoft from its new data center in Vineland, New Jersey starting later this year.

“Nebius’s core AI cloud business, serving customers from AI startups to enterprises, is performing exceptionally well. We have also said that, in addition to our core business, we expect to secure significant long-term committed contracts with leading AI labs and big tech companies. I’m happy to announce the first of these contracts, and I believe there are more to come. The economics of the deal are attractive in their own right, but, significantly, the deal will also help us to accelerate the growth of our AI cloud business even further in 2026 and beyond.”

Nebius expects to finance the capital expenditure associated with the contract through a combination of cash flow coming from the deal and the issuance of debt secured against the contract in the near term, at terms enhanced by the credit quality of the counterparty. The Company is also evaluating a number of additional financing options to enable significantly faster growth than originally planned and will update the market on its financing strategy in due course.

Nebius is a technology company building full-stack infrastructure to service the high-growth global AI industry. Headquartered in Amsterdam and listed on Nasdaq, Nebius has a global footprint with R&D hubs across Europe, North America and Israel.

With a full stack of proprietary software and hardware designed and tuned in-house, Nebius gives AI builders the compute, storage, managed services and tools they need to build, tune and run their models and applications.

Sept 8 (Reuters) - Nebius Group (NBIS.O) said on Monday it will provide Microsoft (MSFT.O) with GPU infrastructure capacity in a deal worth $17.4 billion over a five-year term, sending its shares soaring over 47% after the bell.
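A quick sketch of the deal's scale, assuming the value is spread evenly over the five-year term (the actual revenue ramp is not disclosed here):

```python
# Back-of-the-envelope scale of the Microsoft deal.
# Assumption: straight-line over the term; real recognition may differ.
total_value = 17.4e9   # total deal value, USD
term_years = 5

per_year = total_value / term_years
print(f"≈ ${per_year / 1e9:.2f}B per year")  # ≈ $3.48B per year
```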

Nebius is notable for providing the lowest pricing in the GPU cloud market, enabled by their financial position. With billions of dollars on their balance sheet and no existing debt, they benefit from abundant financial resources and significant maneuvering room. Their financial strength directly translates into more risk-taking and much stronger investment in business development. Examples include innovative offerings like bridging H100 contracts into B200 deployments, as well as ubiquitous billboards in Santa Clara designed to capture mindshare. The result is unparalleled cost savings for their customers, as Nebius offers market-leading terms and highly competitive rates.

One of Nebius's key strategies for maintaining such low prices is its commitment to custom Original Design Manufacturer (ODM) chassis. By designing hardware internally and partnering directly with ODMs, Nebius bypasses traditional OEM providers like Dell and Supermicro, which typically apply gross margins of around 10-15%. The ODM route cuts the integrator's gross margin to about 2%, dramatically lowering both initial hardware investments and ongoing operating expenses, such as power consumption. This cost efficiency places Nebius uniquely among non-hyperscaler providers, as it adopts an optimization typically seen only within hyperscale cloud providers.
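A rough illustration of the margin effect. The margin figures come from the text; the $250k bill-of-materials cost per GPU server is a hypothetical number chosen only to show the mechanics. Gross margin is quoted as a share of selling price, not a markup on cost, hence the division by (1 − margin):

```python
# Illustrative only: what the buyer pays at different integrator margins.
bom_cost = 250_000   # ASSUMED bill-of-materials cost per GPU server
oem_margin = 0.125   # ~10-15% typical OEM gross margin (midpoint, per text)
odm_margin = 0.02    # ~2% under the ODM model described above

# Gross margin is taken on selling price: price = cost / (1 - margin).
oem_price = bom_cost / (1 - oem_margin)
odm_price = bom_cost / (1 - odm_margin)
savings = oem_price - odm_price

print(f"OEM price ≈ ${oem_price:,.0f}, ODM price ≈ ${odm_price:,.0f}")
print(f"savings per server ≈ ${savings:,.0f} ({savings / oem_price:.1%})")
```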

Thanks to its roots as a Russian cloud provider, Nebius boasts an exceptionally talented team of cracked engineers. Nebius still lags behind competitors on user experience, though. Despite offering on-demand NVIDIA H100 GPUs at roughly $1.50 per hour (at least for the first thousand hours per month) — half the cost charged by competitors like Lambda Labs — Nebius struggles with customer adoption. Many users still prefer Lambda Labs primarily because Nebius's UI and UX remain overly complex and unintuitive, creating friction that deters less technically inclined customers. Nebius is committed to fixing its UI/UX issues.

Finally, Nebius currently offers a fully managed Kubernetes solution but does not yet provide fully automated managed Slurm clusters, a significant gap in its product portfolio. It is actively developing its “Soperator” Slurm solution, which includes foundational passive and active health checks. However, these checks still fall short of the industry-leading standards set by providers like CoreWeave. To match competitors' reliability and observability, Nebius will need to invest more heavily in comprehensive, weekly scheduled active health checks and implement advanced out-of-the-box Grafana dashboards. Strengthening these operational aspects would further enhance an already compelling value proposition by raising reliability to the level of CoreWeave and adding automated node lifecycles.

2025Q2

Demand is continuing to grow rapidly as frontier AI labs build more LLMs, thousands of new AI-native startups develop ever greater numbers of applications, and large enterprises incorporate AI into critical workstreams. We expect the fundamental trends in our space to continue to drive growth for years to come.

We are in the midst of a once-in-a-generation opportunity.

Demand for AI infrastructure — compute, software and services — is only going to get stronger.

In Q2, our capital expenditures were $510.6 million, primarily driven by purchases of GPUs and GPU-related hardware, and our data center expansion activities.

Additionally, our ISEG2 system ranked #13 in the June 2025 TOP500 list of the world's fastest supercomputers, and #1 in Europe among commercially available systems.

We pushed our infrastructure reliability to new heights, significantly increasing Mean Time Between Failures (MTBF) to 168,000 GPU hours per week for a 3,000-GPU cluster.
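One way to read that figure, assuming the MTBF is expressed in cumulative GPU-hours accrued across the whole cluster over a week:

```python
# One reading of the reported reliability figure.
# Assumption: MTBF is measured in cumulative GPU-hours across the cluster.
mtbf_gpu_hours = 168_000   # reported MTBF
gpus = 3_000               # cluster size
hours_per_week = 168       # wall-clock hours in a week

gpu_hours_per_week = gpus * hours_per_week   # 504,000 GPU-hours accrued weekly
expected_failures = gpu_hours_per_week / mtbf_gpu_hours

print(f"≈ {expected_failures:.1f} expected failures per week")  # ≈ 3.0
```

Under this reading, the cluster accrues 504,000 GPU-hours per week, so an MTBF of 168,000 GPU-hours implies roughly three GPU failures per week across 3,000 GPUs.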

Over the next several years, we believe inference has the potential to become a bigger opportunity than training, as more companies bring AI solutions at scale into their businesses.

We deepened our collaboration with NVIDIA by expanding our integrations with NVIDIA’s accelerated computing platform, including support for the NVIDIA AI Enterprise software stack.

2025Q1

2024Q4

2024Q3

2024Q2

2024Q1