MSFT

Investor Relations

OpenAI

Today we’re announcing new funding—$40 billion at a $300 billion post-money valuation, which enables us to push the frontiers of AI research even further, scale our compute infrastructure, and deliver increasingly powerful tools for the 500 million people who use ChatGPT every week.

The Stargate Project is a new company that intends to invest $500 billion over the next four years building new AI infrastructure for OpenAI in the United States. We will begin deploying $100 billion immediately.
The initial equity funders in Stargate are SoftBank, OpenAI, Oracle, and MGX. SoftBank and OpenAI are the lead partners for Stargate, with SoftBank having financial responsibility and OpenAI having operational responsibility. Masayoshi Son will be the chairman.
Arm, Microsoft, NVIDIA, Oracle, and OpenAI are the key initial technology partners.
As part of Stargate, Oracle, NVIDIA, and OpenAI will closely collaborate to build and operate this computing system.
OpenAI will continue to increase its consumption of Azure as it continues its work with Microsoft, using this additional compute to train leading models and deliver great products and services.

Articles

I also want to acknowledge the uncertainty and seeming incongruence of the times we’re in. By every objective measure, Microsoft is thriving—our market performance, strategic positioning, and growth all point up and to the right. We’re investing more in CapEx than ever before. Our overall headcount is relatively unchanged, and some of the talent and expertise in our industry and at Microsoft is being recognized and rewarded at levels never seen before. And yet, at the same time, we’ve undergone layoffs.

This is the enigma of success in an industry that has no franchise value. Progress isn’t linear. It’s dynamic, sometimes dissonant, and always demanding. But it’s also a new opportunity for us to shape, lead through, and have greater impact than ever before.

The success we want to achieve will be defined by our ability to go through this difficult process of “unlearning” and “learning.” It requires us to meet changing customer needs, by continuing to maintain and scale our current business, while also creating new categories with new business models and a new production function. This is inherently hard, and few companies can do both.

We must reimagine our mission for a new era. What does empowerment look like in the era of AI? It’s not just about building tools for specific roles or tasks. It’s about building tools that empower everyone to create their own tools. That’s the shift we are driving—from a software factory to an intelligence engine empowering every person and organization to build whatever they need to achieve.

To deliver on our mission, we need to stay focused on our three business priorities: security, quality, and AI transformation.

Security and quality are non-negotiable. Our infrastructure and services are mission critical for the world, and without them we don’t have permission to move forward.

We will reimagine every layer of the tech stack for AI, from infrastructure to the app platform to apps and agents. The key is to get the platform primitives right for these new workloads and for the next order of magnitude of scale. Our differentiation will come from how we bring these layers together to deliver end-to-end experiences and products, with the core ethos of a platform company that fosters ecosystem opportunity broadly. Getting both the product and platform right for the AI wave is our North Star!

Growth mindset has served us well over the last decade—the everyday practice of being a learn-it-all, not a know-it-all.

This platform shift is reshaping not only the products we build and the business models we operate under, but also how we are structured and how we work together every day. It might feel messy at times, but transformation always is. Teams are reorganizing. Scopes are expanding. New opportunities are everywhere. It reminds me of the early ’90s, when PCs and productivity software became standard in every home and every desk! That’s exactly where we are now with AI.

What we’ve learned over the past five decades is that success is not about longevity. It’s about relevance. Our future won’t be defined by what we’ve built before, but by what we empower others to build now.

Dwarkesh Patel
Where is the value going to be created in AI?

Satya Nadella
That's a great one. So I think there are two places where I can say with some confidence. One is the hyperscalers that do well, because the fundamental thing is if you sort of go back to even how Sam and others describe it, if intelligence is log of compute, whoever can do lots of compute is a big winner.
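The "intelligence is log of compute" claim can be sketched numerically. This is an illustrative assumption, not a precise law: under it, every 10x jump in compute buys the same fixed increment of capability, which is why only players who can keep multiplying compute stay in the game.

```python
import math

def intelligence(compute, k=1.0):
    """Illustrative 'intelligence = k * log(compute)' scaling assumption."""
    return k * math.log10(compute)

# Each 10x increase in compute adds the same fixed increment, whether you
# are small or already enormous, so sustaining gains requires multiplying
# compute, not adding to it.
gain_small = intelligence(1e3) - intelligence(1e2)   # 10x jump at small scale
gain_large = intelligence(1e7) - intelligence(1e6)   # 10x jump at large scale
print(gain_small, gain_large)  # both 1.0: equal increments per order of magnitude
```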

The other interesting thing is, if you look underneath any AI workload, take ChatGPT, it's not just the GPU side everybody's excited about, though that's great. In fact, I think of my fleet as a ratio of AI accelerators to storage to compute. And at scale, you've got to grow it.
...
So our hyperscale business, Azure business, and other hyperscalers, I think that’s a big thing.
Then after that, it becomes a little fuzzy. You could say, hey, there is a winner-take-all model. I just don't see it. This, by the way, is the other thing I've learned: being very good at understanding what are winner-take-all markets and what are not winner-take-all markets is, in some sense, everything.
...
Consumer markets sometimes can be winner-take-all, but anything where the buyer is a corporation, an enterprise, an IT department, they will want multiple suppliers. And so you've got to be one of the multiple suppliers.
...
Then above that, I think it's going to be the same old stuff, which is in consumer, in some categories, there may be some winner-take-all network effect. After all, ChatGPT is a great example.

Dwarkesh Patel
You recently reported that your yearly revenue from AI is $13 billion. But if you look at your year-on-year growth on that, in like four years, it'll be 10x that. You'll have $130 billion in revenue from AI, if the trend continues. If it does, what do you anticipate doing with all that intelligence, this industrial scale use?

Is it going to be through Office? Is it going to be you deploying it for others to host? You've got to have the AGIs to have $130 billion in revenue? What does it look like?
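The arithmetic behind the "10x in four years" framing in the question can be checked directly: growing $13 billion to $130 billion over four years implies a compound annual growth rate of roughly 78%.

```python
# Implied annual growth rate behind "10x in four years":
# 13 * (1 + r)**4 = 130  =>  1 + r = 10 ** (1/4)
start, target, years = 13.0, 130.0, 4
rate = (target / start) ** (1 / years) - 1
print(f"implied annual growth: {rate:.1%}")  # ~77.8% per year, compounded
```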

Satya Nadella
The way I come at it, Dwarkesh, it's a great question because at some level, if you're going to have this explosion, abundance, whatever, commodity of intelligence available, the first thing we have to observe is GDP growth.

Before I get to what Microsoft's revenue will look like, there's only one governor in all of this. This is where we get a little bit ahead of ourselves with all this AGI hype. Remember the developed world, which is what? 2% growth and if you adjust for inflation it’s zero?

So in 2025, as we sit here, I'm not an economist, but I look at it and say we have a real growth challenge. So, the first thing that we all have to do is, when we say this is like the Industrial Revolution, let's have that Industrial Revolution type of growth.

That means to me: the world growing at 10%, 7%, the developed world, inflation-adjusted, growing at 5%. That's the real marker. It can't just be supply-side.

In fact that’s the thing, a lot of people are writing about it, and I'm glad they are, which is the big winners here are not going to be tech companies. The winners are going to be the broader industry that uses this commodity that, by the way, is abundant. Suddenly productivity goes up and the economy is growing at a faster rate. When that happens, we'll be fine as an industry.

But that's to me the moment. Us self-claiming some AGI milestone, that's just nonsensical benchmark hacking to me. The real benchmark is: the world growing at 10%.
...
That is correct. But by the way, the classic supply side is, "Hey, let me build it and they’ll come." That's an argument, and after all we've done that, we've taken enough risk to go do it.

But at some point, the supply and demand have to map. That's why I'm tracking both sides of it. You can go off the rails completely when you are hyping yourself with the supply-side, versus really understanding how to translate that into real value to customers.

That's why I look at my inference revenue. That's one of the reasons why even the disclosure on the inference revenue... It's interesting that not many people are talking about their real revenue, but to me, that is important as a governor for how you think about it.

You're not going to say they have to symmetrically meet at any given point in time, but you need to have existence proof that you are able to parlay yesterday's, let’s call it capital, into today's demand, so that then you can again invest, maybe exponentially even, knowing that you're not going to be completely rate mismatched.

Quarterly Earnings

FY2025 Q4

We are driving and riding a set of compounding S curves across silicon, systems, and models to continuously improve the efficiency and performance for our customers.
Take, for example, the GPT-4o family of models, which has the highest volume of inference tokens. Through software optimization alone, we are delivering 90% more tokens for the same GPU compared to a year ago.
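The compounding framing above can be made concrete: a 90% software gain means 1.9x more tokens per GPU, and gains at each layer of the stack multiply rather than add. The per-layer split below is illustrative; only the 1.9x software figure comes from the remarks above.

```python
# "90% more tokens for the same GPU" is a 1.9x throughput multiple from
# software alone. Layer gains compound multiplicatively; the silicon figure
# here is a hypothetical placeholder, not a disclosed number.
software_gain = 1.9                  # stated: +90% tokens per GPU in a year
illustrative_silicon_gain = 1.5      # hypothetical hardware-generation gain
combined = software_gain * illustrative_silicon_gain
print(f"combined tokens-per-GPU multiple: {combined:.2f}x")  # 2.85x
```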

Nestlé, for example, migrated more than 200 SAP instances, 10,000-plus servers, and 1.2 petabytes of data to Azure with near-zero business disruption. That makes it one of the largest and most successful migrations in business history.

OpenAI, for example, uses Cosmos DB in the hot path of every ChatGPT interaction – storing chat history, user profiles, and conversational state. And Azure PostgreSQL stores metadata critical to the operation of ChatGPT as well as OpenAI’s developer APIs.
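The hot-path pattern described above, keeping per-conversation state in a document store so every interaction is a cheap point read/write, can be sketched in plain Python. The document shape and field names below are hypothetical illustrations, not OpenAI's actual schema or the Cosmos DB SDK.

```python
import json
import time

# Hypothetical shape of a per-conversation document such as a chat service
# might keep in a document store like Cosmos DB. The conversation id doubles
# as the lookup key, so every hot-path read or write is a single point
# operation. Field names are illustrative only.
def make_conversation_doc(conversation_id, user_id):
    return {
        "id": conversation_id,   # point-read key
        "userId": user_id,
        "messages": [],          # chat history
        "updatedAt": time.time(),
    }

def append_message(doc, role, text):
    """Append one turn and refresh the timestamp, as a hot-path write would."""
    doc["messages"].append({"role": role, "text": text})
    doc["updatedAt"] = time.time()
    return doc

doc = make_conversation_doc("conv-1", "user-42")
append_message(doc, "user", "hello")
append_message(doc, "assistant", "hi there")
print(json.dumps(doc["messages"]))
```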

Nasdaq is using Foundry to build agents that help customers prepare for board meetings, cutting prep time by up to 25%.

When we look narrowly at just the number of tokens served by Foundry APIs, we processed over 500 trillion this year, up over 7x.

Our family of Copilot apps has surpassed 100 million monthly active users across commercial and consumer. And when you take a broader look at the engagement of AI features across our products, we have over 800 million monthly active users.

Hundreds of partners like Adobe, SAP, ServiceNow, and Workday have built their own third-party agents that integrate with Copilot and Teams.

The surge in vibe coding projects and AI coding agents, whether it is Claude Code, Codex, Cursor, or GitHub Copilot, is generating more pull requests and more repos on GitHub.

In fact, on Monday, we introduced Copilot Mode in Edge. It’s especially exciting to see the innovation coming back to browsers!

LinkedIn is home to 1.2 billion members, with four consecutive years of double-digit member growth.

When it comes to gaming, we have 500 million monthly active users across platforms and devices.

Search and news advertising revenue ex-TAC increased 21% and 20% in constant currency driven by continued growth in both volume and revenue per search, as well as roughly 8 points of favorable impact from third-party partnerships, including the benefit of a low prior year comparable.

Capital expenditures were $24.2 billion, including $6.5 billion of finance leases, where we recognize the full value at the time of lease commencement. Cash paid for PP&E was $17.1 billion. The difference is primarily due to finance leases. More than half of our spend was on long-lived assets that will support monetization over the next 15 years and beyond. The remaining spend was primarily for servers, both CPUs and GPUs, and driven by strong demand signals.
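The reconciliation between the two capex figures above is simple arithmetic: total capex recognizes finance leases in full at commencement, while the cash flow statement shows only cash actually paid for PP&E in the period.

```python
# Reconciling the stated figures ($B). Finance leases are recognized in full
# at lease commencement but involve little cash in the period, which is why
# total capex exceeds cash paid for PP&E.
total_capex = 24.2      # including finance leases
finance_leases = 6.5    # recognized fully at commencement
cash_paid_ppe = 17.1    # per the cash flow statement
non_cash_gap = total_capex - cash_paid_ppe
print(f"non-cash gap: ${non_cash_gap:.1f}B, of which ${finance_leases}B is finance leases")
```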

Building on the strong momentum we saw this past year, we expect to deliver another year of double-digit revenue and operating income growth in FY26. We will continue to invest against the expansive opportunity ahead across both capital expenditures and operating expenses given our leadership position in commercial cloud, strong demand signals for our cloud and AI offerings, and significant contracted backlog. Capital expenditure growth, as we shared last quarter, will moderate compared to FY25 with a greater mix of short-lived assets. Due to the timing of delivery of additional capacity in H1, including large finance lease sites, we expect growth rates in H1 will be higher than in H2. We remain focused on delivering revenue growth and increasing our operational agility, and as a result, expect operating margins to be relatively unchanged year-over-year.

We expect Q1 capital expenditures to be over $30 billion driven by the continued strong demand signals we see.

In Azure, we expect Q1 revenue growth of approximately 37% in constant currency driven by strong demand for our portfolio of services on a significant base. Even as we continue bringing more datacenter capacity online, we currently expect to remain capacity constrained through the first half of our fiscal year.

Growth will continue to be driven by volume and revenue per search across Edge and Bing. Overall Search and news advertising revenue growth should be in the low double digits.

Back in the day, when I was getting started on Azure, I used to look over the lake and see Netflix and Amazon. And I’d say, I wish Netflix ran on Azure. And in some sense, that’s kind of what we now have, which is the largest AI workloads run on Azure.

You optimize the entire platform faster. Everything from what we're doing with Cosmos DB for a chat interface like ChatGPT or Copilot is, guess what, going to be most relevant for any AI application going forward. The entire data stack that we have now built is going to be optimized for what people describe as one of the hardest challenges of any AI application, called context engineering: how do you collect your data and then make sure that the context around the prompts remains stable over a long period, so that you get the intelligence to actually deliver the results you want.
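A minimal sketch of the context-engineering step described above: pull stored chat history and pack as much recent context as fits into a fixed budget, newest first, so the prompt stays stable in size across a long conversation. Counting tokens by whitespace-split words is a stand-in for a real tokenizer, and the function names are illustrative.

```python
# Pack the most recent messages that fit within a fixed token budget,
# then return them in chronological order for the prompt. Word count
# approximates token count here.
def pack_context(history, budget_tokens):
    picked = []
    used = 0
    for message in reversed(history):      # walk newest first
        cost = len(message.split())
        if used + cost > budget_tokens:
            break                          # budget exhausted; drop older turns
        picked.append(message)
        used += cost
    return list(reversed(picked))          # restore chronological order

history = ["first question here", "a long detailed answer with many words",
           "short follow up", "final reply"]
print(pack_context(history, budget_tokens=7))  # keeps only the newest turns
```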

I think, to your question, as long as we have head apps shaping the platform, and then after that we have the broad diffusion happen, which in some sense is both of what we're seeing, I feel very good about being in good standing going forward.

I mean that, in some sense, was a bit of the state of the art, maybe even a year ago, whereas now you have essentially, these very stateful app patterns that are emerging that require quite a bit of rethinking of even the app stack. I mean, take even the storage tier stuff, the degree of sophistication you have. And hey, how much of an index do you really want to build by preprocessing, so that your prompt engineering, or context engineering, as I call it, can be better and higher quality. I think all of that is emerging.

Even if you're not using GitHub Copilot to create the code check-in or the pull request, interestingly enough, we're seeing a massive increase in use of the GitHub Copilot Code Review Agent, even if you used maybe Claude Code or whatever else to write the code.

In terms of feeling good about the ROI and the growth rates and the correlation, I feel very good that the spend that we’re making is correlated to basically contracted on the books business that we need to deliver, and we need the teams to execute at their very best to get the capacity in place as quickly and effectively as they can.

As we've talked about, the growth rate will decline year over year, but at its core, our investments, particularly in short-lived assets like servers, GPUs, CPUs, network and storage, are just really correlated to the backlog we see and the curve of demand. And I talked about it, my gosh, in January, and said I thought we'd be in better supply-demand shape by June. And now, I'm saying I hope I'm in better shape by December. And that's not because we slowed capex. Even with accelerating the spend and trying to pull leases in and get CPUs and GPUs in the system as quickly as we can, we are still seeing demand improve.

I don’t want people to get overly focused on a pivot point, because when you’re in these expansive moments, picking a data point usually means you’re going to pick to be too conservative in terms of market share gain and in terms of winning. And so, I tend to put my energy more there.

I think I said this previously as well, which is the difference between a hoster and a hyperscaler is software, and the same is going to be true here. That GPT-4o example I gave is all software, the optimization even in the last year. We know how to use the software skills to take any piece of hardware and make it multiple x better. And so, that's where the yield will come.

I think sometimes people get a lot of energy around cost control as a driver of margin. The other driver is to focus on making sure you deliver great product that’s competitive and innovative and can take share, because that drives revenue. And revenue itself and revenue growth, as you all know better, even perhaps than I do, is a durable way to see margin improvement. It builds on itself.

Applying all of our skillset here to deliver efficiencies, whether that’s at whatever layer of the stack that exists. The S-curves compound, and we are doing that work. And we’re focused on it at the same time we’re doing the build out. You’ll see improvements there, even as we continue to invest.
