Nvidia

Nvidia Investor Relations

Nvidia Financial Statements Summary Sheet

GPU Gems

"GPU Gems" is a series of books published by Addison-Wesley that showcase cutting-edge techniques and insights into graphics processing unit (GPU) programming.

  • GPU Gems 1 (2004):
    The first volume established the series and covered foundational concepts in GPU programming.
  • GPU Gems 2 (2005):
    This volume delved deeper into advanced rendering techniques and shader programming.
  • GPU Gems 3 (2007):
    Focused on emerging trends in GPU programming, including parallel computing and real-time effects.

Nvidia GPU Architectures

  1. GeForce 256 (1999)

  2. GeForce 3 (2001)

  3. GeForce 6 (2004)

  4. GeForce 8800 (2006)

  5. Fermi (2010)

  6. Kepler (2012)

  7. Maxwell (2014)

  8. Pascal (2016)

  9. Volta (2017)

  10. Turing (2018)

  11. Ampere (2020)

  12. Ada Lovelace (2022)

  13. Hopper (2022)

  14. Blackwell (2024)

Products

Investment Articles

Jensen Huang

NVIDIA has the largest install base of video game architecture in the world. GeForce has some 300 million gamers in the world, still growing incredibly well, super vibrant.

We created a library called cuDNN. cuDNN is the world's first neural network computing library. And so we have cuDNN, we have cuOpt for combinatorial optimization, we have cuQuantum for quantum simulation and emulation, all kinds of different libraries, cuDF for data frame processing ...
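
The libraries named above share one pattern: a drop-in, domain-specific API whose computation runs on the GPU. As a hedged illustration only (not from the source), here is a minimal cuDF sketch in Python; the column names and values are invented, and it assumes the RAPIDS cuDF package and a compatible NVIDIA GPU are available.

```python
# Minimal, illustrative cuDF sketch: a pandas-like data-frame API whose
# operations execute on the GPU. Column names and values are invented.
import cudf

df = cudf.DataFrame({
    "domain": ["robotics", "climate", "robotics", "drug-discovery"],
    "runtime_ms": [12.5, 30.1, 11.9, 45.0],
})

# Group-by aggregation runs on the device rather than the CPU.
print(df.groupby("domain")["runtime_ms"].mean())
```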

And so we just did it one domain after another domain after another domain. We have a rich library for self-driving cars. We have a fantastic library for robotics, an incredible library for virtual screening, whether it's physics-based virtual screening or neural-network-based virtual screening, an incredible library for climate tech. And so one domain after another domain. And so we have to go meet friends and create the market.

There are two things that are happening at the same time, and it gets conflated, and it's helpful to tease apart. So the first thing, let's start with a condition where there's no AI at all. Well, in a world where there's no AI at all, general-purpose computing has still run out of steam.
...
And so the first thing that's going to happen is the world's trillion dollars of general purpose data centers are going to get modernized into accelerated computing. That's going to happen no matter what. And the reason for that is, as I described, Moore's Law is over.
...
And now, what's amazing is, so the first trillion dollars of data centers is going to get accelerated, and that invented this new type of software called Generative AI. This Generative AI is not just a tool, it is a skill. And so this is the interesting thing. This is why a new industry has been created.
...
For the very first time, we're going to create skills that augment people. And so that's why people think that AI is going to expand beyond the trillion dollars of data centers and IT, and into the world of skills. So what's a skill? A digital chauffeur is a skill (autonomous driving), a digital assembly-line worker (a robot), digital customer service (a chatbot), a digital employee for planning NVIDIA's supply chain ...

There's not one software engineer in our company today who doesn't use code generators, whether the ones that we built ourselves for CUDA or USD, which is another language that we use in the company, or for Verilog, or for C and C++ code generation. And so I think the days of every line of code being written by software engineers, those are completely over. And the idea that every one of our software engineers would essentially have companion digital engineers working with them 24/7, that's the future. And so the way I look at NVIDIA, we have 32,000 employees. Those 32,000 employees are surrounded by hopefully 100x more digital engineers.

And so the amazing thing is, when you want to build this AI computer, people say words like super-cluster, infrastructure, supercomputer, for good reason, because it's not a chip, it's not a computer per se. And so we're building entire data centers. If you ever look at one of these superclusters, imagine the software that has to go into it to run it. There is no Microsoft Windows for it. Those days are over. So all the software that's inside that computer is completely bespoke. Somebody has to go write that. So it makes sense that the person who designs the chip and the company that designs that supercomputer, that supercluster, and all the software that goes into it are the same company, because it will be more optimized, more performant, more energy efficient, and more cost effective. And so that's the first thing.

The second thing is, AI is about algorithms. And we're really, really good at understanding what is the algorithm, what's the implication to the computing stack underneath and how do I distribute this computation across millions of processors, run it for days, with the computer being as resilient as possible, achieving great energy efficiency, getting the job done as fast as possible, so on and so forth. And so we're really, really good at that.
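
The quote above is about mapping an algorithm onto many processors at once. Purely as a hedged, toy-scale illustration (not NVIDIA's actual approach), the Python sketch below splits an elementwise computation across two GPUs with CuPy; the workload, array size, and two-device split are all assumptions.

```python
# Toy illustration of distributing one computation across multiple GPUs
# with CuPy. The workload, sizes, and two-device split are assumptions.
import cupy as cp
import numpy as np

x = np.random.rand(1_000_000).astype(np.float32)
chunks = np.array_split(x, 2)          # one chunk per GPU
partial_results = []

for dev_id, chunk in enumerate(chunks):
    with cp.cuda.Device(dev_id):       # run the following ops on GPU dev_id
        gpu_chunk = cp.asarray(chunk)              # host -> device copy
        result = cp.sqrt(gpu_chunk) * 2.0          # elementwise work on GPU
        partial_results.append(cp.asnumpy(result)) # device -> host copy

y = np.concatenate(partial_results)    # reassemble the full result
```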

And then lastly, in the end, AI is computing. AI is software running on computers. And we know that the most important thing for computers is install base: having the same architecture across every cloud and from on-prem to the cloud, and having the same architecture available whether you're building it in the cloud, in your own supercomputer, or trying to run it in your car or some robot or some PC. Having that same identical architecture that runs all the same software is a big deal. It's called install base. And so the discipline that we've had for the last 30 years has really led to today. And it's the reason why the most obvious architecture to use if you were to start a company is NVIDIA's architecture. Because we're in every cloud, we're available anywhere you like to buy it. And whatever computer you pick up, so long as it says NVIDIA inside, you know you can take the software and run it.

Nvidia Quarterly Earnings

Q2 FY2025

Our sovereign AI opportunities continue to expand as countries recognize AI expertise and infrastructure as national imperatives for their societies and industries.

Remember that computing is going through two platform transitions at the same time, and that's just really, really important to keep your mind focused on: general-purpose computing is shifting to accelerated computing, and human-engineered software is going to transition to generative AI, or artificial-intelligence-learned software.

And for NVIDIA, the software TAM can be significant as the CUDA-compatible GPU installed base grows from millions to tens of millions. And as Colette mentioned, NVIDIA software will exit the year at a $2 billion run rate.

Q1 FY2025

We are fundamentally changing how computing works and what computers can do, from general purpose CPU to GPU accelerated computing, from instruction-driven software to intention-understanding models, from retrieving information to performing skills, and at the industrial level, from producing software to generating tokens, manufacturing digital intelligence.

~ Jensen Huang

Q4 FY2024

Q3 FY2024

Q2 FY2024

Q1 FY2024

Q4 FY2023

Q3 FY2023

Q2 FY2023

Q1 FY2023

Q4 FY2022

Q3 FY2022

Q2 FY2022

Q1 FY2022

Q4 FY2021

Q3 FY2021

Q2 FY2021

Q1 FY2021

Q4 FY2020

Q3 FY2020

Q2 FY2020

Q1 FY2020

Q4 FY2019

Q3 FY2019

Q2 FY2019

Q1 FY2019

Q4 FY2018

Q3 FY2018

Q2 FY2018

Q1 FY2018

Q4 FY2017

Q3 FY2017

Q2 FY2017

Q1 FY2017

Q4 FY2016

Q3 FY2016

Q2 FY2016

Q1 FY2016

Q4 FY2015

Q3 FY2015

Q2 FY2015

Q1 FY2015

Q4 FY2013

Q3 FY2013

Q2 FY2013

Q1 FY2013

Q4 FY2012

Q3 FY2012

Q2 FY2012

Q1 FY2012

Q4 FY2011

Q1 FY2011