As Big Tech companies invest heavily in the A.I. race, Nvidia is among the first to reap the rewards. The chipmaker’s dominance of the GPU market has propelled its sales to a record $30 billion in the fiscal quarter ended July 28, up 122 percent from a year earlier and 15 percent from the previous quarter. Its profit also soared to $16.6 billion, a 168 percent jump from the same period last year. Both figures beat Wall Street expectations, and Nvidia issued better-than-expected revenue guidance for the current quarter, but the results weren’t enough to impress investors: Nvidia shares fell 5 percent after yesterday’s (Aug. 28) earnings report.
Since generative A.I. broke into the mainstream, the technology has transformed Nvidia’s business trajectory. Demand for its graphics processing units (GPUs)—the chips housed in data centers that power A.I. models—has sent the company’s shares soaring over the past year. Nvidia is currently the world’s third most valuable public company, behind Apple (AAPL) and Microsoft (MSFT), with a market cap of nearly $3 trillion.
Jensen Huang, Nvidia’s founder and CEO, is well aware that chips make all the difference for tech companies vying to reach the next level in generative A.I. “The first person to the next plateau gets to introduce a revolutionary level of A.I.,” Huang told analysts yesterday when asked why Big Tech companies are clamoring to get their hands on Nvidia’s GPUs. “The ability to systematically and consistently race to the next plateau and be the first one there? It’s how you establish leadership,” he added.
The bulk of Nvidia’s revenue for the past fiscal quarter came from its data center division, which reported a 154 percent jump year-over-year to $26.3 billion. Cloud service providers accounted for around 45 percent of the division’s sales, while more than 50 percent came from consumer internet and enterprise companies, said Colette Kress, Nvidia’s chief financial officer, during yesterday’s earnings call. GPU demand is expected to keep climbing as “next-generation models will require 10 to 20 times more compute to train with significantly more data,” she said.
Earlier this month, reports surfaced that Nvidia’s next-generation Blackwell A.I. chips could be delayed by months due to design hiccups. The new series is the successor to Nvidia’s Hopper architecture and was unveiled earlier this year by Huang, who said the GPUs would be made available by the end of 2024. Samples of Blackwell are currently shipping to customers and partners, according to Kress, while its production ramp is still expected to begin in the company’s fourth fiscal quarter. During this time, the company expects to see “several billion dollars in Blackwell revenue,” she added.
Nvidia’s clients, which include the likes of Microsoft, Meta (META), Google (GOOGL) and Amazon (AMZN), have continued to flesh out their A.I. ambitions in recent months, backed by major cash investments. Microsoft, for example, reported capital expenditures of $19 billion for the April-June quarter, a 78 percent increase year-over-year driven largely by cloud and A.I. initiatives. According to Huang, this massive spending will soon pay off as companies receive their GPUs and begin saving money on data processing. “When they build out Hopper-based infrastructure and soon Blackwell-based infrastructure, they’ll start saving money,” he said.
Before Nvidia became one of the biggest winners of the generative A.I. boom, its sales were for years driven largely by its video game chips. The company’s gaming division brought in $2.9 billion in revenue between May and July. Its automotive and robotics segment and its professional visualization business, meanwhile, generated $346 million and $454 million, respectively.