Nvidia – Everything Old is New Again

We listened very closely to Nvidia CEO Jensen Huang’s GTC keynote and his investor Q&A session. As always, there was a lot to digest in his comments, but a couple of themes stood out, not so much for what they say about Nvidia as for what they say about technology cycles.

First, a central topic of the event was Nvidia’s positioning for the fight over AI inference. A major part of their message here was pushing back against the tide of custom ASICs coming from the Internet hyperscalers – aka Nvidia’s major customers. In the keynote, but especially in the investor Q&A, Huang pushed back hard. His basic position holds that designing chips for AI is Nvidia’s core business, while for the hyperscalers it is a side effort; Nvidia, he maintained, will always be better at it. (A subtext of this was that cloud service providers like AWS and Azure have much less interest in custom accelerators because they do not control the software their customers run. This is a theme we have been working on for years.) As much as AI is new, Huang’s theme is very familiar to us. We have been hearing the debate about custom versus merchant silicon for 20 years, and we know it started well before that. Merchant chip vendors almost always hold these advantages over time.

A second topic was the breadth of Nvidia’s offerings. They do more than supply chips – they supply software, rack designs, data center blueprints – the whole package. Nvidia’s offerings very much position them as the ‘easiest’ way to build out AI compute capacity. Put another way, for AI – no one is going to get fired for buying Nvidia. (At least until the CFO gets the Nvidia invoice.) This is critical, and true, but also very reminiscent of past cycles.

One last major theme was the breadth of demand. With investors, Huang went out of his way to highlight how much enterprises – as opposed to hyperscalers – are looking to build out AI capacity. In both the keynote and the Q&A he talked at length about “AI Factories”. In the future, manufacturers like GM, he claimed, will need to build both a factory for their products and an “AI Factory for their data”. The implication is that demand for Nvidia products is both much larger and much less concentrated than anyone thinks. Here, we are not quite convinced. While it is apparent that demand for Nvidia chips is still very strong, we are not sure that enterprises will suddenly reverse their decades-long move to the cloud. All the arguments the company makes for this imperative were also made ten years ago, when the shift to public cloud computing began, and the reasons for moving to the cloud remain just as compelling now as they did then. We are not saying he is wrong, but we do think it is too early to call.
