NVIDIA Corp at Bank of America Global Technology Conference Transcript - Thomson StreetEvents

Published Jun 05, 2024
14 pages (9035 words)
Price: US$ 54.00

About This Report

  
Abstract:

Edited Transcript of NVDA.OQ presentation 5-Jun-24 7:30pm GMT

  
Brief Excerpt:

...I hope everyone enjoyed their lunch. Welcome back to this session. I'm Vivek Arya. I lead the semiconductor research coverage at Bank of America Securities. I'm really delighted and privileged to have Ian Buck, Vice President of NVIDIA's HPC and Hyperscale business. Ian has a PhD from Stanford. And when many of us were enjoying our spring break, Ian and his team were working on Brook, which is the precursor to CUDA, which I think is the beating heart of every GPU that NVIDIA sells. So really delighted to have Ian with us. What I thought I would do is lead off with some of my questions, but if there's anything that you feel is important to the discussion, please feel free to raise your hand. But a very warm welcome to you, Ian. Really delighted that you could be with us. Ian Buck ...

  
Report Type: Transcript
Source: Thomson StreetEvents
Company: NVIDIA Corp
Ticker: NVDA.OQ
Time: 7:30pm GMT
Format: PDF (Adobe Acrobat)

The following is excerpted from the question-and-answer section of the transcript.

(Questions from industry analysts are provided in full, but answers are omitted - download the transcript to see the full question-and-answer session)

Question: Vivek Arya - Bank of America Securities - Analyst : Okay. So Ian, maybe let's -- to start it off, let's talk about Computex and some of the top announcements that NVIDIA made. What do you find the most interesting and exciting as you look at growth prospects over the next few years?


Question: Vivek Arya - Bank of America Securities - Analyst : So let's start looking at this from the end market, right? You work very closely with all the hyperscalers. From the outside, when we look at this market, we see the accelerator market was like over $40 billion last year, right? It could be like over $100 billion this year. But help us bridge this to what the hyperscalers are doing, right? What are they doing with all the acceleration and all this hardware that they're putting in? Is it about making bigger and bigger models? Like where are they in that journey of their large language model outwards and how they're able to monetize them?


Question: Vivek Arya - Bank of America Securities - Analyst : Yeah. So you mentioned that AI, the traditional AI or CNNs, right, they have been around for a long time. We used to talk about like tens of millions of parameters, and here, we are knocking on the door of what, almost 2 trillion parameters. Do you see a peak in terms of when we say, okay, this is it, the model sizes? Now we might even go backwards, that we might try to optimize the size of these models, right, have smaller or mid-sized models. Or we are not yet at that point?


Question: Vivek Arya - Bank of America Securities - Analyst : So 50 times more?


Question: Vivek Arya - Bank of America Securities - Analyst : But is there matching returns, do you think, at some point? Or just the cost of training, can that get to a level where it puts an upper limit on how large these models can be?


Question: Vivek Arya - Bank of America Securities - Analyst : All right. Do you find it interesting that some of the most frequently used and the largest models, one is developed by a start-up, and one is developed by somebody who's not a hyperscaler, right? So where do you think the biggest hyperscalers are in their journey? Are they still in early stages? Are they hoping to just leverage the technology that's been built up? Or do you think they have to get things going also and that can provide growth over the next several years?


Question: Vivek Arya - Bank of America Securities - Analyst : Got it. Now I'm glad you brought that up in terms of the one-year product cadence because one aspect of this is we are seeing these model sizes. I've seen one statistic that says they are doubling every six months or so. So that argues that even a one-year product cadence is actually not fast enough. But then the other practical side of it is that your customers then have to live in this constant, right, flux in their data center. So how do you look at the puts and takes of this one-year product cadence?


Question: Vivek Arya - Bank of America Securities - Analyst : Right. There's always this question about what are the killer apps driving generative AI, right? Yes, we understand that a lot of hardware is being deployed. So what are the top use cases, right? You mentioned that customers deploying NVIDIA hardware are seeing that four times or 5 times the return on their investment. But what are the use cases that are actually being -- obviously, that's over a four-year period, right? So what are the big use cases that you think are the most promising right now?


Question: Vivek Arya - Bank of America Securities - Analyst : Got it. I wanted to talk about AI inference and get your views on what is NVIDIA's moat in AI inference? Because if I say that inference is a workload where I'm really constraining the parameters, where I'm optimizing sometimes more of a cost than performance, why isn't a custom ASIC the best product for AI inference, right? I know exactly what I need to infer, right? And I can customize it and I don't need to go after -- I don't need to make the same chip work for training also. So why isn't a custom ASIC the best product for AI inference?


Question: Vivek Arya - Bank of America Securities - Analyst : Practically, the large customers, do they have separate clusters for training, separate for inference? Or are they mixing and matching? Are they reusing them some type for training, some type of -- practically, how do they do it?


Question: Vivek Arya - Bank of America Securities - Analyst : Got it. Since you have been so intimately involved with CUDA since its founding, right, how do you address the pushback that people have, which is that a lot of other software abstraction is being done away from CUDA and it will make CUDA obsolete at some point, that that's not really a sustainable moat for NVIDIA. How do you address that pushback?


Question: Vivek Arya - Bank of America Securities - Analyst : Got it. How is the outlook around Blackwell as we look at next year? First of all, do you think that because of the different -- the power requirements that are going up significantly, does that constrain the growth of Blackwell in any way? And what's the lead time in engagements between when somebody wants to deploy, right, versus when they have to start a discussion with NVIDIA, i.e., how far is your visibility of growth into next year?


Question: Vivek Arya - Bank of America Securities - Analyst : Got it. So customers who are deploying Blackwell, are they replacing the Hoppers or Amperes that were already in place? Or are they putting up new infrastructure? Like how should we think about the replacement cycle of these?


Question: Vivek Arya - Bank of America Securities - Analyst : Taking out traditional servers.


Question: Vivek Arya - Bank of America Securities - Analyst : Got it. And lastly, InfiniBand versus the Ethernet, right? So most of the clusters that NVIDIA has built so far have primarily used InfiniBand. What is the strategy behind the new Spectrum-X product because there is a large incumbent that is out there? Just like NVIDIA is a large incumbent on the compute side, there is an incumbent on the switching side. So what did make customers adopt your product versus staying with the incumbent?


Question: Vivek Arya - Bank of America Securities - Analyst : Do you see the attach rate of your Ethernet switch going up? Because I think NVIDIA has outlined like several billion dollars, which includes the NICs as well, right? Even before Blackwell starts, right?


Question: Vivek Arya - Bank of America Securities - Analyst : And then as Blackwell rolls out next year, do you see your attach rate of Ethernet going up?

Related Reports

NVIDIA Corp at Goldman Sachs Communacopia & Technology Conference Transcript – 2024-09-11 – US$ 54.00 – Edited Transcript of NVDA.OQ presentation 11-Sep-24 2:20pm GMT

NVIDIA Corp Q2 2025 Earnings Call Summary – 2024-08-28 – US$ 54.00 – Edited Brief of NVDA.OQ earnings conference call or presentation 28-Aug-24 9:00pm GMT

NVIDIA Corp Q2 2025 Earnings Call Transcript – 2024-08-28 – US$ 54.00 – Edited Transcript of NVDA.OQ earnings conference call or presentation 28-Aug-24 9:00pm GMT

NVIDIA Corp Annual Shareholders Meeting Summary – 2024-06-26 – US$ 54.00 – Edited Brief of NVDA.OQ shareholder or annual meeting 26-Jun-24 4:00pm GMT

NVIDIA Corp Annual Shareholders Meeting Transcript – 2024-06-26 – US$ 54.00 – Edited Transcript of NVDA.OQ shareholder or annual meeting 26-Jun-24 4:00pm GMT

NVIDIA Corp GTC Financial Analyst Q&A Summary – 2024-03-19 – US$ 54.00 – Edited Brief of NVDA.OQ conference call or presentation 19-Mar-24 3:30pm GMT

NVIDIA Corp GTC Financial Analyst Q&A Transcript – 2024-03-19 – US$ 54.00 – Edited Transcript of NVDA.OQ conference call or presentation 19-Mar-24 3:30pm GMT

NVIDIA Corp at Bank of America Global AI Conference Summary – 2023-09-11 – US$ 54.00 – Preliminary Brief of NVDA.OQ presentation 11-Sep-23 4:00pm GMT

NVIDIA Corp at Bank of America Global AI Conference Transcript – 2023-09-11 – US$ 54.00 – Edited Transcript of NVDA.OQ presentation 11-Sep-23 4:00pm GMT

NVIDIA Corp at Citi Global Technology Conference Summary – 2023-09-07 – US$ 54.00 – Edited Brief of NVDA.OQ presentation 7-Sep-23 4:15pm GMT

More from Thomson StreetEvents

Thomson StreetEvents is a leading provider of web-based solutions for the investment community, offering services that transform the way companies communicate and meet disclosure requirements while assisting investors in managing and leveraging this information. The Thomson StreetEvents service offers institutional investors a one-stop solution for managing corporate disclosure information by aggregating conference calls, webcasts, transcripts, call summaries, and other financial information into a time-saving efficiency tool.
Purchase Thomson StreetEvents' Transcripts (verbatim reports) and Briefs (call summaries) of earnings, guidance, M&A, and other corporate calls directly through Alacra. Discounted prices apply to reports produced more than two weeks ago.



Cite this Report

  
MLA:
Thomson StreetEvents. "NVIDIA Corp at Bank of America Global Technology Conference Transcript" Jun 05, 2024. Alacra Store. May 05, 2025. <http://www.alacrastore.com/thomson-streetevents-transcripts/NVIDIA-Corp-at-Bank-of-America-Global-Technology-Conference-T16019696>
  
APA:
Thomson StreetEvents. (2024). NVIDIA Corp at Bank of America Global Technology Conference Transcript Jun 05, 2024. New York, NY: Alacra Store. Retrieved May 05, 2025 from <http://www.alacrastore.com/thomson-streetevents-transcripts/NVIDIA-Corp-at-Bank-of-America-Global-Technology-Conference-T16019696>
  