
Fountainhead Investing

  • Objective Analysis: Research On High Quality Companies With Sustainable Moats
  • Tech Focused: 60% Allocated To AI, Semiconductors, Technology

5 Star Tech Analyst Focused On Excellent Companies With Sustainable Moats

Categories
AI Industry Semiconductors Stocks

Nvidia GTC Keynote – CEO Jensen Huang

Quick Key Takeaways: Worth every minute. I own shares and will add on declines.

Incredible product road map:

Blackwells are in full production and the Blackwell NVL72 is expected in 2H2025. Some may quibble about a "delay," as investors expected a Q1 or Q2 of strong sales from the NVL36 and NVL72, but it hardly makes a difference in the long run. In the worst-case scenario, the stock could drop 10-15%, but that should attract buying unless other macroeconomic uncertainties cause a continuous slump in the market/economy. I would back up the truck at $100.

Rubin, which is the next series of GPU systems, will be available in the second half of 2026 – again a massive leap in performance.

Nvidia (NVDA) has a 1-year upgrade cadence,

a) Nobody else has that

b) It's across the board: GPUs, racks, networking, storage, and packaging, spanning the whole ecosystem of partners.

Nvidia’s market leadership is going to last a while – That is my main investing thesis, and I can withstand the short-term bumps.

Cost Analysis: I'm glad Jensen spoke about this in more detail. What stood out for me was a clear-cut analysis of reducing variable costs as their GPU systems get smarter and more efficient, bringing total cost of ownership (TCO) down. Generating tokens to answer queries is horrendously expensive for large language models like ChatGPT; it was a black hole.

I expect costs to come down significantly, making the business model viable. Customers are expected to pay up to $3Mn for the Blackwell NVL72, and it has to become profitable for them.

Omniverse, Cosmos, and Robotics are other focus areas to go beyond data centers. Nvidia needs other target markets – industrials, factories, automakers, and oil and gas companies – to embrace AI and therefore use its GPUs, reducing its dependence on hyperscalers, and Jensen spent a lot of time on them. He also emphasized enterprise software partnerships for AI, gaining full acceptance as a ubiquitous product and earning extra revenues. For Nvidia's vision of alternate intelligent computing, we have to see more Palantirs, ServiceNows, and AppLovins. In my opinion, Agentic AI will make serious inroads in 12-15 months.

I’ll add more detail in another note, once I parse the transcript in detail.

Bottom line – this is going to be an incredible journey and we're just at the beginning. Sure, it's going to be a bumpy ride, and given the macro environment, it would be prudent to manage risks by waiting for the right entry, taking profits when overvalued or overbought, or, if you have the expertise, using other hedging mechanisms. I'm sold on Jensen's idea of a new paradigm of accelerated and intelligent computing based on GPUs and agents. Nvidia is best positioned to take full advantage of it.

Categories
AI Cloud Service Providers Industry Semiconductors Stocks

Nebius (NBIS) $45 Has Long-Term Potential But May Be Priced To Perfection


While Nebius has shot up 135% in the past year and is approaching fever pitch as a speculative AI infrastructure investment, it does have long-term potential to justify buying on declines.
Nebius was carved out of the Yandex group, an erstwhile Russian company known as the Yahoo of Russia. After sanctions due to the Ukraine war and the resulting spinoff, this is a European company with US operations and little or no Russian exposure or additional geopolitical risks.
Nvidia has a 0.3% stake in the company, and a strategic partnership to expand AI infrastructure to Small and Medium businesses beyond hyperscalers.
Nebius has five revenue segments – data center, Toloka, TripleTen, ClickHouse and AVRide.

I want to focus on the main data center segment in this article.
Datacenter
The best and most strategic segment is the data center, and the key reason to invest in the company is to take full advantage of AI needs beyond the hyperscalers. I expect at least 100% annual revenue growth in the next two years from the data center, slowing down to 50% in year 3.
Nebius is going all out in creating enough capacity for demand in the next two to three years.
It launched its first data center in the US, in Kansas City to start operations in Q1 2025 with an initial capacity of 5 MW, scalable up to 40 MW.
Further expansion plans – most likely all of that is Nvidia’s B200 GPUs.
• Finland – 25MW to 75MW by late 2025 or early 2026.
• France – 5MW Launching in November 2025.
• Kansas City – Second facility with 5MW to expand to 40MW.
• One to two further greenfield data centers in Europe.
Datacenter offerings: either raw computing power (GPU rentals) or the more specialized PaaS (Platform as a Service) with its AI Studio, which gives customers choices of OpenAI or DeepSeek models, among others. It is priced based on usage and token generation to cater to medium-sized, smaller, and/or specialized domain-specific customers.
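Token-based pricing like AI Studio's can be sketched with a toy calculation; the per-million-token rates below are made-up placeholders for illustration, not Nebius' actual prices.

```python
# Toy usage-based pricing of the kind a PaaS AI studio offers.
# The per-million-token rates are hypothetical placeholders.
def query_cost(input_tokens: int, output_tokens: int,
               in_rate_per_mn: float = 0.50,
               out_rate_per_mn: float = 1.50) -> float:
    """Dollar cost of one query, billed per million tokens."""
    return (input_tokens * in_rate_per_mn
            + output_tokens * out_rate_per_mn) / 1_000_000

# A small chat-style query: 2,000 input tokens, 800 output tokens
print(f"${query_cost(2_000, 800):.4f} per query")  # -> $0.0022 per query
```

The economics hinge on these per-token rates falling as the underlying GPU systems get more efficient.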
Fragmentation likely: As the AI data center industry progresses, I believe inferencing and modeling requirements will become fragmented and domain-specific. The DeepSeek software and modeling workarounds suggest this market could easily be targeted with customized requirements, where brute computing power as the norm morphs into specialized or customized requirements. In that case, while customers could contract for larger GPU training clusters, they would also look for cheaper inference solutions, which rely on software enhancements. This is likely to happen over time and may well work to Nebius' advantage, since they want to go beyond pure GPU rentals and provide a full stack. It is both a challenge and an opportunity, and it will be crucial to get more visibility into Nebius' offerings and services as the year progresses, especially compared to competitors like CoreWeave.
Lowering training costs: While a huge cloud remains over what DeepSeek actually spent on training, and even as 2025 seems secure because of the large $320Bn Capex committed by the hyperscalers, it would be remiss not to acknowledge that the industry will seek lower training costs as well – in which case a) data center computing power will be at risk and b) pricing could be the main differentiator. As of now, Goldman Sachs is projecting data center demand to exceed supply by about 2:1, and the gap is unlikely to be filled even with rapid deployment through 2026.
Spend more to earn more: Most of the forecasted growth is based on Capex, possibly over $2.5Bn in 2025 with an additional $2.5Bn to $3Bn in 2026. Currently, Nebius is well capitalized with about $2.9Bn in cash, but if the data centers don't generate enough cash, there could be dilution to raise more capital or the sale of stakes in its four other businesses. This is very likely to happen in 2026.

Negatives and challenges

Provide value to customers beyond brute computing power: Over time, data center rentals will get commoditized and become price-sensitive. The DeepSeek modeling workarounds suggest that brute computing power will morph into smaller, specialized or customized requirements. This could also work to Nebius' advantage, since they can provide a full stack – both a challenge and an opportunity.
Pricing could be a challenge: The trend towards lower training costs should continue. As of now, Goldman Sachs is projecting data center demand to exceed supply by about 2:1, and the gap is unlikely to be filled even with rapid deployment through 2026, but Nebius needs to stay on top of pricing to ensure it generates enough cash to continue spending on growth.
High Capex needs: Most of the forecasted growth is based on Capex, possibly $2.5Bn to $3Bn in each of 2025 and 2026. Currently, Nebius is well capitalized with over $2.9Bn in cash, but investors will need to be patient with this outlay first and be prepared for dilution.

Valuation:

Nebius has forecast an ARR (Annual Recurring Revenue) run rate of $750Mn to $1Bn for 2025, which is based on roughly $60-80Mn of monthly revenue in December 2025 times 12 – it is not the ARR in February. ARR would grow from the current $300Mn to about $875Mn by the end of 2025. Annual revenues are normally much lower than ARR, because ARR is an exit run rate that annualizes contracted revenues rather than revenue actually recognized over the year. An ARR of $875Mn at the midpoint could imply 2025 revenue of between $400Mn and $500Mn. This is still about 3x the estimated 2024 revenue of $137Mn, so there is tremendous growth (but at this stage, a lot of estimates!).
At a market cap of $10.4Bn, we're looking at over 20x to 25x 2025 revenue, so the price may have gotten ahead of itself.
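As a back-of-envelope check on the ARR and multiple math above (all inputs are my estimates from the article, not company guidance):

```python
# Back-of-envelope: ARR run rate vs. recognized revenue, and the
# resulting sales multiple. All figures are estimates from the text.
def arr_run_rate(dec_monthly_revenue_mn: float) -> float:
    """ARR run rate = December monthly revenue annualized (x12)."""
    return dec_monthly_revenue_mn * 12

low, high = arr_run_rate(60), arr_run_rate(80)   # $720Mn-$960Mn ARR

# ARR is an exit rate; revenue accrues over the whole year, so 2025
# revenue lands well below the ~$875Mn midpoint ARR.
market_cap_bn = 10.4
for revenue_mn in (400, 500):
    multiple = market_cap_bn * 1000 / revenue_mn
    print(f"${revenue_mn}Mn revenue -> {multiple:.0f}x sales")
```

On the $400-500Mn revenue range, the multiple works out to roughly 21x-26x, consistent with the 20x-25x ballpark above.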
This is a thinly traded company with rampant speculation, and I think the best move would be to sell 25% to 50% before earnings in case the quarterly results and forecasts disappoint – I'm already sitting on a decent profit in a short time – and keep the rest for the long term. Nebius reports pre-market on Thursday, February 20.
I would like to see more visibility before committing to invest more.

Competition

CoreWeave, which is private and also an Nvidia strategic partner, had estimated revenue of $2.4Bn in 2024 and, with the addition of 9 new data centers to its existing 23, is very likely to have around $8Bn of sales in 2025.
CoreWeave was last valued at around $23Bn but is targeting an IPO valuation of $35Bn, giving it an estimated sales multiple of anywhere between just 4x and 9x for 2025 – way below Nebius.
Even if we assume that the other businesses contribute an additional 25%, or $125Mn, in 2025 revenues, we're still valuing Nebius much higher than CoreWeave – a larger and more established competitor with Nvidia as a partner and Microsoft as a customer.
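A rough sketch of the multiple comparison, assuming the 4x-9x CoreWeave band reflects a range of 2025 revenue scenarios (roughly $4Bn to $8Bn) against the $35Bn IPO target:

```python
# Sales-multiple comparison with the article's figures. CoreWeave's
# 4x-9x band is assumed to span $4Bn-$8Bn 2025 revenue scenarios.
def sales_multiple(valuation_bn: float, revenue_bn: float) -> float:
    return valuation_bn / revenue_bn

coreweave_ipo_target_bn = 35.0
for rev in (4.0, 8.0):
    print(f"CoreWeave at ${rev:.0f}Bn: "
          f"{sales_multiple(coreweave_ipo_target_bn, rev):.1f}x")

# Nebius: $10.4Bn market cap on an estimated $0.4-0.5Bn of 2025 revenue
for rev in (0.4, 0.5):
    print(f"Nebius at ${rev * 1000:.0f}Mn: {sales_multiple(10.4, rev):.0f}x")
```

Even at CoreWeave's richest scenario (~9x), Nebius trades at more than double the multiple.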

That makes me wary; I’d be happy if my sales estimates are too low, but if they are not, then I would rather wait for dips.

Categories
AI Cloud Service Providers Industry Semiconductors Stocks

Hyperscalers, Meta and Microsoft Confirm Massive Capex Plans

Meta (META) has committed to $60-65Bn of Capex and Microsoft (MSFT) to $80Bn: After the DeepSeek revelations, this is a great sign of confidence for Nvidia (NVDA), Broadcom (AVGO), Marvell (MRVL), and other semiconductor companies. Nvidia, Broadcom, and Marvell should continue to see solid demand in 2025.

Meta CEO Mark Zuckerberg also mentioned that one of the advantages Meta has (and other US firms, by the same rationale) is a continuous supply of chips, which DeepSeek will not have, so US companies like Meta will easily outperform when it comes to scaling and servicing customers. (They will fine-tune Capex between training and inference.) Meta would be looking at custom silicon as well for other workloads, which will help Broadcom and Marvell.

Meta executives specifically called out a machine-learning system designed jointly with Nvidia as one of the factors driving better-personalized advertising. This is a good partnership and I don’t see it getting derailed anytime soon.

Meta also talked about how squarely focused they were on software and algorithm improvements. Better inference models are the natural progression and the end goal of AI. The goal is to make AI pervasive in all kinds of apps for consumers/businesses/medical breakthroughs, and so on. For that to happen you still need scalable computing power to reach a threshold when the models have been trained enough to provide better inference and/or be generative enough, to do it for a specific domain or area of expertise.

This is the tip of the iceberg; we're not anywhere close to reducing the spend. Most forecasts I looked at saw data center training spend growth slowing only in 2026, and then spending on inference growing at a slower speed. Nvidia's consensus revenue forecasts show a 50% revenue gain in 2025 and 25% thereafter, so we still have a long way to go.

I also read that Nvidia's GPUs are doing 40% of inference work; they're very much on the ball on inference.

The DeepSeek impact: If DeepSeek's breakthrough in smarter inference had been announced by an American company, and if they hadn't claimed a cheaper cost, it wouldn't have made the impact it did. The surprise element was the reported total spend and the claim that they didn't have access to GPUs – it was meant to shock and awe and create cracks in the massive spending ecosystem, which it is doing. But the reported total spend and the claim of not using high-end GPUs don't seem plausible, at least to me. Here's my earlier article detailing some of the reasons. The Chinese government has subsidized every export entry to the world, from furniture to electric vehicles, so why not this one? That has been their regular go-to-market strategy.

Cheaper LLMs are not a plug-and-play replacement. They will still require significant investment and expertise to train and create an effective inference model. I think GPU requirements will not diminish because you need GPUs for training and time scaling, and smarter software will still need to distill data.

Aiming at a 10x reduction in cost is a good target, but it will compromise quality and performance. Eventually, the lower-tier market will get crowded and commoditized – democratized, if you will – which may require cheaper versions of hardware and architecture from AI chip designers, as an opportunity to serve lower-tier customers.

American companies will have to work harder, for sure – customers want cheap (Databricks' CEO's phone hasn't stopped ringing for alternative solutions) – unless they TikTok this one as well…

Categories
AI Cloud Service Providers Industry Semiconductors Stocks

DeepSeek Hasn’t Deep-Sixed Nvidia

01/28/2025

Here is my understanding of the DeepSeek breakthrough and its repercussions on the AI ecosystem.

DeepSeek used "time scaling" effectively, which allows its r1 model to think deeper at the inference phase. By using more power instead of producing an answer immediately, the model takes longer to search for a better solution and then answers the query better than existing models.

How did the model get to that level of efficiency?

DeepSeek used a lot of interesting and effective techniques to make better use of its resources, and this article from NextPlatform does an excellent job with the details.

Besides effective time scaling, the model distilled answers from other models, including ChatGPT's models.

What does that mean for the future of AGI, AI, ASI, and so on?

Time scaling will be adopted more frequently, and tech leaders across Silicon Valley are responding to improve their methods as cost-effectively as possible. That is the logical and sequential next step – for AI to be any good, it was always superior inference that was going to be the differentiator and value addition.

Time scaling can be done at the edge as the software gets smarter.

If the software gets smarter, will it require more GPUs?

I think GPU requirements will not diminish because you need GPUs for training and time scaling, and smarter software will still need to distill data.

Cheaper LLMs are not a plug-and-play replacement. They will still require significant investment and expertise to train and create an effective inference model. Aiming at a 10x reduction in cost is a good target, but it will compromise quality and performance. Eventually, the lower-tier market will get crowded and commoditized – democratized, if you will – which may require cheaper versions of hardware and architecture from AI chip designers, as an opportunity to serve lower-tier customers.

Inferencing

Over time, yes, inference will become more important – Nvidia has been talking for a long time about the scaling law, the diminishing returns of training alone, and the need for smarter inference. They are working on this as well; I even suspect that the $3,000 Digits they showcased for edge computing will provide some of the power needed.

Reducing variable costs per token/query is huge: Variable costs will come down, which is a huge boon to the AI industry – previously, retrieving and answering the tokens for a query could cost more than an entire monthly subscription to ChatGPT or Gemini.

From Gavin Baker on X on APIs and Costs:

R1 from DeepSeek seems to have done that: "r1 is cheaper and more efficient to inference than o1 (ChatGPT). r1 costs 93% less to *use* than o1 per each API, can be run locally on a high end work station and does not seem to have hit any rate limits which is wild."

However, “Batching massively lowers costs and more compute increases tokens/second so still advantages to inference in the cloud.”

It is comparable to o1 from a quality perspective although lags o3.

There were real algorithmic breakthroughs that led to it being dramatically more efficient both to train and inference.  

On training costs and real costs:

Training in FP8, MLA and multi-token prediction are significant.  It is easy to verify that the r1 training run only cost $6m.

The general consensus is that the "REAL" costs of the DeepSeek model are much larger than the $6Mn given for the r1 training run.

Omitted are:

Hundreds of millions of dollars spent on prior research, and access to much larger clusters.

DeepSeek likely had more than 2,048 H800s; an equivalently smart team can't just spin up a 2,000-GPU cluster and train r1 from scratch with $6m.

There was a lot of distillation – i.e., it is unlikely they could have trained this without unhindered access to GPT-4o and o1, which is ironic because you're banning the GPUs but giving access to distill leading-edge American models… Why buy the cow when you can get the milk for free?

NextPlatform, too, expressed doubts about DeepSeek's resources:

We are very skeptical that the V3 model was trained from scratch on such a small cluster.

A schedule of geographical revenues for Nvidia's Q3-FY2025 showed 15% of Nvidia's revenue – over $4Bn – "sold" to Singapore, with the caveat that it may not be the ultimate destination. This raises the suspicion that DeepSeek may have gotten access to Nvidia's higher-end GPUs despite the US export ban, or stockpiled them before the ban.

Better software and inference is the way of the future

As one of the AI vendors at CES told me, she had the algorithms to answer customer questions and provide analytical insights at the edge for several customers – they have the data from their customers and the software, but they couldn't scale because AWS was charging them too much for cloud GPU usage when they didn't need that much power. So besides r1's breakthrough, this movement has been afoot for a while, and it will spur investment and innovation in inference. We will definitely continue to see demand for high-end Blackwell GPUs to train data and create better models for at least the next 18 to 24 months, after which the focus should shift to inference – as Nvidia's CEO said, 40% of their GPUs are already being used for inference.

Categories
AI Stocks

CES Note On Nvidia (NVDA)

Nvidia Announcements: CEO Jensen Huang delivered the CES (Consumer Electronics Show) keynote on January 6th, 2025. At an immensely popular event with about 140,000 attendees, the charismatic Jensen drew a packed crowd, and he did not disappoint.

New Gaming Cards: Nvidia’s new line of RTX 50 Series gaming graphics cards is based on the company’s Blackwell chip architecture. Considering the massive leap Blackwell has made over Hopper, its previous iteration in data center applications, getting it to work in gaming is a huge deal – a massive improvement with superior rendering and higher frame rates for gamers.

Digits – a Linux-based desktop computer with the GB10 Grace Blackwell Superchip, combining a CPU and GPU, a first in Nvidia's history at $3,000. "Placing an AI supercomputer on the desks of every data scientist, AI researcher, and student empowers them to engage and shape the age of AI," Huang said.

It may seem like a niche product for high-end engineers, professors, scientists, and researchers, but I think it's a deliberate and excellent strategy to evangelize the product through the folks who can develop use cases and apps that further the market for cheaper, mass-market Digits in the future.

This is straight from the Nvidia playbook from the last two decades – they have always involved the scientific and research community from the start. I can bet a large number of these are going to be distributed free to campuses.

I think Digits will turn out to be a very consequential product for Nvidia – with several billion in revenue in a few years. But forecasts aside, what’s key is that Digits uses a scaled-down version of Nvidia’s Grace AI server CPU technology. It’s packaged in a Mac Mini-sized form factor with the help of Taiwan-based MediaTek, which Huang commended for its expertise in building low-power chips. 

Quoting Tae Kim from Barron’s who authored an excellent book on Nvidia.

“Over time, the logical move for Nvidia would be to scale down this CPU further for consumer Windows laptops. By integrating its graphics expertise, MediaTek’s power-saving capabilities, and the efficiency of Arm-based CPU technology, Nvidia could create a processor that offers leading graphics for gaming and high performance for productivity, along with long battery life. While prior Arm-based Windows PCs have struggled with software compatibility, Nvidia’s top-notch software engineering could make it work.”

Huang strongly hinted it was likely to happen. “We architected a high-performance CPU with [MediaTek],” he said on Tuesday at a question-and-answer session with financial analysts at CES. “It was a great win-win.”

When pressed by an analyst if Digits was an iterative step toward moving into the PC market, “I’m going to have to wait to tell you that,” Huang said. “Obviously, we have plans.”

There are questions about why Nvidia chose Linux over Windows – and we should hear more about that at their GTC conference in March.

It could shake up the moribund PC market, which has been suffering from a lack of growth after COVID-19. Desktop PCs and laptop computers still generate large revenues for Intel and Advanced Micro Devices, the primary makers of x86-based processors – a legacy that could give way to Arm-based processors, which Apple uses. Analysts expect Intel to generate $30 billion in revenue from its client computing business in 2024, according to FactSet, while AMD has $6.7 billion in revenue in its client segment.

Billions in potential new revenue are at stake for Nvidia, which is forecast to make $180Bn in sales in the 12 months ending January 2026. While data center takes the largest share of revenue at about $150Bn of that pie, and gaming, auto, and professional visualization (Omniverse) take the rest, a new source would help a great deal when data center revenue growth slows down.

In the past two years, Nvidia has monopolized the AI data center market with the best-designed, highest-performing chips. Nvidia would likely make a significant dent in the PC market, as well as the new paradigm of edge computing, with all the computing power and constant innovation and upgrades at its disposal.

The third announcement was COSMOS – a significant improvement over Omniverse and the biggest catalyst/enabler of "Physical AI". I'll write a separate note on that.

Nvidia wants it all. That’s likely to be good news for consumers and trouble for the PC status quo.

Categories
AI Hardware Semiconductors Stocks

AI Takes Center Stage At 2025 CES

The mood at the CES (Consumer Electronics Show) is pretty upbeat. This is a hardware show, and after years of taking a backseat to software, the massive computing power from GPUs for AI applications has propelled hardware to the forefront. And it's not just servers, racks, data center peripherals, or networking – the range of hardware was impressive, from autos, robots, home appliances, entertainment, healthcare devices, industrial equipment, and smart glasses to security.

Vendors, engineers, consulting firms, and VCs are noticeably excited for the future. 

Heavily focused on AI, the number of exhibitors with AI proof of concept and use cases was also quite large. Sure, there’s always hype, but a lot of them were genuine, with orders, use cases, and deployment.

Trends

Smarter hardware

Robotics – big push from computing power for robotic uses in supply chain, warehousing, and retail. Strong use of AI in industrial design, and industrial production.

Examples:  

  • Siemens' collaboration with Nvidia and several others, using Nvidia's Cosmos (a big improvement on Omniverse) for pre-production simulation and assembly lines (quality control and error reduction).
  • Eaton with Intel for power systems. (Time reduction on compliance, process streamlining)
  • Accenture’s several clients, in fields like oil drilling, and IoT applications.

Vendors can do a lot more now and are seeing demand for better AI products.

Inference

Both edge AI in PCs and IoT endpoints are going to be huge. An executive at a SaaS cybersecurity firm (among others) complained about the crazy fees she pays AWS (somebody's got to pay for that $100Bn of Capex for Nvidia's GPUs!). The need to increase inference and solutions at the endpoint will be a major trend, and the increase in the number of AI PCs coming to market confirms the shift to the edge.

Agentic AI or Agents

I had lengthy discussions with Nvidia and AMD engineers on Agentic AI, which has become a bit of a buzzword – there is some hype that it is really just a chatbot with a fancier name, but in several cases I saw, it was a major improvement over a chatbot.

Agentic has to mean enhanced or better solutions than a chatbot. A chatbot or a ChatGPT query gives you a simple answer to a query from a data set; an agent must be able to give you significantly more feedback from analyzing a wider set of data, or more than one data set. It is more interactive, provides feedback, and looks for feedback – a loop that ends up with a better solution.

A comment from a VP at Accenture was very interesting. She said that agents have to be better than chatbots or the latest version of Alexa/Siri: if I ask for the next charging station for my EV, it should know my history, deduce which direction I'm going in, and also tell me if there is a vegetarian eatery on the way (in the same direction) – not just throw up 5 charging stations, 2 of them in the wrong direction, as it currently does.

In my opinion, this is just the beginning; there will be secular growth for a while. I would strongly focus on AI, with the usual caveat of being careful of stock valuations running ahead of growth and getting overpriced – we'll need to be patient to get the right price.

For AI to continue growing, the industry has to get more democratized – more developers, a wider ecosystem, more use cases – and I saw that in spades, which gives me a lot of confidence in its future. Nvidia, AMD, Qualcomm, Amazon, Google, and Intel had multiple partners all over the conference.

The agent could be your interface on your iPhone – an interesting idea brought up by a VP from Qualcomm.

Categories
AI Cloud Service Providers Semiconductors Stocks

Nvidia Is An Excellent Long Term Investment

Hyperscaler Capex Shows Strong Demand For Nvidia’s (NVDA) GPUs.

I know there is excitement in the markets as Nvidia reports Q3-FY2025 earnings after the market close on Wednesday, 11/20. Nvidia earnings watch parties have become part of the zeitgeist, and its quarterly earnings are among the most closely watched events each quarter.

I, however, don’t believe in quarterly gyrations and have been a long-term investor in Nvidia since 2017, having recommended it more than two years ago and then in March 2023 and again in May 2023 as part of an industry article on auto-tech.

I believe the Blackwell ramp is going strong, and reports regarding rack heating issues are just noise in a program of this size.

Capex from hyperscalers will continue to fuel demand for Nvidia’s GPUs in the next year and beyond and even though it’s expensive it remains a great long-term investment.

Capex from hyperscalers – Nvidia’s biggest customers.

AI spending from the hyperscalers is expected to increase to $225Bn in 2024. Cumulatively, in the first 9 months of the year, the key hyperscalers – Nvidia's biggest clients – have already spent $170Bn on Capex, 56% higher than the previous year. Here are the estimates for the full year 2024:

  1. Amazon (AMZN) $75Bn 
  2. Alphabet (GOOG) $50Bn
  3. Meta (META) $38Bn to $40Bn
  4. Microsoft (MSFT) $60Bn
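A quick sanity check that the individual estimates above roughly add up to the ~$225Bn aggregate (using the midpoint of Meta's range):

```python
# Sanity check: the per-company 2024 Capex estimates above sum to
# roughly the $225Bn aggregate cited (Meta at its $38-40Bn midpoint).
capex_2024_bn = {
    "Amazon": 75,
    "Alphabet": 50,
    "Meta": 39,
    "Microsoft": 60,
}
total = sum(capex_2024_bn.values())
print(f"Total hyperscaler Capex: ~${total}Bn")  # -> ~$224Bn
```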

On their earnings call, hyperscalers’ management committed to continued Capex spending in 2025, but not at the same pace of over 50% seen in 2024.

When quizzed by analysts, the hyperscalers also talked about AI revenues, which, though still relatively small compared to the Capex spent, are growing within their products. Amazon mentioned that its AI business through AWS is at a multibillion-dollar revenue run rate, growing in triple digits year over year, while Microsoft's CEO stated that its AI business is on track to surpass a $10 billion annual revenue run rate in Q2-FY2025.

Meta and Alphabet gave more indirect indications of AI revenues. For example, Meta believes that its AI tools improve conversion rates for its advertisers, which creates more demand. On the consumer side, Meta believes that its AI has led to more time spent on Facebook and Instagram. Similarly, Alphabet spoke about Gemini improving the user experience and its use of AI in search: seven of the company's major products – with more than two billion users – have incorporated Google's Gemini model.

While Capex from hyperscalers also goes towards infrastructure and buildings, which take longer to show good returns, a fairly large chunk goes towards GPUs, which bodes well for Nvidia, which controls more than 80% of the AI-GPU market.

Besides Capex, I also believe in AI and there are several areas where AI has already shown promise.

Code Generation

The low-hanging fruit is being plucked: A quarter of new code at companies like Google is now initially generated by AI and then reviewed by staff. Similarly, GitLab and GitHub are providing DevOps teams similar offerings.

Parsing and synthesizing data for product usage:

Partha Ranganathan, a technical fellow at Google Cloud, says he’s seeing more customers using AI to synthesize and analyze a large amount of complex data using a conversational interface.

Other enterprise software companies see huge upsides in selecting a large-language model and fine-tuning the model with their own unique data applied to their own product needs.

I recommended Duolingo (DUOL) for the same reasons: its own AI strengths improve its language app, creating a virtuous flywheel of data generation from its own users to build an even better product – data that exists within Duolingo and is more powerful and useful than a generic ChatGPT product.

Using AI for medical breakthroughs

Pharmaceutical giants like Bristol Myers are using AI for drug discovery at a pace that was impossible before AI and LLMs became available. These are computational problems that need powerful GPUs to research, compute, and process for clinical trials.

Who is the indispensable, ubiquitous, and default option to turn these dreams into reality? Nvidia and its revolutionary Blackwell GPUs – the GB200 NVL72 AI system, which links 72 GPUs together inside one server rack, differentiating Nvidia from lesser lights like AMD and Broadcom, which at AI run rates of $5.5Bn and $11Bn, respectively, are minnows compared to the $130Bn behemoth with 80% of that revenue from AI/data center GPUs.

I believe we are in the first innings of AI and Nvidia will continue to lead the way. I continue to buy Nvidia on declines.

Categories
AI Semiconductors Stocks

Nvidia – The Blackwell Ramp

Nvidia (NVDA) $121 (AI) (Semiconductors)

"And here we are ramping Blackwell, and it's in full production," said Nvidia CEO Jensen Huang during the Goldman Sachs Communacopia + Technology Conference. "We'll ship in Q4 and scale it — start scaling in Q4 and into next year. And the demand on it is so great … and so the intensity is really, really quite extraordinary."

https://seekingalpha.com/news/4152814-nvidia-trends-up-as-blackwell-release-date-nears

“Blackwell chips are expected to see 450,000 units produced in the fourth quarter of 2024, translating into a potential revenue opportunity exceeding $10B for Nvidia,” according to a post today on X.

The estimate during the August conference call was for $3Bn of Blackwell revenue in Q4, so this is a big change. Fundamentally there wasn't any real difference, just the quarterly cadence shifting from Q4 to Q1, but this does help the stock in the short term and, more importantly, should put to rest any rumors or doubts about Blackwell design flaws.
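The X post's unit estimate implies an average selling price per Blackwell unit; a quick sketch using the post's figures (not Nvidia's disclosures):

```python
# Implied average selling price per Blackwell unit from the X post's
# figures (450,000 units, >$10B revenue); both are estimates.
units_q4 = 450_000
revenue_bn = 10.0
implied_asp = revenue_bn * 1e9 / units_q4
print(f"Implied ASP: ~${implied_asp:,.0f} per unit")  # -> ~$22,222 per unit
```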

Categories
Semiconductors Stocks

Nvidia (NVDA) Q1-2024 Earnings Preview: High Expectations and Market Optimism

Nvidia Earnings Preview – Q1-2024 

The big event is finally here (Post-market Wednesday, May 22nd) and expectations are sky-high! 

Consensus estimates are for earnings of $5.58 per share (up 412% YoY) and revenues of $24.6Bn (up 242% YoY). However, analysts seem to be pointing out that anything less than $5.75 and $26Bn would lead to disappointment. Similarly, expectations for higher Q2 guidance are also, well, high. Just meeting consensus estimates of $6 per share and $27Bn won't cut it.
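Working backwards from the consensus growth rates gives the prior-year comparables implied by these estimates:

```python
# Back out the prior-year comparables implied by the consensus
# YoY growth rates quoted above.
def prior_year(current: float, yoy_growth_pct: float) -> float:
    return current / (1 + yoy_growth_pct / 100)

print(f"Implied prior-year EPS: ${prior_year(5.58, 412):.2f}")        # -> $1.09
print(f"Implied prior-year revenue: ${prior_year(24.6, 242):.1f}Bn")  # -> $7.2Bn
```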

Wall Street remains optimistic – the average price target is $1,040, a 9% upside, with a high of $1,400 from Rosenblatt Securities, who believe there won't be any air pockets in the transition from the H (Hopper) series to the B (Blackwell) series, even as AWS this morning confirmed that it would wait for Blackwell to ship before buying more Hoppers.

Other Wall Street analysts also have higher-than-average targets from $1,100 (Barclays) to $1,200 (Baird).

Seeking Alpha analysts, not to be outdone, also talk of the large and growing TAM, with one estimate of $600Bn by 2030, extrapolating growth from the CHIPS Act, the massive capital expenditures from mega-caps like Microsoft, Google, Meta, and Oracle, the partnership with Dell, new AI use cases, and even proxying TSM's manufacturing capacity. So yes, there are plenty of defensible theories about why this AI gravy train won't slow down.

For my part, I last bought Nvidia for around $780 on April 22nd, and with a high exposure in it, don’t plan to add more for now. It should remain a very strong, high-conviction, core holding for a long time. I will be looking out for other AI stories.