
Fountainhead Investing

  • Objective Analysis: Research On High Quality Companies With Sustainable Moats
  • Tech Focused: 60% Allocated To AI, Semiconductors, Technology

5 Star Tech Analyst Focused On Excellent Companies With Sustainable Moats

Categories
Ad Tech AI Cloud Service Providers Industry Stocks

Amazon Is A Good Bargain At $202

I’ve been adding Amazon (AMZN) to my portfolio in the past week; it is a bargain at $202, having dropped almost 20% from its high of $242.

Amazon has 4 businesses.

Amazon Web Services

AWS is a cloud services behemoth and market leader with $108Bn in 2024 sales, still growing at 19%. That is remarkable growth for a market leader of that size, with two other 800-pound gorillas, Alphabet and Microsoft, chasing it. It generated operating profits of $39Bn last year, up 66%, with an operating margin of 37%. This is Amazon's most profitable segment and the growth engine that powers everything.

Advertising

Amazon includes its advertising revenues in the online retail sales segment, but its advertising revenue was estimated between $56Bn and $64Bn in 2024, growing around 20% a year. This is also a high-margin business, generating operating margins of over 20%.

Prime Subscriptions

Amazon doesn’t disclose its Prime subscriber numbers, but we estimate about 200Mn subscribers, including 180Mn in the US in 2024, generating over $40Bn in revenue.

This is another sustainable, sticky, and high-margin business; I'd value it at about 9x sales, or $360Bn.

I used a 9-10x multiple for the high-growth, high-profit margin, and sustainable businesses.

Online and physical retail sales in the US and abroad

These include third-party sales. Physical store revenues are minuscule compared to total retail sales; they serve as loss leaders to expand reach and gather analytics, including for the online retail business. Amazon had a whopping $431Bn in retail sales in 2024. And while online domestic and international sales are a drag, growing in the single digits, their growth and margins are not significantly worse than Walmart's.

Amazon Segment Sales (Source: Amazon)

Based on the sum-of-the-parts schedule above, we're getting the online and physical retail operations, with $431Bn in sales, at an implied market cap of just $170Bn. That 0.4x sales multiple is much lower than Walmart's 0.74x, roughly half of it.
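As a rough check of the sum-of-the-parts math, here's a minimal sketch in Python. The ~$2,130Bn market cap at $202 per share and the 9.5x multiple applied to AWS and advertising are my assumptions, interpolated from the ranges quoted above, not figures from the schedule itself.

```python
# Back-of-the-envelope sum-of-the-parts for Amazon (all figures in $Bn).
# The multiples and the ~$2,130Bn market cap are assumptions from the article's ranges.
market_cap = 2130                 # ~10.5Bn shares x $202/share (approximate)
aws_value = 108 * 9.5             # AWS 2024 sales x assumed 9-10x midpoint
ads_value = 60 * 9.5              # midpoint of $56-64Bn ad revenue x assumed multiple
prime_value = 360                 # ~$40Bn Prime revenue at ~9x sales

implied_retail = market_cap - (aws_value + ads_value + prime_value)
retail_sales = 431
implied_ps = implied_retail / retail_sales

print(f"Implied value of retail: ${implied_retail:.0f}Bn")          # ~$174Bn
print(f"Implied retail P/S: {implied_ps:.2f}x vs Walmart ~0.74x")   # ~0.40x
```

Small changes to the assumed multiples move the implied retail value around, but the conclusion that retail is being valued well below Walmart's multiple holds across the 9-10x range.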

Amazon has been spending heavily on Capex for AI to gear up AWS and expand its web service offerings. In this arms race, it is scheduled to spend $100Bn in 2025 to maintain and possibly expand its leadership.

We haven't even valued all of their AI investments and partnerships, which could be very valuable in the future.

I’d continue to buy the stock on declines.

Categories
AI Cloud Service Providers Industry Semiconductors Stocks

Credo Technology (CRDO) $46 Is A Great Pick And Shovels Play On AI

While all eyes and ears are on tariff uncertainties and geopolitical risks, we remain focused on finding good investments for the long term – tuning out the drama and volatility.

Excellent Q3-FY2025 results

Credo Technology (CRDO) supplies high-quality Active Electrical Cables (AECs) to data centers, counting Amazon, Microsoft, and other hyperscalers among its biggest customers. The stock dropped 14% today to $46.75 in spite of excellent Q3-FY2025 results: a 154% increase in sales to $135Mn vs. the $120Mn expected, and a sizable improvement in gross and operating margins, which is unusual when you're ramping up production for a customer like Amazon.

Revenue guidance for the next quarter was even more impressive at 162% growth to a midpoint of $160Mn. For the full year ending in April 2025, Credo is expected to grow revenues to $427Mn, a whopping 121% increase over the previous year.

A good pick-and-shovel play in data centers and AI

Credo is a pick-and-shovel play on AI, GPUs, and data centers as data centers ramp up all over the world for accelerated computing. Its key products are essentially AEC replacements for optical cables, a play on high-bandwidth, reliable back-end networking for data center GPUs and GPU systems such as Nvidia's Blackwell NVL36 and NVL72, which are expected to start ramping in the second quarter of 2025.

Data center equipment suppliers have become crucial parts of the AI/GPU supply chain, and Credo's results certainly speak volumes about its capacity to scale, and to scale profitably, which is even more admirable.

Its founders are from Marvell (MRVL), so there is a fair amount of credibility and experience.

They are agnostic between general-purpose and custom silicon, which is good because they get business from Nvidia as well as from ASIC players like Amazon and Google.

The business is also breaking even on a GAAP basis in FY2025, another exception for such a small company.

Credo had GAAP gross margins of 63.6%, GAAP operating margins of 20%, and a stunning adjusted operating margin of 31.4%, which is astonishing for a fledgling $400Mn-revenue operation with Amazon as its main customer.

Key Risks 

Customer concentration: this is not likely to change soon, as the nature of the industry currently requires high volumes from hyperscalers.

AECs may become a commodity in 3-5 years, so Credo will need to maintain its growth without dropping prices.

Valuation

Credo's valuation is not expensive at 11x sales, as revenue growth should easily surpass 60% in FY2026 and 30% in FY2027, after growing 121% in FY2025. The P/S-to-growth ratio drops to a low of about 0.2 with such high growth. Furthermore, its operating profit margin of 20% takes it well past the rule of 40: 60 + 20 = 80.
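The two screens above can be sketched directly; the inputs are the figures cited in this section (11x sales, ~60% FY2026 growth, 20% operating margin).

```python
# Rule-of-40 and P/S-to-growth screens using the article's Credo figures.
ps_ratio = 11.0        # price-to-sales
growth = 60.0          # expected FY2026 revenue growth, in %
op_margin = 20.0       # operating profit margin, in %

rule_of_40 = growth + op_margin     # clearing 40 is the usual bar
ps_to_growth = ps_ratio / growth    # a PEG-style ratio applied to sales

print(f"Rule of 40 score: {rule_of_40:.0f}")     # 80
print(f"P/S-to-growth: {ps_to_growth:.2f}")      # 0.18, roughly the 0.2 cited
```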

The drop today was ostensibly because of customer concentration (Amazon is 68% of revenue). But analysts and investors should have known this; I believe the correction is overdone and Credo should resume its upward march. I bought at $45.75 today; the stock is down almost 50% from its all-time high of $86.69, but still up 187% in the past year.

I'm targeting a return of 24% per year, or a double in 3 years.
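For reference, the arithmetic behind that target:

```python
import math

# A 24% annual return compounds to roughly a double in three years.
annual_return = 0.24
three_year_multiple = (1 + annual_return) ** 3                # ~1.91x
doubling_time = math.log(2) / math.log(1 + annual_return)     # ~3.2 years

print(f"3-year multiple at 24%/yr: {three_year_multiple:.2f}x")
print(f"Exact doubling time: {doubling_time:.1f} years")
```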

Categories
AI Cloud Service Providers Industry Semiconductors Stocks

Nebius (NBIS) $45 Has Long-Term Potential But May Be Priced To Perfection


While Nebius has shot up 135% in the past year and is approaching fever pitch as a speculative AI infrastructure investment, it does have long-term potential to justify buying on declines.
Nebius was carved out of the Yandex group, an erstwhile Russian company known as the Yahoo of Russia. After sanctions over the Ukraine war and the resulting spinoff, it is now a European company with US operations and little or no Russian exposure or additional geopolitical risk.
Nvidia has a 0.3% stake in the company, and a strategic partnership to expand AI infrastructure to Small and Medium businesses beyond hyperscalers.
Nebius has five revenue segments: Data Center, Toloka, TripleTen, ClickHouse, and AVRide.

I want to focus on the main data center segment in this article.
Datacenter
The best and most strategic segment is the data center, and the key reason to invest in the company is to take full advantage of AI needs beyond the hyperscalers. I expect at least 100% annual revenue growth in the next two years from the data center, slowing down to 50% in year 3.
Nebius is going all out in creating enough capacity for demand in the next two to three years.
It launched its first US data center, in Kansas City, to start operations in Q1 2025 with an initial capacity of 5 MW, scalable up to 40 MW.
Further expansion plans (most likely all running Nvidia's B200 GPUs):
• Finland: 25 MW to 75 MW by late 2025 or early 2026.
• France: 5 MW, launching in November 2025.
• Kansas City – Second facility with 5MW to expand to 40MW.
• One to two further greenfield data centers in Europe.
Data center offerings: either raw computing power and GPU rentals, or the more specialized PaaS (Platform as a Service) with its AI Studio, which gives customers a choice of OpenAI or DeepSeek models, among others. It is priced based on usage and token generation to cater to medium-sized, smaller, and/or specialized domain-specific customers.
Fragmentation likely: As the AI data center industry progresses, I believe inferencing and modeling requirements will become fragmented and domain-specific. The DeepSeek software and modeling workarounds suggest this market could easily be targeted with customized requirements, where brute computing power as the norm morphs into specialized or customized solutions. In that case, while customers could still contract for larger GPU training clusters, they would also look for cheaper inference solutions that rely on software enhancements. This is likely to happen over time and may well work to Nebius' advantage, since it wants to go beyond pure GPU rentals and provide a full stack. This is both a challenge and an opportunity, and it will be crucial to get more visibility into Nebius' offerings and services as the year progresses, especially compared to competitors like CoreWeave.
Lowering training costs: While a huge cloud of uncertainty remains over what DeepSeek actually spent on training, and even though 2025 seems secure because of the $320Bn of Capex committed by the hyperscalers, it would be remiss not to acknowledge the trend toward lower training costs, in which case a) data center computing power will be at risk and b) pricing could become the main differentiator. As of now, Goldman Sachs is projecting data center demand to exceed supply by about 2:1, and the gap is unlikely to be filled even with rapid deployment through 2026.
Spend more to earn more: Most of the forecasted growth is based on Capex, possibly over $2.5Bn in 2025 with an additional $2.5Bn to $3Bn in 2026. Currently, Nebius is well capitalized with about $2.9Bn in cash, but if the data centers don't generate enough cash, there could be dilution to raise more capital, or sales of stakes in its four other businesses. This is very likely to happen in 2026.

Negatives and challenges

Provide value to customers beyond brute computing power: Over time, data center rentals will get commoditized and become price-sensitive. The DeepSeek modeling workarounds suggest that brute computing power will morph into smaller, specialized, or customized requirements. This could also work to Nebius' advantage, since it can provide a full stack: both a challenge and an opportunity.
Pricing could be a challenge: The trend toward lower training costs should continue. As of now, Goldman Sachs projects data center demand to exceed supply by about 2:1, and the gap is unlikely to be filled even with rapid deployment through 2026, but Nebius needs to stay on top of pricing to ensure it generates enough cash to keep spending on growth.
High Capex needs: Most of the forecasted growth is based on Capex of possibly $2.5Bn to $3Bn in each of 2025 and 2026. Nebius is currently well capitalized with over $2.9Bn in cash, but investors will need to be patient with this outlay and be prepared for dilution.

Valuation:

Nebius has forecast an ARR (Annual Recurring Revenue) run rate of $750Mn to $1Bn for 2025, based on roughly $60-80Mn of monthly revenue in December 2025, annualized; it is not the ARR today. ARR would grow from the current ~$300Mn to ~$875Mn (the midpoint) by the end of 2025. Recognized annual revenue is normally much lower than ARR, since ARR includes contracted revenues that are then pro-rated over the year. An exit ARR of $875Mn could imply 2025 revenue of between $400Mn and $500Mn. That is still about 3x the estimated 2024 revenue of $137Mn, so there is tremendous growth (but at this stage, a lot of estimates!).
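A quick sketch of the run-rate arithmetic above. Treating the $60-80Mn figure as monthly revenue and taking simple midpoints is my reading of the guidance, not a company disclosure.

```python
# Annualizing Nebius' guided December-2025 run rate (all figures in $Mn).
monthly_low, monthly_high = 60, 80
arr_low, arr_high = monthly_low * 12, monthly_high * 12   # $720Mn to $960Mn exit ARR
arr_mid = (arr_low + arr_high) / 2                        # ~$840Mn, near the ~$875Mn midpoint cited

# Implied recognized 2025 revenue and the resulting multiple at a $10.4Bn market cap
revenue_low, revenue_high = 400, 500
market_cap = 10_400
print(f"Exit-ARR range: ${arr_low}Mn to ${arr_high}Mn (mid ~${arr_mid:.0f}Mn)")
print(f"P/S on implied 2025 revenue: {market_cap / revenue_high:.0f}x "
      f"to {market_cap / revenue_low:.0f}x")   # ~21x to 26x, in line with the 20x-25x cited
```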
At a market cap of $10.4Bn, we're looking at 20x to 25x 2025 revenue, so the price may have gotten ahead of itself.
This is a thinly traded stock with rampant speculation, and I think the best move is to sell 25% to 50% before earnings in case the quarterly results and forecasts disappoint; I'm already sitting on a decent profit in a short time, and I'll keep the rest for the long term. Nebius reports pre-market on Thursday, Feb 20.
I would like to see more visibility before committing to invest more.

Competition

CoreWeave (which is private, and also an Nvidia strategic partner) had estimated revenue of $2.4Bn in 2024 and, with the addition of 9 new data centers taking its total to 23, is very likely to have around $8Bn of sales in 2025.
CoreWeave was last valued at around $23Bn but is targeting an IPO valuation of $35Bn, giving it an estimated sales multiple of anywhere between just 4x and 9x for 2025, way below Nebius.
Even if we assume that the other businesses contribute an additional 25%, or $125Mn, in 2025 revenues, we're still valuing Nebius much higher than CoreWeave, a larger and more established competitor with Nvidia as a partner and Microsoft as a customer.

That makes me wary; I’d be happy if my sales estimates are too low, but if they are not, then I would rather wait for dips.

Categories
Cloud Service Providers Industry Semiconductors Stocks Technology

Marvell’s Investment Case Got Stronger With Hyperscaler Capex

Marvell Technology (MRVL) $114

I missed buying this in the low 90s, waiting to see if its transformation into an AI chip company was complete. Its cyclical past and non-performing business segments made me hesitate; besides, far too many promises have been made in the AI space only for investors to be disappointed.

Marvell has been walking the talk: Q3 results in December 2024 were exemplary, and guidance was even better.

Hyperscaler demand

With a planned Capex of $105Bn for 2025, Amazon confirmed on their earnings call that the focus will continue on custom silicon and inferencing. Amazon and Marvell have a five-year, “multi-generational” agreement for Marvell to provide Amazon Web Services with the Trainium and Inferentia chips and other data center equipment. Since the deal is “multi-generational,” Marvell will continue to supply the released Trainium2 5nm (Trn2) while also supplying the newly-announced Trainium3 (Trn3) on the 3nm process node expected to ship at the end of 2025. Amazon is an investor in Anthropic with plans to build a supercomputing system with “hundreds of thousands” of Trainium2 chips called Project Rainier. The DeepSeek aftermath does suggest a further democratization of AI, as inference starts gaining prominence from 2026.

Critically, like the other hyperscalers (Microsoft, Meta, and Alphabet), Amazon announced a high Capex (Capital Expenditure) plan of $105Bn for 2025 for its AI cloud and data center buildout, 27% higher than 2024, which itself was 57% higher than the previous year. It was the last of the big four to confirm that massive AI spending was very much on the cards for 2025.

Here's the scorecard for 2025 Capex, totaling over $320Bn. A few months back, estimates were swirling around $250Bn to $275Bn, and Goldman had circulated $300Bn in total Capex for the year; these four have already planned more.

  • Amazon: $105Bn
  • Microsoft: $80Bn
  • Alphabet: $75Bn
  • Meta: $60 to $65Bn
  • Total: over $320Bn
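Summing the scorecard (with Meta taken at its midpoint) confirms the $320Bn-plus figure:

```python
# 2025 hyperscaler Capex scorecard from the article, in $Bn (Meta at its midpoint).
capex = {"Amazon": 105, "Microsoft": 80, "Alphabet": 75, "Meta": 62.5}
total = sum(capex.values())
print(f"Total planned 2025 capex: ${total:.1f}Bn")  # $322.5Bn, i.e. over $320Bn
```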

The earnings call discussed DeepSeek R1 and the lower AI cost structure that it may presage, with the possibility of lower revenue for AI cloud services.

“We have never seen that to be the case,” Amazon CEO Andy Jassy said on the call. “What happens is companies will spend a lot less per unit of infrastructure, and that is very, very useful for their businesses. But then they get excited about what else they could build that they always thought was cost prohibitive before, and they usually end up spending a lot more in total on technology once you make the per unit cost less.”
Amazon plans to spend heavily on custom silicon and focus on inference as well besides buying Blackwells by the truckload.

Q3-FY2025

Marvell reported impressive Q3 results that beat revenue estimates by 4% and adjusted EPS estimates by 5.5%, led by strong AI demand. FQ3 revenue accelerated to 6.9% YoY and 19.1% QoQ growth to $1.52 billion, helped by a stronger-than-expected ramp of the AI custom silicon business.

For the next quarter, management expects revenue to grow 26.2% YoY and 18.7% QoQ to $1.8 billion at the midpoint. The Q4 guide beats revenue estimates by 9.1% and adjusted EPS estimates by 13.5%. Management expects to significantly exceed the full-year AI revenue target of $1.5 billion and indicated that it could easily beat the FY2026 AI revenue target of $2.5 billion.

Marvell has other segments, accounting for 27% of the business, that are not performing as well, but management is going full steam ahead on the custom silicon business and expects the data center segment to exceed 73% of revenue in the future.

  • Adjusted operating margin: 29.7% vs. 29.8% last year, and better than the management guide of 28.9%.
  • Management guidance for Q4 is even higher at 33%.
  • Adjusted net income: $373Mn, or 24.6% of revenue, compared to $354.1Mn, or 25% of revenue, last year.
  • Management has also committed to GAAP profitability in Q4, and continued improvements.

Custom silicon: There are estimates of a TAM (Total Addressable Market) of $42 billion for custom silicon by CY2028, of which Marvell could take a 20% market share, or $8Bn, of the custom silicon AI opportunity. I suspect we will see a new forecast when the company can talk more openly about an official announcement. On the networking side, the TAM is another $31 billion.

“Oppenheimer analyst Rick Schafer thinks that each of Marvell’s four custom chips could achieve $1 billion in sales next year. Production is already ramping up on the Trainium chip for Amazon, along with the Axion chip for the Google unit of Alphabet. Another Amazon chip, the Inferentia, should start production in 2025. Toward the end of next year, deliveries will begin on Microsoft’s Maia-2, which Schafer hopes will achieve the largest sales of all.”

Key weaknesses and challenges

Marvell carries $4Bn in legacy debt, which will weigh on its valuation.

The stock is already up 70% in the past year, and is volatile – it dropped $26 from $126 after the DeepSeek and tariffs scare.

Custom silicon (ASICs, or Application-Specific Integrated Circuits) faces strong competition from the likes of Broadcom, and everyone is chasing market leader Nvidia. Custom silicon, as the name suggests, is not widely used the way an Nvidia GPU is, and will encounter more difficult sales cycles and buying programs.

Drops in AI buying from data center giants will hurt Marvell.

Over 50% of Marvell’s revenue comes from China, and it could become a victim of a trade war.

Valuation: The stock sells for a P/E of 43, with earnings growth of 80% in FY2025 and 30% a year for the two years after that; that is reasonable. It has a P/S ratio of 12.6, with growth of 25%. It's a bit expensive on the sales metric, but with AI taking an ever-larger share of the revenue pie, this multiple could increase.
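A growth-adjusted view of those multiples, using only the figures above:

```python
# PEG-style ratios from the article's Marvell figures.
pe, earnings_growth = 43.0, 30.0     # P/E and % earnings growth after FY2025
ps, sales_growth = 12.6, 25.0        # P/S and % revenue growth

peg = pe / earnings_growth           # ~1.4: reasonable for the growth on offer
ps_to_growth = ps / sales_growth     # ~0.5: richer on the sales metric

print(f"PEG: {peg:.2f}  P/S-to-growth: {ps_to_growth:.2f}")
```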

Categories
AI Cloud Service Providers Industry Semiconductors Stocks

Hyperscalers, Meta and Microsoft Confirm Massive Capex Plans

Meta (META) has committed to $60-65Bn of Capex and Microsoft (MSFT) to $80Bn. After the DeepSeek revelations, this is a great sign of confidence for Nvidia (NVDA), Broadcom (AVGO), Marvell (MRVL), and other semiconductor companies. Nvidia, Broadcom, and Marvell should continue to see solid demand in 2025.

Meta CEO Mark Zuckerberg also mentioned that one advantage Meta has (and other US firms, by the same rationale) is a continuous supply of chips, which DeepSeek will not have, so US customers like Meta should easily outperform when it comes to scaling and servicing customers. (They will fine-tune Capex between training and inference.) Meta is also looking at custom silicon for other workloads, which will help Broadcom and Marvell.

Meta executives specifically called out a machine-learning system designed jointly with Nvidia as one of the factors driving better-personalized advertising. This is a good partnership and I don’t see it getting derailed anytime soon.

Meta also talked about how squarely focused it is on software and algorithm improvements. Better inference models are the natural progression and the end goal of AI. The goal is to make AI pervasive in all kinds of apps for consumers, businesses, medical breakthroughs, and so on. For that to happen, you still need scalable computing power to reach the threshold where models have been trained enough to provide better inference and/or be generative enough for a specific domain or area of expertise.

This is the tip of the iceberg; we're not anywhere close to reducing the spend. Most forecasts I looked at see data center training spend growth slowing down only in 2026, with spending on inference then growing at a slower speed. Nvidia's consensus revenue forecasts show a 50% revenue gain in 2025 and 25% thereafter, so we still have a long way to go.

I also read that Nvidia's GPUs are doing 40% of inference work; they're very much on the ball on inference.

The DeepSeek impact: If DeepSeek's breakthrough in smarter inference had been announced by a non-Chinese or an American company, and if they hadn't claimed a cheaper cost, it wouldn't have made the impact it did. The surprise element was the reported total spend, and the claim that they didn't have access to GPUs; it was meant to shock and awe and create cracks in the massive spending ecosystem, which it is doing. But the reported total spend, or not using high-end GPUs, doesn't seem plausible, at least to me. Here's my earlier article detailing some of the reasons. The Chinese government has subsidized every export entry to the world, from furniture to electric vehicles, so why not this one? That has been their regular go-to-market strategy.

Cheaper LLMs are not a plug-and-play replacement. They will still require significant investment and expertise to train and to create an effective inference model. I think GPU requirements will not diminish, because you need GPUs for training and time scaling, and smarter software will still need to distill data.

A 10x reduction in cost is a good target, but it will compromise quality and performance. Eventually, the lower-tier market will get crowded and commoditized (democratized, if you will), which may require cheaper versions of hardware and architecture from AI chip designers, as an opportunity to serve lower-tier customers.

American companies will have to work harder, for sure. Customers want cheap (Databricks' CEO's phone hasn't stopped ringing with requests for alternative solutions), unless they TikTok this one as well…

Categories
AI Cloud Service Providers Industry Semiconductors Stocks

DeepSeek Hasn’t Deep-Sixed Nvidia

01/28/2025

Here is my understanding of the DeepSeek breakthrough and its repercussions for the AI ecosystem.

DeepSeek used "time scaling" effectively, which allows its r1 model to think deeper at the inference phase. By using more compute instead of coming up with an answer immediately, the model takes longer to search for a better solution and then answers the query better than existing models.

How did the model get to that level of efficiency?

DeepSeek used a lot of interesting and effective techniques to make better use of its resources, and this article from NextPlatform does an excellent job with the details.

Besides effective time scaling, the model distilled answers from other models, including ChatGPT's models.

What does that mean for the future of AGI, AI, ASI, and so on?

Time scaling will be adopted more frequently, and tech leaders across Silicon Valley are responding to improve their methods as cost-effectively as possible. That is the logical next step: for AI to be any good, superior inference was always going to be the differentiator and the value addition.

Time scaling can be done at the edge as the software gets smarter.

If the software gets smarter, will it require more GPUs?

I think GPU requirements will not diminish, because you need GPUs for training and time scaling, and smarter software will still need to distill data.

Cheaper LLMs are not a plug-and-play replacement. They will still require significant investment and expertise to train and to create an effective inference model. A 10x reduction in cost is a good target, but it will compromise quality and performance. Eventually, the lower-tier market will get crowded and commoditized (democratized, if you will), which may require cheaper versions of hardware and architecture from AI chip designers, as an opportunity to serve lower-tier customers.

Inferencing

Over time, yes, inference will become more important. Nvidia has long been talking about the scaling law, which diminishes the role of training and raises the need for smarter inference. They are working on this as well; I even suspect that the $3,000 Digits machine they showcased for edge computing will provide some of the power needed.

Reducing variable costs per token/query is huge: the variable cost will come down, which is a huge boon to the AI industry; previously, retrieving and answering tokens could cost more than an entire monthly subscription to ChatGPT or Gemini.

From Gavin Baker on X on APIs and Costs:

R1 from DeepSeek seems to have done that: "r1 is cheaper and more efficient to inference than o1 (ChatGPT). r1 costs 93% less to *use* than o1 per API call, can be run locally on a high-end workstation and does not seem to have hit any rate limits, which is wild."

However, “Batching massively lowers costs and more compute increases tokens/second so still advantages to inference in the cloud.”

It is comparable to o1 from a quality perspective, although it lags o3.

There were real algorithmic breakthroughs that led to it being dramatically more efficient both to train and inference.  

On training costs and real costs:

Training in FP8, MLA and multi-token prediction are significant.  It is easy to verify that the r1 training run only cost $6m.

The general consensus is that the "REAL" costs of the DeepSeek model are much larger than the $6Mn given for the r1 training run.

Omitted are:

Hundreds of millions of dollars spent on prior research, and access to much larger clusters.

DeepSeek likely had more than 2,048 H800s; an equivalently smart team can't just spin up a 2,000-GPU cluster and train r1 from scratch with $6Mn.

There was a lot of distillation; i.e., it is unlikely they could have trained this without unhindered access to GPT-4o and o1, which is ironic, because we're banning the GPUs but giving access to distill leading-edge American models… Why buy the cow when you can get the milk for free?

The NextPlatform, too, expressed doubts about DeepSeek's resources:

We are very skeptical that the V3 model was trained from scratch on such a small cluster.

A schedule of Nvidia's Q3-FY2025 geographical revenues showed 15% of Nvidia's revenue, over $4Bn, "sold" to Singapore, with the caveat that it may not be the ultimate destination. This raises the possibility that DeepSeek got access to Nvidia's higher-end GPUs despite the US export ban, or stockpiled them before the ban.

Better software and inference is the way of the future

As one of the AI vendors at CES told me, she had the algorithms to answer customer questions and provide analytical insights at the edge for several customers. They have the data from their customers and the software, but they couldn't scale because AWS was charging them too much for cloud GPU usage when they didn't need that much power. So besides r1's breakthrough, this movement has been afoot for a while, and it will spur investment and innovation in inference. We will definitely continue to see demand for high-end Blackwell GPUs to train data and create better models for at least the next 18 to 24 months, after which the focus should shift to inference; as Nvidia's CEO said, 40% of their GPUs are already being used for inference.

Categories
Cloud Service Providers Enterprise Software Stocks

Oracle Deserves A Seat At The AI Table

Oracle (ORCL) at $166 is a solid investment opportunity for the next 3-5 years, with a decent shot at growing data center cloud revenues faster than its other businesses, helped by AI requirements from clients like Meta. Hyperscalers and cloud service providers are expected to spend $300Bn of Capex in 2025, boosting cloud infrastructure providers such as Oracle.

Oracle's earnings should grow 16-18% annually over the next 3-5 years, and it's very reasonably priced at 24x forward earnings of $7.05.

Its 31% GAAP operating margin is another sign of strength, especially for a legacy/mature tech company with $52Bn+ in revenue.

Revenues should grow at 12-14% annually in the next 3-4 years, which is impressive for a company of that size. Oracle's P/S multiple is not expensive at 7x sales.
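As a quick check, the forward multiple cited above follows from the price and EPS given:

```python
# Sanity check of Oracle's forward multiple from the article's figures.
price = 166.0
forward_eps = 7.05
forward_pe = price / forward_eps     # ~23.5x, roughly the 24x cited

print(f"Forward P/E: {forward_pe:.1f}x")
```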

Oracle Cloud Infrastructure's robust growth is a big catalyst for the company and the stock, and while Oracle's FQ2 double miss disappointed investors in the near term, the price drop from $191 has created an opportunity. Besides, Oracle could gain market share over time.

Oracle's modular approach and scalable infrastructure offer cost competitiveness, attracting customers. The company's ability to scale AI clusters, demonstrated by the deployment of a 65,000-GPU NVIDIA H200 supercomputer and a 336% surge in GPU consumption last quarter, further highlights its appeal.

Furthermore, Oracle’s strengthened partnership with Meta for AI training underscores its attractiveness to both large enterprises and smaller businesses. This reinforces the effectiveness of Oracle’s modular strategy, which aims to provide customers with an improved total cost of ownership (TCO) compared to leading hyperscaler competitors.

Their overall cloud segment is about 55% of revenues and growing at 25%, but the licensing segment has been stagnant for the past two years. Over time the mix will tilt more decisively toward the cloud, allowing them to maintain or even increase their multiples/valuation.

Categories
AI Cloud Service Providers Technology

Alphabet (GOOG) $165, Beset By Legal Issues Could Stay Range-Bound

An interesting article in the Wall Street Journal discusses Google’s anti-trust case in more detail. Quoting from the article:

“Some of the DOJ’s proposals were expected, such as the divestiture of the Chrome browser and a ban on payments to Apple AAPL in exchange for default or preferred placement of Google’s search engine on Apple’s devices”, which are minor and something Google could take in its stride.

But the government’s proposal of “Restoring Competition Through Syndication And Data Access”, could be more harmful in the long run.

Restoring competition through access involves Google providing its search index (essentially the massive database it has about all sites on the web) to rivals and potential rivals at "marginal cost." In my opinion, this strips Google of its IP and competitive advantages, built through decades of human and monetary capital. It is draconian and a massive overreach. It gets worse: if the government has its way, Google would also have to give those same parties full access to user and advertising data at no charge for 10 years.

For now, it's a wish list, the starting point of a high ask, which I'm sure the government expects to be whittled down to something less harmful that still gives it some bragging rights.

Points to consider

  1. This could harm/scare other tech giants.
  2. The Tunney Act makes this administration-agnostic; it guarantees judicial oversight of antitrust actions.
  3. Alphabet has significant and solid resources and defensible arguments to fight this, mainly the 2 decades of resources put into building this moat.
  4. The stock is likely to stay range-bound because of the legal issues, with most investors likely to be cautious, even though there have been strong buy calls from analysts this very morning.

I’m definitely going to hold on. While it is bad news that the DOJ is recommending that Google be forced to sell Chrome, nothing is written in stone, and there’s only a small likelihood of it actually happening.

Here are several aspects to consider.

The Chrome divestiture is not devastating: Chrome, if divested, could be valued at an estimated $20Bn, according to Bloomberg Intelligence – about 1% of Alphabet’s market cap of $2Tr, so the damage would be relatively contained.

All Roads Lead To Google Search: Even if the spinoff did happen, that doesn’t mean users would ditch Google’s search engine for rivals such as Bing and Safari, which account for less than 15% of the overall market.

The judge is unlikely to take up the recommendation: There is also the possibility the breakup doesn’t happen. Judge Amit Mehta, who will decide the remedies for Google’s illegal monopolization, could follow precedent.

“I think it’s unlikely because Judge Mehta is a very by-the-book kind of judge, and while breakups are a possible remedy under the antitrust laws, they have been generally disfavored over the last 40 years,” said Rebecca Haw Allensworth, a professor and associate dean for research at Vanderbilt Law School, in an email Monday. “He is very interested in following precedent, as was clear from his merits opinion in August, and the most relevant precedent here is Microsoft.”

The chances of an appeal are very strong: In June 2000, a judge ordered the breakup of Microsoft, but that decision was later reversed on appeal. Google has stated that it would appeal vigorously.

One of the analysts I follow had a fair point about some of Google’s “predatory or abusive” tactics on its ad-tech platforms, for which there are guidelines/rules that can be enforced against specific violations. But adopting a “European” mindset of regulating companies simply because they have strong competitive advantages/moats is completely wrong, in my opinion. If Google didn’t pay Apple $20Bn to be its default search engine, Apple users would still prefer Google Search to Safari or Bing – this was in the court documents. Penalizing Google for that preference is a massive overreach.

Google built this from scratch with tons of human and financial capital, at a time when there were several larger search engines in a fledgling, growing internet. The iPhone explosion came later. I would be very surprised if the government succeeds in destroying Alphabet.

Here is a sum-of-the-parts valuation, which, based on these estimates, gives Google a higher value than its current market cap of $2.1Tr.

Here are the WSJ and Barrons’ articles.

Categories
AI Cloud Service Providers Semiconductors Stocks

Nvidia Is An Excellent Long Term Investment

Hyperscaler Capex Shows Strong Demand For Nvidia’s (NVDA) GPUs.

I know there is excitement in the markets as Nvidia reports Q3-FY2025 earnings after the market close on Wednesday, 11/20. Nvidia earnings watch parties have become part of the zeitgeist, and its quarterly report is one of the most closely watched events each quarter.

I, however, don’t believe in quarterly gyrations and have been a long-term investor in Nvidia since 2017, having recommended it more than two years ago and then in March 2023 and again in May 2023 as part of an industry article on auto-tech.

I believe the Blackwell ramp is going strong, and reports regarding rack heating issues are just noise in a program of this size.

Capex from hyperscalers will continue to fuel demand for Nvidia’s GPUs next year and beyond, and even though the stock is expensive, it remains a great long-term investment.

Capex from hyperscalers – Nvidia’s biggest customers.

AI spending from the hyperscalers is expected to increase to $225Bn in 2024. Cumulatively, in the first nine months of the year, the key hyperscalers – Nvidia’s biggest clients – have already spent $170Bn on Capex, 56% higher than the previous year. Here are the estimates for the full year 2024:

  1. Amazon (AMZN) $75Bn 
  2. Alphabet (GOOG) $50Bn
  3. Meta (META) $38Bn to $40Bn
  4. Microsoft (MSFT) $60Bn
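
As a quick sanity check on the arithmetic (these are the estimates quoted above; Meta’s figure uses the midpoint of its $38–40Bn range), a minimal Python sketch:

```python
# 2024 Capex estimates (in $Bn) for Nvidia's biggest hyperscaler clients
capex_estimates_bn = {
    "Amazon": 75,
    "Alphabet": 50,
    "Meta": 39,      # midpoint of the $38-40Bn range
    "Microsoft": 60,
}

# The four estimates should roughly sum to the ~$225Bn expected for 2024
total = sum(capex_estimates_bn.values())
print(total)  # 224

# The $170Bn nine-month spend at 56% YoY growth implies roughly
# $109Bn spent in the same period a year earlier
prior_year_9m = 170 / 1.56
print(round(prior_year_9m))  # 109
```

The individual estimates do indeed reconcile with the ~$225Bn aggregate figure.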

On their earnings calls, hyperscaler management teams committed to continued Capex spending in 2025, though not at the 50%-plus pace seen in 2024.

When quizzed by analysts, the hyperscalers also talked about AI revenues, which, though still relatively small compared to the Capex spent, are growing quickly within their products. Amazon mentioned that its AI business through AWS is at a multibillion-dollar revenue run rate, growing in triple digits year-over-year, while Microsoft’s CEO stated that its AI business is on track to surpass a $10 billion annual revenue run rate in Q2-FY2025.

Meta and Alphabet offered more indirect evidence of AI revenues. For example, Meta believes that its AI tools improve conversion rates for its advertisers, which creates more demand. On the consumer side, Meta believes its AI has led to more time spent on Facebook and Instagram. Similarly, Alphabet spoke about Gemini improving the user experience and its use of AI in search; seven of the company’s major products—with more than two billion users—have incorporated Google’s Gemini AI model.

While some hyperscaler Capex also goes towards infrastructure and buildings, which take longer to show good returns, a fairly large chunk goes towards GPUs. That bodes well for Nvidia, which controls more than 80% of the AI-GPU market.

Beyond Capex, I also believe in AI itself – there are several areas where it has already shown promise.

Code Generation

The low-hanging fruit is being plucked: A quarter of new code at companies like Google is now initially generated by AI and then reviewed by staff. Similarly, GitLab and GitHub provide DevOps teams with comparable offerings.

Parsing and synthesizing data for product usage:

Partha Ranganathan, a technical fellow at Google Cloud, says he’s seeing more customers using AI to synthesize and analyze a large amount of complex data using a conversational interface.

Other enterprise software companies see huge upsides in selecting a large-language model and fine-tuning the model with their own unique data applied to their own product needs.

I recommended Duolingo (DUOL) for the same reasons: its AI strengths improve its language app, creating a virtuous flywheel in which data generated by its own users produces an even better product – data that exists only within Duolingo and is more powerful and useful than a generic ChatGPT product.

Using AI for medical breakthroughs

Pharmaceutical giants like Bristol Myers are using AI for drug discovery at a pace that was impossible before AI and LLMs became available. These are computational problems that need powerful GPUs to research, compute, and process for clinical trials.

Who is the indispensable, ubiquitous, and default option to turn these dreams into reality? Nvidia and its revolutionary Blackwell GPUs – the GB200 NVL72 AI system, which links 72 GPUs together inside one server rack. That differentiates Nvidia from lesser lights like AMD and Broadcom, whose AI run rates of $5.5Bn and $11Bn, respectively, make them minnows compared to the $130Bn behemoth that derives 80% of its revenue from AI/data-center GPUs.

I believe we are in the first innings of AI and Nvidia will continue to lead the way. I continue to buy Nvidia on declines.

Categories
AI Cloud Service Providers Stocks

Alphabet Deserves A Better Valuation

I had recommended Alphabet (GOOG) as a great long-term buy between $150 and $170 on several occasions.

Last evening, Google knocked it out of the park with really stellar results. I bought more shares this morning, and am reiterating a Buy.

I believe analysts’ consensus earnings are a bit conservative and Google will continue to beat estimates with better growth and operating margins.

Google’s earnings quality is better than several tech giants for the following reasons.

  • It has a near monopoly in Search
  • Market leadership in media with YouTube.
  • A strong first-mover advantage with Waymo.
  • A fast-growing Google Cloud business, third behind Azure and AWS and catching up.

Its earnings and growth are sustainable, thus it deserves a better valuation and multiple.

Let’s take a closer look at Q3 earnings.

Q3 GAAP EPS came in at $2.12 per share, beating expectations of $1.85 per share by $0.27, or 14% – a substantial beat.

Revenue of $88.3Bn (+14.9% Y/Y) beat by $2.05B or 3%.

Consolidated Alphabet revenues in Q3 2024 increased 15% YoY, or 16% in constant currency, to $88.3Bn, reflecting strong momentum across the business.

Google Services revenues increased 13% to $76.5 billion, led by strength across Google Search & other, Google subscriptions, platforms, and YouTube ads.

Total operating income increased 34%, and operating margin jumped a huge 4.5 percentage points to 32%.

Google Cloud revenues grew a whopping 35% to $11.4Bn, led by accelerated growth in Google Cloud Platform (GCP) across AI infrastructure, generative AI solutions, and core GCP products, with record operating margins of 17%. The cost per AI query has decreased by 90% over the past 18 months.

Cloud titans Amazon (AWS) and Microsoft (Azure) command huge valuations for their cloud computing businesses; with Google Cloud growing at 35%, it should continue to narrow the gap over the next five years. Just as importantly, AWS and Azure have operating margins over 30%; if Google continues to scale and leverage its existing fixed costs, it can reach the same margins. I also believe that as they get better at AI, they should be able to charge more.

Based on consensus analysts’ estimates, Alphabet’s EPS should grow to $11.60 in 2027 from $5.80 in 2023 – an annual growth rate of roughly 19%. Comparatively, Apple’s estimated EPS growth through FY2027 is slower at 14%, yet it sports a P/E of 33 compared to Google’s 22. Alphabet’s P/E is closer to the S&P 500’s P/E of 21!
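
The growth-rate arithmetic behind that claim is a standard CAGR calculation, sketched below using the consensus EPS figures quoted above (2023 to 2027 is four years of compounding):

```python
# Compound annual growth rate implied by the consensus EPS path:
# Alphabet EPS of $5.80 in 2023 growing to $11.60 in 2027
eps_2023, eps_2027, years = 5.80, 11.60, 4

# CAGR = (end / start) ** (1 / years) - 1
cagr = (eps_2027 / eps_2023) ** (1 / years) - 1
print(f"{cagr:.1%}")  # 18.9%
```

Since the EPS exactly doubles over four years, this is simply the fourth root of 2, or about 18.9% per year.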

I believe this is too low, and there is a lot of potential for its stock to appreciate just on the lower valuation.

Besides the strong EPS, a lot of Google’s expenses are noncash depreciation and amortization, and its cash flow margins are strong: it generated $31Bn of operating cash on $88Bn of revenue last quarter, a 35% cash flow margin.

The antitrust case will remain a possible overhang on Alphabet, but the final decision is still years away as Alphabet vigorously appeals.

I recommend Alphabet as a buy at $176.