
Fountainhead Investing

  • Objective Analysis: Research On High Quality Companies With Sustainable Moats
  • Tech Focused: 60% Allocated To AI, Semiconductors, Technology

5 Star Tech Analyst Focused On Excellent Companies With Sustainable Moats

Categories
Industry Stocks Technology

Confluent’s Excellent Quarter Is A Major Inflection Point

02/11/2025

Confluent (CFLT) $37 – Still worth buying.

I’ve owned it for over two years but will pyramid it further (add smaller quantities on a large base).

Why is this company still worth investing in after a 20% post-earnings bump?

Four important catalysts

Databricks partnership: The partnership with Databricks, which is much better known and more highly valued, increases brand awareness and opens a lot of new opportunities and doors.

This could accelerate growth from the current 22-23%.

Strong customer base: 90% of its revenue comes from clients with $100K+ in ARR.

The $1Mn+ cohort saw the highest growth, and Confluent managed net revenue retention of 117%, indicating strong upselling.
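As a back-of-the-envelope illustration (my own sketch, not company guidance), here is how a 117% net retention rate compounds the existing customer base even before any new customers are signed:

```python
# Sketch: compounding an existing customer base at a constant net
# retention rate. The $100 base is illustrative, not Confluent's actual ARR.

def project_arr(base_arr: float, nrr: float, years: int) -> float:
    """Compound base ARR at a constant net retention rate."""
    return base_arr * nrr ** years

base = 100.0  # illustrative starting ARR
for year in (1, 2, 3):
    print(f"Year {year}: {project_arr(base, 1.17, year):.1f}")
```

At 117%, the existing base grows about 60% in three years on upsells alone.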

A changing data processing market: The entire batch processing model could be up for grabs – customers moving at the speed of light and willing to pay for the latest technology could make this a huge TAM.

This is a paradigm shift that Confluent has been building toward for a decade. 2025 might be that inflection year, with all the AI build-outs and use cases likely to need live processing – and Confluent is the leader in that field. To be sure, streaming won’t make batch processing obsolete (why spend money processing data in real time when it doesn’t need it?), but it could take a large chunk of that market.

Snowflake acquiring Redpanda: Snowflake is reportedly trying to buy streaming competitor Redpanda for about 40x sales. While it’s not an obvious comparison, Redpanda is supposedly less than 10% of Confluent’s revenue but growing at 200-300%. It’s the synergy with the larger data provider that’s getting it a massive price tag – Snowflake would love to have this arrow in its quiver of data tools.

Confluent is best positioned to take advantage of the possible shift from batch processing to data streaming; its founders invented Apache Kafka, the open-source standard for data streaming. And while its own invention is available for free, managing and maintaining it at scale needs the paid version. Over the years, with the focus on Confluent Cloud, Confluent has come to get 90% of its $1Bn revenue from customers with over $100K in ARR.

Confluent has the cash, the tech chops, and the focus – sure, Apache Kafka is open source, and cloud service providers like AWS and Microsoft provide plenty of competition, but no one has the product breadth that Confluent does.

I would not be surprised if Confluent’s multiple expands from the current 8x sales after this earnings call.

Here are the details of the Q4 (December 2024) earnings:

  • Q4 Non-GAAP EPS of $0.09 beat by $0.03.
  • Revenue of $261.2Mn (+22.5% YoY) beat by $4.32Mn.
  • Q4 subscription revenue of $251Mn, up 24% YoY.
  • Q4 Confluent Cloud revenue of $138Mn, up 38% YoY.
  • 2024 subscription revenue of $922Mn, up 26% YoY.
  • 2024 Confluent Cloud revenue of $492Mn, up 41% YoY.
  • 1,381 customers with $100,000 or greater in ARR, up 12% YoY.
  • 194 customers with $1Mn or greater in ARR, up 23% YoY.

Financial Outlook

Q1 2025 Outlook:

  • Subscription Revenue: $253-$254 million
  • Non-GAAP Operating Margin: ~3%
  • Non-GAAP Net Income Per Diluted Share: $0.06-$0.07 vs. consensus of $0.06

FY 2025 Outlook:

  • Subscription Revenue: $1.117-$1.121 billion
  • Non-GAAP Operating Margin: ~6%
  • Non-GAAP Net Income Per Diluted Share: ~$0.35 vs. consensus of $0.35
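Working back from that guide against the $922Mn of 2024 subscription revenue reported above, the implied FY2025 growth is roughly 21-22%, a modest deceleration from 2024’s 26%:

```python
# Implied FY2025 subscription growth from the guidance range above,
# versus the reported 2024 subscription revenue of $922Mn.

fy2024 = 922.0                          # $Mn, reported 2024 subscription revenue
guide_low, guide_high = 1117.0, 1121.0  # $Mn, FY2025 guidance range

growth_low = guide_low / fy2024 - 1
growth_high = guide_high / fy2024 - 1
print(f"Implied FY2025 growth: {growth_low:.1%} to {growth_high:.1%}")
```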

Categories
AI Industry Semiconductors Stocks

ASML – An Excellent Company That’s Still A Bargain

The Monopoly

As the source of all things AI and related, ASML is (and has been for the past decade) the monopoly supplier of the EUV lithography machines used to manufacture the most advanced chips, including the GPUs from Nvidia and others. No other manufacturer can do this at scale.

Around October 2024, on their earnings call, they disappointed the market with 2025-2026 revenue guidance 10-15% below forecasts, as one of their customers (very likely Intel) did not place an expected large order of EUV machines. Intel’s troubles are well known, and this order is unlikely to come back. ASML also feared export controls to China and/or weakness in Chinese demand after 3-4 years of rapid growth.

I bought and recommended buying on 10/27/2024 at $690, with the following comments:

Sure, it could stay sluggish, range-bound, or fall till there’s some improvement in bookings, export controls to China, etc. Perhaps that may not even happen for a while.

I think that’s an acceptable risk, now I’m getting a monopoly at a 37% drop from its 52-week high of $1,110, still growing revenue at 12% and EPS at 22%, selling for 8x sales and 25x earnings.

With TSM’s results, we saw how strong AI semiconductor demand still is and there was absolutely no let-up in their guidance.

A monopoly for AI chip production – an essential cog, without which AI is not possible – is definitely worth the risk.

Fast forward to the next quarter: the dynamic is much better, and the price hasn’t shot beyond affordable.

Bottom line: A must-have. It’s always going to be priced at a premium given its monopoly status and the strength of the AI market, so returns are likely not going to be like a fast-growing tech stock’s, but I’m confident of getting a 14-16% annualized return in the next 5-10 years.
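For context, here is what that 14-16% annualized range compounds to (plain arithmetic, not a price target):

```python
# What a 14-16% annualized return compounds to over 5 and 10 years.

def multiple(annual_return: float, years: int) -> float:
    """Total return multiple from compounding at a constant annual rate."""
    return (1 + annual_return) ** years

for r in (0.14, 0.16):
    print(f"{r:.0%}: {multiple(r, 5):.2f}x in 5 years, {multiple(r, 10):.2f}x in 10 years")
```

Roughly a double in five years and about 4x in ten.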

ASML’s Q4-2024 results on 01/29/2025 were excellent:

ASML beats expectations as bookings soared.

The EUV machines leader grew Q4 revenue 28% YoY to €9.26B and 24% QoQ, beating estimates.

Bookings: ASML’s Q4 bookings came in huge at €7.09B, way ahead of estimates of €3.53B, with net new adds of €3B.

On the earnings call, CEO Christophe Fouquet had this to say about AI and sales to China:

AI is the clear driver. I think we started to see that last year. In fact, at this point, we really believe that AI is creating a shift in the market and we have seen customers benefiting from it very strongly. Others maybe a bit less.

We had a lot of discussion about China in 2023-2024 because our revenue in China was extremely high. We have explained that this was caused by the fact that we are still working on some backlog created in 2022, when our capacity was not big enough to fulfil the whole market. 2025 will be a year where we see China going back to a more normal ratio in our business. We are going to see numbers people used to see before 2023.

The USA led sales with a 28% share in the fourth quarter of 2024, edging out China’s 27%, out of about €7.12Bn in total.

Challenges remain as the AI arms race gets hotter:

ASML has not been able to sell its EUV machines to China because of U.S.-led export curbs designed to restrict China from getting the advanced lithography equipment needed to manufacture cutting-edge chips like Nvidia’s H100s or the new-generation Blackwells.

From 2025, ASML will provide a backlog of orders on an annual basis instead of bookings to more accurately reflect its business.

Guidance: 2025 total net sales remain the same, between €30B and €35B. Q1-2025 is slightly higher with total net sales to be between €7.5B and €8.0B versus consensus of €7.24B.

ASML remains an excellent opportunity and I plan to add it on declines.

Categories
AI Enterprise Software Industry Market Outlook Stocks

AI And The Multiplier Effect From Software

02/11/2025

The Software Multiplier Effect: An interesting note on artificial intelligence from Wedbush’s Dan Ives, who believes that software AI players will likely get 8 times the revenue of hardware sellers – i.e., a multiplier effect of 8:1 from software.

He is directionally right, and I do agree with him about the multiplier effect of software, services, and platforms on top of hardware sales. I had done a primary study several years ago with companies like Oracle, IBM, and Salesforce, among others, and we saw similar feedback of about 6 to 1 for software spend over hardware spend over time. People naturally cost more.

Nonetheless, whether it is 6 to 1 or 8 to 1, both numbers are huge and, in my opinion, extremely likely in the next 5 to 10 years, and Palantir’s (PLTR) December-quarter earnings hit it out of the park.

Dan Ives said:

Palantir Technologies (NASDAQ:PLTR) and Salesforce (NYSE:CRM) remain the two best software plays on the AI Revolution for 2025.

The firm also recommended other software vendors such as Oracle (ORCL), IBM (IBM), Innodata (INOD), Snowflake (SNOW), MongoDB (MDB), Elastic (ESTC), and Pegasystems (PEGA) as likely to enjoy the AI spoils.

Analysts led by Daniel Ives said:

Palantir has been a major focus during the AI Revolution with expanding use cases for its marquee products leading to a larger partner ecosystem with rapidly rising demand across the landscape for enterprise-scale and enterprise-ready generative AI.

Major Growth Expected: The analysts added that this will be a major growth driver for the U.S. Commercial business over the next 12 to 18 months as more enterprises take the AI path with Palantir. They believe “Palantir has a credible path to morph into the next Oracle over the coming decade” with Artificial Intelligence Platform, or AIP, leading the way.

Wedbush’s feedback about budget allocations is very helpful, and even if one discounts Dan Ives’ perpetual optimism and bullishness somewhat, it’s a great indicator that this will be a favored sector in 2025-2028.

Ives and his team have been tracking several large companies that are using or planning to adopt AI in 2025, to gauge enterprise AI spending, use cases, and which vendors are separating from the pack in the AI Revolution.

The numbers are gratifying:

Analysts expect that AI now accounts for about 10% of many of the 2025 IT budgets they are tracking, and in some cases up to 15%, as many chief information officers (CIOs) have accelerated their AI strategy over the next six to nine months, with monetization of this key theme starting to become a reality across many industries.

“While the first steps in AI deployments are around Nvidia (NVDA) chips and the cloud stalwarts, importantly we estimate that for every $1 spent on Nvidia, there is an $8-$10 multiplier across the rest of the tech ecosystem,” said Ives and his team.
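Taking the quote at face value, the arithmetic is striking; here is a toy version (the hardware figure is purely illustrative, not an actual Nvidia number):

```python
# Toy arithmetic for the claimed $8-$10 ecosystem multiplier per $1 of
# Nvidia spend. The $100Bn hardware base is an assumed round number.

hardware_spend = 100.0  # $Bn of Nvidia spend, illustrative only
for m in (8, 10):
    print(f"{m}x multiplier -> ${hardware_spend * m:,.0f}Bn across the rest of the tech ecosystem")
```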

What’s more important?

Analysts noted that about 70% of customers they have talked to have accelerated their AI budget dollars and initiatives over the last six months. The analysts added that herein is the huge spending that is now going on in the tech world, with $2T of AI capital expenditure over the next three years fueling this generational tech spending wave.

Hyperscalers indicated supreme confidence in their AI strategy, committing in excess of $300Bn in Capex for 2025, which is historic. Amazon’s CEO Andy Jassy was categorical in stating that AWS doesn’t spend till they’re certain of demand.

Underscoring that confidence, Ives and his team said that they are seeing many IT departments focused on foundational hyperscaler deployments for AI around Microsoft (MSFT), Amazon (AMZN), and Google (GOOG) (GOOGL), with a focus on software-driven use cases currently underway.

“The AI Software era is now here in our view,” said Ives and his team. Wedbush strongly believes that the broader software space will expand the AI revolution further, cementing what I saw at CES last month. There is so much computing power available, and so many use cases exploding, that this space could see a major inflection point in 2025-2026.

Large language models (LLMs) and the adoption of generative AI should be a major catalyst for the software sector.

Categories
Cloud Service Providers Industry Semiconductors Stocks Technology

Marvell’s Investment Case Got Stronger With Hyperscaler Capex

Marvell Technology (MRVL) $114

I missed buying this in the low 90s, waiting to see if their transformation to an AI chip company was complete. Its cyclical past, with non-performing business segments, made me hesitate; besides, far too many promises have been made in the AI space only for investors to be disappointed.

Marvell has been walking the talk: Q3 results in Dec 2024 were exemplary, and guidance even better.

Hyperscaler demand

With a planned Capex of $105Bn for 2025, Amazon confirmed on their earnings call that the focus will continue on custom silicon and inferencing. Amazon and Marvell have a five-year, “multi-generational” agreement for Marvell to provide Amazon Web Services with the Trainium and Inferentia chips and other data center equipment. Since the deal is “multi-generational,” Marvell will continue to supply the released Trainium2 (Trn2) on the 5nm node while also supplying the newly announced Trainium3 (Trn3) on the 3nm process node, expected to ship at the end of 2025. Amazon is an investor in Anthropic, with plans to build a supercomputing system with “hundreds of thousands” of Trainium2 chips, called Project Rainier. The DeepSeek aftermath does suggest a further democratization of AI, as inference starts gaining prominence from 2026.

Critically, like the other hyperscalers (Microsoft, Meta, and Alphabet), Amazon announced a high Capex (capital expenditure) plan of $105Bn for 2025 – 27% higher than 2024, which itself was 57% higher than the previous year – for AI cloud and datacenter buildout. It was the last of the big four to confirm that massive AI spending was very much on the cards for 2025.

Here’s the scorecard for 2025 Capex, totaling over $320Bn. A few months back, estimates were swirling around $250Bn to $275Bn. Goldman had circulated $300Bn in total Capex for the year, and these four have already planned more.

  • Amazon: $105Bn
  • Microsoft: $80Bn
  • Alphabet: $75Bn
  • Meta: $60-65Bn
  • Total: ~$320Bn
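The scorecard sums cleanly (taking Meta at both ends of its guided range), consistent with the $320Bn+ total above:

```python
# Summing the 2025 hyperscaler Capex plans listed above (all in $Bn).

capex = {"Amazon": 105, "Microsoft": 80, "Alphabet": 75}
meta_low, meta_high = 60, 65  # Meta's guided range

total_low = sum(capex.values()) + meta_low
total_high = sum(capex.values()) + meta_high
print(f"Total 2025 Capex: ${total_low}Bn to ${total_high}Bn")
```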

The earnings call discussed DeepSeek R1 and the lower AI cost structure that it may presage, with the possibility of lower revenue for AI cloud services.

“We have never seen that to be the case,” Amazon CEO Andy Jassy said on the call. “What happens is companies will spend a lot less per unit of infrastructure, and that is very, very useful for their businesses. But then they get excited about what else they could build that they always thought was cost prohibitive before, and they usually end up spending a lot more in total on technology once you make the per unit cost less.”
Amazon plans to spend heavily on custom silicon and focus on inference as well besides buying Blackwells by the truckload.

Q3-FY2025

Marvell reported impressive Q3 results that beat revenue estimates by 4% and adjusted EPS estimates by 5.5%, led by strong AI demand. FQ3 revenue grew 6.9% YoY and 19.1% QoQ to $1.52 billion, helped by a stronger-than-expected ramp of the AI custom silicon business.

For the next quarter, management expects revenue to grow 26.2% YoY and 18.7% QoQ to $1.8 billion at the midpoint. The Q4 guide beats revenue estimates by 9.1% and adjusted EPS estimates by 13.5%. Management expects to significantly exceed the full-year AI revenue target of $1.5 billion and indicated that it could easily beat the FY2026 AI revenue target of $2.5 billion.

Marvell has other segments, accounting for 27% of the business, that are not performing as well, but it’s going full steam ahead on the custom silicon business and expects total data center to exceed 73% of revenue in the future.

  • Adjusted operating margin of 29.7% vs. 29.8% last year, and better than the management guide of 28.9%.
  • Management guidance for Q4 is even higher at 33%.
  • Adjusted net income of $373Mn, or 24.6% of revenue, compared to $354.1Mn, or 25% of revenue, last year.
  • Management has also committed to GAAP profitability in Q4, and continued improvements.

Custom Silicon – There are estimates of a TAM (Total Addressable Market) of $42 billion for custom silicon by CY2028, of which Marvell could take 20% market share, or $8Bn, of the custom silicon AI opportunity. I suspect we will see a new forecast when the company can more openly talk about an official announcement. On the networking side, the TAM is another $31 billion.

“Oppenheimer analyst Rick Schafer thinks that each of Marvell’s four custom chips could achieve $1 billion in sales next year. Production is already ramping up on the Trainium chip for Amazon, along with the Axion chip for the Google unit of Alphabet. Another Amazon chip, the Inferentia, should start production in 2025. Toward the end of next year, deliveries will begin on Microsoft’s Maia-2, which Schafer hopes will achieve the largest sales of all.”

Key weaknesses and challenges

Marvell carries $4Bn in legacy debt, which will weigh on its valuation.

The stock is already up 70% in the past year, and is volatile – it dropped $26 from $126 after the DeepSeek and tariffs scare.

Custom silicon (ASICs, or Application-Specific Integrated Circuits) faces strong competition from the likes of Broadcom, and everyone is chasing market leader Nvidia. Custom silicon, as the name suggests, is not widely used like an Nvidia GPU and will encounter more difficult sales cycles and buying programs.

Drops in AI buying from data center giants will hurt Marvell.

Over 50% of Marvell’s revenue comes from China, and it could become a victim of a trade war.

Valuation: The stock is selling for a P/E of 43, with earnings growth of 80% in FY2025 and 30% after that for the next two years – that is reasonable. It has a P/S ratio of 12.6, with growth of 25%. It’s a bit expensive on the sales metric, but with AI taking an even larger share of the revenue pie, this multiple could increase.
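One way to sanity-check that multiple: roll the trailing P/E of 43 forward at the cited earnings growth rates (80% in FY2025, then 30% for each of the next two years), holding the price constant. Simple arithmetic, not a valuation model:

```python
# Forward P/E sketch: deflate the trailing P/E of 43 by the earnings
# growth rates cited above, holding the share price constant.

pe = 43.0
for year, growth in enumerate([0.80, 0.30, 0.30], start=1):
    pe /= 1 + growth
    print(f"Forward P/E after year {year}: {pe:.1f}")
```

The multiple compresses to the mid-teens within three years if the growth materializes.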

Categories
Consumer Discretionary Industrials Industry Stocks

Ferrari: An Iconic But Overpriced Brand

Ferrari (RACE) $464

Positives

Unlike other auto companies, Ferrari’s brand strength and exclusivity provide it with a deep moat, leading to stable cash flows and high profit margins. In its high-priced ultra-luxury segment, it doesn’t have any real competitors. There are notables such as Maserati and Porsche, but Ferrari roars and soars above them.

The upcoming all-electric Ferrari model, while a significant shift, is expected to maintain the brand’s iconic status and appeal to wealthy customers.

Excellent operating leverage – sales growth of 7% has consistently produced earnings growth of 15%.

Massive pricing power – unit sales hardly grow 2-3%; the rest is all pricing.

Operating margins of 26-28% – no one else in auto is even close to that.

There are a lot of growth opportunities: it plans to launch 15 new models by 2026, anticipating 12% revenue growth from FY25 onwards, supported by high personalization and a positive country mix. This could change the growth trajectory from the usual 7%.
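The operating-leverage point above can be illustrated with a toy fixed-cost model (all numbers invented for illustration, not Ferrari’s actual cost structure): with a meaningful fixed cost base, 7% revenue growth drops through to roughly 15% profit growth.

```python
# Toy operating-leverage model: fixed costs turn modest revenue growth
# into outsized profit growth. All figures are illustrative.

fixed_costs = 35.0
variable_cost_ratio = 0.34  # variable costs scale with revenue

def profit(revenue: float) -> float:
    return revenue - fixed_costs - revenue * variable_cost_ratio

p0 = profit(100.0)          # baseline
p1 = profit(100.0 * 1.07)   # after 7% revenue growth
print(f"Profit growth: {p1 / p0 - 1:.1%}")
```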

Negatives

Valuation doesn’t leave much room for appreciation: Because it’s such an excellent premium brand with stable growth and without serious competition, conventional valuation hardly applies to it – but Ferrari’s current P/E ratio of 50x and 0.60% dividend yield beg the question: how much more can you get from it?

The stock has already returned 25% in the past year and 174% in the past five years – way above its historical averages.

Key risks include product concentration, dependency on Formula 1 sponsorships, and potential US tariffs on European manufacturers impacting costs.

Current Earnings

Q4 results were great, led by growing demand for personalized vehicles, a strong product mix, and limited exposure to China.

Ferrari managed strong 14% revenue growth with just a 2% improvement in shipments – everything else was price increases, leveraging its enormous brand, which faces little price elasticity. As a result, profits swelled 31%, leading to earnings of €2.14 ($2.21) per share, which beat Wall Street’s expectations of €1.84 ($1.90).

“On these solid foundations, we expect further robust growth in 2025, that will allow us to reach one year in advance the high-end of most of our profitability targets for 2026,” said CEO Benedetto Vigna.

Guidance: A little more caution, due to higher supply chain costs and a higher tax rate in Italy. Net revenue is expected to increase by ~5% to €7.0B ($7.23B), contributing to a profit of €8.60 ($8.89) per share. This is below the consensus estimates of €7.12B ($7.36B) and €9.07 ($9.37), respectively. So far the stock has taken it well. (I guess that’s inelastic too!)

Regionally, sales were strongest in the Americas, with shipments up 8% (it looks like some of our stock trading profits have gone to Ferrari), followed by a 6% gain in APAC (excluding Mainland China, Hong Kong, and Taiwan). Sales in China, Hong Kong, and Taiwan fell 21%, but that’s less than 1% of total Ferrari sales.

FY2024 sales included ten internal combustion engine models and six hybrid engine models, which represented 49% and 51% of total shipments, respectively.

Given the focus on EVs, I expect that trend to continue. If Ferrari’s expansion drive to grow sales 12-15% a year with a stronger lineup of new models starts showing success, I could just end up buying it – if you can’t buy the car, it would be fun to make money off the stock.

Categories
AI Cloud Service Providers Industry Semiconductors Stocks

Hyperscalers, Meta and Microsoft Confirm Massive Capex Plans

Meta (META) has committed to $60-65Bn of Capex and Microsoft (MSFT) to $80Bn. After the DeepSeek revelations, this is a great sign of confidence for Nvidia (NVDA), Broadcom (AVGO), Marvell (MRVL), and other semiconductor companies, which should continue to see solid demand in 2025.

Meta CEO Mark Zuckerberg also mentioned that one of Meta’s advantages (and other US firms’, by the same rationale) is a continuous supply of chips, which DeepSeek will not have, so US companies like Meta will easily outperform when it comes to scaling and servicing customers. (They will fine-tune Capex between training and inference.) Meta is also looking at custom silicon for other workloads, which will help Broadcom and Marvell.

Meta executives specifically called out a machine-learning system designed jointly with Nvidia as one of the factors driving better-personalized advertising. This is a good partnership and I don’t see it getting derailed anytime soon.

Meta also talked about how squarely focused they are on software and algorithm improvements. Better inference models are the natural progression and the end goal of AI: making AI pervasive in all kinds of apps for consumers, businesses, medical breakthroughs, and so on. For that to happen, you still need scalable computing power to reach the threshold where models have been trained enough to provide better inference and/or be generative enough for a specific domain or area of expertise.

This is the tip of the iceberg; we’re not anywhere close to reducing the spend. Most forecasts I looked at see data center training spend growth slowing down only in 2026, and then spending on inference growing at a slower speed. Nvidia’s consensus revenue forecasts show a 50% revenue gain in 2025 and 25% thereafter, so we still have a long way to go.

I also read that Nvidia’s GPUs are doing 40% of inference work – they’re very much on the ball on inference.

The DeepSeek impact: If DeepSeek’s breakthrough in smarter inference had been announced by a non-Chinese or an American company, and if they hadn’t claimed a cheaper cost, it wouldn’t have made the impact it did. The surprise element was the reported total spend, and the claim that they didn’t have access to GPUs – it was meant to shock and awe and create cracks in the massive spending ecosystem, which it is doing. But the reported total spend, and the claim of not using high-end GPUs, doesn’t seem plausible, at least to me. Here’s my earlier article detailing some of the reasons. The Chinese government has subsidized every export entry to the world, from furniture to electric vehicles, so why not this one? That has been their regular go-to-market strategy.

Cheaper LLMs are not a plug-and-play replacement. They will still require significant investment and expertise to train and to create an effective inference model. I think GPU requirements will not diminish, because you need GPUs for training and time scaling, and smarter software will still need to distill data.

Aiming at a 10x reduction in cost is a good target, but it will compromise quality and performance. Eventually, the lower-tier market will get crowded and commoditized – democratized, if you will – which may require cheaper versions of hardware and architecture from AI chip designers as an opportunity to serve lower-tier customers.

American companies will have to work harder, for sure – customers want cheap (Databricks’ CEO’s phone hasn’t stopped ringing for alternative solutions) – unless they TikTok this one as well…

Categories
AI Cloud Service Providers Industry Semiconductors Stocks

DeepSeek Hasn’t Deep-Sixed Nvidia

01/28/2025

Here is my understanding of the DeepSeek breakthrough and its repercussions on the AI ecosystem.

DeepSeek used “time scaling” effectively, which allows their r1 model to think deeper at the inference phase. By using more compute instead of answering immediately, the model takes longer to search for a better solution, and then answers the query better than existing models.

How did the model get to that level of efficiency?

DeepSeek used a lot of interesting and effective techniques to make better use of its resources, and this article from NextPlatform does an excellent job with the details.

Besides effective time scaling, the model distilled answers from other models, including ChatGPT’s models.

What does that mean for the future of AGI, AI, ASI, and so on?

Time scaling will be adopted more frequently, and tech leaders across Silicon Valley are responding to improve their methods as cost-effectively as possible. That is the logical next step – for AI to be any good, superior inference was always going to be the differentiator and the value addition.

Time scaling can be done at the edge as the software gets smarter.

If the software gets smarter, will it require more GPUs?

I think GPU requirements will not diminish, because you need GPUs for training and time scaling, and smarter software will still need to distill data.

Cheaper LLMs are not a plug-and-play replacement. They will still require significant investment and expertise to train and to create an effective inference model. Aiming at a 10x reduction in cost is a good target, but it will compromise quality and performance. Eventually, the lower-tier market will get crowded and commoditized – democratized, if you will – which may require cheaper versions of hardware and architecture from AI chip designers as an opportunity to serve lower-tier customers.

Inferencing

Over time, yes, inference will become more important. Nvidia has long been talking about the scaling law, which diminishes the role of training and raises the need for smarter inference. They are working on this as well; I even suspect that the $3,000 Digits machine they showcased for edge computing will provide some of the power needed.

Reducing variable costs per token/query is huge: the variable cost will fall, which is a huge boon to the AI industry – previously, retrieving and answering tokens cost more than the entire monthly subscription to ChatGPT or Gemini.

From Gavin Baker on X on APIs and Costs:

R1 from DeepSeek seems to have done that: “r1 is cheaper and more efficient to inference than o1 (ChatGPT). r1 costs 93% less to *use* than o1 per each API call, can be run locally on a high-end workstation, and does not seem to have hit any rate limits, which is wild.”

However, “Batching massively lowers costs and more compute increases tokens/second so still advantages to inference in the cloud.”

It is comparable to o1 from a quality perspective, although it lags o3.

There were real algorithmic breakthroughs that led to it being dramatically more efficient both to train and inference.  
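To put Baker’s 93% per-API-call cost reduction in perspective, here is its effect on a fixed monthly query load (the baseline dollar figure is a made-up placeholder, not an actual o1 price):

```python
# Illustrative effect of a 93% cheaper API on a fixed monthly workload.
# The $1,000 baseline is a placeholder, not a real o1 bill.

baseline_monthly_cost = 1000.0  # $ per month on the pricier model (assumed)
reduction = 0.93                # the 93% per-API cost reduction cited above

cheaper_cost = baseline_monthly_cost * (1 - reduction)
print(f"Same workload: ${baseline_monthly_cost:,.0f} -> ${cheaper_cost:,.0f} per month")
```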

On training costs and real costs:

Training in FP8, MLA, and multi-token prediction are significant. It is easy to verify that the r1 training run only cost $6m.

The general consensus is that the “REAL” costs of the DeepSeek model are much larger than the $6Mn given for the r1 training run.

Omitted are:

Hundreds of millions of dollars spent on prior research, and access to much larger clusters.

DeepSeek likely had more than 2,048 H800s; an equivalently smart team can’t just spin up a 2,000-GPU cluster and train r1 from scratch with $6Mn.

There was a lot of distillation – i.e., it is unlikely they could have trained this without unhindered access to GPT-4o and o1, which is ironic, because you’re banning the GPUs but giving access to distill leading-edge American models… Why buy the cow when you can get the milk for free?

NextPlatform too expressed doubts about DeepSeek’s resources:

We are very skeptical that the V3 model was trained from scratch on such a small cluster.

A schedule of geographical revenues for Nvidia’s Q3-FY2025 showed 15% of Nvidia’s revenue – over $4Bn – “sold” to Singapore, with the caveat that it may not be the ultimate destination. This raises doubts that DeepSeek may have gotten access to Nvidia’s higher-end GPUs despite the US export ban, or stockpiled them before the ban.

Better software and inference is the way of the future

As one of the AI vendors at CES told me, she had the algorithms to answer customer questions and provide analytical insights at the edge for several customers – they have the data from their customers and the software, but they couldn’t scale because AWS was charging them too much for cloud GPU usage when they didn’t need that much power. So besides r1’s breakthrough, this movement has been afoot for a while, and it will spur investment and innovation in inference. We will definitely continue to see demand for high-end Blackwell GPUs to train data and create better models for at least the next 18 to 24 months, after which the focus should shift to inference – as Nvidia’s CEO said, 40% of their GPUs are already being used for inference.

Categories
AI Semiconductors Stocks

Micron’s Low Price Is A Gift

  • Micron’s data center revenue should grow 91% and 38% in FY2025 and FY2026, driven by cloud server DRAM and HBM.
  • The market is not assigning a strong multiple to Micron’s largest, most profitable, and fastest-growing segment, with HBM3E contributing significantly and future growth expected from HBM4.
  • Micron should gain from an extremely strong AI market, as evidenced by huge Capex from hyperscalers, Nvidia’s Blackwell growth, and Taiwan Semiconductor’s forecasts.
  • The consumer NAND business faced challenges due to inventory reductions, seasonal slowdowns, and delayed PC refresh cycles, impacting Q2 revenue guidance and margins.
  • Despite short-term consumer weakness, Micron’s strong data center prospects and attractive valuation make it a compelling buy, especially at the current price of $90.

You can read the entire article on Seeking Alpha. Micron (MU) dropped a massive 15% after DeepSeek deep-sixed the market. Nvidia (NVDA) too dropped 14%, but has begun to recover, and I expect Micron to recover as well.
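A small reminder of drawdown arithmetic while waiting for that recovery: a 15% drop needs more than a 15% bounce to get back to even.

```python
# Gain required to fully recover a given percentage drawdown.

def recovery_gain(drawdown: float) -> float:
    """Return the gain needed to recoup a fractional drawdown."""
    return 1 / (1 - drawdown) - 1

print(f"A 15% drop needs a {recovery_gain(0.15):.1%} gain to break even")
```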

Categories
AI Stocks Technology

UiPath: The Path Forward Is Getting Smoother

  • UiPath’s competitive edge lies in its AI integration, SAP partnership and industry-agnostic automation solutions, making it a strong contender in RPA despite generative AI threats.
  • Recent struggles were due to sales execution issues, and competition, but the company shows signs of recovery with strategic changes and a focus on large clients and collaborations.
  • Founder Daniel Dines’ return as CEO, workforce reductions, and strategic partnerships, especially with SAP, are pivotal in steering UiPath back on track.
  • Despite current challenges, UiPath’s strong cash position, cost-saving measures, and promising AI Agentic capabilities make it a worthwhile investment with limited downside risk.

UiPath’s (PATH) updated financial forecast and current valuation make a great case for a GARP (growth at a reasonable price) investment: it is likely to grow in the mid-teens and is valued at just 5x sales and 25x adjusted earnings. Besides, cash flow is almost double the adjusted operating income, so that too is a plus. I own some and plan to accumulate on declines.

Here’s the complete article on Seeking Alpha.

Categories
AI Semiconductors Stocks

Taiwan Semiconductor Manufacturing (TSM) Hits It Out Of The Park

What a great start to the earnings season! 

TSM is up 5% premarket after a massive beat and terrific guidance for AI demand.

TSM, the indispensable chip producer and source for some of the world’s largest tech companies, including Apple (AAPL), Nvidia (NVDA), AMD (AMD), and other chipmakers, produced outstanding results this morning.

Q4 Metrics

Sales up 37% YoY to $26.88B

Earnings per American Depositary Receipt of $2.24, up 56% YoY from $1.44

Both top and bottom line numbers surpassed analysts’ expectations.
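As a quick sanity check on the headline growth rate, here is a minimal Python sketch using only the per-ADR figures quoted above:

```python
# Sanity-check TSM's Q4 year-over-year EPS growth from the release
def yoy_growth(current: float, prior: float) -> float:
    """Percent change versus the year-ago figure."""
    return (current - prior) / prior * 100

eps_growth = yoy_growth(2.24, 1.44)  # EPS per ADR: $2.24 vs. $1.44 a year ago
print(f"EPS per ADR grew {eps_growth:.0f}% YoY")  # about 56%
```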

“Our business in the fourth quarter was supported by strong demand for our industry-leading 3nm and 5nm technologies,” said Wendell Huang, senior VP and CFO of TSM in the earnings press release.

Strong AI momentum

TSMC’s brilliant Q4 results beat management’s guidance and confirmed the strong AI momentum for 2025, disproving any notions of reduced or waning demand. With $300Bn in planned CAPEX from the hyperscalers alone, I believe investors’ concerns are misplaced.

Management highlighted TSM’s growing collaborations with the memory industry during the earnings call, reinforcing confidence in strong and accelerated HBM demand, which bodes well for HBM makers like Micron (MU).

Closer interaction with HBM makers also suggests a strong foundation for the potential ramp of TSM’s 3nm node and upcoming 2nm node, which would be essential for AI development.

TSM has de-risked geopolitical issues with its overseas factories in Arizona and Kumamoto. Management also noted that it could manage further US export controls on China, and that only 11% of its sales were to China.

“Let me assure you that we have a very frank and open communication with the current government and with the future one also,” said Wei when asked about the current and next U.S. administrations.

Any de-risking is a tailwind for increasing multiples and valuation for the stock. I had recommended TSM earlier, stating that geopolitical concerns shouldn’t devalue the crown jewel of the semiconductor industry.

Q4 Revenue by Technology

TSM said 3nm process technology contributed 26% of total wafer revenue in the fourth quarter, versus 15% in the year-ago period, and 20% in the third quarter of 2024.

The 5nm process technology accounted for 34% of total wafer revenue, compared to 35% in the same period a year ago and 32% in the third quarter of 2024. Meanwhile, 7nm accounted for 14% of total wafer revenue in the fourth quarter, versus 17% both a year earlier and in the third quarter of 2024.

Total 3nm + 5nm = 26% + 34% = 60% of wafer revenue – that’s a fantastic high-margin business.

Advanced technologies (7nm and below) accounted for 74% of wafer revenue.
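The node-mix arithmetic above can be checked directly; a small sketch using the percentages quoted in the release:

```python
# Q4 wafer-revenue mix by process node, percentages from the article
mix = {"3nm": 26, "5nm": 34, "7nm": 14}

leading_edge = mix["3nm"] + mix["5nm"]  # the high-margin 3nm + 5nm share
advanced = sum(mix.values())            # "advanced technologies": 7nm and below

print(f"3nm + 5nm: {leading_edge}%")               # 60%
print(f"Advanced (7nm and below): {advanced}%")    # 74%
```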

Q4 Revenue by Platform

High-Performance Computing (Nvidia, AMD, Broadcom) represented 53% of net revenue, up from 43% in the fourth quarter of 2023.

The company’s smartphone segment (Apple) represented 35% of net revenue, versus 43% in the year-ago period.

Q4 Revenue by Geography

Revenue from China — Taiwan Semi’s second-biggest market by revenue — accounted for 9% of the total net revenue in the period, down from 11% in the year-ago period and in the third quarter of 2024.

North America accounted for 75% of total net revenue, compared to 72% a year earlier and 71% in the third quarter of 2024.

Outlook

“Moving into the first quarter of 2025, we expect our business to be impacted by smartphone seasonality, partially offset by continued growth in AI-related demand,” said Huang.

TSM expects capital expenditure of between $38B and $42B in 2025, as much as 19% above analysts’ expectations, according to a Bloomberg report.

For the first quarter of 2025, TSM expects revenue between $25B and $25.8B (midpoint of $25.4B), versus a consensus of $24.75B. That’s a raise.
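For reference, the guidance midpoint and the size of the implied beat versus consensus work out as follows (a quick sketch with the numbers above):

```python
# TSM's Q1 2025 revenue guidance vs. analyst consensus, in $B
low, high = 25.0, 25.8
consensus = 24.75

midpoint = (low + high) / 2
beat_pct = (midpoint - consensus) / consensus * 100

print(f"Midpoint: ${midpoint:.1f}B")          # $25.4B
print(f"Above consensus by {beat_pct:.1f}%")  # about 2.6%
```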