Fountainhead Investing

  • Objective Analysis: Research On High Quality Companies With Sustainable Moats
  • Tech Focused: 60% Allocated To AI, Semiconductors, Technology

5 Star Tech Analyst Focused On Excellent Companies With Sustainable Moats

Categories
AI Cloud Service Providers Industry Semiconductors Stocks

Hyperscalers, Meta and Microsoft Confirm Massive Capex Plans

Meta (META) has committed to $60-65Bn of Capex and Microsoft (MSFT) to $80Bn: after the DeepSeek revelations, this is a great sign of confidence for Nvidia (NVDA), Broadcom (AVGO), Marvell (MRVL), and other semiconductor companies, which should continue to see solid demand in 2025.

Meta CEO Mark Zuckerberg also mentioned that one of Meta's advantages (and, by the same rationale, other US firms') is a continuous supply of chips, which DeepSeek will not have; US customers like Meta should easily outperform when it comes to scaling and servicing customers. (They will fine-tune Capex between training and inference.) Meta is also looking at custom silicon for other workloads, which will help Broadcom and Marvell.

Meta executives specifically called out a machine-learning system designed jointly with Nvidia as one of the factors driving better-personalized advertising. This is a good partnership and I don’t see it getting derailed anytime soon.

Meta also talked about how squarely focused it is on software and algorithm improvements. Better inference models are the natural progression and the end goal of AI. The goal is to make AI pervasive in all kinds of apps for consumers, businesses, medical breakthroughs, and so on. For that to happen, you still need scalable computing power to reach the threshold where models have been trained enough to provide better inference and/or be generative enough for a specific domain or area of expertise.

This is the tip of the iceberg; we're not anywhere close to reducing the spend. Most forecasts I looked at see data center training spend growth slowing only in 2026, with spending on inference then growing at a slower speed. Nvidia's consensus revenue forecasts show a 50% revenue gain in 2025 and 25% thereafter, so we still have a long way to go.

I have also read that Nvidia's GPUs are doing 40% of inference work – they're very much on the ball on inference.

The DeepSeek impact: If DeepSeek's breakthrough in smarter inference had been announced by an American or other non-Chinese company, and if they hadn't claimed a cheaper cost, it wouldn't have made the impact it did. The surprise element was the reported total spend and the claim that they didn't have access to GPUs – it was meant to shock and awe and create cracks in the massive spending ecosystem, which it is doing. But the reported total spend, and the claim of not using high-end GPUs, doesn't seem plausible, at least to me. Here's my earlier article detailing some of the reasons. The Chinese government has subsidized every export entry to the world, from furniture to electric vehicles, so why not this one? That has been their regular go-to-market strategy.

Cheaper LLMs are not a plug-and-play replacement. They will still require significant investment and expertise to train and to create an effective inference model. I think GPU requirements will not diminish, because you need GPUs for training and time scaling, and smarter software will still need to distill data.

As a number, aiming at a 10x reduction in cost is a good target, but it will compromise quality and performance. Eventually, the lower-tier market will get crowded and commoditized – democratized, if you will – which may require cheaper versions of hardware and architecture from AI chip designers as an opportunity to serve lower-tier customers.

American companies will have to work harder, for sure – customers want cheap (Databricks' CEO's phone hasn't stopped ringing for alternative solutions) – unless they TikTok this one as well…

Categories
AI Cloud Service Providers Industry Semiconductors Stocks

DeepSeek Hasn’t Deep-Sixed Nvidia

01/28/2025

Here is my understanding of the DeepSeek breakthrough and its repercussions on the AI ecosystem:

DeepSeek used "time scaling" effectively, which allows its r1 model to think deeper at the inference phase. By using more compute instead of producing an answer immediately, the model takes longer to search for a better solution and then answers the query better than existing models.

How did the model get to that level of efficiency?

DeepSeek used a lot of interesting and effective techniques to make better use of its resources, and this article from NextPlatform does an excellent job with the details.

Besides effective time scaling, the model distilled answers from other models, including ChatGPT's models.

What does that mean for the future of AGI, AI, ASI, and so on?

Time scaling will be adopted more frequently, and tech leaders across Silicon Valley are responding by improving their methods as cost-effectively as possible. That is the logical next step – for AI to be any good, it was always superior inference that was going to be the differentiator and the value addition.

Time scaling can be done at the edge as the software gets smarter.
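
To make the idea concrete, here is a toy sketch of one common form of inference-time ("time") scaling: sample several candidate answers and keep the best-scoring one. This is purely illustrative – the function names and the random "quality" scorer are my own stand-ins, not DeepSeek's actual method.

```python
import random

def answer_once(query, rng):
    # Stand-in for a single model forward pass; returns (answer, quality score).
    # In a real system the score would come from a verifier or reward model.
    quality = rng.random()
    return f"draft answer to {query!r}", quality

def answer_with_time_scaling(query, n_samples=8, seed=0):
    # Spend more inference-time compute: draw several candidate answers
    # and keep the best-scoring one instead of returning the first draft.
    rng = random.Random(seed)
    candidates = [answer_once(query, rng) for _ in range(n_samples)]
    return max(candidates, key=lambda pair: pair[1])

# With the same seed, more samples can only match or beat fewer samples.
_, score_1 = answer_with_time_scaling("q", n_samples=1)
_, score_8 = answer_with_time_scaling("q", n_samples=8)
assert score_8 >= score_1
```

The point of the sketch is only that answer quality can be bought with more compute at query time rather than more training – which is why smarter inference doesn't automatically mean fewer GPUs.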

If the software gets smarter, will it require more GPUs?

I think GPU requirements will not diminish, because you need GPUs for training and time scaling, and smarter software will still need to distill data.

Cheaper LLMs are not a plug-and-play replacement. They will still require significant investment and expertise to train and to create an effective inference model. As a number, aiming at a 10x reduction in cost is a good target, but it will compromise quality and performance. Eventually, the lower-tier market will get crowded and commoditized – democratized, if you will – which may require cheaper versions of hardware and architecture from AI chip designers as an opportunity to serve lower-tier customers.

Inferencing

Over time, yes, inference will become more important – Nvidia has been talking for a long time about the scaling law, which diminishes the role of training and raises the need for smarter inference. They are working on this as well; I even suspect that the $3,000 Digits machine they showcased for edge computing will provide some of the power needed.

Reducing variable costs per token/query is huge: the variable cost will come down, which is a huge boon to the AI industry – previously, retrieving and answering tokens cost more than the entire monthly subscription to ChatGPT or Gemini.

From Gavin Baker on X on APIs and Costs:

R1 from DeepSeek seems to have done that: "r1 is cheaper and more efficient to inference than o1 (ChatGPT). r1 costs 93% less to *use* than o1 per each API, can be run locally on a high end work station and does not seem to have hit any rate limits which is wild."

However, “Batching massively lowers costs and more compute increases tokens/second so still advantages to inference in the cloud.”

It is comparable to o1 from a quality perspective although lags o3.

There were real algorithmic breakthroughs that led to it being dramatically more efficient both to train and inference.  

On training costs and real costs:

Training in FP8, MLA and multi-token prediction are significant.  It is easy to verify that the r1 training run only cost $6m.

The general consensus is that the "REAL" costs of the DeepSeek model are much larger than the $6Mn given for the r1 training run.

Omitted are:

Hundreds of millions of dollars spent on prior research, plus access to much larger clusters.

DeepSeek likely had more than 2,048 H800s; an equivalently smart team can't just spin up a 2,000-GPU cluster and train r1 from scratch with $6m.

There was a lot of distillation – i.e., it is unlikely they could have trained this without unhindered access to GPT-4o and o1. That is ironic: you're banning the GPUs but giving access to distill leading-edge American models… Why buy the cow when you can get the milk for free?

NextPlatform, too, expressed doubts about DeepSeek's resources:

We are very skeptical that the V3 model was trained from scratch on such a small cluster.

A schedule of geographical revenues for Nvidia's Q3-FY2025 showed 15% of Nvidia's revenue – over $4Bn – "sold" to Singapore, with the caveat that it may not be the ultimate destination. That raises the suspicion that DeepSeek got access to Nvidia's higher-end GPUs despite the US export ban, or stockpiled them before the ban.

Better software and inference is the way of the future

As one of the AI vendors at CES told me, she had the algorithms to answer customer questions and provide analytical insights at the edge for several customers – they have the data from their customers and the software, but they couldn't scale because AWS was charging them too much for cloud GPU usage when they didn't need that much power. So beyond r1's breakthrough, this movement has been afoot for a while, and it will spur investment and innovation in inference. We will definitely continue to see demand for high-end Blackwell GPUs to train data and create better models for at least the next 18 to 24 months, after which the focus should shift to inference – and as Nvidia's CEO said, 40% of their GPUs are already being used for inference.

Categories
AI Semiconductors Stocks

Taiwan Semiconductor Manufacturing (TSM) Hits It Out Of The Park

What a great start to the earnings season! 

TSM is up 5% premarket after a massive beat and terrific guidance for AI demand.

TSM, the indispensable chip producer for some of the world's largest tech companies, including Apple (AAPL), Nvidia (NVDA), AMD (AMD), and other chipmakers, produced outstanding results this morning.

Q4 Metrics

Sales up 37% YoY to $26.88B

Earnings per American Depositary Receipt of $2.24, up 69% YoY from $1.44

Both top and bottom line numbers surpassed analysts’ expectations.

“Our business in the fourth quarter was supported by strong demand for our industry-leading 3nm and 5nm technologies,” said Wendell Huang, senior VP and CFO of TSM in the earnings press release.

Strong AI momentum

TSMC’s brilliant Q4 results beat management’s guidance, and confirmed the strong AI momentum for 2025, disproving any notions about reduced or waning demand. With $300Bn in planned Capex by just the hyperscalers, I believe investors’ concerns are misplaced.

Management highlighted TSM’s growing collaborations with the memory industry during the earnings call, reinforcing confidence in strong and accelerated HBM demand, which bodes well for HBM makers like Micron (MU).

Closer interaction with HBM makers also suggests a strong foundation for the potential ramp of TSM’s 3nm node and upcoming 2nm node, which would be essential for AI development.

TSM has de-risked geopolitical issues with its overseas factories in Arizona and Kumamoto. It also noted that it could manage further US export controls on China; besides, only 11% of its sales were to China.

“Let me assure you that we have a very frank and open communication with the current government and with the future one also,” said Wei when asked about the current and next U.S. administrations.

Any de-risking is a tailwind for increasing multiples and valuation for the stock. I had recommended TSM earlier stating that the geo-political concerns shouldn’t devalue the crown jewel of the semiconductor industry.

Q4 Revenue by Technology

TSM said 3nm process technology contributed 26% of total wafer revenue in the fourth quarter, versus 15% in the year-ago period, and 20% in the third quarter of 2024.

The 5nm process technology accounted for 34% of total wafer revenue, compared to 35% in the same period a year ago and 32% in the third quarter of 2024. Meanwhile, 7nm accounted for 14% of total wafer revenue in the fourth quarter, versus 17% both a year earlier and in the third quarter of 2024.

Total 3nm + 5nm = 26% + 34% = 60% – that's a fantastic high-margin business.

Advanced technologies (7nm and below) accounted for 74% of wafer revenue.
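
As a quick sanity check on the node-mix arithmetic above (a throwaway Python sketch using the percentages from the quarter, nothing from TSM's filings beyond what's quoted):

```python
# Q4 wafer-revenue share by node, as reported above (percent of total).
node_share = {"3nm": 26, "5nm": 34, "7nm": 14}

leading_edge = node_share["3nm"] + node_share["5nm"]
advanced = sum(node_share.values())  # "advanced technologies" = 7nm and below

print(leading_edge)  # 60 -> the high-margin 3nm + 5nm business
print(advanced)      # 74 -> matches the 74% advanced-technologies figure
```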

Q4 Revenue by Platform

High Performance Computing represented 53% of net revenue, up from 43% in the fourth quarter of 2023 – that's Nvidia, AMD, and Broadcom.

The company's smartphone segment represented 35% of net revenue, versus 43% in the year-ago period – that's Apple.

Q4 Revenue by Geography

Revenue from China — Taiwan Semi’s second-biggest market by revenue — accounted for 9% of the total net revenue in the period, down from 11% in the year-ago period and in the third quarter of 2024.

North America accounted for 75% of total net revenue, compared to 72% a year earlier and 71% in the third quarter of 2024.

Outlook

“Moving into the first quarter of 2025, we expect our business to be impacted by smartphone seasonality, partially offset by continued growth in AI-related demand,” said Huang.

TSM expects capital expenditure of between $38B and $42B in 2025 – up to 19% more than analysts' expectations, according to a Bloomberg report.

For the first quarter of 2025, TSM expects revenue between $25B and $25.8B (midpoint $25.4B), versus consensus of $24.75B. That's a raise.
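
The size of that raise, worked out from the guidance figures above (a quick illustrative calculation, not from any filing):

```python
low, high = 25.0, 25.8    # Q1-2025 revenue guidance range, $B
consensus = 24.75         # analyst consensus, $B

midpoint = (low + high) / 2
raise_pct = (midpoint / consensus - 1) * 100

print(round(midpoint, 1))    # 25.4
print(round(raise_pct, 1))   # 2.6 -> guidance midpoint ~2.6% above consensus
```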

Categories
Semiconductors Stocks

The Knee Jerk Reaction To Micron’s Q1-25 Is A Gift

Micron Technology’s fiscal Q1-2025 earnings report offered two stories: a dramatic surge in data center revenue and a troubling outlook for its consumer-facing NAND business. Despite the strong performance in high-growth areas like AI and data centers, the company’s stock took a sharp dip after hours due to concerns about consumer weakness, especially in the NAND segment.

I've owned and recommended Micron for a while now, and even took some profits in June 2024 at $157, when it rose far above what I felt was its intrinsic value. Since it's a cyclical stock in the commoditized memory semiconductor business, getting a good price is unusually important, and it is crucial to take profits when the stock gets ahead of itself.

Micron's (MU) stock slumped from $108 on weak guidance for the next quarter, and now, at $89, it looks very attractive. I've started buying again.

Record-breaking data center performance

Micron reported impressive growth in its Compute and Networking Business Unit (CNBU), which saw a 46% quarter-over-quarter (QoQ) and 153% year-over-year (YoY) revenue jump, reaching a record $4.4Bn. This success was largely driven by cloud server DRAM demand and a surge in high-bandwidth memory (HBM) revenue. In fact, data center revenue accounted for over 50% of Micron’s Q1-FY2025 total revenue of $8.7Bn, a milestone for the company.

HBM revenue was a standout, with analysts estimating that the company generated $800Mn to $900Mn in revenue from this segment during the quarter. Micron's HBM3E memory, which is used in products like Nvidia's B200 and GB200 GPUs, has been a significant contributor to the company's data center growth. Micron's management also raised its total addressable market (TAM) forecast for HBM in 2025 from $25 billion to $30 billion – a strong indicator of the company's growing confidence in its AI and server business.

Looking ahead, Micron remains optimistic about the long-term prospects of HBM4, with the expectation of substantial growth in the coming years. The company anticipates that HBM4 will be ready for volume production by 2026, offering 50% more performance than its predecessor, HBM3E, and potentially reaching a $100 billion TAM by 2030.

Consumer weakness and NAND woes

While Micron's data center performance was strong, the company's consumer-facing NAND business painted a less rosy picture. Micron forecast a nearly 10% sequential decline in Q2 revenue, to $7.9Bn, far below the consensus estimate of $8.97Bn, setting a negative tone for the future. This decline was primarily attributed to inventory reductions in the consumer market, a seasonal slowdown, and a delay in the expected PC refresh cycle – a segment that has also derailed other semis such as Advanced Micro Devices (AMD) and Lam Research (LRCX). While NAND bit shipments grew 83% YoY, a weak demand environment for consumer electronics – especially the PC and smartphone markets – weighed heavily on performance.

Micron’s CEO, Sanjay Mehrotra, emphasized that the consumer market weakness was temporary and that the company expected to see improvements by early 2025. The company also noted the challenges posed by excess NAND inventory at customers, especially in the smartphone and consumer electronics markets. In particular, Micron’s NAND SSD sales to the data center sector moderated, leading to further concerns about demand sustainability. The slowdown in the consumer space and the underloading of NAND production is expected to continue into Q3, with Micron’s management projecting lower margins for the foreseeable future due to these supply-demand imbalances.

Micron reported Q1 revenue of $8.71 billion, up 84.3% YoY, and in line with consensus estimates. However, the company’s Q2 guidance of $7.9Bn (a 9.3% sequential decline) was notably weaker than the $8.97Bn analysts had expected. The guidance miss sent Micron’s stock down significantly in after-hours trading.
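
The guidance math behind that reaction, using the figures above (an illustrative back-of-the-envelope calculation):

```python
q1_rev = 8.71        # reported Q1 revenue, $B
q2_guide = 7.90      # Q2 revenue guidance, $B
q2_consensus = 8.97  # analyst consensus for Q2, $B

qoq_decline = (q2_guide / q1_rev - 1) * 100
miss_vs_consensus = (q2_guide / q2_consensus - 1) * 100

print(round(qoq_decline, 1))        # -9.3 -> the ~9.3% sequential decline
print(round(miss_vs_consensus, 1))  # -11.9 -> roughly 12% below consensus
```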

Financial highlights: Strong margins and profitability, but challenges ahead

Improving Margins: Micron’s gross margin for Q1 came in at 38.4%, an improvement of 3.1 percentage points QoQ, largely driven by the strength of HBM and data center DRAM. However, the outlook for Q2 is less optimistic, with gross margins expected to decline by about 1 percentage point due to continued weakness in NAND, along with seasonal factors and underloading impacts.

Micron’s operating margin for the quarter was 25.0%, ahead of guidance, reflecting the company’s tight cost control and strong performance in high-margin segments like HBM. However, for Q2, Micron expects operating margins to contract, with GAAP operating margin expected to drop to 21.8%.

Profitability also improved significantly with GAAP net income rising 111% QoQ to $1.87Bn, resulting in a GAAP EPS of $1.67, compared to a loss of $1.10 in the year-ago quarter. However, Micron guided for a significant drop in EPS for Q2, forecasting GAAP EPS of $1.26, well below the $1.96 expected by analysts.

Cash flow and capital investments

Micron’s cash flow generation remained robust, with operating cash flow (OCF) increasing by 130% YoY to $3.24 billion, but free cash flow (FCF) was more limited due to significant capital expenditures (CapEx) of $3.1 billion. The company also outlined its intention to spend around $14 billion in CapEx in FY25, primarily to support the growth of HBM and other high-margin data center products. In my opinion, this is a necessity to stay close to SK Hynix and Samsung, its biggest rivals in HBM, who also have a large chunk of the market and can easily match Micron in product improvements necessary to supply to the likes of Nvidia (NVDA). High Capex also increases its ability to scale and improve margins down the road, leading to greater cash generation.

Going home: Micron also announced a $6.1 billion award from the U.S. Department of Commerce under the CHIPS and Science Act to support advanced DRAM manufacturing in Idaho and New York. This partnership aligns with Micron’s long-term growth strategy in the data center and AI segments.

Micron is a bargain

I’m buying the stock with the risk that it could stay range-bound for a few months.

The company's earnings call reflected a clear divergence in the outlook for its two key segments: data center and consumer electronics. Management sounded confident about data center growth, driven by strong demand for AI-driven applications, while providing a more cautious forecast for the consumer NAND business, where inventory corrections and weakened demand are expected to persist through Q2 FY2025.

I'm very confident about Micron's continued strength in the data center market, driven by AI and cloud computing, and believe the prolonged short-term weakness in consumer-facing NAND and PC markets is an opportunity to buy the stock at a bargain price. The market's reaction suggests investors were caught off guard by the unexpected weakness in the consumer business, but as I mentioned earlier, it had already been showing up at AMD and Lam Research, and even before earnings, at $109, Micron was a lot below its 52-week high of $158. The further knee-jerk reaction is a boon for the bargain hunter.

For now, I'm not worried if the stock remains range-bound – at $89, the downside is seriously limited, and its future success will rest squarely on HBM data center demand. Its largest customer, Nvidia, is forecast to generate $200Bn of data center revenue from its Blackwell line, and Micron will reap a good chunk of that.

Micron is priced at 13x FY2025 (ending August) consensus earnings of $6.93, which are forecast to grow to $11.53 in FY2026 – a whopping 66% jump – bringing the P/E multiple down to just 8. Even for a cyclical, that's low. Besides, Micron is growing revenues 28% next year on the back of a 40% increase in FY2025, which took it soaring past its previous cyclical high of $31Bn in FY2022. With data center revenue contributing more than 50% of the total, Micron deserves a better valuation.
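
The multiples above, checked against the share price and consensus EPS figures quoted (an illustrative calculation only, not investment advice):

```python
price = 89.0        # current share price, $
eps_fy2025 = 6.93   # consensus EPS, FY ending Aug 2025
eps_fy2026 = 11.53  # consensus EPS, FY2026

pe_2025 = price / eps_fy2025
pe_2026 = price / eps_fy2026
eps_growth = (eps_fy2026 / eps_fy2025 - 1) * 100

print(round(pe_2025, 1))   # 12.8 -> roughly the 13x multiple cited
print(round(pe_2026, 1))   # 7.7  -> roughly the forward 8x multiple
print(round(eps_growth))   # 66   -> the ~66% EPS jump
```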

Super Micro Computer (SMCI) $26 – Avoid Till We Get Restated Financials

I think Super Micro Computer (SMCI) should be avoided for the following reasons:

Restatements could alter the financials significantly: The odds are high that the company will end up restating past financials. The Auditor’s resignation, its CEO’s pay package, and its past settlement with the SEC in 2018 all suggest that SMCI will have to restate its financial statements.

No 10-K: SMCI has delayed its annual 10-K filing, and without an audited 10-K, we have precious little faith in the numbers.

SMCI has priors: This was not its first time. In 2018, the company settled charges with the SEC for improper accounting. As stated on the SEC's website:

“According to the SEC’s orders, Super Micro executives, including CFO, Hideshima, pushed employees to maximize end-of-quarter revenue, yet failed to devise and maintain sufficient internal accounting controls to accurately record revenue. As a result, the orders find Super Micro improperly and prematurely recognized revenue, including recognizing revenue on goods sent to warehouses but not yet delivered to customers, shipping goods to customers prior to customer authorization, and shipping misassembled goods to customers. The orders also find that Super Micro misused its cooperative marketing program, which entitles customers to reimbursement for a portion of cooperative marketing costs. According to the orders, Super Micro improperly reduced the liabilities accrued for the program in order to avoid recognizing a variety of expenses unrelated to marketing, including for Christmas gifts and to store goods.”

Tainted: To me, the alleged malfeasance creates significant doubts for investors that will not go away for a while, and it's very likely that Wall Street will not touch it, fearing liability for not doing enough due diligence. I also can't imagine SMCI getting a decent multiple, for the same reasons, until faith is restored in management and its books.

Possible Loan Default:  There is a significant risk of Super Micro defaulting on the company’s Term Loan Agreement with Bank of America. 

Compensation tied to aggressive revenue targets: The CEO’s unusual compensation package, with virtually no base salary and bonuses tied to very aggressive revenue and share price targets, is a dangerous and potentially abusive practice.

Possible delisting from the Nasdaq, and getting thrown out of the S&P 500: The chances of both are high as a new auditor needs to be found, and a substantial amount of restatement work needs to be done.

A great business: Sure, SMCI has a great server liquid-cooling business, with tremendous potential and demand for AI GPU server racks, right now and for the foreseeable future. However, as we saw, Nvidia has already started shifting this business.

Ignore the low valuation: I'm going to ignore the current low valuation of 8x earnings and 0.5x sales, much lower than competitors such as Dell (DELL) and Oracle (ORCL), simply because I have no idea what the restated financials will look like, and the valuation could be a lot different.

The only scenario in which I would invest is a merger/acquisition/white knight that keeps the stock afloat. The CEO owns close to 10% of the company, so I doubt he would let this go without a fight.

The other problem I foresee is the difficulty of getting good and timely information. For the most part, we rely on analysts, who expect a proper set of books. Instead of focusing on the fundamentals, we'd be spending far too much time – and worse, without success – trying to figure out what is genuine and what is not. It makes better sense to invest in other GPU/semiconductor businesses.

Categories
Semiconductors Stocks

AMD Bucks The Trend – The Stock Is Up 5% 

  • Advanced Micro Devices press release (NASDAQ:AMD): Q2 Non-GAAP EPS of $0.69 beats by $0.01.
  • Revenue of $5.84B (+9.0% Y/Y) beats by $120M.
  • Record Data Center segment revenue of $2.8 billion, up 115% year-over-year, primarily driven by the steep ramp of AMD Instinct™ GPU shipments and strong growth in 4th Gen AMD EPYC™ CPU sales. Revenue increased 21% sequentially, primarily driven by the strong ramp of AMD Instinct GPU shipments.
  • Client segment revenue was $1.5 billion, up 49% year-over-year and 9% sequentially, primarily driven by sales of AMD Ryzen™ processors.
  • Gaming segment revenue was $648 million, down 59% year-over-year and 30% sequentially, primarily due to a decrease in semi-custom revenue.
  • For the third quarter of 2024, AMD expects revenue to be approximately $6.7 billion, plus or minus $300 million, vs. $6.61B consensus. At the mid-point of the revenue range, this represents year-over-year growth of approximately 16% and sequential growth of approximately 15%. Non-GAAP gross margin is expected to be approximately 53.5%.