Fountainhead Investing

  • Objective Analysis: Research On High Quality Companies With Sustainable Moats
  • Tech Focused: 60% Allocated To AI, Semiconductors, Technology

5 Star Tech Analyst Focused On Excellent Companies With Sustainable Moats

Categories
AI Cloud Service Providers Industry Semiconductors Stocks

Hyperscalers, Meta and Microsoft Confirm Massive Capex Plans

Meta (META) has committed to $60-65Bn of Capex and Microsoft (MSFT) to $80Bn: after the DeepSeek revelations, this is a great sign of confidence for Nvidia (NVDA), Broadcom (AVGO), Marvell (MRVL), and other semiconductor companies. Nvidia, Broadcom, and Marvell should continue to see solid demand in 2025.

Meta CEO Mark Zuckerberg also mentioned that one of Meta’s advantages (and, by the same rationale, other US firms’) is a continuous supply of chips, which DeepSeek will not have, so US customers like Meta should easily outperform when it comes to scaling and servicing customers. (They will fine-tune Capex between training and inference.) Meta is also looking at custom silicon for other workloads, which will help Broadcom and Marvell.

Meta executives specifically called out a machine-learning system designed jointly with Nvidia as one of the factors driving better-personalized advertising. This is a good partnership, and I don’t see it getting derailed anytime soon.

Meta also talked about how squarely focused they were on software and algorithm improvements. Better inference models are the natural progression and the end goal of AI. The goal is to make AI pervasive in all kinds of apps for consumers/businesses/medical breakthroughs, and so on. For that to happen, you still need scalable computing power to reach the threshold at which models have been trained enough to provide better inference and/or be generative enough for a specific domain or area of expertise.

This is the tip of the iceberg; we’re nowhere close to reducing the spend. Most forecasts I looked at see data center training spend growth slowing only in 2026, with spending on inference then growing at a slower pace. Nvidia’s consensus revenue forecasts show a 50% revenue gain in 2025 and 25% thereafter, so we still have a long way to go.
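The consensus growth path quoted above can be sanity-checked with a quick compounding sketch; the base revenue here is indexed to 100, not an actual figure.

```python
# Sketch of the revenue path implied by the consensus growth rates quoted
# above (50% in 2025, 25% per year thereafter). BASE_REVENUE is indexed
# to 100 for illustration, not a figure from the article.
BASE_REVENUE = 100.0

def project_revenue(base, growth_rates):
    """Compound a base revenue through a list of annual growth rates."""
    path = [base]
    for g in growth_rates:
        path.append(path[-1] * (1 + g))
    return path

# 50% gain in year 1, then 25% in each of the next two years
path = project_revenue(BASE_REVENUE, [0.50, 0.25, 0.25])
print([round(p, 1) for p in path])  # → [100.0, 150.0, 187.5, 234.4]
```

Even with growth decelerating to 25%, revenue more than doubles over three years — which is the "long way to go" point.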

I also read that Nvidia’s GPUs are doing 40% of inference work – they’re very much on the ball on inference.

The DeepSeek impact: If DeepSeek’s breakthrough in smarter inference had been announced by an American or other non-Chinese company, and if they hadn’t claimed a lower cost, it wouldn’t have made the impact it did. The surprise element was the reported total spend, and the claim that they didn’t have access to GPUs – it was meant to shock and awe and create cracks in the massive spending ecosystem, which it is doing. But the reported total spend, and the claim of not using high-end GPUs, doesn’t seem plausible, at least to me. Here’s my earlier article detailing some of the reasons. The Chinese government has subsidized every export entry to the world, from furniture to electric vehicles – so why not this one? That has been their regular go-to-market strategy.

Cheaper LLMs are not a plug-and-play replacement. They will still require significant investment and expertise to train and create an effective inference model. I think GPU requirements will not diminish, because you need GPUs for training and time scaling, and smarter software will still need to distill data.

Aiming at a 10x reduction in cost is a good target, but it will compromise quality and performance. Eventually, the lower-tier market will get crowded and commoditized – democratized, if you will – which may require cheaper versions of hardware and architecture from AI chip designers, as an opportunity to serve lower-tier customers.

American companies will have to work harder, for sure – customers want cheap (Databricks’ CEO’s phone hasn’t stopped ringing for alternative solutions) – unless they TikTok this one as well…

Categories
AI Cloud Service Providers Industry Semiconductors Stocks

DeepSeek Hasn’t Deep-Sixed Nvidia

01/28/2025

Here is my understanding of the DeepSeek breakthrough and its repercussions on the AI ecosystem

DeepSeek used “time scaling” effectively, which allows its r1 model to think deeper at the inference phase. By using more compute instead of producing an answer immediately, the model takes longer to search for a better solution and then answers the query better than existing models.

How did the model get to that level of efficiency?

DeepSeek used a lot of interesting and effective techniques to make better use of its resources, and this article from NextPlatform does an excellent job with the details.

Besides effective time scaling, the model distilled answers from other models, including ChatGPT’s.

What does that mean for the future of AGI, AI, ASI, and so on?

Time scaling will be adopted more frequently, and tech leaders across Silicon Valley are responding to improve their methods as cost-effectively as possible. That is the logical next step – for AI to be any good, superior inference was always going to be the differentiator and the value addition.

Time scaling can be done at the edge as the software gets smarter.

If the software gets smarter, will it require more GPUs?

I think GPU requirements will not diminish, because you need GPUs for training and time scaling, and smarter software will still need to distill data.

Cheaper LLMs are not a plug-and-play replacement. They will still require significant investment and expertise to train and create an effective inference model. Aiming at a 10x reduction in cost is a good target, but it will compromise quality and performance. Eventually, the lower-tier market will get crowded and commoditized – democratized, if you will – which may require cheaper versions of hardware and architecture from AI chip designers, as an opportunity to serve lower-tier customers.

Inferencing

Over time, yes, inference will become more important – Nvidia has long been talking about the scaling law, which diminishes the role of training and raises the need for smarter inference. They are working on this as well; I even suspect that the $3,000 Digits machine they showcased for edge computing will provide some of the power needed.

Reducing the variable cost per token/query is huge: a lower variable cost is a huge boon to the AI industry – previously, retrieving and answering tokens could cost more than an entire monthly subscription to ChatGPT or Gemini.
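A hypothetical back-of-the-envelope illustrates the point; none of the prices or usage numbers below are actual OpenAI or Google rates.

```python
# Hypothetical illustration of the break-even described above: at high
# enough per-token prices, a heavy user's monthly API bill exceeds a flat
# subscription. All numbers are placeholders, not actual vendor pricing.
SUBSCRIPTION_PER_MONTH = 20.00   # flat monthly plan, hypothetical
COST_PER_1K_TOKENS = 0.06        # hypothetical API rate
QUERIES_PER_MONTH = 1_000
TOKENS_PER_QUERY = 1_500         # prompt + completion, hypothetical

api_bill = QUERIES_PER_MONTH * TOKENS_PER_QUERY / 1_000 * COST_PER_1K_TOKENS
print(f"API bill: ${api_bill:.2f} vs subscription: ${SUBSCRIPTION_PER_MONTH:.2f}")

# A rate 93% lower (the r1 claim quoted below) changes the comparison:
cheaper_bill = api_bill * (1 - 0.93)
print(f"At 93% less: ${cheaper_bill:.2f}")
```

At the placeholder rates, the API bill dwarfs the subscription; cut the per-token price by 93% and it drops well below it — which is why cheaper inference matters so much to the industry.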

From Gavin Baker on X on APIs and Costs:

R1 from DeepSeek seems to have done that: “r1 is cheaper and more efficient to inference than o1 (ChatGPT). r1 costs 93% less to *use* than o1 per each API, can be run locally on a high end work station and does not seem to have hit any rate limits which is wild.”

However, “Batching massively lowers costs and more compute increases tokens/second so still advantages to inference in the cloud.”

It is comparable to o1 from a quality perspective, although it lags o3.

There were real algorithmic breakthroughs that led to it being dramatically more efficient both to train and inference.  

On training costs and real costs:

Training in FP8, MLA and multi-token prediction are significant.  It is easy to verify that the r1 training run only cost $6m.

The general consensus is that the “REAL” costs of the DeepSeek model are much larger than the $6Mn given for the r1 training run.

Omitted are:

Hundreds of millions of dollars spent on prior research, plus access to much larger clusters.

DeepSeek likely had more than 2,048 H800s; an equivalently smart team can’t just spin up a 2,000-GPU cluster and train r1 from scratch with $6m.

There was a lot of distillation – i.e., it is unlikely they could have trained this without unhindered access to GPT-4o and o1. That is ironic, because the US is banning the GPUs but giving access to distill leading-edge American models… Why buy the cow when you can get the milk for free?

NextPlatform, too, expressed doubts about DeepSeek’s resources:

We are very skeptical that the V3 model was trained from scratch on such a small cluster.

A schedule of geographical revenues for Nvidia’s Q3-FY2025 showed 15% of Nvidia’s revenue – over $4Bn – “sold” to Singapore, with the caveat that it may not be the ultimate destination. This raises doubts that DeepSeek may have gotten access to Nvidia’s higher-end GPUs despite the US export ban, or stockpiled them before it.

Better software and inference is the way of the future

As one of the AI vendors at CES told me, she had the algorithms to answer customer questions and provide analytical insights at the edge for several customers – they have the data from their customers and the software, but they couldn’t scale because AWS was charging them too much for cloud GPU usage when they didn’t need that much power. So besides r1’s breakthrough, this movement has been afoot for a while, and it will spur investment and innovation in inference. We will definitely continue to see demand for high-end Blackwell GPUs to train data and create better models for at least the next 18 to 24 months, after which the focus should shift to inference – and as Nvidia’s CEO said, 40% of their GPUs are already being used for inference.

Categories
AI Semiconductors Stocks

Micron’s Low Price Is A Gift

Micron’s data center revenue should grow 91% and 38% in FY2025 and FY2026, driven by cloud server DRAM and HBM.

The market is not assigning a strong multiple to Micron’s largest, most profitable, and fastest-growing segment, with HBM3E contributing significantly, and future growth expected from HBM4.

Micron should gain from an extremely strong AI market, as evidenced by huge Capex from hyperscalers, Nvidia’s Blackwell growth, and Taiwan Semiconductor’s forecasts.

Consumer NAND business faced challenges due to inventory reductions, seasonal slowdowns, and delayed PC refresh cycles, impacting Q2 revenue guidance and margins.

Despite short-term consumer weakness, Micron’s strong data center prospects and attractive valuation make it a compelling buy, especially at the current price of $90.

You can read the entire article on Seeking Alpha. Micron (MU) dropped a massive 15% after DeepSeek deep-sixed the market. Nvidia (NVDA) too dropped 14% but has begun to recover, and I expect Micron to recover as well.

Categories
AI Stocks Technology

UiPath: The Path Forward Is Getting Smoother

  • UiPath’s competitive edge lies in its AI integration, SAP partnership and industry-agnostic automation solutions, making it a strong contender in RPA despite generative AI threats.
  • Recent struggles were due to sales execution issues, and competition, but the company shows signs of recovery with strategic changes and a focus on large clients and collaborations.
  • Founder Daniel Dines’ return as CEO, workforce reductions, and strategic partnerships, especially with SAP, are pivotal in steering UiPath back on track.
  • Despite current challenges, UiPath’s strong cash position, cost-saving measures, and promising AI Agentic capabilities make it a worthwhile investment with limited downside risk.

UiPath’s (PATH) updated financial forecast and current valuation make a great case for investment as a GARP (growth at a reasonable price) candidate: it’s likely to grow only in the mid-teens, but it’s valued at just 5x sales and 25x adjusted earnings. Besides, cash flow is almost double the adjusted operating income, so that too is a plus. I own some and plan to accumulate on declines.

Here’s the complete article on Seeking Alpha.

Categories
AI Semiconductors Stocks

Taiwan Semiconductor Manufacturing (TSM) Hits It Out Of The Park

What a great start to the earnings season! 

TSM is up 5% premarket after a massive beat and terrific guidance for AI demand.

TSM, the indispensable chip producer for some of the world’s largest tech companies, including Apple (AAPL), Nvidia (NVDA), AMD (AMD), and other chipmakers, produced outstanding results this morning.

Q4 Metrics

Sales up 37% YoY to $26.88B

Earnings per American Depositary Receipt of $2.24, up 56% YoY from $1.44

Both top and bottom line numbers surpassed analysts’ expectations.

“Our business in the fourth quarter was supported by strong demand for our industry-leading 3nm and 5nm technologies,” said Wendell Huang, senior VP and CFO of TSM in the earnings press release.

Strong AI momentum

TSMC’s brilliant Q4 results beat management’s guidance and confirmed strong AI momentum for 2025, disproving any notion of reduced or waning demand. With $300Bn in planned Capex from the hyperscalers alone, I believe investors’ concerns are misplaced.

Management highlighted TSM’s growing collaborations with the memory industry during the earnings call, reinforcing confidence in strong and accelerated HBM demand, which bodes well for HBM makers like Micron (MU).

Closer interaction with HBM makers also suggests a strong foundation for the potential ramp of TSM’s 3nm node and upcoming 2nm node, which would be essential for AI development.

TSM has de-risked geopolitical issues with its overseas factories in Arizona and Kumamoto. It also noted that it could manage further US export controls on China; besides, only 11% of its sales were to China.

“Let me assure you that we have a very frank and open communication with the current government and with the future one also,” said Wei when asked about the current and next U.S. administrations.

Any de-risking is a tailwind for higher multiples and valuation for the stock. I had recommended TSM earlier, stating that geopolitical concerns shouldn’t devalue the crown jewel of the semiconductor industry.

Q4 Revenue by Technology

TSM said 3nm process technology contributed 26% of total wafer revenue in the fourth quarter, versus 15% in the year-ago period, and 20% in the third quarter of 2024.

The 5nm process technology accounted for 34% of total wafer revenue, compared to 35% in the same period a year ago, and 32% in the third quarter of 2024. Meanwhile, 7nm accounted for 14% of total wafer revenue in the fourth quarter, versus 17% both a year earlier and in the third quarter of 2024.

Total 3nm + 5nm = 26% + 34% = 60% of wafer revenue – that’s a fantastic high-margin business.

Advanced technologies (7nm and below) accounted for 74% of wafer revenue.
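The revenue-mix arithmetic above can be reproduced in a couple of lines:

```python
# The node-mix arithmetic above, as a quick check. Shares are Q4 wafer
# revenue percentages as quoted in the text.
wafer_revenue_share = {"3nm": 26, "5nm": 34, "7nm": 14}  # percent

leading_edge = wafer_revenue_share["3nm"] + wafer_revenue_share["5nm"]
print(f"3nm + 5nm = {leading_edge}%")        # → 60%

# "Advanced technologies" (7nm and below) adds the 7nm share
advanced = leading_edge + wafer_revenue_share["7nm"]
print(f"7nm and below = {advanced}%")        # → 74%
```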

Q4 Revenue by Platform

High Performance Computing represented 53% of net revenue, up from 43% in the fourth quarter of 2023 (Nvidia, AMD, Broadcom).

The company’s smartphone segment represented 35% of net revenue, versus 43% in the year-ago period (Apple).

Q4 Revenue by Geography

Revenue from China — Taiwan Semi’s second-biggest market by revenue — accounted for 9% of the total net revenue in the period, down from 11% in the year-ago period and in the third quarter of 2024.

North America accounted for 75% of total net revenue, compared to 72% a year earlier and 71% in the third quarter of 2024.

Outlook

“Moving into the first quarter of 2025, we expect our business to be impacted by smartphone seasonality, partially offset by continued growth in AI-related demand,” said Huang.

TSM expects capital expenditure of between $38B and $42B in 2025 – up to 19% more than analysts’ expectations, according to a Bloomberg report.

For the first quarter of 2025, TSM expects revenue between $25B and $25.8B (midpoint $25.4B), versus consensus of $24.75B. That’s a raise.
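A quick check of that guidance arithmetic, using the figures from the paragraph above:

```python
# Guidance midpoint vs. consensus, from the figures in the text.
low, high = 25.0, 25.8          # Q1-2025 revenue guidance range, $Bn
consensus = 24.75               # analyst consensus, $Bn

midpoint = (low + high) / 2
raise_pct = (midpoint / consensus - 1) * 100
print(f"Midpoint ${midpoint:.1f}B, {raise_pct:.1f}% above consensus")
```

The midpoint sits about 2.6% above consensus — a modest but real raise.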

Categories
AI Stocks

CES Note On Nvidia (NVDA)

Nvidia Announcements: CEO Jensen Huang delivered the CES (Consumer Electronics Show) keynote on January 6th, 2025. An immensely popular event with about 140,000 attendees, CES saw the charismatic Huang draw a packed crowd. He did not disappoint.

New Gaming Cards: Nvidia’s new line of RTX 50 Series gaming graphics cards is based on the company’s Blackwell chip architecture. Considering the massive leap Blackwell has made over Hopper, its previous iteration in data center applications, getting it to work in gaming is a huge deal – a massive improvement with superior rendering and higher frame rates for gamers.

Digits – a Linux-based desktop computer with the GB10 Grace Blackwell Superchip, combining a CPU and GPU – is a first in Nvidia’s history at $3,000. “Placing an AI supercomputer on the desks of every data scientist, AI researcher, and student empowers them to engage and shape the age of AI,” Huang said.

It may seem like a niche product for high-end engineers, professors, scientists, and researchers, but I think it’s a deliberate and excellent strategy to evangelize the product through the folks who can develop use cases and apps – furthering the market for cheaper, mass-market versions of Digits in the future.

This is straight from the Nvidia playbook of the last two decades – they have always involved the scientific and research community from the start. I’d bet a large number of these will be distributed free to campuses.

I think Digits will turn out to be a very consequential product for Nvidia – with several billion in revenue in a few years. But forecasts aside, what’s key is that Digits uses a scaled-down version of Nvidia’s Grace AI server CPU technology, packaged in a Mac Mini-sized form factor with the help of Taiwan-based MediaTek, which Huang commended for its expertise in building low-power chips.

Quoting Tae Kim from Barron’s, who authored an excellent book on Nvidia:

“Over time, the logical move for Nvidia would be to scale down this CPU further for consumer Windows laptops. By integrating its graphics expertise, MediaTek’s power-saving capabilities, and the efficiency of Arm-based CPU technology, Nvidia could create a processor that offers leading graphics for gaming and high performance for productivity, along with long battery life. While prior Arm-based Windows PCs have struggled with software compatibility, Nvidia’s top-notch software engineering could make it work.”

Huang strongly hinted it was likely to happen. “We architected a high-performance CPU with [MediaTek],” he said on Tuesday at a question-and-answer session with financial analysts at CES. “It was a great win-win.”

When pressed by an analyst if Digits was an iterative step toward moving into the PC market, “I’m going to have to wait to tell you that,” Huang said. “Obviously, we have plans.”

There are questions about why Nvidia chose Linux over Windows – and we should hear more about that at their GTC conference in March.

It could shake up the moribund PC market, which has been suffering from a lack of growth since COVID-19. Desktop PCs and laptops still generate large revenues for Intel and Advanced Micro Devices, the primary makers of x86-based processors – a legacy that could give way to Arm-based processors, which Apple uses. Analysts expect Intel to generate $30 billion in revenue from its client computing business in 2024, according to FactSet, while AMD is expected to generate $6.7 billion in its client segment.

Billions in potential new revenue are at stake for Nvidia, which is forecast to make $180Bn in sales in the 12 months ending January 2026. Data center takes the largest share of that pie, at about $150Bn, while gaming, auto, and professional visualization (Omniverse) take the rest – a new source would help a great deal when data center revenue growth slows down.

In the past two years, Nvidia has monopolized the AI data center market with the best-designed, highest-performing chips. Nvidia could likely make a significant dent in the PC market, as well as in the new paradigm of edge computing, with all the computing power and constant innovation and upgrades at its disposal.

The third announcement was COSMOS – a significant improvement over Omniverse and the biggest catalyst/enabler of “Physical AI.” I’ll write a separate note on that.

Nvidia wants it all. That’s likely to be good news for consumers and trouble for the PC status quo.

Categories
AI Hardware Semiconductors Stocks

AI Takes Center Stage At 2025 CES

The mood at CES (the Consumer Electronics Show) is pretty upbeat. This is a hardware show, and after years of taking a backseat to software, the massive computing power from GPUs for AI applications has propelled hardware to the forefront. And it’s not just servers, racks, data center peripherals, or networking – the range of hardware was impressive, from autos, robots, home appliances, entertainment, healthcare devices, industrial equipment, and smart glasses to security.

Vendors, engineers, consulting firms, and VCs are noticeably excited for the future. 

Heavily focused on AI, the show had a large number of exhibitors with AI proofs of concept and use cases. Sure, there’s always hype, but a lot of them were genuine, with orders, use cases, and deployments.

Trends

Smarter hardware

Robotics – a big push from computing power for robotic uses in supply chain, warehousing, and retail. Strong use of AI in industrial design and industrial production.

Examples:  

  • Siemens’ collaboration with Nvidia and several others, using Nvidia’s Cosmos (a big improvement on Omniverse) for pre-production simulation and assembly lines (quality control, error reduction)
  • Eaton with Intel for power systems (time reduction on compliance, process streamlining)
  • Accenture’s several clients, in fields like oil drilling and IoT applications

Vendors can do a lot more now and are seeing demand for better AI products.

Inference

Both edge AI in PCs and IoT endpoints are going to be huge. One SaaS cybersecurity vendor (among others) complained about the crazy fees she pays AWS (somebody’s got to pay for that $100Bn of Capex for Nvidia’s GPUs!). The need for more inference and solutions at the endpoint will be a major trend, and the increase in the number of AI PCs coming to market confirms the shift to the edge.

Agentic AI or Agents

I had lengthy discussions with Nvidia and AMD engineers on Agentic AI, which has become a bit of a buzzword – with some hype that it’s really just a chatbot with a fancier name – but in several cases I saw, it was a major improvement over a chatbot.

Agentic has to mean enhanced or better solutions than a chatbot. A chatbot or a ChatGPT query gives you a simple answer from a data set; an agent must be able to give you significantly more, from analyzing a wider set of data or more than one data set. It is more interactive – it provides feedback and looks for feedback, a loop that ends up with a better solution.

A comment from a VP at Accenture was very interesting. She said that if agents are to be better than chatbots or the latest versions of Alexa/Siri, then when I ask for the next charging station for my EV, the agent should know my history, deduce which direction I’m going in, and also tell me if there is a vegetarian eatery on the way (in the same direction) – not just throw up 5 charging stations, 2 of them in the wrong direction, as it currently does.
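That distinction can be sketched in a few lines; the station data, field names, and the vegetarian-preference flag below are all hypothetical, purely to illustrate the agent-vs-chatbot behavior described above.

```python
# A minimal sketch of the "agent vs. chatbot" distinction: instead of
# returning every nearby charging station, an agent filters by the
# driver's direction of travel and layers in a known preference.
# All station data, names, and fields here are hypothetical.
STATIONS = [
    {"name": "Station A", "direction": "north", "veg_eatery_nearby": True},
    {"name": "Station B", "direction": "south", "veg_eatery_nearby": False},
    {"name": "Station C", "direction": "north", "veg_eatery_nearby": False},
]

def chatbot_answer(stations):
    """Chatbot behavior: dump every match, no trip context."""
    return [s["name"] for s in stations]

def agent_answer(stations, heading, prefers_veg):
    """Agent behavior: use trip context and preferences to filter and rank."""
    on_route = [s for s in stations if s["direction"] == heading]
    if prefers_veg:
        # Put stations with a vegetarian eatery nearby first
        on_route.sort(key=lambda s: not s["veg_eatery_nearby"])
    return [s["name"] for s in on_route]

print(chatbot_answer(STATIONS))               # all three, wrong direction included
print(agent_answer(STATIONS, "north", True))  # → ['Station A', 'Station C']
```

A real agent would run this as a loop — gather context, act, take feedback, refine — but even this toy filter shows why context makes the answer better than a raw list.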

In my opinion, this is just the beginning; there will be secular growth for a while. I would strongly focus on AI, with the usual caveat of being careful about stock valuations running ahead of growth and getting overpriced – we’ll need to be patient to get the right price.

For AI to continue growing, the industry has to get more democratized – more developers, a wider ecosystem, more use cases – and I saw that in spades, which gives me a lot of confidence in its future. Nvidia, AMD, Qualcomm, Amazon, Google, and Intel had multiple partners all over the conference.

The agent could be your interface on your iPhone – an interesting idea brought up by a VP from Qualcomm.

Categories
Ad Tech Stocks Technology

Roblox (RBLX) $58, A Solid But Overpriced Company

Roblox (RBLX) is a market leader in gaming apps, and the short report from Hindenburg alleging irregularities in engagement metrics had a negligible impact on its share price.

In October 2024, Roblox shares dropped 9% after Hindenburg’s short thesis but quickly recovered, closing only 2% lower, highlighting investor resilience. The stock, which was coasting in the low forties then, has gained almost 50% since then to $58 today.

I believe it is a solid company. Although it is overpriced, it is worth considering as an investment if the price drops below $50.

Positives

Market Leader: Roblox is the number one grossing app for the iPad across the App Store, and regularly among the top 10 apps for the iPhone, across categories, according to data collected by Refinitiv. 

Not gaming the market: Roblox also showed strong App Store momentum across some of the biggest gaming markets in the world, including North America, Europe, and Southeast Asia. I believe these gross numbers are extremely difficult, if not impossible, to fudge; instead, they support Roblox’s strong commercial value and future prospects.

Good quarterly numbers and guidance: Roblox’s bookings in Q2 grew around 22% YoY to $955Mn, and it guided to $1-$1.025Bn for Q3.

Partnering with Shopify: The commercial integration partnership with Shopify also helps Roblox further build out its virtual market, with better monetization opportunities.

Wall Street likes it: Ken Gawrelski, an analyst from Wells Fargo, maintained a Buy rating on Roblox, raising the price target to $58.00. He observes that the company’s strong engagement trends continue to outperform expectations, with a significant increase in concurrent users and app downloads, indicating robust user growth. These factors contribute to a raised third-quarter total bookings growth forecast, which is now expected to surpass the company’s guidance and the consensus estimates. 

Great monetization tools: Roblox’s expansion of monetization tools, and strength in in-game spending is a significant competitive advantage for driving long-term developer and user engagement on the platform. Shopify and other initiatives are expected to enable developers to better monetize their offerings. 

The trend is their friend: The strategic shift towards direct response advertising, including new partnerships and live commerce testing, indicates that Roblox can make the most of the new opportunities in digital advertising. 

These initiatives give me confidence in the company’s sustainable long-term revenue growth.

Negatives

Hindenburg’s biggest grouse was the possibility of fudged and overstated user growth and engagement numbers, which – though strongly denied by the company and dismissed by analysts – could create doubts about the valuation in the future.

Revenue growth forecasts for the next 3 years are around 16-18%, and with the stock selling at 7.5x sales, it is expensive – a quarterly miss could lead to a large drop. Even the positive Wells Fargo analyst had a price target of $58, and we’ve already crossed that level.

It is loss-making on a GAAP basis with heavy stock-based compensation, which likely caps its valuation. That said, cash flow is strong – around 19% of revenue.

Overall, Hindenburg didn’t make an impact, and Roblox is performing well, but I would be very careful about the price and wait for it to come lower to make a meaningful return.

Categories
Semiconductors Stocks

The Knee Jerk Reaction To Micron’s Q1-25 Is A Gift

Micron Technology’s fiscal Q1-2025 earnings report offered two stories: a dramatic surge in data center revenue and a troubling outlook for its consumer-facing NAND business. Despite the strong performance in high-growth areas like AI and data centers, the company’s stock took a sharp dip after hours due to concerns about consumer weakness, especially in the NAND segment.

I’ve owned and recommended Micron for a while now, and even took some profits in June 2024 at $157, when it rose far above what I felt was its intrinsic value. Since it’s a cyclical stock in a commodity memory semiconductor business, getting a good price is unusually important, and it is crucial to take profits when the stock gets ahead of itself.

Micron’s (MU) stock slumped from $108 on weak guidance for the next quarter, and now at $89, it looks very attractive at this price. I’ve started buying again.

Record-breaking data center performance

Micron reported impressive growth in its Compute and Networking Business Unit (CNBU), which saw a 46% quarter-over-quarter (QoQ) and 153% year-over-year (YoY) revenue jump, reaching a record $4.4Bn. This success was largely driven by cloud server DRAM demand and a surge in high-bandwidth memory (HBM) revenue. In fact, data center revenue accounted for over 50% of Micron’s Q1-FY2025 total revenue of $8.7Bn, a milestone for the company.

HBM Revenue was a standout, with analysts estimating that the company generated $800 to $900Mn in revenue from this segment during the quarter. Micron’s HBM3E memory, which is used in products like Nvidia’s B200 and GB200 GPUs, has been a significant contributor to the company’s data center growth. Micron’s management also raised their total addressable market (TAM) forecast for HBM in 2025, increasing it from $25 billion to $30 billion—a strong indicator of the company’s growing confidence in its AI and server business.

Looking ahead, Micron remains optimistic about the long-term prospects of HBM4, with the expectation of substantial growth in the coming years. The company anticipates that HBM4 will be ready for volume production by 2026, offering 50% more performance than its predecessor, HBM3E, and potentially reaching a $100 billion TAM by 2030.

Consumer weakness and NAND woes

While Micron’s data center performance was strong, the company’s consumer-facing NAND business painted a less rosy picture. Micron forecast a near-10% sequential decline in Q2 revenue, to $7.9Bn, far below the consensus estimate of $8.97 billion, setting a negative tone. The decline was primarily attributed to inventory reductions in the consumer market, a seasonal slowdown, and a delay in the expected PC refresh cycle – a segment that has also derailed other semis such as Advanced Micro Devices (AMD) and Lam Research (LRCX). While NAND bit shipments grew by 83% YoY, a weak demand environment for consumer electronics – especially in the PC and smartphone markets – weighed heavily on performance.

Micron’s CEO, Sanjay Mehrotra, emphasized that the consumer market weakness was temporary and that the company expected to see improvements by early 2025. The company also noted the challenges posed by excess NAND inventory at customers, especially in the smartphone and consumer electronics markets. In particular, Micron’s NAND SSD sales to the data center sector moderated, leading to further concerns about demand sustainability. The slowdown in the consumer space and the underloading of NAND production is expected to continue into Q3, with Micron’s management projecting lower margins for the foreseeable future due to these supply-demand imbalances.

Micron reported Q1 revenue of $8.71 billion, up 84.3% YoY, and in line with consensus estimates. However, the company’s Q2 guidance of $7.9Bn (a 9.3% sequential decline) was notably weaker than the $8.97Bn analysts had expected. The guidance miss sent Micron’s stock down significantly in after-hours trading.
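The guidance math is easy to verify from the figures above:

```python
# Quick check of the guidance math: a Q2 guide of $7.9Bn against Q1
# revenue of $8.71Bn is the ~9.3% sequential decline cited in the text.
q1_revenue = 8.71   # $Bn, reported
q2_guide = 7.9      # $Bn, guidance
consensus = 8.97    # $Bn, analyst estimate

decline_pct = (q2_guide / q1_revenue - 1) * 100
miss_pct = (q2_guide / consensus - 1) * 100
print(f"Sequential decline: {decline_pct:.1f}%")   # → -9.3%
print(f"Below consensus by: {miss_pct:.1f}%")
```

The guide isn't just a sequential decline — it sits roughly 12% below what analysts had modeled, which explains the sharp after-hours reaction.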

Financial highlights: Strong margins and profitability, but challenges ahead

Improving Margins: Micron’s gross margin for Q1 came in at 38.4%, an improvement of 3.1 percentage points QoQ, largely driven by the strength of HBM and data center DRAM. However, the outlook for Q2 is less optimistic, with gross margins expected to decline by about 1 percentage point due to continued weakness in NAND, along with seasonal factors and underloading impacts.

Micron’s operating margin for the quarter was 25.0%, ahead of guidance, reflecting the company’s tight cost control and strong performance in high-margin segments like HBM. However, for Q2, Micron expects operating margins to contract, with GAAP operating margin expected to drop to 21.8%.

Profitability also improved significantly with GAAP net income rising 111% QoQ to $1.87Bn, resulting in a GAAP EPS of $1.67, compared to a loss of $1.10 in the year-ago quarter. However, Micron guided for a significant drop in EPS for Q2, forecasting GAAP EPS of $1.26, well below the $1.96 expected by analysts.

Cash flow and capital investments

Micron’s cash flow generation remained robust, with operating cash flow (OCF) increasing by 130% YoY to $3.24 billion, but free cash flow (FCF) was more limited due to significant capital expenditures (CapEx) of $3.1 billion. The company also outlined its intention to spend around $14 billion in CapEx in FY25, primarily to support the growth of HBM and other high-margin data center products. In my opinion, this is a necessity to stay close to SK Hynix and Samsung, its biggest rivals in HBM, which also hold a large chunk of the market and can easily match Micron in the product improvements necessary to supply the likes of Nvidia (NVDA). High CapEx also increases its ability to scale and improve margins down the road, leading to greater cash generation.

Going home: Micron also announced a $6.1 billion award from the U.S. Department of Commerce under the CHIPS and Science Act to support advanced DRAM manufacturing in Idaho and New York. This partnership aligns with Micron’s long-term growth strategy in the data center and AI segments.

Micron is a bargain

I’m buying the stock with the risk that it could stay range-bound for a few months.

The company’s earnings call reflected a clear divergence in the outlook for its two key segments: data center and consumer electronics. Management sounded confident about data center growth, driven by strong demand for AI-driven applications, while providing a more cautious forecast for the consumer NAND business, where inventory corrections and weakened demand are expected to persist through fiscal Q2 2025.

I’m very confident about Micron’s continued strength in the data center market, driven by AI and cloud computing, and believe that the prolonged weakness in consumer-facing NAND and PC markets in the short term is an opportunity to buy the stock at a bargain price. The market’s reaction suggests that investors were caught off guard by the unexpected weakness in the consumer business, but this weakness has been persisting, as I mentioned earlier with AMD and Lam Research. Even before earnings, at $109, Micron was well below its 52-week high of $158, and the further knee-jerk reaction is a boon for the bargain hunter.

For now, I’m not worried if the stock remains range-bound – at $89, the downside is seriously limited, and its future success will rest squarely on HBM data center demand. Its largest customer, Nvidia, is forecast to generate $200Bn of data center revenue from its Blackwell line, and Micron will reap a good chunk of that.

Micron is priced at 13x consensus earnings of $6.93 for the fiscal year ending August 2025, which are forecast to grow to $11.53 in FY2026 – a whopping jump of 66% that brings the P/E multiple down to just 8. Even for a cyclical, that’s low. Besides, Micron is also growing revenues at 28% next year on the back of a 40% increase in FY2025, which took it soaring past its previous cyclical high of $31Bn in FY2022. With data center revenue contributing more than 50% of the total, Micron deserves a better valuation.
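The multiples above can be verified with a quick back-of-the-envelope calculation. This sketch uses the share price and consensus EPS figures cited in this article as inputs; they are estimates, not guarantees.

```python
# Back-of-the-envelope check of the Micron valuation math cited above.
# Inputs are the article's figures: a ~$89 share price and consensus EPS.
price = 89.0      # share price cited in the article
eps_fy25 = 6.93   # consensus EPS, fiscal year ending Aug 2025
eps_fy26 = 11.53  # consensus EPS, FY2026

pe_fy25 = price / eps_fy25        # trailing-forward P/E on FY25 earnings
pe_fy26 = price / eps_fy26        # P/E if FY26 consensus is met
eps_growth = eps_fy26 / eps_fy25 - 1

print(f"FY25 P/E:   {pe_fy25:.1f}x")   # ~12.8x, i.e. roughly the 13x cited
print(f"FY26 P/E:   {pe_fy26:.1f}x")   # ~7.7x, i.e. roughly the 8x cited
print(f"EPS growth: {eps_growth:.0%}") # ~66%
```

The point of the exercise: the headline "13x falling to 8x" depends entirely on the FY2026 consensus EPS materializing, which is the key assumption behind the bargain thesis.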

Categories
Cloud Service Providers Enterprise Software Stocks

Oracle Deserves A Seat At The AI Table

Oracle (ORCL), at $166, is a solid investment opportunity for the next 3-5 years, with a decent shot at growing data center cloud revenues faster than its other businesses, helped by a push from the AI requirements of clients like Meta. Hyperscalers and cloud service providers are expected to spend $300Bn in capex in 2025, boosting cloud infrastructure providers such as Oracle.

Oracle’s earnings should grow between 16-18% annually in the next 3-5 years, and it’s very reasonably priced at 24x forward earnings of $7.05.

Its 31% GAAP operating margins are another sign of strength, especially for a legacy/mature tech company with $52Bn+ in revenue.

Revenues should grow at 12-14% annually in the next 3-4 years, which is impressive for a company of that size. Oracle’s P/S multiple is not expensive at 7x sales.
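As with Micron, the Oracle multiples are easy to sanity-check. This sketch uses the article's price, forward EPS, and 16-18% growth range as inputs; the compounded figures are simple extrapolations under those assumptions, not forecasts.

```python
# Sanity-check of the Oracle multiples cited above, using the article's
# figures: a $166 share price, $7.05 forward EPS, and 16-18% earnings growth.
price = 166.0   # share price cited in the article
fwd_eps = 7.05  # forward EPS cited in the article

fwd_pe = price / fwd_eps
print(f"Forward P/E: {fwd_pe:.1f}x")  # ~23.5x, i.e. roughly the 24x cited

# Implied EPS after 4 years of compounding at each end of the growth range
for growth in (0.16, 0.18):
    eps_4yr = fwd_eps * (1 + growth) ** 4
    print(f"EPS in 4 yrs at {growth:.0%}: ${eps_4yr:.2f}")  # ~$12.77-$13.67
```

If earnings compound in that range, today's 24x shrinks to roughly 12-13x four years out at an unchanged price, which is what makes the entry point look reasonable for a 3-5 year horizon.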

Oracle Cloud Infrastructure’s robust growth is a big catalyst for the company and the stock, and while Oracle’s FQ2 double miss disappointed investors in the near term, the price drop from $191 has created an opportunity. Besides, Oracle could gain market share over time.

Oracle’s modular approach and scalable infrastructure offer cost competitiveness, attracting customers. The company’s ability to scale AI clusters – demonstrated by the deployment of a 65,000-GPU NVIDIA H200 supercomputer and a 336% surge in GPU consumption last quarter – further highlights its appeal.

Furthermore, Oracle’s strengthened partnership with Meta for AI training underscores its attractiveness to both large enterprises and smaller businesses. This reinforces the effectiveness of Oracle’s modular strategy, which aims to provide customers with an improved total cost of ownership (TCO) compared to leading hyperscaler competitors.

Oracle’s overall cloud segment is about 55% of revenues and growing at 25%, but the licensing segment has been stagnant for the past two years. Over time, the mix will tilt more decisively toward the cloud, allowing the company to maintain or even increase its valuation multiples.