Exploring Weak Signals: Broadcom’s Perspective on AI Training ASICs
In the rapidly evolving AI hardware space, discussions often center on the competition between chip architectures, particularly GPUs and ASICs (Application-Specific Integrated Circuits). While NVIDIA’s GPUs have traditionally dominated AI training, ASICs have generally been seen as better suited to the AI inference stage. However, Broadcom’s recent commentary on AI training-specific ASICs during its earnings call caught our attention, potentially signaling a subtle shift in the industry’s understanding of AI workloads and the role of custom chips.
Our Perspective
1. Broadcom’s Signal
Broadcom’s focus on the AI training market has prompted our reflection. It is widely assumed that ASICs are best suited to inference, which involves processing large volumes of low-latency, high-frequency operations during the deployment phase of AI models. In contrast, GPUs—especially NVIDIA’s—have been the dominant force in AI training, which requires massive computational power and repeated adjustment of model parameters.
However, Broadcom’s mention of AI training-specific ASICs during its earnings call seems to contradict this conventional wisdom. The company revealed that it has already developed custom accelerators for high-end clients (reportedly Google, Meta, ByteDance, and OpenAI, though unconfirmed) to train cutting-edge models. Broadcom emphasized that ASICs could scale and provide the necessary performance for training large models, indicating an untapped potential in the training market.
2. Weak Signal: Is Broadcom Noticing Future Trends?
Typically, we view NVIDIA’s GPUs as the dominant force in AI training, while ASICs are more effective in inference due to their ability to execute repetitive computations with custom design, minimizing costs and maximizing efficiency for specific models. After two years of AI training market growth, the market may shift toward inference in the coming years.
However, Broadcom suggests that its projected $60B to $90B serviceable addressable market (SAM) will largely center on AI training, with large clients and partners expected to adopt ASICs for training workloads. Broadcom’s extensive technological advantages in front-end IC design, key IP integration, back-end IC layout, and wafer fabrication—along with its capabilities at the system and rack level—give it a significant competitive edge in the ASIC space.
Broadcom’s view on AI training and ASICs could be a weak signal (defined as subtle or seemingly insignificant signs pointing to a larger, widely unrecognized trend), indicating that AI training and ASICs may integrate in ways the market has not widely anticipated.
3. Driving ASIC Adoption in AI Training
Several factors may drive ASIC adoption in AI training:
3.1 ASIC vs. GPU: Efficiency Advantage
While GPUs offer flexibility and robust computational power, an ASIC’s customized design can optimize for specific tasks, significantly reducing power consumption and improving performance when training particular models. Deep-learning training ASICs could outperform general-purpose GPUs, especially on fixed and predictable workloads such as large language model training.
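The efficiency argument can be made concrete with a back-of-envelope calculation: what matters for training economics is not peak throughput but effective throughput per watt, which a workload-tuned ASIC can win on via higher sustained utilization even at lower peak performance. The sketch below illustrates this trade-off; all the numbers are hypothetical placeholders for illustration, not vendor specifications.

```python
# Back-of-envelope comparison of effective training throughput per watt
# for a general-purpose GPU vs. a workload-tuned ASIC.
# NOTE: all figures below are HYPOTHETICAL illustrations, not real specs.

def perf_per_watt(peak_tflops: float, utilization: float, watts: float) -> float:
    """Effective TFLOPS delivered per watt at a given sustained utilization."""
    return peak_tflops * utilization / watts

# Hypothetical profiles: the GPU has higher peak throughput, but the ASIC,
# designed for one fixed workload, sustains higher utilization at lower power.
gpu_eff = perf_per_watt(peak_tflops=1000, utilization=0.40, watts=700)
asic_eff = perf_per_watt(peak_tflops=800, utilization=0.70, watts=500)

print(f"GPU : {gpu_eff:.2f} effective TFLOPS/W")
print(f"ASIC: {asic_eff:.2f} effective TFLOPS/W")
```

Under these illustrative assumptions, the ASIC delivers roughly twice the effective throughput per watt despite a lower peak rating—the kind of gap that matters at the scale of a multi-megawatt training cluster.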
3.2 Customization Demand from Large Enterprises
Massive companies like Google, Meta, and OpenAI are increasingly looking to optimize hardware specifically for certain AI workloads. ASICs’ high degree of customization can be tailored to specific training tasks, greatly boosting performance. This is crucial for these companies, which are pushing the boundaries of AI and handling cutting-edge models with enormous computational needs.
3.3 Scalability Requirements
Broadcom’s interest in AI training ASICs aligns with large enterprises’ demand for computational scalability. These companies need not only single-accelerator solutions but also the ability to scale out to clusters large enough to train frontier-scale models. The scalability of ASIC-based systems, especially for training large language models and other advanced AI systems, will help improve the efficiency of these companies’ research.
4. Broadcom’s Target Market: Large Enterprises and Hyperscalers
Broadcom’s strategic focus is on large enterprises and hyperscale cloud companies, aligning with its strengths in large-scale enterprise hardware solutions. In fact, Broadcom has already secured multiple hyperscale companies as clients, which may indicate that large enterprises are open to transitioning to ASICs for model training. This suggests that AI infrastructure could undergo a transformation, shifting from GPU-dominated training workloads to a blend of ASIC and GPU solutions.
5. Future Impact: How Will the Market Respond?
If Broadcom’s forecast proves correct, the AI hardware market may undergo a transformation. The success of ASICs would represent a fundamental shift in AI training infrastructure, potentially moving away from GPUs as the primary hardware to a future where specialized accelerator chips are designed for different stages of AI development.
However, these are still early signs, and many questions remain:
- Can ASICs truly deliver the efficiency advantages that GPUs cannot match?
- Will large enterprises’ demand for ASICs continue to grow?
- Can Broadcom maintain its leadership in this space?
These questions will shape the future of AI training and hardware customization, revealing more challenges in the market.
Summary
Broadcom’s perspective on AI training and ASICs may signal a shift in how AI hardware is designed and deployed. Although this idea is not yet widely recognized in the market, if Broadcom’s view gains traction, AI hardware infrastructure may evolve toward more customized and specialized solutions, and the competition and convergence between ASICs and GPUs will become critical. In the coming years, the use of ASICs in AI training could be a driving force that reshapes the AI infrastructure landscape. This would not only represent a technological advancement in AI hardware but also influence how large enterprises and cloud providers plan for their computational demands, pushing AI training toward a more specialized and customized future.
This article is part of our Global Business Dynamics series.
It explores how companies, industries, and ecosystems are responding to global forces such as supply chain shifts, geopolitical changes, cross-border strategies, and market realignments.