How AWS Is Quietly Rewriting the Rules of the AI Server Supply Chain

Since early 2025, AWS’s Trainium orders have driven a short-term boom across Taiwan’s tech supply chain. But behind the surge lies a quiet restructuring of how that supply chain works. This piece explores how AWS is reshaping procurement and design control by delaying Trainium 3, releasing the transitional MAX version, and developing its own liquid cooling cabinet (IRHX). From chips to thermal infrastructure, AWS is extending its platform influence into the physical rhythm of data center operations. What looks like a wave of demand may, in fact, mark the beginning of a deeper shift in coordination and control.

Since late 2024, AWS has driven a notable surge across the AI server supply chain by pulling forward orders for its Trainium series. In particular, the ramp-up of Trainium 2 MAX during the first half of 2025 significantly boosted revenues for key component makers, including PCB, copper-clad laminate (CCL), and thermal module suppliers. Several Taiwanese vendors posted record-high revenues in June, leading analysts and investors to raise expectations across the sector.

Beneath this short-term boom, however, lies a deeper shift in rhythm and control. If we move the lens from “who’s placing orders” to “who’s rewriting the rules,” AWS’s actions appear less like a simple demand expansion and more like a structural reset. The delay of Trainium 3, the transitional release of Trainium 2 MAX, and the introduction of a proprietary liquid cooling system all signal a broader reconfiguration of supply chain cadence and design ownership.

The real transformation is not just about order volume. It’s about how AWS is quietly evolving from an ODM customer into the orchestrator of the entire ecosystem’s tempo.

The Delay Is Not Just Technical: It Is a Reset in Rhythm

According to Taiwan-based supply chain sources, the delay of Trainium 3 was largely due to AWS’s in-house liquid cooling system not being ready. To bridge the gap, AWS extended the lifecycle of Trainium 2 and released a transitional version called Trainium 2 MAX. The MAX version includes upgraded high-bandwidth memory (HBM) but still relies on air cooling. It was designed and manufactured by AWS’s internal Annapurna team, with former collaborator Marvell gradually stepping away.

At first, these looked like technical decisions: release a stopgap product when a delay occurs, shift the work internally when partnerships stall. But in hindsight, a deeper pattern emerges, one of shifting control. These moves suggest AWS wasn’t just filling a timeline gap. It was quietly rewriting the operational rhythm of its entire supply chain, on its own terms.

Behind the Surge: Double Booking and the Risk of a Demand Gap

AWS’s recent surge in component orders has been impressive on the surface. But a closer look reveals a mismatch between upstream and downstream expectations. While upstream CoWoS capacity remains tight, downstream forecasts appear overly optimistic. This gap likely reflects AWS’s double-booking strategy for components such as PCBs. One key driver is the ongoing shortage of high-performance fiberglass fabric, which is essential for the multi-layer boards used in AI servers. These boards rely on low-Dk (dielectric constant) and low-Df (dissipation factor) materials to ensure high-speed signal stability, but those materials are in short supply and come from only a few sources.

To secure enough inventory, AWS may have placed double orders with PCB suppliers. While this approach cannot guarantee delivery timelines, it can help AWS lock in scarce capacity when supply is constrained. However, this also passes significant risk downstream. If AWS later adjusts its demand, suppliers could suddenly face sharp order reductions, exposing the entire chain to an abrupt freeze.

Double booking has become a common tactic across the AI server space as companies race to build out infrastructure. But for suppliers, it often means committing to production without real visibility into sustained demand. The revenue spikes seen today may be built on a fragile foundation of unrecognized risk.

This raises the question: Is the current revenue growth a reflection of genuine demand, or the result of a supply rhythm out of sync with actual market needs? With Trainium 3 yet to reach mass production, the industry may be heading into a sudden demand gap between late 2025 and early 2026.

Architecture Shifts Are Redefining Component Roles and Value

The Trainium 2 motherboard placed two chips on a single board. For Trainium 3, AWS is expected to move to a four-chip configuration per board. While this appears to double the chip count, the broader design trend points toward integration and modularization. Many components that were previously treated as separate parts, such as power systems, cooling, and rail mounts, are now being consolidated and shared across systems. This shift is compressing both material usage and pricing per component.

AWS’s push into custom water-cooling systems has accelerated this trend. As cooling modules and chassis designs move from off-the-shelf parts to fully integrated systems, components are no longer priced individually but are bundled into broader infrastructure solutions. This further reduces the unit value of each part.

As a result, suppliers who gained during the Trainium 2 phase, such as PCB manufacturers, CCL providers, and rail system vendors, are now under pressure as both average selling prices and content per unit begin to shrink in the Trainium 3 cycle. As modular designs become more centralized, the value that each supplier adds is steadily declining.

To reinforce this structural shift, AWS is also expanding its supplier base. The company is moving away from exclusive partnerships and toward a multi-vendor, open certification model. This not only helps diversify risk but also introduces more pricing competition, effectively reshaping the balance of power across the supply chain.

AWS’s In-House Liquid Cooling Signals a Fundamental Shift in Supply Chain Models

The most important shift is not the hardware upgrade in Trainium, but AWS’s decision to move forward with its own in-house liquid cooling cabinet design, known as the In-Row Heat Exchanger (IRHX). This initiative aims to address past challenges in deployment speed and water efficiency. More significantly, it allows AWS to break away from branded solution providers like Vertiv or BOYD and take ownership of the design process while outsourcing component procurement and assembly.

This is more than a cooling upgrade. When liquid cooling transitions from brand-owned to platform-led, the balance of power shifts from midstream suppliers to the platform itself. AWS is not just optimizing performance. It is reshaping the fundamental question of who designs and who assembles the infrastructure behind AI.

AWS has already expanded its influence through in-house chip development with Graviton and Trainium. But the launch of IRHX marks the first time AWS is extending control into the data center’s cooling infrastructure. This shift is not just about energy efficiency. It reflects AWS’s move toward leading the design and deployment rhythms of physical infrastructure.

This shift means AWS is no longer simply a buyer. It is becoming the coordinator of design integration, material sourcing, and assembly timing. For example, while companies like Auras don’t supply the full IRHX system, they may still participate by providing key components such as fans or manifolds, as long as they align with AWS’s design specifications.

As this transition unfolds, the competitive barrier will no longer be defined by manufacturing scale or cost. The true differentiator will lie in how well suppliers understand and adapt to AWS’s design language and deployment cadence. In the next phase of the supply chain, staying aligned with the platform’s evolving architecture will be critical for long-term participation.

The Rules of the Supply Chain Are Quietly Changing

In the short term, the Trainium build-up has boosted the revenues and market valuations of many Taiwanese suppliers. But from a medium-term perspective, this surge reflects more than just demand. It reveals how AWS is gradually internalizing control over supply chain rhythms in response to delays. This shift could lead to future shipment gaps and declining value per unit, posing structural challenges for ODMs and component makers.

What truly matters is how AWS is using this moment to redefine supply chain architecture, cadence, and decision-making authority. Rather than simply outsourcing and integrating, AWS is defining its own design and procurement processes. This includes setting system specifications, planning materials, and reshaping the roles of its suppliers. The rules of the ecosystem are being rewritten as a result.

This may not be the most visible battle in the AI infrastructure race, but it could quietly shape the next round of cost structures, deployment timelines, and power dynamics. From custom chips to cooling systems, AWS is extending its design leadership into server hardware and data center buildout schedules.

While the current order momentum may feel reassuring for suppliers in Taiwan, the more lasting shift lies in how platform companies are quietly redefining what it means to be a supplier in the AI server supply chain and determining who gets to participate in the ecosystem. If we overlook this strategic transition already underway, we risk misjudging competitive thresholds, misallocating resources, and missing the right moment to adapt and respond.

From GPU clouds that financialize compute, to Wolfspeed’s capital bottleneck, and now to AWS’s quiet reshaping of supply chain architecture, these are not isolated cases. They are different chapters of the same shift: power is moving closer to the platform and farther from those who only manufacture.

This article is part of our Taiwan Tech and Market Shifts series.
It explores how Taiwan’s tech industries are adapting to global shifts in supply chains, manufacturing, policy, and innovation.

Note: AI tools were used both to refine clarity and flow in writing, and as part of the research methodology (semantic analysis). All interpretations and perspectives expressed are entirely my own.
Published on: July 15th, 2025 | Categories: Taiwan Tech and Market Shifts