The SemiAnalysis AI Networking Model is built to decode the increasingly critical networking layer of AI and cloud infrastructure. This model completes our suite by providing granular visibility into switches, transceivers, cables, and AECs/DACs across the scale-up, scale-out backend, front-end, and out-of-band networks for AI, offering a deeper understanding of scaling limits, design architectures, and vendor dynamics.
The model includes the following topics:
Detailed cluster configuration analysis for each hyperscaler:
- Over 80 configuration panels detailing how each hyperscaler (Microsoft, Google, Meta, Amazon, Oracle, X.AI) and the Neoclouds build their AI cluster networks, covering scale-up, scale-out, front-end, and out-of-band networks for each accelerator type.
- Provides all configurations in use across different accelerator SKUs for each hyperscaler as well as for the Neoclouds.
- For each configuration: the SKU and quantity of switches, optical modules, fibers, AECs, ACCs, and DACs utilized.
- For each configuration: the attachment ratios of all networking devices (switches, optical modules, fibers, AECs, ACCs, DACs, etc.) to accelerators (GPUs/ASICs).
- For each configuration: the pricing and power consumption of all the networking devices involved, including switches, optical modules, fibers, AECs, ACCs, DACs, etc.
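The bookkeeping behind these configuration panels can be sketched in a few lines: attachment ratios multiplied by the accelerator count give device units, and units multiplied by prices give spend. The ratios, prices, and cluster size below are hypothetical placeholders for illustration, not figures from the model:

```python
# Illustrative sketch of attachment-ratio math. All numbers here are
# hypothetical placeholders, not values from the SemiAnalysis model.

GPUS = 1024  # hypothetical cluster size

# hypothetical devices-per-accelerator attachment ratios
ratios = {"leaf_switch": 1 / 32, "transceiver_800g": 2.5, "dac": 1.0}
# hypothetical unit prices in USD
prices = {"leaf_switch": 25_000, "transceiver_800g": 800, "dac": 120}

# units = ratio * accelerators; spend = units * unit price
units = {dev: GPUS * r for dev, r in ratios.items()}
spend = {dev: units[dev] * prices[dev] for dev in units}

for dev in ratios:
    print(f"{dev}: {units[dev]:.0f} units, ${spend[dev]:,.0f}")
```

The model performs this roll-up per configuration and per SKU, which is what lets volumes and spend be traced back to the accelerator deployments they originate from.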
Bottom-up build-up of AI data center networking structures for each hyperscaler (Microsoft, Google, Meta, Amazon, Oracle, X.AI) as well as for the Neoclouds:
- Networking configuration breakdown for each accelerator they deploy (H100/H200/B100/B200/GB200 NVL36/GB200 NVL72/B300/GB300 NVL/VR200/MI300/MI325/MI350/custom ASICs…).
- Summary of by-customer as well as market-wide units and spend for switches, transceivers, and cables. Includes forecasts on market-wide AI volumes for 200G/400G/800G/1.6T transceivers, and forecasts on market-wide unit shipments of Broadcom, Arista, and Nvidia switches.
- Detailed breakdown by SKU of switches, optical modules, and AECs/ACCs/DACs procured by each hyperscaler for their AI data centers, including detailed information on quantity, pricing, and total spend per SKU; users will also be able to trace these volumes and spending back to the specific accelerator and networking configurations they originate from.
- Switch vendor breakdown (wallet share) for each switch SKU the hyperscaler procures; users will be able to see how each hyperscaler allocates its networking dollars among different switch vendors for each type of switch it procures.
- Switch vendors covered are: Nvidia, Arista, Celestica, Cisco, Accton, Juniper, Nexthop, Huawei, Broadcom, Marvell, and others.
- Optical module vendors covered are: Nvidia, Zhongji Innolight, Coherent, Eoptolink, Fabrinet, TFC Optical, Lumentum/Cloudlight, AAOI, Accelink, Source Photonics and others.
- AEC/ACC/DAC/fiber vendors covered are: Nvidia, Credo, Astera Labs, Amphenol, TE Connectivity, Molex, Luxshare, Broadex and others.
Top-down analysis of total market conditions and vendor market shares for optical modules, switches, and AECs:
- Total market volume for optical modules by speed (1.6T / 800G / 400G), and market share breakdown by major vendors (Zhongji Innolight, Coherent, Eoptolink, Fabrinet, TFC Optical, Lumentum/Cloudlight, AAOI, Accelink, Source Photonics, and others), quarterly from 2023 to 1Q 2025.
- Optical module total market volume and vendor market share forecasts by speed (1.6T / 800G / 400G) for 2025 and 2026.
- Detailed breakdown of vendor wallet share at each hyperscaler for optical modules (1.6T/800G/400G), switches (BlackBox/WhiteBox), and AECs (800G/400G).
Master pricing table of key networking devices and components used in AI data centers, including pricing for hyperscalers, Neocloud Giants, and Emerging Neoclouds, as well as power budgets for:
- Switches
- Optical modules/transceivers
- Co-packaged Optics (CPO) components
- AECs/ACCs/DACs
- Fibers
- NICs
- Cooling components
- Connectors
- etc.


