Broadcom used the OCP stage to show two sides of the same networking push: bigger switch silicon and faster NICs tuned for AI fabrics. ServeTheHome documented both the massive Tomahawk 6 102.4T switch silicon and the Thor Ultra 800GbE NICs with on-floor hardware, signaling that higher link rates and denser faceplates are moving from slides to shipping gear (STH Tomahawk 6 photos and analysis; STH on Thor Ultra 800GbE NICs). For operators, the near-term implication is simple: rack design, congestion control, and perf/W math are changing.
Why 800GbE now for AI fabrics
Large training jobs and scaled inference have exposed the limits of 400GbE leaf–spine designs under incast and microburst traffic. By pairing a 102.4 Tb/s switch with 800GbE endpoints, operators can lift east–west bandwidth and reduce per-hop queuing. Broadcom’s Tomahawk 6 series is positioned to anchor this shift with a single-chip 102.4T radix and port configurations that compress stages in AI pods (see the Broadcom Tomahawk 6 series overview). STH’s reporting underscores that these are not paper launches: the silicon and boards were on the OCP floor, with images that illustrate package size, thermal budget, and faceplate constraints (STH Tomahawk 6 photos and analysis).
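To make the incast pressure concrete, here is a minimal back-of-envelope sketch; the sender count and burst size are assumptions chosen for illustration, not figures from Broadcom or STH:

```python
# Back-of-envelope incast model: N senders each burst B bytes toward one
# receiver behind a single egress link. All figures are illustrative assumptions.

def incast_drain_time_us(n_senders: int, burst_bytes: int, link_gbps: float) -> float:
    """Microseconds to drain a synchronized fan-in burst through one egress link."""
    total_bits = n_senders * burst_bytes * 8
    return total_bits / (link_gbps * 1e9) * 1e6

N_SENDERS, BURST_BYTES = 64, 256 * 1024   # 64 ranks, 256 KiB each (assumed)
for gbps in (400, 800):
    t = incast_drain_time_us(N_SENDERS, BURST_BYTES, gbps)
    print(f"{gbps}G egress: ~{t:.0f} us to drain the burst")
# Doubling the egress rate halves how long queues, and therefore tail latency, build.
```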
Tomahawk 6 102.4T and Thor Ultra 800GbE: key advances
Tomahawk 6 consolidates a full 102.4 Tb/s of fabric capacity and exposes high-count SerDes across two speed families, enabling system builders to target either maximum port density or maximum per-port rate from the same core architecture. Practical faceplate mappings include 128×800GbE, 256×400GbE, or 512×200GbE on a single device (see the Broadcom Tomahawk 6 product page and shipping announcement). The physical scale of the package and heatspreader, plus the challenge of routing this many high-speed lanes within standard chassis envelopes, is evident in STH’s photography.
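The faceplate mappings follow directly from the aggregate capacity; a quick, purely illustrative sanity check:

```python
# Sanity-check the published faceplate mappings against 102.4 Tb/s aggregate capacity.
AGGREGATE_TBPS = 102.4

for port_gbps in (800, 400, 200):
    ports = int(AGGREGATE_TBPS * 1000 // port_gbps)
    print(f"{ports} x {port_gbps}GbE")
# -> 128 x 800GbE, 256 x 400GbE, 512 x 200GbE, matching the product page.
```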
Thor Ultra moves the endpoint to a single 800GbE link per NIC with a PCIe Gen6 x16 host interface, shipping in standard PCIe add-in and OCP NIC 3.0 form factors. It is pitched with Ultra Ethernet Consortium alignment and a programmable congestion pipeline that includes selective retransmission and Congestion Signaling (CSIG) intended to control tail latency under AI traffic patterns (STH on Thor Ultra 800GbE NICs). The design goal is straightforward: more gradient traffic per GPU server while keeping queueing predictable when bursts hit.
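A rough headroom check explains the Gen6 requirement. The figures below use raw lane rates and ignore PCIe protocol overhead, so treat them as upper bounds rather than delivered throughput:

```python
# Rough headroom check: PCIe Gen6 x16 host bandwidth vs. one 800GbE port.
# Raw Gen6 lane rate is 64 GT/s (PAM4); protocol overhead is ignored here.

PCIE_GEN6_GTPS_PER_LANE = 64     # ~64 Gb/s raw per lane, per direction
LANES = 16
NIC_LINE_RATE_GBPS = 800

host_gbps = PCIE_GEN6_GTPS_PER_LANE * LANES   # ~1024 Gb/s raw per direction
print(f"host interface ~{host_gbps} Gb/s raw vs {NIC_LINE_RATE_GBPS} Gb/s line rate")
print(f"raw headroom ~{host_gbps / NIC_LINE_RATE_GBPS:.2f}x")
# A Gen5 x16 slot (~512 Gb/s raw per direction) cannot feed a single 800G port
# at line rate, which is why Gen6 hosts matter for these NICs.
```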
Co-packaged optics (Davisson) vs pluggables: trade-offs
Tomahawk 6 also arrives in a co-packaged optics (CPO) variant—Davisson—that places optical engines at the package boundary to shorten electrical reach and cut front‑panel power. STH reports that Davisson-class systems are now shipping, marking a notable inflection because it shifts part of the thermal and serviceability model from pluggables to in-chassis optical subsystems (STH on Davisson shipping; Broadcom 102.4T shipping release). CPO frees faceplate real estate and can reduce losses across the panel; pluggables retain field-replaceability and familiar sparing practices. In dense AI rows approaching power and panel limits, CPO becomes attractive; where operational flexibility dominates, pluggables will continue to make sense.
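The power argument is easier to see with numbers, even crude ones. The per-port wattages below are assumptions chosen only to illustrate the shape of the trade-off; they are not Broadcom or module-vendor figures:

```python
# Illustrative front-panel optics power comparison for one 128x800G leaf.
# Per-port watt figures are assumptions for the sake of the arithmetic.

PORTS = 128
PLUGGABLE_W_PER_800G = 16.0   # assumed 800G pluggable module power
CPO_W_PER_800G = 8.0          # assumed per-port optical-engine power with CPO

pluggable_total = PORTS * PLUGGABLE_W_PER_800G
cpo_total = PORTS * CPO_W_PER_800G
print(f"pluggables: ~{pluggable_total:.0f} W, CPO: ~{cpo_total:.0f} W "
      f"(~{pluggable_total - cpo_total:.0f} W saved per switch)")
# Even with generous error bars, the delta compounds across rows of leaves,
# which is where the CPO case gets made.
```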
Designing AI pods: 128×800G leaves, fewer hops
At rack and pod scale, the two products connect tightly. A 102.4T leaf can serve 128 GPU hosts at 800GbE, or double the host count at 400GbE, enabling fatter pods with fewer stages. Fewer hops reduce bisection congestion and optics count per unit of delivered bandwidth—two contributors to better perf/W and shorter time-to-train. On the server side, a single-port 800GbE NIC simplifies PCB real estate and trace routing versus dual-port 400G layouts. With a programmable congestion pipeline on the NIC and UEC-aligned features in the switch, operators gain levers to tune for incast-heavy training and bursty inference mixes without bespoke firmware forks.
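A small sketch makes the optics-per-bandwidth point concrete. It assumes every leaf port faces a host (no uplinks) purely to compare endpoint speeds; real designs reserve ports for spine uplinks, as discussed next:

```python
# Compare host counts and switch-side optics per delivered Tb/s for one
# 102.4T leaf at two endpoint speeds. Illustrative only: no uplinks modeled.

AGGREGATE_GBPS = 102_400

def pod(endpoint_gbps: int) -> None:
    hosts = AGGREGATE_GBPS // endpoint_gbps
    modules = hosts  # one switch-side optical module per host-facing port
    per_tbps = modules / (AGGREGATE_GBPS / 1000)
    print(f"{endpoint_gbps}G endpoints: {hosts} hosts, {modules} switch-side optics, "
          f"{per_tbps:.2f} modules per delivered Tb/s")

pod(800)
pod(400)
# Fewer modules per delivered Tb/s is the perf/W argument in miniature.
```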
As pods grow, failure domains and path diversity must be revisited. 128×800G leaves change spine design, cabling counts, and maintenance windows. The practical questions become: how many spines per pod to contain blast radius, what oversubscription is acceptable under training load, and where to place ECMP boundaries so policy remains stable during incremental expansions.
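One way to frame the oversubscription question is as a simple port-split calculation per leaf; the numbers below are illustrative, not a recommended design:

```python
# Split a leaf's ports between host-facing downlinks and spine uplinks
# for a target host:uplink oversubscription ratio. Illustrative only.

def split_ports(total_ports: int, oversub: float) -> tuple[int, int]:
    """Return (host_ports, uplink_ports) for an oversubscription target."""
    uplinks = round(total_ports / (1 + oversub))
    hosts = total_ports - uplinks
    return hosts, uplinks

for oversub in (1.0, 2.0, 3.0):
    hosts, uplinks = split_ports(128, oversub)
    print(f"{oversub}:1 target -> {hosts} host ports, {uplinks} uplinks on a 128-port leaf")
# How those uplinks are spread across spines sets ECMP fan-out and blast radius.
```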
Ethernet-first strategy: UEC alignment and ecosystem
Broadcom’s posture is an Ethernet-first response to AI scaling pressure. Thor Ultra’s UEC alignment and CSIG support, combined with Tomahawk 6’s ability to span both pluggable and co-packaged optics designs, signal intent to meet AI fabrics on their terms: predictable latency under load and power trimmed at the optics boundary (STH on Thor Ultra 800GbE NICs). This also puts pressure on 800G optics vendors to deliver reliable modules in volume and nudges OEMs to offer both traditional leaf–spine builds and CPO-centric chassis.
For adopters, the standards posture matters. Congestion features that work across Tomahawk 5/6-class switches and UEC-compliant NICs de-risk mixed environments. Operators can advance policy using open guardrails rather than bespoke stacks, accelerating time-to-steady-state on new racks. That becomes a competitive factor as teams weigh vertically integrated fabrics against an Ethernet path that stays close to the mainstream ecosystem.
Deployment details to get right
The technology arc is clear, but success hinges on three areas that often decide whether early racks meet SLOs:
Optics and power. Pairing 800G NICs with matching optics adds meaningful watts per server, and 102.4T switches push faceplate and mid-plane thermals. Budget power and airflow with margin for worst-case bursts, and verify that sled and chassis airflow patterns align with module placement.
Topology and oversubscription. 128×800G leaves change cabling plants and spine counts. Revisit oversubscription ratios and path diversity so growth does not expand failure domains or introduce hidden hotspots.
Congestion policy. Treat pipeline tuning like code. Version policies, test against representative training and inference traces, and audit changes cluster-by-cluster to manage tail latency under incast.
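A minimal sketch of what "policy as code" can look like; the knob names below are placeholders for illustration, not actual NIC or switch parameters:

```python
# Keep congestion settings as a versioned, reviewable artifact and hash it so
# audits can confirm exactly what a cluster is running. Field names are illustrative.

from dataclasses import dataclass, asdict
import hashlib
import json

@dataclass(frozen=True)
class CongestionPolicy:
    version: str
    ecn_marking_threshold_kb: int    # placeholder knob names
    selective_retransmit: bool
    csig_enabled: bool
    pfc_watchdog_ms: int

POLICY = CongestionPolicy(
    version="2025.10-rc2",
    ecn_marking_threshold_kb=512,
    selective_retransmit=True,
    csig_enabled=True,
    pfc_watchdog_ms=200,
)

blob = json.dumps(asdict(POLICY), sort_keys=True).encode()
print(POLICY.version, hashlib.sha256(blob).hexdigest()[:12])
```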
Supply signals: 800G optics, PCIe Gen6 servers, and CPO
Two forces will set the slope of adoption. First, optics availability and price still gate 800G scale; in bandwidth-heavy racks, optics dominate the BOM. Co-packaged optics shifts service and sparing models, an upside in dense rows with panel constraints but a planning change nonetheless. Second, server readiness matters: PCIe Gen6 hosts are required to expose Thor Ultra’s full headroom, and OCP NIC 3.0 carriers must be tuned for NIC and optics thermals. Broadcom’s messaging that Tomahawk 6 silicon and Davisson systems are shipping suggests the switch side is not the long pole; the cadence will be paced by optics supply and server refresh cycles (Broadcom 102.4T shipping release; STH on Davisson shipping).
Outlook across the next two refresh cycles
The near arc is straightforward: GPU-dense AI pods begin adopting 800G NICs where job sizes justify the power and optics cost, while 400G remains the default elsewhere. As PCIe Gen6 servers become common and as UEC-aligned congestion features stabilize across switch and NIC vendors, 800G will graduate from pilot racks to standard AI sleds in mainstream clusters. The 102.4T switch becomes the planning unit, with many operators choosing fewer, fatter pods to limit hop counts and simplify cabling.
As second-wave hardware and optics SKUs arrive with better perf/W and improved thermal behavior, expect a more decisive pivot. Davisson-class CPO will see uptake in high-density rows where faceplate power and panel real estate are already at the edge, while pluggables will persist where serviceability and modular spares dominate. Through the following budget cycle, procurement teams are likely to treat 800G as the default for new AI racks, with 400G reserved for CPU-heavy or storage tiers.
What to do next for AI rack refreshes
Operators planning AI capacity should move now on a few concrete steps:
- Align server refreshes to bring PCIe Gen6 and OCP NIC 3.0 sleds into GPU racks so 800G endpoints can run at full headroom (STH on Thor Ultra 800GbE NICs).
- Redesign leaf–spine layers around 102.4T leaves and target 128×800G or 256×400G faceplates to cut hops and optics per delivered bandwidth (see Broadcom Tomahawk 6).
- Pre-budget optics, power, and thermals with a clear CPO vs pluggable policy per row, and codify congestion control settings with staged rollouts and audits (STH on Davisson shipping).

