When adding external capacity to a Pure Storage FlashArray via a DirectFlash Shelf (DFS), Implementation Engineers must adhere to strict high-speed networking port assignments to ensure maximum backend throughput. Unlike older Fibre Channel or basic SAS expansion shelves that use dedicated SAS cabling, a modern DFS relies on 50GbE or 100GbE links running NVMe-oF over RDMA over Converged Ethernet (RoCE) to maintain NVMe-level speeds across the external fabric.
According to the official FlashArray//X R4 Port Usage and Definitions hardware matrix, the onboard 100GbE RoCE ports explicitly dedicated to the first loop of a DirectFlash Shelf are physically located on the controllers and identified within the Purity operating system as ETH18 and ETH19.
These specific ports are hardware-optimized to handle the extreme backend IOPS generated by the external DirectFlash Modules communicating back to the primary chassis. Using standard 1GbE/10GbE management or replication ports (such as ETH0/ETH1) or other generic PCIe Ethernet host interfaces (such as ETH10/ETH11) is unsupported for the primary DFS backend loop. Plugging the shelf into the wrong ports will cause cabling validation failures during the hardware_check.py execution, preventing the new capacity from being recognized by the array.
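To illustrate the kind of port-assignment check described above, here is a minimal Python sketch of a cabling validation step. The function name, the link-to-port mapping format, and the expected-port set are illustrative assumptions for this example, not the actual logic of Purity's hardware_check.py.

```python
# Hypothetical sketch of a DFS cabling validation check.
# Assumption: the first DFS loop must land on the dedicated 100GbE
# RoCE ports ETH18/ETH19, per the port-usage matrix cited above.
EXPECTED_DFS_PORTS = {"ETH18", "ETH19"}

def validate_dfs_cabling(detected_links):
    """Check a mapping of shelf link -> detected controller port.

    Returns (ok, errors): ok is True only if every shelf link is
    cabled to one of the dedicated DFS backend ports.
    """
    errors = []
    for link, port in detected_links.items():
        if port not in EXPECTED_DFS_PORTS:
            errors.append(
                f"{link}: cabled to {port}, expected one of "
                f"{sorted(EXPECTED_DFS_PORTS)}"
            )
    return (not errors, errors)

# Example: one shelf link mistakenly cabled to a generic host port.
ok, errors = validate_dfs_cabling(
    {"shelf0.linkA": "ETH18", "shelf0.linkB": "ETH10"}
)
```

In this sketch, a mis-cabled link (here `shelf0.linkB` on ETH10) fails validation, mirroring the behavior described above where the shelf is not recognized until the cabling is corrected.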