
Large data centers interconnect bottlenecks
Author(s) -
Ali Ghiasi
Publication year - 2015
Publication title -
Optics Express
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.394
H-Index - 271
ISSN - 1094-4087
DOI - 10.1364/oe.23.002085
Subject(s) - application specific integrated circuit, ethernet, interconnection, computer science, photonics, silicon photonics, optical switch, embedded system, computer hardware, electronic engineering, engineering, telecommunications, physics, optics
Large data center interconnect bottlenecks are dominated by the switch I/O BW and by the front panel BW imposed by pluggable modules. To overcome the front panel BW and switch ASIC BW limitations, one approach is either to move the optics onto the mid-plane or to integrate the optics into the switch ASIC package. Over the last 4 years, VCSEL-based optical engines have been integrated into the packages of large-scale HPC routers, moderate-size Ethernet switches, and even FPGAs. Competing solutions based on silicon photonics (SiP) have also been proposed for integration into HPC and Ethernet switch packages, with a better integration path through the use of TSV (through-silicon via) stacked dies. Before embarking on the complex integration of either VCSEL- or SiP-based optical engines into a large ASIC package that operates at high temperature, where achieving the required reliability is not trivial, one should ask what the technical or economic advantage is. High-density Ethernet switches addressing data centers currently in development are based on 25G NRZ signaling and QSFP28 optical modules, and can support up to 3.6 Tb/s of front panel bandwidth.
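As a rough check on the front panel figure quoted above, the short Python sketch below computes aggregate front panel bandwidth from the per-lane rate, lanes per module, and module count. The 36-port QSFP28 faceplate used here is an assumed illustrative value chosen to reproduce the 3.6 Tb/s figure (36 ports x 4 lanes x 25 Gb/s); it is not taken from the record above.

    # Sketch: aggregate front panel bandwidth from per-lane rate, lane count,
    # and number of pluggable modules. The 36-port QSFP28 faceplate is an
    # assumed example, not a figure stated in the paper.

    def front_panel_bw_tbps(lane_rate_gbps: float, lanes_per_module: int, modules: int) -> float:
        """Aggregate front panel bandwidth in Tb/s."""
        return lane_rate_gbps * lanes_per_module * modules / 1000.0

    if __name__ == "__main__":
        # QSFP28 with 25G NRZ: 4 lanes x 25 Gb/s = 100 Gb/s per module.
        bw = front_panel_bw_tbps(lane_rate_gbps=25.0, lanes_per_module=4, modules=36)
        print(f"Assumed 36 x QSFP28 front panel: {bw:.1f} Tb/s")  # prints 3.6 Tb/s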