We are still quite a distance away from achieving commercial Terabit LAN speeds.
Impressive performance. It's 10 times faster than 100 Gb switches, and it's hard to imagine a use case that could actually saturate 125 gigabytes per second (1 terabit per second).
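For reference, the conversion behind that figure is just bits to bytes. A quick Python sketch, using the decimal prefixes network vendors use:

```python
# Convert a line rate in terabits per second to gigabytes per second.
# Network rates use decimal prefixes: 1 Tbps = 1000 Gbps = 10**12 bits/s.
def tbps_to_gb_per_s(tbps: float) -> float:
    bits_per_second = tbps * 10**12
    bytes_per_second = bits_per_second / 8   # 8 bits per byte
    return bytes_per_second / 10**9          # bytes -> gigabytes

print(tbps_to_gb_per_s(1.0))   # 125.0 GB/s for a 1 Tbps link
print(tbps_to_gb_per_s(0.4))   # 50.0 GB/s for a 400 Gbps link
```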
It looks like there will be a wait, but 400 Gbps is already real: https://en.wikipedia.org/wiki/Terabit_Ethernet. Large server farms have to handle traffic from many high-speed applications, and the Wikipedia article notes that companies like Google and Facebook are interested. For everyday users, it probably won't matter much.
Usually it's the processing power that limits performance, not the cables.
Processing speed, cables, connectors, and protocols all matter a lot. As @Mel0n mentioned earlier, a 1 Tbps link can carry up to 125 GB/s. A PCIe 4.0 NVMe SSD delivers about 7 GB/s, while DDR5 RAM reaches around 51 GB/s. The only components surpassing that are GPUs. What everyday scenario would need that much network bandwidth?
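As a back-of-the-envelope check of that point, here is a rough Python sketch; the device figures are the approximate ones quoted above, not measurements:

```python
# Roughly how many of each device it would take to saturate a 1 Tbps link.
LINK_GB_PER_S = 125.0            # 1 Tbps expressed in GB/s

devices_gb_per_s = {
    "PCIe 4.0 NVMe SSD": 7.0,    # approximate sequential throughput
    "DDR5 DIMM":        51.0,    # approximate peak bandwidth per module
}

for name, rate in devices_gb_per_s.items():
    count = LINK_GB_PER_S / rate
    print(f"{name}: ~{count:.1f} devices needed to fill the link")
# PCIe 4.0 NVMe SSD: ~17.9 devices needed to fill the link
# DDR5 DIMM: ~2.5 devices needed to fill the link
```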
I can handle that now with a 32-port 100G switch and NIC teaming. Your CPU, RAM, and PCIe bus won’t be the issue.
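If you want to sanity-check the teaming math and the host-side bus, here is a rough sketch; the PCIe per-lane figures are the usual published approximations, and the port counts are just examples:

```python
# Aggregate bandwidth from teamed 100G NICs, and whether a PCIe slot keeps up.
PCIE_GB_PER_S_PER_LANE = {3: 0.985, 4: 1.969, 5: 3.938}   # approx. per-lane throughput

def teamed_bandwidth_gbps(ports: int, port_speed_gbps: int = 100) -> int:
    return ports * port_speed_gbps

def pcie_slot_gb_per_s(gen: int, lanes: int = 16) -> float:
    return PCIE_GB_PER_S_PER_LANE[gen] * lanes

print(teamed_bandwidth_gbps(10))   # 1000 Gbps aggregate from 10 teamed 100G ports
print(pcie_slot_gb_per_s(4, 16))   # ~31.5 GB/s, enough for two 100G ports (25 GB/s)
```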
We won't see 1 Tbps connections anytime soon. The newest technology tops out at 400 Gbps, with 800 Gbps arriving soon. After that come 1.6 Tbps and then 3.2 Tbps, unless something shifts dramatically in the next few years.
Single-mode fiber itself is essentially limitless in theory; the real-world constraints are the optics, the SerDes lanes in the switch ASICs, and thermal management. ASICs supporting 8x112Gb SerDes are only just launching. Some designs could reach 32x56Gb SerDes to create a 1.6Tb interface, but practical limits suggest only around 8 such ports per ASIC, which is still modest compared to the 32/36/48 ports most switches offer per unit. Also, simply adding more ASICs to a switch won't double the bandwidth; you'd need roughly six times as many, which drives the cost up significantly.
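To make the SerDes arithmetic concrete, here is a hedged sketch; the effective per-lane rates (roughly 50G for 56G PAM4 lanes, 100G for 112G lanes) and the 256-lane total are illustrative assumptions, not any specific ASIC's datasheet:

```python
# How many ports of a given speed an ASIC's SerDes budget can supply.
def lanes_per_port(port_gbps: int, effective_lane_gbps: int) -> int:
    return port_gbps // effective_lane_gbps

def ports_per_asic(total_lanes: int, port_gbps: int, effective_lane_gbps: int) -> int:
    return total_lanes // lanes_per_port(port_gbps, effective_lane_gbps)

# A 1.6T port built from 56G-class lanes (~50G effective each) needs 32 lanes.
print(lanes_per_port(1600, 50))        # 32
# An assumed 256-lane ASIC would then yield only 8 such ports.
print(ports_per_asic(256, 1600, 50))   # 8
# The same budget with 112G-class lanes (~100G effective) gives 16 lanes per port, 16 ports.
print(ports_per_asic(256, 1600, 100))  # 16
```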