High-speed network connections reaching 100 Gbps.
This can easily be done with Linux using Intel's DPDK, which provides an extremely fast user-space networking stack.
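For a sense of what the DPDK programming model looks like, here is a minimal receive-loop sketch built on DPDK's standard Ethernet device API; the port number, queue depth, and pool sizing are illustrative assumptions, not tuned values.

```c
/* Minimal DPDK poll-mode receive loop (sketch, not a tuned application).
 * Build against an installed DPDK, e.g. cc $(pkg-config --cflags --libs libdpdk).
 * Assumes one NIC bound to DPDK as port 0, one RX queue, no TX path. */
#include <stdint.h>
#include <stdlib.h>
#include <rte_eal.h>
#include <rte_debug.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define RX_RING_SIZE 1024
#define NUM_MBUFS    8191
#define MBUF_CACHE   250
#define BURST_SIZE   32

int main(int argc, char **argv)
{
    /* Initialize the EAL: hugepages, PCI probing, worker cores. */
    if (rte_eal_init(argc, argv) < 0)
        rte_exit(EXIT_FAILURE, "EAL init failed\n");

    uint16_t port = 0; /* assumption: first DPDK-bound port */

    /* Packet-buffer pool backing the RX queue. */
    struct rte_mempool *pool = rte_pktmbuf_pool_create("MBUF_POOL",
        NUM_MBUFS, MBUF_CACHE, 0, RTE_MBUF_DEFAULT_BUF_SIZE,
        rte_socket_id());
    if (pool == NULL)
        rte_exit(EXIT_FAILURE, "mbuf pool creation failed\n");

    /* One RX queue, zero TX queues, default device configuration. */
    struct rte_eth_conf port_conf = {0};
    if (rte_eth_dev_configure(port, 1, 0, &port_conf) < 0 ||
        rte_eth_rx_queue_setup(port, 0, RX_RING_SIZE,
                               rte_eth_dev_socket_id(port), NULL, pool) < 0 ||
        rte_eth_dev_start(port) < 0)
        rte_exit(EXIT_FAILURE, "port setup failed\n");

    rte_eth_promiscuous_enable(port);

    /* Poll-mode receive: no interrupts, just busy-polling in bursts. */
    uint64_t pkts = 0;
    for (;;) {
        struct rte_mbuf *bufs[BURST_SIZE];
        uint16_t nb = rte_eth_rx_burst(port, 0, bufs, BURST_SIZE);
        for (uint16_t i = 0; i < nb; i++)
            rte_pktmbuf_free(bufs[i]); /* count and drop */
        pkts += nb;
    }
    return 0;
}
```

The point of the busy-poll loop is that packets are pulled from the NIC in userspace without interrupts or kernel copies, which is what lets a single core process millions of packets per second.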
I wouldn't count expense as the limiting factor. Off-the-shelf components exist for 100 GbE, and 40-gigabit equipment like Arista switches can often be found used for under $1,000. The real limitations tend to be storage or routing capacity, depending on your needs.
I meant the cost of deploying an open-source networking solution; the earlier discussion was about hardware rather than software.
Beyond typical data centers and ordinary ISPs, there are undersea fiber networks whose capacity is measured in terabits per second. That figure is the aggregate across many wavelengths and fiber pairs, so no individual link carries it, but it's still quite remarkable.
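To see why those aggregate figures land in the terabit range, here is a back-of-the-envelope calculation; the fiber-pair count, wavelengths per pair, and per-wavelength rate are illustrative assumptions, not the specs of any particular cable.

```c
/* Back-of-the-envelope aggregate capacity of a submarine cable.
 * All inputs are illustrative assumptions, not real cable specs. */
#include <stdio.h>

int main(void)
{
    double fiber_pairs          = 8;    /* assumed pairs in the cable */
    double wavelengths_per_pair = 100;  /* assumed DWDM channels per pair */
    double gbps_per_wavelength  = 150;  /* assumed per-channel line rate */

    double total_gbps = fiber_pairs * wavelengths_per_pair * gbps_per_wavelength;
    printf("Aggregate capacity: %.0f Tbps\n", total_gbps / 1000.0);
    /* 8 * 100 * 150 Gbps = 120,000 Gbps = 120 Tbps across the whole cable,
     * even though no single wavelength carries more than 150 Gbps. */
    return 0;
}
```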
They have serious infrastructure. A fiber TV distributor takes in about 1.45 Gbps per 1080p channel, and a 4K signal is roughly four times that, up to 5.8 Gbps, from the television station to their servers. They then compress the feeds and distribute them over landlines and satellites. With over 230 channels (according to someone who actually works there), that adds up to substantial bandwidth on their network. It gets worse during live broadcasts, because everything has to be processed in real time, which requires equipment capable of handling extremely high throughput.
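Working from the rates quoted above (1.45 Gbps per 1080p channel, roughly four times that for 4K, over 230 channels), a quick calculation shows the scale of the ingest side; the all-1080p and all-4K channel mixes are assumptions used only to bracket the range.

```c
/* Rough ingest bandwidth for a fiber TV distributor, using the per-channel
 * rates quoted above. The channel-mix scenarios are assumptions. */
#include <stdio.h>

int main(void)
{
    double gbps_per_1080p = 1.45;               /* per-channel rate quoted */
    double gbps_per_4k    = 4 * gbps_per_1080p; /* ~5.8 Gbps, as quoted    */
    int channels          = 230;                /* "over 230 channels"     */

    printf("All 1080p: %.0f Gbps\n", channels * gbps_per_1080p); /* ~334 Gbps  */
    printf("All 4K:    %.0f Gbps\n", channels * gbps_per_4k);    /* ~1334 Gbps */
    return 0;
}
```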
We're deploying a 100 Gbps Ethernet ring with Nokia... it's incredibly fast, but the expenses are steep. Our 100 Gbps CFP optics run about $14k each for a 10 km span, and the 40 km ones are roughly double that. The 10 km CFP modules are what we use for the shorter links; you'll notice they're significantly larger than SFP+ optics for 10 Gbps. The interface documentation lists the 100 Gbps spec. We're using the ring as the core for all our services. Internet traffic will consume most of the bandwidth, but it won't reach saturation any time soon; even our data center runs mostly at 10 Gbps.