Top 40 Gbit per second network for lightning-quick file transfers
We now have the updated 802.3bz standard enabling 2.5 Gbps over standard Cat5e cables and up to 5 Gbps over regular Cat6, at a significantly lower overall price than 10 Gbps solutions. That would let you pull a 30 GB movie across in under a minute, roughly 48 seconds if the transfer actually hits the 5 Gbps line rate (~625 MB/s). It's still an emerging standard, though; as far as I know, no new cards or switches are available yet.
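As a quick sanity check on that estimate, here's the arithmetic in a small Python sketch (it assumes the link runs at full line rate and ignores protocol and disk overhead):

```python
# Rough transfer-time math for the link speeds discussed in this thread.
# Illustrative only: real throughput depends on protocol overhead and storage speed.

def transfer_seconds(file_gb: float, link_gbps: float) -> float:
    """Seconds to move file_gb gigabytes over a link_gbps link at full line rate."""
    bytes_total = file_gb * 1e9           # decimal gigabytes
    bytes_per_sec = link_gbps * 1e9 / 8   # bits per second -> bytes per second
    return bytes_total / bytes_per_sec

for gbps in (1, 2.5, 5, 10, 40):
    print(f"{gbps:>4} Gbps: 30 GB in {transfer_seconds(30, gbps):6.1f} s")
```

At 5 Gbps that works out to the 48 seconds mentioned above; at 40 Gbps the same 30 GB takes about 6 seconds.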
Anyone rushing to buy this seems out of touch unless they're planning a permanent RAM disk for their transfers, which is unrealistic with current hardware: fitting enough RAM is impractical, and otherwise you'd need several NVMe SSDs just to come close to saturating the link with sequential transfers. I work with plenty of this equipment in the DC; Infiniband works for specific scenarios like video transfers, but sustained performance gets expensive. Switching and routing over Infiniband isn't cheap, though a Linux/BSD host can terminate the connection and cut down on router costs if you're comfortable with that. I'd recommend 10GbE instead; it gives you more flexibility later without the steep Infiniband upgrade costs. Something obvious is being glossed over in the details shared here. Also, a 10GbE cross-connect would be significantly cheaper with second-hand parts like a Mellanox ConnectX-3 and the right cable.
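To put some rough numbers on the storage side, here's a back-of-the-envelope sketch; the per-device throughput figures are my own assumptions for illustration, not measurements from this thread:

```python
# How much sustained storage throughput does a 40 Gbit/s link actually demand,
# and how many devices would it take to feed it? Per-device figures are assumptions.
import math

LINK_GBPS = 40
link_bytes_per_sec = LINK_GBPS * 1e9 / 8   # ~5.0 GB/s

devices = {
    "SATA SSD (~0.55 GB/s seq.)": 0.55e9,
    "PCIe 3.0 NVMe (~3 GB/s seq.)": 3.0e9,
    "RAM disk (~10+ GB/s)": 10.0e9,
}

print(f"A 40 Gbit/s link needs ~{link_bytes_per_sec / 1e9:.1f} GB/s of sustained throughput")
for name, bps in devices.items():
    print(f"  {name}: ~{math.ceil(link_bytes_per_sec / bps)} in parallel to keep the link full")
```

That's the point about the RAM disk: a couple of fast NVMe drives or a RAM disk on both ends, or the link sits mostly idle.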
This post turned into a video – what an honor! I didn't buy the adapter originally linked; instead I quickly found another listing so people outside the US could locate it easily. I bought two units: http://www.ebay.at/itm/Mellanox-ConnectX...1765696568 (ConnectX-2 MHQH19B-XTR).

I installed the WinOF drivers and launched opensm.exe from the installer folder on one machine. No IP setup was required; Windows 10 automatically recognized the connection after about two minutes.

Initially I only got 10 Gbit. That turned out to be a cabling problem: passive copper QSFP+ only supports around 10 Gbit, so I looked for QSFP+ fiber and found http://www.ebay.com/itm/Finisar-FCBN414Q...SwZJBX-iGF. Even then I was only seeing about 16 Gbit to a RAM disk, which was still too slow for me. Installing WinOF (the driver package from Mellanox) exposed a special option in Device Manager to boost single-port speed; that was the only way to get past it.

Single file transfers never exceeded 10 Gbit, but several copies at once could saturate the link, yielding about 3.2 GByte/s for reads/writes. To test the full potential I used LANBench or simultaneous file transfers; http://www.zachsaw.com/?pg=lanbench_tcp_..._benchmark helped me reach the PCIe 2.0 x8 maximum of 24 real Gbits.

Conclusion: still under 100 dollars, read/write speeds of 3.2 GByte/s, and the cables cost more than the adapters. You don't need Windows Server at all; the Mellanox cards work with a fresh Windows 10 installation even without extra drivers.
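For anyone curious about the single-stream vs. multi-stream effect without installing LANBench, here's a rough Python sketch of the same idea (the address, port, and stream count are placeholders I made up, and a purpose-built tool like LANBench will push the hardware much harder than Python will):

```python
# Minimal multi-stream TCP throughput test: one stream rarely saturates a fast
# link, so open several parallel connections and sum the throughput.
import socket, sys, threading, time

HOST, PORT = "192.168.1.2", 5001   # placeholder: address of the receiving node
STREAMS, SECONDS = 8, 10
CHUNK = bytes(1 << 20)             # 1 MiB of zeros per send

def drain(conn: socket.socket) -> None:
    with conn:
        while conn.recv(1 << 20):
            pass

def server() -> None:
    # Run on the receiving node: accept every stream and discard the data.
    with socket.create_server(("", PORT)) as srv:
        while True:
            conn, _ = srv.accept()
            threading.Thread(target=drain, args=(conn,), daemon=True).start()

def client() -> None:
    totals = [0] * STREAMS

    def sender(i: int) -> None:
        with socket.create_connection((HOST, PORT)) as s:
            end = time.time() + SECONDS
            while time.time() < end:
                s.sendall(CHUNK)
                totals[i] += len(CHUNK)

    threads = [threading.Thread(target=sender, args=(i,)) for i in range(STREAMS)]
    for t in threads: t.start()
    for t in threads: t.join()
    print(f"{STREAMS} streams: ~{sum(totals) * 8 / 1e9 / SECONDS:.1f} Gbit/s aggregate")

if __name__ == "__main__":
    server() if "server" in sys.argv else client()
```

Start it with "server" on one node and without arguments on the other; raising STREAMS mirrors what running several simultaneous file copies does.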
We're seeing QSFP+ DAC cables run fine at 40Gb, but those are the official cables made for the gear we're currently using. Most of the QSFP+ equipment on the market today is early-generation kit with some compatibility and support headaches. Also, the Mellanox ConnectX-2 performs noticeably better than the QLogic card originally described.
I'm building a game hosting business with a few partners, and this could cut our server costs significantly by using one unified, high-speed storage system instead of individual drives in every machine. Even if smaller local arrays on the game servers turn out to be enough, this gear could still provide a fast backbone for quick backups and snapshots. Virtual machine clusters would benefit too: running all the VM drives from central storage makes failover and moving servers around much easier.
You wouldn't need 40Gb, and this setup isn't reliable enough to bet your business on. Go with 10Gb; you can run multiple paths for more bandwidth, though you probably won't need it. We handle more than 1,000 VMs over just two 10Gb connections, even with high-end machines.
We run several virtualized environments on a 10Gb backbone supporting over 3,000 VMs, and the traffic never comes close to saturating 10 Gbit/s. Deploying 40Gbit Infiniband isn't cheap outside a setup like this one: an x-over cable between two nodes with second-hand hardware. Once you add Infiniband switches, routers, or custom Infiniband solutions, plus the licensing cost for each switch port, the expenses skyrocket. This is where people get misled, so price it out yourself: take a used Infiniband switch with unlicensed ports and ask the manufacturer about licensing while the hardware is still covered. You'll quickly understand why those "affordable" switches show up on eBay; always check whether the port licenses are still valid.
You can use SFP+ and QSFP+ direct attached copper cables with Ethernet as well; you don't have to use RJ45. There are some really good, cheap switches on eBay that have 2/4 SFP+ ports and 24/48 1Gb ports. That way you can have 2/4 devices connecting at 10Gb and the rest at 1Gb, but collectively the 1Gb devices can still make use of the 10Gb if they're accessing a NAS/server on the 10Gb port. The downside to RJ45 10Gb is that it has higher latency than SFP+/QSFP+ DAC and uses more power. This is a 10Gb Direct Attached Copper (DAC) Ethernet cable.
I pointed out details the original video left out so people don't spend money unnecessarily. Feel free to go look into port licensing and warranty agreements yourself. My comments are aimed at the people who hit eBay after watching a video like this and buy hardware they're not familiar with. I suggested 10GbE as the more cost-effective choice because it keeps the implementation cost down and you don't need a converter when you stay on standard 10GbE. If you really want a direct 40Gbit x-over Ethernet link, go ahead, but be aware that adding Infiniband switching will get expensive unless you find a well-priced used setup with valid licensing and a router for the conversion. I use Infiniband between EMC XtremIO SANs, where performance matters more than ordinary traffic handling; it isn't meant for regular routing, although we've experimented with Linux/BSD routers to route Infiniband to Ethernet affordably. Either way, be prepared to spend if you buy used Infiniband gear on eBay.