High-speed network connections reaching 100 Gbps.

numblegs26
Member (197 posts)
02-08-2025, 10:43 PM #1
I was at the office today and had an intriguing idea. Sure, 10 Gbps is achievable now, but what about 100 Gbps? I’ve seen networking gear online that handles it, though it’s quite costly and relies on fiber optics. Are there storage solutions or protocols designed for such high speeds? Ignoring the cost, how challenging would it be to reach that level?

VIPfighter
Member (62 posts)
02-09-2025, 12:17 AM #2
400 Gbps is on the horizon and should hit the market this year. GPU-driven systems can handle 100 Gbps connections efficiently for intensive calculations and other workloads that benefit from GPU acceleration. I’m currently building a project for a client that requires lossless 100 Gbps links across their GPU clusters. It won’t be cheap, but at today’s pricing it shouldn’t be outrageous either; the exact cost depends on your requirements. For a switch with solid L3 support, expect around $1000 or less per 100 Gbps port, which is quite reasonable. Add more capabilities and the price climbs, but for a fast data center focused on switching and basic L3 features, $1000 or less per port remains a good value.
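
To put rough numbers on that pricing, here is a minimal back-of-the-envelope sketch. The ~$1000-per-port figure comes from the post above; the 32-port chassis size and the $300 transceiver price are assumptions for illustration only.

```python
# Back-of-the-envelope cost for a fully populated 100GbE switch.
# ~$1000/port is the figure quoted above; the 32-port chassis size
# and the $300 QSFP28 transceiver price are assumptions.

ports = 32                 # assumed 32x100GbE switch
cost_per_port = 1_000      # USD per L3-capable port, from the post above
optic_cost = 300           # USD per QSFP28 module (assumed)

switch = ports * cost_per_port
optics = ports * 2 * optic_cost   # one module at each end of every link
total = switch + optics

print(f"switch ${switch:,}, optics ${optics:,}, total ${total:,}")
print(f"about ${total / ports:,.0f} all-in per 100GbE connection")
```

Optics and cabling roughly double the headline per-port number, which is worth remembering when comparing quotes.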

AutumnTechMC
Member (64 posts)
02-14-2025, 07:51 PM #3
Internet usage at these rates might seem excessive right now, yet data centers and specialized operations such as NASA already run at extraordinary speeds. Locally, links like these are mostly about moving huge data volumes between servers.

SarahFina
Member (51 posts)
03-07-2025, 11:41 PM #4
100 Gb is achievable over fiber, but you need enough memory bandwidth to handle it. The hard drive is the main bottleneck. I searched online and found some useful info; this seems fairly new, though I wasn’t sure I’d read it right at first (dyslexia).
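
To make the memory-versus-disk point concrete, here is a minimal sketch comparing a 100 Gbps link with common device throughputs. All per-device figures are ballpark assumptions, not measurements.

```python
import math

# What 100 Gbps means next to common storage and memory throughput.
# All per-device figures below are ballpark assumptions.

link_gb_s = 100 / 8   # 100 Gbps of payload = 12.5 GB/s

devices = {
    "7200 RPM hard drive":  0.2,    # ~200 MB/s sequential (assumed)
    "SATA SSD":             0.55,   # SATA 3 interface ceiling (assumed)
    "NVMe SSD (PCIe 3 x4)": 3.5,    # typical sequential read (assumed)
    "DDR4 memory channel":  25.0,   # ~3200 MT/s x 8 bytes (assumed)
}

for name, gb_s in devices.items():
    needed = math.ceil(link_gb_s / gb_s)
    print(f"{name:22s} {gb_s:5.2f} GB/s -> {needed:2d} needed to keep up")
```

On these assumptions a single memory channel keeps up easily, while dozens of hard drives would be required, which is exactly why the drive is the worry.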

lTalonzl
Member (147 posts)
03-20-2025, 08:40 PM #5
Datacenter links frequently use bundles of 100GbE as well. ISP backbones rely heavily on 100GbE, and I’ve seen several firms running it at their main headend facilities and remote sites too. They don’t connect every branch that way, but they maintain a few 100Gb or 40Gb links to the ISP through their aggregation infrastructure.

BloopStrike89
Junior Member (8 posts)
03-22-2025, 08:28 PM #6
10 Gb/s is widely used in enterprise environments.

JellyPlaysMC
Member (68 posts)
03-23-2025, 12:22 AM #7
It’s still theoretical for me right now. I’m planning to use it for moving big files. What matters most is finding a storage medium that can read and write fast enough. I understand it’s possible with a few cards, fiber optic cables, and a smart switch.
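
Before buying cards and optics, it is worth measuring what a raw TCP stream between two hosts can actually sustain; a minimal sketch follows, where the port number, chunk size, and transfer size are arbitrary placeholders.

```python
import socket
import sys
import time

PORT = 5201              # arbitrary choice
CHUNK = 1 << 20          # 1 MiB per send/recv call
TOTAL = 10 * (1 << 30)   # move 10 GiB, enough to smooth out ramp-up

def server():
    # Run first on the receiving host: python thisfile.py server
    with socket.create_server(("", PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            received, start = 0, time.perf_counter()
            while received < TOTAL:
                data = conn.recv(CHUNK)
                if not data:
                    break
                received += len(data)
            secs = time.perf_counter() - start
            print(f"received {received * 8 / secs / 1e9:.2f} Gbps")

def client(host):
    # Then on the sending host: python thisfile.py client <server-host>
    payload = bytes(CHUNK)
    with socket.create_connection((host, PORT)) as conn:
        sent, start = 0, time.perf_counter()
        while sent < TOTAL:
            conn.sendall(payload)
            sent += len(payload)
        secs = time.perf_counter() - start
        print(f"sent {sent * 8 / secs / 1e9:.2f} Gbps")

if __name__ == "__main__":
    server() if sys.argv[1] == "server" else client(sys.argv[2])
```

A single Python stream will top out far below 100 Gbps; purpose-built tools like iperf3 with parallel streams are the usual way to test links this fast, but the shape of the test is the same.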

Rosario17_
Posting Freak (897 posts)
03-29-2025, 07:26 PM #8
For sending files, reaching such speeds is extremely difficult unless you have several high-performance SSDs in a RAID configuration.
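
A rough sizing sketch for that RAID idea; the per-drive rate and the overhead factor are assumptions, not measurements.

```python
import math

# RAID 0 sizing sketch for a sustained 100 Gbps transfer.
# The per-drive figure and the 15% overhead are assumptions.

link_gb_s = 100 / 8     # 100 Gbps = 12.5 GB/s of payload
drive_gb_s = 3.2        # assumed sustained sequential rate per NVMe drive
efficiency = 0.85       # assumed loss to RAID/filesystem overhead

drives = math.ceil(link_gb_s / (drive_gb_s * efficiency))
print(f"~{drives} NVMe drives striped to sustain {link_gb_s:.1f} GB/s")
```

On those assumptions you land at roughly five drives, so "several high-performance SSDs" is about right.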

EndShulker
Member (131 posts)
03-29-2025, 09:18 PM #9
Something like a PCIe SSD, then; even SATA 3.2 tops out at 16 Gbps. Devices at such high speeds aren’t widely available yet.

Robater
Member (86 posts)
03-30-2025, 10:21 AM #10
Most PCIe SSDs are limited to PCIe 3.0 x4, so you’d need multiple drives and a single PCIe 3.0 x16 slot. If you can find a way to combine several into one high-speed slot, you can actually keep a 100GbE connection busy for faster transfers.
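
The lane math behind that, as a short sketch: the per-lane rate follows from the PCIe 3.0 spec (8 GT/s with 128b/130b encoding), while the drive count at the end is derived, not measured.

```python
import math

# PCIe 3.0 lane budget next to a 100GbE NIC.
# 8 GT/s per lane with 128b/130b encoding is the spec figure.

lane_gb_s = 8 * 128 / 130 / 8   # ~0.985 GB/s usable per lane
x4 = 4 * lane_gb_s              # a typical NVMe SSD slot: ~3.9 GB/s
x16 = 16 * lane_gb_s            # a full-length slot: ~15.8 GB/s
nic_gb_s = 100 / 8              # 100GbE payload: 12.5 GB/s

print(f"x4 slot:  {x4:.2f} GB/s")
print(f"x16 slot: {x16:.2f} GB/s")
print(f"100GbE:   {nic_gb_s:.2f} GB/s "
      f"-> about {math.ceil(nic_gb_s / x4)} x4 SSDs to saturate it")
```

So an x16 slot has the headroom, and roughly four x4 drives striped together would cover the 12.5 GB/s a 100GbE NIC can deliver.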