We are still quite a distance away from achieving commercial Terabit LAN speeds.

ClarenceMate
Junior Member, 13 posts
06-15-2016, 12:49 AM #1
Today's lab used 100 Gigabit switches, not Terabit ones.

Fabiano_HD
Junior Member, 36 posts
06-19-2016, 10:35 PM #2

Impressive performance. That's 10 times faster than 100 Gb switches, and it's hard to imagine a use case that could saturate 125 gigabytes per second (1 terabit per second).
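A quick sanity check on that conversion, as a minimal sketch (the constants are just the figures from this post):

```python
# Convert a 1 Tbps link rate into bytes per second and compare to 100 Gbps.
link_bps = 1e12        # 1 terabit per second
BITS_PER_BYTE = 8

print(f"{link_bps / BITS_PER_BYTE / 1e9:.0f} GB/s")   # 125 GB/s
print(f"{link_bps / 100e9:.0f}x a 100 Gbps link")     # 10x
```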

10ukkie10
Member, 180 posts
07-01-2016, 03:48 AM #3

It looks like there will be a wait, but 400 Gbps is real: https://en.wikipedia.org/wiki/Terabit_Ethernet. Large server farms have to handle traffic from many high-speed applications, and Wikipedia notes that companies like Google and Facebook have shown interest. For everyday users it probably won't matter much.

LorrenK
Senior Member, 703 posts
07-02-2016, 05:52 PM #4

That's mostly an end-user perspective, though. For server applications this would be highly beneficial.

Tigrio
Member, 54 posts
07-06-2016, 07:57 PM #5

The catch with faster connections is that the users who need them usually want more than one link for redundancy. So even once terabit ports arrive in practice, the companies buying these devices will really need multi-terabit capacity.
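Putting rough numbers on that, as a sketch (the per-port rate, redundancy factor, and host count are assumptions, not figures from the thread):

```python
# Rough fabric-capacity estimate when every host wants redundant links.
port_rate_tbps = 1.0    # assumed per-port rate
links_per_host = 2      # assumed: one active link plus one backup
hosts = 4               # assumed number of attached devices

# Even if the backup links sit idle, the switch still has to terminate them.
fabric_tbps = port_rate_tbps * links_per_host * hosts
print(f"Switch must terminate {fabric_tbps:.0f} Tbps of ports")  # 8 Tbps
```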

Hellswalrus
Junior Member, 45 posts
07-06-2016, 09:56 PM #6
Usually it's the processing power that limits performance, not the cables.

Poop_Head27
Posting Freak, 820 posts
07-07-2016, 12:47 AM #7

Processing speed, cables, connectors, and protocols all matter. As @Fabiano_HD mentioned earlier, a 1 Tbps link tops out at 125 GB/s. A PCIe 4.0 NVMe SSD delivers about 7 GB/s, while a DDR5 channel reaches about 51 GB/s. The only components with more bandwidth than that are GPUs. What everyday scenario would need that much network bandwidth?
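For scale, a quick back-of-the-envelope comparison using the numbers in this post (they are the post's figures, not measurements):

```python
# How many of each device would it take to saturate a 1 Tbps link?
link_gb_per_s = 125.0             # 1 Tbps = 125 GB/s

device_gb_per_s = {
    "PCIe 4.0 NVMe SSD": 7.0,     # figure from this post
    "DDR5 channel": 51.0,         # figure from this post
}

for name, rate in device_gb_per_s.items():
    print(f"{name}: ~{link_gb_per_s / rate:.1f} needed to fill the link")
# PCIe 4.0 NVMe SSD: ~17.9 needed to fill the link
# DDR5 channel: ~2.5 needed to fill the link
```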

Prometheon
Junior Member, 5 posts
07-07-2016, 06:51 AM #8

I can handle that now with a 32-port 100G switch and NIC teaming; your CPU, RAM, and PCIe bus won't be the issue.
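One caveat worth spelling out about teaming, as a sketch (the team size is an assumption; the per-flow note reflects how LACP hashing generally behaves):

```python
# Aggregate bandwidth from teaming ports on a 32-port 100G switch.
port_rate_gbps = 100
ports_in_team = 10                 # assumed team size: 10 x 100G = 1 Tbps

print(f"Team aggregate: {port_rate_gbps * ports_in_team / 1000:.0f} Tbps")

# Note: LACP hashes each flow onto one member link, so a single TCP
# stream still tops out at 100 Gbps; only many parallel flows see 1 Tbps.
```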

iTzPandaNuss
Member, 144 posts
07-07-2016, 01:42 PM #9

We won't encounter 1 Tbps connections anytime soon. The latest technology reaches 400 Gb, with 800 Gb arriving soon. After that come 1.6 Tb and then 3.2 Tb, unless major shifts happen in the coming years.

Nazeo_
Junior Member, 41 posts
07-08-2016, 04:09 PM #10
The technology behind single mode fiber is essentially limitless in theory, but real-world challenges come from optics, SerDes lanes in ASICs, and thermal management. Currently available ASICs supporting 8x112Gb SerDes are just launching. Some designs could reach 32x56Gb SerDes to create a 1.6Tb interface, though practical limits suggest around 8 such ports—still modest compared to the 32/36/48 ports most switches offer per unit. Also, simply adding more ASICs to a switch won’t double bandwidth; you’d need six times as many, which significantly increases cost.
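Translating those lane counts into nominal port rates, as a sketch (these are raw signaling rates; payload after FEC/encoding overhead is lower):

```python
# Nominal port rate from SerDes lane count and per-lane signaling rate.
def port_gbps(lanes: int, lane_gbps: float) -> float:
    """Raw port rate: lanes x per-lane signaling rate."""
    return lanes * lane_gbps

print(port_gbps(8, 112))   # 896.0  -> roughly an 800G port after overhead
print(port_gbps(32, 56))   # 1792.0 -> roughly a 1.6T port after overhead
```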