F5F Stay Refreshed › Power Users › Networks › Wireless might never match wired latency completely

WarLord_24 (Junior Member, 15 posts)
04-30-2016, 05:46 AM #21
Are you sure? There are numerous studies with various methods and testing techniques. Context matters a lot. You mentioned pilot tests, as well as flashing object tests in dark and flicker conditions where humans can notice them faster. I think we're comfortable with 20ms monitors or even 20ms mouse input lag.
brayofden (Member, 59 posts)
04-30-2016, 07:43 AM #22
I haven't reviewed the article or its links yet. Could you share the study or source you're referring to? That way I can provide a relevant reference for you.
OwesomeOtter (Junior Member, 10 posts)
04-30-2016, 11:14 AM #23
Instead of framing this as "wired vs. wireless," let's talk about the underlying signaling, because that's what's actually at play here. Copper wire can carry analog radio waves or digital signals. Optical cables can carry analog or digital light. And wireless can carry analog or digital radio signals. When we say "analog," we mean a direct analog-to-analog path over a specified medium. Your analog telephone line could only carry 33.6 kbit/s in the audio spectrum, with around 200 ms of latency, yet the exact same wire could carry 75 Mbit/s at 2 ms of latency. The difference comes from converting digital data into the audio spectrum and back again; it's this type of conversion overhead that introduces latency. When you are optimizing for latency, you want to eliminate ALL conversion steps between analog and digital.

Cable works in a similar fashion with frequency division. Analog television sits between VHF channel 2 and VHF channel 13, alongside analog FM radio, and cable modems divide the remaining spectrum into "lanes," so to speak, to feed customers. The shared segment can run from a box outside your home to the inside of your home, or from that box to a bigger box further away, with your house sharing the same cable as 50 other people. Because those 50 other people have cable modems on the same wire, they can IN FACT see your internet usage; the only thing keeping them from doing so is the modems filtering on different hardware addresses. If you somehow cloned a cable modem and prevented it from sending any data to the cable plant, you could in fact receive everything that other customer does. That does not apply to phone/ADSL/VDSL, because the concentration point has a router in between, so you can't clone someone's xDSL modem: there would be no handshake.
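The latency gap between those two rates on the same wire can be put in rough numbers. A minimal sketch, using the 33.6 kbit/s and 75 Mbit/s figures quoted above and an assumed 1500-byte payload; it only counts serialization time, not the conversion overhead itself:

```python
def serialization_ms(payload_bytes: int, link_bps: float) -> float:
    """Time to clock payload_bytes onto the wire, in milliseconds."""
    return payload_bytes * 8 / link_bps * 1000

frame = 1500  # an Ethernet-sized payload, purely illustrative
print(f"33.6 kbit/s analog modem: {serialization_ms(frame, 33_600):.1f} ms")
print(f"75 Mbit/s DSL, same wire: {serialization_ms(frame, 75_000_000):.3f} ms")
```

Three orders of magnitude apart before any conversion delay is even counted.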
That's one of the weird things about analog: there is a handshake and training mechanism that ensures the devices on both ends communicate at the agreed speed. The reason you could "spy" on a cable modem is that the speed is agreed upon beforehand via the frequency division, so as long as two customers are using the same frequency, you could in theory spy on it easily. In practice it's not that practical, since a wiretap on the signal would need to sit at another customer premises nearby, and it's simply easier to get access at the cable plant if that were required for legal reasons.

DSL services are different. One consequence of VDSL2 is that it's nearly impossible to do local loop unbundling (e.g. a third-party ISP) because of how noise is cancelled to reduce crosstalk. Crosstalk is a problem with all wired systems, including Ethernet and USB; the reason every signal pin has a corresponding ground pin is to reduce or eliminate crosstalk. You can't simply solder all the ground pins together like Nvidia does. And on the termination end (cable, Ethernet, phone, etc.), without a terminator the signal "bounces" back and creates noise.

Alright, so that's "analog tech" over copper. What about optical tech? Optical's biggest problem is that the transceivers induce latency. This is why routers so often take SFP modules: the actual conversion between optical and electrical happens in the SFP, and the two ends must agree on the same physical medium. Optical systems use specific wavelengths (i.e. colors) and multiplexing in the same way that wireless does; rotate the polarization of the light and you double the bandwidth. If you want to see some massive confusion, look at the Wikipedia page https://en.wikipedia.org/wiki/Small_Form..._Pluggable At any rate, CWDM is used by optical and cable systems because that is the analog half of how they work.
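The termination "bounce" has a standard quantity behind it: the reflection coefficient, Gamma = (Z_load - Z_0) / (Z_load + Z_0). A sketch under assumed nominal impedances (75 ohms is typical for cable-TV coax):

```python
def reflection_coefficient(z_load: float, z_0: float) -> float:
    """Fraction of the incident signal amplitude reflected at the far end."""
    if z_load == float("inf"):
        return 1.0  # open (unterminated) end: everything bounces back
    return (z_load - z_0) / (z_load + z_0)

coax_z0 = 75.0  # ohms, nominal for cable-TV coax
print(reflection_coefficient(float("inf"), coax_z0))  # no terminator -> 1.0
print(reflection_coefficient(75.0, coax_z0))          # matched load  -> 0.0
```

A matched terminator absorbs the signal completely; an open end reflects all of it back as noise, which is exactly the "bounce" described above.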
You induce latency every time you convert to and from optical, just like when you convert to and from analog with cable and DSL. Here's a fun fact: there are two types of S/PDIF, one over coax (which is basically just an RCA wire) and one over optical (TOSLINK). How do you think that works? Over TOSLINK, S/PDIF is a fixed signal of roughly 3.1 Mbit/s in the 650 nm range; the coax version is the same digital protocol over a different physical layer.

Which brings us to wireless, and how it's basically hell. Loosely: Bluetooth is "wireless USB," "WiFi" means the 802.11 family, and "3G"/"4G"/"LTE"/"5G" refer to cellular networks. All three have the same problems, and those problems have more to do with the radios than the standards they run on. With Bluetooth, you're pretty much sending USB data over the 2.4 GHz spectrum using FHSS. This does not coexist well with everything else using 2.4 GHz, which is why you tend to use either all Bluetooth devices or none. FHSS makes it more reliable, but it still involves encryption, so it will always have more latency than a wired USB connection carrying the same protocol. Physics does not allow Bluetooth to be faster than USB.

In case people don't know what "law of physics" I'm talking about, it's the inverse square law. The further you are from the transmitter, the more the signal is attenuated (the same is true of the analog portion of cable and DSL). More attenuation means more errors, and more errors mean more latency. Throw encryption into the mess and you've added two more transformation steps. This is also why it's impractical for someone to sniff an NFC signal from meters away: they literally need to be about an inch away, and a lot of "RF wallets" aren't really worth the marketing they're given. To do any kind of spying on wireless signals, you need the keys at the point they're negotiated, or you need to brute-force the keys if they haven't been rotated.
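The inverse-square attenuation mentioned here is usually expressed as free-space path loss. A sketch of the idealized case (free space only, ignoring walls and antenna gain), using the Bluetooth band as the example frequency:

```python
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss: 20*log10(d) + 20*log10(f) + 20*log10(4*pi/c)."""
    c = 299_792_458.0  # m/s, speed of light in vacuum
    return (20 * math.log10(distance_m)
            + 20 * math.log10(freq_hz)
            + 20 * math.log10(4 * math.pi / c))

# At 2.4 GHz, every doubling of distance costs about 6 dB of signal.
print(f"{fspl_db(1, 2.4e9):.1f} dB at 1 m")
print(f"{fspl_db(10, 2.4e9):.1f} dB at 10 m")
```

The 6 dB-per-doubling behavior is the inverse square law in logarithmic form: quadruple the distance, keep one sixteenth of the power.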
You will see OFDM, QAM, etc. mentioned for cable, DSL, and wireless alike, because these are the ways signals are passed through the physical medium (for wireless, that medium is air). There will never be a point where a wireless signal beats a wired signal unless you intentionally cripple the wired signal, or run the wireless signal without its encryption stage and make the signaling as error-resilient as possible, which means lower bandwidth. With wireless you cannot have both security and high speed; you get one or the other, not both. With wired connections you get both, even when the connection is unencrypted, because it's impractical to capture the analog signal: you need a device "cloned" from the one you wish to eavesdrop on. A router or switch can be set to duplicate traffic to another port, but that is part of dealing with Ethernet itself, and you can do the equivalent with WiFi by dumping the packets received at either end of the wireless connection at the access point.

This is why "wireless grid" networks are a terrible idea: your signal is decrypted and re-encrypted to be re-broadcast to other APs. Pretty much anyone can attach a device and grab everything on the grid by maliciously telling the grid that it is the exit node to the internet. And even then, how do you trust the exit node? Just like WiFi at McDonald's: you can not. On unencrypted or untrusted networks you de facto must use a VPN, and even then it must be a VPN whose exit point you control. If you simply sign up for, oh let's say TunnelBear, you don't actually know anything about the intermediary nodes it uses. That's also why corporate VPNs are so high-latency: you are literally sending all your data to some other part of the world and back for everything. You only start to approach "near-wired" bandwidth and latency with high frequencies over very short ranges.
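The "error resilience costs bandwidth" trade-off above is bounded by the Shannon-Hartley limit, C = B * log2(1 + SNR). A sketch with made-up SNR figures standing in for a noisy radio channel versus a clean wire:

```python
import math

def shannon_capacity_bps(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon-Hartley upper bound on error-free throughput."""
    return bandwidth_hz * math.log2(1 + snr_linear)

bw = 20e6  # 20 MHz channel, same bandwidth for both media (illustrative)
print(f"noisy radio (SNR 10 dB): {shannon_capacity_bps(bw, 10) / 1e6:.0f} Mbit/s max")
print(f"clean wire  (SNR 30 dB): {shannon_capacity_bps(bw, 1000) / 1e6:.0f} Mbit/s max")
```

Same spectrum, very different ceilings: the noisier medium must either accept errors (and retransmission latency) or fall back to denser error coding and lower net throughput.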
But it never crosses that point, and I doubt you could find any evidence of it with consumer equipment. With certain carrier-grade equipment (e.g. satellite, microwave, and other point-to-point systems), yes, you can probably get pretty close to what you get with cable, because it's ultimately still an analog radio on both sides, and the reason is the large antennas used.

[Figure: backhaul capacity per distributed site. Source: Ericsson 2022]

If you have two point-to-point microwave links with large antennas, about the only thing that keeps them from operating is snow/rain fade. Where you get into the "ah, wireless is faster than optical fiber" argument is when you count the actual distance the signal needs to travel. A wireless link adds latency with every hop, but the endpoint is closer to the customer, so it is technically possible for a 5G ISP to offer a faster connection than its optical one, assuming the ISP has done everything possible to not optimize its optical network. The further you get from the urban core, the more likely a wireless connection "could" be a tiny bit faster in latency, if you ignore the entire encryption layer as well as the accounting mechanism WISPs need in order to bill you. As I said, a WISP can pretty much never get a 5G, or an older 4G-LTE, system to approach optical/copper latency without jettisoning the security layer. So the short answer is always "No." Any wireless technology will have higher latency than its exactly equivalent wired system, due to the conversions into and out of that system; you can't make data arrive before it's sent. It only matters more when the technologies are not exactly the same, e.g. comparing copper to fiber. A wired Ethernet cable has 8 wires; a coax cable has 1, and coax always runs RF.
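The "wireless can look faster over distance" point comes down to propagation speed versus per-hop overhead: light in glass fiber travels at roughly two thirds of c, radio through air at essentially c, but each wireless hop adds processing delay. A sketch; the velocity factor and the 0.2 ms per-hop figure are assumptions for illustration, not measurements:

```python
C = 299_792_458.0      # m/s, speed of light in vacuum
FIBER_VELOCITY = 0.67  # typical fraction of c for light in glass fiber

def fiber_delay_ms(km: float) -> float:
    """One-way propagation delay over glass fiber."""
    return km * 1000 / (C * FIBER_VELOCITY) * 1000

def radio_delay_ms(km: float, hops: int, per_hop_ms: float = 0.2) -> float:
    """One-way line-of-sight radio delay plus assumed per-hop processing."""
    return km * 1000 / C * 1000 + hops * per_hop_ms

# Over 100 km, raw radio propagation beats fiber...
print(f"fiber:         {fiber_delay_ms(100):.3f} ms")
print(f"radio, 0 hops: {radio_delay_ms(100, 0):.3f} ms")
# ...but a couple of relay hops erase the advantage.
print(f"radio, 2 hops: {radio_delay_ms(100, 2):.3f} ms")
```

This is the same mechanism behind HFT microwave links: fewer hops and straighter paths can beat fiber on propagation, but every regeneration point eats into the margin.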
Ethernet always runs digital signaling based on voltage, and it is in fact also measured in MHz (GigE signals at about 62.5 MHz), while the physical cables have a separate MHz rating (250 MHz for Cat6, 100 MHz for Cat5). So you cannot get "better" than digital-to-digital copper Ethernet, except for optical-fiber Ethernet when it comes to distance and capacity, since the capacity of glass optical fiber is essentially unlimited as far as we know (current research has it at petabits per second). I am not familiar with the math, but there is no presently known upper bound, assuming you don't melt the fiber or the equipment on either end; within that constraint, the commercially usable upper bound is a constantly moving target based on capacity or distance. https://www.nict.go.jp/en/press/2025/05/29-1.html

At any rate: it is wishful thinking to expect reliability from any wireless tech, and that lack of reliability is why it has higher latency. Wireless tech requires error correction, and error correction means sending redundant data or re-sending data at the radio level. Ethernet itself can pick up latency from bad data, bad hardware, or just bad cables (often the most common reason), and yes, Ethernet cables age, particularly those in a server room that aren't cable-managed carefully.
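The MHz figures above imply a simple headroom check; the ratings below are just the nominal numbers from this post, treated as assumptions rather than spec-sheet values:

```python
# Nominal cable bandwidth ratings and signaling frequency quoted above.
CABLE_RATING_MHZ = {"cat5": 100, "cat6": 250}
GIGE_SIGNAL_MHZ = 62.5

def has_headroom(cable: str, signal_mhz: float) -> bool:
    """True if the cable's rated bandwidth covers the signaling frequency."""
    return CABLE_RATING_MHZ[cable] >= signal_mhz

print(has_headroom("cat5", GIGE_SIGNAL_MHZ))  # True: GigE fits within Cat5
print(has_headroom("cat6", GIGE_SIGNAL_MHZ))  # True, with far more margin
```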
Gregosaur (Junior Member, 16 posts)
04-30-2016, 07:21 PM #24
I understand your point, but the article doesn't fully capture what's being conveyed. I appreciate the effort to highlight certain aspects, though it feels somewhat simplistic. It brings back memories of the video where Linus discussed monitor response times and reactions; it was quite poor. Essentially, it's a basic human reaction-time benchmark. My personal score would be around 100 milliseconds, but I could trick it into reporting 30 milliseconds by predicting the outcome, which doesn't add much value. There are plenty of studies available, and the more focused ones, such as reactions to rapid visual changes, peripheral stimuli, or gaming scenarios, show distinct differences compared to responses to color flashes or key presses in low-context settings. You can explore these yourself; I did similar research years ago (right now I'm on a phone). Your approach groups every millisecond into one category without considering what humans actually perceive. Try testing with a 20 ms monitor or a mouse with 20 ms input lag and tell me how the display and feel differ. Your comparisons ignore the context of what's being shown versus what a human brain can detect. If you want, I can explain how 20 ms of lag affects perception.
DrewzySoccer7 (Junior Member, 3 posts)
05-02-2016, 03:17 AM #25
As for counterarguments to the idea that latency improvements aren't worth the effort: some might argue it's pointless to optimize every hop in a chain if it doesn't justify the cost, but if you assume you can ignore latency in your systems, you've already missed the mark. Many people underestimate the impact of small details, and assuming simplicity will save time often leads to wasted resources. There's also value in choosing efficient solutions like dipole antennas over more complex ones, even if they limit form factors. And for high-frequency wireless, mm/GHz bands can work well over long distances, offering practical alternatives.