I launched an Azure machine to test potential latency improvements. Here are the findings.
I wanted to test whether I could outperform what my ISP offers for sending data across the United States. Why does this matter? Mainly for gaming and remote desktop sessions, where a high ping (over 50 ms) noticeably disrupts real-time performance. ISPs are usually effective at finding good paths, but they also aim to keep costs low. The modern internet stitches together fiber lines from many telecom firms, and your local ISP only handles a small slice of it. Most of those upstream providers focus on serving business customers rather than individual users.
I explored Google Cloud, Amazon AWS, and Microsoft Azure; I had no prior experience with any of them. Why would they be better? Their data centers are linked to all the major telecoms, and they don't simply pick the cheapest routing. In practice, they tend to choose the most efficient routes. All three offer similar workflows for setting up an OpenVPN server, including pre-built firewall rules and accounts.
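Whichever cloud you pick, the client side ends up looking about the same. As a sketch, a minimal OpenVPN client config might look like the following; the server address and certificate file names are placeholders, not values from my setup:

```
client
dev tun
proto udp
remote 203.0.113.10 1194   # placeholder: your VM's public IP
resolv-retry infinite
nobind
persist-key
persist-tun
remote-cert-tls server
cipher AES-256-GCM
ca ca.crt
cert client.crt
key client.key
verb 3
```

The cloud firewall also needs UDP 1194 opened to the VM; each provider's setup wizard handles that slightly differently.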
Google Cloud stood out with its interface and browser-based SSH. Azure's user experience was less satisfying, reminiscent of older Windows interfaces, and AWS fell somewhere in the middle. This setup is essentially what "gaming VPNs" claim to sell, but their actual data center locations are scattered: one might run on AWS, another on a cheaper alternative like Bob's Datacenter LLC, and both struggle with routing efficiency.
The issue became clear when I measured latency from my local machine to an AWS server through the Azure VPN. The results were disappointing: even direct pings to my ISP's own servers showed a minimum of 4 ms. I'm connected via coax over copper, and the exact path to fiber isn't clear; it could be anywhere from a few miles to tens of miles.
Running the test confirmed that no matter the effort, getting data out of my city still takes at least 7 ms. Even if my ISP improved its local routing or deployed Low Latency DOCSIS, the improvement would likely be less than 5 ms. For context, on FTTH (fiber to the home) I'd probably cut that baseline significantly.
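To compare the direct path with the VPN path, I only needed the minimum from ping's summary line. A small sketch of pulling that number out programmatically; the sample output string is illustrative, not my actual measurement:

```python
import re

def min_rtt_ms(ping_output: str) -> float:
    """Extract the minimum round-trip time from ping's rtt summary line."""
    m = re.search(r"= ([\d.]+)/([\d.]+)/([\d.]+)/([\d.]+) ms", ping_output)
    if not m:
        raise ValueError("no rtt summary found in ping output")
    return float(m.group(1))

# Illustrative final line from `ping -c 20 <host>`; the numbers are made up.
sample = "rtt min/avg/max/mdev = 4.012/5.387/9.644/1.213 ms"
print(min_rtt_ms(sample))  # 4.012
```

The minimum is the interesting figure here: averages and maximums absorb jitter and background load, while the minimum approximates the best the physical path can do.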
Cost-wise, the option I selected was near Azure's lowest tier, about 6 cents per hour for the VM; you can turn it off when not in use. Outbound data costs around 2 cents per GB. Running the VM all month (about $44) plus 1 TB of upload traffic (about $20) comes to roughly $60, an unrealistic amount given most users only download.
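The monthly figure is just the two rates above multiplied out; the rates are the approximate ones quoted in this post, not official pricing:

```python
VM_RATE = 0.06      # USD per hour, approximate low-tier Azure VM
EGRESS_RATE = 0.02  # USD per GB of outbound data, approximate

hours_per_month = 24 * 30
vm_cost = VM_RATE * hours_per_month   # ~43.20 for an always-on VM
egress_cost = EGRESS_RATE * 1000      # 1 TB ~= 1000 GB of upload
total = vm_cost + egress_cost
print(round(total, 2))  # 63.2
```

Shutting the VM down outside gaming sessions would cut the first term substantially, since Azure only bills compute while the machine runs.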
In short, my ISP is decent, but the gains diminish with distance. Optimization could help, but likely not enough to beat what’s available.
It's worth checking the tools you're using to assess latency. Pay attention to jitter, which can significantly affect observed delays without any real change in the path. It's common to encounter 10 ms or more of jitter, particularly when other tasks are running alongside the test. For example, pinging 151.101.0.81 with 64-byte packets shows varying round-trip times, indicating instability in the connection.
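To tell a real routing improvement apart from noise, it helps to look at both the average RTT and its spread. A minimal sketch, with made-up sample RTTs standing in for real measurements:

```python
import statistics

# Hypothetical RTT samples in ms from a busy coax link; not measured values.
rtts = [4.1, 14.8, 5.2, 22.3, 4.9, 18.0]

avg = statistics.mean(rtts)       # average latency
jitter = statistics.pstdev(rtts)  # spread around the average
print(f"avg={avg:.1f} ms, jitter={jitter:.1f} ms")
```

When the spread is on the same order as the difference you're trying to measure, as it is here, a single ping run can't show whether the VPN path actually helped; you need many samples on an otherwise idle connection.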
The VPN is simple to switch on and off during testing, though consistency matters. The difference isn't big for the west coast, likely because it's already close to my ISP's network. As the connection gets busier, the variation increases. Low Latency DOCSIS won't arrive soon enough to eliminate the 7 ms delay and restore performance. CableLabs says it's just a software update; my ISP estimates it'll take about two years.