MIT Network Congestion


(Credit: Getty Images)
We’re all using more data than ever before, and the bandwidth caps ISPs force on us do little to slow people down; they’re mostly a tool to make more money. Legitimate network management has to go beyond penalizing people for using more data, but researchers from MIT say the congestion control algorithms meant to do that job don’t work as well as we thought. A newly published study suggests that it’s impossible for these algorithms to distribute bandwidth fairly.
We’ve all been there, struggling to get enough bandwidth during peak usage to stream a video or upload large files. Your devices don’t know how fast to send packets because they lack information about upstream network conditions. If they send packets too slowly, they waste available bandwidth; if they send too fast, packets can be lost, and the retransmissions cause delays. You have to rely on the network to adjust, which can be frustrating even though academics and businesses have spent years developing algorithms meant to reduce the impact of network saturation. These systems, like the BBR algorithm devised by Google, aim to control the delay packets experience while waiting in queues on the network, so that everyone gets some bandwidth.
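To make the idea concrete, here is a minimal sketch in Python of a delay-based sender in the same general spirit as BBR or TCP Vegas. It is not Google’s actual algorithm, and the class name, constants, and parameters are illustrative assumptions rather than anything from the MIT study. The controller treats the gap between a measured round-trip time and the smallest round-trip time it has ever seen as queueing delay, probing for more bandwidth when that delay is small and backing off when it crosses a target.

```python
# Minimal sketch of a delay-based congestion controller (illustrative only;
# this is not Google's BBR, just the same general idea). The sender tries to
# keep queueing delay -- measured RTT minus the smallest RTT seen so far --
# near a small target instead of filling buffers until packets are dropped.

class DelayBasedSender:
    def __init__(self, target_queue_delay=0.005, initial_rate=1_000_000):
        self.target = target_queue_delay   # allowed queueing delay, seconds
        self.rate = initial_rate           # sending rate, bytes per second
        self.min_rtt = float("inf")        # best guess at propagation delay

    def on_ack(self, measured_rtt):
        # The lowest RTT ever observed approximates the "empty queue" RTT.
        self.min_rtt = min(self.min_rtt, measured_rtt)
        queue_delay = measured_rtt - self.min_rtt

        if queue_delay < self.target:
            self.rate *= 1.05   # queue looks short: probe for more bandwidth
        else:
            self.rate *= 0.80   # queue is building: back off and let it drain
        return self.rate


# Feed RTT samples from ACKs; pace outgoing packets at the returned rate.
sender = DelayBasedSender()
for rtt in (0.050, 0.051, 0.050, 0.058, 0.052):
    rate = sender.on_ack(rtt)
    print(f"rtt={rtt * 1000:.0f} ms -> send at {rate / 1e6:.2f} MB/s")
```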
But can this type of system ever be equitable? The new study contends that there will always be at least one sender who gets screwed in the deal. This hapless connection will get virtually no data while others get a share of what’s available, a problem known as “starvation.” The team developed a mathematical model of network congestion and ran the congestion control algorithms in use today through it. No matter what they did, every scenario ended up shutting out at least one user.
The problem appears to be the overwhelming complexity of the internet. Algorithms use signals like packet loss and delay to estimate congestion, but packets can also be delayed for reasons unrelated to congestion. This non-congestive “jitter” is unpredictable, say the researchers, and it confuses the algorithms into pushing some connections toward starvation. This led the team to classify these systems as “delay-convergent algorithms,” a class for which starvation turns out to be unavoidable.
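A deliberately simplified toy simulation can show the mechanism. This is not the researchers’ mathematical model; the numbers, the rate-adjustment rule, and the choice to give only one of two flows non-congestive jitter are all assumptions made for illustration. Two senders share a bottleneck and run the same delay-based rule sketched above, but one of them measures extra random delay it cannot distinguish from real queueing, so it backs off far more often than it probes and its throughput decays toward zero while the other flow takes nearly everything.

```python
# Toy illustration of jitter-driven starvation (NOT the MIT model; all
# numbers and rules here are assumptions made up for demonstration).
# Two senders share a bottleneck and use the same delay-based rule, but
# sender B's delay measurements include random non-congestive "jitter"
# it cannot tell apart from real queueing, so it keeps backing off and
# its share of the link decays toward zero.
import random

random.seed(1)

CAPACITY = 100.0   # bottleneck capacity, packets per time step
TARGET = 2.0       # queueing-delay target, in queued packets
JITTER = 4.0       # worst-case non-congestive delay seen by sender B

def adjust(rate, baseline, measured):
    """Same rule for both senders: compare estimated queueing to the target."""
    baseline = min(baseline, measured)   # lowest delay ever observed
    if measured - baseline < TARGET:
        rate *= 1.05                     # probe for more bandwidth
    else:
        rate *= 0.80                     # back off to drain the queue
    return rate, baseline

rate_a, rate_b = 50.0, 50.0
base_a = base_b = float("inf")
queue = 0.0

for _ in range(2000):
    queue = max(0.0, queue + rate_a + rate_b - CAPACITY)
    # Sender A sees the true queueing delay; sender B sees it plus jitter.
    rate_a, base_a = adjust(rate_a, base_a, queue)
    rate_b, base_b = adjust(rate_b, base_b, queue + random.uniform(0, JITTER))

print(f"sender A: {rate_a:8.1f} packets/step")
print(f"sender B: {rate_b:8.1f} packets/step  (starved)")
```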
Study author and MIT grad student Venkat Arun explains that the failure modes identified by the team have been present on the internet for years; the fact that no one noticed them speaks to the difficulty of the problem. Existing algorithms may be unable to avoid starvation, but the researchers believe a solution is possible. They continue to explore other classes of algorithms that could do a better job, perhaps by accepting wider variation in delay across a network. The same modeling tools could also help us understand other unsolved problems in networked systems.