In the world of computing and networking, there is an ongoing battle to achieve the fastest speeds. In this race, there is a strong incentive to drive latency as low as possible to ensure those lightning-fast experiences. Edge computing and edge applications are a continuation of this effort to lower latency.
If you are unfamiliar with latency, it is the delay between when a command or input is initiated on one side and when it is received on the other. It is usually measured in milliseconds (ms). High latency means a longer delay between sending and receiving; low latency means a short one.
Latency is typically affected by distance: the farther data has to travel from sender to receiver, the longer the delay, while shorter distances yield lower latency. Latency can also be affected by software and hardware elements in the network path, along with network congestion at traffic exchange points, a situation analogous to the problems people have when driving on freeways in traffic.
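The distance effect can be made concrete with a back-of-the-envelope calculation. This is a hypothetical sketch, assuming signals in optical fiber travel at roughly two-thirds the speed of light (about 200,000 km/s), and it ignores all processing and queuing delay:

```python
# Light in fiber travels at roughly 2/3 the speed of light in a vacuum.
FIBER_SPEED_KM_PER_S = 200_000  # approximate; real fiber paths vary

def propagation_delay_ms(distance_km: float, round_trip: bool = True) -> float:
    """Propagation delay in milliseconds over a fiber path of the
    given length, from distance alone (no queuing or processing)."""
    one_way_ms = distance_km / FIBER_SPEED_KM_PER_S * 1000
    return one_way_ms * 2 if round_trip else one_way_ms

# A 1,000 km path costs ~10 ms round trip before any other overhead;
# a 10 km path costs only ~0.1 ms.
print(round(propagation_delay_ms(1_000), 2))
print(round(propagation_delay_ms(10), 2))
```

Real-world latency is higher than this floor because every router, switch, and server along the path adds its own delay, but distance sets a hard lower bound that no amount of hardware can remove.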
Online gamers experience high latency in perhaps its most infuriating form: 'lag.' There is also the impact of latency on financial trading, satellites, fiber optics, and networks. All of those industries consider high latency a serious inconvenience and a detriment to their effective functioning. A few milliseconds of latency could mean millions of dollars lost or potentially fatal disconnects with navigation equipment.
One critical purpose of edge computing and edge devices is to minimize the effect of latency on online functions. Rather than relying on a single centralized location for data processing and telecommunications, edge computing decentralizes these functions, moving them to the locations where the data is actually generated and used. A centralized location typically struggles to accommodate the flood of data and information it receives, threatening to overload it and degrade its performance, causing high latency and disruptions.
Because processing is more widely distributed, edge computing can lower latency by performing processing and telecommunications closer to the point of use, such as an Internet of Things (IoT) device. Rather than data traveling hundreds or thousands of kilometers, an edge application may shorten that journey to merely tens of kilometers, or eliminate it entirely with on-site processing, thus reducing latency. Additional benefits include improved consistency.
Consider the range of edge applications today, like IoT devices or smart cameras. High latency would inhibit their ability to operate in real time, as processing would be delayed. By processing locally on edge infrastructure such as on-premises servers or a nearby cloud node, latency can be cut considerably, enabling effective use of edge applications.
A strong majority of business leaders in one survey said they sought latency of 10 ms or less to ensure the success of their applications, while 75 percent said they require 5 ms or less for edge initiatives.
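Those thresholds translate directly into geography. The sketch below, a hypothetical illustration assuming fiber propagation at roughly 200,000 km/s and ignoring all processing and queuing delay, inverts the propagation-delay formula to ask how far away a server can physically be while still fitting a round-trip latency budget:

```python
FIBER_SPEED_KM_PER_S = 200_000  # light in fiber, roughly 2/3 of c

def max_distance_km(rtt_budget_ms: float) -> float:
    """Maximum one-way fiber distance that fits a round-trip latency
    budget, counting propagation delay only (no processing/queuing)."""
    one_way_s = (rtt_budget_ms / 1000) / 2
    return one_way_s * FIBER_SPEED_KM_PER_S

# A 10 ms budget caps the server at ~1,000 km away; a 5 ms budget
# halves that to ~500 km, even in this best-case, distance-only model.
print(round(max_distance_km(10)))
print(round(max_distance_km(5)))
```

Since real deployments must also spend their budget on routing, queuing, and server processing, the practical distances are far shorter than these ceilings, which is why sub-5 ms targets effectively require infrastructure near or on the premises.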
But it is possible that enterprises may be too focused on lowering latency to consider whether they truly need such speeds. A market analysis from Spirent Communications and STL Partners found a disconnect between the demands of edge customers for 5G multi-access edge computing (MEC) and the capabilities of vendors. STL Partners also found telecoms often struggle to deliver consistent latency.
edge computing | gaming | interconnection | latency | network