
Monitoring at the edge of the third act of the internet


This is a guest post by Mehdi Daoudi, CEO and Co-Founder of Catchpoint.

Whether you’re in tech, media, retail, or any other business, with or without a digital presence, the biggest challenge you face is how to deliver something across the last mile. If I own a grocery store, it’s easy to run a big warehouse where I store and sell goods, but no one will drive there if it isn’t convenient. That’s why stores are located close to their customers: so anyone can stop on the way home and pick up their weekly groceries. The biggest challenge for everyone has always been how to deliver a product or service to the end user as conveniently and as quickly as possible.

Amazon disrupted the retail industry with ‘same day’ delivery, setting a very high bar for the ‘last mile’. Along the same lines, its acquisition of Whole Foods Market shows that it sees a big opportunity in disrupting the perishable goods industry by streamlining the delivery chain and offering a more convenient way to get weekly groceries. These examples make it apparent that the ‘edge’ is not just a computing term; the concept applies to every customer-facing industry.

If we now look at IT and computing specifically, one of the unavoidable technical limitations we face is that digital information cannot travel faster than the speed of light. That is incredibly fast, but the farther a digital user is from where the information is sent, the longer it takes to arrive. We call this delay ‘latency’.
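
To make the arithmetic concrete, here is a back-of-the-envelope sketch (illustrative only) of the best-case round-trip time physics allows over optical fiber, where light travels at roughly two-thirds of its vacuum speed:

```python
# Back-of-the-envelope: the minimum round-trip time (RTT) physics allows.
SPEED_OF_LIGHT_KM_S = 299_792   # speed of light in a vacuum, km/s
FIBER_FACTOR = 0.67             # light in fiber travels roughly 2/3 as fast

def min_rtt_ms(distance_km: float) -> float:
    """Theoretical best-case RTT over fiber, ignoring routing and queuing."""
    one_way_s = distance_km / (SPEED_OF_LIGHT_KM_S * FIBER_FACTOR)
    return one_way_s * 2 * 1000  # round trip, in milliseconds

# A cross-country hop (~4,000 km) versus a nearby edge site (~50 km)
for distance in (4_000, 50):
    print(f"{distance:>5} km: at least {min_rtt_ms(distance):.2f} ms RTT")
```

Real-world latency is higher still, since packets rarely travel in a straight line and every hop adds delay; the point is that distance alone sets a hard floor.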

Edge computing represents the third act of the internet

The Internet has gone through a series of transformations to handle the last mile problem. We have moved on from the monolithic approach of the early days, when everything was delivered from a single centralized datacenter. Akamai pioneered the Second Act of the Internet when it launched the Content Delivery Network (CDN) to bring commonly used content closer to end users by caching it in decentralized datacenters. These datacenters, although closer to end users, are still a significant distance away. Data packets have to travel hundreds of miles and thus remain subject to delays from network hops, best-effort routing, and, indeed, latency.

The next iteration of the Internet, currently being built out by companies like Amazon, VaporIO, and Packet, will enable some of the incredible technologies yet to come, such as true virtual reality (VR), true virtual learning, and the plethora of innovations within the Internet of Things (IoT) industry. Because of their latency requirements and demand for super-fast connectivity, these applications need to run very close to the consumer.

The Third Act of the Internet is therefore all about how to get content and apps as close as possible to the digital user, whether a human, an autonomous car, or an item of wearable technology. That’s what the edge is about: how do we enable the smart revolution in response to the demand for super low latency?


Edge technology is an enabler of a more connected world

Edge computing will change the way we interact with technology, making it even more ubiquitous than it is today. Imagine waking up and, without asking Siri what the weather will be like, being given the answer: “Don’t forget your umbrella,” Siri will say before you walk out the door without it. This kind of ‘intelligent’ service requires edge computing. Ultimately, technology is just an enabler for a more connected, more AI-driven world in which the digital user is constantly getting feedback and receiving recommendations.

It’s always the applications that drive technological innovation. Think about the last decade and the advances we’ve seen in applications such as Uber and Siri. These new technologies and applications drove 4G adoption as well as innovation in cloud computing. Similarly, the next iteration of apps, which demand ultra-low latency, will drive the growth of edge computing and 5G adoption.

The edge will become an extension of the cloud

Edge compute is essentially the next step in the evolution of distributed computing. Edge computing tackles the problem of taking the compute experiences that currently run in big datacenters or with a cloud provider and moving them to hundreds of micro-datacenters located close to the end user. Such a location could be Grand Central Terminal in New York City. A few million people pass through Grand Central each day, constantly sending and receiving data. They need that data to be instantly available and the applications they’re using to be super fast, so we can’t depend on hosting them in a datacenter in upstate New York, over a hundred miles away. Who will put ‘your’ data in Grand Central to enable this? The edge is about pushing the limits and bringing apps closer to where your users are.
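
As a minimal sketch of the idea (the endpoints below are hypothetical placeholders, not real services), a client could pick the closest of several edge sites by measuring TCP connect time:

```python
import socket
import time

# Hypothetical edge endpoints; a real deployment would discover
# these via DNS or a directory API rather than a hardcoded map.
EDGE_SITES = {
    "nyc-grand-central": ("edge-nyc.example.com", 443),
    "upstate-ny": ("edge-upstate.example.com", 443),
    "newark": ("edge-ewr.example.com", 443),
}

def connect_time_ms(host, port, timeout=2.0):
    """TCP connect time: a cheap proxy for network proximity."""
    start = time.monotonic()
    with socket.create_connection((host, port), timeout=timeout):
        return (time.monotonic() - start) * 1000

def nearest_edge():
    """Return the name of the reachable site with the lowest connect time."""
    timings = {}
    for name, (host, port) in EDGE_SITES.items():
        try:
            timings[name] = connect_time_ms(host, port)
        except OSError:
            continue  # site unreachable; skip it
    return min(timings, key=timings.get) if timings else None
```

In practice this steering is usually done for you, by anycast routing or DNS-based load balancing, but the principle is the same: send each user to whichever site answers fastest.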

Still, despite this evolution, the edge won’t eliminate the need for cloud computing. Instead, the two will coexist. Edge computing is more distributed and lightweight; it’s about whatever needs to happen to get things closer to the end user. Some data processing will take place on your handheld device, your iPhone or Android, or on the kind of AI chips Tesla is deploying in its cars; some will happen at edge sites, which can hold perhaps only 20 or 30 servers rather than thousands. But big processing will still reside in the big datacenters, both cloud and traditional.

State of the Edge points out that “as the demand for edge applications grows, the cloud will drift closer to the edge.” Indeed, we are beginning to see cloud companies such as AWS and Microsoft move into the emerging edge ecosystem and announce edge compute resources that bring data processing closer to the end user. The edge will become an extension of the cloud, an extension of the big datacenters. To put it in terms of another everyday analogy: you can have a big IKEA in an old Brooklyn port that houses and sells everything, while the little IKEA store in midtown Manhattan sells only the most common items.

Monitoring at the edge

The edge is a new frontier. It raises many new challenges that we think about daily here at Catchpoint.

One of these is the sheer amount of data that monitoring systems will need to collect, which will be enormous as more and more “things” are connected to the Internet. IDC’s latest forecast estimates there will be 41.6 billion connected IoT devices by 2025, generating 79.4 zettabytes of data.
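
A quick calculation (assuming, purely for illustration, that the data is spread evenly across devices over the year) shows what that forecast implies per device:

```python
# Rough scale check on the IDC forecast: 79.4 ZB across 41.6 B devices in 2025.
ZETTABYTE = 10**21                       # bytes
total_bytes = 79.4 * ZETTABYTE
devices = 41.6e9

per_device_year = total_bytes / devices
per_device_day = per_device_year / 365

print(f"~{per_device_year / 1e12:.1f} TB per device per year")
print(f"~{per_device_day / 1e9:.1f} GB per device per day")
# -> roughly 1.9 TB per year, or about 5.2 GB per day, per connected device
```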

Solutions for gathering analytics from edge datacenters are beginning to emerge, such as the edge offerings recently announced by Dell, which include enhanced telemetry management and a streaming analytics engine as well as micro-datacenters and new edge servers. Analysis poses its own challenges: the more data we collect, the more we will need rigorous machine learning and artificial intelligence systems to help process it.

A second significant monitoring challenge is access. If 5G is fully deployed, there will be many more cell towers than today, with antennas as little as 500 feet apart, because high-frequency radio waves struggle over long distances and through objects. How many of these small cell sites do you monitor from? Similarly, how do you choose which edge datacenters to monitor from? How extensive does the monitoring footprint need to be to truly cover the edge in all its manifestations? One simple way to think about spreading vantage points is sketched below.
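
As one naive illustration of vantage point selection (over made-up coordinates, and ignoring real constraints like cost and peering), a greedy k-center style heuristic spreads a fixed budget of monitoring locations as widely as possible across the candidates:

```python
import math

# Hypothetical candidate sites as (x, y) coordinates, e.g. projected lat/lon.
SITES = {
    "site-a": (0.0, 0.0),
    "site-b": (1.0, 0.0),
    "site-c": (0.0, 1.0),
    "site-d": (5.0, 5.0),
    "site-e": (5.1, 5.0),
}

def farthest_point_sample(sites, k):
    """Greedy k-center heuristic: pick k vantage points spread across candidates."""
    names = list(sites)
    chosen = [names[0]]                  # seed with an arbitrary site
    while len(chosen) < k:
        remaining = [n for n in names if n not in chosen]
        # Pick the candidate farthest from everything already chosen.
        chosen.append(max(remaining,
                          key=lambda n: min(math.dist(sites[n], sites[c])
                                            for c in chosen)))
    return chosen

print(farthest_point_sample(SITES, 3))   # -> ['site-a', 'site-e', 'site-b']
```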

Developing the right strategy

Developing the right strategy to gain a comprehensive picture of how things are performing from an edge perspective is critical. The monitoring industry has been evolving broadly toward digital experience monitoring (DEM). This involves a significant shift from monitoring the health of a network or an application to monitoring the desired outcome: what the user actually experiences.

Traditional monitoring tools focus on the infrastructure and applications you directly control, which leaves blind spots around the other critical services and networks that lie between the hosting infrastructure and the end user. A good DEM solution takes a much more holistic approach, looking at the digital performance of the entire IT infrastructure involved in delivering an application or a service to its end users.

Developers are already running some application code at the edge using services from CDNs such as Akamai, Fastly, and Cloudflare. To ensure that API services (which currently power most AI recommendations) are always on and as reliable as possible, the big CDN providers are beginning to offer edge services that move API traffic onto their edge networks, so that API responses are served from edge servers instead of origin servers. This is where API monitoring is already critical, and not just from an availability perspective: you also need to ask whether your API calls are returning the correct responses, to ensure the integrity of the service.
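
Here is a minimal sketch of such a check, against a hypothetical endpoint and payload shape, that validates the response body rather than just reachability:

```python
import json
from urllib.error import HTTPError
from urllib.request import Request, urlopen

# Hypothetical endpoint and response contract; substitute your own API.
API_URL = "https://api.example.com/v1/recommendations?user=123"

def check_api(url, timeout=5.0):
    """Return a list of problems, checking availability AND correctness."""
    try:
        req = Request(url, headers={"Accept": "application/json"})
        with urlopen(req, timeout=timeout) as resp:
            body = json.load(resp)
    except HTTPError as exc:
        return [f"HTTP {exc.code}"]         # reachable, but erroring
    except OSError as exc:
        return [f"unreachable: {exc}"]      # not reachable at all
    problems = []
    # Correctness goes beyond "is it up?": validate the payload itself.
    if not isinstance(body.get("items"), list):
        problems.append("missing or malformed 'items' field")
    elif not body["items"]:
        problems.append("empty recommendation list")
    return problems

print(check_api(API_URL) or "OK")
```

Run from enough locations, a check like this can distinguish “the API is down” from “the API is up but returning the wrong data at one edge.”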

Other edge monitoring issues and requirements:

• End-to-end visibility into application and network layer performance will become increasingly important as the cloud and the edge move closer together. This will require a global network of monitoring nodes near where application end users are located.

• You may know there’s an issue, but does it lie with a backbone provider or with a wireless provider at the edge? If your app is hosted at the edge and you monitor it from a centralized AWS or Azure datacenter, that monitoring won’t tell you how your ‘edge app’ is performing or whether your customers can even reach it. It is critical that the monitoring solution is location-aware.

Developers need a monitoring platform that identifies problems by peeling back each of the layers involved in delivering a digital service to a user. This will remain true as momentum behind edge computing grows, and expanding the footprint of the monitoring network out to the edge will be critical to giving developers ever greater visibility.
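
As a rough illustration of what “peeling back the layers” means (a sketch, not how any particular monitoring platform is built), the timing of a single HTTPS request can be decomposed into DNS, TCP, TLS, and time-to-first-byte:

```python
import socket
import ssl
import time

def layer_timings(host, port=443):
    """Cumulative timings for each layer of a basic HTTPS request."""
    t = {}
    start = time.monotonic()

    # Layer 1: DNS resolution
    ip = socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP)[0][4][0]
    t["dns_ms"] = (time.monotonic() - start) * 1000

    # Layer 2: TCP connect
    sock = socket.create_connection((ip, port), timeout=5)
    t["tcp_ms"] = (time.monotonic() - start) * 1000

    # Layer 3: TLS handshake
    tls = ssl.create_default_context().wrap_socket(sock, server_hostname=host)
    t["tls_ms"] = (time.monotonic() - start) * 1000

    # Layer 4: first byte of the HTTP response
    tls.sendall(f"GET / HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n".encode())
    tls.recv(1)
    t["first_byte_ms"] = (time.monotonic() - start) * 1000
    tls.close()
    return t

# Each value is cumulative, so successive differences give the per-layer cost.
print(layer_timings("example.com"))
```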

To summarize, developers need to take a user-centric perspective, which includes monitoring of all the components involved in the user journey. With the emergence of edge computing, taking a user-centric monitoring approach is more critical than ever before!

About the author

Mehdi Daoudi is CEO and Co-Founder of Catchpoint, a digital experience intelligence company. Daoudi spent more than ten years at Google and DoubleClick, where he was responsible for quality of service: buying, building, deploying, and using internal and external monitoring solutions to keep an eye on the DART infrastructure, which delivered billions of transactions a day. Mehdi holds a BS in international trade, marketing, and business from the Institut Supérieur de Gestion (France).

Catchpoint offers the largest and most geographically distributed monitoring network in the industry. The company’s services help enterprises proactively detect, identify and validate user and application reachability, availability, performance and reliability, across an increasingly complex digital delivery chain. Industry leaders like Google, L’Oréal, Verizon, Oracle, LinkedIn, Honeywell, and Priceline are customers.

DISCLAIMER: Guest posts are submitted content. The views expressed in this blog are those of the author and don’t necessarily reflect the views of Edge Industry Review (EdgeIR.com).
