Cloud-Native at the Speed of Thought

Infrastructure shouldn’t slow you down
By Nir Sheffi, CTO at Ridge 

One of the key lessons learned from COVID-19 is the importance of having a flexible IT environment that can be quickly adapted to changes in business needs. Organizations that were more advanced in their transition to modern digital platforms were better equipped to continue operations amidst lockdowns, social distancing, remote work, and supply chain disruptions. 

But let’s not fixate only on pandemics. The same lesson, about the need to remain constantly flexible, has been learned as we’ve progressed from one technological advancement to the next. As IT and communications environments have evolved, applications have advanced in ways that could not have been foreseen a generation ago. Can anyone predict the applications that will take advantage of 5G and make the world spin in 2030?

Building a resilient, agile IT environment requires the flexibility to update and upgrade continually, in step with changing needs and opportunities. It also requires organizations to rebuild their IT infrastructures around agile development methodologies so they can release and update their applications easily.

At Your (Micro) Service

To support agility and to exploit the tremendous potential of cloud-native applications, many organizations have adopted a microservice-based architecture. Microservices introduced the ability to break an application into various functions, each of which can be developed, deployed, and managed independently. 
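To make this concrete, below is a minimal sketch of a single microservice: a hypothetical inventory-lookup function exposed over HTTP using only Python’s standard library. The service name and data are illustrative; the point is that each function of the application can live behind its own small, independently deployable service.

```python
# A minimal, self-contained microservice: one function of a larger
# application (inventory lookup), exposed over HTTP on its own port.
# The service name and data are hypothetical; stdlib only, so it runs as-is.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

INVENTORY = {"sku-123": 42, "sku-456": 7}  # stand-in for a real data store

class InventoryHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        sku = self.path.strip("/")          # e.g. GET /sku-123
        count = INVENTORY.get(sku)
        status = 200 if count is not None else 404
        body = json.dumps({"sku": sku, "count": count}).encode()
        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), InventoryHandler).serve_forever()
```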

Containers are particularly important as enablers of these flexible microservice-based application architectures. By packaging microservices into containers, organizations can update and scale each microservice independently, with no disruption to the others. If and when one part of the application fails, or needs updating, the other parts will not be affected.

However, things can get complicated pretty fast when you need to deploy and manage an application across multiple machines. Developers must handle scheduling, resource allocation, and other processes for each microservice.  

Kubernetes: Complicated, but Manageable

A number of solutions, varying in complexity, were developed to meet the growing need to control these containers. Over time, Kubernetes, originally created by Google, emerged from the pack to become the de facto standard for automating and orchestrating manual container processes such as deployment, management, and scaling. Kubernetes monitors containers 24/7 and ensures that they, and your application, are running optimally.
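As a sketch of what this orchestration looks like in practice, the snippet below uses the official Kubernetes Python client (the kubernetes package) to declare a Deployment with three replicas. The service name and image are hypothetical placeholders; the key idea is that you declare a desired state and Kubernetes handles the scheduling and supervision.

```python
# Declaring desired state with the official Kubernetes Python client.
# The "orders" service and image below are hypothetical placeholders.
from kubernetes import client, config

config.load_kube_config()  # use the cluster credentials in ~/.kube/config
apps = client.AppsV1Api()

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="orders"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # desired state: keep three copies running
        selector=client.V1LabelSelector(match_labels={"app": "orders"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "orders"}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(name="orders",
                                   image="registry.example.com/orders:1.0"),
            ]),
        ),
    ),
)
apps.create_namespaced_deployment(namespace="default", body=deployment)

# Scaling later is just another declaration; Kubernetes converges to it.
apps.patch_namespaced_deployment(
    name="orders", namespace="default", body={"spec": {"replicas": 5}}
)
```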

All systems go? Not so fast. Although its ability to orchestrate containers may seem like magic to some, Kubernetes introduces its own complexities, and its administration can become a time- and labor-consuming effort. Kubernetes may, and frequently does, determine that physical resources need to be provisioned, configured, and updated. If no one is “listening” to these signals and taking action, your application will get derailed.
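What “listening” means in practice: something, human or automation, has to watch the cluster’s signals and react. Here is a minimal sketch using the same Python client’s watch API (the namespace and timeout are arbitrary choices):

```python
# Watching cluster signals: stream pod events and flag ones that may
# require action. In a managed service, automation plays this role.
from kubernetes import client, config, watch

config.load_kube_config()
v1 = client.CoreV1Api()

for event in watch.Watch().stream(v1.list_namespaced_pod,
                                  namespace="default",
                                  timeout_seconds=60):
    pod = event["object"]
    phase = pod.status.phase
    print(f'{event["type"]:10} {pod.metadata.name:40} {phase}')
    if phase in ("Pending", "Failed"):
        # Unschedulable or failed pods often mean physical resources
        # need to be provisioned or reconfigured.
        print("  -> may need more capacity or a configuration change")
```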

The complexity of managing resource-heavy cloud-native applications is therefore driving organizations that want to focus on application development, not on managing infrastructure, to run their Kubernetes deployments through third-party cloud-based managed Kubernetes services. A managed Kubernetes service is easy to use and ensures that the desired states requested by Kubernetes are automatically implemented on the underlying physical resources.

Salvation is Not Always in the Public Cloud

However, while these managed services ease the burden of managing Kubernetes clusters and their underlying infrastructure, developers still face a challenge that stems from the very nature of the public cloud: the computing resources are located in mega data centers. Public cloud-provided services may suffer from latency in the common scenario where data is transmitted from an end device to the cloud to be analyzed and then returned to the end device. 

Low latency and high throughput are becoming increasingly critical as we progress to cloud-native applications with strict response-time requirements, such as autonomous vehicles, drones, telemedicine, and robotics. They cannot be effectively supported by a cloud many miles away. Furthermore, many of these applications consume significant bandwidth, further reducing performance and increasing networking costs.
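A back-of-the-envelope calculation shows why distance dominates. Light in fiber covers roughly 200 km per millisecond, so propagation alone, before any queuing or processing, puts a floor under response time. The distances below are illustrative:

```python
# Propagation-only round-trip time: distance sets a hard floor on latency.
# ~200 km/ms is the speed of light in fiber (about 2/3 of c in vacuum).
FIBER_KM_PER_MS = 200.0

def round_trip_ms(distance_km: float) -> float:
    """Device -> data center -> device, ignoring queuing and processing."""
    return 2 * distance_km / FIBER_KM_PER_MS

print(round_trip_ms(2000))  # distant cloud region: 20.0 ms before any work is done
print(round_trip_ms(50))    # metro-area edge site:  0.5 ms
```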

Data sovereignty is another challenge: Over 100 countries have laws that require data and processing to be resident in-country. As workloads with user data are deployed around the globe, it is becoming increasingly difficult to meet each market’s unique data locality requirements. 

Of course, an enterprise can circumvent these problems with a DIY solution: bring the compute workloads to a local data center, in proximity to end-users. Assuming that the enterprise would want to dedicate an entire IT staff to managing that single location, it could work. Some of the time.

Distributed Cloud: At the Edge of Something Great

To address this challenge, a new cloud paradigm has emerged: the distributed cloud. The distributed cloud brings computing resources to the network edge, closer to end-users. It combines the benefits of two worlds: the agility of the public cloud and the high performance of private infrastructure. 

A massively distributed cloud platform enables application developers to deliver modern workloads locally from a global network of data centers and local cloud providers. A heterogeneous selection of data centers and cloud providers is central to this model because, despite the size of AWS, Microsoft Azure, and the like, no single provider covers every country with equal service density. Ridge Cloud is an example of a distributed cloud platform built by federating thousands of local data centers and cloud providers. Developers can use it independently or in a multi-cloud scheme as an extension of their public cloud deployments. Applications are deployed at the resolution of a geographic region or even a metropolitan area.

Developers can deliver modern workloads locally, through managed web services, from a globally distributed data center network. They describe their required resources as they deploy their Kubernetes clusters. Heterogeneous infrastructure becomes a homogeneous cloud computing platform, which is then leveraged to support the delivery of cloud services in proximity to end-users. And as a managed Kubernetes service, the distributed platform will adjust workloads by automatically spinning up computing instances wherever needed. 
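As an illustration of the “describe your resources, deploy anywhere” flow, the sketch below pushes the same Deployment to several regional clusters by switching kubeconfig contexts with the Kubernetes Python client. The context names are hypothetical; a managed or distributed cloud platform would supply the real ones.

```python
# One manifest, many locations: deploy the same workload to several
# regional clusters. The context names are hypothetical placeholders
# for clusters provisioned by a managed/distributed Kubernetes service.
from kubernetes import client, config

REGIONS = ["edge-nyc", "edge-frankfurt", "edge-singapore"]

def make_deployment() -> client.V1Deployment:
    return client.V1Deployment(
        metadata=client.V1ObjectMeta(name="orders"),
        spec=client.V1DeploymentSpec(
            replicas=2,
            selector=client.V1LabelSelector(match_labels={"app": "orders"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "orders"}),
                spec=client.V1PodSpec(containers=[
                    client.V1Container(name="orders",
                                       image="registry.example.com/orders:1.0"),
                ]),
            ),
        ),
    )

for ctx in REGIONS:
    config.load_kube_config(context=ctx)  # point the client at that region's cluster
    client.AppsV1Api().create_namespaced_deployment(
        namespace="default", body=make_deployment()
    )
    print(f"deployed to {ctx}")
```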

Developers can then focus all of their attention on application development.

From Here to There: Cloud Native, Managed Kubernetes, and the Distributed Cloud

For developers, the big promise in cloud computing was the abstraction of infrastructure complexities that freed them to focus on writing great code. However, today’s containerized, microservices-based cloud-native applications are so complex that developers often find themselves spending more time dealing with infrastructure configuration than with coding. 

As the de facto standard for container orchestration, Kubernetes plays an essential role as an enabler of cloud-native application deployments, offering unprecedented flexibility in moving workloads between environments. However, the full potential of mission-critical cloud-native applications, often with strict latency or throughput requirements, cannot be realized until they can be deployed anywhere to ensure superior performance.

By offering an addition to the public cloud model, the distributed cloud model enables developers to seamlessly deploy and infinitely scale their applications anywhere, utilizing a global network of service providers instead of relying on the availability of compute resources in a specific location. Through managed Kubernetes services, even the most complex, resource-intensive applications can become cloud-native. 

It’s not just about speed and agility. For developers, cloud-nativity is about leveraging the cloud so that innovation can take place at the speed of thought, without being limited by infrastructure.  

About the author

Nir Sheffi is the CTO and co-founder of Ridge, the first distributed cloud. Ridge Cloud is a massively distributed cloud platform that enables application developers to deliver modern workloads locally from a global network of data centers and local cloud providers. Nir has twenty years of experience in hands-on R&D management, ranging from start-ups to large-scale organizations.

DISCLAIMER: Guest posts are submitted content. The views expressed in this blog are that of the author, and don’t necessarily reflect the views of Edge Industry Review (EdgeIR.com).
