Skipping the K8mplexity: Edge container management that delights developers


By Carl Moberg, CTO and co-founder of Avassa

Cloud computing came into the mainstream in the 2010s and brought with it a set of benefits that reshaped IT as we know it. The cloud operational model allowed enterprises to host applications in massive third-party data centers with practically unlimited resources. By leaning heavily on virtualization, it removed the need to track which applications were running on which specific server.

With this sea change came a slew of new tools centered around automating the operations of planet-scale application development and delivery. It also drove significant changes in how IT teams organize themselves and accelerated the adoption of practices like Agile and DevOps.

With this unprecedented rise in efficiency, it was inevitable that the same expectations would be applied to applications that need to run outside of the central data centers, physically close to data sources or users. A renewed take on edge computing emerged in the late 2010s with the aim of applying the efficiencies of the cloud operations model to the distributed edge.

With this ambition, we have to account for a couple of fundamental differences between central clouds and the distributed edge. First, location matters; there is a reason why a specific set of compute resources sits in a specific physical place. Second, each edge location has a limited amount of compute compared to a central cloud, owing to space and cost constraints.

So the question is: can we reuse the tooling, processes, and organization already in place for the central clouds and apply them to the edge?

Meet the team

Let’s look at how this could work in practice. Most enterprise teams of some size have two types of roles (or at least responsibilities):

  1. The application owner role — we call her Applifer Developez. Applifer plans, designs, and codes applications. She is passionate about the application layer and less interested in the details of the infrastructure. She just wants to run her applications. Applifer is focused on bringing value to the enterprise by supporting the business goals with software.
  2. The infrastructure (or platform) role — we call him Platrick McEngine. Platrick plans, designs, and configures infrastructure. He is passionate about the infrastructure layer and its moving parts. He wants to provide a robust and efficient environment for applications. His main purpose is to be an enabler for the application teams.

The relationship between the Applifers and Platricks of the world is often fraught with tension. Applifers want to focus on the application layer and just want to run their applications, while Platricks are focused on providing a robust and efficient infrastructure environment for the applications to run on.

Hassle at the Edge? Adjust abstractions!

Platrick has built a sophisticated infrastructure that takes up his day, with little room for anything else. Meanwhile, Applifer has a growing set of applications that she needs to place on the edge infrastructure provided by Platrick. And over time she also starts lining up new versions of these applications.

At the heart of this challenge is the fact that the handover between the application team and the infrastructure team is still largely manual for the edge. Applifer’s development process is well automated through the build, test, and secure phases. But the motion of replacing the old version of an application with the new one is still very much a manual, inter-team exercise. It usually takes a meeting or two in which the application team describes which sites are affected and where the new release can be found, and provides any additional context needed for a successful upgrade (such as changed resource requirements or specific steps in the upgrade process).

As the number of deployed applications and their individual release frequencies increase, Platrick has a hard time keeping up. His working day is already largely consumed by keeping the infrastructure components up to date, and with the addition of frequent, manual deployments he becomes the bottleneck to the release schedule Applifer has in mind.

Introducing automation tools and platforms with the right abstractions can remove this bottleneck. Such tools allow for more seamless collaboration between Applifer and Platrick, reducing the need for manual handovers and enabling more efficient development at the edge. Additionally, with the right abstractions in place, developers gain more control over the infrastructure layer, allowing for greater flexibility and agility in responding to changes and new requirements.

What are the right abstractions?

First, a few words about the term abstraction. Barbara Liskov is a pioneering computer scientist known for her contributions to programming languages and software engineering. Her work on abstract data types greatly influenced the field of object-oriented programming, and in 2008 she received the Turing Award in recognition of her outstanding contributions to computer science. A passage from her seminal paper Programming with Abstract Data Types reads:

“What we desire from abstraction is a mechanism that permits the expression of relevant details and the suppression of irrelevant details.”

This quote feels self-evident, as do many foundational insights in computer science and other fields of inquiry. The interesting angle here is to figure out what the right abstractions are for Applifer and Platrick, respectively.

Starting with Applifer, we already know that she produces versioned applications at an increasing pace. She has an intimate understanding of the design and implementation of these applications, including the requirements they place on the runtime environment in terms of libraries, databases, and other infrastructure services. She wants to capture these definitions and requirements in a formal and complete definition of an application. We call that an application specification, and it needs to contain enough information to satisfy the needs of an automation platform.

This means that everything needed in terms of, for example, runtime configuration, distribution and protection of sensitive data, and the right type of networking setup must be described in this artifact. Applifer also needs to be able to formally describe where, or under which circumstances (geographical, based on hardware configuration, and so on), a particular application should be started and kept alive. We call that a deployment specification.
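To make these two artifacts a little more concrete, here is a minimal sketch of what they might contain, expressed as plain Python dictionaries. The structure and the field names (image, resources, match_site_labels, and so on) are illustrative assumptions, not the schema of any particular platform:

```python
# Illustrative sketch only: field names and structure are assumptions,
# not the schema of any specific edge platform.

# Application specification: what the application is and what it needs
# from the runtime environment.
application_spec = {
    "name": "shelf-analytics",
    "version": "2.4.0",
    "services": [
        {
            "name": "inference",
            "image": "registry.example.com/shelf-analytics:2.4.0",
            "resources": {"cpu": "500m", "memory": "512Mi"},
            "env": ["CAMERA_FEED_URL"],       # runtime configuration
            "secrets": ["camera-api-token"],  # sensitive data to distribute and protect
            "network": {"expose": [8080]},    # required networking setup
        }
    ],
}

# Deployment specification: where, and under which circumstances, the
# application should be started and kept alive.
deployment_spec = {
    "application": "shelf-analytics",
    "version": "2.4.0",
    "placement": {
        "match_site_labels": {"region": "nordics", "type": "retail-store"},
        "requires": {"gpu": True},  # hardware-based placement condition
    },
    "replicas_per_site": 1,
}
```

Whether these artifacts are written as YAML, JSON, or something else matters less than the fact that they are formal, complete, and machine-readable, so an automation platform can act on them without a meeting.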

Platrick, on the other hand, needs a platform that can receive the application and deployment specifications and then manage a robust set of applications accordingly. The platform must allow him to monitor the set of application replicas that results from the application and deployment specifications, and it must allow him to drill into specific aspects of the running components in order to observe detailed behavior in problematic situations.
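As a sketch of what that could look like from Platrick’s side, the snippet below asks a hypothetical platform API for the replicas belonging to an application and flags the ones that are not running. The endpoint, URL, and response shape are assumptions made purely for illustration:

```python
# Hypothetical example: the endpoint and response format are assumptions,
# not the API of any specific edge platform.
import json
from urllib.request import urlopen

PLATFORM_API = "https://edge-platform.example.com/v1"

def list_unhealthy_replicas(application: str) -> list[dict]:
    """Return replicas of the given application that are not reported as running."""
    with urlopen(f"{PLATFORM_API}/applications/{application}/replicas") as resp:
        # Assumed response shape: [{"site": "store-017", "status": "running"}, ...]
        replicas = json.load(resp)
    return [r for r in replicas if r.get("status") != "running"]

if __name__ == "__main__":
    for replica in list_unhealthy_replicas("shelf-analytics"):
        # From here, Platrick would drill into logs, events, or resource usage at that site.
        print(f"{replica['site']}: {replica['status']}")
```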

A self-service golden path at the edge

To enable developer self-service at the edge, it is important to provide a set of tools and platforms that can be used by both Applifer and Platrick. These tools should be designed to abstract away the underlying infrastructure complexity and provide a clear separation of concerns between the two roles.

An edge platform should offer a self-service developer experience that is as comfortable for Applifers as the one they already use for the centralized cloud. It should be designed with a golden path for deploying to the edge and offer an application-centric approach that keeps manual interactions between Applifers and Platricks to a minimum. While this specific path may not be one-size-fits-all, the idea of using edge application tooling built with the value stream (Applifer) in mind certainly is.

With the right abstractions in place, developers can focus on building applications and delivering value to the business, while infrastructure teams can focus on providing a robust and efficient environment for those applications to run on.

About the author

Carl Moberg is the CTO and co-founder of Avassa. Moberg has spent many years working on automation and orchestration, starting with building customer service platforms for ISPs back when people used dial-up for their online activities. He then moved on to making multi-vendor networks programmable through model-driven architectures.

DISCLAIMER: Guest posts are submitted content. The views expressed in this post are that of the author, and don’t necessarily reflect the views of Edge Industry Review (EdgeIR.com).
