Avassa aids cloud-to-edge architecture with application lifecycle management, monitoring


Edge computing is rapidly becoming a staple of the infrastructure that applications consume. As developers work to build a coherent cloud-to-edge strategy, however, the dynamic and distributed nature of edge deployments makes application management a challenge. Avassa aims to simplify application lifecycle management and monitoring with an easy-to-use solution that manages applications across distributed on-premises edge environments.

Part of making edge applications easier to deploy involves integrations and partnerships with other companies in the edge ecosystem. Avassa has been busy on this front, announcing partnerships with companies such as Red Hat, a leading provider of enterprise open source software solutions, and Scale Computing, a provider of edge computing hardware systems.

EdgeIR spoke with Carl Moberg, CTO of Avassa, about the evolution of enterprise computing and the emergence of a cloud-to-edge topology for enterprise architecture. The following was edited for brevity and clarity.

EdgeIR: Can you tell us about how and why Avassa was founded?

Carl Moberg: We could increasingly see applications and infrastructure being placed outside of the central cloud and closer to the source of data, or at the edge. Meanwhile, we also saw that the focus for many enterprises was moving away from infrastructure and into the application domain, without any comfortable tooling for managing the lifecycle of those applications. There are interesting challenges in managing applications outside of the (central) cloud, so we set out to start a company focusing on managing applications in edge environments.

EdgeIR: Many enterprises find it challenging to manage applications across a widely distributed infrastructure. What is needed to efficiently operate applications at the edge and at scale?

CM: A few computers, but in very many locations, and where the location matters — that’s our hard-and-fast definition of the edge. It drives interesting requirements that make the edge very different from the centralized cloud.

The number one thing is the profound impact of the fact that location matters. Placement and scheduling kind of explode [with edge], and there’s some deeper thinking that needs to be done around the fact that not all applications are going to run in all locations at all times: knowing which of my applications should be running where, and maybe under which circumstances. That is in stark contrast to having a single cluster, for example, which many people have in the cloud, or maybe two clusters where there’s disaster recovery.
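To make that placement problem concrete, here is a minimal sketch of label-based placement: sites carry labels, applications declare constraints, and the scheduler computes which applications run where. The labels, names, and matching rules are illustrative assumptions, not Avassa’s actual API.

```python
# Hypothetical label-based placement: which application runs at which site?

sites = {
    "store-014": {"country": "se", "has-gpu": False},
    "store-112": {"country": "de", "has-gpu": True},
    "factory-3": {"country": "de", "has-gpu": True},
}

# Each application lists the label values it requires of a site.
apps = {
    "pos-frontend": {"country": {"se", "de"}},
    "vision-inference": {"country": {"de"}, "has-gpu": {True}},
}

def placements(apps, sites):
    """For each app, return the set of sites whose labels satisfy it."""
    return {
        app: {
            name for name, labels in sites.items()
            if all(labels.get(key) in allowed for key, allowed in required.items())
        }
        for app, required in apps.items()
    }

for app, where in placements(apps, sites).items():
    print(f"{app}: {sorted(where)}")
```

Even in this toy form, the answer to "where does this run?" becomes a computed set per application rather than a single cluster, which is exactly the explosion in placement thinking Moberg describes.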

The second thing is about operations, including monitoring and observability. There’s a big difference between monitoring and observing 20 separate containers, or 20 applications, in a single cluster and doing the same for an application that has 200 or 1,000 replicas across as many locations. Trying to understand what healthy means at that scale is just a different way of thinking about things…. I think those two things should be at the heart of any solution, and at the heart of the thinking of anyone pursuing that kind of edge infrastructure.
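As a rough illustration of that shift in monitoring, the sketch below aggregates hundreds of per-site replica states into one fleet-level health summary instead of inspecting each container individually. The states and the 5% budget are made up for the example.

```python
from collections import Counter

# Hypothetical per-site replica states for one application (200 sites).
replica_states = {
    f"site-{i:03d}": ("running" if i % 9 else "degraded") for i in range(200)
}

def fleet_health(states, degraded_budget=0.05):
    """Call the fleet healthy if no more than 5% of replicas are unhealthy."""
    counts = Counter(states.values())
    total = sum(counts.values())
    unhealthy = total - counts.get("running", 0)
    status = "healthy" if unhealthy / total <= degraded_budget else "attention"
    return status, dict(counts)

print(fleet_health(replica_states))
# ('attention', {'degraded': 23, 'running': 177})
```

"Healthy" here is a property of the fleet, a budget across locations, rather than a per-container check.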

EdgeIR: What approach should engineers take when they are managing applications that have code in the central cloud but also components running at the edge?

CM: Users typically run some components of their applications at the edge and have some sort of corresponding functionality in the centralized cloud. A good example is a point-of-sale application for retail, which communicates with a centralized database and a centralized set of components for loyalty programs and other things, but also has components running at the edge for offline capability and fast response times. And of course, every kind of AI or machine learning inference application that you run on industrial shop floors does a lot of heavy lifting with the inference at the edge, but always transacts the result to a centralized environment.
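A minimal sketch of that split, using the point-of-sale example: the edge component completes every sale locally and opportunistically syncs to the central backend when the link is up. The function names and the simulated uplink are invented for illustration.

```python
import queue
import random

outbox: queue.Queue = queue.Queue()       # local buffer; survives link loss

def post_to_central(txn: dict) -> bool:
    """Hypothetical upload to the central loyalty/reporting components."""
    return random.random() > 0.3          # simulate an intermittent uplink

def record_sale(txn: dict) -> None:
    outbox.put(txn)                       # the sale always completes locally

def sync_loop() -> None:
    """Drain the local buffer whenever connectivity returns."""
    while not outbox.empty():
        txn = outbox.get()
        if not post_to_central(txn):
            outbox.put(txn)               # still offline: retry next cycle
            break

record_sale({"sku": "A1", "amount": 12.50})
sync_loop()
```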

There’s an interesting emerging application topology that spans the edge and the centralized cloud; as features grow, you should think about both sets of components as a coherent whole. Don’t look at them as two separate domains with operational or organizational firewalls between them. Make sure that you give a team operational responsibility for the lifecycle of the applications that tie together the edge-native and the cloud-native.

EdgeIR: How does that tie into the platform engineering approach? Has there been enough consideration of how platform engineering works with edge?

CM: Platform engineering is a perfect starting point for a conversation about this. At the heart of what platform teams do is provide the infrastructure that is easily consumed by people or teams with application mindsets.

How can platform teams organize themselves around product thinking so that they can provide useful features and a useful environment for application teams? That ties back into this [separation] of roles, responsibilities and core challenges. I think of the edge as just another substrate that should provide the same ergonomic feel that the cloud has managed to put together over the last couple of years.

One of the emerging discussions in platform engineering is the one about IDPs, or internal (or integrated) developer platforms, which at their heart are a way of providing a portal-like experience, both for onboarding and for features along the lifecycle of applications, aimed at giving application teams a comfortable environment. I think that is the start of an architecture to provide the kind of two-layer topology management that I talked about. This might just be the orchestrator of orchestrators that we’ve been thinking about for the last couple of years, tying together the centralized components with the edge components.
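One way to picture that orchestrator-of-orchestrators idea is a two-layer loop: a central layer owns the fleet-wide desired state, and a local orchestrator at each site reconciles toward it. The sketch below is a toy model; the class names and reconcile logic are invented.

```python
class SiteOrchestrator:
    """Local layer: reconciles one site toward its desired set of apps."""
    def __init__(self, name: str):
        self.name, self.running = name, set()

    def reconcile(self, desired: set) -> None:
        for app in desired - self.running:
            self.running.add(app)         # start anything missing
        for app in self.running - desired:
            self.running.discard(app)     # stop anything no longer wanted

class CentralOrchestrator:
    """Central layer: holds per-site desired state and delegates to sites."""
    def __init__(self, sites: list):
        self.sites = sites

    def rollout(self, desired_by_site: dict) -> None:
        for site in self.sites:
            site.reconcile(desired_by_site.get(site.name, set()))

sites = [SiteOrchestrator("store-1"), SiteOrchestrator("store-2")]
CentralOrchestrator(sites).rollout(
    {"store-1": {"pos", "signage"}, "store-2": {"pos"}}
)
print({s.name: s.running for s in sites})
```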

EdgeIR: What role does observability play in the management of nonstop operations?

CM: One of the things that Kubernetes brought to the forefront was the commoditization of what used to be very complicated clustering mechanisms. The commoditization of things like the Raft algorithm, which is at the heart of [etcd, the datastore underpinning] Kubernetes, is key to understanding how we can provide nonstop operation at the cluster level. The ability to react to failing applications with different strategies: we try to restart the application, and if it doesn’t come up, try to reschedule it on another node if we have the resources. That survivability, based on commoditized clustering functions, is key. Making sure that you have enough resources and contextual information at the site to be able to do this, even if you’re offline and fully disconnected from the internet, is a formidable and very interesting task.
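The restart-then-reschedule strategy Moberg outlines can be sketched in a few lines. The node and resource model below is invented for illustration; real schedulers do this with far more nuance.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    free_cpu: int
    healthy: bool = True
    apps: list = field(default_factory=list)

    def restart(self, app: dict) -> bool:
        return self.healthy               # restart only succeeds if the node is up

    def fits(self, app: dict) -> bool:
        return self.free_cpu >= app["cpu"]

    def schedule(self, app: dict) -> None:
        self.free_cpu -= app["cpu"]
        self.apps.append(app["name"])

def recover(app: dict, node: Node, cluster: list) -> str:
    """Strategy 1: restart in place. Strategy 2: reschedule onto another node."""
    if node.restart(app):
        return f"{app['name']} restarted on {node.name}"
    for other in cluster:
        if other is not node and other.fits(app):
            other.schedule(app)
            return f"{app['name']} rescheduled on {other.name}"
    return f"{app['name']} pending: no capacity at this site"

cluster = [Node("n1", free_cpu=2, healthy=False), Node("n2", free_cpu=4)]
print(recover({"name": "pos", "cpu": 2}, cluster[0], cluster))  # rescheduled on n2
```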
