Josep Martí, co-founder and CEO of NearbyComputing, spoke exclusively to EdgeIR about current trends he’s witnessing in the edge space and what the company is doing to further unleash the power of edge.
When asked about NearbyComputing’s main focus for the rest of 2023, Martí says that as edge-native use cases like edge AI become increasingly relevant to enterprises in their journey towards efficiency, competitiveness and customer service, the company plans to focus on helping these companies simplify the deployment and operation of edge applications across their locations. The aim is to seamlessly connect the edge with the cloud to access a whole new generation of applications, while keeping control over CAPEX and OPEX.
“We have noticed a progressive understanding of the edge value proposition by end customers. Enterprises definitely wish to onboard applications that require intensive compute capabilities (like video analytics, digital twins, etc.) and realize that building an edge platform, as a complement to their cloud resources, is what makes sense,” he further explains.
“But the edge is a different paradigm from the cloud, one that needs a certain degree of education. Eventually, the goal is to deploy new solutions in a shared environment, enabling all operations teams to enjoy improved availability, performance and reliability of their individual solutions.”
He goes on to note that all top hardware providers now offer a wide catalog of compact, performant edge servers that can run outside a data center and can be easily leveraged to deploy edge apps.
“The management layer has been highlighted as critical by all analysts, so software platforms like NearbyOne, which can encompass edge-to-cloud deployments and are fine-tuned and ready to deliver in a wide range of verticals and use cases, are now seen by customers as a must-have layer,” he continues.
“We perceive, in essence, that on the supply side companies can now access a comprehensive, performant and fully proven technology stack to operate edge computing, and that on the demand side interest is clearly growing: large companies are planning the onboarding of edge technologies.”
Key drivers of the edge to cloud continuum
The “edge to cloud continuum” refers to the spectrum of computing and data processing that spans from the edge devices (e.g., IoT devices, sensors) to the cloud infrastructure.
Martí notes that application developers have to deal with huge amounts of data coming from many sources.
“On most occasions, the highest efficiency comes from processing data close to the source, while relying on the cloud for long-term storage and batch activities. Up to now, unfortunately, they had to choose between on-premises and cloud infrastructure, but combining the two is a far better decision,” he continues.
“Security is also a concern for most companies, which prefer to keep their most valuable data inside the secured perimeter. Controlled latency and jitter are also a must for specific verticals. Let’s think about telco networks, where virtualized OpenRAN workloads, for instance, are highly latency-sensitive.
“Thanks to cloudification and orchestration, managing all these use cases becomes transparent and reliable and application developers can optimize their performance, imagining new and exciting solutions.”
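The split Martí describes, real-time processing next to the data source with the cloud reserved for long-term storage and batch work, can be sketched as a simple pipeline. The function names, thresholds and data below are illustrative assumptions, not part of NearbyOne’s API:

```python
from statistics import mean

# Hypothetical sketch of an edge-first pattern: filter and aggregate raw
# readings locally, forwarding only a compact summary to the cloud.

def process_at_edge(readings, alert_threshold=90.0):
    """Run latency-sensitive logic at the edge, close to the source."""
    alerts = [r for r in readings if r > alert_threshold]  # react locally
    return {
        "count": len(readings),
        "mean": round(mean(readings), 2),
        "alerts": len(alerts),
    }  # a few bytes instead of the full raw stream

def send_to_cloud(summary):
    """Placeholder for a batch upload to cloud object storage."""
    return f"uploaded {summary}"

summary = process_at_edge([71.2, 88.5, 93.1, 76.4])
print(send_to_cloud(summary))
```

Only the summary crosses the network, which is what keeps bandwidth costs and latency down in the pattern described above.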
Flexibility and automation are key
Nearby has always had an application-centric approach, by design. In the company’s view, applications rule, and the behavior of the network and infrastructure has to adapt to app needs at any time.
According to Martí: “Nearby introduces new features and capabilities into NearbyOne on a regular basis, and the main drivers of the product evolution are flexibility and automation.”
Regarding flexibility, NearbyComputing focuses on how different architectures not only coexist but also interplay, and on how connectivity and infrastructure can be used most efficiently to reduce CAPEX.
On the automation side, the company examines how complex deployments can be automated and run in parallel. NearbyComputing also monitors how the orchestrator can learn from the status and behavior of the network over time, and how decisions can be made without human intervention.
NearbyComputing recently appeared in Gartner’s Cool Vendors list, which Martí describes as “one of the most relevant awards a startup can obtain”.
“The award has a very positive impact: it is very exclusive and we are proud to have been picked. Gartner’s awards are well known in the industry, and our customers understand exactly what it means without any side explanation,” he mentions.
Markets to watch
When asked about the key markets NearbyComputing is interested in and the strategies it has in place to expand, Martí says that EMEA is the company’s natural space, but it plans to fuel expansion in the North American and APAC markets.
“We are partnering with some of the top 10 global system integrators, and through them we are accessing a growing number of opportunities in those regions,” he explains.
“We already participate in some of the most ambitious projects on edge computing in Singapore and develop projects with companies such as American Towers, WWT or Casa Systems in the US, amongst others.”
NearbyComputing’s plan includes having a local presence to support its partners and to access a wider number of opportunities.
A complete orchestration platform
A complete orchestration platform, often referred to as an orchestration system or orchestration framework, is a software solution that facilitates the automation, coordination, and management of complex workflows, processes, and tasks within an organization’s IT infrastructure or across various systems and services. These platforms are essential for optimizing and streamlining operations, improving efficiency, reducing errors, and ensuring consistency in the execution of tasks.
Martí describes NearbyOne’s internal orchestration and automation engine as “a nice achievement from our development team”.
“It has defined southbound and northbound interfaces, allowing it to connect to the widest set of technologies and to cross-domain OSS systems. We keep integrating new apps and network functions, without issues, in every project we are asked to,” he further explains.
“It allows networks to scale up by adding nodes to the same orchestration backend, or by federating with other instances of itself or with other orchestrators through the CAMARA APIs. Moreover, it allows connected deployments, so a full and complex architecture of distributed radio and core network functions can be deployed ‘as one’, with all components checking that the others linked to them are up.”
He says that this feature saves huge amounts of time and avoids a lot of potential errors that are complicated to trace and find.
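Deploying a distributed architecture “as one”, with each component waiting on the components it depends on, amounts to walking a dependency graph in topological order. A minimal sketch of that idea, in which the component names and the health check are hypothetical rather than taken from NearbyOne:

```python
from graphlib import TopologicalSorter

# Hypothetical dependency graph: each component lists what must be up first.
dependencies = {
    "core-network": [],
    "radio-du": ["core-network"],
    "radio-ru": ["radio-du"],
    "analytics-app": ["core-network"],
}

def is_healthy(component):
    """Placeholder; a real orchestrator would probe the deployed node."""
    return True

# static_order() yields each component only after all of its dependencies.
deploy_order = list(TopologicalSorter(dependencies).static_order())
for component in deploy_order:
    # Verify linked components are up before deploying the next one.
    assert all(is_healthy(dep) for dep in dependencies[component])
    print(f"deploying {component}")
```

Ordering the rollout this way is what eliminates the hard-to-trace errors that arise when a function comes up before the services it depends on.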
The orchestrator can also follow external convergence criteria, allowing for user-defined, intent-driven orchestration that meets any availability, capacity or security pre-condition at any time, according to Martí.
“All of this makes NearbyOne, in our honest opinion, the network manager’s best companion,” he adds.
Why end-to-end process orchestration matters
End-to-end process orchestration ensures that all the steps and components of a workflow are executed in a seamless and coordinated manner to achieve a specific business objective or outcome. This concept is often used in the context of business process management (BPM), workflow automation, and digital transformation efforts.
Martí concludes by suggesting IT solutions take a business approach, rather than a purely technical one.
“Enterprise operations are a set of chained and intertwined processes that deliver an output that gives sense to all of them, and that should also be the case for orchestration software. Only by encompassing all the needs connected in a specific process can real value be delivered,” he says.
“Most modern applications rely on a distributed edge-cloud compute environment interconnected by different levels of networks. Ensuring their performance on a permanent basis, in a convergent way, requires catering for a large number of interdependent actions across all those layers and domains, reacting in real time or even before any issue arises by projecting network observability data into the near future. This is the way we conceive orchestration at NearbyComputing.”
End-to-end process orchestration is crucial for modern organizations looking to streamline operations, cut costs, enhance customer experiences, and remain competitive in a dynamic business landscape. It enables better control, visibility, and adaptability in an increasingly complex and interconnected world.