Startup debuts Kubernetes-ready platform to ease AI resource allocation headaches

Runai Labs Ltd. has announced the general availability of its deep learning virtualization platform, which could eventually prove particularly useful in a variety of edge environments.

Company executives say their elastic infrastructure management product will eventually become a full virtualization layer for deep neural networks.

The startup, which goes by the name Run:AI, says CIOs can use the platform to see how data scientists are using resources and to manage those resources on the fly. Today, most CIOs are forced to view GPUs like an ocean shore: wide and dry at low tide (slack processing demand), narrow and frenzied at high tide.

It works this way: teams of data scientists are typically assigned a set of GPUs based on a statistical analysis of their past GPU usage. The problem with allocating to the average is that teams are too often starved of computing power at peak demand, while at other times their assigned (and expensive) resources stand idle.
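To see why sizing to the average falls short, consider a toy calculation. The demand numbers below are invented purely for illustration; the pattern of starvation at peak and idle hardware off-peak is the point:

```python
# Hypothetical daily GPU demand for one team over a week (made-up numbers).
daily_demand = [2, 2, 3, 16, 16, 1, 2]

# Static allocation sized to average usage: 42 GPU-days / 7 days = 6 GPUs.
static_allocation = round(sum(daily_demand) / len(daily_demand))

for day, demand in enumerate(daily_demand, start=1):
    starved = max(0, demand - static_allocation)  # unmet demand at peak
    idle = max(0, static_allocation - demand)     # expensive GPUs sitting unused
    print(f"Day {day}: demand={demand:2d}  starved={starved:2d}  idle={idle}")
```

On the two peak days the team is short 10 GPUs; on every other day several of its assigned GPUs sit idle.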

Run:AI enables CIOs to create and manage artificial intelligence infrastructure for data-science teams on the fly, optimizing hardware use and shortening development time. To accomplish this, the company’s platform gives CIOs visibility into GPU use and finer-grained control over resource allocation.

Sets of GPUs can be assigned using Run:AI software, and the number of assigned GPUs can be automatically increased (assuming spare capacity is available) or winnowed and reallocated to other teams, all based on pre-set priorities.
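That grow-and-winnow behavior can be sketched as a simple priority-ordered scheduler. To be clear, this is not Run:AI’s actual algorithm, and every name below is invented; it is only a minimal illustration of allocation driven by pre-set priorities:

```python
from dataclasses import dataclass

@dataclass
class Team:
    name: str
    priority: int  # higher priority wins when GPUs are scarce
    demand: int    # GPUs the team currently wants
    assigned: int = 0

def reallocate(teams, total_gpus):
    """Hand out GPUs in priority order; lower-priority teams get what is left."""
    free = total_gpus
    for team in sorted(teams, key=lambda t: t.priority, reverse=True):
        team.assigned = min(team.demand, free)
        free -= team.assigned

teams = [Team("vision", priority=2, demand=6),
         Team("nlp", priority=1, demand=4)]
reallocate(teams, total_gpus=8)
for t in teams:
    print(f"{t.name}: {t.assigned} GPUs")  # vision: 6 GPUs, nlp: 2 GPUs
```

When the higher-priority team’s demand drops, rerunning the same reallocation hands the freed GPUs back to the lower-priority team.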

Run:AI executives made their scheduler a plug-in for Kubernetes, open-source container-orchestration software that automates application deployment, scaling and management.
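Because the scheduler plugs into Kubernetes rather than replacing it, a training job can opt in through an ordinary pod spec. The sketch below uses the official Kubernetes Python client; the scheduler name, container image and namespace are placeholders, not Run:AI’s actual identifiers:

```python
from kubernetes import client, config

config.load_kube_config()  # reads your local kubeconfig

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="train-job"),
    spec=client.V1PodSpec(
        scheduler_name="custom-gpu-scheduler",  # placeholder plug-in scheduler name
        containers=[
            client.V1Container(
                name="trainer",
                image="my-registry/trainer:latest",  # placeholder image
                resources=client.V1ResourceRequirements(
                    # standard NVIDIA device-plugin resource name
                    limits={"nvidia.com/gpu": "2"}
                ),
            )
        ],
    ),
)
client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```

Pods that name the plug-in scheduler are placed by it; everything else on the cluster keeps using the default Kubernetes scheduler.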

The company raised $13M in 2019 to develop its software.

Analysis

Run:AI, though not specifically pitched as an edge AI or edge analytics company, is an interesting AI startup from a number of angles. For one, it is going against the grain of AI performance optimization: a number of startups are working on processors designed specifically to run AI and ML workloads, and they are either receiving hefty venture funding or being acquired by big tech companies.

Acquisitions so far this year include Apple’s purchase of Xnor.ai for an undisclosed sum, preceded in 2019 by Intel’s $2B acquisition of Habana Labs.

Run:AI’s approach of optimizing and orchestrating workloads across (eventually) a wide range of existing CPUs and GPUs and, later, AI-specific chips has the advantage of requiring less venture funding while potentially proving very useful to the enterprise market going forward.

While most AI model training is done in centralized data centers (often in private facilities, but sometimes in public cloud services), AI workloads for some use cases (manufacturing process optimization, for example) will benefit from being run at the device and/or plant edge. With Run:AI software handling resource optimization, companies deploying AI could in theory take Kubernetes containers and run algorithms across underutilized edge gateways at their manufacturing facilities.
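One hedged sketch of those mechanics, again via the Kubernetes Python client: label a spare gateway, then pin an inference pod to gateway-class nodes. The node name, label and image here are hypothetical:

```python
from kubernetes import client, config

config.load_kube_config()
api = client.CoreV1Api()

# Label an underutilized gateway so schedulers can find it (hypothetical label).
api.patch_node("gateway-01", {"metadata": {"labels": {"tier": "edge-gateway"}}})

# An inference pod pinned to gateway-class nodes via a node selector.
pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="edge-inference"),
    spec=client.V1PodSpec(
        node_selector={"tier": "edge-gateway"},
        containers=[client.V1Container(name="infer",
                                       image="my-registry/infer:latest")],
    ),
)
api.create_namespaced_pod(namespace="default", body=pod)
```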

Jim Davis, Principal Analyst, Edge Research Group
