Israel-based Run AI, a provider of compute orchestration for AI workloads, has announced that its Atlas platform is now certified to run the Nvidia AI Enterprise platform. The certification enables partners and customers to use Nvidia's end-to-end, cloud-native AI and data analytics software suite to streamline the production of AI models.
As more companies adopt advanced machine learning, demand for powerful AI compute has grown. GPUs are vital for running AI applications, and firms increasingly rely on software to get the most out of their AI infrastructure and bring models to market faster.
“The certification of Run AI Atlas for NVIDIA AI Enterprise will help data scientists run their AI workloads most efficiently,” said Omri Geller, CEO and co-founder of Run AI. “Our mission is to speed up AI and get more models into production, and NVIDIA has been working closely with us to help achieve that goal.”
Recommended reading: Nvidia Fleet Command adds remote edge AI management and multi-instance GPU
The Run AI Atlas platform is orchestration software for AI compute that helps developers consume GPUs more effectively and efficiently. Run AI can carve a physical GPU's frame-buffer memory and compute capacity into fractional, virtual GPUs, which enterprises access through containers to run containerized AI applications.
The software combines a Kubernetes-based scheduler with software-defined fractional GPU technology, letting customers allocate multiple GPUs, multiple GPU nodes, or even fractions of a single GPU. The Run AI platform operates on VMware vSphere and on bare-metal servers, and supports various Kubernetes distributions.
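To illustrate the idea, fractional GPU allocation of this kind is typically requested at the pod level in Kubernetes. The sketch below is a minimal, hypothetical example, assuming a `gpu-fraction` annotation and a `runai-scheduler` scheduler name as described in Run AI's documentation; exact keys and values should be checked against the current product docs:

```yaml
# Hypothetical pod spec requesting half of one GPU via Run AI's
# fractional GPU mechanism (annotation key assumed from Run AI docs).
apiVersion: v1
kind: Pod
metadata:
  name: model-server            # example workload name
  annotations:
    gpu-fraction: "0.5"         # ask the scheduler for half a GPU's memory/compute
spec:
  schedulerName: runai-scheduler  # hand placement to Run AI's Kubernetes scheduler
  containers:
    - name: inference
      image: nvcr.io/nvidia/tritonserver:22.07-py3  # example NVIDIA container image
```

In this model, the container sees what looks like a dedicated GPU, while the scheduler packs multiple such fractional workloads onto the same physical device to raise utilization.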
“Enterprises across industries are turning to AI to power the breakthroughs that will help improve customer service, boost sales and optimize operations,” said Justin Boitano, vice president of enterprise and edge computing at Nvidia. “Run AI’s certification for NVIDIA AI Enterprise provides customers with an integrated, cloud-native platform for deploying AI workflows with MLOps management capabilities.”
Run AI is a long-time Nvidia partner. The company previously worked with Weights & Biases and Nvidia to deliver Nvidia compute resources orchestrated by Run AI's Atlas platform, and earlier this year that collaboration produced a proof of concept enabling multi-cloud GPU flexibility for enterprises that use Nvidia GPUs in the cloud.
AI/ML | application development | edge AI | GPU | Kubernetes | model management | Nvidia | Run AI