Deploying AI Models at the Edge

Nvidia Fleet Command adds remote edge AI management and multi-instance GPU

Nvidia announced several new features for its Fleet Command managed edge AI service platform, including remote management of edge locations, multi-instance GPU support for running multiple applications on a single GPU, and a collaborative space for tool integration. The company notes that these new capabilities will play an important role in meeting the growing demand for edge technology at enterprise scale.

Organizations need an efficient management platform to help them monitor and secure thousands of edge locations from a unified interface. Nvidia Fleet Command allows businesses to provision and deploy edge artificial intelligence applications across distributed environments from a single cloud-based platform.

According to the company, “deployment is just the first step in managing AI applications at the edge.” Nvidia points towards the efforts of industry leaders and technology companies to deliver an efficient and cost-effective management platform that optimizes deployed edge AI systems. With the introduction of new features to the Fleet Command platform, the company addresses problems associated with monitoring and updating the security of edge devices.

The Fleet Command management platform will provide support for multi-instance GPU (MIG), which lets users partition an Nvidia GPU into several independent instances. Organizations can leverage this feature to assign an application to each instance, allowing them to run multiple edge AI applications on the same GPU. Each instance has its own compute cores and dedicated resources, improving utilization and keeping latency low.
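As a rough sketch of what this partitioning looks like at the driver level (outside Fleet Command itself), MIG is configured on supported GPUs such as the A100 with the `nvidia-smi` tool. The profile ID used below is an example; available profiles vary by GPU, and these commands require MIG-capable hardware:

```shell
# Enable MIG mode on GPU 0 (requires a MIG-capable GPU, e.g. A100)
sudo nvidia-smi -i 0 -mig 1

# List the GPU instance profiles this GPU supports
sudo nvidia-smi mig -lgip

# Create two GPU instances from an example profile ID (9),
# each with a default compute instance (-C)
sudo nvidia-smi mig -cgi 9,9 -C

# Verify: each MIG instance now appears as a separate device
nvidia-smi -L
```

Once created, each instance shows up as an independent device that an application can be pinned to.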

Along with the performance benefits, MIG also adds security and resilience through a set of dedicated hardware resources for each instance's compute, memory, and caching. If a fault occurs in an application running in one instance, applications in other instances are unaffected and continue operating uninterrupted, reducing downtime. MIG is also designed to work efficiently with containers and virtual machines.
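To illustrate the container integration, here is a hedged sketch of a Kubernetes pod requesting a single MIG slice. It assumes the NVIDIA device plugin is installed with a MIG strategy that exposes slices as named resources; the pod name and image are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mig-demo            # hypothetical pod name
spec:
  containers:
  - name: inference
    image: my-registry/edge-inference:latest   # placeholder image
    resources:
      limits:
        nvidia.com/mig-1g.5gb: 1   # request one 1g.5gb MIG slice
```

Because the scheduler treats each MIG slice as a discrete resource, several such pods can share one physical GPU while remaining isolated from each other.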

The collaboration space in the Fleet Command platform allows enterprises to integrate third-party edge solutions for product development and deployment at edge locations. Partner solutions include Domino Data Lab, which provides an enterprise MLOps platform that allows engineers to collaboratively develop, deploy, and monitor AI models at scale. Another tool comes from Milestone Systems: AI Bridge, an application programming interface gateway that gives AI applications access to video feeds from edge cameras.

Nvidia Launchpad gives short-term access to Fleet Command for deploying and managing edge AI applications on servers through hands-on labs. Launchpad is hosted in Equinix data centers, providing the infrastructure for testing and prototyping edge applications.
