Vultr unveils GPU Stack and Container Registry for AI application lifecycle management

Cloud computing platform Vultr has launched its Vultr GPU Stack and Container Registry for enterprises and digital startups to build, test and operationalize AI models at scale.

The company says the GPU Stack supports instant provisioning of the full array of NVIDIA GPUs, while the new Vultr Container Registry makes pre-trained NVIDIA NGC AI models available globally for on-demand provisioning, development, training, tuning and inference.
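
As a rough illustration of how a team might consume such a registry, the sketch below pulls a pre-trained model container with the Docker SDK for Python. The registry hostname, image path and credentials are placeholders rather than Vultr's or NVIDIA's real endpoints; the only assumption is that the registry exposes a standard Docker/OCI-compatible API.

    # Rough sketch of pulling a pre-trained model container from a container registry
    # using the Docker SDK for Python. The registry hostname, image path and credentials
    # are placeholders, not Vultr's or NVIDIA's actual endpoints; the only assumption is
    # that the registry speaks the standard Docker/OCI registry protocol.
    import docker

    REGISTRY = "registry.example.com"          # hypothetical registry hostname
    IMAGE = f"{REGISTRY}/nvidia/pytorch"       # hypothetical path to a pre-trained NGC image
    TAG = "latest"

    client = docker.from_env()

    # Authenticate against the registry (credentials are placeholders).
    client.login(username="my-user", password="my-token", registry=REGISTRY)

    # Pull the image so it can be used for development, training, tuning or inference.
    image = client.images.pull(IMAGE, tag=TAG)
    print("Pulled:", image.tags)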

Available across Vultr’s 32 cloud data center locations spanning six continents, the new Vultr GPU Stack and Container Registry aim to accelerate collaboration and the development and deployment of AI and machine learning (ML) models.

When asked about the launch, J.J. Kardwell, CEO of Vultr’s parent company, Constant, says: “Vultr is committed to enabling innovation ecosystems around the world – from Silicon Valley and Miami to São Paulo, Tel Aviv, Tokyo, Singapore, London, Amsterdam and beyond – providing instant access to high-performance cloud GPU and cloud computing resources to accelerate AI and cloud-native innovation.

“By working closely with NVIDIA and our growing ecosystem of technology partners, we are removing access barriers to the latest technologies, and offering enterprises the first composable, full-stack solution for end-to-end AI application lifecycle management. This enables data science, MLOps and engineering teams to build on a globally-distributed basis, without worrying about security, latency, local compliance, or data sovereignty requirements.”

“The Vultr GPU Stack and Container Registry provide organizations with instant access to the entire library of pre-trained LLMs on the NVIDIA NGC catalog, so they can accelerate their AI initiatives and provision and scale NVIDIA cloud GPU instances from anywhere,” said Dave Salvator, director of accelerated computing products at NVIDIA.

The Vultr GPU Stack uses the full array of NVIDIA GPUs, pre-configured with the NVIDIA CUDA Toolkit, NVIDIA cuDNN and NVIDIA drivers, for immediate deployment.
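
On a provisioned instance, a short check like the one below can confirm that the pre-installed driver, CUDA Toolkit and cuDNN are visible to a deep learning framework. It assumes PyTorch has been installed on top of the stack, which the article does not state.

    # Minimal sanity check on a freshly provisioned GPU Stack instance.
    # Assumes PyTorch has been installed on top of the pre-configured stack
    # (the article only says the CUDA Toolkit, cuDNN and drivers ship pre-installed).
    import torch

    if torch.cuda.is_available():
        device = torch.device("cuda")
        print("GPU:", torch.cuda.get_device_name(0))
        print("cuDNN enabled:", torch.backends.cudnn.enabled)
        # Small matrix multiply on the GPU to confirm the stack works end to end.
        a = torch.randn(1024, 1024, device=device)
        b = torch.randn(1024, 1024, device=device)
        print("Matmul OK, result shape:", tuple((a @ b).shape))
    else:
        print("No CUDA device detected; check the driver and CUDA Toolkit installation.")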

The company says the solution aims to remove the complexity of configuring GPUs, calibrating them to the specific model requirements of each application, and integrating them with the AI model accelerators of choice.
