Gcore unveils new inference solution at the edge, promising low latency AI experiences

Gcore, a provider of edge AI, cloud, network, and security solutions, has unveiled its latest offering, Gcore Inference at the Edge. The solution aims to deliver low-latency experiences for AI applications.

The newly introduced Gcore Inference at the Edge enables distributed deployment of pre-trained machine learning (ML) models to edge inference nodes, supporting real-time inference.
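
For illustration, here is a minimal sketch of what a real-time request to a model deployed at an edge inference node could look like over HTTPS. The endpoint URL, payload shape, and authentication header are hypothetical placeholders, not Gcore's documented API.

```python
import requests

# Hypothetical endpoint and credential -- illustrative only; the actual
# Gcore Inference at the Edge API may differ.
ENDPOINT = "https://example-model.edge-inference.example.com/v1/predict"
API_KEY = "YOUR_API_KEY"  # placeholder credential

def run_inference(inputs: list[float]) -> dict:
    """Send a single real-time inference request to an edge endpoint."""
    response = requests.post(
        ENDPOINT,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"inputs": inputs},  # assumed payload shape
        timeout=5,  # edge inference targets low, bounded latency
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    print(run_inference([0.1, 0.2, 0.3]))
```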

The solution draws on Gcore’s network of more than 180 edge nodes, interconnected through low-latency routing technology. According to the company, each high-performance node sits at the edge of the network and is equipped with NVIDIA L40S GPUs designed for AI inference tasks.

When a user initiates a request, the infrastructure determines the route to the nearest available inference region, keeping response times under 30 ms, the company notes.
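
Gcore performs this routing inside its own network, but the idea can be sketched client-side: probe each candidate region and pick the one with the lowest measured latency. The region names and hostnames below are hypothetical, and a simple TCP connect time stands in for whatever metric the routing layer actually uses.

```python
import socket
import time

# Hypothetical region endpoints -- illustrative only.
REGIONS = {
    "eu-west": ("inference-eu.example.com", 443),
    "us-east": ("inference-us.example.com", 443),
    "ap-south": ("inference-ap.example.com", 443),
}

def probe_latency(host: str, port: int) -> float:
    """Measure one TCP connect round trip to an endpoint, in seconds."""
    start = time.monotonic()
    with socket.create_connection((host, port), timeout=2):
        pass
    return time.monotonic() - start

def nearest_region() -> str:
    """Pick the region with the lowest measured connect latency."""
    latencies = {
        name: probe_latency(host, port)
        for name, (host, port) in REGIONS.items()
    }
    return min(latencies, key=latencies.get)

if __name__ == "__main__":
    print(f"Routing request to: {nearest_region()}")
```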

Andre Reitenbach, CEO of Gcore, emphasizes that Gcore Inference at the Edge lets customers focus on training their machine learning models rather than worrying about the cost, skills, and infrastructure required to deploy them.

“At Gcore, we believe the edge is where the best performance and end-user experiences are achieved, and that is why we are continuously innovating to ensure every customer receives unparalleled scale and performance. Gcore inference at the edge delivers all the power with none of the headache, providing a modern, effective, and efficient AI inference experience,” adds Reitenbach.

The offering is aimed at industries including automotive, manufacturing, retail, and technology, providing them with cost-effective, scalable, and secure options for deploying AI models.

One of the key features of Gcore Inference at the Edge is its support for a range of ML models, including both foundation models and custom models.

Gcore Inference at the Edge includes built-in DDoS protection; adherence to data privacy and security standards such as GDPR, PCI DSS, and ISO/IEC 27001; model autoscaling to handle load spikes; and scalable cloud storage.

Read more:

Gcore launches new serverless edge computing product for developers to deploy applications

Gcore, Ampere unite to offer high-demand cloud services
