One Stop Systems follows up record revenue in 2019 with addition to “AI on the Fly” line

Custom server and storage vendor One Stop Systems Inc. is having a banner year selling high-performance systems to enterprise and military customers, and 2020 might follow suit as the company looks to provide more compute power for artificial intelligence use cases at the device edge.

One Stop Systems makes systems and platforms for original-equipment manufacturers who sell mission-critical edge systems to scientists, engineers and others. Among the hardware it makes are custom servers, compute accelerators and flash storage arrays, many of which use high performance I/O systems and pack in large numbers of GPU chips.

The goal: enable what the company calls “AI on the Fly,” which calls for moving the processing of AI algorithms closer to where the data resides. The company argues that sensors (radar, LIDAR, sonar and the like), along with video feeds, can generate hundreds of gigabytes of data per second. In military applications, for example, where real-time intelligence is needed, shipping that data back to a central cloud is untenable. Instead, specialized, ruggedized systems need to perform compute on nearby data stored on high performance arrays of flash drives.
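As a minimal illustration of that pattern (a generic sketch, not OSS software), the snippet below reads sensor frames from a local flash mount and runs inference in place, so only compact results, rather than raw sensor data, leave the edge node. The directory path and the model are hypothetical placeholders.

```python
# Minimal sketch of "process data where it lands": read sensor frames from
# local flash storage and run inference on the edge node, shipping only the
# small results upstream. The path and the model are illustrative placeholders.
from pathlib import Path
import numpy as np

SENSOR_DIR = Path("/mnt/flash_array/frames")   # hypothetical local flash/NVMe mount

def load_frame(path: Path) -> np.ndarray:
    # Each frame is assumed to be a raw float32 dump from a sensor.
    return np.fromfile(path, dtype=np.float32)

def run_model(frame: np.ndarray) -> float:
    # Placeholder for a GPU-accelerated model; here, a trivial statistic.
    return float(frame.mean())

def process_locally():
    results = []
    for path in sorted(SENSOR_DIR.glob("*.bin")):
        score = run_model(load_frame(path))
        # Only this compact record leaves the node, not the raw frame.
        results.append({"frame": path.name, "score": score})
    return results

if __name__ == "__main__":
    for record in process_locally():
        print(record)
```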

High-volume server vendors rarely customize their systems for such specific requirements, which is where a vendor like OSS steps in. The company has leveraged its expertise in designing GPU-centric systems and is now offering a new addition to its AI on the Fly product line: an OSS PCIe 4.0 value expansion system incorporating the newest NVIDIA V100S Tensor Core graphics processing unit. (In machine learning, “value expansion” refers to using short rollouts from a learned model to improve the value estimates used by model-free reinforcement learning, helping systems train more quickly.)
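As a rough illustration of the reinforcement-learning technique that parenthetical refers to (and not of the OSS hardware itself), the sketch below computes an H-step value-expansion target: a learned dynamics model rolls the state forward a few steps, and a model-free value function bootstraps the tail. All functions and parameters here are hypothetical placeholders.

```python
# Illustrative sketch of an H-step "value expansion" target (model-based value
# expansion). model_step, reward_fn and value_fn stand in for a learned
# dynamics model, a reward model and a model-free critic, respectively.
def value_expansion_target(state, model_step, reward_fn, value_fn,
                           horizon=3, gamma=0.99):
    """Return sum_{i<H} gamma**i * r_i + gamma**H * V(s_H)."""
    target, discount, s = 0.0, 1.0, state
    for _ in range(horizon):
        next_s = model_step(s)                 # roll forward with the learned model
        target += discount * reward_fn(s, next_s)
        discount *= gamma
        s = next_s
    return target + discount * value_fn(s)     # bootstrap with the model-free critic
```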

OSS says the system adds computing strength to its Gen 3 and Gen 4 servers through two OSS PCIe x16 Gen 4 links, which support 512 Gbps of aggregate bandwidth to the GPU. In this case, the company uses the NVIDIA V100S Tensor Core GPU, which brings NVIDIA's CUDA cores (for parallel computing) and Tensor Cores (processing cores focused on the matrix operations used in deep learning algorithms) together in a unified architecture to enable mixed-precision computing.
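For context on the bandwidth figure: a PCIe Gen 4 x16 link runs at 16 GT/s per lane, or roughly 256 Gbps per direction, so two such links account for the roughly 512 Gbps of aggregate GPU bandwidth cited above. And as a rough illustration of mixed-precision computing on Tensor Cores (a generic sketch, not OSS or NVIDIA sample code), the snippet below uses PyTorch's automatic mixed precision: FP16 matrix math runs on Tensor Cores inside the autocast region while FP32 weights and loss scaling preserve accuracy. The model and data are synthetic placeholders.

```python
# Illustrative mixed-precision training loop using PyTorch automatic mixed
# precision (AMP). On GPUs with Tensor Cores (such as the V100S mentioned
# above), FP16 matmuls inside the autocast region are routed to Tensor Cores.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

model = torch.nn.Linear(1024, 1024).to(device)      # stand-in model
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))

data = torch.randn(64, 1024, device=device)         # synthetic batch
target = torch.randn(64, 1024, device=device)

for _ in range(10):
    optimizer.zero_grad()
    # Mixed precision: matmuls run in FP16, reductions stay in FP32.
    with torch.cuda.amp.autocast(enabled=(device == "cuda")):
        loss = torch.nn.functional.mse_loss(model(data), target)
    scaler.scale(loss).backward()   # scale the loss to avoid FP16 underflow
    scaler.step(optimizer)
    scaler.update()
```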

It was, indeed, a good year for One Stop Systems.

Company executives said last week in an earnings preview statement that they booked record revenue of $58.3 million for the year ended Dec. 31. That was a 57 percent gain over fiscal year 2018, and $300,000 more than earlier company guidance had indicated.

Fourth-quarter revenue increased 28 percent, to $18.4 million, compared to the same period a year ago.

Several factors played into the performance, including new revenue from corporate acquisitions and new design wins in both the enterprise and military market verticals, according to the company.

Analysis

OSS is a small vendor specializing in high performance computing systems. As other chip and server hardware companies look to tap into demand for edge compute systems capable of running AI workloads, they can gauge market demand and revenue by watching publicly traded companies like OSS. OSS' recent design wins in the enterprise and military markets signal that AI is indeed driving demand for edge compute in demanding use cases such as battlefield intelligence. Meanwhile, face and voice recognition in industrial and other security settings is driving demand for ruggedized, GPU-accelerated servers where somewhat less compute power is needed and price is a bigger consideration for buyers.

Jim Davis, Principal Analyst, Edge Research Group
