Hardware roundup: edge-optimized servers, software-defined AI hardware, edge-cloud network services

SolidRun has partnered with Gyrfalcon Technology to develop an edge-optimized, ARM-based AI inference server intended to outperform GPU-based inference servers while reducing cost and power consumption, the company announced.

Dubbed Janux GS31, the scalable, modular server supports current neural network frameworks and can be configured with up to 128 Gyrfalcon Lightspeeur SPR2803 AI acceleration chips to deliver inference performance for video AI models.

“Powerful, new AI models are being brought to market every minute, and demand for AI inference solutions to deploy these AI models is growing massively,” said Dr. Atai Ziv, CEO at SolidRun, in a prepared statement. “While GPU-based inference servers have seen significant traction for cloud-based applications, there is a growing need for edge-optimized solutions that offer powerful AI inference with less latency than cloud-based solutions. Working with Gyrfalcon and utilizing their industry-proven ASICs has allowed us to create a powerful, cost-effective solution for deploying AI at the Edge that offers seamless scalability.”

The server is designed to address the issues that arise in mass AI deployments at the edge, such as energy consumption, cost effectiveness and server real estate. Janux GS31 supports ultra-low-latency decoding and video analytics of up to 128 channels of 1080p/60Hz video, making it a viable option for smart cities and infrastructure, intelligent enterprise and industrial video surveillance, and tagging photos and videos for text-based search. Gyrfalcon’s Lightspeeur 2803S Neural Accelerator chips give the server its application and energy efficiency, delivering up to 24 TOPS per watt.
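As a rough sanity check on the figures above, the per-chip numbers Gyrfalcon has published for the Lightspeeur 2803S (roughly 16.8 TOPS at about 0.7 W; these per-chip values are an assumption, not stated in the article) are consistent with the quoted 24 TOPS per watt, and suggest the aggregate headroom of a fully populated server:

```python
# Back-of-envelope check of the efficiency figures cited above.
# Assumed per-chip numbers (not from the article): ~16.8 TOPS at ~0.7 W
# per Lightspeeur 2803S, matching the quoted 24 TOPS per watt.
TOPS_PER_CHIP = 16.8    # assumed peak throughput per chip
WATTS_PER_CHIP = 0.7    # assumed power draw per chip
MAX_CHIPS = 128         # maximum configuration per the article

efficiency = TOPS_PER_CHIP / WATTS_PER_CHIP   # TOPS per watt
total_tops = TOPS_PER_CHIP * MAX_CHIPS        # aggregate peak throughput
total_watts = WATTS_PER_CHIP * MAX_CHIPS      # accelerator power budget

print(f"{efficiency:.0f} TOPS/W, {total_tops:.0f} TOPS total, ~{total_watts:.0f} W")
```

Under those assumptions, a maxed-out Janux GS31 would offer on the order of 2,150 TOPS of accelerator throughput for roughly 90 W of accelerator power, which illustrates why the design targets edge constraints like power and rack space.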

SambaNova Systems

SambaNova Systems, a startup founded in 2017 in Palo Alto, California, has raised $250 million in series C funding to further develop software-defined AI hardware.

The startup was founded by Rodrigo Liang and Stanford professors Kunle Olukotun and Chris Ré. Olukotun was recently recognized with the IEEE Computer Society’s Harry H. Goode Memorial Award and leads the Stanford Hydra Chip Multiprocessor (CMP) research project. Ré is an associate professor in the Department of Computer Science at Stanford University’s InfoLab and a recipient of a MacArthur Fellowship, popularly known as the “genius grant.”

“Raising $250 million in this funding round with support from new and existing investors puts us in a unique category of capitalization,” said Liang, a veteran of Sun Microsystems and Oracle, in a prepared statement. “This enables us to further extend our market leadership in enterprise computing.”

SambaNova has not disclosed who its customers are or given details about its products but, according to a report in VentureBeat, the company is combining algorithmic optimizations with custom board-based hardware and leveraging AI processing research funded by the Defense Advanced Research Projects Agency (DARPA).

SambaNova has so far raised more than $450 million. It raised $56 million in a series A funding round in March 2018 and $150 million in a series B funding round in April 2019. The AI chip market is forecast to reach $91.18 billion by 2025.
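The cumulative figure follows directly from the disclosed rounds; a quick check (round names and amounts taken from the article):

```python
# Sum SambaNova's disclosed funding rounds, in millions of USD.
rounds = {
    "Series A (Mar 2018)": 56,
    "Series B (Apr 2019)": 150,
    "Series C": 250,
}
total = sum(rounds.values())
print(f"${total}M raised across disclosed rounds")
```

The disclosed rounds sum to $456 million, consistent with the article’s “more than $450 million.”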

InsidePacket

InsidePacket has chosen SONiC, Microsoft’s Software for Open Networking in the Cloud, to deliver edge-cloud services to cloud providers and data centers, boosting performance and reducing the network latency introduced by siloed appliances via a Broadcom-based white-box switch, the company announced.

InsidePacket is collaborating with a number of companies, including Microsoft, Baidu and Broadcom, which have together completed the SAI (Switch Abstraction Interface) API specifications as part of the effort to add load-balancer services to SONiC. SONiC is a modular, microservices-based open-source network operating system that manages network infrastructure, switching and routing operations.

“InsidePacket is proud to collaborate with Microsoft, Baidu and the OCP community in bringing high scale edge cloud network services over a Jericho 2 white box to the edge cloud,” said Eli Karpilovski, CEO and Founder of InsidePacket, in a prepared statement. “Through the integration with SONiC, InsidePacket customers can now leverage the high-quality product and incredible work already accomplished by the Open Networking community, along with the ipOS technology designed to address the new architectures for the cloud.”

InsidePacket’s ipOS solution enables a 10 to 40x improvement in productivity, performance, cost and scale compared with typical current appliances, the company claims. Through its integration with SONiC, InsidePacket delivers a programmable, edge-accelerated platform built on a programmable switch ASIC and designed to roll out middlebox services at high performance and ultra-low latency. Middlebox services are any network services other than standard IP routing, including WAN optimization, firewalls, application gateways, CDNs and other proxy-based functions.

The InsidePacket load balancer and DDoS mitigation are the first ipOS network services offered as containers running on top of the latest official SONiC release, leveraging the full capacity of a Jericho 2-based switch and achieving more than a 40x performance improvement over standard appliances at a fraction of the cost, the company claims.
