OmniML secures $10M for edge-optimized ML model development


OmniML, a startup developing smaller and faster machine learning models, today announced $10 million in seed funding to accelerate the use of AI on edge devices. GGV Capital led the round, with additional investment from Qualcomm Ventures, Foothill Ventures, and several other venture capital firms.

Founded by Dr. Song Han, an MIT EECS professor and serial entrepreneur; Dr. Di Wu, a former Facebook engineer; and Dr. Huizi Mao, co-inventor of the “deep compression” technique developed at Stanford University, OmniML addresses a fundamental mismatch between AI applications and edge hardware, aiming to make AI accessible to everyone, not just data scientists and developers.

OmniML builds smaller, scalable machine learning (ML) models that let edge devices perform AI inference at levels that today are possible only in data centers and cloud environments. The company’s approach has already achieved orders-of-magnitude improvements on many major ML tasks running on edge devices.

“OmniML’s leading Neural Architecture Search-based platform has the potential to disrupt AI model optimization by creating new models that are efficient to begin with, rather than just compressing models,” says Carlos Kokron, Vice President, Qualcomm Technologies Inc., and Managing Director, Qualcomm Ventures Americas. “Their solution offers enterprise customers the ability to build the best AI models for target hardware, resulting in significant time and cost savings, as well as improved accuracy. We are excited to invest in OmniML to help make edge AI ubiquitous.”

OmniML’s breakthrough will accelerate the deployment of AI on the edge – particularly computer vision – by alleviating a costly pain point: the heavy demands AI applications place on edge hardware. Developers will no longer have to optimize ML models manually for specific chips and devices, a fundamental change that will result in faster deployment of high-performance, hardware-aware AI that can run anywhere.
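The hardware-aware search described here can be illustrated with a minimal sketch in PyTorch: a random search over a small, made-up space of convolutional architectures that keeps only candidates whose parameter count stays under a hypothetical per-device budget. The search space, the budget (PARAM_BUDGET), and the selection rule are illustrative assumptions; this is not OmniML’s platform or algorithm, which the article does not detail.

```python
# Minimal sketch of hardware-aware architecture search (illustrative only;
# NOT OmniML's method). Randomly sample small CNN configurations, discard any
# whose parameter count exceeds a hypothetical edge budget, and keep the
# largest candidate that still fits.
import random
import torch
import torch.nn as nn

PARAM_BUDGET = 200_000  # hypothetical per-device parameter budget


def build_candidate(widths, kernel):
    """Build a small CNN from a sampled configuration."""
    layers, in_ch = [], 3
    for w in widths:
        layers += [nn.Conv2d(in_ch, w, kernel, padding=kernel // 2),
                   nn.BatchNorm2d(w),
                   nn.ReLU(inplace=True)]
        in_ch = w
    layers += [nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(in_ch, 10)]
    return nn.Sequential(*layers)


def param_count(model):
    return sum(p.numel() for p in model.parameters())


best = None
for _ in range(50):  # random search over a tiny, hypothetical space
    widths = [random.choice([8, 16, 32, 64]) for _ in range(random.randint(2, 4))]
    kernel = random.choice([3, 5])
    model = build_candidate(widths, kernel)
    n = param_count(model)
    # Under the budget, prefer the largest model that still fits, using
    # capacity as a crude stand-in for accuracy.
    if n <= PARAM_BUDGET and (best is None or n > best[0]):
        best = (n, widths, kernel, model)

if best:
    n, widths, kernel, model = best
    print(f"selected config: widths={widths} kernel={kernel} params={n}")
    # Sanity check: the selected model runs on a single 64x64 RGB input.
    print(model(torch.randn(1, 3, 64, 64)).shape)
```

A production hardware-aware search would score candidates on measured latency and accuracy on the target chip rather than raw parameter count, but the basic structure (sample, filter by a hardware constraint, select) is the same.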

OmniML is working with customers in sectors such as smart cameras and autonomous driving to build advanced, AI-enabled computer vision for improved security and real-time situational awareness. The technology is broadly applicable, though; it can, for instance, improve the retail customer experience and support safety and quality-control detection in precision manufacturing.

“AI is so big today that edge devices aren’t equipped to handle its computational power,” adds OmniML Co-Founder and CEO Di Wu, PhD. “That doesn’t have to be the case. Our ML model compression addresses the gap between AI applications and edge devices, increasing the devices’ potential and allowing for hardware-aware AI that is faster, more accurate, cost-effective, and easy to implement for anyone, on diverse hardware platforms.”
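The kind of model compression Wu refers to can be sketched, in very rough form, with standard PyTorch tooling: post-training dynamic quantization of a small model, comparing serialized size before and after. The toy model and layer choices below are hypothetical; this is a generic illustration of compression for edge deployment, not OmniML’s pipeline.

```python
# Generic model-compression sketch (illustrative; not OmniML's method):
# post-training dynamic quantization of a small PyTorch model, comparing
# serialized size before and after.
import io
import torch
import torch.nn as nn

model = nn.Sequential(          # hypothetical stand-in for an edge model
    nn.Linear(256, 512), nn.ReLU(),
    nn.Linear(512, 512), nn.ReLU(),
    nn.Linear(512, 10),
)

# Quantize Linear weights to int8; activations are quantized on the fly.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)


def serialized_size(m):
    """Size in bytes of the model's saved state_dict."""
    buf = io.BytesIO()
    torch.save(m.state_dict(), buf)
    return buf.getbuffer().nbytes


print(f"fp32 model: {serialized_size(model) / 1e6:.2f} MB")
print(f"int8 model: {serialized_size(quantized) / 1e6:.2f} MB")

# Both models still produce outputs of the same shape.
x = torch.randn(1, 256)
print(model(x).shape, quantized(x).shape)
```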

OmniML’s neural architecture search algorithm has been integrated into Amazon’s AutoGluon open-source AutoML library and Meta’s PyTorch open-source deep learning framework, and has won multiple awards and recognitions.
