A startup called Deep Vision has emerged with an AI processor built on a novel chip design suited to edge computing applications such as smart cities and smart retail, where low energy consumption is a key requirement.
While other chips coming to market similarly target low-power applications such as smart cameras and edge gateways, Deep Vision says its chip design and software tools together take a different approach, one it claims will let devices perform image recognition, object tracking and other functions with better accuracy and lower latency than the competition. In settings such as retail banking and grocery stores, for example, cameras will be able to track more people with greater accuracy; other applications include in-cabin monitoring of passengers in autonomous vehicles.
Deep Vision’s chip design is built around a low-latency data architecture that can run multiple AI models against the same data simultaneously, according to Ravi Annavajjhala, the company’s CEO. Rather than simply speeding up processing cycles or widening the data paths between memory and processor core, the design improves performance by minimizing the need to move data between the two in the first place.
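The data-movement argument can be illustrated with a back-of-the-envelope sketch. The numbers and the fusion scheme below are generic illustrations, not details of Deep Vision's architecture; they simply count external-memory traffic for a chain of element-wise operations run one at a time versus streamed through on-chip storage in a single pass.

```python
# Illustrative only: why reducing data movement can matter more than raw
# compute speed. We count DRAM transfers for a chain of element-wise ops
# run unfused (each op reads and writes the full feature map) versus fused
# (the feature map streams through all ops once, intermediates stay on-chip).

TENSOR_BYTES = 224 * 224 * 64 * 2   # one fp16 feature map, ~6.4 MB
NUM_OPS = 3                          # e.g. bias-add -> batch-norm -> ReLU

# Unfused: every op round-trips the feature map through external memory.
unfused_traffic = NUM_OPS * 2 * TENSOR_BYTES   # one read + one write per op

# Fused: one read in, one write out; intermediates never leave the chip.
fused_traffic = 2 * TENSOR_BYTES

print(f"unfused: {unfused_traffic / 1e6:.1f} MB moved")   # 38.5 MB
print(f"fused:   {fused_traffic / 1e6:.1f} MB moved")     # 12.8 MB
print(f"reduction: {unfused_traffic / fused_traffic:.0f}x")
```

With three chained ops, fusing cuts off-chip traffic threefold; longer chains widen the gap, which is the kind of saving a dataflow-style design is after.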
The company’s design is based on research conducted by Dr. Rehan Hameed and Dr. Wajahat Qadeer, who founded Deep Vision in 2015. The result is a patented “Polymorphic Dataflow Architecture” that prioritizes latency, in contrast to chips such as Nvidia GPUs, Google TPUs and other AI accelerators deployed in cloud data centers, which were designed for massive throughput on a single AI model. Deep Vision claims the ARA-1 offers lower system power consumption (typically around 2 watts) than competing designs, yet runs deep learning models such as ResNet-50 with 6x lower latency than Google’s Edge TPU and 4x lower latency than Intel’s Myriad X.
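The latency-versus-throughput distinction can be made concrete with a toy model. Every figure below is invented for illustration and is not a measurement of any chip mentioned in this article: a batch-oriented accelerator delivers more frames per second, but the first result only arrives when the whole batch finishes, while a stream-oriented design returns each frame as soon as it is done.

```python
# Toy sketch of the latency-vs-throughput trade-off. Numbers are made up.

PER_FRAME_MS = 10.0       # time to process one frame on its own
BATCH_EFFICIENCY = 0.4    # batched per-frame cost as a fraction of solo cost
BATCH = 8                 # frames processed together

# Throughput-oriented: all 8 frames finish together.
batch_time = BATCH * PER_FRAME_MS * BATCH_EFFICIENCY   # 32 ms for 8 frames
batch_latency = batch_time                             # first result at 32 ms
batch_fps = 1000.0 * BATCH / batch_time                # 250 fps

# Latency-oriented: each frame is returned the moment it is done.
stream_latency = PER_FRAME_MS                          # 10 ms
stream_fps = 1000.0 / PER_FRAME_MS                     # 100 fps

print(f"batched: {batch_fps:.0f} fps, {batch_latency:.0f} ms to first result")
print(f"streamed: {stream_fps:.0f} fps, {stream_latency:.0f} ms to first result")
```

In this toy setup the batched design wins on throughput (250 vs. 100 fps) but a camera tracking a moving person waits 3x longer for each answer, which is why an edge chip can reasonably optimize for the streamed case.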
New chip designs with new instruction sets, like the ARA-1, don’t find traction in the market if developers find them hard to program. Annavajjhala said the company has focused extensively on providing an easy-to-program environment for customers.
“Our biggest design goal was a seamless software experience,” he said. The adoption of any processor is affected by how easy the software experience is, he acknowledged.
The compiler has been built to allow seamless porting from industry-standard AI frameworks, including Caffe, TensorFlow, MXNet and PyTorch, and supports networks such as DeepLab V3, ResNet-50, ResNet-152, MobileNet-SSD, YOLO V3, pose-estimation models and U-Net.
Beyond support for standard models, Deep Vision’s software development kit (SDK) offers a bit-accurate simulator and tools for tuning power and performance to the needs of the customer’s application. Deep Vision says the SDK also provides a frictionless, low-code workflow that automates the migration from trained model to production application.
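A bit-accurate simulator lets developers verify, before silicon is available, that the integer arithmetic a chip will execute matches the model's floating-point reference. The snippet below is a generic sketch of that kind of check using textbook int8 quantization; it does not use Deep Vision's SDK, whose API is not described in this article.

```python
# Generic sketch of a pre-silicon accuracy check: run a dot product in float
# (the trained model) and again with the int8 arithmetic an accelerator would
# use, then compare. Not Deep Vision's actual SDK or number formats.

def quantize(xs, scale):
    """Map floats to int8 values the way a fixed-point accelerator would."""
    return [max(-128, min(127, round(x / scale))) for x in xs]

weights = [0.314, -0.871, 0.052, 0.618]
inputs = [1.23, -0.41, 0.95, 0.37]

# Float reference: what the trained model computes.
ref = sum(w * x for w, x in zip(weights, inputs))

# Integer path: int8 weights/activations, wide integer accumulator,
# one rescale back to float at the end.
w_scale, x_scale = 0.01, 0.01
qw = quantize(weights, w_scale)
qx = quantize(inputs, x_scale)
acc = sum(a * b for a, b in zip(qw, qx))   # integer accumulate
out = acc * (w_scale * x_scale)            # rescale to real units

error = abs(out - ref)
print(f"float: {ref:.4f}  int8 path: {out:.4f}  error: {error:.4f}")
```

Because the simulator reproduces the chip's arithmetic exactly, any quantization error seen here is the error the deployed model will show, so it can be caught and corrected before hardware ships.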
(Deep Vision ARA-1 and ARA-1 with USB and PCIe interface option. Source: Deep Vision)
Executives said customers were able to port their models and simulate them accurately before sample chips were even available, and that once Deep Vision shipped silicon, the models ran correctly without additional coding.
Deep Vision has raised $19 million and is backed by multiple investors, including Silicon Motion, Western Digital, Stanford, Exfinity Ventures and Sinovation Ventures.
This article was first published on Biometric Update.