What are event-based vision systems for edge computing?

With the growing number of applications for computer vision, a significant amount of work has focused on solving complex robotics and machine vision tasks. One area of that focus is the bio-inspired technology of event cameras. Event cameras differ from conventional frame cameras in that they report an event only when the sensor detects a change in brightness, rather than capturing full images at a fixed rate. This event-based approach is considered bio-inspired because it works in a way similar to how humans perceive the world around them.

Research on human vision capabilities has advanced considerably over the last few years. Existing research suggests that humans can gather information from a scene changing at rates of up to 1,000 times per second. Because that information is encoded far faster than the typical 60 frames per second recorded by high-quality cameras, much of it is effectively impossible for fixed frame-rate cameras to observe.

To meet this challenge, event-based sensing has garnered attention. In an event-based vision system, each pixel independently senses significant changes in brightness, which reduces the recording of redundant data and saves processing power, memory, and other associated resources.
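As a rough illustration of that per-pixel behaviour (a minimal sketch, not any particular vendor's sensor model), the following Python snippet emits an event tuple of (x, y, timestamp, polarity) only for pixels whose log-intensity has changed by more than a contrast threshold; the function name, threshold value, and array-based framing are assumptions made for the example.

```python
import numpy as np

def generate_events(ref_log, new_log, timestamp, threshold=0.2):
    """Toy event-camera model: emit (x, y, t, polarity) only where the
    log-intensity change since the last event exceeds `threshold`.
    Pixels that have not changed produce no data at all, which is where
    the bandwidth, memory, and power savings come from."""
    diff = new_log - ref_log
    ys, xs = np.nonzero(np.abs(diff) >= threshold)
    events = [(int(x), int(y), timestamp, 1 if diff[y, x] > 0 else -1)
              for y, x in zip(ys, xs)]
    # Update the reference level only where an event fired, mimicking
    # independent per-pixel change detectors.
    ref_log = ref_log.copy()
    ref_log[ys, xs] = new_log[ys, xs]
    return events, ref_log

# Example: a 4x4 scene where a single pixel brightens between samples.
ref = np.zeros((4, 4))
new = ref.copy()
new[2, 1] += 0.5
events, ref = generate_events(ref, new, timestamp=10)  # -> [(1, 2, 10, 1)]
```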

Because event-based vision systems are inspired by how the human eye works, they enable efficient streaming of visual data for real-time analytics. By mimicking the eye in this way, they have allowed enterprises and manufacturers to track high-speed moving objects with far better accuracy than standard fixed frame-rate cameras. Neuromorphic vision system vendors are also developing efficient, advanced machine learning algorithms to achieve higher temporal resolution and lower latency.

Some of the advantages demonstrated by researchers include high temporal resolution, low latency, low power consumption, and high dynamic range. On temporal resolution, event-based vision sensors can capture very fast motion without suffering from the motion blur that affects frame-based cameras. Each pixel works independently and transmits an event as soon as a change is detected, rather than waiting for the global exposure time of a frame, which reduces data transmission latency to about 10 microseconds. Researchers have also shown that the dynamic range of event-based cameras (the difference between the lightest and darkest tones a sensor can capture) exceeds the roughly 60 dB of high-quality frame-based cameras.
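To put that dynamic-range figure in context, decibels express the ratio between the brightest and darkest intensities a sensor can measure; the short calculation below is a generic illustration of the scale involved, not a specification of any particular camera.

```python
import math

def dynamic_range_db(i_max, i_min):
    # Dynamic range in dB for a sensor measuring intensities between
    # i_min and i_max: DR = 20 * log10(i_max / i_min).
    return 20 * math.log10(i_max / i_min)

print(dynamic_range_db(1_000, 1))      # 60.0 dB: roughly a 1,000:1 intensity ratio
print(dynamic_range_db(1_000_000, 1))  # 120.0 dB: a 1,000,000:1 ratio, the order of
                                       # magnitude often cited for event sensors
```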

Event-based vision systems offer many advantages over conventional vision systems, such as capturing detailed scene information with low computational overhead and the ability to work in poor and changing light conditions. These systems have enormous potential to be integrated into computer vision and industrial robotics applications, and several developers have introduced new event-based vision technology to build systems for a wide range of markets.

Future advances in industrial automation will depend increasingly on efficient computer vision techniques like these to provide edge computing systems with real-time insights.
