Deploying AI Models at the Edge

Chooch AI speeds up AI vision for face biometrics with Nvidia, edge processing

Chooch AI, a provider of technology and services for visual AI, announced that it has achieved several breakthroughs in the use of real-time AI on video streams in partnership with chip firm Nvidia and systems integration partner Convergint Technologies. The advancements portend improvements in the speed and accuracy of biometric identification systems.

Among the developments, Chooch AI said that edge AI deployments can now achieve “extreme response-time performance” of twenty milliseconds on multiple video feeds with extremely high accuracy for a wide variety of use cases.

Pre-trained models for workplace safety, fire detection, crowd analysis and more are available to customers and do not require customer data. The company notes that there are over 8,000 AI classes and eight models running on the NVIDIA Jetson Nano that achieve accuracy of over 90%; models can run inference on up to five camera streams simultaneously with the NVIDIA Jetson AGX Xavier products. Also of note: the AI models are deployable to devices from a single dashboard, which is also where real-time alerts and reports can be configured.
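
As a rough illustration of what multi-stream edge inference can look like, here is a minimal Python sketch that fans several camera feeds out to worker threads, each running a placeholder ONNX detector through OpenCV's DNN module. The stream URLs, model file, and CUDA backend are assumptions for illustration and not Chooch AI's actual deployment code.

```python
import threading

import cv2  # OpenCV handles both video capture and ONNX inference here

# Placeholder camera endpoints and model file; illustrative only.
STREAMS = [
    "rtsp://camera-1/stream",
    "rtsp://camera-2/stream",
]
MODEL_PATH = "detector.onnx"


def infer_stream(url: str) -> None:
    """Run a detection model over a single camera feed."""
    net = cv2.dnn.readNetFromONNX(MODEL_PATH)           # one net per thread
    net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)  # requires a CUDA build of OpenCV
    net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA)
    cap = cv2.VideoCapture(url)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        blob = cv2.dnn.blobFromImage(frame, scalefactor=1 / 255.0, size=(416, 416))
        net.setInput(blob)
        detections = net.forward()
        # ...post-process detections, raise alerts, push reports, etc.
    cap.release()


if __name__ == "__main__":
    threads = [threading.Thread(target=infer_stream, args=(url,)) for url in STREAMS]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
```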

Biometric Update interviewed Emrah Gultekin, co-founder and CEO of Chooch AI, to find out more about the company’s technology and partnership with Nvidia. This interview has been edited for length and clarity.

Q: What is driving some biometric identification systems to use AI models at the “edge”?

EG: The basic question here is why we moved inferencing to the edge. The grand theme is cost and being able to stream all of this video on the edge. If you do inferencing in the cloud, it’s just impossible at a large scale. That’s what we have seen. Bringing inferencing to the edge without any loss of accuracy, or even making it more accurate because you can track better on the edge, has been a focus of our work.

The second issue here is being able to do the training very quickly. When we talk about pre-trained models, or pre-trained perceptions as we call them, that is key. We’re down to a few hours of training for new models, so it’s almost like a pre-trained model. At the same time, you need to be able to tweak the models or train new models very quickly; you need to feed the AI so it can learn these things on a grand scale. So you constantly need to train. We’re doing that on the back end as well, and we’ve brought the training process, including data collection and adaptation, down to hours rather than weeks or months.
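
To make the “hours, not weeks” point concrete, below is a generic transfer-learning sketch in PyTorch: a pretrained backbone is frozen and only a small classification head is trained on new data. The dataset path, class count, and hyperparameters are illustrative assumptions; this is not Chooch AI’s training pipeline.

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

NUM_CLASSES = 5  # hypothetical class set, e.g. PPE / smoke / flame / crowd / background

# Start from ImageNet weights so only the new head needs substantial training
# (weights API assumes torchvision >= 0.13).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False                                  # freeze the backbone
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)      # new classification head

data = datasets.ImageFolder(
    "data/train",                                            # placeholder dataset location
    transform=transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
    ]),
)
loader = torch.utils.data.DataLoader(data, batch_size=32, shuffle=True)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):        # a few epochs is often enough when only the head is trained
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```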

Q: How has the market changed this year, post-pandemic? How have you responded? How have you adjusted the product and marketing strategy?

EG: The pandemic has accelerated camera-based public safety, so we’ve gotten a lot of traction on the PPE front. Touchless boarding and touchless check-in have come in post-COVID. We’re seeing a general acceleration of interest in AI deployments across the field just because people are more remote and looking for assistance, so it has been a good ride for us post-COVID. But at the same time, some of our clients, like the airlines or retail, are also wondering if they’re going to survive, so it’s kind of a double-edged sword in that sense. We think our technology can help with efficiencies in the workplace, and we’re constantly deploying it.

Q: You introduced a Mobile SDK last year. What traction have you been seeing for that in 2020?

EG: That [SDK] was made for developers on our platform. A lot of the traction there is biometric related, such as facial authentication with liveness detection as well as general biometrics with liveness detection. It was easy for us to put that on the edge initially. A lot of the facial software and the facial platforms are more developed than general object detection or even OCR, so we were able to put a lot of the biometric stuff on the edge quite early.

Q: Can you address how you have adapted your business with the advancements in chip technology and this move to the edge?

EG: Chooch AI has been using T4 instances on Amazon Web Services in the cloud. We were pushed onto the edge by our clients; basically it’s a very market-driven issue. The accuracy of the previous edge processors was so low, below 60%, that we were not using them. Now, with anything from the NVIDIA Jetson Nano to the AGX Xavier, you’re able to deploy with the exact same accuracy as you would have in the cloud. That’s been a huge revolution.

The business model has been built around charging customers for custom training and an onboarding fee, and then charging for API calls. We’re still doing that for our cloud business. But anywhere you have very heavy video streaming, you need to put these models on edge processors, either on the edge [device] or on premise. You still have some of the custom training and, if necessary, integration, but it’s no longer an API call; you’re charging a per-camera and per-device licensing fee. So the business model has changed a lot, and it’s also not costing us anything once the software is embedded in the machine.

Q: In terms of applications and deployment, what is the biggest challenge right now?

EG: The processing power has increased phenomenally over just the last year. That has made it easier for us to deploy on the edge, although some of the components are still kind of messy, and we’re working to make sure that they are all packaged properly and that people can just flash them onto their edge devices immediately. We’re in the process of making that very easy for our clients, and we’re working with NVIDIA on that as well. These things will resolve over the next 12 to 24 months, and we’re working hard with our partners to resolve them.

Q: It seems the crux of the issue is that, now that we’ve had all these tremendous advances in hardware processing power, the next step is getting the different software components (the operating system, applications, drivers and so on) integrated properly. And then managing the process of flashing the devices is perhaps another issue as well.

EG: That’s exactly it. You have these powerful processors, but without the software, or if the software is not integrated properly, the whole system doesn’t work. That has been a challenge, but it’s normal for this to happen because these are very new processors. And then you have the link between the camera feed and the device to the cloud, which needs to be synced up as well for updates and so forth.

I’ll give you an example. There are different types of deep learning frameworks, like TensorFlow, GLUE, PyTorch, and so on. NVIDIA has created a great tool, TensorRT, which compresses the models without degrading them. However, it doesn’t work with all the different deep learning frameworks, so you have to convert the models to a different format to transform them to TensorRT. It’s basically like dominoes: if one piece falls, you need to make sure the others follow. We’re seeing that a lot, and that’s one thing [regarding] deep learning frameworks: you need to put these on the edge. You can’t just put TensorFlow Lite on the edge because the degradation is too much, so you need to put the real TensorFlow or MXNet or PyTorch or GLUE, whatever you’re using, and to do that you need to change some of that code.
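
The conversion chain described above can be sketched roughly as follows: a model is first exported from its native framework to an intermediate format (ONNX is a common choice), and that file is then parsed by TensorRT into an optimized engine. The model, file names, and FP16 flag below are illustrative assumptions, shown with the TensorRT 8.x Python API.

```python
import torch
import torchvision
import tensorrt as trt

# 1. Export a placeholder PyTorch model to ONNX.
model = torchvision.models.resnet18(weights=None).eval()
dummy = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy, "model.onnx", opset_version=13)

# 2. Parse the ONNX file with TensorRT and build a serialized engine.
logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
)
parser = trt.OnnxParser(network, logger)
with open("model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        raise RuntimeError("ONNX parse failed")

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)  # reduced precision suits Jetson-class GPUs
engine_bytes = builder.build_serialized_network(network, config)

# 3. Save the engine so it can be deployed to the edge device.
with open("model.plan", "wb") as f:
    f.write(engine_bytes)
```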

Q: What’s the balance of business looking like in terms of revenue sources or the geographical source of customers?

EG: Recently, we’re getting more traction from the government. We don’t know if that’s COVID related or not. We’ve had more traction on the video integrator side as well. We’re working with the major video integrators in the United States, and they are building their applications on top of our software. The government work is related to geospatial [uses], which is like fire detection and other types of asset detection, and the video integrator side is more workplace related.

Q: In the broader edge computing realm, are there any other developments that could further propel your market? For instance, AWS is now offering services like AWS Local Zones (which is off-premise) and AWS Outposts (which is on-premise managed cloud). What are some other things that you’re looking forward to seeing happening in the market?

EG: When we talk about edge deployments, we’re limiting that to inferencing, right? So it’s the inferencing engine plus all the services around inferencing. But when we talk about training and using models, we’ll see more of this in the future, where you ask, “What training has to be live?” Then you’re going to have to put a lot of these things onto the edge as well. [The edge devices] are not powerful enough to do that. But if we’re talking about putting in distributed data centers, then you can do training live as the data comes in.

We’re really looking forward to that, but it also involves networking and how you create the data sets. We have the tools necessary to make training very quick. For biometric identification, it’s live, so anything that comes in is processed immediately. Training for more complex objects and actions needs to go through a process. There are limitations to the software and some limitations to the network as well today. The more we improve that process, the faster this will scale on-premise and in these distributed networks as well.

This interview was originally posted on Biometric Update.
