Edge Impulse's Object Anomaly Detection Technology, and Other Innovations at Embedded World 2025

Unveiling the Future of Embedded AI: Object Anomaly Detection and More at Embedded World 2025

25 Mar, 2025

Wevolver: David, so what are we looking at here?

David: Thanks. Well, what we have here is an STM32N6. This is a microcontroller-based product, an MCU with an NPU. And in this specific example we're doing object detection and anomaly detection all in one. So we can place a coffee pod or two onto the conveyor belt, and we'll see that each coffee pod is detected. Now, we're also doing, as I said, anomaly or defect detection. So if I place these coffee pods which have defects in them, we'll see on the right-hand side of the screen where the defects are, shown as a heat map. I'll take this one here, which has some punctures in it, and we'll do the object detection as well as the defect detection. This is super interesting because this is an MCU-based device, extremely low power, and because it's got both the MCU and the NPU, we can actually run two distinct models at the same time on the device. Pretty cool.
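
For readers curious what this looks like in code: each frame goes through two models back to back, one for object detection and one for visual anomaly scoring. The demo itself runs as C/C++ firmware on the STM32N6, but here is a minimal sketch of the same two-model flow written against the Edge Impulse Linux Python runner for brevity. The model file names, the score threshold, and the exact visual_anomaly_grid field layout are assumptions, not details taken from the demo.

```python
# Minimal sketch: run object detection and visual anomaly detection
# on the same frame. Model files and threshold are hypothetical.
import cv2
from edge_impulse_linux.image import ImageImpulseRunner

DETECTOR_EIM = "coffee-pod-detector.eim"  # hypothetical model file
ANOMALY_EIM = "coffee-pod-anomaly.eim"    # hypothetical model file
DEFECT_THRESHOLD = 0.6                    # assumed score cutoff for a defect

img = cv2.imread("conveyor_frame.jpg")    # one frame from the belt camera

with ImageImpulseRunner(DETECTOR_EIM) as detector, \
     ImageImpulseRunner(ANOMALY_EIM) as anomaly:
    detector.init()
    anomaly.init()

    # Model 1: object detection -- where are the coffee pods?
    features, _ = detector.get_features_from_image(img)
    pods = detector.classify(features)["result"].get("bounding_boxes", [])

    # Model 2: visual anomaly detection -- a grid of per-patch scores,
    # rendered in the demo as the heat map on the right of the screen.
    features, _ = anomaly.get_features_from_image(img)
    grid = anomaly.classify(features)["result"].get("visual_anomaly_grid", [])
    defects = [cell for cell in grid if cell["value"] >= DEFECT_THRESHOLD]

    print(f"{len(pods)} pod(s) detected, {len(defects)} anomalous patch(es)")
```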

David: All right, and now we're over at a second demo pod here. For this one though, I'm going to actually pass the mic over to my colleague Dmitry, who will walk you through it. Dmitry? 

Dmitry: Hey, yeah, I'm Dmitry. And here we see a model cascade: there are two models running on this particularly powerful device made by Qualcomm, the 9075, a recently announced chip. We have an object detection model which detects vehicles, and once a vehicle is detected, it's passed to a visual language model. You have a choice of models: LLaVA, which is open source, and AMG, an in-house model by Qualcomm. We've crafted a prompt here which asks for the vehicle class, the vehicle manufacturer, and sign identification. If I press Generate All, it'll just take a few moments. And keep in mind that both the detection model and the visual language model run on this device; nothing goes to the cloud, and you can see the answers being streamed in real time. Yeah, let's wait for the last one here. Yeah, it gets most of it right. Let's see the last one, actually. Yeah, that's all good. And the value proposition for us at Edge Impulse is that you cannot blindly trust visual language models. If you use ChatGPT, you know it can make mistakes; it says so right at the bottom. So if you're serious and you want to put this into production, you want to validate your prompt: create it, validate it, and check its accuracy against the data that you have, and that's exactly what we provide at Edge Impulse. You create your prompt here, you tweak it, you test it, and then you pass it on to the production model running there. That's about it.
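
The cascade Dmitry describes is, in essence, a detection stage feeding crops into a VLM stage with a fixed, pre-validated prompt. A rough sketch follows: the detection stage uses the Edge Impulse Linux Python runner, while vlm_generate is a hypothetical stand-in for the on-device VLM runtime on the Qualcomm chip, and the prompt text, model file name, and confidence cutoff are all assumptions.

```python
# Sketch of the two-stage cascade: detect vehicles, then ask an
# on-device VLM about each detection. `vlm_generate` is a hypothetical
# placeholder for the on-device VLM runtime (e.g. LLaVA on the 9075);
# the prompt, model file name, and 0.5 cutoff are assumptions.
from edge_impulse_linux.image import ImageImpulseRunner

PROMPT = ("For this vehicle, give the vehicle class, the manufacturer, "
          "and any identifiable signs, as short labeled fields.")

def vlm_generate(crop, prompt):
    """Hypothetical hook into the on-device visual language model."""
    raise NotImplementedError("wire this to the VLM runtime on the device")

def describe_vehicles(img, detector_path="vehicle-detector.eim"):
    with ImageImpulseRunner(detector_path) as runner:
        runner.init()
        # Stage 1: object detection finds the vehicles in the frame.
        features, resized = runner.get_features_from_image(img)
        answers = []
        for box in runner.classify(features)["result"].get("bounding_boxes", []):
            if box["value"] < 0.5:  # assumed confidence cutoff
                continue
            # Stage 2: crop the detection and hand it to the VLM.
            x, y, w, h = box["x"], box["y"], box["width"], box["height"]
            answers.append(vlm_generate(resized[y:y + h, x:x + w], PROMPT))
        return answers
```

In a deployment like the one shown, the PROMPT string is exactly the piece you would iterate on and score against labeled data in Edge Impulse before shipping it to the device.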

David: All right, thanks, Dmitry. Appreciate it. And I think that's going to do it for us here at Embedded World 2025. If you have any questions, be sure to reach out at edgeimpulse.com. Thanks.