Edge AI

Ask questions and find answers on how to deploy, debug, and optimize AI models on ST microcontrollers, microprocessors, and smart sensors.

Forum Posts

Resolved! Connection error while using NUCLEO-N657X0-Q with STM32CubeProgrammer

I was trying to deploy YOLO models on the NUCLEO-N657X0-Q board, following the instructions in the README.md of the "STM32N6-GettingStarted-ObjectDetection" repository. While using the prebuilt binaries, it told me to use STM32CubeProgrammer ...

Wen0127 by Associate II
  • 631 Views
  • 3 replies
  • 2 kudos

Resolved! STM32N6 + YOLOv8n ONNX Model. Issues with PNG Input and Code Generation

Hi everyone, I'm working with an STM32N6 board and trying to deploy a custom YOLOv8n model in ONNX format using CubeMX and STM32 Edge AI. I've run into a few issues and was hoping someone could help clarify the process. Here's what I've done so far: I s...

TerZer by Associate III
  • 1243 Views
  • 16 replies
  • 6 kudos

Resolved! Unable to run TensorFlow Lite Micro on STM32F769 eval board

I am trying to run TFLM on the STM32F769 eval board. I built TFLM as a static library and included it in my project. My app crashes while calling the AllocateTensors() function. Has anyone had any luck with it? Also, I couldn't find any official demo for... (a minimal setup sketch follows this entry)

hadi81 by Associate II
  • 514 Views
  • 3 replies
  • 0 kudos
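In practice, AllocateTensors() faulting in TFLM usually comes down to a tensor arena that is too small for the model, or to an operator the model needs that was never registered in the resolver. The following is only a minimal setup sketch under those assumptions: the model symbol g_model_data, the arena size, and the registered ops are placeholders, not taken from the original post.

```cpp
// Minimal TensorFlow Lite Micro setup sketch.
// g_model_data, kArenaSize and the registered ops are placeholders.
#include <cstddef>
#include <cstdint>
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/schema/schema_generated.h"

extern const unsigned char g_model_data[];   // model flatbuffer, e.g. from xxd -i

// A too-small arena is a common cause of AllocateTensors() failing or faulting.
constexpr std::size_t kArenaSize = 100 * 1024;
static uint8_t tensor_arena[kArenaSize] __attribute__((aligned(16)));

int run_inference_once(void) {
  const tflite::Model* model = tflite::GetModel(g_model_data);
  if (model->version() != TFLITE_SCHEMA_VERSION) {
    return -1;  // model built against an incompatible schema version
  }

  // Register only the ops the model actually uses; a missing op is another
  // frequent cause of failures during allocation or invocation.
  static tflite::MicroMutableOpResolver<4> resolver;
  resolver.AddConv2D();
  resolver.AddFullyConnected();
  resolver.AddReshape();
  resolver.AddSoftmax();

  static tflite::MicroInterpreter interpreter(model, resolver,
                                              tensor_arena, kArenaSize);
  if (interpreter.AllocateTensors() != kTfLiteOk) {
    return -2;  // typically: arena too small or unsupported operator
  }

  // Fill interpreter.input(0)->data with the preprocessed input here, then:
  if (interpreter.Invoke() != kTfLiteOk) {
    return -3;
  }
  return 0;  // results are in interpreter.output(0)
}
```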

The result of ONNX model validation on target has two different outputs, why?

Hi, I ran validation on target on the STM32N6 board with an ONNX model in STM32CubeMX, but I get two different outputs, like this: m_outputs_1: (10, 3)/float64, min/max=[-282.578064, 481.370483], mean/std=[-2.619613, 298.301398], output  m_outputs_2: (10, 1)/float64, min/m...

cxf by Associate III
  • 1766 Views
  • 25 replies
  • 2 kudos

Is it possible to use multiple AI models on my STM32N6570-DK?

Hello, I'm trying to implement an AI project that uses two different AI models in one pipeline. The output from model A should be the input for model B. The question I'm asking is how I can combine the two models within one project on my S... (see the sketch after this entry)

lyannen by Associate III
  • 696 Views
  • 8 replies
  • 6 kudos
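One generic way to chain two models on a microcontroller is to run them back to back and copy the first model's output tensor into the second model's input tensor. The sketch below does this with two TensorFlow Lite Micro interpreters; every symbol name and arena size is a placeholder, and a project generated with ST Edge AI would use its own per-network create/run calls instead, but the data flow is the same idea.

```cpp
// Sketch: chaining two TFLite Micro models so model A's output feeds model B.
// g_model_a_data, g_model_b_data and the arena sizes are placeholders.
#include <cstring>
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/schema/schema_generated.h"

extern const unsigned char g_model_a_data[];
extern const unsigned char g_model_b_data[];

static uint8_t arena_a[64 * 1024] __attribute__((aligned(16)));
static uint8_t arena_b[64 * 1024] __attribute__((aligned(16)));

int run_pipeline(const int8_t* input, size_t input_bytes) {
  // One resolver can serve both interpreters; register the ops both models need.
  static tflite::MicroMutableOpResolver<8> resolver;
  resolver.AddConv2D();
  resolver.AddFullyConnected();
  resolver.AddReshape();
  resolver.AddSoftmax();

  static tflite::MicroInterpreter model_a(tflite::GetModel(g_model_a_data),
                                          resolver, arena_a, sizeof(arena_a));
  static tflite::MicroInterpreter model_b(tflite::GetModel(g_model_b_data),
                                          resolver, arena_b, sizeof(arena_b));
  if (model_a.AllocateTensors() != kTfLiteOk ||
      model_b.AllocateTensors() != kTfLiteOk) {
    return -1;
  }

  // Run model A on the raw input.
  std::memcpy(model_a.input(0)->data.int8, input, input_bytes);
  if (model_a.Invoke() != kTfLiteOk) return -2;

  // Feed model A's output to model B. If the two models use different
  // quantization parameters, dequantize/requantize between the copies.
  if (model_a.output(0)->bytes != model_b.input(0)->bytes) {
    return -3;  // shapes must be made compatible first
  }
  std::memcpy(model_b.input(0)->data.int8,
              model_a.output(0)->data.int8,
              model_a.output(0)->bytes);
  if (model_b.Invoke() != kTfLiteOk) return -4;

  // model_b.output(0) now holds the pipeline result.
  return 0;
}
```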

STM32N6570 and storing a model in the flash

Hello! How can I store the model's input data in the flash of the STM32N6570-DK, then deploy the model, read the input data from flash, and perform inference? Are there any relevant tutorials for this? I previously followed the model deployment tutor... (a minimal flash-resident input sketch follows this entry)

llcc by Senior
  • 591 Views
  • 5 replies
  • 1 kudos
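A common approach is to keep a pre-quantized test input as a const array so the linker places it in read-only (flash) memory, then pass a pointer to it into the inference call. The sketch below assumes a hypothetical network_run() wrapper and a placeholder section name; on the STM32N6, which has no internal flash, the data would typically live in external flash accessed in memory-mapped mode, so the exact section and linker-script setup depend on your project.

```cpp
// Sketch: keeping a test input vector in flash and feeding it to the network.
// The section name, the input size and network_run() are placeholders.
#include <cstddef>
#include <cstdint>

constexpr std::size_t kInputSize = 224 * 224 * 3;  // placeholder: one quantized RGB frame

// 'const' data normally ends up in a read-only section; the explicit section
// attribute shows how to force it into a dedicated (e.g. external) flash region.
__attribute__((section(".extflash_rodata")))
static const int8_t g_test_input[kInputSize] = {
    /* fill with pre-quantized sample values generated offline */
};

// Hypothetical wrapper around the inference call of the deployed model
// (e.g. the run function generated by ST Edge AI, or a TFLM Invoke()).
extern int network_run(const int8_t* input, int8_t* output);

int run_from_flash(int8_t* output) {
  // With memory-mapped external flash the array can be read directly;
  // otherwise copy it into RAM before handing it to the network.
  return network_run(g_test_input, output);
}
```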

Resolved! Problems with running an AI Model on the NUCLEO-N657X0-Q Board

Hello, I'm working with a NUCLEO-N657X0-Q board and trying to run/debug a TensorFlow Lite model that has been quantized to int8 to run on hardware. I've tried several approaches, but every attempt leads to a dead end of one error leading to three mor...

Coop23 by Associate
  • 1050 Views
  • 2 replies
  • 2 kudos