Dear STMicroelectronics Community,
I am a student currently participating in the TRON Programming Contest, and my team is using the STM32N6570-DK board for our project. Working with such a high-end and powerful board presents a significant learning curve, and we have encountered a few challenges for which we would greatly appreciate your expert guidance.
Our primary objective is to implement an AI-based fire and smoke detection system using a camera as input. For the contest, we are specifically required to use the μT-Kernel 3.0 Real-Time Operating System (RTOS).
My queries are as follows:
AI Model Integration: After optimizing our TinyYOLO model with the ST Edge AI Developer Cloud (which generates C files such as network_eclobs.h and network_eclobs.c), we are looking for precise instructions on how to integrate and use these generated files within our μT-Kernel 3.0 project. Specifically, we need clarity on the exact API calls and the overall workflow for running our custom AI model on the STM32N6570-DK; a sketch of our current understanding is included after our second question below.
Camera Usage & Resolution with μT-Kernel 3.0: We are facing difficulties in utilizing the camera connected to the board. We've reviewed the STM32N6 MPU BSP documentation, which suggests that the camera resolution cannot be changed. However, we've observed examples within the same GitHub repository where the camera resolution appears to have been modified. Could you please provide clarification on whether resolution changes are indeed possible and, if so, how to achieve this? Furthermore, detailed advice on how to effectively interface and manage the camera, including image acquisition and buffering, within a μT-Kernel 3.0 environment would be invaluable.
These two points, integrating the generated AI model C code and feeding it camera images reliably, are our most pressing concerns. Any insights or suggestions, particularly regarding μT-Kernel 3.0 compatibility and best practices for a high-performance camera-to-AI pipeline (the shape we have in mind is sketched below), would be immensely helpful. While not a primary necessity at this moment, any brief information or resources on LCD integration for displaying camera feeds or results would also be beneficial for future development.
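To make the pipeline part of the question concrete: the producer/consumer shape we have in mind rotates two frame buffers and passes a pointer to the finished one through a μT-Kernel message buffer, so acquisition and inference can overlap. Everything here (buffer sizes, where frame_ready() would be called from) is a placeholder:

```c
#include <tk/tkernel.h>
#include <stdint.h>

#define FRAME_BYTES  (800 * 480 * 2)     /* RGB565 frame, size assumed */

static uint8_t frames[2][FRAME_BYTES];   /* ping-pong frame buffers */
static ID mbfid;                         /* queue of ready-frame pointers */

void pipeline_init(void)
{
    T_CMBF cmbf = {
        .mbfatr = TA_TFIFO,
        .bufsz  = 2 * sizeof(uint8_t *), /* queue at most two pointers */
        .maxmsz = sizeof(uint8_t *),
    };
    mbfid = tk_cre_mbf(&cmbf);
}

/* Producer: call when frames[idx] holds a complete frame. */
void frame_ready(int idx)
{
    uint8_t *p = frames[idx];
    tk_snd_mbf(mbfid, &p, sizeof p, TMO_POL);  /* TMO_POL: drop if full */
}

/* Consumer: body of the AI task. */
static void ai_task(INT stacd, void *exinf)
{
    uint8_t *frame;

    for (;;) {
        tk_rcv_mbf(mbfid, &frame, TMO_FEVR);   /* wait for the next frame */
        /* preprocess `frame`, run the network, post-process detections */
    }
}
```

If this general split between an acquisition task and an inference task is a reasonable fit for the STM32N6 camera pipeline, we would appreciate confirmation; if there is a better-suited pattern (for example DMA-driven double buffering at the DCMIPP level), a pointer to it would be equally welcome.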
Thank you very much for your time and assistance.