2025-09-25 11:48 AM
Hello, ST Team and Community!
Recently, I've been working on an AI model based on YoloV11n, built to run on an STM32N6570. I'm trying to transform the raw output buffer that the model returns into the standard YoloV11n detection output (boxes and class scores), so I can get the data I need for my application.
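To make it concrete, the post-processing I'm aiming for is roughly the sketch below. It assumes the common YoloV11n export layout of a single float32 output tensor of shape [1, 84, 8400] (4 box values cx, cy, w, h plus 80 class scores per candidate, channel-first); the buffer pointer, dimensions and threshold are placeholders for my project, and if the N6 output is int8-quantized, a dequantization step with the tensor's scale/zero-point would have to come before this.

#include <stdint.h>
#include <stdio.h>

#define NUM_CANDIDATES  8400   /* assumed candidate count for a 640x640 input */
#define NUM_CLASSES     80     /* assumed COCO class count */
#define CONF_THRESHOLD  0.25f  /* placeholder confidence threshold */

/* Decode a channel-first [84 x 8400] float buffer into boxes + best class. */
static void decode_yolo_output(const float *out)
{
    for (int i = 0; i < NUM_CANDIDATES; i++)
    {
        /* value(channel c, candidate i) = out[c * NUM_CANDIDATES + i] */
        float cx = out[0 * NUM_CANDIDATES + i];
        float cy = out[1 * NUM_CANDIDATES + i];
        float w  = out[2 * NUM_CANDIDATES + i];
        float h  = out[3 * NUM_CANDIDATES + i];

        /* Pick the best of the 80 class-score channels. */
        int   best_class = 0;
        float best_score = 0.0f;
        for (int c = 0; c < NUM_CLASSES; c++)
        {
            float score = out[(4 + c) * NUM_CANDIDATES + i];
            if (score > best_score)
            {
                best_score = score;
                best_class = c;
            }
        }

        if (best_score >= CONF_THRESHOLD)
        {
            /* Keep the detection; non-maximum suppression would follow here. */
            printf("cx=%.1f cy=%.1f w=%.1f h=%.1f class=%d score=%.2f\n",
                   cx, cy, w, h, best_class, best_score);
        }
    }
}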
Digging a little deeper and tracing back two functions documented in the LL_ATON library, I found the following:
Inside the network.c generated by CubeAI, the final symbols that should hold the implementations of LL_ATON_Set_User_Input_Buffer and LL_ATON_Set_User_Output_Buffer simply return WRONG_INDEX for every call. Is this how they are supposed to be generated?
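To illustrate what I mean, the generated bodies boil down to something like the snippet below. This is a paraphrase, not a verbatim copy of my network.c; the return type, the parameter list and the WRONG_INDEX constant are simplified placeholders for the actual declarations in the runtime headers.

#define WRONG_INDEX  (-1)  /* placeholder for the actual LL_ATON error code */

/* Both setters ignore their arguments and unconditionally report an error. */
int LL_ATON_Set_User_Input_Buffer(unsigned int num, void *buffer, unsigned int size)
{
    (void)num; (void)buffer; (void)size;
    return WRONG_INDEX;
}

int LL_ATON_Set_User_Output_Buffer(unsigned int num, void *buffer, unsigned int size)
{
    (void)num; (void)buffer; (void)size;
    return WRONG_INDEX;
}

So any attempt to register my own input/output buffers from application code comes back with that error, regardless of the index I pass.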
The official documentation for the LL_ATON library mentions a compilation flag that is required for the user to allocate buffers this way. Could a missing flag be the reason these functions were generated as empty stubs, or are they actually not implemented?
For context, here are my current settings for the CubeAI package:
Best regards,
Rafael V. Volkmer