
XGBoost model for STM32MP157D-DK1

meme002
Associate II

Hello,

 

I was wondering: using the X-LINUX-AI package, will I be able to run an XGBoost model converted to ONNX format on the STM32MP157 Cortex-A7 core, or does it only support neural network models?

 

I was browsing through the forum and found that an ONNX Runtime recipe is required to achieve this. Here is the link for that: https://community.st.com/t5/stm32-mpus-products/hello-can-i-implement-random-forest-classifier-on-stm32-mpu-if/td-p/160467

 

However, I see the ST wiki has a page showing how to measure the performance of models using ONNX Runtime; that page is here.

Are both ONNX Runtime links the same? I would prefer to use the ST wiki link, since it has a detailed description of how to use it.

 

Thanks

 

 

1 ACCEPTED SOLUTION

VABRI
ST Employee

Dear @meme002 ,

 

The ST Community thread you linked dates from the end of 2021.

Since then, the X-LINUX-AI expansion package has been updated with the support of ONNX Runtime.

So you can rely on https://wiki.st.com/stm32mpu/wiki/Category:X-LINUX-AI_expansion_package to install the X-LINUX-AI package, and on https://wiki.st.com/stm32mpu/wiki/How_to_measure_the_performance_of_your_models_using_ONNX_Runtime to learn how to use ONNX Runtime.

 

BR

Vincent



meme002
Associate II

Thanks for the clarification @VABRI. I have one quick question: I am planning to develop a pipeline for the STM32MP157D-DK1 board I have. Let me know if you see any issues with the pipeline.

 

1. Cortex-M4 firmware collects data and then performs an FFT.

2. Send the FFT values to the Cortex-A7 using the RPMsg framework.

3. The Cortex-A7 runs the random forest model, predicting based on the values it gets from the Cortex-M4. Not sure how to achieve this yet: should I use a user-space application built in STM32CubeIDE, or device tree customization?

 

 

VABRI
ST Employee

Hi @meme002

Your pipeline will work.

Depending on how your data are collected, you will need to update the device tree to assign the IPs needed for data collection to the Cortex-M4.

In Linux userland, your application will retrieve the Cortex-M4 pre-processed data received via RPMsg, and you can then trigger ONNX Runtime inference for your random forest model.

 

BR

Vincent

Hello @VABRI, I am halfway there. I wanted to ask your suggestion on the approach: I am planning to use #include <onnxruntime_cxx_api.h> so that I can use the C++ API to run the scikit-learn model.

 

I know how to make this work on a desktop Ubuntu Linux machine: it involves creating a CMake project and compiling it to produce a build, which can then load the ONNX model and run inference through the C++ API.

 

However, on OpenSTLinux I am curious: since the AI developer package already has ONNX Runtime installed, if I create a C++ user application on the Cortex-A7 with #include <onnxruntime_cxx_api.h>, will it just work, or do I need some kind of CMake build setup?

No worries, I figured it out myself. Installing the X-LINUX-AI SDK add-on solved the issue.

VABRI
ST Employee

Hi @meme002,

Yes, the right way to develop the application is to use the OpenSTLinux SDK with the X-LINUX-AI SDK add-on.

This will allow you to build your application against the ONNX Runtime API.
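
To make that concrete, a minimal CMakeLists.txt for such an application might look like the following. This is a hedged sketch, not ST's official template: it assumes the SDK environment script has been sourced (so CMake picks up the cross toolchain from the environment) and that the X-LINUX-AI add-on has installed the onnxruntime headers and library somewhere find_path/find_library can see in the sysroot; adjust paths to your installation.

```cmake
cmake_minimum_required(VERSION 3.16)
project(rf_inference CXX)

set(CMAKE_CXX_STANDARD 17)

add_executable(rf_inference main.cpp)

# onnxruntime comes from the X-LINUX-AI SDK add-on; where exactly the
# headers and library land in the SDK sysroot depends on your setup.
find_path(ONNXRUNTIME_INCLUDE_DIR onnxruntime_cxx_api.h)
find_library(ONNXRUNTIME_LIB onnxruntime)

target_include_directories(rf_inference PRIVATE ${ONNXRUNTIME_INCLUDE_DIR})
target_link_libraries(rf_inference PRIVATE ${ONNXRUNTIME_LIB})
```

Source the SDK environment script before running cmake so that CMAKE_CXX_COMPILER points at the cross toolchain rather than the host compiler.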

 

BR

Vincent