2024-11-04 01:03 AM
Hi,
I am following this tutorial to use my own image-recognition model with the STM32H747I-DISCO board. I have already run the demos and they seem to work fine.
I first configure the model in CUBE-MX successfully and then copy the required files to the demo project.
When I compile the code in STM32CubeIDE, I get the following error repeated many times:
undefined reference to `forward_conv2d_if32of32wf32'
I also get this:
STM32H747I_DISCO_PersonDetect_Google_CM7.elf section `.bss' will not fit in region `DTCMRAM'
STM32H747I_DISCO_PersonDetect_Google_CM7.elf section `.axiram_section' will not fit in region `AXIRAM'
STM32H747I_DISCO_PersonDetect_Google_CM7.elf section `.sram_section' will not fit in region `SRAM123'
section .axiram_section VMA [24000000,241ad7ff] overlaps section .bss VMA [20017a40,2c6301a7]
even though CUBE-MX said that the used memory is within the available flash and RAM (as shown in the attached image).
The declaration of this function is in layers_conv2d.h, but I can't find the definition anywhere.
I have already completed the "Updating to a newer version of X-CUBE-AI" part of the tutorial successfully.
Any ideas?
thanks
2024-11-08 01:00 AM - edited 2024-11-08 01:03 AM
Hello @dogg ,
You need to install everything required to use a GPU with TensorFlow. It can be quite tricky to configure on Windows, but you can find tutorials by searching for: how to use GPU with TensorFlow.
(I'm not linking anything specific, as it will depend greatly on your situation.)
You can check whether TensorFlow sees your GPU in a .py script or .ipynb notebook by running:
import tensorflow as tf

# Prints the name of the first GPU device, or an empty string if none is visible.
print(tf.test.gpu_device_name())
If it prints an empty string, your configuration is not complete.
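A fuller cross-check, as a sketch: recent TensorFlow releases recommend tf.config.list_physical_devices over the older tf.test helper, and it works the same way in a script or a notebook.

```python
import tensorflow as tf

# List the physical GPU devices TensorFlow can see.
# An empty list means the GPU setup (drivers / CUDA / cuDNN) is not
# complete, or no GPU is present on this machine.
gpus = tf.config.list_physical_devices('GPU')
print(gpus)

# list_physical_devices always returns a list, so this check is safe
# even on a machine with no GPU configured.
if not gpus:
    print("No GPU visible to TensorFlow")
```

If the list is empty, fix the driver/CUDA setup before training; TensorFlow will silently fall back to the CPU otherwise.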
Once everything is set up, the model zoo will automatically use your GPU during training, as long as you run stm32ai_main.py with that same Python environment.
Have a good day,
Julian