Which software do I need to install to start with STM32N6570-DK?

Anhem
Associate II

I am a newbie with the STM32N6570-DK.

My target is to deploy an AI model on this board. Which software do I need to install to get started with the STM32N6570-DK? I mean all the software.

Thank you.

17 REPLIES

Hi @Julian E. 

Here is my config file.

Concerning the graphics card, did you get an error? The code should use a GPU if it finds one, but it should still work even if you don't have one.

Yes, I think so, and I commented out the source code that checks for the GPU:

@hydra.main(version_base=None, config_path="", config_name="user_config")
def main(cfg: DictConfig) -> None:
    """
    Main entry point of the script.

    Args:
        cfg: Configuration dictionary.

    Returns:
        None
    """

    # # Configure the GPU (the 'general' section may be missing)
    # if "general" in cfg and cfg.general:
    #     # Set upper limit on usable GPU memory
    #     if "gpu_memory_limit" in cfg.general and cfg.general.gpu_memory_limit:
    #         set_gpu_memory_limit(cfg.general.gpu_memory_limit)
    #     else:
    #         print("[WARNING] The usable GPU memory is unlimited.\n"
    #               "Please consider setting the 'gpu_memory_limit' attribute "
    #               "in the 'general' section of your configuration file.")
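For reference, set_gpu_memory_limit is presumably just a thin wrapper around TensorFlow's logical-device API, roughly like the sketch below (an assumption; the actual helper in the model zoo common utilities may differ):

import tensorflow as tf

def set_gpu_memory_limit(gigabytes: int) -> None:
    """Cap the amount of GPU memory TensorFlow may allocate (sketch, not the model zoo code)."""
    for gpu in tf.config.list_physical_devices("GPU"):
        # memory_limit is expressed in MiB
        tf.config.set_logical_device_configuration(
            gpu,
            [tf.config.LogicalDeviceConfiguration(memory_limit=gigabytes * 1024)]
        )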
Anhem
Associate II

@Julian E. 

Here is my user_config.yaml. I kept it the same as the initial one. I see the stedgeai and IDE paths. Do I need to change these paths?

I also see there are some places referring to the STM32H7 board. Do I need to change these as well?

general:
  project_name: COCO_2017_person_Demo
  model_type: st_yolo_lc_v1
#choices=[st_ssd_mobilenet_v1, ssd_mobilenet_v2_fpnlite, tiny_yolo_v2, st_yolo_lc_v1, 
#         st_yolo_x, yolo_v8, yolo_v5u]
  model_path: 
  logs_dir: logs
  saved_models_dir: saved_models
  gpu_memory_limit: 16
  num_threads_tflite: 4
  global_seed: 127

operation_mode: chain_tqeb
#choices=['training' , 'evaluation', 'deployment', 'quantization', 'benchmarking',
#        'chain_tqeb','chain_tqe','chain_eqe','chain_qb','chain_eqeb','chain_qd ']

dataset:
  name: COCO_2017_person
  class_names: [ person ]
  training_path: /dataset/coco_person_2017_tfs/train
  validation_path:
  validation_split: 0.1
  test_path: /dataset/coco_person_2017_tfs/val
  quantization_path: /dataset/coco_person_2017_tfs/val
  quantization_split: 0.01

preprocessing:
  rescaling: { scale: 1/127.5, offset: -1 }
  resizing:
    aspect_ratio: fit
    interpolation: nearest
  color_mode: rgb
                       
data_augmentation:
  ########## For tiny_yolo_v2 and st_yolo_lc_v1 only ###########
  random_periodic_resizing:
    period: 10
    image_sizes: [(192, 192), (224, 224), (256, 256), (288, 288), (320, 320), (352, 352),
                  (384, 384), (416, 416), (448, 448), (480, 480), (512, 512),
                  (544, 544), (576, 576), (608, 608)]
  random_flip:
    mode: horizontal
  random_crop:
    crop_center_x: (0.25, 0.75)
    crop_center_y: (0.25, 0.75)
    crop_width: (0.5, 0.9)
    crop_height: (0.5, 0.9)
    change_rate: 0.9
  random_contrast:
    factor: 0.4
  random_brightness:
    factor: 0.3 

training:
  model:
    # alpha: 0.35
    input_shape: (192, 192, 3)
    # pretrained_weights: imagenet
  dropout:
  batch_size: 64
  epochs: 4
  optimizer:
    Adam:
      learning_rate: 0.005
  callbacks:
    ReduceLROnPlateau:
      monitor: val_map
      patience: 10
      factor: 0.25
    ModelCheckpoint:
      monitor: val_map
    EarlyStopping:
      monitor: val_map
      patience: 20

postprocessing:
  confidence_thresh: 0.001
  NMS_thresh: 0.5
  IoU_eval_thresh: 0.4
  plot_metrics: False   # Plot precision versus recall curves. Default is False.
  max_detection_boxes: 100

quantization:
  quantizer: TFlite_converter
  quantization_type: PTQ
  quantization_input_type: uint8
  quantization_output_type: float
  granularity: per_channel   #per_tensor
  optimize: False   #can be True if per_tensor
  export_dir: quantized_models

benchmarking:
  board: STM32H747I-DISCO

tools:
  stedgeai:
    version: 10.0.0
    optimization: balanced
    on_cloud: True
    path_to_stedgeai: C:/Users/<XXXXX>/STM32Cube/Repository/Packs/STMicroelectronics/X-CUBE-AI/<*.*.*>/Utilities/windows/stedgeai.exe
  path_to_cubeIDE: C:/ST/STM32CubeIDE_<*.*.*>/STM32CubeIDE/stm32cubeide.exe

deployment:
  c_project_path: ../../application_code/object_detection/STM32H7/
  IDE: GCC
  verbosity: 1
  hardware_setup:
    serie: STM32H7
    board: STM32H747I-DISCO

mlflow:
  uri: ./experiments_outputs/mlruns

hydra:
  run:
    dir: ./experiments_outputs/${now:%Y_%m_%d_%H_%M_%S}

 

When you run the command:

python stm32ai_main.py

it uses user_config.yaml by default.

 

If you want to use the one you sent me in the zip you should run:

python stm32ai_main.py --config-name YOUR_YAML.yaml
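If the YAML file is in another folder, you can also pass --config-path together with --config-name. For instance, to use one of the example configs shipped with the model zoo (assuming you run this from object_detection/src):

python stm32ai_main.py --config-path ./config_file_examples/ --config-name deployment_n6_ssd_mobilenet_v2_fpnlite_config.yaml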

 

You do indeed need to edit the ST Edge AI and CubeIDE paths.

(which you did in the YAML in your zip, but not in the user_config.yaml that is used when you do not specify --config-name)
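For reference, on a Linux install the sections to edit would look roughly like the sketch below. The stedgeai path assumes the default Linux install location of ST Edge AI Core 2.0; the CubeIDE path is only an assumed example, so adjust it to your machine; the benchmarking/deployment entries mirror the deployment_n6 example config:

tools:
  stedgeai:
    version: 10.0.0
    optimization: balanced
    on_cloud: False    # use the local ST Edge AI Core install instead of the cloud
    path_to_stedgeai: /opt/ST/STEdgeAI/2.0/Utilities/linux/stedgeai
  path_to_cubeIDE: /opt/st/stm32cubeide_1.17.0/stm32cubeide    # assumed path, adjust to your install

benchmarking:
  board: STM32N6570-DK

deployment:
  c_project_path: ../../application_code/object_detection/STM32N6/
  IDE: GCC
  verbosity: 1
  hardware_setup:
    serie: STM32N6
    board: STM32N6570-DK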

 

In your error I see "permission denied", so you may have to edit the permissions of your stedgeai folder with chmod.
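For example, something like this should be enough (a sketch assuming the default Linux install location; read/execute for all users is sufficient, full 777 permissions are not needed):

sudo chmod -R a+rX /opt/ST/STEdgeAI/
# if the stedgeai binary itself lacks the execute bit, add it explicitly:
sudo chmod +x /opt/ST/STEdgeAI/2.0/Utilities/linux/stedgeai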

 

Let me know if it works.

Also, try to revert the Python file for the GPU check to its original state.

 

Julian


In order to give better visibility on the answered topics, please click on 'Accept as Solution' on the reply which solved your issue or answered your question.

Hi @Julian E. 

After editing the permissions with

sudo chmod 777 -R /opt/ST/STEdgeAI/
sudo chmod 777 -R /opt/ST/STEdgeAI/*

I ran:

python3 stm32ai_main.py --config-path ./config_file_examples/ --config-name deployment_n6_ssd_mobilenet_v2_fpnlite_config.yaml

But the process took too long and stopped at the point you can see in the attached message. I waited 30 minutes, but it did not do anything, the same as before changing the permissions. My board has 2 LEDs (red and orange). It seems that I need to use a USB-C to USB-C cable (not USB-A to USB-C) to make sure there is enough power. Anyway, which cable should I use: a data cable, or a charging cable like the one used for mobile phone charging? But that is a different problem.

How can I fix this issue with the deployment and the permissions? I am afraid that I need to reinstall ST Edge AI because it currently does not run after changing the permissions. Thank you.

 

(stm_32)  /media/anhem/mnt/STM32N6570DK_dev/stm32ai-modelzoo-services-1dc52d8ef939fb7875c6e00e7a6f6311f07c5cb1/object_detection/src  python3 stm32ai_main.py --config-path ./config_file_examples/ --config-name deployment_n6_ssd_mobilenet_v2_fpnlite_config.yaml                   

2025-03-24 09:14:14.630486: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
2025-03-24 09:14:14.630502: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
[INFO] : Running `deployment` operation mode
[INFO] : ClearML config check
[INFO] : The random seed for this simulation is 123
INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
[INFO] : Generating C header file for Getting Started...
[INFO] : This TFLITE model doesnt contain a post-processing layer
loading model.. model_path="/media/anhem/mnt/STM32N6570DK_dev/stm32ai-modelzoo-services-1dc52d8ef939fb7875c6e00e7a6f6311f07c5cb1/stm32ai-modelzoo/object_detection/ssd_mobilenet_v2_fpnlite/ST_pretrainedmodel_public_dataset/coco_2017_person/ssd_mobilenet_v2_fpnlite_035_192/ssd_mobilenet_v2_fpnlite_035_192_int8.tflite"
loading conf file.. "/media/anhem/mnt/STM32N6570DK_dev/stm32ai-modelzoo-services-1dc52d8ef939fb7875c6e00e7a6f6311f07c5cb1/application_code/object_detection/STM32N6/stmaic_STM32N6570-DK.conf" config="None"
"n6 release" configuration is used
[INFO] : Selected board :  "STM32N6570-DK Getting Started Object Detection (STM32CubeIDE)" (stm32_cube_ide/n6 release/stm32n6)
[INFO] : Compiling the model and generating optimized C code + Lib/Inc files:  /media/anhem/mnt/STM32N6570DK_dev/stm32ai-modelzoo-services-1dc52d8ef939fb7875c6e00e7a6f6311f07c5cb1/stm32ai-modelzoo/object_detection/ssd_mobilenet_v2_fpnlite/ST_pretrainedmodel_public_dataset/coco_2017_person/ssd_mobilenet_v2_fpnlite_035_192/ssd_mobilenet_v2_fpnlite_035_192_int8.tflite
setting STM.AI tools.. root_dir="", req_version=""
 Cube AI Path: "/opt/ST/STEdgeAI/2.0/Utilities/linux/stedgeai".
[INFO] : Offline CubeAI used; Selected tools:  10.0.0 (x-cube-ai pack)
loading conf file.. "/media/anhem/mnt/STM32N6570DK_dev/stm32ai-modelzoo-services-1dc52d8ef939fb7875c6e00e7a6f6311f07c5cb1/application_code/object_detection/STM32N6/stmaic_STM32N6570-DK.conf" config="None"
"n6 release" configuration is used
compiling... "ssd_mobilenet_v2_fpnlite_035_192_int8_tflite" session
 model_path  : ['/media/anhem/mnt/STM32N6570DK_dev/stm32ai-modelzoo-services-1dc52d8ef939fb7875c6e00e7a6f6311f07c5cb1/stm32ai-modelzoo/object_detection/ssd_mobilenet_v2_fpnlite/ST_pretrainedmodel_public_dataset/coco_2017_person/ssd_mobilenet_v2_fpnlite_035_192/ssd_mobilenet_v2_fpnlite_035_192_int8.tflite']
 tools       : 10.0.0 (x-cube-ai pack)
 target      : "STM32N6570-DK Getting Started Object Detection (STM32CubeIDE)" (stm32_cube_ide/n6 release/stm32n6)
 options     : --st-neural-art default@/media/anhem/mnt/STM32N6570DK_dev/stm32ai-modelzoo-services-1dc52d8ef939fb7875c6e00e7a6f6311f07c5cb1/application_code/object_detection/STM32N6/Model/user_neuralart.json --input-data-type uint8 --inputs-ch-position chlast

 

Hello @Anhem,

 

First, concerning the cable, you need a data cable.

Also, make sure to connect it to the port labeled ST-LINK V3 (the same side as the boot pins).

 

Then, concerning the fact that it blocks: next time, can you try to press ENTER when it blocks? I believe that sometimes I have to do that.

 

I think we can check if the issue comes from ST Edge AI Core or Model Zoo by manually using ST Edge AI Core. To do so, please try the following:

  1. Go to your ST Edge AI folder: 
    /opt/ST/STEdgeAI/2.0/Utilities/linux/
  2. Then copy your model into this folder (either a .h5, .onnx or .tflite)
  3. Open a terminal and run the command:
    ./stedgeai generate --model YOUR_MODEL --target stm32n6 --st-neural-art

(I don't use Linux, so the first part of the command may be wrong, but the options are correct.)

Let it run; if it runs as it should, you should get a message saying it is complete.

If it works, we at least know that ST Edge AI core is working.
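For example, with the quantized SSD MobileNet model from your deployment config, the manual run would look like this (a sketch; copy the .tflite into the folder first):

cd /opt/ST/STEdgeAI/2.0/Utilities/linux/
./stedgeai generate --model ssd_mobilenet_v2_fpnlite_035_192_int8.tflite --target stm32n6 --st-neural-art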

 

In the meantime, I will test the exact model that you use. I don't think the issue comes from that, but you never know.

 

Have a good day,

Julian


In order to give better visibility on the answered topics, please click on 'Accept as Solution' on the reply which solved your issue or answered your question.

Hi @Julian E. 

Thank you for your response.

First, concerning the cable, you need a data cable. Yes, I will use a USB-C to USB-C data cable.

Also, make sure to connect it to the port labeled ST-LINK V3 (the same side as the boot pins). Yes, that is the default on the board: jumper JP2 set to STLK. I attached the image below.

Then, concerning the fact that it blocks: next time, can you try to press ENTER when it blocks? I believe that sometimes I have to do that. I tried pressing Enter while it was running, but it did not solve the problem. However, when I pressed Enter twice and waited, it seemed to build and flash, but I got an error.

(stm_32)  /media/anhem/mnt/STM32N6570DK_dev/stm32ai-modelzoo-services-1dc52d8ef939fb7875c6e00e7a6f6311f07c5cb1/object_detection/src  python3 stm32ai_main.py --config-path ./config_file_examples/ --config-name deployment_n6_ssd_mobilenet_v2_fpnlite_config.yaml

2025-03-25 11:11:00.143876: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
2025-03-25 11:11:00.143895: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
[INFO] : Running `deployment` operation mode
[INFO] : ClearML config check
[INFO] : The random seed for this simulation is 123
INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
[INFO] : Generating C header file for Getting Started...
[INFO] : This TFLITE model doesnt contain a post-processing layer
loading model.. model_path="/media/anhem/mnt/STM32N6570DK_dev/stm32ai-modelzoo-services-1dc52d8ef939fb7875c6e00e7a6f6311f07c5cb1/stm32ai-modelzoo/object_detection/ssd_mobilenet_v2_fpnlite/ST_pretrainedmodel_public_dataset/coco_2017_person/ssd_mobilenet_v2_fpnlite_035_192/ssd_mobilenet_v2_fpnlite_035_192_int8.tflite"
loading conf file.. "/media/anhem/mnt/STM32N6570DK_dev/stm32ai-modelzoo-services-1dc52d8ef939fb7875c6e00e7a6f6311f07c5cb1/application_code/object_detection/STM32N6/stmaic_STM32N6570-DK.conf" config="None"
"n6 release" configuration is used
[INFO] : Selected board :  "STM32N6570-DK Getting Started Object Detection (STM32CubeIDE)" (stm32_cube_ide/n6 release/stm32n6)
[INFO] : Compiling the model and generating optimized C code + Lib/Inc files:  /media/anhem/mnt/STM32N6570DK_dev/stm32ai-modelzoo-services-1dc52d8ef939fb7875c6e00e7a6f6311f07c5cb1/stm32ai-modelzoo/object_detection/ssd_mobilenet_v2_fpnlite/ST_pretrainedmodel_public_dataset/coco_2017_person/ssd_mobilenet_v2_fpnlite_035_192/ssd_mobilenet_v2_fpnlite_035_192_int8.tflite
setting STM.AI tools.. root_dir="", req_version=""
 Cube AI Path: "/opt/ST/STEdgeAI/2.0/Utilities/linux/stedgeai".
[INFO] : Offline CubeAI used; Selected tools:  10.0.0 (x-cube-ai pack)
loading conf file.. "/media/anhem/mnt/STM32N6570DK_dev/stm32ai-modelzoo-services-1dc52d8ef939fb7875c6e00e7a6f6311f07c5cb1/application_code/object_detection/STM32N6/stmaic_STM32N6570-DK.conf" config="None"
"n6 release" configuration is used
compiling... "ssd_mobilenet_v2_fpnlite_035_192_int8_tflite" session
 model_path  : ['/media/anhem/mnt/STM32N6570DK_dev/stm32ai-modelzoo-services-1dc52d8ef939fb7875c6e00e7a6f6311f07c5cb1/stm32ai-modelzoo/object_detection/ssd_mobilenet_v2_fpnlite/ST_pretrainedmodel_public_dataset/coco_2017_person/ssd_mobilenet_v2_fpnlite_035_192/ssd_mobilenet_v2_fpnlite_035_192_int8.tflite']
 tools       : 10.0.0 (x-cube-ai pack)
 target      : "STM32N6570-DK Getting Started Object Detection (STM32CubeIDE)" (stm32_cube_ide/n6 release/stm32n6)
 options     : --st-neural-art default@/media/anhem/mnt/STM32N6570DK_dev/stm32ai-modelzoo-services-1dc52d8ef939fb7875c6e00e7a6f6311f07c5cb1/application_code/object_detection/STM32N6/Model/user_neuralart.json --input-data-type uint8 --inputs-ch-position chlast


"series" value is not coherent.. stm32n6 != stm32n6npu
 results -> RAM=621,048 IO=110,592:153,800 WEIGHTS=1,618,465 MACC=0 RT_RAM=1,773 RT_FLASH=632,783 LATENCY=0.000
[INFO] : Optimized C code + Lib/Inc files generation done.
[INFO] : Building the STM32 c-project..
deploying the c-project.. "STM32N6570-DK Getting Started Object Detection (STM32CubeIDE)" (stm32_cube_ide/n6 release/stm32n6)
updating.. n6 release
 -> s:copying file.. "network.c" to /media/anhem/mnt/STM32N6570DK_dev/stm32ai-modelzoo-services-1dc52d8ef939fb7875c6e00e7a6f6311f07c5cb1/application_code/object_detection/STM32N6/Model/network.c
 -> s:copying file.. "network_ecblobs.h" to /media/anhem/mnt/STM32N6570DK_dev/stm32ai-modelzoo-services-1dc52d8ef939fb7875c6e00e7a6f6311f07c5cb1/application_code/object_detection/STM32N6/Model/network_ecblobs.h
 -> s:copying file.. "network_atonbuf.xSPI2.raw" to /media/anhem/mnt/STM32N6570DK_dev/stm32ai-modelzoo-services-1dc52d8ef939fb7875c6e00e7a6f6311f07c5cb1/application_code/object_detection/STM32N6/Model/network_atonbuf.xSPI2.raw
 -> s:removing dir.. /media/anhem/mnt/STM32N6570DK_dev/stm32ai-modelzoo-services-1dc52d8ef939fb7875c6e00e7a6f6311f07c5cb1/application_code/object_detection/STM32N6/Middlewares/AI_Runtime/Lib/GCC/ARMCortexM55
 -> s:copying dir.. "ARMCortexM55" to /media/anhem/mnt/STM32N6570DK_dev/stm32ai-modelzoo-services-1dc52d8ef939fb7875c6e00e7a6f6311f07c5cb1/application_code/object_detection/STM32N6/Middlewares/AI_Runtime/Lib/GCC/ARMCortexM55
 -> s:removing dir.. /media/anhem/mnt/STM32N6570DK_dev/stm32ai-modelzoo-services-1dc52d8ef939fb7875c6e00e7a6f6311f07c5cb1/application_code/object_detection/STM32N6/Middlewares/AI_Runtime/Inc
 -> s:copying dir.. "Inc" to /media/anhem/mnt/STM32N6570DK_dev/stm32ai-modelzoo-services-1dc52d8ef939fb7875c6e00e7a6f6311f07c5cb1/application_code/object_detection/STM32N6/Middlewares/AI_Runtime/Inc
 -> s:removing dir.. /media/anhem/mnt/STM32N6570DK_dev/stm32ai-modelzoo-services-1dc52d8ef939fb7875c6e00e7a6f6311f07c5cb1/application_code/object_detection/STM32N6/Middlewares/AI_Runtime/Npu/ll_aton
 -> s:copying dir.. "ll_aton" to /media/anhem/mnt/STM32N6570DK_dev/stm32ai-modelzoo-services-1dc52d8ef939fb7875c6e00e7a6f6311f07c5cb1/application_code/object_detection/STM32N6/Middlewares/AI_Runtime/Npu/ll_aton
 -> u:copying file.. "app_config.h" to /media/anhem/mnt/STM32N6570DK_dev/stm32ai-modelzoo-services-1dc52d8ef939fb7875c6e00e7a6f6311f07c5cb1/application_code/object_detection/STM32N6/Inc/app_config.h
 -> updating cproject file "/media/anhem/mnt/STM32N6570DK_dev/stm32ai-modelzoo-services-1dc52d8ef939fb7875c6e00e7a6f6311f07c5cb1/application_code/object_detection/STM32N6/STM32CubeIDE" with "NetworkRuntime1000_CM55_GCC.a"
building.. n6 release
flashing.. n6 release STM32N6570-DK
Error executing job with overrides: []
Traceback (most recent call last):
  File "/home/anhem/miniconda3/envs/stm_32/lib/python3.10/site-packages/clearml/binding/hydra_bind.py", line 230, in _patched_task_function
    return task_function(a_config, *a_args, **a_kwargs)
  File "/media/anhem/mnt/STM32N6570DK_dev/stm32ai-modelzoo-services-1dc52d8ef939fb7875c6e00e7a6f6311f07c5cb1/object_detection/src/stm32ai_main.py", line 228, in main
    process_mode(cfg)
  File "/media/anhem/mnt/STM32N6570DK_dev/stm32ai-modelzoo-services-1dc52d8ef939fb7875c6e00e7a6f6311f07c5cb1/object_detection/src/stm32ai_main.py", line 102, in process_mode
    deploy(cfg)
  File "/media/anhem/mnt/STM32N6570DK_dev/stm32ai-modelzoo-services-1dc52d8ef939fb7875c6e00e7a6f6311f07c5cb1/object_detection/src/../deployment/deploy.py", line 111, in deploy
    stm32ai_deploy_stm32n6(target=board, stlink_serial_number=stlink_serial_number, stm32ai_version=stm32ai_version, c_project_path=c_project_path,
  File "/media/anhem/mnt/STM32N6570DK_dev/stm32ai-modelzoo-services-1dc52d8ef939fb7875c6e00e7a6f6311f07c5cb1/object_detection/src/../../common/deployment/common_deploy.py", line 528, in stm32ai_deploy_stm32n6
    stmaic.build(session, user_files=user_files, serial_number=stlink_serial_number)
  File "/media/anhem/mnt/STM32N6570DK_dev/stm32ai-modelzoo-services-1dc52d8ef939fb7875c6e00e7a6f6311f07c5cb1/object_detection/src/../../common/stm32ai_local/build.py", line 513, in cmd_build
    _cube_ide_builder(cube_ide_exe[0], session, used_conf,
  File "/media/anhem/mnt/STM32N6570DK_dev/stm32ai-modelzoo-services-1dc52d8ef939fb7875c6e00e7a6f6311f07c5cb1/object_detection/src/../../common/stm32ai_local/build.py", line 470, in _cube_ide_builder
    _programm_dev_board(conf, session.series, serial_number=serial_number)
  File "/media/anhem/mnt/STM32N6570DK_dev/stm32ai-modelzoo-services-1dc52d8ef939fb7875c6e00e7a6f6311f07c5cb1/object_detection/src/../../common/stm32ai_local/build.py", line 231, in _programm_dev_board
    st_links, _ = get_stm32_board_interfaces(series)
  File "/media/anhem/mnt/STM32N6570DK_dev/stm32ai-modelzoo-services-1dc52d8ef939fb7875c6e00e7a6f6311f07c5cb1/object_detection/src/../../common/stm32ai_local/stm32_tools.py", line 329, in get_stm32_board_interfaces
    st_links[idx] = _stm32_get_info(app[0], serial_number=st_link['sn'], series=series)
  File "/media/anhem/mnt/STM32N6570DK_dev/stm32ai-modelzoo-services-1dc52d8ef939fb7875c6e00e7a6f6311f07c5cb1/object_detection/src/../../common/stm32ai_local/stm32_tools.py", line 309, in _stm32_get_info
    run_shell_cmd(cmd_line, logger=cur_logger, parser=parser, assert_on_error=True)
  File "/media/anhem/mnt/STM32N6570DK_dev/stm32ai-modelzoo-services-1dc52d8ef939fb7875c6e00e7a6f6311f07c5cb1/object_detection/src/../../common/stm32ai_local/utils.py", line 312, in run_shell_cmd
    raise excep_
  File "/media/anhem/mnt/STM32N6570DK_dev/stm32ai-modelzoo-services-1dc52d8ef939fb7875c6e00e7a6f6311f07c5cb1/object_detection/src/../../common/stm32ai_local/utils.py", line 305, in run_shell_cmd
    raise RuntimeError('invalid command ' + '\"{}\"'.format(str_args))
RuntimeError: invalid command "STM32_Programmer_CLI --connect port=SWD mode=HOTPLUG -hardRst sn=0025001F3333511831363730"

Set the environment variable HYDRA_FULL_ERROR=1 for a complete stack trace.

It seems that the error is with STM32_Programmer_CLI. I added STM32_Programmer_CLI to the PATH on Ubuntu and ran this command:

STM32_Programmer_CLI --connect port=SWD mode=HOTPLUG sn=0025001F3333511831363730

And I got this output

      -------------------------------------------------------------------
                        STM32CubeProgrammer v2.19.0                  
      -------------------------------------------------------------------

ST-LINK SN  : 0025001F3333511831363730
ST-LINK FW  : V3J15M6
Board       : STM32N6570-DK
Voltage     : 3,29V
Error: Unable to get core ID
Error: No STM32 target found! If your product embeds Debug Authentication, please perform a discovery using Debug Authentication
2nd connect tentative with frequency (8MHz)
ST-LINK SN  : 0025001F3333511831363730
ST-LINK FW  : V3J15M6
Board       : STM32N6570-DK
Voltage     : 3,29V
Error: Unable to get core ID
Error: No STM32 target found! If your product embeds Debug Authentication, please perform a discovery using Debug Authentication

It seems that the board is detected but the STM32 core is not found. How can I fix this issue? My current settings: BOOT0 - Right, BOOT1 - Left, jumper set to STLINK, board connected to the PC via USB-A to USB-C.

 

I think we can check if the issue comes from ST Edge AI Core or Model Zoo by manually using ST Edge AI Core. To do so, please try the following:

  1. Go to your ST Edge AI folder: 
    /opt/ST/STEdgeAI/2.0/Utilities/linux/
  2. Then copy your model into this folder (either a .h5, .onnx or .tflite)
  3. Open a terminal and run the command:
    ./stedgeai generate --model YOUR_MODEL --target stm32n6 --st-neural-art

(I don't use Linux, so the first part of the command may be wrong, but the options are correct.)

Let it run; if it runs as it should, you should get a message saying it is complete.

If it works, we at least know that ST Edge AI core is working.

Yes, I checked on my side, and ST Edge AI works well. Here is the output and some generated files; please see the attached images and log file.

 

In the meantime, I will test the exact model that you use. I don't think the issue comes from that, but you never know. Yes, thank you so much. Now I am stuck here, and I could not revert the initial permissions of ST Edge AI.

 

I am waiting for good news from you. Thank you so much.

 

Julian E.
ST Employee

Hello @Anhem,

 

Please set both boot pins to the right. This should solve the issue.

Also, please only use a USB-C to USB-C data cable. When the application runs, you will otherwise encounter an issue because of insufficient voltage.
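(For reference, once both boot pins are on the right, you can quickly check that the core is reachable by re-running the same connect command as before; the serial number below is the one from your earlier output:)

STM32_Programmer_CLI --connect port=SWD mode=HOTPLUG sn=0025001F3333511831363730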

 

Have a good day,

Julian 

 


In order to give better visibility on the answered topics, please click on 'Accept as Solution' on the reply which solved your issue or answered your question.

Hi @Julian E. 

Please set both boot pins to the right. This should solve the issue.

In this guide, https://github.com/STMicroelectronics/stm32ai-modelzoo-services/blob/main/object_detection/deployment/README_STM32N6.md#table-of-contents, in the deployment part, step 3.3 says to set BOOT0 - RIGHT, and after running the commands

cd ../src/
python stm32ai_main.py --config-path ./config_file_examples/ --config-name deployment_n6_ssd_mobilenet_v2_fpnlite_config.yaml

both pins should be set to the LEFT.

So I will try as you suggested: first I will set both pins to the RIGHT, and after running the Python script I will set both pins to the LEFT to run inference.

Also, please only use a USB-C to USB-C data cable. When the application runs, you will otherwise encounter an issue because of insufficient voltage.

Yes, understood.

 

Thank you so much. I could successfully run the TFLite model from the Model Zoo. You are right, we need to set both BOOT pins to the right before running the Python script, and turn them to the LEFT afterwards. I also used a USB-C to USB-C data cable (LED LD3 is yellow-green, so that's fine). Moreover, on Linux we need to solve the permission issue and add some STM32 components to the PATH. That is the experience I want to share.
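For example, the STM32CubeProgrammer CLI can be added to the PATH with a line like the one below in ~/.bashrc (the install path is an assumption, adjust it to your machine):

export PATH=$PATH:$HOME/STMicroelectronics/STM32Cube/STM32CubeProgrammer/bin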

 

I have one more question. As I understand it, I am currently running from internal flash (user internal memory). Is that right? And how much memory can I use with this model? I want to know this value to estimate whether my AI model can run with these settings.

 

Moreover, I know that the STM32N6570-DK supports an SD card. Does that mean we can deploy all the C source code + the AI model to the SD card and run inference for LARGER models? If so, could you please provide a link to a guide on how to do this (how to set up the board to run from an SD card, how to build and flash; it would be even better if it comes with an AI model)?