2025-05-21 1:19 AM
Hi,
I'm working on implementing a Spiking Neural Network on a Neural-ART equipped STM32N6 board. Since a true SNN uses operators that are not supported by the ST Edge AI compiler, I have opted to create an ANN model that simulates spiking behaviour. My problem is that the current approach uses custom RNN layers, which seem to be supported neither by ST Edge AI nor by TFLite conversion. A partial workaround was to unroll my "RNN layers" over a fixed number of time steps. This allowed me to convert my model to TFLite, but when I import the model into the ST Edge AI suite I am unable to quantize, optimize, or benchmark it. Am I still using something unsupported that blocks my model, or am I missing some other step to use it in ST Edge AI?
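For context, the unrolling idea looks roughly like this (a simplified sketch of the attached code, not the exact implementation; the unroll length, leak factor, threshold, and output reduction are illustrative, and it assumes tf.keras in TF 2.x, where TF ops on Keras tensors are auto-wrapped into op layers):

import tensorflow as tf

TIME_STEPS = 8   # fixed unroll length (illustrative)
DECAY = 0.9      # membrane leak factor (illustrative)

def spike(v, threshold=1.0):
    # Stand-in for the custom step activation: emit 1.0 when the
    # membrane potential crosses the threshold, else 0.0.
    return tf.cast(v > threshold, tf.float32)

def build_unrolled_neuron(input_dim, units):
    # One slice per time step with shared Dense weights: the recurrence
    # is flattened into a plain feed-forward graph (no RNN/while-loop ops).
    inputs = tf.keras.Input(shape=(TIME_STEPS, input_dim))
    project = tf.keras.layers.Dense(units)
    potential = project(inputs[:, 0, :])
    spikes = [spike(potential)]
    for t in range(1, TIME_STEPS):
        potential = DECAY * potential + project(inputs[:, t, :])
        spikes.append(spike(potential))
    output = tf.keras.layers.Average()(spikes)  # e.g. mean spike rate
    return tf.keras.Model(inputs, output)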
Attached are the unrolled single-neuron model in .tflite format, snn_to_ann_neuron.py (the original implementation), and unrolled_s2a.py (my code to unroll it).
Thanks in advance!
2025-05-21 7:21 AM - edited 2025-05-21 7:22 AM
Hello @brianon,
What command did you use?
For a non-Neural-ART STM32, with ST Edge AI Core v2.1, I successfully generated code for your attached model:
stedgeai.exe generate --model snu_model_float32.tflite --target stm32
ST Edge AI Core v2.1.0-20194
Have a good day,
Julian
2025-05-26 7:39 AM
Hi @Julian E.
Right now my goal is to compile it for a Neural-ART equipped STM32N6. I have tried both the command-line tool and the Developer Cloud, getting errors in both. With the command-line tool, I use:
stedgeai.exe generate --model snu_model_float32.tflite --target stm32n6 --st-neural-art
It only gives me warnings that my operators are not quantized, plus an internal error about a "NoneType". In the Developer Cloud, my model is apparently not eligible for quantization or optimization, and I get the same errors.
Could it be that I am still using an unsupported operator, or have I made some other fundamental error in my implementation?
2025-05-26 7:51 AM - edited 2025-05-26 7:52 AM
Hello Brian,
As a first step, can you try following the Developer Cloud guidelines on how to quantize your TFLite model (click the LEARN MORE link in the dev cloud)? You can also quantize it yourself in your Python code when generating the .tflite file; the dev cloud help provides example code for that.
This is a required step for using the Neural-ART accelerator properly.
Note that it may or may not help with the "NoneType" error, but it is a strongly advised step here.
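For reference, full-integer post-training quantization usually looks something like this (a minimal sketch along the lines of the dev cloud example; the model path, input shapes, and the random calibration generator are placeholders you would replace with your own):

import numpy as np
import tensorflow as tf

TIME_STEPS, INPUT_DIM = 8, 4  # placeholders: match your model's input shape
model = tf.keras.models.load_model("snu_model.keras")  # placeholder: your float model

def representative_data_gen():
    # Calibration samples: use real inputs here; random data is only a placeholder.
    for _ in range(100):
        yield [np.random.rand(1, TIME_STEPS, INPUT_DIM).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen
# Full int8 quantization, as required to target the Neural-ART accelerator.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

with open("snu_model_int8.tflite", "wb") as f:
    f.write(converter.convert())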
2025-05-27 5:50 AM - edited 2025-05-28 9:17 AM
Hi @SlothGrill,
Thank you for the pointer. The example code helped partially: at least I no longer get the warnings that my operators are not quantized. As you suspected, though, it did not help with the "NoneType" problem. Attached is a zip with my updated neuron code, the unrolling function, and the code to export and quantize an example model using my neuron. The second zip contains the quantized model in TFLite format.
Am I doing something wrong when exporting/quantizing that results in the NoneType internal error? Or is there an underlying problem in my neuron code?
2025-05-28 9:21 AM
Hello @brianon ,
Thanks for all the input.
I have the same issue here, even when reducing your cell to something simpler. I will forward the model to experts here :)
On my side, what I've tried that seems to make the generation succeed is to replace your StepFunction with something more "standard" as an activation function (tf.nn.relu); with that change, the generation runs to the end.
In your code, just for you to test: in snn_to_ann_neuron.py, remove your StepFunction and write StepFunction = tf.nn.relu.
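Concretely, the temporary change would be just this (a test only, not the spiking behaviour you ultimately want):

import tensorflow as tf

# Temporary experiment: swap the custom step activation for a built-in op
# so the ST Edge AI compiler sees a standard graph (no Lambda layer involved).
StepFunction = tf.nn.relu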
I'm pretty sure that's not what you want to do, but if you can use an activation function that comes directly from Keras, I guess some parts might be handled better (Lambda layers are a bit complex to handle...).
I will come back to you on the limitations you are encountering and on whether the behaviour you observe is a bug.
Cheers