After analyzing a tflite network with X-CUBE-AI, I get the warning "NOT IMPLEMENTED: Shape with 1 dimensions not supported: (15600,)". How can I fix this issue? Thank you

MSant.11
Associate III
 
4 REPLIES
JTedot
Associate II

Same problem, still no answer. Judging by your specific input, you want to use YAMNet on embedded devices as well, right?

MSant.11
Associate III

Yes, that's what I'd like to do.

Do you know of any working sound classification models that can also be used on an embedded STM board?

Self-trained models seem to work. Check out this article: https://www.kdnuggets.com/2020/02/audio-data-analysis-deep-learning-python-part-1.html

In the end, you could convert your model via

import tensorflow as tf

# Convert the trained Keras model to a TensorFlow Lite flatbuffer
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# Write the converted model to disk
with open("my_model.tflite", "wb") as f:
    f.write(tflite_model)

and pass that to X-CUBE-AI; it passes the analysis correctly. I'm thinking about dropping YAMNet on embedded systems, since too much happens abstractly under the hood and isn't really accessible. If you build your own model, you are in full control, but you also have to extract the features yourself. You can't pass a network an audio file; you have to pass it a highly filtered spectrogram (in essence, audio classification is image classification).
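For what it's worth, here's a rough sketch of what that feature-extraction step could look like. This is just an illustration using librosa; the file name, sample rate, and mel parameters below are placeholders I picked, not something specific to your setup or to YAMNet.

import numpy as np
import librosa

# Load a mono clip and resample to 16 kHz (assumed rate)
waveform, sr = librosa.load("example.wav", sr=16000, mono=True)

# Mel spectrogram: 64 mel bands, 25 ms windows, 10 ms hop at 16 kHz
mel = librosa.feature.melspectrogram(
    y=waveform, sr=sr, n_fft=400, hop_length=160, n_mels=64
)

# Log scale (dB), which is the usual "image" a classifier sees
log_mel = librosa.power_to_db(mel, ref=np.max)

# Add batch and channel dimensions so it looks like a grayscale image
features = log_mel[np.newaxis, ..., np.newaxis]  # shape (1, 64, frames, 1)

You would then train your Keras model on tensors shaped like features, and compute the same kind of spectrogram on the board at inference time.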

MSant.11
Associate III

Thank you!

Do you have any experience with FP-AI-SENSING1 and its Middleware ASC (acoustic scene classification) ?

https://www.st.com/en/embedded-software/fp-ai-sensing1.html