Issue with TensorFlow Dense Layer Conversion to .nb Format
2025-02-28 6:56 AM
Hello,
I am encountering an issue when converting a TensorFlow model with a Dense layer to .tflite and then using the ST Edge AI tool to convert it to the .nb format, following the ST Edge AI Guide for MPU.
With a smaller input shape, the conversion succeeds. For example, the following code works as expected:
import tensorflow as tf

# A single Dense layer on a (1, 384) input converts without issues
input_features = tf.keras.Input(shape=(1, 384), dtype=tf.float32)
output = tf.keras.layers.Dense(1536)(input_features)
model = tf.keras.Model(inputs=input_features, outputs=output)
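The Keras model is then converted to .tflite and passed to the ST Edge AI tool. A minimal sketch of that conversion step, assuming the standard TensorFlow Lite converter and an illustrative file name:

# Convert the Keras model defined above into a TFLite flatbuffer
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
# Write the flatbuffer to disk; this .tflite file is the input to the ST Edge AI tool
with open("dense_model.tflite", "wb") as f:  # file name chosen for illustration only
    f.write(tflite_model)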
However, increasing the input dimension to (1500, 384) results in the following error:
ST Edge AI Core v2.0.0-20049
PASS: 0%| | 0/2 [00:00<?, ?it/s]
Galcore warning: MMU is disabled!
E [main.c:vnn_VerifyGraph:93]CHECK STATUS(-3:The requested set of parameters produce a configuration that cannot be supported.)
E [main.c:main:236]CHECK STATUS(-3:The requested set of parameters produce a configuration that cannot be supported.)
E 21:37:15 Fatal model generation error: 64768
E010(InvalidModelError): Error during NBG compilation, model is not supported
Is there a known limitation on the input/output dimensions for Dense layers when converting to .nb format? If so, is there a recommended approach to handle larger dimensions, or any workarounds to enable successful conversion and NPU acceleration?
Thank you for your time and assistance.
Best regards,
Justin
Solved!
Labels: ST Edge AI Core
Accepted Solutions
2025-03-03 2:35 AM
Hello @J-WTY,
You can find the constraints regarding ST Edge AI Core here: https://stedgeai-dc.st.com/assets/embedded-docs/index.html
Depending on what you use (TensorFlow, TFLite, Keras, or ONNX) and whether you use the NPU (for example on the STM32N6), the constraints on the layers may differ. You can find a dedicated page for each of them.
There is a common set of constraints, which is the following:
Common constraints
- input and output tensors must not be dynamic:
  - variable-length batch dimension (i.e. (None,)) is considered as equal to 1
  - must not be greater than 6D
  - dimension must be in the range [0, 65536[
  - batch dimension is not supported for the axis parameter
- data type for the weights/activations tensors must be:
  - float32, int8, uint8
  - only int32 for the bias tensor is considered
  - for some operators, bool type is also supported
- operator with un-connected output is not supported
- mixed data operations (i.e. hybrid operators) are not supported; activations and weights should be quantized
- generated c-model is always channel-last (i.e. NHWC format)
- a 1D operator is mapped on the respective 2D operator by adding a singleton dimension on the input: (12, 3) -> (12, 1, 3)
So, instead of using a (1500,384) shape, please use a (1,1500,384) shape. For example:
import tensorflow as tf

# Add a leading singleton dimension: (1500, 384) -> (1, 1500, 384)
input_features = tf.keras.Input(shape=(1, 1500, 384), dtype=tf.float32)
output = tf.keras.layers.Dense(1536)(input_features)
model = tf.keras.Model(inputs=input_features, outputs=output)
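With this shape, the Dense layer still operates on the last axis only, so the model output should be (None, 1, 1500, 1536). An optional sanity check before converting to .tflite:

# Optional check: Dense(1536) is applied along the last axis
print(model.output_shape)  # expected: (None, 1, 1500, 1536)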
In order to give better visibility on the answered topics, please click on 'Accept as Solution' on the reply which solved your issue or answered your question.
2025-03-18 9:09 AM
Hello Julian E.,
Thank you for your answer, and I'm sorry for not replying for several days.
With the help of your suggestion, I will try to solve it as soon as possible. Thanks!
Best regards,
J-WTY
2025-03-23 6:09 PM
Hello @Julian E.,
I found that using (1, 1500, 384) for the shape of the Input layer works!
Thanks a lot for your help!
But I would like to know: is this due to the constraint "a 1D operator is mapped on the respective 2D operator by adding a singleton dimension on the input: (12, 3) -> (12, 1, 3)"? Thanks!
Best regards,
J-WTY
2025-03-24 7:00 AM
Hello @J-WTY,
Thanks for the update!
Regarding your question, I am not exactly sure.
In my opinion, because ST Edge AI Core was originally designed for image models, 2D operators were probably already available in the C code.
So instead of developing a 1D variant of each operator, they map the 1D input to 2D, since the result is probably the same either way.
Keep in mind that I am speculating here, but I think it is a reasonable explanation.
Have a good day,
Julian
In order to give better visibility on the answered topics, please click on 'Accept as Solution' on the reply which solved your issue or answered your question.