
INTERNAL ERROR: 'FLOAT16' --> Is it that STM does not support tf.float16 formats?

rickforescue
Associate III

I tried to compress a Keras model. It is a very simple neural network model with just fully connected (dense) layers. I converted it to tflite and compressed the weights to tf.float16, which were originally tf.float32. When I upload the model, it gives me this error:

Neural Network Tools for STM32AI v1.6.0 (STM.ai v7.1.0-RC3) 

INTERNAL ERROR: 'FLOAT16'

Is it that STM does not support float16 formats?

More info: it's a tflite model and I used the STM32Cube.AI runtime.
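For reference, float16 weight compression halves storage at the cost of roughly three decimal digits of precision. A standalone sketch of that round-trip loss using Python's stdlib `struct` module (this only illustrates the IEEE 754 half-precision encoding, it is not the tflite converter API):

```python
import struct

w = 0.1  # an example float32 weight value

# Pack as IEEE 754 half-precision (format 'e', 2 bytes) and read it back.
half_bytes = struct.pack('<e', w)
w16 = struct.unpack('<e', half_bytes)[0]

print(len(half_bytes))            # 2 bytes per weight instead of 4
print(len(struct.pack('<f', w)))  # 4 bytes for single precision
print(w16)                        # 0.0999755859375 -- precision lost
```

The storage saving is exactly 2x, which is why float16 is an attractive post-training compression for dense layers; the question in this thread is whether the target runtime can consume the result.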

This discussion is locked. Please start a new topic to ask your question.
3 REPLIES
TDK
Super User

Half-precision float arithmetic isn't supported by the FPUs on Cortex-M4 or Cortex-M7.
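Without hardware support, every float16 value has to be widened to float32 in software before any arithmetic can happen. A minimal sketch of that bit-level widening, in Python for illustration only (on target this would be a compiler/runtime helper in C or assembly, not this code):

```python
import struct

def half_to_float(h: int) -> float:
    """Widen 16-bit IEEE 754 half bits (given as an int) to a Python float."""
    sign = (h >> 15) & 0x1
    exp = (h >> 10) & 0x1F
    mant = h & 0x3FF
    if exp == 0x1F:                       # infinity / NaN
        f32 = (sign << 31) | 0x7F800000 | (mant << 13)
    elif exp == 0:
        if mant == 0:                     # signed zero
            f32 = sign << 31
        else:                             # subnormal half: renormalize
            shifts = 0
            while not (mant & 0x400):
                mant <<= 1
                shifts += 1
            mant &= 0x3FF
            # 113 = 127 (float bias) - 15 (half bias) + 1 (implicit bit gained)
            f32 = (sign << 31) | ((113 - shifts) << 23) | (mant << 13)
    else:                                 # normal number: rebias the exponent
        f32 = (sign << 31) | ((exp - 15 + 127) << 23) | (mant << 13)
    # Reinterpret the assembled 32-bit pattern as a float
    return struct.unpack('<f', struct.pack('<I', f32))[0]

print(half_to_float(0x3C00))   # 1.0
print(half_to_float(0xC000))   # -2.0
```

Doing this (plus the narrowing on the way back) around every multiply-accumulate is why float16 storage without hardware support costs extra cycles even when it works.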

If you feel a post has answered your question, please click "Accept as Solution".
waclawek.jan
Super User

As usual, there's no point in discussing this without knowing the STM32 model (core type) and compiler.

Actually, half-precision floats came up recently as a question on a local forum (in Czech), and there appears to be *some* support in Cortex-M, although compilers probably won't support it seamlessly.

JW

Weird of them to support that conversion instruction and yet not support any add/subtract/multiply/divide instructions on FLOAT16.

https://developer.arm.com/documentation/dui0646/a/The-Cortex-M7-Instruction-Set/Multiply-and-divide-instructions
