2022-02-25 04:06 AM
I tried to compress a Keras model. It is a very simple neural network with just fully connected (dense) layers. I converted it to tflite and compressed the weights from tf.float32 to tf.float16. When I uploaded the model, it gave me this error:
Neural Network Tools for STM32AI v1.6.0 (STM.ai v7.1.0-RC3)
INTERNAL ERROR: 'FLOAT16'
Does STM32Cube.AI not support float16 formats?
More info: it's a tflite model and I used the STM32Cube.AI runtime.
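For context, the float16 post-training quantization described above is typically done with the TFLiteConverter API roughly like this (a sketch; the layer sizes are placeholders standing in for the poster's actual dense model):

```python
import tensorflow as tf

# Placeholder for the trained Keras model from the post:
# a simple stack of fully connected (dense) layers.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(1),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
# Store weights as float16 instead of float32 (halves the weight size).
converter.target_spec.supported_types = [tf.float16]
tflite_model = converter.convert()

with open("model_f16.tflite", "wb") as f:
    f.write(tflite_model)
```

A model converted this way contains FLOAT16 tensors, which is what the STM32AI importer is choking on in the error above.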
2022-02-25 04:56 PM
Half-precision floats aren't supported by the FPUs on Cortex-M4 or Cortex-M7.
2022-02-25 11:36 PM
As usual, there's no point in discussing this without knowing the STM32 model (core type) and the compiler.
Actually, half-precision floats came up recently as a question in a local forum (in Czech), and there appears to be *some* support in Cortex-M, although compilers probably won't support it seamlessly.
JW
2022-02-26 05:08 AM
Weird of them to support that instruction and yet not support any add/subtract/multiply/divide instructions on float16.
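That split makes sense if float16 is treated purely as a storage format: weights take half the space in flash, get widened to float32 before any arithmetic, and only lose some mantissa precision. The effect can be illustrated with Python's standard struct module, whose `'e'` format is IEEE-754 half precision (this is just an illustration of the storage trade-off, not STM32 code):

```python
import struct

def f32_to_f16_bytes(x: float) -> bytes:
    """Pack a float into IEEE-754 half precision (2 bytes)."""
    return struct.pack("<e", x)

def f16_bytes_to_f32(b: bytes) -> float:
    """Unpack 2 bytes of IEEE-754 half precision back to a float."""
    return struct.unpack("<e", b)[0]

w = 0.1  # an example weight value
half = f32_to_f16_bytes(w)
restored = f16_bytes_to_f32(half)

print(len(half))        # 2 bytes instead of the 4 a float32 needs
print(restored)         # close to 0.1, with a small half-precision rounding error
```

On a core that only has half<->single conversion, every load of a float16 weight would follow this convert-then-compute-in-float32 pattern, so dedicated float16 arithmetic instructions buy less than they might seem to.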