
Error while model optimization using stm32 AI developer cloud

Ritesh1
Associate II

 

I have created a TensorFlow model for image classification, but during the optimization process with the STM32 AI Developer Cloud I get the following error:

error: Error when deserializing class 'InputLayer' using config={'batch_shape': [None, 32, 32, 3], 'dtype': 'float32', 'sparse': False, 'name': 'input_layer'}. Exception encountered: Unrecognized keyword arguments: ['batch_shape']

 

import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(32, 32, 3)),
    layers.Conv2D(32, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.Flatten(),
    layers.Dense(64, activation='relu'),
    layers.Dense(10, activation='softmax')
])
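For context, this `batch_shape` error is commonly reported when a model serialized with Keras 3 (bundled with TensorFlow 2.16 and later) is deserialized by a tool still running Keras 2, whose `InputLayer` does not accept a `batch_shape` argument. Below is a minimal sketch of one frequently suggested workaround, saving in the legacy HDF5 format; the file name `model.h5`, the shortened architecture, and the assumption that the cloud tool accepts H5 files are all things to verify for your setup:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Small stand-in model with the same 32x32x3 input; the point here is
# the save format, not the exact architecture.
model = models.Sequential([
    layers.Input(shape=(32, 32, 3)),
    layers.Conv2D(32, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(10, activation='softmax'),
])

# An .h5 extension selects the legacy HDF5 format; the newer .keras
# format stores InputLayer with the Keras 3-only 'batch_shape' key,
# which Keras 2 loaders reject.
model.save('model.h5')
```

If the error persists even with an H5 file, the TensorFlow/Keras version used to train and save the model is worth checking against what the cloud tool supports.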


Ritesh

1 ACCEPTED SOLUTION

Julian E.
ST Employee

Hello @Ritesh1 ,

 

I don't know what you've done, but with code like the following, it works on the ST Edge AI Developer Cloud:

 

import tensorflow as tf
from tensorflow.keras import layers, models

# Create the model
model = models.Sequential([
    layers.Input(shape=(32, 32, 3)),               # Input layer with shape (32, 32, 3)
    layers.Conv2D(32, (3, 3), activation='relu'),  # First convolutional layer: 32 filters, 3x3 kernel, ReLU
    layers.MaxPooling2D((2, 2)),                   # First max pooling layer, 2x2 pool
    layers.Conv2D(64, (3, 3), activation='relu'),  # Second convolutional layer: 64 filters, 3x3 kernel, ReLU
    layers.MaxPooling2D((2, 2)),                   # Second max pooling layer, 2x2 pool
    layers.Conv2D(64, (3, 3), activation='relu'),  # Third convolutional layer: 64 filters, 3x3 kernel, ReLU
    layers.Flatten(),                              # Flatten 2D feature maps to a 1D vector
    layers.Dense(64, activation='relu'),           # Fully connected layer with 64 units and ReLU
    layers.Dense(10, activation='softmax')         # Output layer: 10 classes, softmax
])

# Compile the model (sparse_categorical_crossentropy fits a 10-class
# softmax output with integer labels; binary_crossentropy does not)
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Save the model as an H5 file
model.save('model.h5')

# Summary of the model
model.summary()
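Before uploading, it can also help to reload the saved file locally and run a dummy prediction, so obvious (de)serialization problems surface early; note this only catches issues visible to the local Keras version, not a version mismatch with the cloud side. A sketch, where the shortened architecture is a stand-in for the model above:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

# Build and save a model the same way as above (architecture shortened).
model = models.Sequential([
    layers.Input(shape=(32, 32, 3)),
    layers.Conv2D(32, (3, 3), activation='relu'),
    layers.Flatten(),
    layers.Dense(10, activation='softmax'),
])
model.save('model.h5')

# Reload the exact file that would be uploaded and run one dummy
# 32x32 RGB image through it.
reloaded = tf.keras.models.load_model('model.h5')
pred = reloaded.predict(np.zeros((1, 32, 32, 3), dtype='float32'), verbose=0)
print(pred.shape)  # (1, 10)
```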

 

Have a good day,

Julian

 

In order to give better visibility on the answered topics, please click on 'Accept as Solution' on the reply which solved your issue or answered your question.
