How to train a convolutional neural network with data collected by ADC?

Fyouj.1
Associate II

Dear ST Engineer,

I have downloaded the AER audio recognition AI app from GitHub. The app covers about 50 kinds of audio for training a convolutional neural network (CNN). What should I do if I need to train on custom audio of my own? For example, I collected one 20 ms frame of data at a 48 kHz sample rate and stored it in the array X[960], 960 data points in total. How do I use this array X to train the CNN? Also, after the trained model is compiled into the project by X-CUBE-AI, where is the interface function?

6 REPLIES
JTedot
Associate II

AFAIK you can't train on an embedded chip; you can only run inference. You do the training on your local machine in an easier-to-handle language like Python. The bad news is that you can't feed audio data directly into a network: you first need to extract and refine its features, then feed a much smaller input of those features into the network. After exporting your trained model as a saved file, you can interface with the network on the ST chip through the functions

ai_mynetworkname_create(network)
ai_mynetworkname_init(network, network_config)
ai_mynetworkname_run(network, inputbuffer, outputbuffer)

However, feature extraction is not handled by these functions. To get an idea of how extraction works, see this link: https://www.kdnuggets.com/2020/02/audio-data-analysis-deep-learning-python-part-1.html
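As a minimal illustration of the idea (not the exact pipeline from the linked guide), a log-magnitude spectrogram can be computed from a single 20 ms / 48 kHz frame, like the X[960] array in the question, using plain NumPy; the FFT size and hop length here are illustrative choices:

```python
import numpy as np

def log_spectrogram(frame, n_fft=256, hop=128):
    """Turn a 1-D audio frame into a 2-D log-magnitude spectrogram.

    The 2-D output (frequency bins x time steps) is the image-like
    feature a CNN expects, instead of raw samples.
    """
    frame = np.asarray(frame, dtype=np.float32)
    window = np.hanning(n_fft)
    n_frames = 1 + (len(frame) - n_fft) // hop
    spec = np.empty((n_fft // 2 + 1, n_frames), dtype=np.float32)
    for t in range(n_frames):
        chunk = frame[t * hop : t * hop + n_fft] * window
        spec[:, t] = np.abs(np.fft.rfft(chunk))
    return np.log1p(spec)  # compress the dynamic range

# 20 ms at 48 kHz -> 960 samples, like the X[960] array in the question
x = np.sin(2 * np.pi * 1000 * np.arange(960) / 48000)
features = log_spectrogram(x)
print(features.shape)  # (129, 6)
```

In practice a library such as librosa computes richer features (mel spectrograms, MFCCs) the same way: short windows, an FFT per window, then a log compression.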

Fyouj.1
Associate II

Thank you for the reply.

The key problem is how to prepare the data and feed it into the network; are there any examples of this? How do the data files used to train neural networks get from ADC data to Python-supported data files? What tools and software are used?

You could send the ADC's data via SPI to a Raspberry Pi and have it store the data as a WAV file. The same goes for an SD card connected via SPI. Since you need WAV files to train, the data has to leave your embedded device, because it (a) has no filesystem and (b) has far too little memory for WAV files. From there you can use them to train, as in the guide linked above.
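For the storage step, once the raw ADC samples have reached a machine with a filesystem, Python's standard-library wave module can write them as a training-ready WAV file. A minimal sketch; the 48 kHz mono 16-bit format is an assumption matching the question's setup:

```python
import wave
import numpy as np

def save_adc_frame(samples, path, sample_rate=48000):
    """Write 16-bit mono PCM samples (e.g. forwarded ADC data) to a WAV file."""
    pcm = np.asarray(samples, dtype=np.int16)
    with wave.open(path, "wb") as wf:
        wf.setnchannels(1)            # mono microphone capture
        wf.setsampwidth(2)            # 16-bit samples
        wf.setframerate(sample_rate)
        wf.writeframes(pcm.tobytes())

# one 20 ms frame of fake ADC data, like the X[960] array in the question
frame = (np.sin(2 * np.pi * 1000 * np.arange(960) / 48000) * 10000).astype(np.int16)
save_adc_frame(frame, "frame.wav")
```

If the ADC delivers unsigned or wider samples, they would need to be shifted/scaled into the signed 16-bit range before writing.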

Fyouj.1
Associate II

Thank you. Another question: the data used to train the neural network and the data used to test the trained model should have the same sample rate and data length, right?

If you can assume that all incoming "fresh" data is picked up by the same microphone and program, then yes. If you want to classify varying inputs, like sound files from freesound.org, then your network should be trained on diverse datasets. In general, train your model for what you use it for: diverse classification -> diverse training; constant classification -> constant training.
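To keep inference inputs consistent with the training data, recordings at a different rate or length can be normalized first. A rough sketch using linear interpolation (simple, not audio-grade resampling; the 48 kHz / 960-sample target matches the question's setup):

```python
import numpy as np

def normalize_frame(samples, src_rate, dst_rate=48000, dst_len=960):
    """Resample to dst_rate by linear interpolation, then pad/trim to dst_len."""
    samples = np.asarray(samples, dtype=np.float32)
    n_out = int(round(len(samples) * dst_rate / src_rate))
    t_src = np.linspace(0.0, 1.0, num=len(samples), endpoint=False)
    t_dst = np.linspace(0.0, 1.0, num=n_out, endpoint=False)
    out = np.interp(t_dst, t_src, samples)
    if len(out) < dst_len:                       # zero-pad short frames
        out = np.pad(out, (0, dst_len - len(out)))
    return out[:dst_len]                         # trim long frames

# a 30 ms clip at 16 kHz becomes a 960-sample frame at 48 kHz equivalent
y = normalize_frame(np.ones(480), src_rate=16000)
print(y.shape)  # (960,)
```

For real training data, a proper resampler (e.g. librosa's resampling or scipy's signal tools) would be the better choice; the point is only that every input must reach the network at the same rate and length.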

Fyouj.1
Associate II

Dear ST Engineer,

The AER application was downloaded from https://github.com/Shahnawax/AER-CNN-KERAS. Because librosa was upgraded to version 0.8.1, some functions it uses have been replaced. Where can an updated version of the application be downloaded?