
HSDatalog2 problem with STWINBX1

Associate II

We are using the STEVAL-STWINBX1 on a CNC machine to gather vibration data, with the FP-SNS-DATALOG2 HSDatalog Python framework to record the data over USB. Unfortunately, the data always seems to get corrupted in longer measurements (>5 min).

In these cases, the dataframes get corrupted and it seems as though the axes of the accelerometer are swapped around.

The following code is used to read the raw measurement files:

from st_hsdatalog.HSD.HSDatalog import HSDatalog

# acq_folder points to the recorded acquisition, sensor_name to the
# accelerometer we want to read
hsd_factory = HSDatalog()
hsd = hsd_factory.create_hsd(acq_folder)

sensor = HSDatalog.get_sensor(hsd, sensor_name)
stwin_data = HSDatalog.get_dataframe(hsd, sensor)[0]  # first dataframe of the acquisition

and the resulting dataframe shows the corrupted data: at the beginning everything is fine, but towards the end the timestamps are invalid and the axes appear swapped:
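To locate where the corruption starts, a simple check is to scan the timestamp column for NaN values or timestamps that go backwards. This is only an illustrative sketch, assuming the dataframe exposes its timestamps in a column named "Time" (the actual column name may differ per sensor):

```python
import pandas as pd

def first_corrupt_index(df: pd.DataFrame, time_col: str = "Time"):
    """Index of the first sample whose timestamp is NaN or goes backwards,
    or None if the timestamp series looks sane."""
    t = df[time_col]
    bad = t.isna() | (t.diff() < 0)  # diff() of the first row is NaN -> compares False
    hits = bad[bad].index
    return hits[0] if len(hits) else None

# Synthetic example: the timestamp stream "goes bad" at row 4
df = pd.DataFrame({"Time": [0.0, 0.01, 0.02, 0.03, 0.01, float("nan")],
                   "x": range(6)})
print(first_corrupt_index(df))  # → 4
```

Everything from that index onwards can then be discarded instead of trusting the whole acquisition.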


The HSDatalog call gives the following warnings:

dtmi found in locally in base supported models
dtmi: dtmi/appconfig/steval_stwinbx1/fpSnsDatalog2_datalog2-2.json
c:\Users\N.Jourdan_Lokal\anaconda3\lib\site-packages\numpy\core\ RuntimeWarning: invalid value encountered in multiply
  stop = asanyarray(stop) * 1.0
c:\Users\N.Jourdan_Lokal\anaconda3\lib\site-packages\numpy\core\ RuntimeWarning: invalid value encountered in multiply
  start = asanyarray(start) * 1.0

Associate II

Update: We have also tried the most recent version of the SDK (1.2.1) and the issue is still there. With this version, the script gives warnings about corrupt timestamps and all sensor values are set to 0, instead of the swapped axes shown in the screenshot above.

Is there any way to use this sensor for more than a few minutes?

Associate II

Update 2: It seems to be at least partially caused by events on the host computer. I can provoke the problem by locking the screen or unplugging the docking station, and sometimes by other USB devices. I cannot yet confirm whether the problem is caused only by these events or whether it also happens sporadically on its own.

Hi @nicolas_tudarmstadt,

your Update 2 is right: USB streaming is controlled by the PC. The host PC is the master and requests data from the board, so the events you described can dramatically impact the acquisition. Other factors that can degrade acquisition performance are high CPU usage on the PC or antivirus scanning of the USB ports. We are already aware of this limitation, which DATALOG2 cannot control; see the Datalog troubleshooting chapter in the user manual.

Apart from these undesired host-PC working conditions, are you experiencing any other bugs while using DATALOG2 via USB?

Have you also opened the two issues on GitHub (link, link)? If so, and if further support is still needed, we can continue the discussion there.


Best regards


Associate II

Hi Simone,

thank you for the reply and the explanation. I understand that the connection quality depends on the host PC. What I don't understand is why all data points following a presumably short-lived anomaly get corrupted, when the anomaly itself should only affect a few data points. It seems that data is still transmitted after the anomaly, just shifted, with swapped axes as described above. I suspect this arises from the sequential transmission of the data. Couldn't this be recovered using, e.g., a counter transmitted along with the data? That would make the setup much more fault-tolerant. Our current workaround is to restart the data acquisition every two minutes from a separate script, to limit the potentially faulty data to those two minutes, but this is quite ugly.
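To illustrate the counter idea: on the receiving side, a per-packet sequence counter would let the reader resynchronize after a drop instead of misinterpreting everything that follows. This is purely a sketch of the proposal, assuming hypothetical (counter, payload) packets; DATALOG2 does not expose such a counter as far as this thread shows:

```python
def split_on_counter_gaps(packets):
    """Split a stream of (counter, payload) packets into contiguous runs,
    so that a dropped packet only invalidates the run boundary instead of
    shifting every sample after it."""
    runs, current, prev = [], [], None
    for counter, payload in packets:
        if prev is not None and counter != prev + 1:
            runs.append(current)  # gap detected: close the current run
            current = []
        current.append(payload)
        prev = counter
    if current:
        runs.append(current)
    return runs

# Packet 3 was lost; samples before and after the gap remain usable.
stream = [(0, "a"), (1, "b"), (2, "c"), (4, "d"), (5, "e")]
print(split_on_counter_gaps(stream))  # → [['a', 'b', 'c'], ['d', 'e']]
```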

The second GitHub issue is independent of this problem. It just seems that none of the example configs work with the Python CLI example, because samples_per_ts is a dictionary.


Thanks again,