2023-08-14 07:22 AM
We are using the STEVAL-STWINBX1 on a CNC machine to gather vibration data, recording over USB with the fp-sns-datalog2 HSDatalog Python framework. Unfortunately, the data always seems to get corrupted in longer measurements (>5 min).
In these cases, the dataframes get corrupted and it seems as though the axes of the accelerometer are switched around.
The following code is used to read the raw measurement files:
from st_hsdatalog.HSD.HSDatalog import HSDatalog

# acq_folder is the path to the acquisition folder, sensor_name the sensor to read
hsd_factory = HSDatalog()
hsd = hsd_factory.create_hsd(acq_folder)
sensor = HSDatalog.get_sensor(hsd, sensor_name)
stwin_data = HSDatalog.get_dataframe(hsd, sensor)[0]
The resulting dataframe shows the corrupt data: at the beginning everything is fine, but towards the end the timestamps are invalid and the axes appear to be switched (see the screenshot).
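For reference, this is roughly the sanity check we run on the exported dataframe (a minimal sketch, assuming the first column holds the timestamps and the remaining columns the axis data):

import numpy as np

# Minimal sanity check on the exported dataframe (column layout assumed:
# first column = timestamps, remaining columns = accelerometer axes).
ts = stwin_data.iloc[:, 0].to_numpy()
n_invalid = int(np.count_nonzero(~np.isfinite(ts)))     # NaN/inf timestamps
n_backwards = int(np.count_nonzero(np.diff(ts) < 0))    # non-monotonic jumps
print(f"invalid timestamps: {n_invalid}, backwards jumps: {n_backwards}")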
The HSDatalog call gives the following warnings:
dtmi found in locally in base supported models
dtmi: dtmi/appconfig/steval_stwinbx1/fpSnsDatalog2_datalog2-2.json
c:\Users\N.Jourdan_Lokal\anaconda3\lib\site-packages\numpy\core\function_base.py:128: RuntimeWarning: invalid value encountered in multiply
  stop = asanyarray(stop) * 1.0
c:\Users\N.Jourdan_Lokal\anaconda3\lib\site-packages\numpy\core\function_base.py:127: RuntimeWarning: invalid value encountered in multiply
  start = asanyarray(start) * 1.0
2023-08-14 11:21 PM
Update: We have also tried the most recent version of the SDK (1.2.1) and the issue is still there. In this version of the SDK the script gives warnings for corrupt timestamps, and all the sensor values are set to 0 instead of the switched values shown in the screenshot above.
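A quick check along the same lines (same assumed column layout as above) can count the zeroed rows:

# Count rows where all sensor values are zero (assumes first column holds
# timestamps and the remaining columns hold the sensor axes).
axes = stwin_data.iloc[:, 1:]
n_zero_rows = int((axes == 0).all(axis=1).sum())
print(f"rows with all sensor values zero: {n_zero_rows} of {len(stwin_data)}")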
Is there any way to use this sensor reliably for more than a few minutes?
2023-08-15 01:02 AM
Update 2: It seems to be at least partially caused by events on the host computer. I can provoke the problem by locking the screen, unplugging the docking station, and sometimes by unplugging other USB devices. I cannot yet confirm whether the problem is caused only by these events or whether it also happens sporadically on its own.
2023-08-23 12:35 AM
Your Update 2 is right: USB streaming is controlled by the PC. The host PC is the master and requests data from the board, so the events you described can have a dramatic impact on the acquisition. Other factors affecting acquisition performance could be high CPU usage on the PC or antivirus checks on the USB ports. We are already aware of this limitation, which DATALOG2 cannot control. See the Datalog troubleshooting chapter in the user manual.
Excluding the undesired PC working conditions described above, are you experiencing any other bugs while using DATALOG2 via USB?
Have you also opened the 2 issues on GitHub (link, link)? If so, and if further support is still needed, we can continue the discussion there.
Best regards
Simone
2023-08-23 12:50 AM
Hi Simone,
Thank you for the reply and the explanation. I understand that the connection quality depends on the host PC. What I don't understand is why all subsequent data points are corrupted after what is probably a short anomaly affecting only a few data points. It seems that the data is still transmitted after the anomaly, just shifted somehow, with switched axes as described above. I suspect this arises from the sequential transmission of the data. Couldn't this be recovered using, e.g., a data counter transmitted along with the data? That would make the setup much more fault tolerant. Our current workaround is to reset the data acquisition every two minutes using a separate script, so that any potentially faulty data is limited to those two minutes, but this is quite ugly.
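The reset workaround looks roughly like the sketch below; the acquisition command is a placeholder for whatever script actually starts a new recording on our machine.

import subprocess
import time

# Rough sketch of the two-minute reset workaround; ACQ_CMD is a placeholder
# for the command that actually starts a new acquisition.
ACQ_CMD = ["python", "acquisition_script.py"]

while True:
    proc = subprocess.Popen(ACQ_CMD)   # start a fresh acquisition
    time.sleep(120)                    # record for two minutes
    proc.terminate()                   # stop it, bounding any faulty data to this window
    proc.wait()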
The second GitHub issue is independent of this problem. It just seems that none of the example configs work with the Python CLI example because samples_per_ts is a dictionary.
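On our side we currently read the value defensively, roughly like this (we assume the dictionary form carries the value under a "val" key; adjust if your configs differ):

def get_samples_per_ts(samples_per_ts):
    # Accept both the plain integer form and the dictionary form seen in the
    # example configs (the "val" key name is an assumption, check the config).
    if isinstance(samples_per_ts, dict):
        return samples_per_ts.get("val", 0)
    return samples_per_ts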
Thanks again,
Nicolas