2026-04-08 4:31 PM - edited 2026-04-08 4:42 PM
Many years ago, in third-year physics at ANU, the professor handed out an object and said: explain it and check it, as a project. Let us assume that I have a SensorTile.box PRO and, following my training, I check the output of MEMS Studio. The first step was to amend the CLI_Example so that it wrote its files to the correct location for our downstream software. I would not use your MEMS software in production; it is just too kludgy. But I wanted to confirm my program and understand your data.
The next step is to use MEMS Studio to produce the processed log output file.
The third step is to write a simple C# program to take the .dat files apart, complete the necessary analysis, and upload the required output to MySQL cloud storage.
Next is the same plot, but using the raw byte data from the acceleration .dat file, analysed in Excel.
Notice the secondary wave, or Fourier series. The NPP (normal probability plot) shows the Fourier series superimposed on the linear regression. The data are normal, as expected; there is a minor load, but the non-Gaussian element is not of concern.
The residual plot shows why your first plot looks nice: it rests on the theory that the records are all equally spaced, and this plot says otherwise. I am sorry, but one believes the data, not the theory.
The linear regression from Excel shows the intercept is almost, but not quite, zero, and the slope is so well defined that its standard error is close to zero, really close to zero.
| | Coefficients | Standard Error | t Stat | P-value | Lower 95% | Upper 95% |
| Intercept | -0.08248 | 0.034077 | -2.42047 | 0.017989 | -0.1504 | -0.01457 |
| X Variable 1 | 2.44E-05 | 1.29E-07 | 188.4985 | 7.1E-100 | 2.41E-05 | 2.47E-05 |
I have read many ST.COM manuals, as we use quite a few of your accelerometers. The closest manual said that the time for a single byte step was 27.5 microseconds, but in reality I found that 6.875 microseconds, i.e. 27.5/4, was the best fit.
That is how I got 11.013 seconds, which exactly matches the acquisition file output. The record time between files was 12 seconds on a looping CLI_Example.
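As a sketch of that timing arithmetic, in Python rather than my C# for brevity: the 6.875 µs per-step interval is the empirical fit above, and the step count below is a hypothetical value chosen only to illustrate how an 11.013 s record arises from it.

```python
# Hypothetical timing check: given a per-step interval and a step count,
# compute the record duration. 6.875 us = 27.5 us / 4 is the empirical fit.
STEP_US = 27.5 / 4            # microseconds per byte step (empirical)

def record_seconds(step_count: int) -> float:
    """Duration in seconds for step_count byte steps."""
    return step_count * STEP_US * 1e-6

# A step count near 1.602 million reproduces the observed 11.013 s record.
steps = round(11.013 / (STEP_US * 1e-6))
print(steps, record_seconds(steps))
```

The real step count per file would come from the byte accounting described below; this is only the inverse calculation.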
So then I moved on to disassembling the data file. This took three days and a lot of dead ends.
The standard jump blocks at 2308-byte intervals could be recovered, as shown.
| 2308 | 0 | 9 | 0 | 9 | 0 | jump |
| 4616 | 0 | 18 | 0 | 18 | 0 | jump |
| 6924 | 0 | 27 | 0 | 27 | 0 | jump |
| 9232 | 0 | 36 | 0 | 36 | 0 | jump |
| 11540 | 0 | 45 | 0 | 45 | 0 | jump |
| 13848 | 0 | 54 | 0 | 54 | 0 | jump |
| 16156 | 0 | 63 | 0 | 63 | 0 | jump |
| 18464 | 0 | 72 | 0 | 72 | 0 | jump |
| 20772 | 0 | 81 | 0 | 81 | 0 | jump |
| 23080 | 0 | 90 | 0 | 90 | 0 | jump |
| 25388 | 0 | 99 | 0 | 99 | 0 | jump |
| 27696 | 0 | 108 | 0 | 108 | 0 | jump |
| 30004 | 0 | 117 | 0 | 117 | 0 | jump |
| 32312 | 0 | 126 | 0 | 126 | 0 | jump |
| 34620 | 0 | 135 | 0 | 135 | 0 | jump |
| 36928 | 0 | 144 | 0 | 144 | 0 | jump |
| 39236 | 0 | 153 | 0 | 153 | 0 | jump |
| 41544 | 0 | 162 | 0 | 162 | 0 | jump |
| 43852 | 0 | 171 | 0 | 171 | 0 | jump |
| 46160 | 0 | 180 | 0 | 180 | 0 | jump |
| 48468 | 0 | 189 | 0 | 189 | 0 | jump |
| 50776 | 0 | 198 | 0 | 198 | 0 | jump |
| 53084 | 0 | 207 | 0 | 207 | 0 | jump |
| 55392 | 0 | 216 | 0 | 216 | 0 | jump |
| 57700 | 0 | 225 | 0 | 225 | 0 | jump |
| 60008 | 0 | 234 | 0 | 234 | 0 | jump |
I set up an array that mirrors the byte array from the .dat file and mark all of these elements as 1; C# initialises the array to all zeros on allocation.
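A minimal sketch of that masking step, in Python rather than my C# for brevity. The 2308-byte jump stride comes from the table above; the 8-byte block width is an assumption based on the eight columns recovered there.

```python
# Build a mask parallel to the raw bytes: 0 = unclassified, 1 = jump block.
# Jump blocks recur every 2308 bytes; the 8-byte width is an assumption
# based on the eight columns recovered in the table above.
JUMP_STRIDE = 2308
JUMP_LEN = 8  # assumed width of each jump block

def mark_jump_blocks(raw: bytes) -> list[int]:
    mask = [0] * len(raw)          # zero-initialised, like a C# int[]
    offset = JUMP_STRIDE
    while offset + JUMP_LEN <= len(raw):
        for i in range(offset, offset + JUMP_LEN):
            mask[i] = 1
        offset += JUMP_STRIDE
    return mask

raw = bytes(10000)                 # stand-in for the .dat contents
mask = mark_jump_blocks(raw)
print(sum(mask))                   # total bytes claimed by jump blocks
```

The same mask is then reused for the timing blocks, so payload extraction becomes a single pass over one array.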
Then one hunts for the timing signals. The incorrect assumption is that they are equally spaced; they are not. There is no help in the manuals at this level of precision, and asking the question elicited the standard answer: look in the manual. I did; I have them all printed out, and I read them at night. The manuals vaguely refer to a 1000-step interval, but the six- and four-byte blocks interspersed in the stream produce the odd set of offset numbers.
One is looking for data in this form, and when one finds a block one marks it as all 1's.
| 6012 | 40 | 1 |
| 6013 | 53 | 1 |
| 6014 | 199 | 1 |
| 6015 | 87 | 1 |
| 6016 | 50 | 1 |
| 6017 | 247 | 1 |
| 6018 | 222 | 1 |
| 6019 | 63 | 1 |
The offset data held me up for two days.
The offset delta to each time number is shown on the graph for the uploaded file; I have not yet tested it further on more data. The pattern is not a uniform series, and it repeats every 29 steps. 29 is prime, so it is a good choice. The underlying control must be the interaction of the four, the six and the eight. Why not just extend the four to six with two zeros, and make the eight two sixes with four extra zeros? Then the counting would be easy.
I can now mark all of the timing data with 1's. One finds it by looking for the constant bytes in the timing data, in this case the eighth byte, which is 63 or 64.
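One possible reading of that constant eighth byte, offered purely as an assumption and not anything confirmed by ST documentation: a trailing byte of 63 (0x3F) or 64 (0x40) is exactly what a little-endian IEEE 754 double produces when its value lies roughly between 0.25 and 4, so the 8-byte timing blocks may simply be doubles. Decoding the sample block from the table above:

```python
import struct

# The 8 bytes from the table at offsets 6012-6019, in file order.
block = bytes([40, 53, 199, 87, 50, 247, 222, 63])

# Interpreted as a little-endian IEEE 754 double ('<d'), the trailing
# 0x3F byte becomes the high-order exponent byte, giving a value near 0.5.
value = struct.unpack('<d', block)[0]
print(value)
```

If the timing words really are doubles, the 63/64 constancy would fall out of the exponent encoding rather than being a marker byte in its own right.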
So now we have an array of 0's, 1's and 2's, so we can just step through it and take out the sub-arrays whose values are all zero.
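A sketch of that extraction pass, again in Python for brevity; the mask codes are assumptions (0 = payload, any non-zero value = jump or timing bytes).

```python
# Walk a mask parallel to the raw bytes and collect the maximal runs of
# zero-marked bytes; those runs are the acceleration payload.
def extract_payload_runs(raw: bytes, mask: list[int]) -> list[bytes]:
    runs, start = [], None
    for i, m in enumerate(mask):
        if m == 0 and start is None:
            start = i                      # a run of payload begins
        elif m != 0 and start is not None:
            runs.append(raw[start:i])      # run ends at a marked byte
            start = None
    if start is not None:
        runs.append(raw[start:])           # trailing run
    return runs

raw = bytes(range(10))
mask = [0, 0, 1, 1, 0, 0, 0, 2, 0, 0]      # toy mask: 1 = jump, 2 = timing
parts = extract_payload_runs(raw, mask)
print([bytes(p) for p in parts])
```

Concatenating the runs then gives the clean byte stream whose count should match the accounting described below.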
So I uploaded the data from the MEMS Studio file and attempted to align the output with the byte output from my program.
It works for a short number of entries and then fails.
If I take the total byte count and subtract the four zeros at the start, the jump blocks and the timing data, I get a number that matches the data left in the array.
You have 729 extra data points. At first I thought you were adding data into the timing signal (that was a long hard look), but no, it is not in the correct format. So I looked for duplicates in your data: since you had too many points, they are either random values or duplicates. About every 76 steps, if I take the sum of your data in row n and row n+1 and subtract them, I get zero.
We have collected acceleration data for the best part of 20 years, and I have never seen duplicates; a duplicate breaks Newton's laws of physics, and I should not have to explain why. This accounts for the 729 extra steps over the byte count in your data. My catch function is coarse, so it may find the odd false positive, but not like this table.
| Equal Numbers at data point :: | 71 | -0.00634 | -0.00634 | 0 | Step to Last Duplicates :: | 72 |
| Equal Numbers at data point :: | 146 | -0.0039 | -0.0039 | 0 | Step to Last Duplicates :: | 76 |
| Equal Numbers at data point :: | 221 | -0.0083 | -0.0083 | 0 | Step to Last Duplicates :: | 76 |
| Equal Numbers at data point :: | 296 | -0.00537 | -0.00537 | 0 | Step to Last Duplicates :: | 76 |
| Equal Numbers at data point :: | 371 | -0.00683 | -0.00683 | 0 | Step to Last Duplicates :: | 76 |
| Equal Numbers at data point :: | 446 | -0.00683 | -0.00683 | 0 | Step to Last Duplicates :: | 76 |
| Equal Numbers at data point :: | 521 | -0.00488 | -0.00488 | 0 | Step to Last Duplicates :: | 76 |
| Equal Numbers at data point :: | 596 | -0.00293 | -0.00293 | 0 | Step to Last Duplicates :: | 76 |
| Equal Numbers at data point :: | 671 | -0.00439 | -0.00439 | 0 | Step to Last Duplicates :: | 76 |
| Equal Numbers at data point :: | 746 | -0.00586 | -0.00586 | 0 | Step to Last Duplicates :: | 76 |
| Equal Numbers at data point :: | 821 | -0.00195 | -0.00195 | 0 | Step to Last Duplicates :: | 76 |
| Equal Numbers at data point :: | 896 | -0.00683 | -0.00683 | 0 | Step to Last Duplicates :: | 76 |
| Equal Numbers at data point :: | 971 | -0.00488 | -0.00488 | 0 | Step to Last Duplicates :: | 76 |
| Equal Numbers at data point :: | 1046 | -0.00049 | -0.00049 | 0 | Step to Last Duplicates :: | 76 |
| Equal Numbers at data point :: | 1105 | -0.00244 | -0.00244 | 0 | Step to Last Duplicates :: | 60 |
| Equal Numbers at data point :: | 1120 | -0.00439 | -0.00439 | 0 | Step to Last Duplicates :: | 16 |
| Equal Numbers at data point :: | 1195 | -0.00537 | -0.00537 | 0 | Step to Last Duplicates :: | 76 |
| Equal Numbers at data point :: | 1270 | -0.00878 | -0.00878 | 0 | Step to Last Duplicates :: | 76 |
| Equal Numbers at data point :: | 1344 | -0.01074 | -0.01074 | 0 | Step to Last Duplicates :: | 75 |
| Equal Numbers at data point :: | 1419 | -0.00342 | -0.00342 | 0 | Step to Last Duplicates :: | 76 |
There will of course be occasional random matches, as I compare the sum of all three of X, Y and Z, but this is rare; it explains the 60.
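A sketch of that duplicate check, in Python for brevity: compare the X+Y+Z sum of consecutive rows and flag a zero difference, reporting the row index and the gap since the previous hit, as in the table above. The tuple layout is hypothetical, and real float data would need a tolerance rather than exact equality.

```python
# Flag consecutive rows whose X+Y+Z sums are identical; report the row
# index and the gap since the previous duplicate, as in the table above.
def find_duplicates(rows):
    """rows: list of (x, y, z) tuples. Returns (index, gap) pairs."""
    hits, last = [], 0
    for n in range(len(rows) - 1):
        if sum(rows[n]) == sum(rows[n + 1]):   # real data: use a tolerance
            hits.append((n, n - last))
            last = n
    return hits

# Toy integer data: rows 0-2 all sum to 6, so rows 0 and 1 each match
# their successor; rows 3 and 4 do not.
rows = [(1, 2, 3), (4, 1, 1), (2, 2, 2), (5, 0, 0), (3, 3, 3)]
print(find_duplicates(rows))   # -> [(0, 0), (1, 1)]
```

Summing all three axes before comparing is what admits the rare random match noted above; comparing the axes individually would remove even those.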
It is your program; I no longer need it, as I now understand your formats. I enclose the C# program that generates the correct files.
You should look at MEMS Studio.
John Nichols
I do not mind if you use the material from the program as long as you credit Rezzona Limited of the Isle of Man.
2026-04-08 4:35 PM
2026-04-08 4:36 PM - edited 2026-04-08 4:36 PM
2026-04-08 6:11 PM - edited 2026-04-08 6:12 PM
I have now done the second data file. The blue trace is the spacing of the extras in the first file, the red in the second file. The 78 and 79 match exactly on the count, which is statistically unlikely.
Also, your temperature calibration on the device is in error. The device is kept in a room at 20 °C; a second source of error may be device self-heating, though one has to assume that was considered in the design. Your device reads a nice 30 °C. The room is never at 30.
The frequency response of a building can have a significant temperature dependence that we need to allow for using multiple linear regression; any advice on the reason would be appreciated.
2026-04-11 9:19 PM
So we mainly work with bridges and buildings, i.e. structural reliability.
So I have been running the SensorTile.box PRO at about 1800 Hz, and instead of getting a duplicate every 76th row I get 4 duplicates per step from the MEMS Studio analysis on all runs. A sample is shown.
One would hope that ST.COM people would say something like thanks for telling us about the error and we will fix it.
Of course, silence works as well.
John Nichols
This is almost the step function for the timing. I made a mistake at 17; it should be 6052. I have fixed it but am too lazy to take a new snap.
I have not found the repeat yet as I only go to 18000 records.
I have not worked out how to set the elapsed time to 9.1 seconds. MEMS Studio does not report the run time correctly.
2026-04-13 2:18 AM
Hello John,
The format of the .dat file used by the Datalog2 firmware is described in UM3203, which you can find on the “FP-SNS-DATALOG2 | Product - STMicroelectronics” web page.
Datalog2 stores data in individual .dat files for each sensor. MEMS Studio is able to convert the .dat files into one .csv file. Each sensor can run at a different output data rate. To indicate new data in each row, the “data_changed” bit field is used. It is described in the MEMS Studio user manual, UM3233, which you can find on the https://www.st.com/en/development-tools/mems-studio.html web page.
It seems you didn't take this aspect into account, and this is the reason why you report duplicate samples.
Data from the device are sent in chunks, and incomplete chunks at the end are not included in the analysis. This could cause a discrepancy between the metadata and the MEMS Studio record time.
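If this explanation holds, the apparent duplicates would disappear by keeping only rows whose data_changed flag is set. A sketch in Python; the column names here are hypothetical stand-ins, since the real CSV layout is defined in UM3233.

```python
import csv
import io

# Keep only rows where the data_changed flag is 1; rows with the flag
# at 0 would be the same sample re-emitted at the merged output rate.
def fresh_rows(csv_text: str) -> list[dict]:
    reader = csv.DictReader(io.StringIO(csv_text))
    return [row for row in reader if row["data_changed"] == "1"]

# Toy CSV with hypothetical column names: two genuine samples, each
# followed by a repeat carrying data_changed = 0.
sample = """time,acc_x,data_changed
0.000,0.101,1
0.001,0.101,0
0.002,0.104,1
0.003,0.104,0
"""
print(len(fresh_rows(sample)))   # genuinely new samples only
```

Whether the exported CSV actually carries the flag as a column, or only the .dat stream does, is something the UM3233 manual would have to settle.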
Best Regards,
Marian
2026-04-13 12:03 PM
Dear Marian:
I am not reporting duplicate entries in my own software, which is written in C#; I am reporting duplicate entries in the MEMS Studio CSV file. In 7600 recording cycles you have a duplicate entry once every 76 entries.
1. I count the number of bytes; your CSV file has an excess byte count, by a long way.
2. I found the duplicate entries and showed them in an earlier post.
My software is correct: I have an exact 1-to-1 count of bytes to output and no duplicate entries. MEMS Studio, recording at 1860 Hz, makes 4 copies per step of the acceleration data. You can see it in the post. This is not my software; this is your software.
As a check, I use a hex editor to verify all of the lines.
I am not wrong; I do this all the time. I am happy my software is correct; I am sorry, but your software has a bug.
I am happy to zoom to show you.
This is only the accelerometer; I have not found errors with pressure or temperature, and I am not looking at the gyro.
John
2026-04-14 6:11 PM
Dear Marian:
I have looked and searched for UM3203; I cannot find it associated with the sensor website. I have read all of the PDFs on the website and watched a lot of ST videos on the sensor.
I want to use the sensor to monitor structural vibration, so it needs to work in one specific manner. I have now reviewed the results from amending your software to fit closer to structural vibration methods; it is close, but I will get closer as I look more at automode.
But the MEMS Studio PDFs make no comment on, and provide no samples of, the output.
I have corrected all of the errors in my code, so I get output that theoretically matches a subset of your data, but it still outputs too many points; the lower the ODR, the higher the excess output. There is an error in the MEMS Studio code. There is no doubt; your author should try it.
My comments on the analysis are attached as a PPT file.
We will not use your software other than the modified CLI_EXAMPLE, but it would be nice to get the code for the Datalog2 DLL, so I can see your algorithms for the encoding and bring the output in line with the standards used for structural monitoring. The timing issue is still a challenge.
I have found no stray bits of data in your .dat files; they are complete, but only 19.1 seconds.
Thanks John