2021-04-15 06:47 AM
Hello, I have a series of questions about interpreting data when using the sensor's SPAD zone settings. After reading the manuals I drew some conclusions about how to use this feature, but I'm not sure they are correct.
1. I'm interested in the accuracy of the data returned by the sensor in different zones. I tested all center zones, starting from 16x16 and reducing down to a 4x4 array, and each reduction shifted the reading by almost 1.5 cm, so the difference between the value returned in the 16x16 case and the 4x4 case is 7-8 cm! The experiment conditions were clean: the sensor was fixed facing a flat wall. I'm just wondering why zones with the same center differ so much in output.
2. The second question is whether there is any way to determine the geometric locations of the zones. The user manual says the 16x16 array has a 27-degree FoV, the 8x8 a 20-degree FoV, and the 4x4 a 15-degree FoV. So I'm really wondering how those zones are laid out geometrically, because the reduction of the FoV is not uniform (if it were uniform, the 4x4 would have roughly a 6.75-degree FoV).
I'm trying to build a picture of the object in front of the sensor using data from all possible zones. That is, I'm scanning the FoV with a 4x4 array, starting from the upper-left corner of the 16x16 array and finishing at the bottom-right one. The data I get in some zones really confuses me, because all my tests are done in front of a flat wall.
I would be really grateful if you could explain more precisely how to use multiple SPAD zones to get as much data as possible (and as accurate as possible).
Thank You!
2021-08-19 10:31 AM
In my testing the best results can be obtained by doing an offset calibration for every zone you expect to use. Then, when you change the zone, you change the offset register.
Using ST's offset_calibration can be a pain when doing it a lot, so I'd put the sensor in front of a wall at a known distance and write some code to change the zone, do the calibration, read the register, and go to the next zone.
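Here's a minimal sketch of that loop using ST's ULD driver. I'm assuming the STSW-IMG009 API (VL53L1X_SetROI, VL53L1X_SetROICenter, VL53L1X_CalibrateOffset, VL53L1X_SetOffset); check your driver version for the exact signatures, and note that the zone-center numbers follow the SPAD map in UM2510 and are left as placeholders here:

```c
#include <stdint.h>
#include "VL53L1X_api.h"          /* ST ULD driver (STSW-IMG009) */
#include "VL53L1X_calibration.h"

#define NUM_ZONES   13
#define CAL_DIST_MM 140           /* known wall distance for calibration */

/* ROI centers for the zones you plan to use. The numbering follows the
 * SPAD map in UM2510; fill these in for your own zone layout. */
static uint8_t zone_center[NUM_ZONES];
static int16_t zone_offset[NUM_ZONES];

/* One-time: with the sensor fixed in front of a flat wall at CAL_DIST_MM,
 * walk every zone, calibrate its offset, and store it. */
int8_t calibrate_all_zones(uint16_t dev)
{
    int8_t status = 0;
    for (int z = 0; z < NUM_ZONES && status == 0; z++) {
        status |= VL53L1X_SetROI(dev, 4, 8);            /* 4 wide x 8 tall */
        status |= VL53L1X_SetROICenter(dev, zone_center[z]);
        status |= VL53L1X_CalibrateOffset(dev, CAL_DIST_MM, &zone_offset[z]);
    }
    return status;
}

/* At run time: whenever you switch zones, load that zone's offset. */
int8_t select_zone(uint16_t dev, int z)
{
    int8_t status = VL53L1X_SetROICenter(dev, zone_center[z]);
    status |= VL53L1X_SetOffset(dev, zone_offset[z]);
    return status;
}
```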
The 16x16 array covers a 27-degree circle, but it can be viewed as a 20-degree x 20-degree square (27 degrees on the diagonal).
So, in theory, each 4x4 zone is a 5-degree x 5-degree zone.
But it doesn't work quite that way.
What happens is that the light enters the lens and bounces around inside a bit before exiting. And this causes a 'halo' effect.
Or you can view it as a 'blur'.
So if you have a dull target you might see a zone as the nominal 5x5 degrees, but if you have a bright target there will be a lot of blur and you will see the zone as much larger.
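To put numbers on the nominal (pre-blur) geometry: if the 20-degree square FoV maps onto the 16 SPAD columns, each column subtends 20/16 = 1.25 degrees, and a zone's pointing direction follows from its center column. A small sketch (the helper and the stepped-by-4 zone layout are just illustrative):

```c
#include <stdio.h>

/* Map a zone's center column on the 16x16 SPAD grid to a horizontal
 * pointing angle, assuming a 20-degree square FoV: 20/16 = 1.25 deg
 * per column, with the optical axis between columns 7 and 8 (i.e. 7.5). */
static double zone_center_angle_deg(double center_col)
{
    const double deg_per_col = 20.0 / 16.0;
    return (center_col - 7.5) * deg_per_col;
}

int main(void)
{
    /* Four 4x4 zones stepped across the array: centers at 1.5, 5.5, 9.5, 13.5 */
    for (int z = 0; z < 4; z++) {
        double center_col = 4.0 * z + 1.5;
        printf("zone %d: center col %.1f -> %+5.2f deg (spans 5 deg)\n",
               z, center_col, zone_center_angle_deg(center_col));
    }
    return 0;
}
```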
I used a 4-wide x 8-tall zone and marched it across the 16-wide array (giving 13 zones) in my 2D lidar example:
2D LIDAR using multiple VL53L1X Time-of-Flight long distance ranging sensors
but I did have to offset calibrate each zone.
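If it helps, turning each zone's reading into a 2D point is just that per-zone bearing plus the measured range. A hedged sketch for the 13-zone march above (zone z covers columns z..z+3, so its center column is z + 1.5); the function and array names are made up for illustration:

```c
#include <math.h>
#include <stdint.h>

#define NUM_ZONES 13
#define DEG2RAD(d) ((d) * (3.14159265358979 / 180.0))

/* Convert one scan of a 4-wide x 8-tall window marched across the
 * 16-wide array into 2D points. Zone z covers columns z..z+3, so its
 * center column is z + 1.5; at ~1.25 deg per column with the optical
 * axis at column 7.5, the bearing falls out directly. */
void scan_to_points(const uint16_t range_mm[NUM_ZONES],
                    float x_mm[NUM_ZONES], float y_mm[NUM_ZONES])
{
    for (int z = 0; z < NUM_ZONES; z++) {
        double bearing = DEG2RAD(((z + 1.5) - 7.5) * (20.0 / 16.0));
        x_mm[z] = (float)(range_mm[z] * sin(bearing));  /* lateral */
        y_mm[z] = (float)(range_mm[z] * cos(bearing));  /* forward */
    }
}
```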