2024-12-23 08:00 AM - last edited on 2024-12-23 12:58 PM by Peter BENSCH
2024-12-23 08:13 AM
It's not part of the standard language.
Do you mean, "what Python3 libraries are available for the VL53L4CX which support extended mode?"
Or do you already have a library? If so, say which one.
2024-12-23 09:03 AM
I've installed the PyPI version of VL53L1X, which appears to control the VL53L4CX perfectly. However, I have experimented with the ROI at full extent, i.e. 0, 15, 15, 0, and with the range varying from 1 to 3, i.e. short, medium and long. Sometimes I get "wrap around" and sometimes I don't, depending on the range of the object and the range setting. I've now set the ROI to as small as it can go, i.e. 4, 8, 8, 4, and using ranges 2 and 3 I get some good, clear results with no wrap around. But where I expected the range to plateau at distances over 4 m, it actually just records around 2.5 m when in Long (3) mode. So if extended mode were turned on, would the recorded distances be longer and more accurate?
As you can see from the two attached scans, both have the same parameters, i.e. timing budget and ROI, but one is long and the other medium, range settings 3 and 2 respectively. They look roughly the same until the region between 135 and 180 degrees, where they differ dramatically. Also, in that area (135 to 180 degrees) the distances should go beyond 250 cm - in theory they should go off the scale.
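For reference, here is a minimal sketch of how I'm applying those ROI and range settings, assuming the pimoroni VL53L1X PyPI driver (the class and method names below come from that package; the ROI values are the ones mentioned above):

```python
import VL53L1X

tof = VL53L1X.VL53L1X(i2c_bus=1, i2c_address=0x29)
tof.open()

# Full-array ROI (0, 15, 15, 0) vs. the small ROI (4, 8, 8, 4) described above.
roi_full  = VL53L1X.VL53L1xUserRoi(0, 15, 15, 0)
roi_small = VL53L1X.VL53L1xUserRoi(4, 8, 8, 4)
tof.set_user_roi(roi_small)

# Range/distance mode: 1 = Short, 2 = Medium, 3 = Long.
tof.start_ranging(3)
print(tof.get_distance())   # single reading, in millimetres
tof.stop_ranging()
```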
2024-12-24 02:17 AM
@peterhczaja wrote: I've installed the PyPI version of VL53L1X.
This: https://pypi.org/project/VL53L1X/ ?
There are contact links for the maintainers on that page.
2025-01-02 09:29 AM
Short, Med, and Long define the 'pulse repetition rate'. It's how long the sensor waits before the next pulse.
If one waits long enough for the light to go out to 4 m, you don't get as many pulses in, say, 30 ms as you would if you only waited for the light to go out to 2 m.
More pulses mean more photon detections. So you get more accuracy and, oddly, you can see faint targets more clearly.
I believe what you are seeing is the combination of both your distance mode and the reflectivity of your target.
When in long mode you are sending fewer pulses and thus getting fewer photons returning.
In medium mode, you get more pulses, and thus more photons.
If you increased your timing budget in 'long' mode so you got the same number of pulses, you should get the same distances.
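As a rough sketch of that suggestion, assuming the pimoroni VL53L1X PyPI driver and its set_timing(timing_budget_us, inter_measurement_ms) call (the values and call order here are illustrative, not prescriptive):

```python
import VL53L1X

tof = VL53L1X.VL53L1X(i2c_bus=1, i2c_address=0x29)
tof.open()

# A longer timing budget in Long mode allows more pulses per measurement,
# recovering the photon count (and hence accuracy) you saw in Medium mode.
tof.set_timing(140000, 150)  # 140 ms budget, 150 ms inter-measurement period
tof.start_ranging(3)         # 3 = Long distance mode
print(tof.get_distance())    # distance in millimetres
tof.stop_ranging()
```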
So next let's talk about the different ways one can use the sensor.
The VL53L1X and the VL53L4CD use standard ranging.
The VL53L1CB and the VL53L4CX were intended to use Histogram ranging.
(Histogram ranging involves sending the raw data to your MCU and it is the power of your MCU that digs out the distance. It does a better job, but it takes more resources.)
If you use the Python L1X code, you are running your VL53L4CX in standard ranging mode, not histograms.
(And yes, it does work. The L4 and the L1 only differ in the VCSEL (laser). Otherwise, they are the same part.)
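In practice, "standard ranging" through the Python driver just means you get one fused distance per measurement and never see the raw histogram bins; something like this minimal loop (again assuming the pimoroni VL53L1X package; histogram ranging would instead require ST's VL53L4CX C driver on the host):

```python
import time
import VL53L1X

tof = VL53L1X.VL53L1X(i2c_bus=1, i2c_address=0x29)
tof.open()
tof.start_ranging(2)   # 2 = Medium distance mode

try:
    for _ in range(10):
        # Each call returns a single distance in mm - the standard-ranging result.
        print(tof.get_distance())
        time.sleep(0.1)
finally:
    tof.stop_ranging()
```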
I see no reason that one angle is better than the other. I can only guess that you might have a window (and thus ambient light) affecting your measurements.
2025-01-04 08:53 AM
OK, well, bearing all that has been said in mind, I decided to revert to the VL53L1X and carry out a few tests, which I attach to this note. I've basically used the VL53L1X to scan 180 degrees in front of my robot using a simple servo mechanism.
For each scan, I've changed just one parameter to see what the effect would be on distance measurement. So, for example, the first diagram "Long 40000/60 0,15,15,0" refers to the Long range setting, with a timing budget of 40000, an inter-measurement period of 60, and finally the ROI setting, in this case full.
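For what it's worth, each scan was roughly the sketch below, assuming the pimoroni VL53L1X driver with the timing budget in microseconds and the inter-measurement period in milliseconds; set_servo_angle() is just a hypothetical placeholder for the robot's own servo code:

```python
import time
import VL53L1X

def set_servo_angle(angle_deg):
    # Placeholder for the robot's servo control - not a real library call.
    pass

tof = VL53L1X.VL53L1X(i2c_bus=1, i2c_address=0x29)
tof.open()
tof.set_user_roi(VL53L1X.VL53L1xUserRoi(0, 15, 15, 0))  # full-array ROI
tof.set_timing(40000, 60)                               # 40 ms budget, 60 ms period
tof.start_ranging(3)                                    # 3 = Long

scan = []
for angle in range(0, 181):
    set_servo_angle(angle)
    time.sleep(0.06)                          # let the servo settle for one period
    scan.append((angle, tof.get_distance()))  # distance in mm

tof.stop_ranging()
```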
As you will note from the top 2 diagrams (left to right), there is hardly any difference, and the measurements are reasonably close to actual measurements.
I decided to leave the scans on long range and see what the effect would be of varying the ROI. All the scans, by the way, were done with the robot in exactly the same position.
Now if we compare the top left with all the others below, there is a marked difference in measurement accuracy. Starting with a very narrow ROI (6,9,9,6), the measurements are totally inaccurate. Moving out a little (4,11,11,4), you can see the shape starting to form, but the measurements are still not accurate. Then finally I use an ROI that forms a band across the array (0,11,15,4), and there is a definite improvement.
So I can deduce from this little experiment that adjusting the ROI from FULL to a smaller ROI does not improve accuracy at all, which is a bit strange because I would have expected a smaller ROI to be more selective. However, this doesn't appear to be the case.
But what might be a better idea would be to narrow the transmitted beam (not sure what you call this) to potentially reduce "scatter". Would this be possible with a small funnel-like device on the front of the VL53L1X?
Also, another issue I encountered, but not shown in my document, is what happens when you point the sensor into a space deeper than 4 metres. My current tests show that the VL53L1X just returns a distance of about 2.5 metres. This may be because the "beam" is being reflected off the floor, but I'm a bit sceptical of that because I've tried it off the edge of a table as well, which is a lot higher off the floor. What I would expect is either a flat line at the outer limit (4 metres) or no data returned at all.
Finally, I'd welcome your advice on which sensor to use, the VL53L1X or the VL53L4CX. I'm using them purely to detect what is in front of the robot and then to determine which direction the robot should take, hence avoiding any obstacle. So I guess the further the robot can "see", the better, assuming, that is, there is Python code to enable the "extended mode" facility.