2013-11-27 07:42 PM
Hello, this is my first post; I'm new to the community. As part of my thesis I developed an object tracking system with an STM32F103VCT microcontroller, to demonstrate its capabilities and the potential to build embedded vision systems that can be even more useful than a whole computer running OpenCV (or similar) in those cases where size and energy constraints make it impossible to have a computer on board, like in UAVs, self-driving vehicles, autonomous robots or even CubeSat/space applications. I must say STM32 microcontrollers are outstanding and a lot of great stuff can be made with them; please check the video below, and I would be glad to answer your questions.
https://www.youtube.com/watch?v=AwHzUILiGmA #artificial-vision #lcd #camera
2013-11-28 11:57 PM
A really interesting project, with a nice 'presentable' system as a result. The approach seems to involve some simplification in the image processing part, partly, I assume, because of the limited time for a thesis project.
Other projects (often based on OpenCV) go the edge detection -> pattern matching way, instead of searching for large colour clusters. But that might be part of follow-up projects. From the facts presented (F103VC, 320x240), I guess you used external RAM to store the image. Otherwise the image processing would become rather messy. How long does an image processing cycle take, i.e. at which rate do you update the servos? How about the processor load, would there be room for more sophisticated algorithms? BTW, a friend of mine had a similar thesis in the mid-nineties, only based on an 8051 MCU ...
2013-11-29 08:57 AM
Thanks
Thanks for the reply. The camera comes with a FIFO memory included which stores the whole frame; as the microcontroller reads the pixels out, it runs the segmentation algorithm, which, after evaluating the whole frame, leaves a ''binary image'' in the microcontroller that only takes 2400 uint32_t memory words. The servos are updated every 20 ms with a filter to smooth the movements; the frame and the PI controller are updated every 115 ms. I suppose the processor is running instructions all the time (I don't know how to measure the load, can you please tell me?). It is very cool that your friend tried a similar project back in the nineties; I can only guess how difficult it must have been, I was a child!! The weak point of my project is the algorithm to locate the centroid and the surrounding box; as you said, finding ''large colour blobs'' isn't the best method, but I can't think of another way to do it without too many instructions involved.
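(Side note for readers: the 2400 figure follows from packing the 320x240 image at one bit per pixel, i.e. 76,800 bits = 2400 32-bit words. The snippet below is only a rough sketch of what such a packed mask plus a single-pass centroid/bounding-box scan could look like; it is not the project's actual code, and names like mask_set(), find_blob() and blob_t are made up for illustration.)

```c
/* Sketch only: 320x240 binary mask packed into 2400 x 32-bit words,
 * plus one integer-only pass that finds the centroid and bounding box
 * of the set pixels. All names are illustrative, not the poster's code. */
#include <stdint.h>

#define IMG_W 320
#define IMG_H 240
#define MASK_WORDS ((IMG_W * IMG_H) / 32)      /* = 2400 */

static uint32_t mask[MASK_WORDS];

static inline void mask_set(int x, int y)
{
    uint32_t idx = (uint32_t)y * IMG_W + (uint32_t)x;
    mask[idx >> 5] |= 1u << (idx & 31);
}

static inline int mask_get(int x, int y)
{
    uint32_t idx = (uint32_t)y * IMG_W + (uint32_t)x;
    return (int)((mask[idx >> 5] >> (idx & 31)) & 1u);
}

typedef struct {
    int cx, cy;                      /* centroid of the blob            */
    int x_min, y_min, x_max, y_max;  /* surrounding (bounding) box      */
    uint32_t count;                  /* number of set pixels            */
} blob_t;

/* Single pass over the mask: accumulate coordinate sums for the centroid
 * and track min/max coordinates for the box. Returns 0 if mask is empty. */
static int find_blob(blob_t *b)
{
    uint32_t sum_x = 0, sum_y = 0, count = 0;
    int x_min = IMG_W, y_min = IMG_H, x_max = -1, y_max = -1;

    for (int y = 0; y < IMG_H; y++) {
        for (int x = 0; x < IMG_W; x++) {
            if (!mask_get(x, y))
                continue;
            sum_x += (uint32_t)x;
            sum_y += (uint32_t)y;
            count++;
            if (x < x_min) x_min = x;
            if (x > x_max) x_max = x;
            if (y < y_min) y_min = y;
            if (y > y_max) y_max = y;
        }
    }
    if (count == 0)
        return 0;
    b->cx = (int)(sum_x / count);
    b->cy = (int)(sum_y / count);
    b->x_min = x_min; b->y_min = y_min;
    b->x_max = x_max; b->y_max = y_max;
    b->count = count;
    return 1;
}
```

A single pass over the packed mask keeps both RAM usage and instruction count low, which matches the constraints described above.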
2013-12-02 12:58 AM
Thanks for the info. Most cameras mentioned in different threads here do not have (or use) a full frame buffer. I think it's a good idea, especially in a setup with limited RAM.
As I understand it, you let the camera gather a picture and stop it upon completion to read the image. The next image is probably taken once the processing of the current one is finished and all controller/servo outputs are updated. This way, you run the MCU under full load most of the time, i.e. measuring the load would make no sense. I had assumed you use a fixed scheduling frame, to get constant sampling/update rates.

''The weak point of my project is the algorithm to locate the centroid and the surrounding box ...''

I think the choice of algorithm here is restricted by hardware limitations. Trying to recognize objects by colour is not a very realistic real-world approach, as it needs a ''colour-managed'' environment and constant lighting. Other methods I know start with edge detection and try to recognize objects with different pattern matching algorithms. But those use AT LEAST one full frame in RAM, and require more complex calculations, often using floating point.

One way to improve your project would be to use a Cortex M4 (the STM32F4xx has 192k RAM, an FPU, and runs at 168 MHz). Or, you could use one of the many affordable Cortex A5/A7/A9 boards available, adding the comfort of an OS. Another way would be to scale down the image resolution. That, BTW, was what my friend did in his project. He used edge detection on the difference between the current image and a reference, to detect new and moving objects. Performance was only about 2..3 frames per second, but enough to detect pedestrians at a traffic light.
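(Side note, since the question of how to measure processor load was left open: one common bare-metal technique is to count free iterations of the idle loop per second and compare against a calibration value taken with all processing disabled. The sketch below assumes a SysTick configured to fire every 1 ms; the calibration constant and variable names are placeholders, not anything from the original project.)

```c
/* Rough sketch, not from the original project: estimating CPU load on a
 * bare-metal Cortex-M by counting idle-loop iterations per second.
 * IDLE_COUNT_UNLOADED is a placeholder: measure it once with all image
 * processing disabled. Assumes SysTick fires every 1 ms. */
#include <stdint.h>

#define IDLE_COUNT_UNLOADED 1000000u   /* placeholder: idle iterations/s, no workload */

static volatile uint32_t idle_count;
static volatile uint32_t ms_ticks;
static volatile uint8_t  cpu_load_percent;

void SysTick_Handler(void)
{
    if (++ms_ticks >= 1000u) {                       /* once per second */
        uint32_t idle = idle_count;
        if (idle > IDLE_COUNT_UNLOADED)
            idle = IDLE_COUNT_UNLOADED;              /* clamp calibration error */
        /* divide first so the arithmetic cannot overflow 32 bits */
        cpu_load_percent = (uint8_t)(100u - idle / (IDLE_COUNT_UNLOADED / 100u));
        idle_count = 0;
        ms_ticks   = 0;
    }
}

int main(void)
{
    /* ... clock, SysTick, camera and servo initialisation ... */
    for (;;) {
        idle_count++;   /* only increments when nothing else is running   */
        /* frame processing / PI control would run here (or from ISRs)    */
        /* whenever a new frame is ready, displacing the idle counting    */
    }
}
```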
2013-12-11 09:57 AM
2013-12-11 05:28 PM
2013-12-12 01:08 AM
After reading your feedback I wanted to improve the ''blob detection''. It is far better now: it recognizes the contour of the blob, and it is a little bit faster. I'm pretty sure MCU usage is 100%. I have also thought about migrating it to the STM32F4 Discovery, which would be the best option... but there are too many camera and LCD pins.
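(Side note: one cheap way to extract contour pixels from a packed binary mask like the one sketched earlier is shown below. This is illustrative only, not the poster's actual algorithm, and it reuses the hypothetical IMG_W/IMG_H/mask_get() names from the earlier sketch.)

```c
/* Illustrative only: a set pixel is treated as a contour pixel when at
 * least one of its 4-neighbours is clear, or when it touches the image
 * border. Reuses the hypothetical mask helpers from the earlier sketch. */
static int is_contour_pixel(int x, int y)
{
    if (!mask_get(x, y))
        return 0;
    if (x == 0 || x == IMG_W - 1 || y == 0 || y == IMG_H - 1)
        return 1;                                   /* touches the border */
    return !mask_get(x - 1, y) || !mask_get(x + 1, y) ||
           !mask_get(x, y - 1) || !mask_get(x, y + 1);
}
```

One could then run the earlier bounding-box scan over just these pixels, or count them as a rough perimeter measure.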
This is probably not part of your thesis project anymore, but a good start at diving into the embedded world. Using a Cortex M4 (like the STM32F4) both improves performance and adds floating point capability, and the RAM size puts far fewer restrictions on your algorithms.

''I still haven't experimented with the Cortex A5/A7/A9 families. It's a whole new world which some day I would like to try.''

That is, in fact, a whole different world. Image size and colour restrictions are mostly absent, and the available performance is at least one order of magnitude higher. And you would have the support of an OS, which allows you to use, for instance, webcams via USB, and an attached HDMI monitor.
2013-12-12 01:10 AM
... which I am hoping will be able to accurately target objects in flight.
A heat-seeking anti-aircraft/anti-tank missile ? ... ;)
2013-12-12 03:44 PM
Not quite. More a combination of a water pistol and the sentry gun from the Aliens film. I am hoping to be able to throw a dry pillow up in the air and have it come down wet, and if it can differentiate friend (birds) from foe (damn squirrels), all the better.
Also, the moving head light chassis I am using is large enough and powerful enough to carry my DSLR, which should hopefully enable some nice shots of birds in flight and other wildlife in my garden.

Today I finished wiring up the FPGA (using a QFP-240 to PGA adapter which is fitted below one of my Disco boards). The next step is to see if I have got the design right, so that I can program it, finalise the hardware design and get some proper prototype boards made up. And with the Christmas break approaching I should have some time to work on the software.
2013-12-13 12:08 AM
''Not quite.''
I hope you didn't take my comment too seriously ...

''I am hoping to be able to throw a dry pillow up in the air and have it come down wet, and if it can differentiate friend (birds) from foe (damn squirrels), all the better.''
It would be rather difficult/costly to differentiate between birds and squirrels, I guess. (BTW, both fall in the class ''friend'' for me, since they are edible ... ;) Having some ''for-fun'' image processing projects of my own, I follow another hardware approach. Using , I can attach a cheap webcam and have netbook-like performance, with a proper OS and plenty of RAM. It draws a little more current, though. I think FPGAs are a viable way to circumvent some architectural restrictions and performance shortcomings of Cortex M MCUs, but I don't have the time to delve into that as well.