
STM32F1 TRACKING OBJECTS BY COLOR

diegomess89
Associate II
Posted on November 28, 2013 at 04:42

Hello, this is my first post; I'm new to the community. As part of my thesis I developed an object tracking system based on an STM32F103VCT microcontroller, to demonstrate its capabilities and the potential of embedded vision systems. In particular cases where size and energy constraints make it impossible to have a computer running OpenCV (or similar), such as UAVs, self-driving vehicles, autonomous robots or even CubeSat/space applications, such a system can be even more useful than a whole PC. I must say STM32 microcontrollers are outstanding and a lot of great stuff can be made with them. Please check the video below; I would be glad to answer your questions.

https://www.youtube.com/watch?v=AwHzUILiGmA

#artificial-vision #lcd #camera
13 REPLIES
frankmeyer9
Associate II
Posted on November 29, 2013 at 08:57

A really interesting project, with a nice 'presentable' system as a result. The concept seems to involve some simplification in the image-processing part, partly, I assume, because of the limited time available for a thesis project.

Other projects (often based on OpenCV) go the edge detection -> pattern matching way, instead of searching for large colour clusters. But that might be part of follow-up projects.

From the facts presented (F103VC, 320x240), I guess you used external RAM to store the image. Otherwise the image processing would become rather messy.

How long does an image processing cycle take, i.e. at which rate do you update the servos?

How about the processor load? Would there be room for more sophisticated algorithms?

BTW, a friend of mine had a similar thesis in the mid-nineties, only based on an 8051 MCU ...

diegomess89
Associate II
Posted on November 29, 2013 at 17:57

Thanks for the reply. The camera comes with a FIFO memory included, which stores the whole frame. As the microcontroller reads the pixels it performs the segmentation algorithm, which, after evaluating the whole frame, stores in the microcontroller a ''binary image'' that only takes 2400 uint32_t memory words.
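In rough C the segmentation loop looks something like this (only an illustrative sketch, not my exact code; read_fifo_pixel() and the colour thresholds are placeholders):

/* Sketch: segment a 320x240 RGB565 frame into a packed binary image
 * of 2400 uint32_t words (76800 pixels / 32 bits per word). */
#include <stdint.h>

#define FRAME_W 320
#define FRAME_H 240
#define BINARY_WORDS ((FRAME_W * FRAME_H) / 32)   /* = 2400 */

static uint32_t binary_image[BINARY_WORDS];

extern uint16_t read_fifo_pixel(void);            /* placeholder camera FIFO read */

void segment_frame(void)
{
    for (uint32_t i = 0; i < BINARY_WORDS; i++) {
        uint32_t word = 0;
        for (uint32_t bit = 0; bit < 32; bit++) {
            uint16_t px = read_fifo_pixel();      /* RGB565 pixel from the FIFO */
            uint8_t r = (px >> 11) & 0x1F;
            uint8_t g = (px >> 5)  & 0x3F;
            uint8_t b =  px        & 0x1F;
            /* example threshold for a reddish target - tune for the real object */
            if (r > 20 && g < 20 && b < 12)
                word |= 1u << bit;
        }
        binary_image[i] = word;
    }
}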

Servos are updated every 20ms with a filter to smooth the movements; the frames and the PI controller are updated every 115ms. I suppose the processor is running instructions all the time (I don't know how to measure the load; can you tell me how?).
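The control side is roughly like the following sketch (illustrative only; the gains, limits and the set_servo_pulse_us() helper are made-up placeholders):

/* Sketch: PI controller recomputed per frame (~115 ms), smoothed servo
 * command refreshed every 20 ms servo frame. */
#include <stdint.h>

static float kp = 0.08f, ki = 0.02f;   /* illustrative gains */
static float integral = 0.0f;
static float target_us = 1500.0f;      /* commanded pulse width */
static float smoothed_us = 1500.0f;    /* what is actually sent to the servo */

extern void set_servo_pulse_us(uint16_t us);   /* placeholder PWM helper */

/* Called once per processed frame; error_px is the distance of the blob
 * centroid from the image centre. */
void pi_update(float error_px)
{
    integral += error_px;
    if (integral >  2000.0f) integral =  2000.0f;   /* anti-windup clamp */
    if (integral < -2000.0f) integral = -2000.0f;
    target_us = 1500.0f + kp * error_px + ki * integral;
    if (target_us < 1000.0f) target_us = 1000.0f;   /* keep within servo range */
    if (target_us > 2000.0f) target_us = 2000.0f;
}

/* Called every 20 ms: simple low-pass filter towards the target. */
void servo_update_20ms(void)
{
    smoothed_us += 0.25f * (target_us - smoothed_us);
    set_servo_pulse_us((uint16_t)smoothed_us);
}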

It is very cool that your friend tried a similar project back in the nineties; I can only guess how difficult it must have been, I was a child back then!

The weakness of my project is the algorithm that locates the centroid and the surrounding box. As you said, finding ''large colour blobs'' isn't the best method, but I can't think of another way to do it without too many instructions involved.
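For reference, a single pass over the packed binary image is enough to get a centroid and a surrounding box; a simplified sketch of that idea (not my literal code):

/* Sketch: centroid and bounding box from the packed binary image,
 * one pass, no extra buffers. */
#include <stdint.h>

#define FRAME_W 320
#define FRAME_H 240

void blob_stats(const uint32_t *binary_image,
                uint16_t *cx, uint16_t *cy,
                uint16_t *xmin, uint16_t *xmax,
                uint16_t *ymin, uint16_t *ymax)
{
    uint32_t sum_x = 0, sum_y = 0, count = 0;
    *cx = 0; *cy = 0;
    *xmin = FRAME_W; *ymin = FRAME_H; *xmax = 0; *ymax = 0;

    for (uint32_t y = 0; y < FRAME_H; y++) {
        for (uint32_t xw = 0; xw < FRAME_W / 32; xw++) {
            uint32_t word = binary_image[y * (FRAME_W / 32) + xw];
            if (word == 0) continue;               /* skip empty words quickly */
            for (uint32_t bit = 0; bit < 32; bit++) {
                if (word & (1u << bit)) {
                    uint32_t x = xw * 32 + bit;
                    sum_x += x; sum_y += y; count++;
                    if (x < *xmin) *xmin = (uint16_t)x;
                    if (x > *xmax) *xmax = (uint16_t)x;
                    if (y < *ymin) *ymin = (uint16_t)y;
                    if (y > *ymax) *ymax = (uint16_t)y;
                }
            }
        }
    }
    if (count) { *cx = (uint16_t)(sum_x / count); *cy = (uint16_t)(sum_y / count); }
}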

frankmeyer9
Associate II
Posted on December 02, 2013 at 09:58

Thanks for the info. Most cameras mentioned in different threads here do not have (or use) a full frame buffer. I think it's a good idea, especially in a setup with limited RAM.

As I understand it, you let the camera gather a picture, and stop it upon completion, to read the image. The next image is probably taken once the processing of the current one is finished, and all controller/servo outputs are updated. This way, you run the MCU under full load most of the time, i.e. measuring the load would make no sense.

I assumed you have a fixed scheduling frame, to get constant sampling/update rates.

The weakness of my project is the algorithm that locates the centroid and the surrounding box ...

 

I think the choice of algorithm here is restricted by hardware limitations. Trying to recognize objects by colour is not a very realistic real-world approach, as it needs a ''colour-managed'' environment, and constant lighting.

Other methods I know start with edge detection and try to recognize objects with different pattern-matching algorithms. But these need AT LEAST one full frame in RAM, and require more complex calculations, often using floating point.
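Just to illustrate the memory point: even a plain 3x3 Sobel pass, one of the simplest edge detectors, needs the whole greyscale frame plus an output buffer in RAM. A 320x240 8-bit frame alone is about 75 KB, already more than the 48 KB of SRAM on the F103VC. A rough sketch:

/* Sketch: Sobel gradient magnitude over an 8-bit greyscale frame.
 * Border pixels are left untouched for simplicity. */
#include <stdint.h>
#include <stdlib.h>

#define W 320
#define H 240

void sobel_magnitude(const uint8_t *in, uint8_t *out)
{
    for (int y = 1; y < H - 1; y++) {
        for (int x = 1; x < W - 1; x++) {
            int gx = -in[(y-1)*W + x-1] + in[(y-1)*W + x+1]
                     - 2*in[y*W + x-1]  + 2*in[y*W + x+1]
                     - in[(y+1)*W + x-1] + in[(y+1)*W + x+1];
            int gy = -in[(y-1)*W + x-1] - 2*in[(y-1)*W + x] - in[(y-1)*W + x+1]
                     + in[(y+1)*W + x-1] + 2*in[(y+1)*W + x] + in[(y+1)*W + x+1];
            int mag = abs(gx) + abs(gy);          /* cheap |G| approximation */
            out[y*W + x] = (uint8_t)(mag > 255 ? 255 : mag);
        }
    }
}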

One way to improve your project would be to use a Cortex M4 (the STM32F4xx has 192k RAM, FPU, and runs at 168MHz).

Or, you could use one of the many affordable Cortex A5/A7/A9 boards available, adding the comfort of an OS.

Another way would be to scale down the image resolution. That, BTW, was what my friend did in his project. He used edge detection on the difference between the current image and a reference to detect new and moving objects. Performance was only about 2..3 frames per second, but enough to detect pedestrians at a traffic light.
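The differencing step itself is cheap. A sketch of the idea (not his actual code, with an arbitrary threshold), run on a down-scaled greyscale frame:

/* Sketch: mark pixels that differ from a stored reference frame.
 * At 80x60 both buffers fit easily even in a small MCU's RAM. */
#include <stdint.h>
#include <stdlib.h>

#define DW 80
#define DH 60
#define DIFF_THRESHOLD 25     /* illustrative value */

static uint8_t reference[DW * DH];   /* captured once, or updated slowly */

uint32_t diff_frame(const uint8_t *current, uint8_t *changed /* 0/1 map */)
{
    uint32_t changed_count = 0;
    for (uint32_t i = 0; i < DW * DH; i++) {
        int d = abs((int)current[i] - (int)reference[i]);
        changed[i] = (d > DIFF_THRESHOLD) ? 1 : 0;
        changed_count += changed[i];
    }
    return changed_count;     /* crude measure of how much has moved */
}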

diegomess89
Associate II
Posted on December 11, 2013 at 18:57

I think your friend's project is amazing given the hardware he had at that time. After reading your feedback I wanted to improve the ''blob detection''; it is far better now, it recognizes the contour of the blob and is a little bit faster. I'm pretty sure MCU usage is 100%. I have also thought about migrating it to the STM32F4 Discovery, which would be the best, but there are too many camera and LCD pins. I still haven't experimented with the Cortex A5/A7/A9 families; it's a whole new world which some day I would like to try.

Mark Edwards
Associate II
Posted on December 12, 2013 at 02:28

''I can't think of another way to do it without too many instructions involved''

I am playing with something very similar – here is where I have got to.

My current solution doesn't rely on colour, but on movement (as it may want to track an object in the dark using (monochrome) IR).

The output from the camera goes, in the first instance, to an FPGA (a Cyclone III), which compares the current frame to the previous frame and sends 'difference' information to the processor as bits in an SPI stream (every pixel is coded as 0 for no difference or 1 for change) as the pixel data is clocked out of the camera's FIFO (OV7670). The CPU also gets a copy of the frame, for when tracking by colour (and/or brightness) is required while the camera is moving; at least then you know where the object was and where the camera has moved to when examining the frame buffer.

The CPU checks each line as it arrives, along with the surrounding lines, and tracks the largest block of movement (via the mechanics of an ex moving-head light), which I am hoping will be able to accurately target objects in flight.
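Roughly, the per-line check boils down to finding the longest run of ''changed'' bits in each incoming line; a simplified sketch (placeholder sizes, not my actual code):

/* Sketch: given one line of difference bits received over SPI (1 bit per
 * pixel), find the longest run of changed pixels and its centre. */
#include <stdint.h>

#define LINE_PIXELS 320
#define LINE_BYTES  (LINE_PIXELS / 8)

void largest_run(const uint8_t line_bits[LINE_BYTES],
                 uint16_t *run_len, uint16_t *run_centre)
{
    uint16_t best_len = 0, best_start = 0;
    uint16_t cur_len = 0, cur_start = 0;

    for (uint16_t x = 0; x < LINE_PIXELS; x++) {
        uint8_t changed = (line_bits[x / 8] >> (x % 8)) & 1u;
        if (changed) {
            if (cur_len == 0) cur_start = x;
            cur_len++;
            if (cur_len > best_len) { best_len = cur_len; best_start = cur_start; }
        } else {
            cur_len = 0;
        }
    }
    *run_len = best_len;
    *run_centre = best_start + best_len / 2;
}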

I too started with an STM32F103 but realised I needed a faster processor, so I am currently working with two STM32F429s, as one needs to be tied up with sending the image data to a Raspberry Pi (which I use as a real-time display) and with motor control.

frankmeyer9
Associate II
Posted on December 12, 2013 at 10:08

After reading your feedback I wanted to improve the ''blob detection''; it is far better now, it recognizes the contour of the blob and is a little bit faster. I'm pretty sure MCU usage is 100%. I have also thought about migrating it to the STM32F4 Discovery, which would be the best, but there are too many camera and LCD pins.

 

 

This is probably not part of your thesis project anymore, but a good start for diving deeper into the embedded world. Using a Cortex M4 (like the STM32F4) both improves performance and adds floating-point capability, and the RAM size puts far fewer restrictions on your algorithms.

I still haven't experimented with the Cortex A5/A7/A9 families; it's a whole new world which some day I would like to try.

That is, in fact, a whole different world. Image size and colour restrictions are mostly absent, and the available performance is at least one order of magnitude higher.

And you would have the support of an OS, which allows you to use, for instance, webcams via USB, and an attached HDMI monitor.

frankmeyer9
Associate II
Posted on December 12, 2013 at 10:10

 ... which I am hoping will be able to accurately target objects in flight.

 

A heat-seeking anti-aircraft/anti-tank missile? ... 😉

Mark Edwards
Associate II
Posted on December 13, 2013 at 00:44

Not quite. More a combination of a Water Pistol and the Sentry Gun from the Aliens film. I am hoping to be able to throw a dry pillow up in the air and have it come down wet, and if it can differentiate friend (birds) from foe (damn squirrels), all the better.

Also the moving head light chassis I am using is large enough and powerful enough to carry my DSLR which should hopefully enable some nice shots of birds in flight and other wildlife in my garden.

Today I finished wiring up the FPGA (using a QFP-240 to PGA adapter, which is fitted below one of my Disco boards). The next step is to see if I have got the design right, so that I can program it, finalise the hardware design and get some proper prototype boards made up. And with the Christmas break approaching, I should have some time to work on the software.

frankmeyer9
Associate II
Posted on December 13, 2013 at 09:08

Not quite.

 

I hope you didn't take my comment too seriously ...

I am hoping to be able to throw a dry pillow up in the air and it come down wet and if it can differentiate friend (birds) from foe (damn squirrels) all the better.

 

It would be rather difficult/costly to differentiate between birds and squirrels, I guess.

(BTW, both fall into the class ''friend'' for me, since they are edible ... 😉)

Having some ''for-fun'' image processing projects, I follow another hardware approach. Using http://www.wandboard.org/ , I can attach a cheap webcam and have netbook-like performance with a proper OS and plenty of RAM. It draws a little more current, though.

I think FPGAs are a viable way to circumvent some architectural restrictions and performance shortcomings of Cortex M MCUs, but I don't have the time to delve into that, too.