2013-11-27 07:42 PM
Hello, this is my first post; I'm new to the community. As part of my thesis I developed an object tracking system around an STM32F103VCT microcontroller, to demonstrate its capabilities and the potential of embedded vision systems. These can be even more useful than a whole computer running OpenCV (or similar) in cases where size and energy constraints make a computer impractical, such as UAVs, self-driving vehicles, autonomous robots, or even CubeSat/space applications. I must say STM32 microcontrollers are outstanding and a lot of great stuff can be made with them. Please check the video below; I would be glad to answer your questions.
https://www.youtube.com/watch?v=AwHzUILiGmA #artificial-vision #lcd #camera

2013-12-16 07:37 PM
Hello Tede, your project is interesting; I have some questions. Locating a moving blob by subtracting the previous frame from the current frame gives you motion information, but if the camera itself moves, the whole frame moves with it, turning the background of the previous and next frames into a moving object as well. How did you resolve this problem?
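For reference, by frame subtraction I mean plain per-pixel differencing, roughly like this sketch (the frame size, grayscale format, and threshold are made-up example values, not from the project):

```c
#include <stdint.h>
#include <stdlib.h>

#define W 160
#define H 120
#define THRESH 30   /* example motion threshold; tune per scene */

/* Mark pixels that changed between two grayscale frames:
   out[i] = 1 where |curr[i] - prev[i]| exceeds THRESH, else 0. */
void frame_diff(const uint8_t *prev, const uint8_t *curr, uint8_t *out)
{
    for (int i = 0; i < W * H; i++) {
        int d = (int)curr[i] - (int)prev[i];
        out[i] = (abs(d) > THRESH) ? 1 : 0;
    }
}
```

This works only while the camera is static: any ego-motion shifts the entire background between frames, so nearly every pixel gets flagged.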
Since your camera uses IR, did you fit any kind of filter to the OV7670? How much did your system improve when moving from the STM32F1 to the STM32F4? How are you sending image data to the Raspberry Pi, and how long does it take? Sorry if I ask too much; I find your whole project very interesting, with all these cameras, MCUs (two STM32s and a Raspberry Pi!) and even an FPGA involved.

2013-12-16 07:54 PM
Well, I know that migrating the system to the STM32F4 is not a must, but the idea that my project could be faster and better keeps bothering me, even though it is already consuming so much time. I guess that is how thesis work feels.
Anyway, Cortex-A5/A7/A9 capabilities are just astounding; sometimes I think I'm really losing time by not learning about this new world. What board would you recommend to start with, maybe outside STM? Maybe a Raspberry Pi?

2013-12-17 04:41 PM
Hi ayala. I am hoping that once the system identifies movement and starts moving, the encoders will provide the camera movement offset, allowing the object to be located in successive frames: it will already know what it is looking for, where it was, and where it was going.
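The idea of using the encoder offset can be sketched as shifting the previous frame by the camera's own motion before differencing. This is only an illustration under assumed values (frame size, threshold, and the pixel offset `dx`/`dy`, which would come from the pan/tilt encoder counts elsewhere):

```c
#include <stdint.h>
#include <stdlib.h>

#define W 160
#define H 120
#define THRESH 30   /* example motion threshold */

/* Difference the current frame against the previous frame shifted by the
   camera's own motion (dx, dy in pixels, derived from the encoders).
   Pixels with no valid counterpart in the previous frame are treated as
   unchanged. */
void frame_diff_compensated(const uint8_t *prev, const uint8_t *curr,
                            uint8_t *out, int dx, int dy)
{
    for (int y = 0; y < H; y++) {
        for (int x = 0; x < W; x++) {
            int px = x - dx;   /* where this pixel was in the previous frame */
            int py = y - dy;
            if (px < 0 || px >= W || py < 0 || py >= H) {
                out[y * W + x] = 0;          /* no overlap: treat as static */
            } else {
                int d = (int)curr[y * W + x] - (int)prev[py * W + px];
                out[y * W + x] = (abs(d) > THRESH) ? 1 : 0;
            }
        }
    }
}
```

With a correct offset, a static background cancels out and only the tracked object is flagged; in practice the offset is approximate, so some residual noise near edges is expected.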
I am using the OV7670 during development; if the system works as I hope, I will use different cameras. The main differences between the processors are the greater clock speed, larger memory, the external memory interface, and the camera interface. I was sending serial data via SPI and the 8-bit picture over 8 IO pins. As the pins on the Pi are scattered across the input register, there doesn't seem to be any advantage to the parallel port, due to having to pick the bits out of the register and the latency of the handshaking, so I will probably just use SPI in future. (I was using an 8 MHz SPI clock reliably, but that seemed to be the limit with the F1; I am hoping to increase this speed with the F4.) Give me a few more months and I will put up a video so that you can see if my approach works.

2014-02-28 11:56 PM
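For a sense of what the 8 MHz SPI clock costs per frame, here is a rough back-of-envelope helper (the QVGA grayscale frame format is an assumption; the actual format may differ, and handshaking overhead is ignored):

```c
/* Milliseconds needed to clock one frame's payload out over SPI.
   Ignores handshaking and inter-byte gaps, so it is a lower bound. */
double spi_frame_ms(unsigned width, unsigned height,
                    unsigned bytes_per_px, unsigned long spi_hz)
{
    unsigned long bits = (unsigned long)width * height * bytes_per_px * 8UL;
    return 1000.0 * (double)bits / (double)spi_hz;
}
```

For a 320x240 grayscale frame at 8 MHz this gives about 76.8 ms per frame, i.e. a ceiling of roughly 13 fps from the link alone, which is why raising the SPI clock on the F4 would matter.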
Hi Diego
I too made a basic image processing system, with an STM32F103RBT6-based board. Mine does not have the performance yours has: I can reach about 7 frames per second if I do not show the images on the LCD (instead I display a red crosshair showing the position of a detected blob). It can only detect one blob of one predefined colour (hard-coded) and tracks it with two servos. As long as there are no other "matching" pixels, it works pretty decently.

I process pixels on the fly, since there is not enough memory inside the F103. Each pixel taken from the camera is streamed to the LCD, but first I check whether it meets the colour criteria: I make it white if it does, and black if it does not.

Do you care to share your code? I'd love to see how you solved it.
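The on-the-fly approach looks roughly like this in my code. This is a simplified sketch: the RGB565 colour window values are examples, and `process_pixel`/`blob_centroid` are illustrative names, not the actual routines:

```c
#include <stdint.h>

/* Running accumulators for the blob centroid; reset once per frame. */
static uint32_t sum_x, sum_y, count;

/* Example RGB565 window for a reddish target -- tune for the actual
   object and lighting. */
#define R_MIN 20
#define R_MAX 31
#define G_MIN 0
#define G_MAX 20
#define B_MIN 0
#define B_MAX 10

/* Called for every pixel as it arrives from the camera. Returns the
   pixel to stream to the LCD: white if it matches the colour window,
   black otherwise. Matching coordinates feed the centroid. */
uint16_t process_pixel(uint16_t rgb565, uint16_t x, uint16_t y)
{
    uint8_t r = (rgb565 >> 11) & 0x1F;   /* 5 bits red   */
    uint8_t g = (rgb565 >> 5)  & 0x3F;   /* 6 bits green */
    uint8_t b =  rgb565        & 0x1F;   /* 5 bits blue  */

    if (r >= R_MIN && r <= R_MAX &&
        g >= G_MIN && g <= G_MAX &&
        b >= B_MIN && b <= B_MAX) {
        sum_x += x;
        sum_y += y;
        count++;
        return 0xFFFF;                   /* white */
    }
    return 0x0000;                       /* black */
}

/* At end of frame: centroid of matching pixels, where the crosshair is
   drawn and what the two servos steer towards. Returns 0 if no blob. */
int blob_centroid(uint16_t *cx, uint16_t *cy)
{
    if (count == 0)
        return 0;
    *cx = (uint16_t)(sum_x / count);
    *cy = (uint16_t)(sum_y / count);
    sum_x = sum_y = count = 0;           /* reset for the next frame */
    return 1;
}
```

Because only three accumulators live in RAM, no frame buffer is needed, which is the whole point on the F103's limited memory.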