In the beginning I tried to generalize the motion tracking system. Previously I had detected a colored object and rotated the camera accordingly. The fundamental concept of operation is to detect movement from the changes between two consecutive frames. The camera rotates according to the movement and tries to keep the moving object in frame.
It uses OpenCV with Python bindings to run the computer vision routines. The computer vision subsystem consists of image processing followed by contour detection. The camera rotation platform uses a stepper motor controlled by an Atmel ATmega8L microcontroller, which is connected to the computer through a Bluetooth serial modem.
The custom PCB that carries the ATmega8L and the motor driver that drives the motor…
Camera mounted on the rotor
There is a lot of room for improvement. I will continue to work on this.
This was my B. Tech Final Year Project.
All the code is on http://github.com/rivalslayer/MotionDetection, including the AVR Studio project files required to compile and program the microcontroller that interfaces the camera rotation platform with the computer. The complete report is also there.
For my final year project, I initially planned to use Kalman filtering for motion detection, but as I couldn’t get the Kalman filtering to work in time, I settled for a color-based tracking system. By simply converting the whole camera image to the HSV color-space and setting a range for a particular color, I tracked the colored blobs. Next I will be using Kalman filtering.
The computer vision library used was OpenCV, running with Python bindings. The system communicates with the computer via Bluetooth, and there is a webcam fitted on the rotation mechanism, which I built from old printer parts.
Here is a video of it in action,
My final year team engineering project is a computer vision based object tracking system. A camera follows an object as it moves in front of it, trying to keep it at the center of the frame. Primarily, our objective is to detect the moving object and then rotate the camera using a motorized platform. First we will build a side-to-side panning platform, then add up-down panning. If possible, we will make a motorized object follower by integrating the whole system into a single device. For that we are planning to use a Linux-based ARM board (viz. the BeagleBoard).
For computer vision we are using OpenCV. After testing multiple languages and computer vision libraries, it was apparent that OpenCV is the best library out there. Previously, JMyron and Blob Detection in Processing were used; now it’s fully OpenCV. Primary prototyping will be done in Python. If possible we will migrate it to C for better performance.
The rotation platform and the PC will communicate over a Bluetooth serial module, or a wired serial connection. The motorized platform will have stepper motors controlled by an AVR (presumably an ATmega8). AVR programming will be done with AVR-GCC.
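On the PC side, the control problem reduces to converting the tracked point's pixel error into a number of motor steps and sending that over the serial link. A sketch of that mapping, under stated assumptions: the field of view, steps per revolution, frame width, and the one-byte direction-plus-count command format are all hypothetical, since the actual protocol isn't fixed yet:

```python
import struct

STEPS_PER_REV = 200        # assumption: a common 1.8-degree stepper
DEG_PER_STEP = 360 / STEPS_PER_REV
CAMERA_FOV_DEG = 60        # assumption: horizontal field of view of the webcam
FRAME_WIDTH = 640

def error_to_steps(cx):
    """Convert the control point's horizontal pixel error to signed motor steps."""
    err_px = cx - FRAME_WIDTH // 2
    # Approximate pixels-to-degrees mapping across the field of view
    err_deg = err_px * CAMERA_FOV_DEG / FRAME_WIDTH
    return round(err_deg / DEG_PER_STEP)

def make_command(steps):
    """Pack a hypothetical direction byte ('L'/'R') plus a step-count byte."""
    direction = b"R" if steps >= 0 else b"L"
    return direction + struct.pack("B", min(abs(steps), 255))
```

The resulting bytes would be written to the Bluetooth serial port for the AVR firmware to interpret.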
Currently our focus is on computer vision and exploring the capabilities of OpenCV. Our current plan is to use contour detection and then, using the contour’s center of gravity (centroid) as our control point, rotate the camera. Assuming we will face problems with the control feedback to the rotor, we will have to explore control schemes. We might use predictive filtering techniques to predict control points, with Kalman filtering as the primary control-point detector.
The camera being used is a Logitech C270 webcam.
Experiments with OpenCV yielded some results…
Canny edge detection
Haar-like feature based object detection, applied as face detection