Monday, 2 July 2012

SIFT

The algorithm

SIFT is quite an involved algorithm. It has a lot going on and can become confusing, so I’ve split up the entire algorithm into multiple parts. Here’s an outline of what happens in SIFT.
  1. Constructing a scale space: This is the initial preparation. You create internal representations of the original image to ensure scale invariance. This is done by generating a “scale space”.
  2. LoG approximation: The Laplacian of Gaussian is great for finding interesting points (or key points) in an image, but it’s computationally expensive. So we cheat and approximate it using the representation created earlier.
  3. Finding keypoints: With the super-fast approximation, we now try to find key points. These are the maxima and minima in the Difference of Gaussian images we calculated in step 2 (a small sketch of steps 1–3 follows this list).
  4. Getting rid of bad key points: Edges and low-contrast regions make bad keypoints. Eliminating them makes the algorithm efficient and robust. A technique similar to the Harris corner detector is used here.
  5. Assigning an orientation to the keypoints: An orientation is calculated for each key point. Any further calculations are done relative to this orientation. This effectively cancels out the effect of orientation, making the algorithm rotation invariant.

Source: http://www.aishack.in/2010/05/sift-scale-invariant-feature-transform/
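Here is a minimal sketch of the scale-space and DoG idea from steps 1–3, in Python with OpenCV (the post’s era used OpenCV’s C API, but the idea is the same). The sigma values, the number of blur levels, and the filename "input.jpg" are illustrative stand-ins, not the full algorithm’s exact parameters.

    import cv2
    import numpy as np

    # Load a grayscale image; float32 avoids clipping when we subtract.
    img = cv2.imread("input.jpg", cv2.IMREAD_GRAYSCALE).astype(np.float32)

    k = 2 ** 0.5   # multiplier between adjacent blur levels (illustrative)
    sigma = 1.6    # base sigma

    # Step 1: the "scale space" - progressively blurred copies of the image.
    blurred = [cv2.GaussianBlur(img, (0, 0), sigma * k**i) for i in range(5)]

    # Step 2: Difference of Gaussians approximates the Laplacian of Gaussian.
    dogs = [blurred[i + 1] - blurred[i] for i in range(4)]

    # Step 3 would scan these DoG images for local maxima/minima,
    # comparing each pixel to its 26 neighbours across adjacent scales.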

Sunday, 1 July 2012

Face Recognition using SURF


SURF is a scale- and rotation-invariant feature detector. The SURF descriptor has only 64 dimensions, so it is more efficient than the 128-dimensional SIFT descriptor.

There are several traditional methods of face recognition:
-Eigenfaces
-Fisherfaces
-2D-PCA
-Elastic graph matching

Let’s see what this SURF is all about.

  • SURF is a scale and in-plane rotation invariant detector and descriptor, with comparable or even better performance than SIFT.
  • In SURF, detectors are first employed to find the interest points in an image, and then the descriptors are used to extract the feature vectors at each interest point.
  • SURF uses a Hessian-matrix approximation operating on the integral image to locate the interest points, which reduces the computation time drastically (the sketch after this list shows why the integral image helps).
  • As for the descriptor, first-order Haar wavelet responses in the x and y directions are used in SURF to describe the intensity distribution within the neighbourhood of an interest point, whereas SIFT uses the gradient.
  • In addition, only 64 dimensions are usually used in SURF, reducing the time cost of both feature computation and matching. Because each SURF feature generally has only 64 dimensions, and an indexing scheme is built using the sign of the Laplacian, SURF is much faster than the 128-dimensional SIFT at the matching step.
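To make the integral-image point concrete, here is a small numpy sketch (my illustration, not from the original post) of why box sums become cheap: once the table is built, any rectangular sum costs just four array lookups, whatever the box size. OpenCV’s cv2.integral computes the same table.

    import numpy as np

    img = np.random.rand(240, 320).astype(np.float32)  # stand-in image

    # Integral image, padded with a zero row/column so that
    # ii[y, x] == img[:y, :x].sum()
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), np.float32)
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)

    def box_sum(y0, x0, y1, x1):
        """Sum of img[y0:y1, x0:x1] using only four lookups."""
        return ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]

    assert np.isclose(box_sum(10, 20, 60, 100), img[10:60, 20:100].sum())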

Generally, this project comprises three steps:
  • Preprocessing the images
  • Interest point detection and extracting features
  • Feature matching





PREPROCESSING THE IMAGES






Detecting the face is just the beginning. The next step is to normalise the face, cropping the image so that only the face part remains. Then we apply histogram equalisation to make the face invariant to brightness and contrast. Finally, we rotate the face so that the eye coordinates sit at the same height. This is needed because we will be comparing photos for face recognition, and if the person tilts their head, the nose may be mistaken for one of the eyes by the recognition system. In the end, we are left with just the piece that we actually need.
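Here is a rough Python/OpenCV sketch of the equalisation and rotation steps just described. The eye coordinates and filename are hard-coded stand-ins; in this project they would come from the eye detector (or from the CSU system discussed next), and the cropping step is omitted.

    import cv2
    import numpy as np

    face = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder file
    face = cv2.equalizeHist(face)   # brightness/contrast invariance

    # Hypothetical eye centres (x, y); normally found by the eye detector.
    left_eye, right_eye = (60, 80), (120, 78)

    # Angle of the line joining the eyes, relative to horizontal.
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    angle = np.degrees(np.arctan2(dy, dx))

    # Rotate about the midpoint between the eyes so they end up level.
    centre = ((left_eye[0] + right_eye[0]) / 2.0,
              (left_eye[1] + right_eye[1]) / 2.0)
    M = cv2.getRotationMatrix2D(centre, angle, 1.0)
    aligned = cv2.warpAffine(face, M, (face.shape[1], face.shape[0]))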


The face recognition system compares this face with other faces processed the same way to recognise people. Incidentally, there is a saviour for us: the CSU Face Evaluation System. Just google the name and download the zipped file. The beauty of the CSU Face Evaluation System is that you supply an image containing a face, with any background, along with the eye centre coordinates, and it returns the face: detected, normalised, equalised and rotated.



The only thing that we need to supply is the eye centre coordinates of the image, i.e. the x and y coordinates of the left eye centre (eyeball centre) and the same for the right eye. To get these coordinates, I used the eye detector covered in the previous posts. Since it draws a square around the eye, finding the centre of that square is no big task; see the snippet below. I will explain it in future posts, and the next post will be on Sunday (28/08/11). For now, just download the CSU Face Evaluation System, try to decipher it, and carefully go through its documentation.
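The “centre of the square” computation really is trivial; the rectangle fields below follow OpenCV’s usual (x, y, width, height) convention, and the function name is just my own label.

    def eye_centre(x, y, w, h):
        # Midpoint of the bounding square the eye detector draws.
        return (x + w // 2, y + h // 2)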







SURF ALGORITHM FOR MATCHING

SURF stands for Speeded Up Robust Features. In a given image, SURF tries to find the interest points: the points where the variance is maximum. It then constructs a 64-dimensional descriptor vector around each one to extract the features (a 128-dimensional one is also possible, but it consumes more time for matching). In this way, there are as many descriptors in an image as there are interest points. Now, when we use SURF for matching, we follow the steps below for every probe image (the query image we want to identify); a short sketch of this loop follows the list.

1. Take a descriptor of the probe image
2. Compare it with all the descriptors of the gallery image (one of the set of possible matching images)
3. Find the nearest descriptor, i.e. the one with the lowest distance to our descriptor among all the others
4. Store that nearest-descriptor distance
5. Take a descriptor other than the ones already taken from the probe image and go to step 2
6. Once done, add up all the nearest-descriptor distances and divide the sum by the total number of probe descriptors. This gives us the average distance
7. Write this average distance, along with the name of the gallery image we just matched, to a file
8. For every gallery image, go to step 1
9. When all the gallery images are done, sort the distances in the output file; the one with the lowest distance is the best match for our probe image
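Here is one way to sketch steps 1–6 in numpy (my illustration, not code from the post). Here probe and gallery are arrays of SURF descriptors with shape (n, 64); how they were extracted (e.g. with cvExtractSURF, mentioned below) is left out, and gallery_descriptors is a hypothetical name-to-array mapping.

    import numpy as np

    def avg_nearest_distance(probe, gallery):
        """Average distance from each probe descriptor to its nearest
        gallery descriptor (steps 1-6 above)."""
        # Pairwise Euclidean distances, shape (n_probe, n_gallery).
        d = np.linalg.norm(probe[:, None, :] - gallery[None, :, :], axis=2)
        nearest = d.min(axis=1)   # steps 2-4 for every probe descriptor
        return nearest.mean()     # step 6: the average distance

    # Steps 7-9: score every gallery image, then pick the smallest average.
    # scores = {name: avg_nearest_distance(probe, g)
    #           for name, g in gallery_descriptors.items()}
    # best_match = min(scores, key=scores.get)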

There is already a function in OpenCV called cvExtractSURF to extract the SURF features of images, but there is no function to directly compare two images using SURF and give their distance. In the next post, we will talk about doing exactly that.











Saturday, 30 June 2012