Tuesday, March 3, 2015

A Vision based Geometrical Method to find Fingers Positions in Real Time Hand Gesture Recognition



(JOURNAL OF SOFTWARE, VOL. 7, NO. 4, APRIL 2012)


Ankit Chaudhary, Computer Vision Research Group, BITS Pilani, Rajasthan, INDIA; Jagdish L. Raheja, Machine Vision Lab, CEERI/CSIR, Pilani, Rajasthan, INDIA; Shekhar Raheja, Digital Systems Group, CEERI/CSIR, Pilani, Rajasthan, INDIA

Introduction

A novel method to calculate the bent fingers' angles is presented here, which could be used to control an electro-mechanical robotic hand. In this method the hand gesture is interpreted for controlling the robotic hand. The angles of all the fingers are calculated and could be further passed to the robotic hand for controlling its fingers.

The user has to show his natural hand (without wearing any mechanical or electronic equipment) to the camera, and the palm should face the camera. The user can show either hand to the system (right or left), and there is also no restriction on the direction of the hand. The user then bends his fingers to hold an object (a virtual object), and the robotic hand performs the same operation to hold the actual object at its location.
This vision-based system detects fingertips in real time from live video and calculates the fingers' bending angles.


1. The captured 2D image is first preprocessed.
2. A skin filter is applied.
3. A segmentation method extracts the hand gesture from the image frame, even if there are skin-colored objects in the gesture background.
4. The processed image is cropped to the area of interest only, to make further processing faster.
5. In the cropped image, fingertips and the centre of palm are detected.
6. The system then measures the distance between the centre of palm and each fingertip, from which the finger's bending angle is calculated.
7. The calculated angle for each finger can be passed as input to the robotic hand, so that the robotic hand can bend its fingers accordingly.
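Taken together, the steps above can be sketched as a simple per-frame pipeline. This is an illustrative outline only; the helper functions are hypothetical stand-ins that are sketched in the corresponding sections below, not the paper's code:

```python
def process_frame(frame_bgr, ref_distances):
    """One pass over a single video frame (illustrative composition of
    the helper sketches defined in the sections below)."""
    silhouette = skin_filter(frame_bgr)      # steps 1-3: skin segmentation
    direction = hand_direction(silhouette)   # wrist vs. finger side
    hand = crop_hand(silhouette)             # step 4: area of interest
    # 'direction' would be used here to rotate the crop so the fingers
    # point upward; omitted for brevity.
    tips = detect_fingertips(hand)           # step 5: fingertips
    cop = centre_of_palm(hand)               # step 5: centre of palm
    dists = tip_distances(cop, tips)         # step 6: COP-fingertip distances
    return [bending_angle(d, r)              # step 7: bending angles
            for d, r in zip(dists, ref_distances)]
```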


The system is able to detect fingertips, centre of palm and angles continuously without any system error. In this paper, fingertip-detection-based gesture recognition was done without using any training data or any learning-based approach.

III. IMAGE PRE-PROCESSING


Real-time video was captured in 2D using a simple web camera connected to a PC running Windows XP®.

A. Skin Filter


An HSV color-space-based skin filter was applied to the captured RGB image to reduce lighting effects.
The resultant image was segmented to get a binary image from the original one. Smoothing of the image was needed, as the output image had some clearly visible jagged edges.
There can be some noise in the filtered images due to falsely detected skin pixels or skin-colored objects. To remove these errors, a biggest-BLOB filter was applied to the noisy image. (The only limitation of this filter is that the hand should form the biggest BLOB. In this masking, the background is eliminated, so falsely detected skin pixels no longer exist in the background.)
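A minimal sketch of such a filter in Python with OpenCV. The HSV thresholds below are illustrative placeholders, not the paper's values:

```python
import cv2
import numpy as np

def skin_filter(frame_bgr):
    """HSV skin filter followed by biggest-BLOB masking (sketch)."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Threshold hue/saturation/value ranges typical for skin (illustrative).
    mask = cv2.inRange(hsv, np.array([0, 40, 60]), np.array([25, 255, 255]))
    # Smooth the jagged edges left by thresholding.
    mask = cv2.medianBlur(mask, 5)
    # Keep only the biggest connected component, assumed to be the hand;
    # this also wipes out falsely detected skin pixels in the background.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    if n <= 1:
        return mask  # no skin-colored region found
    biggest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
    return np.where(labels == biggest, 255, 0).astype(np.uint8)
```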

B. Hand Direction Detection

The user can give directionally free input by showing a hand gesture to the camera. The hand can be at any angle in any direction, but the palm should face the camera. The system first has to find the direction of the hand in order to extract the area of interest. For this, a 4-way scan of the preprocessed image was performed as shown in figure 4, and histograms were generated based on skin-color intensity in each direction. In all four scans the maximum value of skin pixels was chosen from the histograms, and it was noted that the maximum value of skin pixels represents the hand wrist; the opposite end of this scan obviously represents the fingers of the hand.



Histogram generation equations were:

Hrow(i) = Σ_j imb(i, j),  for i = 1, …, m
Hcol(j) = Σ_i imb(i, j),  for j = 1, …, n
(Here imb represents the binary silhouette, and m, n represent the rows and columns of the matrix imb. The yellow bar shown in figure 4 corresponds to the first skin pixel in the binary silhouette scanned in the left-to-right direction. Similarly, the other bars correspond to their respective directions, as shown in figure 4. For this input image frame the red bar had a greater magnitude than the other bars, so the hand wrist was toward the downward direction of the frame and consequently the fingers pointed in the upward direction.)
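A sketch of this 4-way scan, assuming the binary silhouette imb from the previous step (helper names are mine):

```python
import numpy as np

def hand_direction(imb):
    """Find which side of the binary silhouette imb (nonzero = skin) the
    fingers point to. A sketch of the paper's 4-way scan idea."""
    rows = (imb > 0).sum(axis=1)          # skin-pixel count per row
    cols = (imb > 0).sum(axis=0)          # skin-pixel count per column
    first = lambda hist: int(np.flatnonzero(hist)[0])
    last = lambda hist: int(np.flatnonzero(hist)[-1])
    # Magnitude of the first skin row/column reached from each border;
    # the largest bar marks the wrist side.
    bars = {
        'top': rows[first(rows)],
        'bottom': rows[last(rows)],
        'left': cols[first(cols)],
        'right': cols[last(cols)],
    }
    wrist = max(bars, key=bars.get)
    # Fingers lie at the opposite end of the wrist's scan direction.
    opposite = {'top': 'bottom', 'bottom': 'top',
                'left': 'right', 'right': 'left'}
    return opposite[wrist]
```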

C. Image Cropping
Image cropping eliminates unwanted regions from further processing steps and hence avoids unnecessary computation time.

In the histograms generated during hand direction detection, it was observed that at the point where the hand wrist ends, a steep inclination in the histogram's magnitude begins. The point where this inclination is found, together with the first-skin-pixel points from the other three scans, gives the coordinates at which the image is to be cropped. The equations for cropping the image were:

imcrop(x, y) = imb(x, y),  for Xmin ≤ x ≤ Xmax, Ymin ≤ y ≤ Ymax
Where imcrop represents the cropped image and Xmin, Ymin, Xmax, Ymax represent the boundary of the hand.
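A minimal sketch of the cropping step; it crops to the hand's bounding box and omits the wrist-side inclination trimming described above:

```python
import numpy as np

def crop_hand(imb):
    """Crop the binary silhouette imb to the hand's bounding box."""
    ys, xs = np.nonzero(imb)              # coordinates of all skin pixels
    ymin, ymax = ys.min(), ys.max()
    xmin, xmax = xs.min(), xs.max()
    return imb[ymin:ymax + 1, xmin:xmax + 1]
```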

IV. FINGERTIP AND CENTRE OF PALM DETECTION
Now we have the cropped image containing the area of interest, in which we will try to find the fingertips and the centre of palm for further use in the system.


A. Detection of fingertips

The hand direction is already known from the previous steps. A scan was done in the cropped binary image from the wrist to the fingers, and the number of skin pixels was calculated for each row or column based on the hand direction, as the hand can be either horizontal or vertical. The intensity values for the pixels were assigned from 1 to 255, increasing proportionally from wrist to fingertip. So, each skin pixel on the edges of the fingers is assigned the highest intensity value, 255. Fingertips were detected by taking a threshold at the value 255, as shown in figure 6. Mathematically, the fingertip detection can be explained as:

Fingeredge = { (x, y) : imramp(x, y) = 255 }

Here imramp is the intensity-ramped silhouette, and Fingeredge gives the fingertip points.


The line having a high-intensity pixel is first indexed and checked to see whether the differentiated value lies inside an experimentally set threshold for a frame of resolution 240x230; if it does, it represents a fingertip. A result of the fingertip detection process is shown in figure 7, where fingertips are detected as white dots.
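A sketch of the ramp-and-threshold idea for a hand whose fingers point toward row 0; slope_thresh stands in for the experimentally set threshold and is not the paper's value:

```python
import numpy as np

def detect_fingertips(imb, slope_thresh=3):
    """Sketch of fingertip detection on a cropped binary silhouette imb
    (nonzero = skin) whose fingers point toward row 0."""
    h, w = imb.shape
    # Height profile: row of the first skin pixel in each column; this is
    # where the wrist-to-fingertip intensity ramp reaches its maximum, 255.
    top = np.full(w, h, dtype=int)
    for x in range(w):
        ys = np.flatnonzero(imb[:, x])
        if ys.size:
            top[x] = ys[0]
    # Fingertips are local minima of the profile whose differentiated
    # value stays inside the threshold.
    tips = []
    for x in range(1, w - 1):
        if top[x] < h and top[x] <= top[x - 1] and top[x] < top[x + 1]:
            if abs(top[x + 1] - top[x - 1]) <= slope_thresh:
                tips.append((int(top[x]), x))
    return tips  # list of (row, col) fingertip points
```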


B. Centre of Palm detection

Automatic centre of palm (COP) detection in a real-time input system is a challenging task.
The exact location of the COP in the hand can be identified by applying a mask of dimension 30×30 to the cropped image and counting the number of 'on' pixels lying within the mask.

This process was made faster by using a summed area table of the cropped binary image to calculate the masked values [5]. In the summed area table, the value at any point (x, y) is the sum of all the pixels above and to the left of (x, y), inclusive, as shown in the equation:

sat(x, y) = Σ_{x′ ≤ x, y′ ≤ y} imcrop(x′, y′)
Once the summed area table has been computed, the task of evaluating the sum over any rectangle can be accomplished in constant time with just four array references:

sum(x1..x2, y1..y2) = sat(x2, y2) − sat(x1−1, y2) − sat(x2, y1−1) + sat(x1−1, y1−1)
 

The value of the rectangular mask over a region can therefore be calculated with just four lookups. This improves the speed of the computation by a factor of 250. The COP was calculated as the mean of the centres of all the regions whose sum exceeds a threshold, as shown in figure 8.
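A sketch of the COP step using a summed area table; the 30×30 mask follows the text, while the 'on'-pixel threshold fraction is an illustrative assumption:

```python
import numpy as np

def centre_of_palm(imb, mask_size=30, thresh=0.9):
    """COP via a summed area table: slide a mask_size x mask_size window,
    keep windows whose 'on'-pixel count exceeds a threshold, and average
    their centres. thresh (fraction of a full window) is illustrative."""
    on = (imb > 0).astype(np.int64)
    # Summed area table, zero-padded so sat[y, x] = sum of on[:y, :x].
    sat = np.pad(on.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    k = mask_size
    # Window sums for every position, each from just four table lookups.
    win = sat[k:, k:] - sat[:-k, k:] - sat[k:, :-k] + sat[:-k, :-k]
    ys, xs = np.nonzero(win >= thresh * k * k)
    if ys.size == 0:
        return None
    # Mean of the qualifying window centres, as (row, col).
    return (ys.mean() + k / 2.0, xs.mean() + k / 2.0)
```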

V. ANGLE CALCULATION
The fingertip and COP information is now known to us; it will be used to detect the positions of the fingers in the gesture made by the user in one frame of input.

A. Distance between COP & Fingertips

The distance between each fingertip and the COP can be calculated from the difference of their coordinates, as shown in figure 9.



               Figure 9. Distance calculation between COP and Fingertips
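Given the COP and fingertip coordinates from the previous steps, the distances are straightforward (a minimal sketch):

```python
import numpy as np

def tip_distances(cop, tips):
    """Euclidean distance from the COP to each fingertip (row, col) point."""
    cy, cx = cop
    return [float(np.hypot(ty - cy, tx - cx)) for ty, tx in tips]
```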


B. Finger bending angle
Initially, the user has to show an all-fingers-open gesture to the system, which is recorded as the reference frame; in this frame the bending angle of every finger is marked as 180°, as shown in figure 10.




                                      Figure 10. The reference frame


The distance between any fingertip and the COP is at its maximum in this position. As the user starts bending the fingers in either direction (forward or backward), the distances between fingertips and the COP decrease.

The values calculated from the reference frame are stored for each finger. If the user changes the position of his fingers, the distances between the COP and the fingertips are compared with the reference distances, as shown in figure 11. The angle by which a finger is bent is calculated by comparing these distances. In our method it can range from 180° to 90°, since after bending more than 90° the bases of the fingers would be detected as fingertips. From the experiments, the distance between the COP and a fingertip is taken to be one third of the reference distance at 90°, and when the angle is 180° the distance between the COP and the fingertip is equal to the reference distance.



From figure 12 it is clear that when d = dref, angle a1 = 0°, and when d = dref/3, angle a1 = 90°.
So, we can express angle a1 as:

a1 = 90° × (dref − d) / (dref − dref/3)
The angle of finger bending, a2, from figure 12, would then be:

a2 = 180° − a1
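A minimal sketch of this angle computation, assuming the linear relation between the two calibration points given above (names are illustrative):

```python
def bending_angle(d, d_ref):
    """Map the COP-fingertip distance d to a bending angle a2 in degrees,
    given the reference (fingers-open) distance d_ref. Assumes the linear
    relation above: d = d_ref -> 180 deg, d = d_ref/3 -> 90 deg."""
    d = min(max(d, d_ref / 3.0), d_ref)            # clamp to the valid range
    a1 = 90.0 * (d_ref - d) / (d_ref - d_ref / 3.0)
    return 180.0 - a1                               # a2, the finger's angle

# Example: a fingertip at 2/3 of the reference distance.
print(bending_angle(2.0, 3.0))  # -> 135.0
```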
This method works on hand gesture input shown in any direction. Even if the user moves the hand's position, this information is passed to the remote robotic hand, and the robotic hand moves in the same direction as well.



VI. PERFORMANCE

The simulation of the system was done in MATLAB® running on Windows XP® with a Pentium® 4, 2.80 GHz processor. The maximum time was taken by the preprocessing part; after that, image cropping and hand direction detection took the longest. It is clear that the system would run much faster if we made a few assumptions, but for robustness we do not impose any conditions on the user.


The users gave input with their hands (one hand at a time, either right or left) in random directions. The system was tested under different conditions for a long time to check its sustainability for commercial deployment, and it performed excellently with different users. It is independent of the user's hand geometry and works the same for everyone.


VII. CONCLUSION


The system is easy to use and robust. The user has to show his hand in front of the camera, and the fingers' bending angles are calculated. These angles could then be passed to an electro-mechanical robotic hand. The user can show either hand to the camera in any direction, and the fingertips and centre of palm are detected in the captured image. The gesture was extracted even from a complex background, and cropping to the region of interest made the algorithm faster.


The bending angles of the fingers were calculated using a time-efficient geometric modelling method. The user can control the robotic hand with his gestures, without wearing any gloves or markers.



