Sunday, February 15, 2015

Research Paper

IMAGE BASED SIGN LANGUAGE RECOGNITION SYSTEM FOR SINHALA SIGN LANGUAGE

H.C.M. Herath, W.A.L.V. Kumari, W.A.P.B. Senevirathne and M.B. Dissanayake
Department of Electrical and Electronic Engineering, Faculty of Engineering, University of Peradeniya, Sri Lanka.

Introduction

A novel method for hearing-impaired people to communicate with others effectively by means of technology is presented. The goal of this research is to develop a tool that will help a hearing-impaired person communicate with a person who is not familiar with sign languages. This paper presents a low-cost, image-processing-based approach to building a Sinhala sign language recognition application for real-time use.

A new concept of mapping the gesture using a centroid-finding method, which is capable of matching the input gesture against the database independently of hand size and position, is explored.

Methodology

In the prototype developed for this project, a green background is used to capture the image for simplicity of implementation. First, the RGB image captured from the web camera is separated into three matrices: red (R), green (G) and blue (B). Next, the G matrix is subtracted from the R matrix. This is done because red was experimentally found to be the most dominant color of the skin, while the background used is green. However, the algorithm presented can be fine-tuned to work against a background of any constant color. Shadow effects are also removed by this subtraction before the result is converted to a binary image.
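The following is a minimal sketch of this channel-subtraction step in Python with NumPy (the original prototype was built with the Matlab simulation package); the function name and the assumption that the frame arrives as an H x W x 3 RGB array are illustrative.

```python
import numpy as np

def red_minus_green(rgb_frame: np.ndarray) -> np.ndarray:
    """Subtract the green channel from the red channel of an RGB frame.

    Skin pixels are red-dominant, so they keep large positive values,
    while the green background (and its shadows) is suppressed towards zero.
    `rgb_frame` is assumed to be an H x W x 3 uint8 RGB image.
    """
    r = rgb_frame[:, :, 0].astype(np.int16)  # red channel
    g = rgb_frame[:, :, 1].astype(np.int16)  # green channel
    return np.clip(r - g, 0, 255).astype(np.uint8)
```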

The resulting image is then converted to a binary image by defining a threshold. This is done to facilitate faster mapping. The accuracy of the resulting binary image depends on the lighting conditions under which the image is captured: if the lighting intensity is sufficient to capture the image with its natural colors, or colors close to natural, the binary image is noise free. Next, the boundaries of the hand are identified by drawing the smallest possible rectangle around the hand, and the image is cropped to extract the region of interest. The cropped image is then divided equally into four parts.
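As an illustration of the thresholding, cropping and four-way split described above, here is a sketch in the same Python/NumPy style; the threshold value of 40 is only an assumed example, since the paper notes that the appropriate value depends on lighting.

```python
import numpy as np

def crop_and_split(diff: np.ndarray, threshold: int = 40):
    """Binarise the R-G image, crop it to the hand's bounding box and
    split the crop into four equal segments."""
    binary = diff > threshold                 # binary hand mask
    ys, xs = np.nonzero(binary)               # coordinates of hand pixels
    if ys.size == 0:
        raise ValueError("no hand pixels found above the threshold")
    top, bottom = ys.min(), ys.max() + 1      # smallest enclosing rectangle
    left, right = xs.min(), xs.max() + 1
    hand = binary[top:bottom, left:right]     # region of interest
    h, w = hand.shape
    return [hand[:h // 2, :w // 2], hand[:h // 2, w // 2:],
            hand[h // 2:, :w // 2], hand[h // 2:, w // 2:]]
```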


Next, the centroid of each segment is calculated. The (height/y) and (width/x) ratios are calculated for each segment and then compared against pre-calculated values stored in the database. The errors of the ratios are also calculated using eq. (01).
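A sketch of this per-segment feature extraction is given below, assuming that (x, y) in the (height/y) and (width/x) ratios refers to the centroid coordinates of each segment; the paper does not spell this out explicitly, so treat it as an interpretation.

```python
import numpy as np

def segment_ratios(segments):
    """Compute the centroid of each segment and return the
    (height/y, width/x) ratio pair for each one."""
    ratios = []
    for seg in segments:
        ys, xs = np.nonzero(seg)
        if ys.size == 0:                  # empty segment: no hand pixels
            ratios.append((0.0, 0.0))
            continue
        cy = max(ys.mean(), 1e-6)         # centroid y (guard against division by zero)
        cx = max(xs.mean(), 1e-6)         # centroid x
        h, w = seg.shape
        ratios.append((h / cy, w / cx))
    return ratios
```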

Error = (ratio in the database) – (ratio calculated for the real time image) (01)

Finally, the image with the minimum error is selected as the matched image. The prototype of the system is developed using the Matlab simulation package, a portable camera (Intex Model No IT-309WC, 16MP) and a green background.
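To tie the matching step together, here is a hedged sketch of the minimum-error selection based on eq. (01); the paper does not state how the per-segment errors are combined, so summing absolute errors over all segments is an assumption, and the `database` layout (gesture label mapped to a stored list of ratios) is illustrative.

```python
def match_gesture(live_ratios, database):
    """Return the database gesture whose stored ratios give the smallest
    total absolute error against the real-time ratios (eq. (01))."""
    best_label, best_error = None, float("inf")
    for label, stored_ratios in database.items():
        error = sum(abs(s - r)
                    for stored_pair, live_pair in zip(stored_ratios, live_ratios)
                    for s, r in zip(stored_pair, live_pair))
        if error < best_error:
            best_label, best_error = label, error
    return best_label, best_error
```

With these sketches, one captured frame could be classified end to end as `match_gesture(segment_ratios(crop_and_split(red_minus_green(frame))), database)`.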

Results and Conclusion

The proposed prototype was tested in real time with 5 random participants against a database of 15 sign gestures. According to the results, the system identified 10 gestures with 100% accuracy, 4 gestures with 80% accuracy and one gesture with 60% accuracy. In other words, the prototype correctly recognized 92% of gestures. Therefore, the proposed algorithm shows good adaptability and an acceptable level of performance for a random selection of users.

What we can take from this research paper for our project

We can try this methodology and algorithm to identify signs in our own project.