Sunday, June 7, 2015

RGB-H-CbCr Skin Colour Model for Human Face Detection

Abstract 

RGB, HSV and YUV (YCbCr) are standard models used in various colour imaging applications, but not all of their information is necessary to classify skin colour. This paper presents a novel skin colour model, RGB-H-CbCr, for the detection of human faces. Skin regions are extracted using a set of bounding rules based on the skin colour distribution obtained from a training set. The segmented face regions are further classified using a parallel combination of simple morphological operations.

Introduction

This paper presents a novel skin colour model, RGB-H-CbCr, for human face detection. This model utilises the additional hue and chrominance information of the image on top of standard RGB properties to improve the discriminability between skin pixels and non-skin pixels. In this approach, skin regions are classified using the RGB boundary rules introduced by Peer et al. and additional new rules for the H and CbCr subspaces. These rules are constructed based on the skin colour distribution obtained from the training images. The classification of the extracted regions is further refined using a parallel combination of morphological operations.


System Overview


In this colour-based approach to face detection, the proposed RGB-H-CbCr skin model is first formulated using a set of skin-cropped training images. Three commonly known colour spaces – RGB, HSV and YCbCr – are used to construct the proposed hybrid model. Bounding planes or rules for each skin colour subspace are constructed from their respective skin colour distributions.

In the first step of the detection stage, these bounding rules are used to segment the skin regions of input test images. After that, a combination of morphological operations is applied to the extracted skin regions to eliminate possible non-face skin regions. Finally, the last step labels all the face regions in the image and returns them as detected faces.


RGB-H-CbCr Model

The prepared skin colour samples were analysed in the RGB, HSV and YCbCr spaces. As opposed to the 3-D space cluster approximation used by Garcia and Tziritas, this paper examines 2-D colour subspaces in each of the mentioned colour models, i.e. H-S, S-V, H-V and so forth.

In RGB space, the skin colour region is not well distinguished in any of the three channels.

In HSV space, the H (Hue) channel shows significant discrimination of skin colour regions, as observed from the H-V and H-S plots, where both plots exhibit very similar distributions of pixels. In the hue channel, most of the skin colour samples are concentrated at values between 0 and 0.1 and between 0.9 and 1.0 (on a normalised scale of 0 to 1).
The Cb-Cr subspace offers the best discrimination between skin and non-skin regions.

Skin colour bounding rules

From the skin colour subspace analysis, a set of bounding rules is derived from all three colour spaces, RGB, YCbCr and HSV, based on training observations. All rules are derived for intensity values between 0 and 255.
Rule A

In RGB space, the skin colour rules introduced by Peer et al. are used. The rule for skin colour under uniform daylight illumination is defined as

(R > 95) AND (G > 40) AND (B > 20) AND
(max{R, G, B} − min{R, G, B} > 15) AND
(|R − G| > 15) AND (R > G) AND (R > B)     (1)

while the rule for skin colour under flashlight or lateral daylight illumination is given by

(R > 220) AND (G > 210) AND (B > 170) AND
(|R − G| ≤ 15) AND (R > B) AND (G > B)     (2)

To cover both illumination conditions, rule (1) and rule (2) are combined with a logical OR.
Rule A: Rule (1) OR Rule (2)

Rule B
Based on the observation that the Cb-Cr subspace is a strong discriminant of skin colour, the following bounds are derived:

Cr ≤ 1.5862 × Cb + 20   (3)
Cr ≥ 0.3448 × Cb + 76.2069  (4)
Cr ≥ -4.5652 × Cb + 234.5652  (5)
Cr ≤ -1.15 × Cb + 301.75  (6)
Cr ≤ -2.2857 × Cb + 432.85  (7)

Rules (3) to (7) are combined using a logical AND to obtain the CbCr bounding rule B.
Rule B: Equation(3) AND Equation(4) AND Equation(5) AND Equation(6) AND Equation(7) 

Rule C
In the HSV space, the hue values exhibit the most noticeable separation between skin and non-skin regions. Two cut-off levels are estimated as the H subspace skin boundaries:
H < 25 (9)
H > 230 (10)
where both rules are combined by a logical OR to obtain the H bounding rule C.
Rule C: Equation(9) OR Equation(10)

Thereafter, each pixel that fulfils Rule A, Rule B and Rule C is classified as a skin colour pixel:
Rule A AND Rule B AND Rule C
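The three rules can be sketched as a per-pixel classifier. This is a minimal sketch, not the authors' code: the BT.601 RGB-to-CbCr conversion and the hue computation (scaled to 0–255, matching the stated intensity range) are my assumptions about how the subspace values are obtained.

```python
def rule_a(r, g, b):
    """Rule A: Peer et al. RGB bounds, daylight (1) OR flash/lateral light (2)."""
    eq1 = (r > 95 and g > 40 and b > 20
           and max(r, g, b) - min(r, g, b) > 15
           and abs(r - g) > 15 and r > g and r > b)
    eq2 = (r > 220 and g > 210 and b > 170
           and abs(r - g) <= 15 and r > b and g > b)
    return eq1 or eq2

def rule_b(r, g, b):
    """Rule B: the five CbCr half-plane bounds (3)-(7), combined with AND.
    Assumes the standard BT.601 conversion from RGB to Cb, Cr."""
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return (cr <= 1.5862 * cb + 20 and
            cr >= 0.3448 * cb + 76.2069 and
            cr >= -4.5652 * cb + 234.5652 and
            cr <= -1.15 * cb + 301.75 and
            cr <= -2.2857 * cb + 432.85)

def rule_c(r, g, b):
    """Rule C: hue bounds (9) OR (10), with hue rescaled to 0-255."""
    mx, mn = max(r, g, b), min(r, g, b)
    if mx == mn:
        return False          # achromatic pixel: hue undefined
    d = mx - mn
    if mx == r:
        h_deg = (60 * ((g - b) / d)) % 360
    elif mx == g:
        h_deg = 60 * ((b - r) / d) + 120
    else:
        h_deg = 60 * ((r - g) / d) + 240
    h = h_deg * 255 / 360
    return h < 25 or h > 230

def is_skin(r, g, b):
    """A pixel is skin iff it satisfies Rule A AND Rule B AND Rule C."""
    return rule_a(r, g, b) and rule_b(r, g, b) and rule_c(r, g, b)
```

A typical skin tone such as (200, 140, 110) passes all three rules, while saturated green or pure white fails Rule A immediately.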

Skin colour segmentation

The proposed novel combination of all three bounding rules from the RGB, H and CbCr subspaces is named the “RGB-H-CbCr” skin colour model. Although skin colour segmentation is normally considered a low-level or “first-hand” cue extraction, it is crucial that the skin regions are segmented precisely and accurately. The segmentation technique, which uses all three colour spaces, was designed to boost the face detection accuracy.

Morphological Operations

The next step of the face detection system involves the use of morphological operations to refine the skin regions extracted from the segmentation step.
Firstly, fragmented sub-regions can easily be grouped together by applying simple dilation to the large regions. Holes and gaps within each region can also be closed by a flood fill operation.

A morphological opening is then used to “open up” or pull apart narrow, connected regions.
Additional measures are also introduced to determine the likelihood of a skin region being a face
region. 
Two region properties – box ratio and eccentricity are used to examine and classify the shape
of each skin region.
The box ratio property is simply defined as the width-to-height ratio of the region's bounding box. By trial and error, a good range of values lies between 0.4 and 1.0. Ratio values above 1.0 would not suggest a face, since human faces are oriented vertically with a greater height than width. Meanwhile, ratio values below 0.4 are found to misclassify arms, legs or other elongated objects as faces.

The eccentricity property measures the ratio of the minor axis to major axis of a bounding ellipse.
Eccentricity values of between 0.3 and 0.9 are estimated to be of good range for classifying face
regions. Though this property works in a similar way as box ratio, it is more sensitive to the region shape and is able to consider various face rotations and poses.
Both the box ratio and eccentricity properties can be applied to the extracted skin regions either sequentially or in parallel, following a dilation, opening or flood fill.
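The two shape tests can be sketched as follows, assuming each region is given as a boolean mask. The moments-based axis ratio here stands in for the paper's eccentricity measure (minor to major axis of a bounding ellipse); the exact fitting procedure is not specified in the paper.

```python
import numpy as np

def box_ratio(mask):
    """Width-to-height ratio of the region's bounding box."""
    ys, xs = np.nonzero(mask)
    width = xs.max() - xs.min() + 1
    height = ys.max() - ys.min() + 1
    return width / height

def axis_ratio(mask):
    """Minor-to-major axis ratio estimated from second central moments,
    used here as a stand-in for the paper's eccentricity property."""
    ys, xs = np.nonzero(mask)
    coords = np.stack([xs - xs.mean(), ys - ys.mean()])
    evals = np.linalg.eigvalsh(np.cov(coords))     # ascending order
    evals = np.clip(evals, 1e-12, None)
    return float(np.sqrt(evals[0] / evals[1]))

def looks_like_face(mask):
    """Accept regions whose shape falls in the empirical face ranges."""
    return 0.4 <= box_ratio(mask) <= 1.0 and 0.3 <= axis_ratio(mask) <= 0.9
```

A vertically oriented blob (e.g. 20 pixels wide, 30 tall) passes both tests, while a long thin bar is rejected by the box ratio bound.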


Conclusion

In this paper, the authors presented a novel skin colour model, RGB-H-CbCr, to detect human faces. Skin region segmentation was performed using a combination of the RGB, H and CbCr subspaces, which demonstrated evident discrimination between skin and non-skin regions. The experimental results showed that this new approach to modelling skin colour was able to achieve a good detection success rate. On a similar test data set, the performance of the approach was comparable to that of the AdaBoost face classifier.

Saturday, June 6, 2015

Title

Real-Time Palm Tracking and Hand Gesture Estimation Based on Fore-Arm Contour

Introduction

This thesis proposes an image processing system using a web camera. Differently from other hand recognition methods, it marks up the important features of the hand (fingertips and palm centre) by computational geometry calculation, providing real-time interaction between gestures and the system. With the advantages brought by the computational geometry method, this system can accurately locate the palm centre even when the fore-arm is involved. The system also tolerates a certain rotation of the palm and fore-arm, which enhances the freedom of use in palm centre estimation.

Overview


1). Pre-Processing and Convex hull method

This section introduces how to extract the area of interest from a colour image. Here a single web camera is used to capture a series of images. After transforming the image into HSV colour space, the area of interest can be extracted by defining ranges of hue, saturation and value. A binary image is produced and morphology processing is performed: erosion eliminates the noise while dilation fills up the defects to smooth the contour of the area of interest.

The colour model captured from the web camera is composed of RGB values, but it is influenced by light very easily, so we must convert the RGB colour space into another colour space which is less sensitive to light variation. In addition to RGB, there are other commonly used colour spaces such as Normalized RGB, HSV, YCbCr, and so forth. In order to make the system achieve real-time processing and adapt to most environments, we choose a colour model with simple conversion formulas and low sensitivity. In this thesis, the HSV colour space is chosen to extract human skin areas. The skin colour areas are represented in a binary image, on which we perform morphology processing including erosion and dilation. Noise is eliminated by erosion and the contour of the skin colour area is smoothed by dilation.

1.1) Skin color detection using HSV color space

The HSV model comprises three components of a colour: hue, saturation, and value. It separates the value from the hue and saturation. A hue represents the gradation of a colour within the optical spectrum, or visible spectrum of light. Saturation is the intensity of a specific hue, based on the purity of the colour. The practicability of the HSV model rests on two main factors:
1) the value separated from the color image, and 
2) both the hue and saturation related to human vision.

All colours composed of the three so-called primary colours (red, green, and blue) are inside the chromatic circle. The hue H is the angle of a vector with respect to the red axis; when H = 0, the colour is red. The saturation S is the degree to which a colour has not been diluted with white. It is proportional to the distance from the location to the centre of the circle: the farther a point is from the centre of the chromatic circle, the more saturated the colour. The value V is measured along the line which is orthogonal to the chromatic circle and passes through its centre. The colour tends to white along this centre line towards the apex.

The relationship between the HSV and RGB models is expressed below:



We distinguish skin colour from non-skin colour regions by setting upper and lower bound thresholds. In our experimental environment, we choose the H value from 2 to 39 and from 300 to 359 (in degrees), and the S value between 0.1 and 0.9, as the range of skin colours.
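The threshold test can be sketched with Python's standard colorsys module. Interpreting the stated H ranges as degrees is an assumption from context.

```python
import colorsys

def is_skin_hsv(r, g, b):
    """Skin test in HSV: H in [2, 39] or [300, 359] degrees, S in [0.1, 0.9].
    r, g, b are 8-bit channel values (0-255)."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    h_deg = h * 360.0
    in_hue = 2 <= h_deg <= 39 or 300 <= h_deg <= 359
    return in_hue and 0.1 <= s <= 0.9
```

A typical skin tone such as (200, 140, 110) has a hue of about 20 degrees and saturation 0.45, so it falls inside the bounds; saturated green does not.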


1.2) Morphology Processing

After generating the binary image, we further perform morphological operations on it to smooth the hand segments and eliminate some noise. Two basic morphological operations are dilation and erosion, described below.

For two sets A and B in Z², the dilation of A by B, denoted A ⊕ B, is defined as

This equation is based on obtaining the reflection of B about its origin and shifting this reflection by z. The dilation of A by B then yields the set of all displacements z such that B̂ and A overlap in at least one element.


Set B is commonly referred to as a structuring element in the dilation operation as well as in other morphological ones. Therefore, all points inside this boundary constitute the dilation of A by B.

Given two sets A and B in Z², the erosion of A by B, denoted A ⊖ B, is defined as

In words, this equation states that the erosion of A by B is the set of all points z such that B, translated by z, is contained in A. As in the case of dilation, this equation is not the unique definition of erosion. However, it is usually favoured in practical implementations of mathematical morphology for the same reasons stated in connection with the previous equation. The following figure shows the original set for reference; the solid line stands for the limit beyond which any further displacement of z would cause B to no longer be contained in A. All points inside this boundary constitute the erosion of A by B.


The erosion and dilation operations are both the conventional and popular methods used to clean up anomalies in the objects. The combination of these two operations can also achieve good performance. Therefore, a moving object region is morphologically eroded (once) then dilated (twice).
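The two operations can be sketched directly from the definitions above. This is a naive, unoptimised implementation for illustration; a real system would use a library routine.

```python
import numpy as np

def dilate(img, se):
    """Binary dilation: 1 where the reflected structuring element, centred at
    the pixel, overlaps the foreground in at least one position."""
    out = np.zeros_like(img)
    kh, kw = se.shape
    pad = np.pad(img, ((kh // 2,) * 2, (kw // 2,) * 2))
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.any(pad[i:i + kh, j:j + kw] & se[::-1, ::-1])
    return out

def erode(img, se):
    """Binary erosion: 1 where the structuring element, centred at the pixel,
    fits entirely inside the foreground."""
    out = np.zeros_like(img)
    kh, kw = se.shape
    pad = np.pad(img, ((kh // 2,) * 2, (kw // 2,) * 2))
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.all(pad[i:i + kh, j:j + kw][se.astype(bool)])
    return out

def clean(img, se):
    """Erode once, then dilate twice, as described in the thesis."""
    return dilate(dilate(erode(img, se), se), se)
```

A single isolated noise pixel is removed by the erosion and never recovered, while a solid block survives the sequence.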

1.3) Contour Finding

After the previous processing, we obtain a binary image in which the white pixels represent skin colour regions. The next step is contour finding. A contour is a sequence of points which are the boundary pixels of a region. The contours of the regions are found so that we can disregard small areas and focus on the fore-arm area we are interested in. This is done by comparing the lengths of the contours; the longest one is the one we are looking for.

In this thesis they use Theo Pavlidis' Algorithm to find contours.



1.4) Convex Hull

Finding a convex hull is a well-known problem in computational geometry. Imagine that there are several nails on a wall and a rubber band is stretched to surround them. Only the peripheral nails touch the rubber band, while the nails inside cannot affect it. The shape of the rubber band is roughly what the convex hull of the nails will look like. It is not difficult for a human to see the shape of the convex hull at a glance, but for machines an algorithm is needed.

We calculate the convex hull of the fore-arm contour in order to find the desired information. Usually the arm part has a smooth contour and doesn't contain any important information. The hand part has more convex and concave contours and usually contains the information we want. After comparing a fore-arm contour with its convex hull, we find that the convexity defects are around the palm area. Hence we can find the points which have the longest distance to the convex hull in each convexity defect. Those points are on the edge of the palm section, since the convexity defects are around the palm area. Since the fore-arm is relatively smooth, a neighbouring defect usually won't create a point with the longest distance to the convex hull on the arm contour. By these points, we can determine the position of the palm even when the fore-arm is included in the contour.

Here they use the Three Coins Algorithm to find the convex hull.
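As an illustration, here is a hull computation using Andrew's monotone chain, substituted for the Three Coins algorithm used in the thesis; both produce the same hull, and monotone chain is shorter to state.

```python
def cross(o, a, b):
    """z-component of (a - o) x (b - o); positive for a counter-clockwise turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Convex hull via Andrew's monotone chain, in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:                      # build lower hull left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):            # build upper hull right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]     # endpoints shared, drop duplicates
```

For a square with an interior point, the hull contains only the four corners.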

1.5) Convexity Defects

We calculate the convex hull of the fore-arm contour in order to get the convexity defects of the contour. They provide useful information for understanding the shape of a contour.


By connecting the start point, the end point, and the points on the contour between them, we get a convexity defect area.


The depth of the defect is the longest distance from any point in the defect to the convex hull edge of the defect. The point in the defect which has the longest distance to the convex hull edge is the depth point.
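A sketch of the depth-point computation, assuming the defect is given as the contour points lying between two hull vertices:

```python
import math

def point_segment_distance(p, a, b):
    """Distance from point p to the segment a-b (a hull edge)."""
    ax, ay = a
    bx, by = b
    px, py = p
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    # Clamp the projection parameter so we measure to the segment, not the line.
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def defect_depth(contour_points, hull_start, hull_end):
    """Return the depth point of a convexity defect (the contour point with
    the largest distance to the hull edge) and that distance."""
    depths = [point_segment_distance(p, hull_start, hull_end)
              for p in contour_points]
    i = max(range(len(depths)), key=depths.__getitem__)
    return contour_points[i], depths[i]
```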




2) Palm Tracking and Fingertip Detection

We can extract the point with the longest depth in each convexity defect. The convexity defects with a depth larger than a certain threshold tend to appear around the palm.

Gathering the depth points gives us useful information to decide where the palm position is. In this thesis there are 2 ways to determine the palm position:
1). Minimum enclosing circle: calculate the smallest circle which covers all the depth points.
2). Calculate the average position within a minimum enclosing circle.

2.1)Minimum Enclosing Circle

2.1.1) Find circle through 3 points

It is obvious that 3 different (non-collinear) points in a plane form a unique circle. The equation of a circle in a plane has three unknown variables. When there are 3 points in a plane, we can input their x and y coordinates into the equation to generate 3 equations with 3 variables, which yields a unique solution. Instead of solving the simultaneous equations, we can first calculate the centre of the circle from geometric properties; once the centre is known, it is easy to get the radius.


This circle is formed by 3 points P1, P2 and P3. Two lines can be formed through two pairs of the three points: line A passes through P1 and P2, and line B passes through P2 and P3. The equations of these two lines are:



The centre of the circle is the intersection of the two lines which are perpendicular to line A and line B and pass through the midpoints of P1P2 and P2P3. The perpendicular of a line with slope m has slope −1/m; thus the equations of the lines perpendicular to line A and line B which pass through the midpoints of P1P2 and P2P3 are:

The 2 lines of the equations above intersect at the centre point, so the coordinates of the centre point are the common solution of the equations above. The following equation solves the x-coordinate of the centre point.

(mb − ma) will be 0 only when the two lines are parallel, which happens only when the three input points are collinear. Since we apply the calculation only when the 3 input points are distinct and non-collinear, this situation is avoided. By substituting the x value into the following equation we obtain the y-coordinate of the centre point.
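The same centre can be computed in a determinant form that avoids dividing by a zero slope when a bisector is vertical. This is a hypothetical helper, not the thesis's exact formulation, but it is algebraically equivalent to intersecting the two perpendicular bisectors.

```python
def circle_through(p1, p2, p3):
    """Centre and radius of the circle through three non-collinear points."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Twice the signed area of the triangle; zero iff the points are collinear.
    d = 2 * (x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2))
    if d == 0:
        raise ValueError("points are collinear")
    ux = ((x1**2 + y1**2) * (y2 - y3) + (x2**2 + y2**2) * (y3 - y1)
          + (x3**2 + y3**2) * (y1 - y2)) / d
    uy = ((x1**2 + y1**2) * (x3 - x2) + (x2**2 + y2**2) * (x1 - x3)
          + (x3**2 + y3**2) * (x2 - x1)) / d
    r = ((ux - x1) ** 2 + (uy - y1) ** 2) ** 0.5
    return (ux, uy), r
```

For the points (0, 0), (2, 0) and (0, 2) the centre is (1, 1) with radius √2.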



2.2) Fingertip Detection

Algorithm




2.2.1) Buffer for palm position


Since the contour we extract can change slightly from frame to frame, the palm position may move a little in every frame, so the palm position might shake. To reduce that, we apply a buffer for the palm position.

The buffer records the position and radius of the palm circles in the past 3 frames and the current frame. The palm circle drawn in the current frame is the average of the buffer.
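A minimal sketch of such a buffer (names are mine):

```python
from collections import deque

class PalmSmoother:
    """Average the palm circle over the last few frames to suppress jitter.
    Keeps the previous 3 frames plus the current one, as in the thesis."""

    def __init__(self, size=4):
        self.buf = deque(maxlen=size)   # oldest entries drop out automatically

    def update(self, x, y, r):
        """Record the current frame's circle and return the smoothed circle."""
        self.buf.append((x, y, r))
        n = len(self.buf)
        return tuple(sum(v[i] for v in self.buf) / n for i in range(3))
```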

2.2.2) Mix of palm position and average depth point position

When the fore-arm part is large in the image, the depth point at the wrist may slip away. This enlarges the palm circle and the estimation of the palm will be wrong.


We calculate the average position of the current depth points, so that a slipped depth point can only affect 1/(number of depth points) of the result.

We then average the average depth point position and the palm circle centre. The effect of a slipped depth point is reduced and the estimation of the palm position remains accurate.




2.2.3) Add an extra point when there are 2 or fewer depth points

Another condition we need to handle often happens when the hand is closed: all the depths of the convexity defects may be smaller than the threshold except the 2 convexity defects beside the wrist. The palm centre will then be incorrect.


To avoid this problem, when the total number of depth points is 2 or less, the system adds an extra point for the minimum enclosing circle. This point is obtained by finding the point at the top of the contour.




3) Results


Applications

We can use their contour finding and convex hull algorithms for our system, as well as their palm position and fingertip detection algorithms.

Title


A Fast and Robust Fingertips Tracking Algorithm for Vision-Based Multi-touch Interaction

Introduction

This paper proposes a fast and robust fingertip tracking algorithm based on a geometric structure model of the hand. Compared with existing methods, this algorithm attempts to track movements of multiple fingertips not only in 2D but also in 3D space without using any markers. A stereovision system is set up to retrieve depth information of the scene. The algorithm detects the hand region using a skin colour filter as well as depth images reconstructed from the stereovision system, and then calculates the position of the palm centre. Finally, based on the observation that the geometric structure of hands is almost identical, it is possible to utilise the geometric relation between the palm centre and the hand contour to determine the fingertip locations on the contour.


  1.  Hand Localization

In order to track the movements of fingertips, the hand region should first be segmented from the background. Hand localization, however, is still a challenging problem due to the hand's high degree of freedom (DOF) and flexible shape. For the purpose of improving system efficiency, this approach handles the problem in a simpler way: the hand is considered as the closest object with skin colour in front of the camera. The hand region can be detected by a two-step method. First, a skin-colour filter is applied to locate the candidates of the hand region. Then the hand is segmented from depth images using depth clipping and a region grow algorithm.

1.1). Skin colour filter

Skin colour has been proven to be an effective cue for extracting hand and face regions from the background. Basically, skin colour detection is to define decision rules and build a skin colour classifier. The main difficulty is to find both an appropriate colour space and adequate decision rules.
In our algorithm, we choose the YCbCr colour space for skin-colour segmentation. YCbCr separates the colour information into one luminance channel (Y) and two chrominance channels (Cb, Cr), and is appropriate for skin-colour segmentation. In addition, a parametric model, the Gaussian Mixture Model, is also employed to describe the skin-colour distribution. For a single Gaussian model, the skin-colour probability distribution p(c|skin) is defined as follows



Here c is a color vector. μs and Σs are the model parameters which can be estimated from training data as follows: 


where n is the number of training samples. Finally, the Gaussian Mixture Model is defined by

where k denotes the number of mixture components and λi denotes the weight of each Gaussian model, satisfying Σᵢ₌₁ᵏ λi = 1. P(c|skin) can be used directly as a measure of how “skin-like” the colour is. In this paper k is set to 5, and the Gaussian Mixture Model parameters are estimated by the well-known Expectation Maximization (EM) algorithm. After extracting skin-like regions from the background, morphological operations are applied to the extracted regions to remove noise. In this stage, some objects with skin-like colour are also extracted as candidates of the hand region.
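A sketch of the single-Gaussian fit and the mixture evaluation follows. The parameter estimation matches the maximum-likelihood formulas described above; a full EM fit of the k = 5 mixture is omitted, and the function names are mine.

```python
import numpy as np

def fit_gaussian(samples):
    """Maximum-likelihood mean and covariance from skin-colour training vectors
    (samples: n x d array of colour vectors)."""
    mu = samples.mean(axis=0)
    diff = samples - mu
    sigma = diff.T @ diff / len(samples)
    return mu, sigma

def gaussian_pdf(c, mu, sigma):
    """Single-Gaussian skin likelihood p(c|skin) for colour vector c."""
    d = len(mu)
    diff = c - mu
    norm = ((2 * np.pi) ** d * np.linalg.det(sigma)) ** -0.5
    return float(norm * np.exp(-0.5 * diff @ np.linalg.solve(sigma, diff)))

def mixture_pdf(c, components, weights):
    """Gaussian mixture: weighted sum over (mu, sigma) components, sum(w) = 1."""
    return sum(w * gaussian_pdf(c, mu, s)
               for (mu, s), w in zip(components, weights))
```

The likelihood peaks at the training mean and decays for colours far from the skin cluster, which is exactly the "skin-like" score used for segmentation.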

1.2) Hand segmentation from depth image

As mentioned before, the hand is assumed to be the closest object with skin colour in front of the camera. Therefore, the hand can be identified from all the candidates by a depth clipping technique and a region grow algorithm. Note that the hand region in the depth image is continuously distributed and extends within a limited 3D space, so the points with minimum depth are picked as seeds. By applying the region grow algorithm, the hand region is segmented from the background.
(a) Original hand image; (b) Depth image of (a); (c) Extracted region after skin-color filtering and noise removal; (d) Extracted region after applying depth clipping and region grow algorithm;
(e) Hand region: intersection of (c) and (d).

2. Fingertip Tracking

In this section, we describe a fast and robust fingertip tracking algorithm. Different from most existing approaches, which are usually based on template matching, curvature analysis or colour markers, this approach utilises the geometric information of the hand shape to determine the positions of the fingertips. Due to its simplicity and effectiveness, this method is expected to make tracking faster and more stable.

2.1) Palm center localization

In order to find the positions of the fingertips, the centre and the size of the palm should be estimated first. We notice that the palm is, more or less, a rectangle-like region. Based on this observation, a projection-based algorithm is used to extract the palm region.
The basic idea is to project the hand region in all directions. If the projection line goes through only one block of the hand region, this block is a candidate palm region and should be preserved. After projections along all directions, the intersection of all candidates makes the final palm region.

Normally it is unnecessary to project the image in all directions, because the extracted region does not change much when the projection angle interval is smaller than a certain value. Here the projection is done every 45° from 0° to ±180°, chosen empirically. Compared with shape-model-based methods, this algorithm is faster and simpler.

Palm centre C0 is defined as the point in the palm region which has the maximum distance to the closest palm boundary.




(a) Original hand image;
(b) Extracted hand region;
(c) Preserved hand region when the projection angle θ = 0°;
(d) Preserved hand region when the projection angle θ = 90°;
(e) Extracted palm region: intersection of all preserved hand regions as the projection angle varies from 0° to ±180° at 45° intervals. The red point is the palm centre.

where P denotes a point inside the palm region Rp, PB denotes a point on the boundary B of the palm region, and d2 is the function calculating the 2D Euclidean distance between two points. The size of the palm, R, is defined as the distance between C0 and the closest boundary.
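A brute-force sketch of this definition, assuming the palm region is given as a boolean mask. The paper does not specify an implementation; a distance transform would be the efficient choice, but the exhaustive version below states the definition directly.

```python
import numpy as np

def palm_center(mask):
    """Palm centre C0: the region point with maximum distance to the closest
    boundary point; that distance is the palm size R. Brute force, O(n*m)."""
    ys, xs = np.nonzero(mask)
    pts = np.stack([ys, xs], axis=1)
    # Boundary pixels: foreground pixels with at least one background 4-neighbour.
    padded = np.pad(mask, 1)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1]
                & padded[1:-1, :-2] & padded[1:-1, 2:])
    boundary = pts[~interior[ys, xs]]
    # Distance from every region point to its nearest boundary point.
    dists = np.linalg.norm(pts[:, None, :] - boundary[None, :, :], axis=2)
    nearest = dists.min(axis=1)
    i = int(nearest.argmax())
    return tuple(pts[i]), float(nearest[i])
```

For an 11x11 solid square the centre pixel is returned, with R = 5.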


2.2) Fingertip localization 

The fingertip is considered to be the point with the maximum distance from the palm centre on the contour of each finger. Another two-step method is used to locate the fingertips: the contour of each finger is extracted first, then the point on the contour with the maximum distance to the palm centre is picked as the fingertip. Finger contours can be regarded as subsets of the hand contour. The contour which encloses the maximum area is considered to be the hand border. Then the distances between the contour points and the palm centre C0 are calculated. If a distance is larger than a predefined threshold, the contour point is put into a candidate set of fingertips, as formulated in the following equation

Here P denotes a contour point, d2(P, C0) denotes the distance between the palm centre C0 and P, and α is a scale factor, set to 1.2 empirically. F is the candidate set of fingertips.

Firstly we need to determine the contour points for each finger in the candidate set. This could be fulfilled by tracing all the contour points and comparing them with the points in the candidate set. In order to decrease the computing time, a more effective solution is presented here: assign an index to each point in the candidate set, then sort the set by the index. A function ϕ is defined for calculating the index.


Here Pf is the point in the candidate set F and θPf is the index of Pf. The candidate set is sorted by θPf in ascending order (clockwise). Then the distances between successive points are calculated as follows to determine the start and end point of several subsets (the contour points for each finger) of the candidate set.

Here, Dpi denotes the distance between successive points Pi and Pi+1. If Dpi is greater than a predefined threshold δ (in this system, set to 2.5), Pi is considered to be the start or end point of a subset. Therefore, all points in the candidate set can be divided into several subsets which are disconnected on the hand contour, each corresponding to one fingertip contour. Finally we compute the distance from each point in a subset to the palm centre; the one with the maximum distance is identified as a fingertip.
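The candidate filtering, angular sorting and splitting steps can be sketched as follows. Since the index function ϕ is not reproduced here, using atan2 around the palm centre as the index, Euclidean distance between successive candidates for Dpi, and the wrap-around merge of the first and last group are all my assumptions.

```python
import math

def find_fingertips(contour, c0, r, alpha=1.2, delta=2.5):
    """Fingertips from a hand contour: keep points farther than alpha*r from
    the palm centre c0, sort them by angle around the centre, split where the
    gap between successive points exceeds delta, and take the farthest point
    of each group as a fingertip."""
    cx, cy = c0
    cand = [p for p in contour if math.dist(p, c0) > alpha * r]
    cand.sort(key=lambda p: math.atan2(p[1] - cy, p[0] - cx))
    if not cand:
        return []
    groups, current = [], [cand[0]]
    for prev, p in zip(cand, cand[1:]):
        if math.dist(prev, p) > delta:      # gap: start a new finger group
            groups.append(current)
            current = []
        current.append(p)
    groups.append(current)
    # Wrap-around: the first and last group may belong to the same finger.
    if len(groups) > 1 and math.dist(cand[0], cand[-1]) <= delta:
        groups[0] = groups.pop() + groups[0]
    return [max(g, key=lambda p: math.dist(p, c0)) for g in groups]
```

With a palm at the origin of radius 10, two clusters of distant contour points yield one fingertip each; the nearer points are filtered out by the alpha*r test.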


(a) Extracted Hand region; (b) Hand contour; (c) Candidate set of fingertips on the hand contour.
(d) Calculation of index θpf for each point in the candidate set. The set is sorted by the index. The
indexes are illustrated using a color coding scheme from pure green (the minima) to pure red (the maxima). The points are sorted in ascending order (clockwise); (e) Extracted fingertips (green points) and the start and end point for each fingertip region (red points).

Applications

In this paper, the authors proposed a fast and robust method to track the fingertips. Different from existing approaches, this vision-based method needs neither pressure-sensing devices nor extra markers for fingertip localization. With the help of a stereovision system, the 3D positions of the fingertips are recovered by an efficient algorithm based on skin colour detection and geometric model analysis.
We can use their palm centre detection and fingertip detection methods for our system.


Friday, March 20, 2015

Efficient technique for color image noise reduction

Introduction

Noise is the result of errors in the image acquisition process that produce pixel values which do not reflect the true intensities of the real scene. Noise can occur during image capture, transmission, etc. Noise reduction is the process of removing noise from a signal. Noise reduction techniques are conceptually very similar regardless of the signal being processed.

The image captured by the sensor undergoes filtering by different smoothing filters and the resultant images are compared. All recording devices, both analogue and digital, have traits which make them susceptible to noise. A fundamental problem of image processing is to reduce noise in a digital colour image. The most commonly occurring types of noise are impulse noise, additive noise (e.g. Gaussian noise) and multiplicative noise (e.g. speckle noise).


Types of noises

Image noise is classified as amplifier noise (Gaussian noise), salt-and-pepper noise (impulse noise), shot noise, quantization noise (uniform noise), film grain, non-isotropic noise, speckle noise (multiplicative noise) and periodic noise.

Amplifier Noise (Gaussian noise)

The standard model of amplifier noise is additive, Gaussian, independent at each pixel and independent of the signal intensity, caused primarily by Johnson–Nyquist noise (thermal noise), including that which comes from the reset noise of capacitors ("kTC noise"). It is an idealized form of white noise, caused by random fluctuations in the signal.
Amplifier noise is a major part of the noise of an image sensor, that is, of the constant noise level in dark areas of the image. In Gaussian noise, each pixel in the image is changed from its original value by a (usually) small amount.

Salt-and-Pepper Noise (Impulse Noise)

Salt and pepper noise is sometimes called impulse noise or spike noise or random noise or independent noise. In salt and pepper noise (sparse light and dark disturbances), pixels in the image are very different in color or intensity unlike their surrounding pixels. Salt and pepper degradation can be caused by sharp and sudden disturbance in the image signal. Generally this type of noise will only affect a small number of image pixels. When viewed, the image contains dark and white dots, hence the term salt and pepper noise. An image containing salt-and-pepper noise will have dark pixels in bright regions and vice versa.

Shot Noise

The dominant noise in the lighter parts of an image from an image sensor is typically that caused by statistical quantum fluctuations, that is, variation in the number of photons sensed at a given exposure
level; this noise is known as photon shot noise. Shot noise has a root mean- square value proportional to the square root of the image intensity, and the noises at different pixels are independent of one
another.

Quantization Noise (Uniform Noise)

The noise caused by quantizing the pixels of a sensed image to a number of discrete levels is known as quantization noise; it has an approximately uniform distribution and may be signal-dependent.

Film Grain

The grain of photographic film is a signal-dependent noise, related to shot noise. That is, if film grains are uniformly distributed (equal number per area), and if each grain has an equal and independent probability of developing to a dark silver grain after absorbing photons, then the number of such dark grains in an area will be random with a binomial distribution.

Non-Isotropic Noise

In film, scratches are an example of non-isotropic noise. While we cannot completely do away with image noise, we can certainly reduce some of it. Corrective filters are yet another device that helps in reducing image noise.

Speckle Noise (Multiplicative Noise)

Speckle noise can be modelled by random values multiplied by pixel values, hence it is also called multiplicative noise. Speckle noise is a major problem in some radar applications.

Periodic Noise

If the image signal is subjected to a periodic rather than a random disturbance, we obtain an image corrupted by periodic noise. The effect is of bars over the image.


Removing noise in images by filtering

Filters are required for removing noise before processing. There are many filters in the paper to remove noise, of several kinds: the linear smoothing filter, median filter, Wiener filter and fuzzy filter.
In these filtering techniques, the three primaries (R, G and B) are processed separately, followed by some gain to compensate for the attenuation resulting from the filter. The filtered primaries are then combined to form the colour image.



Linear Filters

Linear filters are used to remove certain types of noise; averaging or Gaussian filters are appropriate for this purpose. However, linear filters also tend to blur sharp edges, destroy lines and other fine image details, and perform poorly in the presence of signal-dependent noise.

Linear smoothing filters

One method to remove noise is to convolve the original image with a mask that represents a low-pass filter or smoothing operation. Smoothing filters tend to blur an image, because pixel intensity values that are significantly higher or lower than the surrounding neighbourhood "smear" across the area. Because of this blurring, linear filters are seldom used in practice for noise reduction; they are, however, often used as the basis for non-linear noise reduction filters.

Adaptive Filter

The adaptive filter is more selective than a comparable linear filter, preserving edges and other high-frequency parts of an image. In addition, there are no design tasks; the wiener2 function handles all preliminary computations and implements the filter for an input image.

Non-Linear Filters

In recent years, a variety of non-linear median-type filters, such as weighted median, rank conditioned rank selection, and relaxed median, have been developed.

Median Filter

The median filter works as follows:
1). Consider each pixel in the image.
2). Sort the neighbouring pixels into order based upon their intensities.
3). Replace the original value of the pixel with the median value from the list.
Median and other RCRS filters are good at removing salt-and-pepper noise from an image, and also cause relatively little blurring of edges; hence they are often used in computer vision applications.
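The steps above can be sketched as a naive implementation; handling the image edges by replication is my assumption.

```python
import numpy as np

def median_filter(img, size=3):
    """Median filter: each pixel becomes the median of its size x size
    neighbourhood. Edges are handled by replicating border pixels."""
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.median(padded[i:i + size, j:j + size])
    return out
```

A single salt impulse in a flat image is removed completely: every neighbourhood contains at most one outlier, so the median is always the background value.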

Fuzzy Filter

Fuzzy filters provide promising results in image-processing tasks and cope with some drawbacks of classical filters. A fuzzy filter is capable of dealing with vague and uncertain information.

Performance measure


The Peak Signal-to-Noise Ratio (PSNR) measures the quality of the processed image with respect to the original image. The values of PSNR and MSE (Mean Squared Error) for the proposed method are found experimentally.
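Both measures are standard; for 8-bit images, a sketch:

```python
import numpy as np

def mse(original, processed):
    """Mean squared error between the original and processed images."""
    diff = original.astype(np.float64) - processed.astype(np.float64)
    return float(np.mean(diff ** 2))

def psnr(original, processed, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB: 10 * log10(peak^2 / MSE)."""
    m = mse(original, processed)
    if m == 0:
        return float("inf")    # identical images
    return 10.0 * np.log10(peak ** 2 / m)
```

Changing one pixel of a flat 4x4 image by 10 gives an MSE of 100/16 = 6.25 and a PSNR of about 40 dB.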


Conclusion 


Using the fuzzy filter technique ensures a noise-free image of good quality. The main advantage of this fuzzy filter is its capability of denoising the destroyed colour component differences. Hence the method compares favourably with other filters available at present.