Why Use Moments of Area?
Moments of area are particularly useful in systems based on artificial intelligence because they describe the original shape of a ligature (blob) quite specifically. Their key property is that they are rotation, translation, and scaling (RTS) invariant, which is why they are so widely used in recognition. These moments, along with other features, are then usually passed to a neural network to obtain the result.
It is highly recommended to read the previous article before proceeding: https://codersource.net/2010/02/01/implementation-of-labeling-connected-components/
Consider now a two-dimensional light intensity function g(x, y) (the pattern or visual input), normalized so that the volume under the function is equal to one. For our purposes it then behaves just like a bivariate probability density function. For such a function of two variables, the moments are defined as follows:
m_jk = ∫∫ x^j y^k g(x, y) dx dy,  for j, k = 0, 1, 2, …
In this way we can obtain a countably infinite number of shape measurements for describing g(x, y). As in the one-dimensional case, an infinite number of measurements yields an impractical system. Luckily, in practice, even if the shape of g(x, y) is quite complicated, not many moments are needed to describe it adequately.
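For a digital image the double integral becomes a sum over the pixel grid: m_jk = Σ_x Σ_y x^j y^k g(x, y). A minimal C# sketch of this discrete form follows; the helper name RawMoment and the double[,] input are illustrative only, not part of the article's classes.

using System;

// Discrete raw moment m_jk = sum over all pixels of x^j * y^k * g(x, y).
// Illustrative helper only; g holds the normalized intensities.
static double RawMoment(double[,] g, int j, int k)
{
    double m = 0.0;
    for (int y = 0; y < g.GetLength(0); y++)
        for (int x = 0; x < g.GetLength(1); x++)
            m += Math.Pow(x, j) * Math.Pow(y, k) * g[y, x];
    return m;
}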
Consider designing a character recognition system for machine-printed characters. Two characters such as a large A and a small A are still considered A's and therefore quite similar even though one is bigger than the other. Such patterns would still be considered A's even if they differed in location, stretching, squeezing, and shearing. Rotation is a rather special transformation, and depending on the context it may or may not be desirable to treat patterns that are invariant under rotation as belonging to the same pattern class. This may result, for example, in the system's inability to discriminate between M and W or p and d. Transformations such as these are special cases of affine transformations and are treated in detail in [Ry86]. Let R2 denote the two-dimensional Euclidean plane.
Definition: A mapping T of R2 onto R2 is called an affine transformation if there exists an invertible 2×2 matrix A and a vector b such that T(x) = Ax + b for all x in R2. [Figure: a pattern P and three of its collapsing projections on the x, y, and x+y axes.]
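For example, uniform scaling by a factor s combined with a translation by (t_x, t_y) is the affine map T(x, y) = (s·x + t_x, s·y + t_y), i.e. A = sI and b = (t_x, t_y); it is invertible whenever s ≠ 0.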
How this method of moments came into existence in fact traces back to Sigfried. He opened the book and quickly noticed that different shapes had different formulas and gave different values even if the shapes had the same height, width, or area. He was getting excited now. He turned to the next page and was electrified. Here was a page full of letter shapes, exactly his field of interest. In the book they were called I-beams, X-beams, L-beams and what have you, but to him they were just letters of the alphabet, and they all had their formulas for the moments. Sigfried realized he had stumbled on a new and different method for extracting shape features from patterns. He put the dynamics book back on the shelf, kissed his worn-out paperback copy of de Bono's Lateral Thinking, and rushed back to his office to tell his roommate he had found a topic for his thesis: feature extraction for character recognition by the Method of Moments.
Guidelines for Use
To illustrate RTS-invariant moments, we start with a simple image containing some distinct artificial objects (specifically, text).
Next, we convert the image to grayscale. Then we threshold it into a binary image, so that only two intensities are present: black and white, or in other words 0 and 1.
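A minimal sketch of these two steps, using the standard luminance weights and a fixed threshold of 128 (the threshold value and the assumption of dark text on a light background are mine; the article's own code works through BitmapData for speed):

using System.Drawing;

// Grayscale conversion followed by thresholding to a 0/1 image.
// GetPixel is used here only for clarity; it is far slower than
// the BitmapData approach used in the actual program.
static int[,] ToBinary(Bitmap bmp, int threshold)
{
    var binary = new int[bmp.Height, bmp.Width];
    for (int y = 0; y < bmp.Height; y++)
        for (int x = 0; x < bmp.Width; x++)
        {
            Color c = bmp.GetPixel(x, y);
            double gray = 0.299 * c.R + 0.587 * c.G + 0.114 * c.B;
            binary[y, x] = gray < threshold ? 1 : 0; // 1 = object pixel (dark text)
        }
    return binary;
}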
After scanning this image and labeling each distinct pixel class with a different gray value, we obtain the labeled output image.
C# Sample Program:
The algorithm is coded in C# using unsafe code blocks so that pixel access does not compromise the program's speed. The BitmapData class is used to read and process the pixels in the image; it is this facility that lets C# reach such speeds even in image-processing applications. A set of modules implements the algorithm: the program first converts the image to grayscale levels, then converts it to binary. The binary image is then split into components. Either the 4-connected or the 8-connected algorithm could be used here; we use the 8-connected algorithm to find the blobs in the image, as described above.
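The locking pattern looks roughly like this; a sketch assuming a 24-bpp image (the method name and loop bodies are illustrative, and the project must be compiled with /unsafe):

using System.Drawing;
using System.Drawing.Imaging;

// Lock the bitmap's buffer and walk it with a raw pointer.
// Assumes Format24bppRgb; honors the stride padding at row ends.
static unsafe void ForEachPixel(Bitmap bmp)
{
    Rectangle rect = new Rectangle(0, 0, bmp.Width, bmp.Height);
    BitmapData data = bmp.LockBits(rect, ImageLockMode.ReadWrite,
                                   PixelFormat.Format24bppRgb);
    try
    {
        byte* row = (byte*)data.Scan0;
        for (int y = 0; y < data.Height; y++, row += data.Stride)
        {
            byte* p = row;
            for (int x = 0; x < data.Width; x++, p += 3)
            {
                // p[0] = blue, p[1] = green, p[2] = red
            }
        }
    }
    finally
    {
        bmp.UnlockBits(data);
    }
}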
In order to understand the basic functionality of the program, you should first read my previous articles <give reference here>. The program consists of a set of classes that implement the moments-of-area computation. Let us see how these classes work together to produce the values of the invariant moments. The diagram below explains the hierarchy of the classes.
Three classes can be seen in the image above. Please refer to the article “Connected Components” to learn about the functionality of the class ConnectedComponents. Here we will discuss how the classes Object and Features work.
Class Object:
A collection of these objects is obtained during the process of finding the connected components; each one holds the following information about a single object.
private int startX = 0, startY = 0;
private int finalX = 0, finalY = 0;
public image.Features feature;
(startX, startY): the top-left position of the object.
(finalX, finalY): the bottom-right position of the object.
feature: an object of type Features.
Process:
In the previous article, we saw that we get an array of objects when we pass an image to an instance of the class ConnectedComponents. Now we want to extract some important information from these objects. To recognize objects, we first segment them from the image and then compute features for each one. Among the features we calculate are the RTS-invariant moments, as sketched below.
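Pieced together from this and the previous article, the overall flow looks roughly like this; the Process method name and the label handling are assumptions, not the project's actual API:

using System.Drawing;

// Hypothetical driver: segment blobs, then read RTS-invariant
// moments per blob via each object's Features instance.
Bitmap input = new Bitmap("sample.png");
ConnectedComponents cc = new ConnectedComponents();
image.Object[] objects = cc.Process(input);   // one entry per labeled blob (assumed API)

foreach (image.Object obj in objects)
{
    int label = 1; // in practice, the blob's label/intensity value (assumed known per object)
    // Read a couple of RTS-invariant moments by order (p, q).
    double eta20 = obj.feature.normCentralMoment(2, 0, label);
    double eta02 = obj.feature.normCentralMoment(0, 2, label);
}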
Class Features:
public Features(int startX, int startY, int width, int height, int l)
The constructor takes the top-left position of the object, its width and height, and the label l, i.e., the intensity value of the object, so that the object's pixels can be reached easily.
private void calcAreaCenter()
- It calculates the area of the object.
- It calculates the central position of the object along the horizontal axis.
- It calculates the central position of the object along the vertical axis.
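A sketch of what calcAreaCenter computes, under the assumption that the Features class keeps the labeled image and the object's label from the constructor (all field names here are illustrative):

// Assumed fields of the Features class; names are illustrative.
private int area;                    // m00: pixel count of the blob
private double centerX, centerY;     // (m10/m00, m01/m00)

// Area and center of mass of the labeled pixels inside the bounding box.
private void calcAreaCenter()
{
    area = 0;
    double sumX = 0, sumY = 0;
    for (int y = startY; y < startY + height; y++)
        for (int x = startX; x < startX + width; x++)
            if (labeled[y, x] == label)   // 'labeled' and 'label' are assumed
            {                             // to be stored by the constructor
                area++;
                sumX += x;
                sumY += y;
            }
    centerX = sumX / area;   // center along the horizontal axis
    centerY = sumY / area;   // center along the vertical axis
}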
public double normCentralMoment(int p, int q, int objectId)
p: the order of the moment along the vertical axis.
q: the order of the moment along the horizontal axis.
objectId: the intensity value (label) of the object.
The function calculates the normalized central moment of order (p, q) with respect to the object's center.
public double centralMoment(int p, int q, int objectId)
p: the order of the moment along the vertical axis.
q: the order of the moment along the horizontal axis.
objectId: the intensity value (label) of the object.
The function calculates the central moment of order (p, q) over the object's pixels.
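Putting the two together: the central moment of order (p, q) is μ_pq = Σ_x Σ_y (x − x̄)^p (y − ȳ)^q taken over the object's pixels, and the normalized form is η_pq = μ_pq / μ_00^γ with γ = (p + q)/2 + 1, which is what makes the values scale invariant. A sketch of how the two methods might be implemented, reusing the assumed fields from the calcAreaCenter sketch above:

// Central moment mu_pq: moments taken about the blob's center,
// which removes dependence on position (translation invariance).
public double centralMoment(int p, int q, int objectId)
{
    double mu = 0.0;
    for (int y = startY; y < startY + height; y++)
        for (int x = startX; x < startX + width; x++)
            if (labeled[y, x] == objectId)
                mu += Math.Pow(x - centerX, p) * Math.Pow(y - centerY, q);
    return mu;
}

// Normalized central moment eta_pq: dividing by mu00^gamma, with
// gamma = (p + q)/2 + 1, additionally removes dependence on scale.
public double normCentralMoment(int p, int q, int objectId)
{
    double gamma = (p + q) / 2.0 + 1.0;
    double mu00 = centralMoment(0, 0, objectId); // equals the area for a binary image
    return centralMoment(p, q, objectId) / Math.Pow(mu00, gamma);
}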
The final output is a colored image showing a separate color for every blob. An array named objects, of type Object, contains the size of each segmented blob, and each entry also carries the features obtained for that object.
Output: A Word file containing the feature list for the RTS-invariant moments.
Attachments:
Project Files: RTS_Invariant.zip