Good question.
Algorithms for recognizing images at a high level of abstraction (for example, the kind of abstraction needed for reliable handwriting recognition or facial recognition software) remain among the most difficult problems in computer science today. However, pattern recognition for well-constrained applications, such as the one you describe, is a solvable and very fun algorithmic problem.
I would suggest two possible strategies to accomplish your task:
The first strategy involves using third-party software that can pre-process your image and return data about its low-level components. I have experience with a piece of software called Pixcavator that has an SDK. Pixcavator will process your image and examine the discrepancies between the color values of neighboring pixels in order to return the borders of the various image components. Software such as Pixcavator should be able to easily determine the boundaries of the components in your image and, most importantly, of each of the pips. Your job will then be to sift through the data the third-party software returns and find the components that match the description of small circular regions that are either solid white or solid black. You can then tally how many of these components you find and use that count to return the number of pips in your image.
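I can't speak to the Pixcavator SDK's actual API, but the component-detection idea can be sketched in plain Python as a flood-fill connected-components pass over a binarized image, keeping only components in a pip-sized range. The function name, the size thresholds, and the toy 0/1 "die face" are my own illustrative assumptions:

```python
def count_pips(image, dark=1, min_size=3, max_size=50):
    """Count connected components of `dark` pixels whose size
    falls in a pip-sized range (assumed thresholds)."""
    rows, cols = len(image), len(image[0])
    seen = set()
    pips = 0
    for r in range(rows):
        for c in range(cols):
            if image[r][c] == dark and (r, c) not in seen:
                # Flood-fill this component, counting its pixels.
                stack, size = [(r, c)], 0
                while stack:
                    y, x = stack.pop()
                    if (y, x) in seen:
                        continue
                    seen.add((y, x))
                    size += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and image[ny][nx] == dark
                                and (ny, nx) not in seen):
                            stack.append((ny, nx))
                # Treat components within the pip-sized range as pips.
                if min_size <= size <= max_size:
                    pips += 1
    return pips

# A toy binarized die face showing two pips.
face = [
    [0, 0, 0, 0, 0, 0],
    [0, 1, 1, 0, 0, 0],
    [0, 1, 1, 0, 0, 0],
    [0, 0, 0, 1, 1, 0],
    [0, 0, 0, 1, 1, 0],
    [0, 0, 0, 0, 0, 0],
]
print(count_pips(face))  # 2
```

A real SDK would also give you component shapes, which lets you filter for circularity rather than just size.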
If you are ambitious enough to tackle this problem without third-party software, it is still solvable. Essentially, you want to define a circular scanner: a set of pixels in a circular shape that will sweep across your test image looking for a pip (much like an eye scans a picture looking for something hidden in it). As your algorithmic “eye” scans the image, it will take sets of pixels from the image (call these your test sets) and compare them against a predefined set of pixels (call this your training set), checking whether the test set matches the training set within a predetermined margin of error. The easiest way to run such a test is simply to compare the color data of each pixel in the test set with the corresponding pixel in the training set, producing a third set of values called your discrepancy set. If the values in your discrepancy set are small enough (meaning the test set closely resembles the training set), you declare that area of the image a pip and move on to scan the rest of your image.
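The scanner described above amounts to template matching. Here is a minimal sketch under stated assumptions: a square window (rather than a true circle, for brevity) slides over the image, the per-pixel absolute differences form the discrepancy set, and their sum is tested against the margin of error. The function name, the toy template, and the toy image are all hypothetical:

```python
def find_pips(image, template, margin):
    """Slide `template` over `image`; report top-left corners where
    the summed per-pixel discrepancy is within `margin`."""
    th, tw = len(template), len(template[0])
    hits = []
    for r in range(len(image) - th + 1):
        for c in range(len(image[0]) - tw + 1):
            # Discrepancy set: absolute color difference per pixel.
            discrepancy = sum(
                abs(image[r + y][c + x] - template[y][x])
                for y in range(th) for x in range(tw)
            )
            if discrepancy <= margin:
                hits.append((r, c))
    return hits

# Training set: a tiny cross-shaped "pip" template.
template = [
    [0, 1, 0],
    [1, 1, 1],
    [0, 1, 0],
]
# Test image containing one exact copy of the template.
image = [
    [0, 1, 0, 0, 0],
    [1, 1, 1, 0, 0],
    [0, 1, 0, 0, 0],
    [0, 0, 0, 0, 0],
]
print(find_pips(image, template, margin=0))  # [(0, 0)]
```

In practice you would raise `margin` above zero so that noisy, imperfect pips still match, which is exactly the tuning step described next.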
Do a little guess-and-check to find the right margin of error, so that you catch every pip but don't get false positives on things that aren't pips.
Amichai May 03 '10 at 6:01