I ran into the problem of creating floating point coordinates from an image.
The initial problem is this: the input image is handwritten text. From this I want to create a set of points (only x, y coordinates) that make up individual characters.
At first I used findContours to create the points. Since that finds the outlines of the characters, you first have to run a thinning (skeletonization) algorithm, because I am not interested in the shape of the characters, only in the strokes themselves, as in this case.
Input:

Thinning:

So I run my input through the thinning algorithm and everything is fine; the output looks good. Running findContours on this, however, does not work well: it misses a lot and I get something unusable.
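For reference, the pipeline up to this point looks roughly like the sketch below. The thinning call assumes the Ximgproc.thinning function from opencv_contrib (I have not shown my actual thinning code), and "input.png" is just a placeholder path:

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;
import org.opencv.ximgproc.Ximgproc; // thinning lives in opencv_contrib (ximgproc)

import java.util.ArrayList;
import java.util.List;

public class SkeletonContours {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        // Placeholder path for the handwriting image
        Mat gray = Imgcodecs.imread("input.png", Imgcodecs.IMREAD_GRAYSCALE);

        // Binarize so the strokes become white (non-zero) on black
        Mat binary = new Mat();
        Imgproc.threshold(gray, binary, 0, 255,
                Imgproc.THRESH_BINARY_INV | Imgproc.THRESH_OTSU);

        // Thin the strokes down to one-pixel-wide skeletons
        Mat skeleton = new Mat();
        Ximgproc.thinning(binary, skeleton);

        // This is the step that misses a lot of the skeleton pixels
        List<MatOfPoint> contours = new ArrayList<>();
        Mat hierarchy = new Mat();
        Imgproc.findContours(skeleton, contours, hierarchy,
                Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_NONE);
    }
}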
The second idea was to create bounding boxes (using findContours), use those bounding boxes to crop the characters out of the thinned image, collect all non-white pixel indices as "points", and shift them by the bounding box position. This produces an even worse result and feels like a bad approach.
Awful code for this:
// Crop the ROI for one bounding box out of the thinned image
Mat temp = new Mat(edges, bb);
byte roi_buff[] = new byte[(int) (temp.total() * temp.channels())];
temp.get(0, 0, roi_buff);
int COLS = temp.cols();

List<Point> preArrayList = new ArrayList<Point>();
for (int i = 0; i < roi_buff.length; i++) {
    if (roi_buff[i] != 0) {
        // bb.tl() returns a fresh Point, so it is safe to modify
        Point tempP = bb.tl();
        tempP.x += i % COLS; // column offset within the ROI
        tempP.y += i / COLS; // row offset within the ROI
        preArrayList.add(tempP);
    }
}
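For what it is worth, Core.findNonZero does the same scan more compactly (this is only a sketch of an equivalent version, reusing temp and bb from above plus org.opencv.core.Core and MatOfPoint; the coordinates still come out in scanline order, which turns out to be the real problem, see the update below):

// Collect every non-zero pixel of the ROI in one call
MatOfPoint nonZero = new MatOfPoint();
Core.findNonZero(temp, nonZero);

List<Point> points = new ArrayList<Point>();
for (Point p : nonZero.toList()) {
    // Shift from ROI coordinates back to full-image coordinates
    points.add(new Point(p.x + bb.tl().x, p.y + bb.tl().y));
}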
Are there any alternatives or am I missing something?
UPDATE:
I overlooked the fact that I need the points (pixels) to be ordered. With the method above I just capture all the pixels in scanline order. If you look, for example, at an "o", it will first take a point on the left side, then one on the right side. I need the points ordered by adjacency, because I want to draw paths through them later (outside of OpenCV). Is that possible?
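To make the goal concrete, something like the naive greedy nearest-neighbour walk below is the kind of ordering I am after (just an illustration of the idea, not code I am happy with; it ignores stroke breaks and branch points and is O(n²)):

// Greedy nearest-neighbour ordering of the collected points:
// start at an arbitrary point and repeatedly jump to the closest
// remaining one. Only meant to illustrate the desired output order.
List<Point> ordered = new ArrayList<Point>();
List<Point> remaining = new ArrayList<Point>(preArrayList);
Point current = remaining.remove(0);
ordered.add(current);
while (!remaining.isEmpty()) {
    Point nearest = null;
    double best = Double.MAX_VALUE;
    for (Point p : remaining) {
        double d = (p.x - current.x) * (p.x - current.x)
                 + (p.y - current.y) * (p.y - current.y);
        if (d < best) {
            best = d;
            nearest = p;
        }
    }
    remaining.remove(nearest);
    ordered.add(nearest);
    current = nearest;
}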