Segmenting characters from an image

I am working on segmenting the characters in the license plate images below. After thresholding, some characters are broken into more than one piece, so I get wrong recognition results. I applied a morphological closing operation after thresholding, but even then I could not separate the characters correctly.

License Plate image 1

License Plate image 2

License Plate image 3

License Plate image 4

The code used to segment the above images is below

```cpp
#include <iostream>
#include <cv.h>
#include <highgui.h>

using namespace std;
using namespace cv;

int main(int argc, char *argv[])
{
    IplImage *img1 = cvLoadImage(argv[1], 0);
    IplImage *img2 = cvCloneImage(img1);
    cvNamedWindow("Orig");
    cvShowImage("Orig", img1);
    cvWaitKey(0);

    int wind = img1->height;
    if (wind % 2 == 0)
        wind += 1;
    cvAdaptiveThreshold(img1, img1, 255, CV_ADAPTIVE_THRESH_GAUSSIAN_C,
                        CV_THRESH_BINARY_INV, wind);

    IplImage *temp = cvCloneImage(img1);
    cvNamedWindow("Thre");
    cvShowImage("Thre", img1);
    cvWaitKey(0);

    IplConvKernel *kernel = cvCreateStructuringElementEx(3, 3, 1, 1, CV_SHAPE_RECT, NULL);
    cvMorphologyEx(img1, img1, temp, kernel, CV_MOP_CLOSE, 1);
    cvNamedWindow("close");
    cvShowImage("close", img1);
    cvWaitKey(0);
}
```
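As an aside, the heart of this snippet is cvAdaptiveThreshold, which compares each pixel against a statistic of its local neighborhood rather than a single global cutoff. The sketch below is a simplified, illustrative 1-D version using a plain box mean instead of OpenCV's Gaussian-weighted 2-D window; the function name adaptiveThresholdInv is made up for the example. It shows the idea behind CV_THRESH_BINARY_INV (dark ink on a bright plate becomes white foreground):

```cpp
#include <vector>
#include <cstdint>

// Simplified 1-D adaptive threshold with inverted binary output.
// Each pixel is compared against the mean of its local window minus a
// constant C; pixels darker than that local cutoff become foreground (255).
// OpenCV's real implementation works on 2-D windows with Gaussian weights.
std::vector<uint8_t> adaptiveThresholdInv(const std::vector<uint8_t>& px,
                                          int window, int C) {
    std::vector<uint8_t> out(px.size(), 0);
    int half = window / 2;
    for (int i = 0; i < (int)px.size(); ++i) {
        int sum = 0, n = 0;
        // Box mean over the window, clipped at the borders
        for (int j = i - half; j <= i + half; ++j) {
            if (j >= 0 && j < (int)px.size()) { sum += px[j]; ++n; }
        }
        double mean = double(sum) / n;
        // THRESH_BINARY_INV: dark pixels (below local mean - C) become white
        out[i] = (px[i] < mean - C) ? 255 : 0;
    }
    return out;
}
```

Because the cutoff is local, a dark character stroke is detected even when the plate's overall illumination varies, which a single global threshold would miss.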

The output images are below.

U, Y and 2 are not segmented properly

U, P, Y and 2 are not segmented properly

U, 3, Y, 2 and 5 are not segmented properly

Can someone suggest a good method for segmenting the characters in these images?

1 answer

I would like to show a quick and dirty approach for isolating the letters/numbers in the plates, since the actual character segmentation is not the hard part once you have a clean crop. Given these input images:

[Images: the four input license plates]

This is what you get at the end of my algorithm:

[Images: the cropped plate regions produced at the end of the algorithm]

So what I describe in this answer should give you some ideas and help you get rid of the artifacts present at the end of your current segmentation process. Keep in mind that this approach is tuned to work only with these kinds of images; if you need something more robust, you will have to adjust some parameters or come up with entirely new techniques.

  • Given the sharp variations in brightness between the inputs, it is best to equalize the histogram to improve the contrast and make the images more similar to each other, so that the remaining steps and their parameters work for all of them:

[Images: after histogram equalization]

[Images: after bilateral filtering]

[Images: after adaptive thresholding]
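The equalization step above can be sketched in plain C++. This is the standard CDF-based remapping that cv::equalizeHist applies to an 8-bit image; the version below works on a flat pixel vector and is illustrative only (the helper name equalizeHist8u is made up):

```cpp
#include <vector>
#include <cstdint>
#include <cmath>

// Minimal histogram equalization for 8-bit pixels stored in a flat vector.
// Dark/bright values are spread out so the full 0..255 range is used,
// which is what boosts the contrast between plate background and characters.
std::vector<uint8_t> equalizeHist8u(const std::vector<uint8_t>& px) {
    // Build the intensity histogram
    int hist[256] = {0};
    for (uint8_t v : px) hist[v]++;

    // Cumulative distribution function
    int cdf[256];
    int acc = 0;
    for (int i = 0; i < 256; ++i) { acc += hist[i]; cdf[i] = acc; }

    // Smallest non-zero CDF value (needed so the darkest pixel maps to 0)
    int cdfMin = 0;
    for (int i = 0; i < 256; ++i)
        if (cdf[i] > 0) { cdfMin = cdf[i]; break; }

    int n = (int)px.size();
    if (n == cdfMin) return px;  // uniform image: nothing to equalize

    // Remap each pixel through the normalized CDF
    std::vector<uint8_t> out(px.size());
    for (size_t i = 0; i < px.size(); ++i)
        out[i] = (uint8_t)std::lround(
            double(cdf[px[i]] - cdfMin) / (n - cdfMin) * 255.0);
    return out;
}
```

For example, a low-contrast image whose pixels are all 0 or 2 gets stretched to 0 and 255, which is exactly the effect that makes the four plates look more alike before thresholding.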

  • The result of binarization is similar to what you achieved, so I came up with a way of using findContours() to remove the segments that are too small or too large:

[Images: segments remaining after filtering contours by area]
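The area-based filtering in this step boils down to two things: measuring each contour's area (cv::contourArea uses the shoelace formula for a closed polygon) and discarding contours outside a min/max band. A self-contained sketch, with a hypothetical Pt struct standing in for cv::Point:

```cpp
#include <vector>
#include <cmath>

struct Pt { double x, y; };

// Polygon area via the shoelace formula -- conceptually what
// cv::contourArea computes for a closed contour.
double polyArea(const std::vector<Pt>& c) {
    double s = 0;
    for (size_t i = 0; i < c.size(); ++i) {
        const Pt& a = c[i];
        const Pt& b = c[(i + 1) % c.size()];  // wrap around to close the polygon
        s += a.x * b.y - b.x * a.y;
    }
    return std::fabs(s) / 2.0;
}

// Keep only contours whose area falls strictly inside (min_area, max_area),
// mirroring the filtering loop in the answer's code: tiny blobs are noise,
// huge blobs are the plate border or background.
std::vector<std::vector<Pt>> filterByArea(
        const std::vector<std::vector<Pt>>& contours,
        double min_area, double max_area) {
    std::vector<std::vector<Pt>> good;
    for (const auto& c : contours)
        if (double a = polyArea(c); a > min_area && a < max_area)
            good.push_back(c);
    return good;
}
```

With the answer's bounds (50 and 2000), a 10x10 square survives while a 2x2 speck and a 100x100 border blob are both dropped.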

  • The result looks a little better, but the filtering destroyed important pieces of the characters on the plate. That is not a problem right now, though, because we are not worried about character recognition at this stage: we just want to isolate the area where the characters are. So the next step is to keep erasing segments, specifically those that are not aligned with the same Y axis as the characters. The contours that survived this cutting process are:

[Images: contours kept after the Y-axis alignment filter]
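The alignment filter in this step can be isolated as pure logic: average the top/bottom Y of the large contours (assumed to be character segments), then keep only contours whose vertical extent fits inside that band plus a small tolerance. A sketch with made-up P/Band types, using contour height as a stand-in for the area test in the answer's code:

```cpp
#include <vector>

struct P { int x, y; };
struct Band { int top, bottom; };

// Vertical extent (min/max Y) of one contour.
Band extent(const std::vector<P>& c) {
    Band b{c[0].y, c[0].y};
    for (const P& p : c) {
        if (p.y < b.top)    b.top = p.y;
        if (p.y > b.bottom) b.bottom = p.y;
    }
    return b;
}

// Average the top/bottom of the tall contours (the big character
// segments), then keep only contours that fit inside that average band
// with a tolerance -- the same idea as the answer's Y-axis filter.
std::vector<std::vector<P>> keepAligned(
        const std::vector<std::vector<P>>& contours,
        int min_height, int tol) {
    long top_sum = 0, bot_sum = 0;
    int count = 0;
    for (const auto& c : contours) {
        Band b = extent(c);
        if (b.bottom - b.top >= min_height) {
            top_sum += b.top; bot_sum += b.bottom; ++count;
        }
    }
    std::vector<std::vector<P>> kept;
    if (count == 0) return kept;  // no reference segments to average
    int avg_top = (int)(top_sum / count), avg_bot = (int)(bot_sum / count);
    for (const auto& c : contours) {
        Band b = extent(c);
        if (b.top >= avg_top - tol && b.bottom <= avg_bot + tol)
            kept.push_back(c);
    }
    return kept;
}
```

A blob hugging the top edge of the plate (a screw, a border fragment) falls outside the averaged character band and is discarded, while slightly taller or shorter characters survive thanks to the tolerance.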

  • This is much better. At this point a new std::vector<cv::Point> is created to store all the pixel coordinates needed to draw these segments. This is necessary to create a cv::RotatedRect, which lets us compute a bounding box and crop the image:

[Images: bounding box drawn on the plates, and the cropped result]
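The pooling-and-bounding step is simple enough to sketch without OpenCV. The answer uses cv::minAreaRect, which can return a rotated rectangle; the simplified version below computes the axis-aligned equivalent from every point of the surviving contours (the Q/Rect types here are made up for the example):

```cpp
#include <vector>
#include <algorithm>

struct Q { int x, y; };
struct Rect { int x, y, width, height; };

// Pool every point of the surviving contours and compute one bounding
// rectangle around them all. This is the axis-aligned analogue of the
// cv::minAreaRect call in the answer; the resulting rectangle is what
// gets used as the ROI to crop the plate region.
Rect boundingRect(const std::vector<std::vector<Q>>& contours) {
    int minx = contours[0][0].x, maxx = minx;
    int miny = contours[0][0].y, maxy = miny;
    for (const auto& c : contours)
        for (const Q& p : c) {
            minx = std::min(minx, p.x); maxx = std::max(maxx, p.x);
            miny = std::min(miny, p.y); maxy = std::max(maxy, p.y);
        }
    return Rect{minx, miny, maxx - minx + 1, maxy - miny + 1};
}
```

cv::minAreaRect is preferable on real plates because it tolerates a slight camera tilt, but for roughly horizontal plates the axis-aligned box gives the same crop.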

From this point on, you can take the cropped images, apply your own techniques, and easily segment the characters of the plate.

Here is the C++ code:

```cpp
#include <iostream>
#include <string>
#include <vector>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/imgproc/imgproc_c.h>

int main()
{
    /* The outer loop processes the four input images, one per iteration */
    std::string files[] = { "plate1.jpg", "plate2.jpg", "plate3.jpg", "plate4.jpg" };
    cv::Mat imgs[4];

    for (int a = 0; a < 4; a++)
    {
        /* Load input image */
        imgs[a] = cv::imread(files[a]);
        if (imgs[a].empty())
        {
            std::cout << "!!! Failed to open image: " << files[a] << std::endl;
            return -1;
        }

        /* Convert to grayscale */
        cv::Mat gray;
        cv::cvtColor(imgs[a], gray, cv::COLOR_BGR2GRAY);

        /* Histogram equalization improves the contrast between dark/bright areas */
        cv::Mat equalized;
        cv::equalizeHist(gray, equalized);
        cv::imwrite(std::string("eq_" + std::to_string(a) + ".jpg"), equalized);
        cv::imshow("Hist. Eq.", equalized);

        /* Bilateral filter helps to improve the segmentation process */
        cv::Mat blur;
        cv::bilateralFilter(equalized, blur, 9, 75, 75);
        cv::imwrite(std::string("filter_" + std::to_string(a) + ".jpg"), blur);
        cv::imshow("Filter", blur);

        /* Threshold to binarize the image */
        cv::Mat thres;
        cv::adaptiveThreshold(blur, thres, 255, cv::ADAPTIVE_THRESH_GAUSSIAN_C, cv::THRESH_BINARY, 15, 2);
        cv::imwrite(std::string("thres_" + std::to_string(a) + ".jpg"), thres);
        cv::imshow("Threshold", thres);

        /* Remove the small segments and the extremely large ones as well */
        std::vector<std::vector<cv::Point> > contours;
        cv::findContours(thres, contours, cv::RETR_LIST, cv::CHAIN_APPROX_SIMPLE);

        double min_area = 50;
        double max_area = 2000;
        std::vector<std::vector<cv::Point> > good_contours;
        for (size_t i = 0; i < contours.size(); i++)
        {
            double area = cv::contourArea(contours[i]);
            if (area > min_area && area < max_area)
                good_contours.push_back(contours[i]);
        }

        cv::Mat segments(gray.size(), CV_8U, cv::Scalar(255));
        cv::drawContours(segments, good_contours, -1, cv::Scalar(0), cv::FILLED, 4);
        cv::imwrite(std::string("segments_" + std::to_string(a) + ".jpg"), segments);
        cv::imshow("Segments", segments);

        /* Examine the segments that survived the previous (admittedly lame)
         * filtering process to figure out the top and bottom heights of the
         * largest segments. This info will be used to remove segments that
         * are not aligned with the letters/numbers of the plate.
         * This technique is super flawed for other types of input images.
         */

        // Figure out the average of the top/bottom heights of the largest segments
        int min_average_y = 0, max_average_y = 0, count = 0;
        for (size_t i = 0; i < good_contours.size(); i++)
        {
            std::vector<cv::Point> c = good_contours[i];
            double area = cv::contourArea(c);
            if (area > 200)
            {
                int min_y = segments.rows, max_y = 0;
                for (size_t j = 0; j < c.size(); j++)
                {
                    if (c[j].y < min_y) min_y = c[j].y;
                    if (c[j].y > max_y) max_y = c[j].y;
                }
                min_average_y += min_y;
                max_average_y += max_y;
                count++;
            }
        }
        min_average_y /= count;
        max_average_y /= count;
        //std::cout << "Average min: " << min_average_y << " max: " << max_average_y << std::endl;

        // Create a new vector of contours with just the ones that fall within the min/max Y
        std::vector<std::vector<cv::Point> > final_contours;
        for (size_t i = 0; i < good_contours.size(); i++)
        {
            std::vector<cv::Point> c = good_contours[i];
            int min_y = segments.rows, max_y = 0;
            for (size_t j = 0; j < c.size(); j++)
            {
                if (c[j].y < min_y) min_y = c[j].y;
                if (c[j].y > max_y) max_y = c[j].y;
            }

            // 5 adds a little tolerance around the average Y coordinate
            if (min_y >= (min_average_y - 5) && (max_y <= max_average_y + 5))
                final_contours.push_back(c);
        }

        cv::Mat final(gray.size(), CV_8U, cv::Scalar(255));
        cv::drawContours(final, final_contours, -1, cv::Scalar(0), cv::FILLED, 4);
        cv::imwrite(std::string("final_" + std::to_string(a) + ".jpg"), final);
        cv::imshow("Final", final);

        // Create a single vector with all the points that make up the segments
        std::vector<cv::Point> points;
        for (size_t x = 0; x < final_contours.size(); x++)
        {
            std::vector<cv::Point> c = final_contours[x];
            for (size_t y = 0; y < c.size(); y++)
                points.push_back(c[y]);
        }

        // Compute a single bounding box for the points
        cv::RotatedRect box = cv::minAreaRect(cv::Mat(points));
        cv::Rect roi;
        roi.x = box.center.x - (box.size.width / 2);
        roi.y = box.center.y - (box.size.height / 2);
        roi.width = box.size.width;
        roi.height = box.size.height;

        // Draw the box on the original image
        cv::Point2f vertices[4];
        box.points(vertices);
        for (int i = 0; i < 4; ++i)
            cv::line(imgs[a], vertices[i], vertices[(i + 1) % 4], cv::Scalar(255, 0, 0), 1, CV_AA);
        cv::imwrite(std::string("box_" + std::to_string(a) + ".jpg"), imgs[a]);
        cv::imshow("Box", imgs[a]);

        // Crop the equalized image with the area defined by the ROI
        cv::Mat crop = equalized(roi);
        cv::imwrite(std::string("crop_" + std::to_string(a) + ".jpg"), crop);
        cv::imshow("crop", crop);

        /* The cropped image should contain only the plate letters and numbers.
         * From here on you can use your own techniques to segment the characters properly.
         */
        cv::waitKey(0);
    }

    return 0;
}
```

For a more complete and robust way to recognize license plates with OpenCV, take a look at Mastering OpenCV with Practical Computer Vision Projects, Chapter 5. The source code is available on GitHub!
