I use Python, but the basic idea is the same.
If you directly do cvtColor: bgr -> gray for img2, you will fail, because in grayscale it becomes difficult to distinguish between the regions:

Related answers:
- How to detect color spots in an image using OpenCV?
- Edge detection on a colored background using OpenCV
- OpenCV C++/Obj-C: …
In your image the paper is white, while the background is colored. So it is better to detect the paper in the Saturation channel of the HSV color space. For details on HSV and Saturation, see https://en.wikipedia.org/wiki/HSL_and_HSV#Saturation.
Then I do these steps:

1. Read the image into BGR.
2. Convert from BGR to HSV.
3. Threshold the S channel.
4. Find the maximum external contour (you can also use Canny or HoughLines as you like; I choose findContours), then approximate it to get the corners.
Python (Python 3.5 + OpenCV 3.3):
import cv2
import numpy as np

# Read into BGR
img = cv2.imread("test2.jpg")

# Convert to HSV and split out the Saturation channel
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
h, s, v = cv2.split(hsv)

# Threshold S: the white paper has low saturation, so invert
th, threshed = cv2.threshold(s, 50, 255, cv2.THRESH_BINARY_INV)

# Find external contours ([-2] keeps this working across OpenCV versions)
cnts = cv2.findContours(threshed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]

# Keep the largest contour and approximate it to get the corners
cnts = sorted(cnts, key=cv2.contourArea)
cnt = cnts[-1]
arclen = cv2.arcLength(cnt, True)
approx = cv2.approxPolyDP(cnt, 0.02 * arclen, True)

# Draw the contour (blue) and the approximated corners (red), then save
canvas = img.copy()
cv2.drawContours(canvas, [cnt], -1, (255, 0, 0), 1, cv2.LINE_AA)
cv2.drawContours(canvas, [approx], -1, (0, 0, 255), 1, cv2.LINE_AA)
cv2.imwrite("detected.png", canvas)