Converting an image from 8 bits to 16/32 bits with NumPy

I use OpenCV 2 to process some images in the YCbCr color space. At the moment, I can detect some noise due to the conversion RGB → YCbCr and then YCbCr → RGB, but as the documentation says:

If you use cvtColor with 8-bit images, then the conversion will have some information lost. For many applications this will not be noticeable, but it is recommended to use 32-bit images in applications that need the full range of colors, or that convert an image before an operation and then convert back.

So I would like to convert my image to 16 or 32 bits, but I could not find how to do this with NumPy. Any ideas?

import cv2

img = cv2.imread(imgNameIn)                      # loaded as an 8-bit BGR image
# Here I want to convert img to 32 bits
cv2.cvtColor(img, cv2.COLOR_BGR2YCR_CB, img)     # BGR -> YCrCb, in place
# Some image processing ...
cv2.cvtColor(img, cv2.COLOR_YCR_CB2BGR, img)     # YCrCb -> BGR, in place
cv2.imwrite(imgNameOut, img, [cv2.cv.CV_IMWRITE_PNG_COMPRESSION, 0])
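
To show the noise I mean, here is a small round-trip check (a sketch using imgNameIn as above; the exact differences depend on the image and the OpenCV build):

import cv2
import numpy as np

img = cv2.imread(imgNameIn)                       # 8-bit BGR
ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCR_CB)   # BGR -> YCrCb on 8-bit data
back = cv2.cvtColor(ycrcb, cv2.COLOR_YCR_CB2BGR)  # ... and straight back again
diff = np.abs(img.astype(np.int16) - back.astype(np.int16))
print(diff.max())                                 # non-zero: rounding noise from the 8-bit round trip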
2 answers

Thanks to @moarningsun, the problem is resolved:

import cv2
import numpy as np

i = cv2.imread(imgNameIn, cv2.CV_LOAD_IMAGE_COLOR)  # make sure the input is an 8-bit color image
img = np.array(i, dtype=np.uint16)                  # this only changes the type, not the values
img *= 256                                          # spread the values over the 16-bit range (255 -> 65280)
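
Applied to the pipeline from the question, that would look roughly like this (a sketch, assuming OpenCV 2's cvtColor accepts 16-bit input for the BGR ↔ YCrCb pair and that the processing step is adapted to the 16-bit range; the result is scaled back to 8 bits before writing):

import cv2
import numpy as np

i = cv2.imread(imgNameIn, cv2.CV_LOAD_IMAGE_COLOR)   # 8-bit BGR input
img = i.astype(np.uint16) * 256                      # promote to the 16-bit range
ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCR_CB)      # BGR -> YCrCb on 16-bit data
# ... image processing on ycrcb ...
bgr = cv2.cvtColor(ycrcb, cv2.COLOR_YCR_CB2BGR)      # back to BGR, still 16-bit
out = (bgr // 256).astype(np.uint8)                  # scale down to 8 bits for the PNG below
cv2.imwrite(imgNameOut, out, [cv2.cv.CV_IMWRITE_PNG_COMPRESSION, 0])

Alternatively, PNG can store 16-bit samples, so the 16-bit image could also be written out directly without scaling back down.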

The accepted answer is not accurate. A 16-bit image has 65,536 intensity levels (2^16), so the values range from 0 to 65535.

If you want to get a 16-bit image from an image represented as a float array in the range from 0 to 1, you need to multiply each coefficient of this array by 65535.

Keep in mind that the result of that multiplication is still a float array: you also need a float-to-integer conversion (a cast to an unsigned 16-bit type) to get an actual 16-bit image.
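
In code, something like this (a sketch; imgf stands in for a hypothetical float image with values in [0, 1]):

import numpy as np

imgf = np.random.rand(4, 4, 3)                    # hypothetical float image in [0, 1]
img16 = np.round(imgf * 65535).astype(np.uint16)  # scale to the full 16-bit range, then cast
print(img16.dtype, img16.max())                   # uint16, at most 65535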

