Making a cartoon / comic-book version of an image with Python and OpenCV

I am trying to write a function that will make any image look like a comic-book drawing. Here is my code:

import cv2
import numpy

__author__ = "Michael Beyeler"
__license__ = "GNU GPL 3.0 or later"


class Cartoonizer:
    def __init__(self):
        self.numDownSamples = 1
        self.numBilateralFilters = 7

    def render(self, img_rgb):
        # downsample image using Gaussian pyramid
        img_color = img_rgb
        for _ in range(self.numDownSamples):
            img_color = cv2.pyrDown(img_color)

        # repeatedly apply small bilateral filter instead of applying
        # one large filter
        for _ in range(self.numBilateralFilters):
            img_color = cv2.bilateralFilter(img_color, 9, 9, 7)

        # upsample image to original size
        for _ in range(self.numDownSamples):
            img_color = cv2.pyrUp(img_color)

        # convert to grayscale and apply repeated bilateral blur
        img_gray = cv2.cvtColor(img_rgb, cv2.COLOR_RGB2GRAY)
        img_gray_blur = img_gray
        for _ in range(self.numBilateralFilters):
            img_gray_blur = cv2.bilateralFilter(img_gray_blur, 9, 9, 7)

        # detect and enhance edges
        img_edge = cv2.adaptiveThreshold(img_gray_blur, 255,
                                         cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                         cv2.THRESH_BINARY, 9, 5)

        # convert back to color so that it can be bit-ANDed with the color image
        img_edge = cv2.cvtColor(img_edge, cv2.COLOR_GRAY2RGB)

        # ensure that img_color and img_edge are the same size,
        # otherwise bitwise_and will not work
        height = min(img_color.shape[0], img_edge.shape[0])
        width = min(img_color.shape[1], img_edge.shape[1])
        img_color = img_color[0:height, 0:width]
        img_edge = img_edge[0:height, 0:width]

        return cv2.bitwise_and(img_color, img_edge)
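
For reference, a minimal way to run the class might look like the sketch below; the file names are placeholders, and note that cv2.imread returns BGR data, so the COLOR_RGB2GRAY flag above is really applied to a BGR image (harmless for this effect, but worth knowing).

import cv2

# Minimal driver sketch; "input.jpg" / "cartoon.jpg" are placeholder names.
img = cv2.imread("input.jpg")            # OpenCV loads images as BGR
if img is None:
    raise FileNotFoundError("input.jpg not found")
cartoon = Cartoonizer().render(img)
cv2.imwrite("cartoon.jpg", cartoon)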

I took it from here, retaining the license and changing it a little: http://www.askaswiss.com/2016/01/how-to-create-cartoon-effect-opencv-python.html

Here is what I started with: Source image

Here is what my script outputs: My script output

And here is what I need: The image I'm trying to replicate

What I have noticed so far:

  • My output keeps too many color gradations after the blur; I need a less smooth transition from light to dark.
  • The target image has smooth, continuous outlines, while my code produces a lot of noise (“lone” black dots) and breaks the lines apart. I tried changing some parameters and added some filters at random, but I really do not know what to do next.

Any help is greatly appreciated.

1 answer

I do not have Python code; mine is written in MATLAB (using DIPimage 3), but I think you can get some ideas from it. Here is what it does:

1- s1 is a slightly smoothed version of the image, s is a more strongly smoothed version. s1 will be used to create the lines; s is the color basis of the output. For the smoothing I use a simple nonlinear diffusion, which preserves (even enhances) edges, much like a bilateral filter.
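
OpenCV does not ship DIPimage's diffusion, but one possible stand-in (my assumption, and it requires the opencv-contrib-python package) is the Perona-Malik implementation in cv2.ximgproc, which can produce both the lightly smoothed s1 and the strongly smoothed s:

import cv2

# Perona-Malik diffusion as a stand-in for DIPimage's colordiffusion;
# the step size (0.1), K (10) and iteration counts are assumptions to tune.
img = cv2.imread("teddybear.jpg")                              # 8-bit BGR input
s1 = cv2.ximgproc.anisotropicDiffusion(img, 0.1, 10, 2)        # lightly smoothed, for the lines
s = cv2.ximgproc.anisotropicDiffusion(cv2.GaussianBlur(img, (0, 0), 1),
                                      0.1, 10, 10)             # strongly smoothed, color base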

2- To s1, the slightly smoothed image, I apply the Laplace operator (this one uses Gaussian derivatives; the parameter 1.5 is the sigma of the Gaussian). This is similar to a difference of Gaussians. Your call to cv2.adaptiveThreshold does the equivalent of gaussf(img,2)-img. My Laplacian does something similar to gaussf(img,2)-gaussf(img,1) (a difference of Gaussians). That is, this output has somewhat less detail than the cv2.adaptiveThreshold one.
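
A rough OpenCV counterpart of this difference-of-Gaussians step might look like the following sketch (working in float so negative responses are not clipped; the sigmas 1 and 2 follow the gaussf comparison above):

import cv2
import numpy as np

def line_response(s1):
    """Difference-of-Gaussians response of the lightly smoothed image s1;
    it is positive on thin dark structures, i.e. the future lines."""
    s1f = s1.astype(np.float32)
    return cv2.GaussianBlur(s1f, (0, 0), 2) - cv2.GaussianBlur(s1f, (0, 0), 1)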

3- The Laplacian was applied to a color image, so it yields a color output. I convert this to grey by taking the maximum over the color channels. Then I clip and stretch it, essentially doing the second half of what cv2.adaptiveThreshold does, except that the output is not binary but still grey-valued. That is, there are darker and lighter lines. More importantly, the lines do not look jagged, because there is a gradual change from dark to light along the edge of each line. I had to tweak these parameters a bit to get a good result. l is now an image that is 1 where there will be no lines, and lower (darker) where there will be lines.
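
In NumPy, the max-over-channels plus clip-and-stretch could be sketched as below; the clip limits 0.4 and 4 come from the MATLAB code further down, while the simple min/max stretch (instead of DIPimage's percentile-based stretch) is an assumption:

import numpy as np

def line_image(dog, lo=0.4, hi=4.0):
    """Collapse the color DoG response to a grey line image l in [0, 1]:
    1 where no line will be drawn, darker where lines will be."""
    l = dog.max(axis=2)          # max over the color channels
    l = np.clip(l, lo, hi)       # clip(l, 0.4, 4)
    l = (l - lo) / (hi - lo)     # stretch to [0, 1]
    return 1.0 - l               # invert: strong response -> dark line value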

4- Now I apply a path closing. This is a fairly specialized morphological operator; you may have to make some effort to find an implementation. It removes dark lines that are very short, which is basically the "lone dots" problem you are having. I am sure there are other ways to solve the dot problem.
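
If you cannot find a path-closing implementation, one simpler substitute (my suggestion, not what this answer used) is to drop small isolated dark blobs from l with a connected-component filter:

import cv2
import numpy as np

def remove_small_dark_specks(l, darkness=0.5, min_pixels=20):
    """Reset dark blobs of l (values below `darkness`) that contain fewer than
    `min_pixels` pixels back to 1; both thresholds are assumptions to tune."""
    dark = (l < darkness).astype(np.uint8)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(dark)   # 8-connectivity by default
    out = l.copy()
    for i in range(1, n):                        # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] < min_pixels:
            out[labels == i] = 1.0               # erase the speck
    return out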

5- Multiply the strongly smoothed image s by the line image l. Where l is 1, nothing changes. Where l has lower values, s becomes darker. This effectively draws the lines on the image, and gives a nicer effect than the bitwise AND you are using.
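
The final composition is then just a per-pixel multiply; assuming s is an 8-bit color image and l is the float line image in [0, 1], a sketch could be:

import numpy as np

def compose(s, l):
    """Darken the smoothed color image s by the line image l (1 = no line)."""
    out = s.astype(np.float32) * l[..., np.newaxis]   # broadcast l over the channels
    return np.clip(out, 0, 255).astype(np.uint8)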

img = readim('teddybear.jpg');

% Simplify using diffusion
s1 = colordiffusion(img,2);
s = colordiffusion(gaussf(img),10);

% Find lines -- the positive response of the Laplace operator
l = laplace(s1,1.5);
l = tensorfun('immax',l);
l = stretch(clip(l,0.4,4),0,100,1,0);

% Remove short lines
l = pathopening(l,8,'closing','constrained');

% Step 5: paint lines on simplified image
out = s * l;

% Color diffusion:
function out = colordiffusion(out,iterations)
    sigma = 0.8;
    K = 10;
    for ii = 1:iterations
        grey = colorspace(out,'grey');
        nabla_out = gradientvector(grey,sigma);
        D = exp(-(norm(nabla_out)/K)^2);
        out = out + divergence(D * nabla_out);
    end
end
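
For completeness, a rough Python/SciPy translation of the colordiffusion helper might look like the sketch below; the grey conversion via a channel mean and the Gaussian-derivative divergence are approximations of the DIPimage calls, not exact equivalents:

import numpy as np
from scipy import ndimage

def color_diffusion(img, iterations, sigma=0.8, K=10.0):
    """Rough translation of the MATLAB colordiffusion helper above."""
    out = img.astype(np.float64)
    for _ in range(iterations):
        grey = out.mean(axis=2)                                   # approximates colorspace(out,'grey')
        gx = ndimage.gaussian_filter(grey, sigma, order=(0, 1))   # Gaussian derivative along x
        gy = ndimage.gaussian_filter(grey, sigma, order=(1, 0))   # Gaussian derivative along y
        D = np.exp(-((gx**2 + gy**2) / K**2))                     # small near edges -> edges preserved
        # divergence(D * nabla_out), added to every channel as in the MATLAB code
        div = (ndimage.gaussian_filter(D * gx, sigma, order=(0, 1)) +
               ndimage.gaussian_filter(D * gy, sigma, order=(1, 0)))
        out += div[..., np.newaxis]
    return out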

code output

The biggest difference from your “target” image is that the diffusion smooths the image but does not quantize the colors. Color quantization is what gives the areas of uniform color their cartoony look. One way to do this is k-means clustering of the image colors.
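
A minimal k-means color-quantization sketch with OpenCV might look like this (the number of clusters and the termination criteria are assumptions):

import cv2
import numpy as np

def quantize_colors(img, k=8):
    """Reduce img (uint8 BGR) to k representative colors with k-means."""
    samples = img.reshape(-1, 3).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, centers = cv2.kmeans(samples, k, None, criteria, 5,
                                    cv2.KMEANS_PP_CENTERS)
    quantized = centers[labels.flatten()].astype(np.uint8)
    return quantized.reshape(img.shape)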


Source: https://habr.com/ru/post/1275247/

