I don't have Python code; this is written in MATLAB (using DIPimage 3). But I think you can get some ideas from it. Here is what it does:
1- s1 is a slightly smoothed version of the image, and s is a more strongly smoothed version. s1 will be used to create the lines; s is the color basis of the output. For the smoothing I use a simple nonlinear diffusion, which preserves (and even enhances) edges. It is similar to a bilateral filter.
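If you want to try this step in Python, a sketch could use cv2.bilateralFilter as a stand-in for the diffusion (the iteration counts mirror the MATLAB calls below, but the filter parameters 9, 25, 5 are my guesses and will need tuning):

    import cv2

    img = cv2.imread('teddybear.jpg')     # same test image as the MATLAB code below
    s1 = img.copy()                       # lightly smoothed: 2 passes
    for _ in range(2):
        s1 = cv2.bilateralFilter(s1, 9, 25, 5)
    s = cv2.GaussianBlur(img, (0, 0), 1)  # the gaussf(img) before the heavy smoothing
    for _ in range(10):                   # strongly smoothed: 10 passes
        s = cv2.bilateralFilter(s, 9, 25, 5)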
2- Using s1, the slightly smoothed image, I apply the Laplace operator (this one uses Gaussian derivatives; the parameter 1.5 is the sigma of the Gaussian). This is similar to a difference of Gaussians. Your cv2.adaptiveThreshold call computes the equivalent of gaussf(img,2)-img. My Laplacian does something similar to gaussf(img,2)-gaussf(img,1) (a difference of Gaussians). That is, there is less fine detail in this output than in the one from cv2.adaptiveThreshold.
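Continuing the Python sketch, a difference of Gaussians can stand in for laplace(s1,1.5); the sigmas 1 and 2 follow the gaussf(img,2)-gaussf(img,1) comparison above, and s1 comes from the previous snippet:

    import cv2
    import numpy as np

    f = s1.astype(np.float32)
    # Heavier blur minus lighter blur: responds positively on dark lines.
    lap = cv2.GaussianBlur(f, (0, 0), 2) - cv2.GaussianBlur(f, (0, 0), 1)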
3- The Laplacian was applied to a color image, so it yields a color output. I convert this to gray value by taking the maximum over the color channels. Then I clip and stretch this, essentially doing the second half of what cv2.adaptiveThreshold does, except that the output is not binary but still gray-valued. That is, there are darker and lighter lines. More importantly, the lines do not look jagged, because there is a gradual change from dark to light along the edges of each line. I had to play with these parameters a bit to get a good result. l is now an image that is 1 where there will be no lines, and lower (darker) where there will be lines.
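In the Python sketch this step could look as follows; the clip bounds 0.4 and 4 are taken from the MATLAB code, but note that DIPimage's stretch() is percentile-based, so this simple min-max inversion is only an approximation:

    import numpy as np

    l = lap.max(axis=2)                # max over the color channels
    l = np.clip(l, 0.4, 4.0)
    l = 1.0 - (l - 0.4) / (4.0 - 0.4)  # 1 = no line, toward 0 = dark line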
4- Now I apply a path closing. This is a fairly specialized morphological operator; you may have to make some effort to find an implementation. It removes dark lines that are very short, which mostly eliminates the dot problem you were having. I am sure there are other ways to solve it.
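I am not aware of a ready-made path closing in the common Python imaging libraries. As a rough substitute (my suggestion, not what the MATLAB code uses), an area closing removes dark structures below a pixel-count threshold, which also suppresses dots and very short line fragments. The threshold 8 echoes the MATLAB length parameter, although path length and area are not the same measure:

    from skimage.morphology import area_closing

    l = area_closing(l, area_threshold=8)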
5- I multiply the strongly smoothed image s by the line image l. Where l is 1, nothing changes; where l has lower values, s becomes darker. This effectively draws the lines on the image, and is a nicer effect than the bitwise AND operator you were using.
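The final multiplication is straightforward in NumPy (s and l are from the earlier snippets; l broadcasts across the color channels):

    import numpy as np

    out = (s.astype(np.float32) * l[:, :, None]).clip(0, 255).astype(np.uint8)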
    img = readim('teddybear.jpg');
    % Simplify using diffusion
    s1 = colordiffusion(img,2);
    s = colordiffusion(gaussf(img),10);
    % Find lines -- the positive response of the Laplace operator
    l = laplace(s1,1.5);
    l = tensorfun('immax',l);
    l = stretch(clip(l,0.4,4),0,100,1,0);
    % Remove short lines
    l = pathopening(l,8,'closing','constrained');
    % Paint the lines on the simplified image
    out = s * l;

    % Color diffusion:
    function out = colordiffusion(out,iterations)
       sigma = 0.8;
       K = 10;
       for ii = 1:iterations
          grey = colorspace(out,'grey');
          nabla_out = gradientvector(grey,sigma);
          D = exp(-(norm(nabla_out)/K)^2);
          out = out + divergence(D * nabla_out);
       end
    end

The biggest difference with your "target" image is that the diffusion makes the image smooth, but it does not quantize the colors. Color quantization is what gives that cartoony look to areas of uniform color. One way to do this is k-means clustering of the colors (for example on the color histogram).
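If you want to add that step in Python, the standard OpenCV recipe is cv2.kmeans on the flattened pixel array (a sketch; K=8 clusters is an arbitrary choice, not a value from this answer):

    import cv2
    import numpy as np

    pixels = out.reshape(-1, 3).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
    _, labels, centers = cv2.kmeans(pixels, 8, None, criteria, 3,
                                    cv2.KMEANS_PP_CENTERS)
    quantized = centers[labels.flatten()].astype(np.uint8).reshape(out.shape)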