If the image has a resolution of 1 pixel per unit, how would you define the "edge" of a pixel? The concept of an "edge" only makes sense at a resolution higher than that of the pixels themselves, and contour cannot draw any edges if it works at the same resolution as the image itself.
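
To see this concretely, here is a minimal sketch (the small hand-made 4x4 array is only an illustration, not the image used below): calling contour directly on the low-resolution array only produces diagonal segments passing through the pixel centers, not the square outlines of the pixels.

import matplotlib.pyplot as plt
import numpy as np

# A tiny binary image at its native resolution (1 pixel per unit).
image = np.array([[0, 0, 1, 0],
                  [0, 1, 1, 0],
                  [0, 0, 1, 1],
                  [0, 0, 0, 0]])

plt.imshow(image, interpolation="none", cmap="Blues")
# contour interpolates between pixel centers, so the 0.5 level shows up as
# diagonal line segments rather than the square outlines of the pixels.
plt.contour(image, [0.5], colors="r")
plt.show()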
On the other hand, you can of course increase the resolution so that the concept of an "edge" makes sense. If we increase the resolution by a factor of 100, for example, we can easily draw the edges with a contour plot.
import matplotlib.pyplot as plt
import numpy as np

# Build a small random binary image: four 2x6 blocks stacked vertically,
# mirrored horizontally, then padded with a border of zeros.
k = []
for s in [2103, 1936, 2247, 2987]:
    np.random.seed(s)
    k.append(np.random.randint(0, 2, size=(2, 6)))

arr = np.hstack([np.vstack(k)[:, :-1], np.vstack(k).T[::-1].T])
image = np.zeros(shape=(arr.shape[0] + 2, arr.shape[1] + 2))
image[1:-1, 1:-1] = arr

# Nearest-neighbour lookup of the image value at coordinates (x, y).
f = lambda x, y: image[int(y), int(x)]
g = np.vectorize(f)

# Sample the image on a grid that is 100 times finer than the pixel grid.
x = np.linspace(0, image.shape[1], image.shape[1] * 100)
y = np.linspace(0, image.shape[0], image.shape[0] * 100)
X, Y = np.meshgrid(x[:-1], y[:-1])
Z = g(X[:-1], Y[:-1])

# Flipping the image and using origin="lower" keeps the original orientation
# while making y increase upward. The contour extent is shifted by 0.5 so the
# 0.5-level line lines up with imshow's pixel-centered coordinates.
plt.imshow(image[::-1], origin="lower", interpolation="none", cmap="Blues")
plt.contour(Z[::-1], [0.5], colors='r', linewidths=[3],
            extent=[0 - 0.5, x[:-1].max() - 0.5, 0 - 0.5, y[:-1].max() - 0.5])
plt.show()

For comparison, we can also draw the image itself in the same plot using imshow.
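
The only subtle point when combining the two is coordinate alignment: imshow places pixel centers at integer coordinates (its default extent runs from -0.5 to N-0.5), which is why the contour call above shifts its extent by 0.5. The same upsampling can also be done without np.vectorize; the following is just a sketch of that idea using np.repeat on another made-up 4x4 array, not the code above.

import matplotlib.pyplot as plt
import numpy as np

# Hypothetical small binary image (illustration only).
image = np.array([[0, 1, 1, 0],
                  [0, 0, 1, 0],
                  [0, 1, 0, 0],
                  [0, 0, 0, 0]], dtype=float)

ny, nx = image.shape
# Nearest-neighbour upsampling: every pixel becomes a 100x100 block.
Z = np.repeat(np.repeat(image, 100, axis=0), 100, axis=1)

plt.imshow(image, interpolation="none", cmap="Blues")
# extent is shifted by 0.5 so the fine grid lines up with imshow's
# pixel-centered coordinates (pixel (i, j) spans j-0.5..j+0.5 and i-0.5..i+0.5).
plt.contour(Z, [0.5], colors="r", linewidths=2,
            extent=[-0.5, nx - 0.5, -0.5, ny - 0.5])
plt.show()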