I don't understand how someone could come up with a simple 3x3 matrix, called a kernel, such that when applied to an image it has some striking effect. Examples: http://en.wikipedia.org/wiki/Kernel_(image_processing) .
If you want to dig into the history, you will need to look up some other terms. In older image processing textbooks, what we call kernels today will most likely be called operators. Another key term is convolution. Both of these terms hint at the mathematical basis of kernels.
http://en.wikipedia.org/wiki/Convolution
You can read about mathematical convolution in the textbook Computer Vision by Ballard and Brown. The book dates back to the early 80s, but it is still very useful, and you can read it for free online:
http://homepages.inf.ed.ac.uk/rbf/BOOKS/BANDB/toc.htm
From the table of contents in Ballard and Brown, you will find a PDF link for section 2.2.4 Spatial Properties.
http://homepages.inf.ed.ac.uk/rbf/BOOKS/BANDB/LIB/bandb2_2.pdf
In the PDF, scroll down to the section "The Convolution Theorem". It provides the mathematical background for convolution. It is a relatively short step from thinking about convolution expressed as functions and integrals to applying the same principles to the discrete world of grayscale (or color) data in 2D images.
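To make the discrete version concrete, here is a minimal sketch of 2D convolution in Python (NumPy assumed; the function name and test image are my own, not from any particular library):

```python
import numpy as np

def convolve2d(image, kernel):
    """Discrete 2D convolution: flip the kernel, then slide it over the image.
    A minimal sketch with no padding, so the output shrinks by kernel_size - 1."""
    k = np.flipud(np.fliplr(kernel))  # convolution flips the kernel both ways
    kh, kw = k.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            # each output pixel is a weighted sum of the neighbourhood
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * k)
    return out

# A box-blur kernel averages each pixel with its 8 neighbours.
image = np.arange(25, dtype=float).reshape(5, 5)
box = np.ones((3, 3)) / 9.0
print(convolve2d(image, box))
```

Real code would use a library routine (e.g. `scipy.ndimage.convolve`), which also handles image borders; this loop version just shows the mechanics.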
You will notice that a number of kernels/operators are associated with names: Sobel, Prewitt, Laplacian, Gaussian, and so on. These names hint that there is a history - a rather long history - of mathematical development and image processing research that led to the sizable number of kernels in common use today.
Gauss and Laplace lived long before us, but their mathematical work trickled down into forms we can use in image processing. They did not work on image processing kernels themselves, but the mathematics they developed is directly applicable and commonly used in image processing. Other kernels were developed specifically for image processing.
The Prewitt operator (kernel), which is very similar to the Sobel operator, was published in 1970 if Wikipedia is correct.
http://en.wikipedia.org/wiki/Prewitt_operator
Why does it work?
Read about the mathematics of convolution to understand how one function can be "swept" or "dragged" across another. That should clarify the theoretical basis.
Then there is the question of why individual kernels work. For that, consider an edge transition from dark to light in an image. If you plot pixel brightness on a 2D scatter diagram, you will notice that the values along the Y axis rise rapidly around the edge transition. That edge transition is a slope. A slope can be found using the first derivative. Ta-dah! A kernel that approximates a first-derivative operator will find edges.
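That dark-to-light slope idea can be sketched numerically. A minimal illustration assuming NumPy; the pixel values are invented for the example:

```python
import numpy as np

# A row of pixels with a dark-to-light edge in the middle.
row = np.array([10, 10, 10, 200, 200, 200], dtype=float)

# The kernel [-1, 0, 1] approximates the first derivative (central difference):
# flat regions respond with 0, the edge with a large value.
kernel = np.array([-1.0, 0.0, 1.0])

# Slide the kernel along the row (correlation, i.e. no flip needed here
# since we only care about where the response is large).
response = np.array([np.dot(row[i:i + 3], kernel) for i in range(len(row) - 2)])
print(response)  # large only where the intensity jumps: 0, 190, 190, 0
```

The Sobel and Prewitt kernels are 2D versions of the same idea: a derivative in one direction combined with smoothing in the other.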
If you know that Gaussian blur exists in optics, you might wonder how it could be applied to a discrete 2D image. Hence the derivation of the Gaussian kernel.
The Laplacian, for example, is an operator that, according to the first sentence of its Wikipedia entry, is "a differential operator given by the divergence of the gradient of a function on Euclidean space."
http://en.wikipedia.org/wiki/Laplacian
Whew. It is quite a leap from that definition to a kernel. The following page gives a nice explanation of the relationship between derivatives and kernels, and it is a quick read:
http://www.aishack.in/2011/04/the-sobel-and-laplacian-edge-detectors/
You will also see that one form of the Laplacian kernel is simply called an "edge detection" kernel in the Wikipedia entry you linked.
There is more than one edge kernel, and each has its place. The Laplacian, Sobel, Prewitt, Kirsch, and Roberts kernels give different results and suit different purposes.
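As a sketch of how a differential operator becomes a kernel: the 1D second difference is [1, -2, 1], and summing it in the x and y directions yields the familiar 4-neighbour Laplacian kernel (NumPy assumed):

```python
import numpy as np

# Discrete second derivative along x, as a 3x3 kernel.
d2x = np.array([[0,  0,  0],
                [1, -2,  1],
                [0,  0,  0]], dtype=float)
d2y = d2x.T  # same thing along y

# The Laplacian is d2/dx2 + d2/dy2, so the kernels simply add.
laplacian = d2x + d2y
print(laplacian)
# [[ 0.  1.  0.]
#  [ 1. -4.  1.]
#  [ 0.  1.  0.]]
```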
How did people come up with these kernels (trial and error?)?
Kernels were developed by different people across a number of research areas.
Some kernels (as I recall) were designed specifically to model "early vision". Early vision is not something that happens only to early people, or only to people who get up at 4 a.m.; rather, it refers to the low-level processes of biological vision: sensing basic color, intensity, edges, and the like. At a very low level, edge detection in biological vision can be modeled with kernels.
Other kernels, such as the Laplacian and Gaussian, are approximations of mathematical functions. With a little effort, you can derive these kernels yourself.
Image processing libraries and software packages often let you define your own kernel. For example, if you want to find a shape in an image that is small enough to be defined by a few connected pixels, you can define a kernel that matches the shape of the feature you want to detect. Using custom kernels to detect objects is too crude to work in most real applications, but sometimes there are reasons to create a kernel for a very specific purpose, and sometimes a little trial and error is required to find a good one.
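Here is a toy, hypothetical example of such a custom kernel - the "plus" shape and all the names are my own invention for illustration, not a production technique:

```python
import numpy as np

# A custom kernel matched to a small "plus" shape.
plus = np.array([[0, 1, 0],
                 [1, 1, 1],
                 [0, 1, 0]], dtype=float)
kernel = plus - plus.mean()  # zero-mean, so flat regions respond with 0

# A 5x5 image containing one plus shape centred at (2, 2).
image = np.zeros((5, 5))
image[1:4, 2] = 1.0
image[2, 1:4] = 1.0

# Slide the kernel over the image (correlation, no flip) and find the peak.
out = np.zeros((3, 3))
for y in range(3):
    for x in range(3):
        out[y, x] = np.sum(image[y:y + 3, x:x + 3] * kernel)

y0, x0 = np.unravel_index(np.argmax(out), out.shape)
print(int(y0), int(x0))  # the response peaks where the shape sits
```

This is essentially a matched filter; in practice, template matching and more robust detectors are preferred, which is why custom detection kernels are rare in real applications.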
As user templatetypedef pointed out, you can reason about kernels intuitively and, in fairly short order, figure out what each one does.
Is it possible to prove that it will always work for all images?
Functionally, you can apply a 3x3, 5x5, or NxN kernel to an image of suitable size, and it will "work" in the sense that the operation completes and yields some result. But the mere ability to compute a result, useful or not, is not a great definition of "work".
One informal definition of whether a kernel "works" is whether convolving an image with that kernel produces a result you find useful. If you are manipulating images in Photoshop or GIMP and find that a particular sharpening kernel does not give you what you want, then you could say that kernel does not work in the context of your particular image and the end result you have in mind. There is a similar problem in image processing for computer vision: we must choose one or more kernels and other (often non-kernel-based) algorithms that act in sequence to do something useful, such as identify faces, measure the speed of cars, or guide robots in assembly tasks.
Homework
If you want to understand how a mathematical concept gets turned into a kernel, it helps to derive a kernel yourself. Even if you already know what the end result should be, deriving the kernel from a mathematical function on your own, on paper and (preferably) from memory, helps cement the connection between kernels and convolution.
Try to derive a 3x3 Gaussian kernel from a mathematical function.
http://en.wikipedia.org/wiki/Gaussian_function
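If you want to check your homework: one common way to derive the kernel is to sample the 2D Gaussian on an integer grid and normalise so the weights sum to 1. A minimal sketch assuming NumPy (the helper name is mine):

```python
import numpy as np

def gaussian_kernel(size=3, sigma=1.0):
    """Sample exp(-(x^2 + y^2) / (2*sigma^2)) on an integer grid centred
    at 0, then normalise so the weights sum to 1."""
    ax = np.arange(size) - (size - 1) / 2.0   # e.g. [-1, 0, 1] for size 3
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return g / g.sum()

k = gaussian_kernel(3, sigma=1.0)
print(np.round(k, 3))  # largest weight in the centre, symmetric all around
```

Note that the popular integer versions (such as 1/16 * [[1,2,1],[2,4,2],[1,2,1]]) are rounded approximations of this sampled kernel for particular choices of sigma.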
Deriving the kernel yourself, or at least finding an online tutorial and working through it carefully, will be quite revealing. If you would rather not do the work, you may not fully appreciate how a mathematical expression gets "translated" into a bunch of numbers in a 3x3 matrix. But that's okay! If you grasp the general meaning of a few common kernels, that is useful, and once you notice how two similar kernels produce slightly different results, you will have a good feel for them.