How to detect different layers in an image in Objective-C

I am working on an application in which I have an image and I want to detect its distinct regions. For example, suppose I have an interior photo of a room containing walls, a sofa, a carpet, chairs, a bed, etc. I want to detect each of these objects — sofa, bed, chairs, walls — as a separate region so that I can color each one individually.

Please help me.

+4
2 answers

To do this, you have to dig into image processing:

  • A good image processing library to start with: GPUImage by Brad Larson.

  • You will need to learn about edge detection and extract the edge points it produces.

  • Once you have those points, you will need to group the edges into individual closed regions. For this, look into techniques for handling convex and concave shapes (e.g. convex hulls and contour tracing).

  • Once shapes have been detected in the image, you can recolor the underlying image regions enclosed by those shapes.
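The edge-detection step in the list above can be sketched in a language-agnostic way. The following pure-Python example applies a Sobel operator to a tiny synthetic grayscale image; the image data and threshold are invented for illustration, and in an actual Objective-C app you would use one of GPUImage's GPU-accelerated edge-detection filters instead.

```python
# Minimal Sobel edge detection on a tiny grayscale image
# (illustrative sketch only; an iOS app would use a GPUImage edge filter).

def sobel_edges(img, threshold=128):
    """Return a binary edge map: 1 where gradient magnitude exceeds threshold."""
    h, w = len(img), len(img[0])
    gx_k = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal gradient kernel
    gy_k = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical gradient kernel
    edges = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(gx_k[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(gy_k[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            if (gx * gx + gy * gy) ** 0.5 > threshold:
                edges[y][x] = 1
    return edges

# Synthetic 6x6 image: a bright "object" on a dark background.
image = [
    [0, 0,   0,   0,   0, 0],
    [0, 0,   0,   0,   0, 0],
    [0, 0, 255, 255,   0, 0],
    [0, 0, 255, 255,   0, 0],
    [0, 0,   0,   0,   0, 0],
    [0, 0,   0,   0,   0, 0],
]
edge_map = sobel_edges(image)
```

The resulting edge map marks the pixels along the object's boundary; those points are what the subsequent shape-grouping step would consume.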

Keep in mind, though, that this will only give an approximate result: objects in the image may be under varying lighting conditions, and edge detection can then fail to capture an object's true boundary.

+3

No. It is not possible to write software that detects a "chair" or a "table" in a photograph. There are no layers here; the image contains only pixels. You could write a tool similar to Photoshop's magic wand, which selects adjacent pixels of similar color, but it doesn't really work automatically, because the computer cannot make decisions the way your mind does when it "sees" a chair.
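A magic-wand style selection like the one this answer describes can be sketched as a flood fill: grow a region outward from a seed pixel, collecting neighbors whose value is within a tolerance of the seed's value. The image data and tolerance below are invented for illustration.

```python
from collections import deque

def magic_wand(img, seed, tolerance=10):
    """Flood fill: return the set of (y, x) coordinates connected to `seed`
    whose value is within `tolerance` of the seed pixel's value."""
    h, w = len(img), len(img[0])
    sy, sx = seed
    target = img[sy][sx]
    selected = {(sy, sx)}
    queue = deque([(sy, sx)])
    while queue:
        y, x = queue.popleft()
        # Examine the 4-connected neighbors of the current pixel.
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and (ny, nx) not in selected
                    and abs(img[ny][nx] - target) <= tolerance):
                selected.add((ny, nx))
                queue.append((ny, nx))
    return selected

# Grayscale image with two distinct regions (dark left, bright right).
image = [
    [10, 12, 11, 200, 201],
    [11, 10, 12, 199, 200],
    [12, 11, 10, 200, 202],
]
region = magic_wand(image, (0, 0), tolerance=5)
```

Seeded in the dark area, the fill selects only the nine connected dark pixels and stops at the bright region — which illustrates the answer's point: this groups pixels by color similarity, not by what object they belong to.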

+2

Source: https://habr.com/ru/post/1497735/

