Semantic segmentation with CNN encoder-decoder

Apologies in advance if I misuse any technical terms. I am working on a semantic segmentation project with CNNs, trying to implement an encoder-decoder architecture so that the output is the same size as the input.

How should the labels be created? What loss function should be used, especially in a situation of severe class imbalance (where the ratio between classes also varies from image to image)?

The problem involves two classes (objects of interest and background). I am using Keras with a TensorFlow backend.

So far, I plan for the expected outputs to have the same dimensions as the inputs, with pixel-wise labels. The final layer of the model has either softmax activation (for the 2-class case) or sigmoid activation (to express the probability that each pixel belongs to the object class). I am having trouble designing an appropriate objective function for such a task, of the form:

    function(y_true, y_pred)

in agreement with Keras (which passes the ground truth first and the prediction second).
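
For concreteness, here is a minimal sketch of that signature (the function name is just a placeholder; it is a plain pixel-wise binary cross-entropy written with backend ops). A plain loss like this ignores the class imbalance, which is exactly my problem:

    from keras import backend as K

    def pixelwise_binary_crossentropy(y_true, y_pred):
        # y_true, y_pred: tensors of shape (batch, height, width, 1)
        # clip to avoid log(0)
        eps = K.epsilon()
        y_pred = K.clip(y_pred, eps, 1.0 - eps)
        return -K.mean(y_true * K.log(y_pred) +
                       (1.0 - y_true) * K.log(1.0 - y_pred))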

Please also clarify the dimensions of the tensors involved (model input/output). Any thoughts and suggestions are greatly appreciated. Thanks!

Answers:

If you are using Keras with the TensorFlow backend, the output layer and the loss can be set up like this:

output = Convolution2D(number_of_classes, # 1 for binary case
                       filter_height,
                       filter_width,
                       activation = "softmax")(input_to_output) # or "sigmoid" for binary
... 
model.compile(loss = "categorical_crossentropy", ...) # or "binary_crossentropy" for binary

If your targets are 2D maps of shape (image_height, image_width) containing integer class indices rather than one-hot vectors, use sparse_categorical_crossentropy instead.
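
A quick sketch of the two target layouts (batch size and image size here are arbitrary assumptions):

    import numpy as np

    batch, H, W, num_classes = 4, 128, 128, 2   # assumed sizes

    # categorical_crossentropy: one-hot targets, shape (batch, H, W, num_classes)
    y_onehot = np.eye(num_classes)[np.random.randint(0, num_classes, (batch, H, W))]

    # sparse_categorical_crossentropy: integer labels,
    # shape (batch, H, W) or (batch, H, W, 1) depending on the Keras version
    y_sparse = np.random.randint(0, num_classes, (batch, H, W, 1))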

As for the class imbalance (which, as you note, varies from image to image), see this related question.
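
The link is not reproduced here, but one common remedy, given that the ratio changes per image, is to weight each class by its inverse frequency in the current batch. This is my own sketch, not the linked answer:

    from keras import backend as K

    def balanced_binary_crossentropy(y_true, y_pred):
        # Weight foreground/background by inverse frequency computed from the
        # current batch, so images with few object pixels still give a strong signal.
        eps = K.epsilon()
        y_pred = K.clip(y_pred, eps, 1.0 - eps)
        pos = K.clip(K.mean(y_true), eps, 1.0 - eps)   # fraction of foreground pixels
        loss = -(y_true * K.log(y_pred) / pos +
                 (1.0 - y_true) * K.log(1.0 - y_pred) / (1.0 - pos))
        return K.mean(loss)

    # model.compile(optimizer="adam", loss=balanced_binary_crossentropy)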


A second answer gives two options for shaping the output:

  • "":

    # assuming a channels-first feature map of shape (NUM_CLASSES, HEIGHT, WIDTH)
    model.add(Reshape((NUM_CLASSES, HEIGHT * WIDTH)))
    model.add(Permute((2, 1)))         # now (HEIGHT*WIDTH, NUM_CLASSES)
    model.add(Activation("softmax"))   # softmax over the class axis, per pixel
    # targets must be reshaped to (HEIGHT*WIDTH, NUM_CLASSES) to match
    
  • Keep the output spatial (see the sketch after this list):

    In this case, your last layers should upsample / unpool / deconvolve back to HEIGHT x WIDTH x NUM_CLASSES, so the output has the shape (HEIGHT, WIDTH, NUM_CLASSES).
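
A sketch of such a decoder tail, written in the same Keras 1-style API as the first answer (the feature-map size, filter counts, and NUM_CLASSES below are assumptions):

    from keras.layers import Input, UpSampling2D, Convolution2D
    from keras.models import Model

    NUM_CLASSES = 2                       # assumed binary problem
    feat = Input(shape=(64, 64, 256))     # assumed encoder output at 1/4 resolution

    x = UpSampling2D(size=(2, 2))(feat)                                    # 128 x 128
    x = Convolution2D(64, 3, 3, activation="relu", border_mode="same")(x)
    x = UpSampling2D(size=(2, 2))(x)                                       # 256 x 256
    # one probability per class per pixel: (batch, HEIGHT, WIDTH, NUM_CLASSES)
    out = Convolution2D(NUM_CLASSES, 1, 1, activation="softmax", border_mode="same")(x)

    decoder = Model(input=feat, output=out)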


Source: https://habr.com/ru/post/1669052/

