When people solve semantic segmentation with a CNN, they usually train with a softmax cross-entropy loss (see Fully Convolutional Networks by Long et al.). But when it comes to comparing the performance of different approaches, measures such as intersection-over-union (IoU) are reported.
My question is: why don't people train directly on the measure they want to optimize? It seems strange to me to train on one measure but evaluate on a different one at test time.
I do see that IoU has a problem for training samples where a class is not present (union = 0 and intersection = 0, so we divide zero by zero). But if I can ensure that every ground-truth sample contains all classes, is there another reason not to use this measure?
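To make the 0/0 case concrete, here is a minimal sketch (plain NumPy; the function name `per_class_iou` is mine, not from any particular library) of the usual hard IoU evaluation. When a class appears in neither the prediction nor the ground truth, the union is zero and the ratio is undefined:

```python
import numpy as np

def per_class_iou(pred_labels, true_labels, num_classes):
    """Hard IoU per class from integer label maps of the same shape.

    If a class is absent from both the prediction and the ground truth,
    union == 0 and IoU would be 0/0; here that case is reported as NaN
    so it can be skipped (or smoothed with a small epsilon) when averaging.
    """
    ious = []
    for c in range(num_classes):
        pred_c = (pred_labels == c)
        true_c = (true_labels == c)
        intersection = np.logical_and(pred_c, true_c).sum()
        union = np.logical_or(pred_c, true_c).sum()
        ious.append(intersection / union if union > 0 else float("nan"))
    return ious

# Example: mean IoU over classes, ignoring absent classes
# miou = np.nanmean(per_class_iou(pred, gt, num_classes=21))
```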
Check out this article, where they come up with a way to make the IoU concept differentiable. I implemented their solution with amazing results!
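The details are in that article, but roughly the trick is to replace the hard 0/1 masks with predicted probabilities, so that intersection and union become smooth sums and the whole expression has a gradient. Here is a minimal PyTorch sketch of that general idea (the names `soft_iou_loss` and `eps` are my own, not taken from the paper):

```python
import torch

def soft_iou_loss(logits, target_onehot, eps=1e-6):
    """Differentiable (soft) IoU loss, a sketch of the general idea.

    logits:        (N, C, H, W) raw network outputs
    target_onehot: (N, C, H, W) one-hot ground-truth masks
    Probabilities replace hard {0, 1} masks, so intersection and union
    become sums of products and the loss is differentiable end to end.
    """
    probs = torch.softmax(logits, dim=1)
    dims = (0, 2, 3)
    intersection = (probs * target_onehot).sum(dims)
    union = (probs + target_onehot - probs * target_onehot).sum(dims)
    iou = (intersection + eps) / (union + eps)  # per-class soft IoU
    return 1.0 - iou.mean()                     # minimize 1 - mean IoU
```

The epsilon also conveniently handles the degenerate case from the question: if a class is absent from both the prediction and the target, the ratio becomes eps/eps = 1 instead of 0/0.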
: " -, ?". - , . (, , ). (softmax crossentropy) . , -, , , , , , - (, , , ).
.
, , IoU, . , , .
One consideration along these lines, which the earlier comments allude to, is that the loss needs to be differentiable. I believe there is nothing about the IoU measure itself that gives the network a gradient to say: "Hey, I'm not quite there, but maybe I should move my bounding box a little to the left!"

Just anecdotal information, but I hope this helps.
Source: https://habr.com/ru/post/1660123/