The TensorFlow function tf.nn.weighted_cross_entropy_with_logits() accepts an argument pos_weight. The documentation describes pos_weight as "a coefficient to use on the positive examples." I take this to mean that increasing pos_weight increases the loss from false positives and decreases the loss from false negatives. Or do I have it backwards?
Actually, it's the other way around. From the documentation: the argument pos_weight is used as a multiplier for the positive targets.

So, for example, if misclassifying a positive example costs 5 and misclassifying a negative example costs 7, then with pos_weight=2 those losses become 10 and 7 respectively.

Misclassified positive examples are false negatives, so increasing pos_weight raises the penalty for false negatives relative to false positives. Conversely, a pos_weight between 0 and 1 shrinks the loss on positive examples, weighting false positives more heavily.
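To see this concretely, here is a minimal NumPy sketch of the formula given in the TensorFlow docs, labels * -log(sigmoid(logits)) * pos_weight + (1 - labels) * -log(1 - sigmoid(logits)); the helper name weighted_ce is my own, not part of any library:

```python
import numpy as np

def weighted_ce(labels, logits, pos_weight):
    # Reimplements the documented formula for
    # tf.nn.weighted_cross_entropy_with_logits (sketch, not the TF source):
    #   labels * -log(sigmoid(logits)) * pos_weight
    #     + (1 - labels) * -log(1 - sigmoid(logits))
    log_sig = -np.logaddexp(0.0, -logits)            # log(sigmoid(x)), stable
    log_one_minus_sig = -np.logaddexp(0.0, logits)   # log(1 - sigmoid(x)), stable
    return labels * -log_sig * pos_weight + (1 - labels) * -log_one_minus_sig

# A false negative: positive label, confidently negative prediction.
fn_w1 = weighted_ce(labels=1.0, logits=-2.0, pos_weight=1.0)
fn_w2 = weighted_ce(labels=1.0, logits=-2.0, pos_weight=2.0)

# A false positive: negative label, confidently positive prediction.
fp_w1 = weighted_ce(labels=0.0, logits=2.0, pos_weight=1.0)
fp_w2 = weighted_ce(labels=0.0, logits=2.0, pos_weight=2.0)

# Doubling pos_weight doubles the false-negative loss;
# the false-positive loss does not change.
print(fn_w2 / fn_w1)  # 2.0
print(fp_w2 - fp_w1)  # 0.0
```

Only the first (positive-label) term carries the multiplier, which is why pos_weight shifts the trade-off toward recall rather than precision.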
Source: https://habr.com/ru/post/1661316/