I am training my model and got the result shown below. Is this a good learning rate? If not, is it too high or too low? This is my solver configuration:
lr_policy: "step" gamma: 0.1 stepsize: 10000 power: 0.75 # lr for unnormalized softmax base_lr: 0.001 # high momentum momentum: 0.99 # no gradient accumulation iter_size: 1 max_iter: 100000 weight_decay: 0.0005 snapshot: 4000 snapshot_prefix: "snapshot/train" type:"Adam"
This is a link
With a low learning rate, improvements in the loss will be roughly linear. With a high learning rate they will start to look more exponential. Higher learning rates decay the loss faster, but they get stuck at worse loss values.
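As a rough illustration only (a toy quadratic loss minimized with plain gradient descent, not your network or solver), the following Python sketch shows the same pattern; the function name toy_loss_curve and the three learning rates are purely illustrative:

# Toy illustration: gradient descent on f(w) = w^2 with different learning rates.
# A small rate decreases the loss slowly, a moderate rate decays it quickly,
# and a rate that is too large oscillates and never improves the loss.

def toy_loss_curve(lr, steps=50, w0=10.0):
    w = w0
    losses = []
    for _ in range(steps):
        grad = 2.0 * w        # gradient of f(w) = w^2
        w = w - lr * grad     # plain gradient descent update
        losses.append(w * w)
    return losses

for lr in (0.01, 0.3, 1.0):   # low, moderate, too high (for this toy problem)
    print(f"lr={lr}: final loss after 50 steps = {toy_loss_curve(lr)[-1]:.6f}")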
Try lowering base_lr, for example to 0.0005 or 0.0001, and compare the resulting loss curves. Also keep in mind that with lr_policy "step", gamma 0.1 and stepsize 10000, the learning rate is multiplied by 0.1 every 10,000 iterations, so over the 100,000 iterations of max_iter it gets cut by a factor of ten ten times and ends up effectively at zero.
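For reference, Caffe's "step" policy computes the effective rate as base_lr * gamma^floor(iter / stepsize); a short Python sketch using the values from the solver above (assuming the standard "step" formula) shows how quickly the rate collapses:

# Effective learning rate under Caffe's "step" policy:
#   lr(iter) = base_lr * gamma ** floor(iter / stepsize)
base_lr, gamma, stepsize, max_iter = 0.001, 0.1, 10000, 100000

for it in range(0, max_iter + 1, stepsize):
    lr = base_lr * gamma ** (it // stepsize)
    print(f"iter {it:>6}: lr = {lr:.1e}")

# By iteration 100000 the rate has been multiplied by 0.1 ten times,
# i.e. about 1e-13 -- effectively zero well before training ends.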
Source: https://habr.com/ru/post/1015954/