I am trying to understand the purpose of a callback in Keras: ReduceLROnPlateau().
I understand that this callback reduces the learning rate when there is no improvement in the monitored loss. But won't the network then fail to get out of a local minimum? What if the network sits in a local minimum for about 5 epochs — this callback will reduce the learning rate even further, whereas increasing the learning rate might actually help the network escape such a local minimum.
In other words, how does it know whether it has reached a local minimum or a plateau?
As the CS231n course notes explain, when training deep networks it is usually helpful to anneal the learning rate over time. The intuition: with a high learning rate the system has too much "kinetic energy" — the parameter vector bounces around chaotically and cannot settle into the deeper but narrower parts of the loss function. Decay it too quickly, though, and the system cools too fast, getting stuck far from the best position it could have reached.
As for local minima: on the high-dimensional loss surfaces of modern networks, genuinely bad local minima are rare. What the optimizer actually stalls on are plateaus and saddle points, i.e. wide flat regions where the gradient is close to zero. Increasing the learning rate there does not help much, while decreasing it lets the optimizer take finer steps and keep making slow progress.
The callback itself does not try to distinguish a local minimum from a plateau — it cannot see the shape of the loss surface at all. It only watches the monitored metric: if there has been no improvement for `patience` epochs (10 by default), it multiplies the learning rate by `factor`. From its point of view, a "local minimum" and a "plateau" are the same event, and ReduceLROnPlateau simply assumes that finer steps are the right reaction.
ReduceLROnPlateau
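To make the mechanism concrete, here is a minimal pure-Python sketch of the plateau-detection logic described above (this is an illustration of the idea, not the actual Keras implementation; the parameter names `factor`, `patience`, `min_delta`, and `min_lr` mirror the Keras callback's arguments):

```python
class PlateauLRScheduler:
    """Illustrative sketch of ReduceLROnPlateau-style logic:
    track the best monitored loss, count epochs without improvement,
    and multiply the learning rate by `factor` once `patience` is exceeded."""

    def __init__(self, lr=0.1, factor=0.1, patience=10,
                 min_delta=1e-4, min_lr=1e-6):
        self.lr = lr
        self.factor = factor        # multiply LR by this on a plateau
        self.patience = patience    # epochs to wait without improvement
        self.min_delta = min_delta  # how much smaller counts as "improved"
        self.min_lr = min_lr        # never decay below this
        self.best = float("inf")
        self.wait = 0

    def step(self, monitored_loss):
        if monitored_loss < self.best - self.min_delta:
            # improvement: remember it and reset the stall counter
            self.best = monitored_loss
            self.wait = 0
        else:
            self.wait += 1
            if self.wait >= self.patience:
                # plateau detected: shrink the learning rate
                self.lr = max(self.lr * self.factor, self.min_lr)
                self.wait = 0
        return self.lr


# Usage: loss improves twice, then stalls for 3 epochs -> LR drops by `factor`
sched = PlateauLRScheduler(lr=0.1, factor=0.1, patience=3)
for loss in [1.0, 0.9, 0.9, 0.9, 0.9]:
    lr = sched.step(loss)
print(lr)  # reduced from 0.1 to ~0.01 after three stalled epochs
```

Note that the logic never inspects gradients or curvature — only the scalar metric history — which is exactly why it cannot tell a local minimum apart from a plateau.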
Source: https://habr.com/ru/post/1693935/