I have a small network. I trained it for many hours and saved it at a checkpoint. Now I want to restore it from that checkpoint in another script and use it. I recreate the session: I build the entire network, so all the ops are created again with the same code I used before training. This code seeds TF's random number generator with time.time(), which is different on every run.
Then I restore from the checkpoint. When I run the network, I get different numbers (small but significant differences) every time I run the restored network, even though the input is definitely fixed. If I fix the random seed to some constant value, the non-deterministic behavior disappears.
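For concreteness, here is a stripped-down sketch of what my restore script does (TF1-style code; the layer sizes, placeholder shapes, and checkpoint path are stand-ins for my actual code, not the real values):

```python
import time
import numpy as np
import tensorflow as tf

# Seed taken from the clock, so it differs on every run of this script.
tf.set_random_seed(int(time.time()))

# Rebuild the same graph that was used during training (placeholder example).
inputs = tf.placeholder(tf.float32, shape=[None, 10])
hidden = tf.layers.dense(inputs, 32, activation=tf.nn.relu)
output = tf.layers.dense(hidden, 1)

saver = tf.train.Saver()
fixed_batch = np.ones([4, 10], dtype=np.float32)  # the input is fixed across runs

with tf.Session() as sess:
    saver.restore(sess, "./model.ckpt")  # restore the trained weights
    # Same fixed input, yet the printed numbers differ slightly between runs
    # unless the seed above is fixed to a constant.
    print(sess.run(output, feed_dict={inputs: fixed_batch}))
```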
I am puzzled because I thought that restoring (all variables were saved, so I assume the entire graph is checkpointed) would eliminate all random behavior from the net: initialization and so on are overridden by the restored checkpoint, and this is just a forward run.
Is this possible? Does it make sense? Is there a way to find out which variables or factors in my graph are not set by the restored checkpoint?
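To frame what I'm asking for: the kind of check I have in mind would compare the variables in my rebuilt graph against the keys actually stored in the checkpoint, something like the sketch below using TF1's checkpoint reader (the path is a placeholder, and the graph must already be built before calling tf.global_variables()):

```python
import tensorflow as tf

# Read the variable names stored in the checkpoint file.
reader = tf.train.NewCheckpointReader("./model.ckpt")
ckpt_vars = set(reader.get_variable_to_shape_map().keys())

# Variable names in the graph as rebuilt by my script
# (v.op.name drops the ":0" output suffix so the names are comparable).
graph_vars = set(v.op.name for v in tf.global_variables())

print("in graph but not in checkpoint:", graph_vars - ckpt_vars)
print("in checkpoint but not in graph:", ckpt_vars - graph_vars)
```

Would a mismatch reported this way account for the behavior I'm seeing, or can non-determinism come from graph ops that aren't variables at all?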