Canned estimators in TensorFlow 1.0 (LinearClassifier, DNNClassifier, etc.) implement the Trainable interface, which defines:
fit(
    x=None,
    y=None,
    input_fn=None,
    steps=None,
    batch_size=None,
    monitors=None,
    max_steps=None
)
and describes the steps argument as follows:
Number of steps for which to train model. If None, train forever. 'steps' works incrementally. If you call two times fit(steps=10) then training occurs in total 20 steps. If you don't want to have incremental behavior please set max_steps instead. If set, max_steps must be None.
I do not understand what this means in practice.
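If I take that paragraph literally, the behavior would be the following (a sketch of my reading, using the m and train_input_fn defined just below; please correct me if this is wrong):

m.fit(input_fn=train_input_fn, steps=10)      # global step goes 0 -> 10
m.fit(input_fn=train_input_fn, steps=10)      # incremental: 10 -> 20

# With max_steps the target is absolute rather than incremental:
m.fit(input_fn=train_input_fn, max_steps=10)  # trains until global step == 10
m.fit(input_fn=train_input_fn, max_steps=10)  # no-op: global step is already 10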
m = LinearClassifier(
    feature_columns=[sparse_column_a, sparse_feature_a_x_sparse_feature_b],
    optimizer=tf.train.FtrlOptimizer(
        learning_rate=0.1,
        l1_regularization_strength=0.001
    ))
m.fit(input_fn=train_input_fn, steps=???)
Using LinearClassifier, how do I make one fit() call train over all of train_input_fn exactly once? Should steps be the number of samples yielded by train_input_fn, or the number of batches it produces?
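My working assumption (unverified) is that one step consumes one batch, so a single pass over the data would need steps = number of samples / batch size:

num_examples = 10000   # hypothetical size of the training set
batch_size = 100       # hypothetical batch size used inside train_input_fn
steps_per_epoch = num_examples // batch_size           # = 100
m.fit(input_fn=train_input_fn, steps=steps_per_epoch)  # one full epoch?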
Update 1 (a guess at the answer?): "one step = one batch produced by input_fn". To check this, I looked at how Estimator's _train_model is implemented:
all_hooks = []
self._graph = ops.Graph()
with self._graph.as_default() as g, g.device(self._device_fn):
    random_seed.set_random_seed(self._config.tf_random_seed)
    global_step = contrib_framework.create_global_step(g)
    features, labels = input_fn()
    .......
    .......
    with monitored_session.MonitoredTrainingSession(
        ...
        hooks=all_hooks + model_fn_ops.training_hooks,
        chief_only_hooks=chief_hooks + model_fn_ops.training_chief_hooks,
        ...
    ) as mon_sess:
        loss = None
        while not mon_sess.should_stop():
            _, loss = mon_sess.run([model_fn_ops.train_op, model_fn_ops.loss])
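Reading this, it looks like input_fn is called once while the graph is assembled, and every later session.run just pulls a fresh batch through the ops it returned. Here is a minimal sketch of that mechanism outside the Estimator machinery (toy data and my own names; TF 1.x queue-based input, since this predates tf.data):

import numpy as np
import tensorflow as tf

def toy_input_fn():
    # Called ONCE: builds input ops and returns tensors, not actual data.
    data = np.arange(12, dtype=np.float32)
    (single,) = tf.train.slice_input_producer([data], shuffle=False)
    (batch,) = tf.train.batch([single], batch_size=4)
    return batch

next_batch = toy_input_fn()  # a single call, just like in _train_model
with tf.Session() as sess:
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    for step in range(3):
        print(step, sess.run(next_batch))  # each run pulls a fresh batch
    coord.request_stop()
    coord.join(threads)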
So, if I read this right, input_fn is called only once, while the graph is being built, and is not re-invoked per step; instead, each iteration of the while loop, i.e. each
mon_sess.run([model_fn_ops.train_op, model_fn_ops.loss])
is one step, pulling one batch through the ops input_fn returned. To verify that input_fn really is invoked only once per fit(), I used an input_fn that writes a uniquely named file on every call:
import random, time, uuid  # needed for the unique file names

def train_input_fn():
    # Write a uniquely named marker file on every call, so counting the
    # files afterwards reveals how many times fit() invoked this input_fn.
    current_log = TRAIN_FILES.pop()
    with open('./logs/' + str(random.random()) + "__" + str(uuid.uuid4()) + "__" + str(time.time()) + ".run", "wb") as fh:
        fh.write(("Ran log file %s" % (current_log)).encode('utf-8'))
    # ... (the feature/label tensors returned by the real input_fn are omitted here)
The result: more than 1 file was created per run.
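For what it's worth, a lighter-weight probe (my own variant, with hypothetical constant tensors standing in for the real features and labels) would be a module-level call counter:

import tensorflow as tf

CALL_COUNT = 0

def counting_input_fn():
    # Same idea as the file-writing probe above, but just counts invocations.
    global CALL_COUNT
    CALL_COUNT += 1
    print("input_fn call #%d" % CALL_COUNT)
    features = {"x": tf.constant([[1.0], [2.0]])}  # hypothetical stand-in features
    labels = tf.constant([0, 1])                   # hypothetical stand-in labels
    return features, labels

# m.fit(input_fn=counting_input_fn, steps=10)  # then inspect CALL_COUNT:
# if one fit() call built the graph once, CALL_COUNT should be 1.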