TensorFlow: SKCompat Deprecation Warning

Note: this is my first question here, so I'm sorry for any lack of detail. I'm happy to clarify if necessary.

I run TensorFlow 1.0.0 on a Mac, and I keep getting this warning when using the learn.Estimator class:

WARNING:tensorflow:From: 25: calling fit (from tensorflow.contrib.learn.python.learn.estimators.estimator) with y is deprecated and will be removed after 2016-12-01. Instructions for updating: Estimator is decoupled from the Scikit Learn interface by moving into the separate class SKCompat. The arguments x, y and batch_size are only available in the SKCompat class; Estimator will only accept input_fn. Example conversion: est = Estimator(...) -> est = SKCompat(Estimator(...))

I tried to look at this class, but its documentation contains almost no information. My full code is available here:

https://github.com/austinmwhaley/DeepFarm/blob/master/prototype_1.ipynb

Please let me know if there is any other information anyone needs.

2 answers

You can import SKCompat from tensorflow.contrib.learn.python:

from tensorflow.contrib.learn.python import SKCompat

Then wrap your estimator with SKCompat(), e.g.:

classifier = SKCompat(tf.contrib.learn.LinearClassifier(args))
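For context, SKCompat is essentially an adapter: it keeps the scikit-learn-style fit(x, y, batch_size) signature and internally builds the input_fn that the bare Estimator now requires. A rough, library-free sketch of that pattern (the class names here are illustrative, not TensorFlow's actual implementation):

```python
class BareEstimator:
    """Stand-in for the new-style Estimator: accepts only an input_fn."""

    def fit(self, input_fn, steps=1):
        # A real Estimator would run a training loop over the input pipeline;
        # here we just record the batch the input_fn produces.
        features, labels = input_fn()
        self.last_batch = (features, labels)
        return self


class SKCompatLike:
    """Adapter restoring the scikit-learn-style fit(x, y, batch_size) interface."""

    def __init__(self, estimator):
        self._estimator = estimator

    def fit(self, x, y, batch_size=32, steps=1):
        def input_fn():
            # Real TensorFlow would build input tensors/queues from x and y;
            # here we simply slice off the first batch.
            return x[:batch_size], y[:batch_size]
        return self._estimator.fit(input_fn=input_fn, steps=steps)


est = SKCompatLike(BareEstimator())
est.fit(x=list(range(100)), y=list(range(100)), batch_size=8)
```

This is why the warning suggests SKCompat(Estimator(...)): the wrapper translates the x/y/batch_size arguments into an input_fn on your behalf.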

Or you can just use the updated tf.estimator API from TensorFlow r1.1.

The API for defining a model is very similar, with only small changes in parameter names, return types, and function names. Here is an example that I used:

import tensorflow as tf

def model_fn():
    def _build_model(features, labels, mode, params):
        # 1. Configure the model via TensorFlow operations
        # Connect the first hidden layer to input layer (features) with relu activation
        y = tf.contrib.layers.fully_connected(features, num_outputs=64, activation_fn=tf.nn.relu,
                                              weights_initializer=tf.contrib.layers.xavier_initializer())
        y = tf.contrib.layers.fully_connected(y, num_outputs=64, activation_fn=tf.nn.relu,
                                              weights_initializer=tf.contrib.layers.xavier_initializer())
        y = tf.contrib.layers.fully_connected(y, num_outputs=1, activation_fn=tf.nn.sigmoid,
                                              weights_initializer=tf.contrib.layers.xavier_initializer())

        predictions = y

        # 2. Define the loss function for training/evaluation
        if mode == tf.estimator.ModeKeys.TRAIN or mode == tf.estimator.ModeKeys.EVAL:
            loss = tf.reduce_mean((predictions - labels) ** 2)
        else:
            loss = None

        if mode != tf.estimator.ModeKeys.PREDICT:
            eval_metric_ops = {
                "rmse": tf.metrics.root_mean_squared_error(tf.cast(labels, tf.float32), predictions),
                "accuracy": tf.metrics.accuracy(tf.cast(labels, tf.float32), predictions),
                "precision": tf.metrics.precision(tf.cast(labels, tf.float32), predictions)
            }
        else:
            eval_metric_ops = None

        # 3. Define the training operation/optimizer
        if mode == tf.estimator.ModeKeys.TRAIN:
            train_op = tf.contrib.layers.optimize_loss(
                loss=loss,
                global_step=tf.contrib.framework.get_global_step(),
                learning_rate=0.001,
                optimizer="Adam")
        else:
            train_op = None

        # 4. Generate predictions for inference
        if mode == tf.estimator.ModeKeys.PREDICT:
            predictions_dict = {"pred": predictions}
        else:
            predictions_dict = None

        # 5. Return predictions/loss/train_op/eval_metric_ops in an EstimatorSpec object
        return tf.estimator.EstimatorSpec(mode=mode,
                                          predictions=predictions_dict,
                                          loss=loss,
                                          train_op=train_op,
                                          eval_metric_ops=eval_metric_ops)
    return _build_model

Then train it like this:

e = tf.estimator.Estimator(model_fn=model_fn(), params=None)
e.train(input_fn=input_fn(), steps=1000)
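Here input_fn is assumed to follow the same factory pattern as model_fn above: calling it returns a zero-argument function that yields a (features, labels) pair. A minimal, library-free sketch of that contract (in real TensorFlow the returned values would be tensors, e.g. built with a numpy input function from tf.estimator.inputs in the 1.x releases):

```python
def make_input_fn(features, labels):
    """Factory mirroring model_fn(): returns a zero-argument input_fn."""
    def _input_fn():
        # Real TF code would convert these to tensors or a queued pipeline;
        # the Estimator only cares that it gets back (features, labels).
        return features, labels
    return _input_fn


input_fn = make_input_fn([[0.1, 0.2], [0.3, 0.4]], [0, 1])
batch = input_fn()  # the (features, labels) pair the Estimator consumes
```

Note that e.train(input_fn=...) expects a callable, which is why the factory is invoked once to produce the function that is actually passed in.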

This works with TensorFlow r1.1.


Source: https://habr.com/ru/post/1015262/