The above solution did not work in my case. Another way to read the CSV file and create a TFRecord file is shown below:
The function assumes the CSV columns are: Sl.No, Time, Height, Width, Mean, Std, Variance, Non-homogeneity, PixelCount, contourCount, Class.
An example of the features and label we get from one row of dataset.csv:

features = [5, 'D', 268, 497, 13.706, 863.4939, 29.385, 0.0427, 39675, 10]
label = 'medium'
```python
import pandas as pd
import tensorflow as tf

def create_tf_example(features, label):
    # Build one tf.train.Example from a single CSV row.
    # Sl.No (features[0]) is skipped; strings go into BytesList,
    # integers into Int64List, and floats into FloatList.
    tf_example = tf.train.Example(features=tf.train.Features(feature={
        'Time': tf.train.Feature(bytes_list=tf.train.BytesList(value=[features[1].encode('utf-8')])),
        'Height': tf.train.Feature(int64_list=tf.train.Int64List(value=[features[2]])),
        'Width': tf.train.Feature(int64_list=tf.train.Int64List(value=[features[3]])),
        'Mean': tf.train.Feature(float_list=tf.train.FloatList(value=[features[4]])),
        'Std': tf.train.Feature(float_list=tf.train.FloatList(value=[features[5]])),
        'Variance': tf.train.Feature(float_list=tf.train.FloatList(value=[features[6]])),
        'Non-homogeneity': tf.train.Feature(float_list=tf.train.FloatList(value=[features[7]])),
        'PixelCount': tf.train.Feature(int64_list=tf.train.Int64List(value=[features[8]])),
        'contourCount': tf.train.Feature(int64_list=tf.train.Int64List(value=[features[9]])),
        'Class': tf.train.Feature(bytes_list=tf.train.BytesList(value=[label.encode('utf-8')])),
    }))
    return tf_example

# Read the CSV, split each row into features and label, and write
# the serialized Examples to a TFRecord file. The with-block closes
# the writer automatically, so no explicit writer.close() is needed.
csv = pd.read_csv("dataset.csv").values
with tf.python_io.TFRecordWriter("dataset.tfrecords") as writer:
    for row in csv:
        features, label = row[:-1], row[-1]
        print(features, label)
        example = create_tf_example(features, label)
        writer.write(example.SerializeToString())
```
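If you want to check that the file was written correctly, here is a minimal sketch (not part of the original answer) of how the records could be read back with the TF 1.x API. The feature names must match the keys used when writing; the dtypes below are my assumptions based on how each field was stored.

```python
import tensorflow as tf

# Assumed parsing spec: one fixed-length value per feature, with dtypes
# mirroring the BytesList/Int64List/FloatList used when writing.
feature_spec = {
    'Time': tf.FixedLenFeature([], tf.string),
    'Height': tf.FixedLenFeature([], tf.int64),
    'Width': tf.FixedLenFeature([], tf.int64),
    'Mean': tf.FixedLenFeature([], tf.float32),
    'Std': tf.FixedLenFeature([], tf.float32),
    'Variance': tf.FixedLenFeature([], tf.float32),
    'Non-homogeneity': tf.FixedLenFeature([], tf.float32),
    'PixelCount': tf.FixedLenFeature([], tf.int64),
    'contourCount': tf.FixedLenFeature([], tf.int64),
    'Class': tf.FixedLenFeature([], tf.string),
}

def parse_record(serialized):
    # Decode one serialized tf.train.Example into a dict of tensors.
    return tf.parse_single_example(serialized, feature_spec)

dataset = tf.data.TFRecordDataset("dataset.tfrecords").map(parse_record)
iterator = dataset.make_one_shot_iterator()
next_example = iterator.get_next()

with tf.Session() as sess:
    print(sess.run(next_example))  # prints the first parsed record
```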
For more information, click here. This works for me; I hope it works for you too.