I am trying to write a pure Java/Scala implementation of the TensorFlow RecordWriter class, in order to convert a Spark DataFrame to a TFRecords file. According to the documentation, each record in a TFRecords file is laid out as follows:
uint64 length
uint32 masked_crc32_of_length
byte   data[length]
uint32 masked_crc32_of_data
And each CRC is masked as follows:
masked_crc = ((crc >> 15) | (crc << 17)) + 0xa282ead8ul
I am currently computing the CRC with Guava's implementation, using the following code:
import com.google.common.hash.Hashing

object CRC32 {
  val kMaskDelta = 0xa282ead8

  def hash(in: Array[Byte]): Int = {
    val hashing = Hashing.crc32c()
    hashing.hashBytes(in).asInt()
  }

  def mask(crc: Int): Int = {
    ((crc >> 15) | (crc << 17)) + kMaskDelta
  }
}
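As a sanity check on the hash itself, Guava's CRC32C can be compared against the standard CRC-32C check vector (the checksum of the ASCII string "123456789" should be 0xE3069283). This is only a test I would expect to pass, assuming Guava implements the Castagnoli polynomial:

// CRC-32C check value for "123456789" is 0xE3069283
// (written as a signed Int literal, since Scala's Int is 32-bit two's complement)
val check = "123456789".getBytes("US-ASCII")
assert(CRC32.hash(check) == 0xe3069283)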
The rest of my code follows. The little-endian encoding of the length and CRC fields is performed by this fragment:
import java.io.ByteArrayOutputStream
import com.google.common.io.LittleEndianDataOutputStream

object LittleEndianEncoding {
  def encodeLong(in: Long): Array[Byte] = {
    val baos = new ByteArrayOutputStream()
    val out = new LittleEndianDataOutputStream(baos)
    out.writeLong(in)
    baos.toByteArray
  }

  def encodeInt(in: Int): Array[Byte] = {
    val baos = new ByteArrayOutputStream()
    val out = new LittleEndianDataOutputStream(baos)
    out.writeInt(in)
    baos.toByteArray
  }
}
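For comparison, the same little-endian encoding can be written with plain java.nio, which I include only as a cross-check against the Guava stream (the object name LittleEndianEncodingNio is my own):

import java.nio.{ByteBuffer, ByteOrder}

object LittleEndianEncodingNio {
  // 8-byte little-endian encoding of a Long, equivalent to writeLong above
  def encodeLong(in: Long): Array[Byte] =
    ByteBuffer.allocate(8).order(ByteOrder.LITTLE_ENDIAN).putLong(in).array()

  // 4-byte little-endian encoding of an Int, equivalent to writeInt above
  def encodeInt(in: Int): Array[Byte] =
    ByteBuffer.allocate(4).order(ByteOrder.LITTLE_ENDIAN).putInt(in).array()
}

If both versions agree on the output bytes, the endianness helpers can be ruled out as the cause.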
The record itself is created with the protocol-buffer builders:
import com.google.protobuf.ByteString
import org.tensorflow.example._
import collection.JavaConversions._
import collection.mutable._

object TFRecord {
  def int64Feature(in: Long): Feature = {
    val valueBuilder = Int64List.newBuilder()
    valueBuilder.addValue(in)
    Feature.newBuilder()
      .setInt64List(valueBuilder.build())
      .build()
  }

  def floatFeature(in: Float): Feature = {
    val valueBuilder = FloatList.newBuilder()
    valueBuilder.addValue(in)
    Feature.newBuilder()
      .setFloatList(valueBuilder.build())
      .build()
  }

  def floatVectorFeature(in: Array[Float]): Feature = {
    val valueBuilder = FloatList.newBuilder()
    in.foreach(valueBuilder.addValue)
    Feature.newBuilder()
      .setFloatList(valueBuilder.build())
      .build()
  }

  def bytesFeature(in: Array[Byte]): Feature = {
    val valueBuilder = BytesList.newBuilder()
    valueBuilder.addValue(ByteString.copyFrom(in))
    Feature.newBuilder()
      .setBytesList(valueBuilder.build())
      .build()
  }

  def makeFeatures(features: HashMap[String, Feature]): Features = {
    Features.newBuilder().putAllFeature(features).build()
  }

  def makeExample(features: Features): Example = {
    Example.newBuilder().setFeatures(features).build()
  }
}
And here is an example of how I put everything together to create a TFRecords file:
import java.io.{File, FileOutputStream}

val label = TFRecord.int64Feature(1)
val feature = TFRecord.floatVectorFeature(Array[Float](1, 2, 3, 4))
val features = TFRecord.makeFeatures(HashMap[String, Feature]("feature" -> feature, "label" -> label))
val ex = TFRecord.makeExample(features)
val exSerialized = ex.toByteArray()

val length = LittleEndianEncoding.encodeLong(exSerialized.length)
val crcLength = LittleEndianEncoding.encodeInt(CRC32.mask(CRC32.hash(length)))
val crcEx = LittleEndianEncoding.encodeInt(CRC32.mask(CRC32.hash(exSerialized)))

val out = new FileOutputStream(new File("test.tfrecords"))
out.write(length)
out.write(crcLength)
out.write(exSerialized)
out.write(crcEx)
out.close()
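To check the on-disk framing independently of TensorFlow, the header can be read back with Guava's LittleEndianDataInputStream (a quick sketch; the stored values should match what was written above):

import com.google.common.io.LittleEndianDataInputStream
import java.io.FileInputStream

val in = new LittleEndianDataInputStream(new FileInputStream("test.tfrecords"))
val storedLength = in.readLong()   // should equal exSerialized.length
val storedCrc = in.readInt()       // should equal CRC32.mask(CRC32.hash(length))
println(s"length=$storedLength maskedCrc=0x${storedCrc.toHexString}")
in.close()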
When I try to read the file back in TensorFlow with TFRecordReader, I get the following error:
W tensorflow/core/common_runtime/executor.cc:1076] 0x24cc430 Compute status: Data loss: corrupted record at 0
I suspect that either the calculation of the masked CRC is incorrect, or that the byte layout my Java/Scala code produces does not match what the C++ reader expects.
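For reference, the C++ source defines the mask on an unsigned 32-bit value, whereas Scala's Int is signed and >> is an arithmetic (sign-extending) shift. A variant using the logical shift >>> would match the uint32 semantics exactly (<< and the addition already wrap the same way in both languages); this is an untested sketch of what I mean, with the object name CRC32Unsigned my own:

object CRC32Unsigned {
  val kMaskDelta = 0xa282ead8

  // Same formula as above, but >>> does not sign-extend,
  // matching the C++ behavior on uint32.
  def mask(crc: Int): Int = ((crc >>> 15) | (crc << 17)) + kMaskDelta
}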