I am writing a standalone Spark program that reads its data from Cassandra. Following the examples, I create the RDD via newAPIHadoopRDD() and the ColumnFamilyInputFormat class. The RDD is created fine, but I get a NotSerializableException when I call the RDD's groupByKey() method:
import java.nio.ByteBuffer;
import java.util.SortedMap;

import org.apache.cassandra.db.IColumn;
import org.apache.cassandra.hadoop.ColumnFamilyInputFormat;
import org.apache.cassandra.hadoop.ConfigHelper;
import org.apache.cassandra.thrift.SlicePredicate;
import org.apache.cassandra.thrift.SliceRange;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaSparkContext;

public static void main(String[] args) throws Exception {
    SparkConf sparkConf = new SparkConf();
    sparkConf.setMaster("local").setAppName("Test");
    JavaSparkContext ctx = new JavaSparkContext(sparkConf);

    Job job = new Job();
    Configuration jobConf = job.getConfiguration();
    job.setInputFormatClass(ColumnFamilyInputFormat.class);

    // host, port, keySpace and columnFamily are defined elsewhere
    ConfigHelper.setInputInitialAddress(jobConf, host);
    ConfigHelper.setInputRpcPort(jobConf, port);
    ConfigHelper.setOutputInitialAddress(jobConf, host);
    ConfigHelper.setOutputRpcPort(jobConf, port);
    ConfigHelper.setInputColumnFamily(jobConf, keySpace, columnFamily, true);
    ConfigHelper.setInputPartitioner(jobConf, "Murmur3Partitioner");
    ConfigHelper.setOutputPartitioner(jobConf, "Murmur3Partitioner");

    // slice predicate covering all columns of each row
    SlicePredicate predicate = new SlicePredicate();
    SliceRange sliceRange = new SliceRange();
    sliceRange.setFinish(new byte[0]);
    sliceRange.setStart(new byte[0]);
    predicate.setSlice_range(sliceRange);
    ConfigHelper.setInputSlicePredicate(jobConf, predicate);

    JavaPairRDD<ByteBuffer, SortedMap<ByteBuffer, IColumn>> rdd =
            ctx.newAPIHadoopRDD(jobConf,
                    ColumnFamilyInputFormat.class.asSubclass(org.apache.hadoop.mapreduce.InputFormat.class),
                    ByteBuffer.class, SortedMap.class);

    // the exception is thrown when this stage runs
    JavaPairRDD<ByteBuffer, Iterable<SortedMap<ByteBuffer, IColumn>>> groupRdd = rdd.groupByKey();
    System.out.println(groupRdd.count());
}
An exception:
java.io.NotSerializableException: java.nio.HeapByteBuffer
    at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1164)
    at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1518)
    at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1483)
    at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1400)
    at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1158)
    at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:330)
    at org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:42)
    at org.apache.spark.storage.DiskBlockObjectWriter.write(BlockObjectWriter.scala:179)
    at org.apache.spark.scheduler.ShuffleMapTask$$anonfun$runTask$1.apply(ShuffleMapTask.scala:161)
    at org.apache.spark.scheduler.ShuffleMapTask$$anonfun$runTask$1.apply(ShuffleMapTask.scala:158)
    at scala.collection.Iterator$class.foreach(Iterator.scala:727)
    at org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:158)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
    at org.apache.spark.scheduler.Task.run(Task.scala:51)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:187)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
    at java.lang.Thread.run(Thread.java:662)
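From the trace it looks like the shuffle behind groupByKey() writes the pairs with Spark's default Java serializer (JavaSerializationStream), and java.nio.HeapByteBuffer does not implement java.io.Serializable. One idea I have been considering, shown only as a sketch, is switching the serializer to Kryo when building the SparkConf; I have not verified that Kryo handles ByteBuffer and IColumn without extra registration:

// Sketch only: use Kryo instead of the default Java serializer for shuffles.
// Not verified that this covers ByteBuffer/IColumn out of the box.
SparkConf sparkConf = new SparkConf()
        .setMaster("local")
        .setAppName("Test")
        .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer");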
What I am trying to do is merge all the columns of each row key into a single entry. I get the same exception when I use reduceByKey() instead of groupByKey():
JavaPairRDD<ByteBuffer, SortedMap<ByteBuffer, IColumn>> reducedRdd = rdd.reduceByKey(
    new Function2<SortedMap<ByteBuffer, IColumn>, SortedMap<ByteBuffer, IColumn>, SortedMap<ByteBuffer, IColumn>>() {
        public SortedMap<ByteBuffer, IColumn> call(SortedMap<ByteBuffer, IColumn> arg0,
                                                   SortedMap<ByteBuffer, IColumn> arg1) throws Exception {
            // merge the two column maps that belong to the same row key
            SortedMap<ByteBuffer, IColumn> sortedMap = new TreeMap<ByteBuffer, IColumn>(arg0.comparator());
            sortedMap.putAll(arg0);
            sortedMap.putAll(arg1);
            return sortedMap;
        }
    }
);
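The only other direction I can think of is converting the ByteBuffer row keys to something serializable before the shuffle. This is just a rough sketch of what I mean (ByteBufferUtil.bytesToHex is from Cassandra's org.apache.cassandra.utils; the SortedMap values still contain ByteBuffers, so they would presumably need a similar conversion):

// Sketch only: turn the ByteBuffer row key into a hex String so the shuffle
// has a serializable key. The values would still hold ByteBuffers.
JavaPairRDD<String, SortedMap<ByteBuffer, IColumn>> stringKeyRdd =
        rdd.mapToPair(new PairFunction<Tuple2<ByteBuffer, SortedMap<ByteBuffer, IColumn>>,
                                       String, SortedMap<ByteBuffer, IColumn>>() {
            public Tuple2<String, SortedMap<ByteBuffer, IColumn>> call(
                    Tuple2<ByteBuffer, SortedMap<ByteBuffer, IColumn>> row) throws Exception {
                return new Tuple2<String, SortedMap<ByteBuffer, IColumn>>(
                        ByteBufferUtil.bytesToHex(row._1()), row._2());
            }
        });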
I am using:
- Spark 1.0.0 (hadoop1 build)
- Cassandra 1.2.12
- Java 1.6
What am I doing wrong, and how should I solve this problem?
Thanks in advance.