RDD filter in Scala Spark

I have a dataset and I want to extract the records (review/text) whose review/time falls between x and y, for example 1183334400 < time < 1185926400.

Here are examples of my data:

    product/productId: B000278ADA product/title: Jobst Ultrasheer 15-20 Knee-High Silky Beige Large product/price: 46.34 review/userId: A17KXW1PCUAIIN review/profileName: Mark Anthony "Mark" review/helpfulness: 4/4 review/score: 5.0 review/time: 1174435200 review/summary: Jobst UltraSheer Knee High Stockings review/text: Does a very good job of relieving fatigue.

    product/productId: B000278ADB product/title: Jobst Ultrasheer 15-20 Knee-High Silky Beige Large product/price: 46.34 review/userId: A9Q3932GX4FX8 review/profileName: Trina Wehle review/helpfulness: 1/1 review/score: 3.0 review/time: 1352505600 review/summary: Delivery was very long wait..... review/text: It took almost 3 weeks to recieve the two pairs of stockings .

    product/productId: B000278ADB product/title: Jobst Ultrasheer 15-20 Knee-High Silky Beige Large product/price: 46.34 review/userId: AUIZ1GNBTG5OB review/profileName: dgodoy review/helpfulness: 1/1 review/score: 2.0 review/time: 1287014400 review/summary: sizes recomended in the size chart are not real review/text: sizes are much smaller than what is recomended in the chart. I tried to put it and sheer it!.

My Spark/Scala code:

    import org.apache.hadoop.conf.Configuration
    import org.apache.hadoop.io.{LongWritable, Text}
    import org.apache.hadoop.mapreduce.lib.input.TextInputFormat
    import org.apache.spark.{SparkConf, SparkContext}

    object test1 {
      def main(args: Array[String]): Unit = {
        val conf1 = new SparkConf().setAppName("golabi1").setMaster("local")
        val sc = new SparkContext(conf1)

        val conf: Configuration = new Configuration
        conf.set("textinputformat.record.delimiter", "product/title:")

        val input1 = sc.newAPIHadoopFile("data/Electronics.txt",
          classOf[TextInputFormat], classOf[LongWritable], classOf[Text], conf)

        val lines = input1.map { text => text._2 }

        // this filter is the part that does not work
        val filt = lines.filter(text => text.toString.contains(tt => tt in (startdate until enddate)))
        filt.saveAsTextFile("data/filter1")
      }
    }

But my code is not working.

How can I filter these lines?

1 answer

It is much simpler. Try the following:

    import org.apache.spark.{SparkConf, SparkContext}

    object test1 {
      def main(args: Array[String]): Unit = {
        val conf1 = new SparkConf().setAppName("golabi1").setMaster("local")
        val sc = new SparkContext(conf1)

        // Example boundaries taken from the question; define them however you like.
        val startDate = 1183334400L
        val endDate = 1185926400L

        def extractDateAndCompare(line: String): Boolean = {
          // The timestamp sits between "review/time: " and the next field,
          // which is "review/summary:" in the sample data.
          val from = line.indexOf("/time: ") + 7
          val to = line.indexOf(" review/summary:")
          val date = line.substring(from, to).trim.toLong
          date > startDate && date < endDate
        }

        sc.textFile("data/Electronics.txt")
          .filter(extractDateAndCompare)
          .saveAsTextFile("data/filter1")
      }
    }

I usually find that an intermediate helper method like this makes things a lot clearer. Of course, this assumes that the boundary dates are defined somewhere and that the input file has no format problems. I kept it simple intentionally, but wrapping the parse in a Try, returning an Option, and using flatMap() can help you avoid errors if some records are malformed, as sketched below.
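A minimal sketch of that safer variant, assuming the same one-record-per-line layout and the startDate/endDate values from above (extractDate is an illustrative name, not part of the original answer):

    import scala.util.Try

    // Returns the timestamp of a record line, or None if it cannot be parsed.
    def extractDate(line: String): Option[Long] = {
      val from = line.indexOf("/time: ") + 7
      val to = line.indexOf(" review/summary:")
      if (from < 7 || to < from) None
      else Try(line.substring(from, to).trim.toLong).toOption
    }

    // flatMap silently drops records with a missing or unparsable timestamp
    // instead of throwing an exception mid-job.
    sc.textFile("data/Electronics.txt")
      .flatMap(line => extractDate(line).map(date => (line, date)))
      .filter { case (_, date) => date > startDate && date < endDate }
      .map { case (line, _) => line }
      .saveAsTextFile("data/filter1")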

Also, your raw text format is cumbersome to parse; you may want to look into JSON, TSV, or another simpler alternative format.
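For instance, if each review were one tab-separated line with the timestamp in a known column (the column index below is purely an assumption for illustration), the whole job reduces to a split and a filter:

    import scala.util.Try

    // Hypothetical TSV layout: one review per line, review/time in column 7.
    sc.textFile("data/Electronics.tsv")
      .filter { line =>
        val cols = line.split("\t")
        cols.length > 7 &&
          Try(cols(7).toLong).toOption.exists(date => date > startDate && date < endDate)
      }
      .saveAsTextFile("data/filter1")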


Source: https://habr.com/ru/post/985526/

