Equality of two data frames

I have two data frames, each containing a single column. Say

DF1 = (1, 2, 3, 4, 5) and DF2 = (3, 6, 7, 8, 9, 10)

Basically, these values are keys. I write DF1 out as a parquet file only if every key in DF1 is also present in DF2 (for the current example, the check should return false). My current way to achieve this:

    val df1count = DF1.count                // action 1
    val df2count = DF2.count                // action 2
    val diffDF = DF2.except(DF1)            // rows of DF2 that are not in DF1
    val diffCount = diffDF.count            // action 3
    // DF2.except(DF1) has (df2count - df1count) rows exactly when DF1 is a subset of DF2
    if (diffCount == (df2count - df1count)) true else false

The problem with this approach is that it triggers several Spark actions (each count forces a separate evaluation), which is certainly not the best way. Can someone suggest a more efficient way to achieve this?
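For comparison, the same subset check can be done with a single action via a left anti join. A minimal sketch, assuming both frames name their column "key" and the output path is a placeholder:

    // Rows of DF1 whose key has no match in DF2; "left_anti" keeps only
    // the unmatched rows of the left side.
    val missing = DF1.join(DF2, Seq("key"), "left_anti")

    // One action instead of three counts. Dataset.isEmpty exists since
    // Spark 2.4; on older versions use missing.head(1).isEmpty instead.
    if (missing.isEmpty) {
      DF1.write.parquet("/path/to/output")  // hypothetical output path
    }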

2 answers

You can use the function below:

    import org.apache.spark.sql.DataFrame
    import org.apache.spark.sql.functions._

    def diff(key: String, df1: DataFrame, df2: DataFrame): DataFrame = {
      val fields = df1.schema.fields.map(_.name)
      val diffColumnName = "Diff"

      df1
        .join(df2, df1(key) === df2(key), "full_outer")
        .withColumn(
          diffColumnName,
          when(df1(key).isNull, "New row in DataFrame 2")
            .otherwise(
              when(df2(key).isNull, "New row in DataFrame 1")
                .otherwise(
                  concat_ws("",
                    fields.map(f => when(df1(f) =!= df2(f), s"$f ").otherwise("")): _*
                  )
                )
            )
        )
        .filter(col(diffColumnName) =!= "")
        .select(
          fields.map(f =>
            when(df1(key).isNotNull, df1(f)).otherwise(df2(f)).alias(f)
          ) :+ col(diffColumnName): _*
        )
    }
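The full outer join keeps keys that appear in only one of the frames, which is how the two "New row" cases are detected; for keys present in both, concat_ws collects the names of the columns whose values differ, and rows with an empty Diff (identical on all columns) are filtered out. Note that =!= is not null-safe: comparing a null with a non-null value yields null rather than true, so such a difference is not flagged.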

In your case, run this:

 diff("emp_id", df1, df2) 

Example

    import org.apache.spark.sql.{DataFrame, SparkSession}
    import org.apache.spark.sql.functions._

    object DiffDataFrames extends App {
      val session = SparkSession.builder().master("local").getOrCreate()
      import session.implicits._

      val df1 = session.createDataset(Seq((1, "a", 11), (2, "b", 2), (3, "c", 33), (5, "e", 5)))
        .toDF("n", "s", "i")
      val df2 = session.createDataset(Seq((1, "a", 11), (2, "bb", 2), (3, "cc", 34), (4, "d", 4)))
        .toDF("n", "s", "i")

      def diff(key: String, df1: DataFrame, df2: DataFrame): DataFrame = /* above definition */

      diff("n", df1, df2).show(false)
    }
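With these inputs, the result should contain the n=2 row flagged with "s " (the s values differ), the n=3 row flagged with "s i " (both s and i differ), the n=5 row flagged as "New row in DataFrame 1", and the n=4 row flagged as "New row in DataFrame 2"; the n=1 row is identical in both frames and is filtered out. Row order in the output is not deterministic.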

Here is a way to get the uncommon rows between two data frames (the rows that appear in only one of them):

    val d1 = Seq(
      (3, "Chennai", "rahman", "9848022330", 45000, "SanRamon"),
      (1, "Hyderabad", "ram", "9848022338", 50000, "SF"),
      (2, "Hyderabad", "robin", "9848022339", 40000, "LA"),
      (4, "sanjose", "romin", "9848022331", 45123, "SanRamon"))

    val d2 = Seq(
      (3, "Chennai", "rahman", "9848022330", 45000, "SanRamon"),
      (1, "Hyderabad", "ram", "9848022338", 50000, "SF"),
      (2, "Hyderabad", "robin", "9848022339", 40000, "LA"),
      (4, "sanjose", "romin", "9848022331", 45123, "SanRamon"),
      (4, "sanjose", "romino", "9848022331", 45123, "SanRamon"),
      (5, "LA", "Test", "1234567890", 12345, "Testuser"))

    val df1 = d1.toDF("emp_id", "emp_city", "emp_name", "emp_phone", "emp_sal", "emp_site")
    val df2 = d2.toDF("emp_id", "emp_city", "emp_name", "emp_phone", "emp_sal", "emp_site")

    // Register temp views so the names df1 and df2 are visible to Spark SQL
    df1.createOrReplaceTempView("df1")
    df2.createOrReplaceTempView("df2")

    // (df1 union df2) minus (df1 intersect df2): rows in exactly one frame
    spark.sql("((select * from df1) union (select * from df2)) minus ((select * from df1) intersect (select * from df2))").show // spark is the SparkSession
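The same symmetric difference can also be expressed with the DataFrame API directly, which avoids registering temp views; a minimal sketch under the same column layout:

    // Rows present in exactly one of the two frames. union keeps
    // duplicates, but except and intersect are distinct set operations,
    // so the final result is deduplicated.
    val uncommon = df1.union(df2).except(df1.intersect(df2))
    uncommon.show(false)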
