Spark-SQL: Merge two datasets with the same column name

I have the two datasets below:

controlSetDF : has columns loan_id, merchant_id, loan_type, created_date, as_of_date
accountDF : has columns merchant_id, id, name, status, merchant_risk_status

I use the Spark Java API to join them; I only need specific columns in the final dataset:

private String[] control_set_columns = {"loan_id", "merchant_id", "loan_type"};
private String[] sf_account_columns = {"id as account_id", "name as account_name", "merchant_risk_status"};

controlSetDF.selectExpr(control_set_columns)
    .join(accountDF.selectExpr(sf_account_columns),
          controlSetDF.col("merchant_id").equalTo(accountDF.col("merchant_id")),
          "left_outer");

But I get the error below:

org.apache.spark.sql.AnalysisException: resolved attribute(s) merchant_id#3L missing from account_name#131,loan_type#105,account_id#130,merchant_id#104L,loan_id#103,merchant_risk_status#2 in operator !Join LeftOuter, (merchant_id#104L = merchant_id#3L);;!Join LeftOuter, (merchant_id#104L = merchant_id#3L)

It seems that the problem is that both datasets have a merchant_id column.

Note: if I do not use .selectExpr(), the join works fine, but the result then contains all columns from both datasets.

2 answers

When both DataFrames are joined on a column with the same name, you can pass the join column name(s) as a Seq instead of a join expression; Spark then keeps a single copy of that column in the result. Scala code can construct a Seq directly, but from Java you have to convert a Java list into a Scala Seq:

Seq<String> joinColumns = scala.collection.JavaConversions
  .asScalaBuffer(Lists.newArrayList("merchant_id"));

controlSetDF.selectExpr(control_set_columns)
  .join(accountDF.selectExpr(sf_account_columns), joinColumns, "left_outer");

The resulting DataFrame will contain the merchant_id column only once. (Note that merchant_id must also be included in sf_account_columns so the column exists on both sides of the join.)
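Put together as a minimal sketch (the class name, local SparkSession setup, and table names are assumptions for illustration; JavaConverters is used instead of the deprecated JavaConversions, and merchant_id is added to the account projection as required by the Seq-based join):

```java
import java.util.Arrays;

import scala.collection.JavaConverters;
import scala.collection.Seq;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class JoinOnSharedColumn {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
            .appName("join-on-shared-column")
            .master("local[*]")
            .getOrCreate();

        // Placeholder sources; substitute however controlSetDF and accountDF are loaded.
        Dataset<Row> controlSetDF = spark.table("control_set");
        Dataset<Row> accountDF = spark.table("sf_account");

        String[] controlSetColumns = {"loan_id", "merchant_id", "loan_type"};
        // merchant_id must appear in BOTH projections for the Seq-based join to resolve.
        String[] sfAccountColumns = {"merchant_id", "id as account_id",
                                     "name as account_name", "merchant_risk_status"};

        // Convert a Java list into the Scala Seq that join(right, usingColumns, joinType) expects.
        Seq<String> joinColumns = JavaConverters
            .asScalaBufferConverter(Arrays.asList("merchant_id")).asScala().toSeq();

        // left_outer join; the result contains merchant_id only once.
        Dataset<Row> joined = controlSetDF.selectExpr(controlSetColumns)
            .join(accountDF.selectExpr(sfAccountColumns), joinColumns, "left_outer");

        joined.show();
    }
}
```

The Seq-based overload deduplicates the join column automatically, which is exactly what the question's equalTo-based condition cannot do.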


The problem is that the DataFrame produced by selectExpr(sf_account_columns) no longer contains merchant_id, so the join condition cannot be resolved against it. Add merchant_id to sf_account_columns, and the join against the projected DataFrame will work.
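A sketch of that alternative fix, keeping the question's equalTo condition (only sf_account_columns changes; since both sides then contribute a merchant_id column, the right-hand copy is dropped afterwards):

```java
// merchant_id added to the projection so the join condition stays resolvable.
String[] sf_account_columns = {"merchant_id", "id as account_id",
                               "name as account_name", "merchant_risk_status"};

Dataset<Row> joined = controlSetDF.selectExpr(control_set_columns)
    .join(accountDF.selectExpr(sf_account_columns),
          controlSetDF.col("merchant_id").equalTo(accountDF.col("merchant_id")),
          "left_outer")
    // The expression-based join keeps both merchant_id columns; drop the right-hand one.
    .drop(accountDF.col("merchant_id"));
```

This trades the Seq conversion of the first answer for an explicit drop of the duplicated column.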


Source: https://habr.com/ru/post/1675217/

