How to filter a Spark DataFrame where one column is a member of another (array) column

I have a DataFrame with two columns (a string and an array of strings):

root
 |-- user: string (nullable = true)
 |-- users: array (nullable = true)
 |    |-- element: string (containsNull = true)

How can I filter the DataFrame so that the result contains only the rows where user is a member of users?

2 answers

Of course this is possible, and it is not difficult: you can use a UDF for this.

import org.apache.spark.sql.functions._
import org.apache.spark.sql.types._

val df = sc.parallelize(Array(
  ("1", Array("1", "2", "3")),
  ("2", Array("1", "2", "2", "3")),
  ("3", Array("1", "2"))
)).toDF("user", "users")

// The array column arrives in the UDF as a Seq[String];
// Spark infers the BooleanType return type from the lambda.
val inArray = udf((id: String, array: Seq[String]) => array.contains(id))

df.where(inArray($"user", $"users")).show()

Output:

+----+------------+
|user|       users|
+----+------------+
|   1|   [1, 2, 3]|
|   2|[1, 2, 2, 3]|
+----+------------+

Quick and easy:

import org.apache.spark.sql.functions.expr

df.where(expr("array_contains(users, user)")).show()
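For intuition, both answers compute the same per-row membership test. A plain-Scala sketch of what the filter does (here `rows` is a hypothetical local stand-in for the DataFrame; no Spark is required):

```scala
// Local stand-in for the example DataFrame: (user, users) pairs.
val rows = Seq(
  ("1", Seq("1", "2", "3")),
  ("2", Seq("1", "2", "2", "3")),
  ("3", Seq("1", "2"))
)

// Per row, array_contains(users, user) reduces to Seq.contains:
val kept = rows.filter { case (user, users) => users.contains(user) }
// Users "1" and "2" survive; "3" does not appear in Seq("1", "2").
```

Note that duplicates in the array (as in the second row) do not matter: `contains` only checks membership.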

Source: https://habr.com/ru/post/1654188/