How to transform a data frame to compute a flag indicating whether a value exists?

I have a data frame that looks like this:

Sno|UserID|TypeExp
1|JAS123|MOVIE
2|ASP123|GAMES
3|JAS123|CLOTHING
4|DPS123|MOVIE
5|DPS123|CLOTHING
6|ASP123|MEDICAL
7|JAS123|OTH
8|POQ133|MEDICAL
.......
10000|DPS123|OTH

UserID is the column that identifies the user, and the TypeExp column records that user's type of spending this month. There are several different expense types, i.e.

TypeExpList = [MOVIE, GAMES, CLOTHING, MEDICAL, OTH]

Now I want to transform it to the user level: a data frame with a 0/1 binary variable per expense type, storing whether user "X" made that type of expense.

For example, for the data above, the output DataFrame should look like:

User   |TypeExpList      # entries correspond to [MOVIE,GAMES,CLOTHING,MEDICAL,OTH]
JAS123 |[1,0,1,0,1]      # user spent on MOVIE, CLOTHING and OTH
ASP123 |[0,1,0,1,0]      # user spent on GAMES and MEDICAL
DPS123 |[1,0,1,0,1]      # user spent on MOVIE, CLOTHING and OTH
POQ133 |[0,0,0,1,0]      # user spent on MEDICAL only
3 answers

This is the input dataset:

$ cat input.csv
Sno|UserID|TypeExp
1|JAS123|MOVIE
2|ASP123|GAMES
3|JAS123|CLOTHING
4|DPS123|MOVIE
5|DPS123|CLOTHING
6|ASP123|MEDICAL
7|JAS123|OTH
8|POQ133|MEDICAL

Group by UserID and pivot on TypeExp, counting occurrences:

val bins = spark
  .read
  .option("sep", "|")
  .option("header", true)
  .csv("input.csv")
  .groupBy("UserID")   // one output row per user
  .pivot("TypeExp")    // one column per expense type
  .count               // count occurrences of each (user, type) pair
  .na
  .fill(0)             // absent (user, type) pairs become 0 instead of null
scala> bins.show
+------+--------+-----+-------+-----+---+
|UserID|CLOTHING|GAMES|MEDICAL|MOVIE|OTH|
+------+--------+-----+-------+-----+---+
|POQ133|       0|    0|      1|    0|  0|
|JAS123|       1|    0|      0|    1|  1|
|DPS123|       1|    0|      0|    1|  0|
|ASP123|       0|    1|      1|    0|  0|
+------+--------+-----+-------+-----+---+
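For readers without a Spark shell handy, the groupBy/pivot/count step above can be sketched in plain Python; `pivot_count` is a hypothetical helper written for this illustration, not part of any answer:

```python
from collections import defaultdict

# The eight sample rows from input.csv as (UserID, TypeExp) pairs.
rows = [
    ("JAS123", "MOVIE"), ("ASP123", "GAMES"), ("JAS123", "CLOTHING"),
    ("DPS123", "MOVIE"), ("DPS123", "CLOTHING"), ("ASP123", "MEDICAL"),
    ("JAS123", "OTH"), ("POQ133", "MEDICAL"),
]

def pivot_count(rows):
    """Count occurrences of each (user, type) pair; missing pairs default to 0."""
    counts = defaultdict(lambda: defaultdict(int))
    types = sorted({t for _, t in rows})  # pivot columns, alphabetical like Spark
    for user, typ in rows:
        counts[user][typ] += 1
    return {user: [counts[user][t] for t in types] for user in counts}, types

table, cols = pivot_count(rows)
print(cols)             # ['CLOTHING', 'GAMES', 'MEDICAL', 'MOVIE', 'OTH']
print(table["JAS123"])  # [1, 0, 0, 1, 1]
```

The per-user lists match the rows of `bins.show` above, column for column.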

Since in this sample each count is either 0 or 1, you can assemble the array column directly:

val solution = bins.select(
  $"UserID" as "User",
  array("MOVIE","GAMES","CLOTHING","MEDICAL","OTH") as "TypeExpList")
scala> solution.show
+------+---------------+
|  User|    TypeExpList|
+------+---------------+
|POQ133|[0, 0, 0, 1, 0]|
|JAS123|[1, 0, 1, 0, 1]|
|DPS123|[1, 0, 1, 0, 0]|
|ASP123|[0, 1, 0, 1, 0]|
+------+---------------+

In general, though, a user may have the same expense type more than once, so the count can exceed 1 and needs to be mapped down to 0 or 1.

A UDF can binarize the counts to 0 or 1:

val binarizer = udf { count: Long => if (count > 0) 1 else 0 }
val binaryCols = bins
  .columns
  .filterNot(_ == "UserID")
  .map(col)
  .map(c => binarizer(c) as c.toString)
val selectCols = ($"UserID" as "User") +: binaryCols
val solution = bins
  .select(selectCols: _*)
  .select(
    $"User",
    array("MOVIE","GAMES","CLOTHING","MEDICAL","OTH") as "TypeExpList")
scala> solution.show
+------+---------------+
|  User|    TypeExpList|
+------+---------------+
|POQ133|[0, 0, 0, 1, 0]|
|JAS123|[1, 0, 1, 0, 1]|
|DPS123|[1, 0, 1, 0, 0]|
|ASP123|[0, 1, 0, 1, 0]|
+------+---------------+
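The binarizer's logic itself is independent of Spark: any positive count maps to 1. A minimal sketch in plain Python (the duplicate-MOVIE counts are an invented example):

```python
def binarize(count: int) -> int:
    """Mirror of the Scala UDF: any positive count becomes 1, zero stays 0."""
    return 1 if count > 0 else 0

# A user who spent on movies twice still gets a single flag of 1.
counts = {"MOVIE": 2, "GAMES": 0, "CLOTHING": 1}
flags = {k: binarize(v) for k, v in counts.items()}
print(flags)  # {'MOVIE': 1, 'GAMES': 0, 'CLOTHING': 1}
```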

You can use crosstab:

val table = df.stat.crosstab("UserID", "TypeExp")

+--------------+--------+-----+-------+-----+---+
|UserID_TypeExp|CLOTHING|GAMES|MEDICAL|MOVIE|OTH|
+--------------+--------+-----+-------+-----+---+
|        ASP123|       0|    1|      1|    0|  0|
|        DPS123|       1|    0|      0|    1|  0|
|        JAS123|       1|    0|      0|    1|  1|
|        POQ133|       0|    0|      1|    0|  0|
+--------------+--------+-----+-------+-----+---+

Then binarize the counts with the Dataset API:

// Each row: the first field is the user id, the rest are per-type counts.
table.map(_.toSeq match {
  case Seq(id: String, cnts @ _*) =>
    (id, cnts.map(c => if (c != 0) 1 else 0))
}).toDF("UserId", "TypeExp")

+------+---------------+
|UserId|        TypeExp|
+------+---------------+
|ASP123|[0, 1, 1, 0, 0]|
|DPS123|[1, 0, 0, 1, 0]|
|JAS123|[1, 0, 0, 1, 1]|
|POQ133|[0, 0, 1, 0, 0]|
+------+---------------+

This works the same way in Scala and PySpark with the DataFrame DSL. Pivot is available in Spark 1.6+.

val pivotDf = df.groupBy($"userid").pivot("typeexp").agg(count($"typeexp") )

pivotDf.show
+------+--------+-----+-------+-----+---+
|userid|CLOTHING|GAMES|MEDICAL|MOVIE|OTH|
+------+--------+-----+-------+-----+---+
|DPS123|       1|    0|      0|    1|  0|
|JAS123|       1|    0|      0|    1|  1|
|ASP123|       0|    1|      1|    0|  0|
|POQ133|       0|    0|      1|    0|  0|
+------+--------+-----+-------+-----+---+

pivotDf.selectExpr("userid", "array(movie, games, clothing, medical, oth) as TypeExpList")
       .show
+------+---------------+
|userid|    TypeExpList|
+------+---------------+
|DPS123|[1, 0, 1, 0, 0]|
|JAS123|[1, 0, 1, 0, 1]|
|ASP123|[0, 1, 0, 1, 0]|
|POQ133|[0, 0, 0, 1, 0]|
+------+---------------+

Source: https://habr.com/ru/post/1690049/

