Pyspark dataframe filter or include based on list

I am trying to filter a dataframe in pyspark using a list. I want to either filter out records whose score is in the list, or include only records whose score is in the list. My code below does not work:

# define a dataframe
rdd = sc.parallelize([(0,1), (0,1), (0,2), (1,2), (1,10), (1,20), (3,18), (3,18), (3,18)])
df = sqlContext.createDataFrame(rdd, ["id", "score"])

# define a list of scores
l = [10,18,20]

# filter out records whose score is in list l
records = df.filter(df.score in l)
# expected: (0,1), (0,1), (0,2), (1,2)

# include only records with these scores in list l
records = df.where(df.score in l)
# expected: (1,10), (1,20), (3,18), (3,18), (3,18)

Both attempts give the following error: ValueError: cannot convert column to bool: use '&' for 'and', '|' for 'or', '~' for 'not' when building DataFrame Boolean expressions.

1 answer

The error says that "df.score in l" can not be evaluated, because df.score gives you a Column and "in" is not defined on that Column type; use the "isin" method instead.

The code should be like this:

# define a dataframe
rdd = sc.parallelize([(0,1), (0,1), (0,2), (1,2), (1,10), (1,20), (3,18), (3,18), (3,18)])
df = sqlContext.createDataFrame(rdd, ["id", "score"])

# define a list of scores
l = [10,18,20]

# filter out records whose score is in list l
records = df.filter(~df.score.isin(l))
# expected: (0,1), (0,1), (0,2), (1,2)

# include only records with these scores in list l
records = df.where(df.score.isin(l))
# expected: (1,10), (1,20), (3,18), (3,18), (3,18)
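
For comparison, here is a minimal, self-contained sketch of the same filtering written against a SparkSession (the newer entry point) and pyspark.sql.functions.col; the app name and variable names here are my own and are not part of the original answer.

from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("isin-example").getOrCreate()

# same data as above
df = spark.createDataFrame(
    [(0, 1), (0, 1), (0, 2), (1, 2), (1, 10), (1, 20), (3, 18), (3, 18), (3, 18)],
    ["id", "score"],
)

l = [10, 18, 20]

# keep only rows whose score is in l
df.where(col("score").isin(l)).show()

# drop rows whose score is in l
df.filter(~col("score").isin(l)).show()

Either form builds the same Column expression; isin also accepts the values unpacked, e.g. df.score.isin(10, 18, 20).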

Source: https://habr.com/ru/post/1659814/

