PySpark: TypeError: condition should be string or Column

I am trying to filter rows out of the RDD as shown below:

spark_df = sc.createDataFrame(pandas_df)
spark_df.filter(lambda r: str(r['target']).startswith('good'))
spark_df.take(5)

But got the following errors:

TypeError                                 Traceback (most recent call last)
<ipython-input-8-86cfb363dd8b> in <module>()
      1 spark_df = sc.createDataFrame(pandas_df)
----> 2 spark_df.filter(lambda r: str(r['target']).startswith('good'))
      3 spark_df.take(5)

/usr/local/spark-latest/python/pyspark/sql/dataframe.py in filter(self, condition)
    904             jdf = self._jdf.filter(condition._jc)
    905         else:
--> 906             raise TypeError("condition should be string or Column")
    907         return DataFrame(jdf, self.sql_ctx)
    908 

TypeError: condition should be string or Column

Any idea what I missed? Thanks!

3 answers

DataFrame.filter, for which DataFrame.where is an alias, expects its condition as either a Column expression:

from pyspark.sql.functions import col
spark_df.filter(col("target").like("good%"))

or an equivalent SQL string:

spark_df.filter("target LIKE 'good%'")

I believe that you are trying to use RDD.filter, which is a completely different method:

spark_df.rdd.filter(lambda r: r['target'].startswith('good'))

and does not benefit from SQL optimizations.
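To make the difference concrete, here is a minimal runnable sketch (the sample data, column names and the local SparkSession are invented for illustration): the first two forms stay in the DataFrame API, while the RDD version hands plain Row objects to a Python lambda.

from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.master("local[*]").getOrCreate()

# Tiny stand-in for spark_df
spark_df = spark.createDataFrame(
    [("good_item", 1), ("bad_item", 2), ("good_deal", 3)],
    ["target", "value"],
)

# Column-based condition, handled by the SQL engine
spark_df.filter(col("target").like("good%")).show()

# Equivalent SQL string condition
spark_df.filter("target LIKE 'good%'").show()

# RDD-level filter: a Python lambda over Row objects; the result is an RDD, not a DataFrame
print(spark_df.rdd.filter(lambda r: r["target"].startswith("good")).collect())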


I ran into this as well and worked around it with a UDF:

from pyspark.sql.functions import udf
from pyspark.sql.types import BooleanType

# Wrap the Python predicate in a boolean UDF, then apply it to the column
starts_with_good = udf(lambda target: target.startswith('good'), BooleanType())
filtered_df = spark_df.filter(starts_with_good(spark_df.target))

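For this particular predicate a UDF is not strictly required; the built-in Column.startswith keeps the filter in the native expression API (a minimal sketch, assuming spark_df from the question):

filtered_df = spark_df.filter(spark_df.target.startswith('good'))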


Convert the DataFrame to an RDD:

spark_df = sc.createDataFrame(pandas_df)
filtered_rdd = spark_df.rdd.filter(lambda r: str(r['target']).startswith('good'))
filtered_rdd.take(5)

I think this should work!
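If you need the DataFrame API again after the RDD-level filter, you can convert the result back (a sketch, assuming a SparkSession named spark is available and filtered_rdd is the assignment above):

# The RDD filter yields Row objects, so the original schema can be reused
filtered_df = spark.createDataFrame(filtered_rdd, schema=spark_df.schema)
filtered_df.show(5)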

