PySpark: take average column value after using filter function

I use the following code to get the average age of people whose salaries are above a certain threshold.

dataframe.filter(df['salary'] > 100000).agg({"avg": "age"})

The age column is numeric (float), but I still get this error:

py4j.protocol.Py4JJavaError: An error occurred while calling o86.agg. 
: scala.MatchError: age (of class java.lang.String)

Do you know any other way to get avg, etc., without using the groupBy function or SQL queries?

1 answer

In the dictionary passed to agg, the key must be the column name and the value the aggregation function (the question has them swapped):

dataframe.filter(df['salary'] > 100000).agg({"age": "avg"})
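
For reference, here is a minimal, self-contained sketch of the fix; the sample data and SparkSession setup are assumptions for illustration, not part of the original question:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Toy data: two rows pass the salary filter (ages 34.0 and 29.0)
dataframe = spark.createDataFrame(
    [("Alice", 120000, 34.0), ("Bob", 90000, 45.0), ("Carol", 150000, 29.0)],
    ["name", "salary", "age"],
)
df = dataframe  # the question uses both names for the same DataFrame

dataframe.filter(df["salary"] > 100000).agg({"age": "avg"}).show()
# +--------+
# |avg(age)|
# +--------+
# |    31.5|
# +--------+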

Alternatively, you can use pyspark.sql.functions:

from pyspark.sql.functions import col, avg

dataframe.filter(df['salary'] > 100000).agg(avg(col("age")))
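
Note that avg also accepts the column name as a string, so the col wrapper is optional:

dataframe.filter(df['salary'] > 100000).agg(avg("age"))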

You can also use CASE ... WHEN:

from pyspark.sql.functions import avg, when

dataframe.select(avg(when(df['salary'] > 100000, df['age'])))
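
This works because when without an otherwise clause returns NULL for rows that don't match the condition, and avg ignores NULLs, so the conditional average is computed in a single pass without filtering first. You can add an alias for a cleaner output column name:

dataframe.select(avg(when(df['salary'] > 100000, df['age'])).alias("avg_age"))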