This can be done with aggregations, but that approach has higher complexity than the pandas method. You can get similar performance with a UDF. It won't be as elegant as pandas, but:
Assuming this set of holiday data:
holidays = ['2016-01-03', '2016-09-09', '2016-12-12', '2016-03-03']
index = spark.sparkContext.broadcast(sorted(holidays))
And a dataset of the dates of 2016 in a DataFrame:
from datetime import datetime, timedelta
from pyspark.sql import Row

dates_array = [(datetime(2016, 1, 1) + timedelta(i)).strftime('%Y-%m-%d') for i in range(366)]
df = spark.createDataFrame([Row(date=d) for d in dates_array])
The UDF could use pandas searchsorted, but that would require pandas to be installed on the executors. I found that plain Python works just as well:
def nearest_holiday(date):
    last_holiday = index.value[0]
    for next_holiday in index.value:
        if next_holiday >= date:
            break
        last_holiday = next_holiday

    # no holiday on or before the date
    if last_holiday > date:
        last_holiday = None
    # no holiday on or after the date
    if next_holiday < date:
        next_holiday = None

    return (last_holiday, next_holiday)

from pyspark.sql.types import StructType, StructField, StringType
return_type = StructType([
    StructField('last_holiday', StringType()),
    StructField('next_holiday', StringType())
])

from pyspark.sql.functions import udf
nearest_holiday_udf = udf(nearest_holiday, return_type)
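As an aside, the pandas searchsorted variant mentioned above could look roughly like the sketch below. The function name nearest_holiday_pd is mine, the logic is meant to match the plain Python version, and it assumes pandas is available on every executor:

import pandas as pd

def nearest_holiday_pd(date):
    holidays_sorted = index.value
    # index of the first holiday that is >= date
    pos = int(pd.Index(holidays_sorted).searchsorted(date))
    next_holiday = holidays_sorted[pos] if pos < len(holidays_sorted) else None
    if pos < len(holidays_sorted) and holidays_sorted[pos] == date:
        last_holiday = date                      # the date itself is a holiday
    elif pos > 0:
        last_holiday = holidays_sorted[pos - 1]  # most recent holiday before the date
    else:
        last_holiday = None                      # date is before the first holiday
    return (last_holiday, next_holiday)

nearest_holiday_pd_udf = udf(nearest_holiday_pd, return_type)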
It can then be used with withColumn:
df.withColumn('holiday', nearest_holiday_udf('date')).show(5, False)

+----------+-----------------------+
|date      |holiday                |
+----------+-----------------------+
|2016-01-01|[null,2016-01-03]      |
|2016-01-02|[null,2016-01-03]      |
|2016-01-03|[2016-01-03,2016-01-03]|
|2016-01-04|[2016-01-03,2016-03-03]|
|2016-01-05|[2016-01-03,2016-03-03]|
+----------+-----------------------+
only showing top 5 rows
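For completeness, the aggregation approach mentioned at the start could be sketched as below. This is only one possible version (the holidays_df and agg_df names are mine): it cross joins the dates against the small list of holidays and then takes, per date, the latest holiday on or before it and the earliest holiday on or after it. crossJoin requires Spark 2.1+; on older versions a plain join without a condition would do the same.

from pyspark.sql import functions as F

holidays_df = spark.createDataFrame([(h,) for h in holidays], ['holiday'])

agg_df = (df.crossJoin(holidays_df)
            .groupBy('date')
            .agg(F.max(F.when(F.col('holiday') <= F.col('date'), F.col('holiday'))).alias('last_holiday'),
                 F.min(F.when(F.col('holiday') >= F.col('date'), F.col('holiday'))).alias('next_holiday')))

agg_df.orderBy('date').show(5, False)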