You can combine NumPy with the RDD API. First, a bunch of imports:
    from operator import itemgetter

    import numpy as np
    from pyspark.statcounter import StatCounter
Define several variables:
    keys = ["key1", "key2", "key3"]  # list of key column names
    xs = ["x1", "x2", "x3"]          # list of column names to compare
    y = "y"                          # name of the reference column
And some helpers:
    def as_pair(keys, y, xs):
        """Given key names, y name and xs names,
        return a tuple of (key, array-of-values)."""
        key = itemgetter(*keys)
        value = itemgetter(y, *xs)

        def as_pair_(row):
            return key(row), np.array(value(row))
        return as_pair_


    def init(x):
        """Init function for combineByKey:
        initialize a new StatCounter and merge the first value."""
        return StatCounter().merge(x)


    def center(means):
        """Center a row value given a dictionary of mean arrays."""
        def center_(row):
            key, value = row
            return key, value - means[key]
        return center_


    def prod(arr):
        """Multiply the first element (y) by the rest (xs), element-wise."""
        return arr[0] * arr[1:]


    def corr(stddev_prods):
        """Scale a row to unit stddev, given a
        dictionary of stddev products."""
        def corr_(row):
            key, value = row
            return key, value / stddev_prods[key]
        return corr_
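To make the helpers concrete, here is what as_pair produces for a single hypothetical Row (matching the schema of the sample data shown further below):

    from pyspark.sql import Row

    row = Row(key1="a", key2="b", key3="c", y=0.5, x1=0.5, x2=0.3, x3=1.0)
    as_pair(["key1", "key2", "key3"], "y", ["x1", "x2", "x3"])(row)
    # -> (('a', 'b', 'c'), array([0.5, 0.5, 0.3, 1. ]))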
and convert the DataFrame into an RDD of pairs:
    pairs = df.rdd.map(as_pair(keys, y, xs))
Next, compute the statistics for each group:
    stats = (pairs
        .combineByKey(init, StatCounter.merge, StatCounter.mergeStats)
        .collectAsMap())

    means = {k: v.mean() for k, v in stats.items()}
Note: with 5,000 features and 7,000 groups there should be no problem keeping this structure in memory. With very large datasets you may have to use an RDD and join instead, but that will be slower.
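The reason a single StatCounter can aggregate whole rows at once is that pyspark.statcounter falls back to NumPy's maximum, minimum and sqrt when NumPy is installed, so all updates work element-wise on arrays. A minimal standalone check:

    import numpy as np
    from pyspark.statcounter import StatCounter

    s = StatCounter().merge(np.array([1.0, 2.0])).merge(np.array([3.0, 6.0]))
    s.mean()   # array([2., 4.])
    s.stdev()  # array([1., 2.])  -- population standard deviation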
Center the data:
    centered = pairs.map(center(means))
Calculate the covariance as the mean of products of centered values:
    covariance = (centered
        .mapValues(prod)
        .combineByKey(init, StatCounter.merge, StatCounter.mergeStats)
        .mapValues(StatCounter.mean))
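To see why this yields the covariance, note that the population covariance is exactly the mean of products of centered values. A quick plain-NumPy sanity check, using the y and x2 values of group ('a', 'b', 'c') from the sample data shown below:

    import numpy as np

    y_ = np.array([0.5, 0.8, 1.5])
    x_ = np.array([0.3, 0.9, 2.9])
    yc, xc = y_ - y_.mean(), x_ - x_.mean()

    np.mean(yc * xc)                 # mean of centered products
    np.cov(y_, x_, bias=True)[0, 1]  # population covariance -- same value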
And finally, the correlation:
    stddev_prods = {k: prod(v.stdev()) for k, v in stats.items()}

    correlations = covariance.map(corr(stddev_prods))
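Dividing the covariance by the product of the standard deviations gives the Pearson correlation. Continuing the NumPy check from above:

    np.mean(yc * xc) / (y_.std() * x_.std())  # -> 0.9972300220940342
    np.corrcoef(y_, x_)[0, 1]                 # same value

Note that StatCounter.stdev is the population standard deviation, which matches np.std's default and is the right normalization here.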
Sample data:
    df = sc.parallelize([
        ("a", "b", "c", 0.5, 0.5, 0.3, 1.0),
        ("a", "b", "c", 0.8, 0.8, 0.9, -2.0),
        ("a", "b", "c", 1.5, 1.5, 2.9, 3.6),
        ("d", "e", "f", -3.0, 4.0, 5.0, -10.0),
        ("d", "e", "f", 15.0, -1.0, -5.0, 10.0),
    ]).toDF(["key1", "key2", "key3", "y", "x1", "x2", "x3"])
Results with the DataFrame API for comparison:
    from pyspark.sql.functions import corr  # note: shadows the corr helper above

    df.groupBy(*keys).agg(*[corr(y, x) for x in xs]).show()
    +----+----+----+-----------+------------------+------------------+
    |key1|key2|key3|corr(y, x1)|       corr(y, x2)|       corr(y, x3)|
    +----+----+----+-----------+------------------+------------------+
    |   d|   e|   f|       -1.0|              -1.0|               1.0|
    |   a|   b|   c|        1.0|0.9972300220940342|0.6513360726920862|
    +----+----+----+-----------+------------------+------------------+
and with the method above:
    correlations.collect()
    [(('a', 'b', 'c'), array([ 1.        ,  0.99723002,  0.65133607])),
     (('d', 'e', 'f'), array([-1., -1.,  1.]))]
This solution, although a little involved, is quite flexible and can easily be adapted to different data distributions. It should also be possible to get a further boost from a JIT compiler.
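For example, a minimal sketch of what a JIT-compiled helper could look like, assuming Numba is installed on the workers (purely illustrative, not part of the solution above):

    from numba import njit

    @njit
    def prod_jit(arr):
        # same logic as prod above, compiled to native code
        return arr[0] * arr[1:]

For per-row arithmetic this simple the gain may be modest; JIT compilation pays off mostly when the per-record work grows.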