How to flatten nested lists in PySpark?

I have an RDD structure like:

rdd = [[[1],[2],[3]], [[4],[5]], [[6]], [[7],[8],[9],[10]]]

and I want this to become:

rdd = [1,2,3,4,5,6,7,8,9,10]

How do I write a map or reduce function to make this work?
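
For context, such an RDD can be built from a plain Python list with parallelize (this assumes a live SparkContext named sc, which the snippet above doesn't show):

# assumes sc is an existing SparkContext
rdd = sc.parallelize([[[1], [2], [3]], [[4], [5]], [[6]], [[7], [8], [9], [10]]])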

1 answer

You can, for example, use flatMap with a list comprehension:

rdd.flatMap(lambda xs: [x[0] for x in xs])
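
This pulls the single element out of each inner list and lets flatMap flatten the remaining level, so it relies on every innermost list holding exactly one element, as in the example. Collecting the result (assuming the rdd built with sc.parallelize above):

# take the lone element from each inner list; flatMap flattens one level
rdd.flatMap(lambda xs: [x[0] for x in xs]).collect()
# [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]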

or, to make it a little more general:

from itertools import chain

rdd.flatMap(lambda xs: chain(*xs)).collect()
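
chain(*xs) concatenates the inner lists regardless of their length, so this version also works when the innermost lists hold more than one element. An equivalent, slightly lazier spelling is chain.from_iterable, which avoids unpacking xs eagerly (a sketch against the same assumed rdd as above):

from itertools import chain

# yield every element of every inner list without building an intermediate list
rdd.flatMap(lambda xs: chain.from_iterable(xs)).collect()
# [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]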

Source: https://habr.com/ru/post/1623676/

