The single-process version of your code should simply be lists:max/1 . A useful function for parallelizing code is pmap/2:
pmap(Fun, List) ->
    Parent = self(),
    P = fun(Elem) ->
            Ref = make_ref(),
            spawn_link(fun() -> Parent ! {Ref, Fun(Elem)} end),
            Ref
        end,
    Refs = [P(Elem) || Elem <- List],
    lists:map(fun(Ref) -> receive {Ref, Elem} -> Elem end end, Refs).
pmap/2 applies Fun to each member of List in parallel and collects the results in input order. To use pmap with this problem, you need to segment the original list into a list of lists and pass that to pmap, e.g. lists:max(pmap(fun lists:max/1, ListOfLists)) . Of course, splitting the list would itself cost more than just calling lists:max/1 on it, so this approach only makes sense if the list is already pre-segmented. Even then, the overhead of copying the sublists to the worker processes is likely to outweigh any gain from parallelization - especially on a single node. A sketch of how the segmentation could look is shown below.
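For illustration only, here is one way the pre-segmentation and the call to pmap/2 could be wired together. chunks/2 and parallel_max/2 are hypothetical helpers I am introducing here; they are not part of the answer above:

    %% Split List into sublists of at most N elements (hypothetical helper).
    chunks(List, N) when N > 0 ->
        case length(List) of
            Len when Len =< N -> [List];
            _ ->
                {Chunk, Rest} = lists:split(N, List),
                [Chunk | chunks(Rest, N)]
        end.

    %% Find the maximum by running lists:max/1 on each chunk in parallel,
    %% then taking the maximum of the per-chunk results.
    parallel_max(List, NumChunks) ->
        ChunkSize = max(1, length(List) div NumChunks),
        lists:max(pmap(fun lists:max/1, chunks(List, ChunkSize))).

For example, parallel_max(lists:seq(1, 1000000), 4) returns 1000000, but note that the chunking and message passing here will usually cost more than the comparison work it distributes.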
The inherent problem in your situation is that the computation for each subtask is tiny compared to the overhead of managing the data. Tasks that are more computation-intensive (for example, factoring a list of large numbers) parallelize much more readily.
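To make that concrete, here is a deliberately CPU-heavy per-element task, a naive trial-division factorization. factors/1 is a hypothetical helper used only to illustrate the point; with this much work per element, the message-passing overhead of pmap/2 becomes negligible relative to the computation:

    %% Naive trial-division factorization (illustrative sketch only).
    factors(N) -> factors(N, 2, []).

    factors(N, D, Acc) when D * D > N -> lists:reverse([N | Acc]);
    factors(N, D, Acc) when N rem D =:= 0 -> factors(N div D, D, [D | Acc]);
    factors(N, D, Acc) -> factors(N, D + 1, Acc).

    %% e.g. pmap(fun factors/1, [1000000007, 1000000009, 999999937]).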
This does not mean that finding the maximum value cannot be parallelized, but I believe it requires your data to be pre-segmented, or segmented in a way that does not require copying every value.