Adding to @user6910411's pointer, which nicely explains this in terms of design: as the reference explains, flexibility and removal of the dependence on Akka was a deliberate design decision.
Question:
I think Akka itself relies on Netty, and Spark uses only a subset of Akka. However, is Akka better for scalability (in terms of number of workers) compared to Netty? Any suggestions on this particular Spark configuration?
Yes, Spark 1.6 can be configured to use either Akka or Netty for RPC.
It can be configured using the spark.rpc flag, i.e. val rpcEnvName = conf.get("spark.rpc", "netty"), which means the default value is netty.
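To make the lookup-with-default behavior concrete, here is a minimal sketch in plain Scala (no Spark dependency; the Map stands in for SparkConf, and RpcBackendSketch is a hypothetical name) of how that line resolves the backend:

```scala
// Sketch of Spark 1.6's RPC backend selection: look up "spark.rpc",
// falling back to "netty" when the key is not set.
object RpcBackendSketch {
  // Stand-in for SparkConf: a simple key/value map of settings.
  def rpcEnvName(settings: Map[String, String]): String =
    settings.getOrElse("spark.rpc", "netty") // default is netty

  def main(args: Array[String]): Unit = {
    println(rpcEnvName(Map.empty))                  // no setting: netty
    println(rpcEnvName(Map("spark.rpc" -> "akka"))) // explicit override: akka
  }
}
```

So leaving spark.rpc unset gives you Netty, and setting it to akka switches the backend in Spark 1.6.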
Please see the Spark 1.6 code base.
Here is more detailed information on how and when to go for which...
Akka and Netty both deal with asynchronous processing and message handling, but they work at different levels w.r.t. scalability.
Akka is a higher-level framework for building event-driven, scalable, fault-tolerant applications. It is centered on the Actor model for message processing: actors are arranged hierarchically, and a parent actor supervises its child actors.
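The core actor idea can be sketched in a few lines of plain Scala. This is not the real Akka API (SketchActor and drain are invented names for illustration); it only shows the essential property that a send merely enqueues a message into a mailbox, and the actor later processes its mailbox one message at a time:

```scala
import scala.collection.mutable

// Toy mailbox-based "actor": not Akka, just the underlying idea.
class SketchActor(name: String) {
  private val mailbox = mutable.Queue[String]()
  val processed = mutable.ListBuffer[String]()

  // "Send" a message: enqueue only, no processing happens here.
  def !(msg: String): Unit = mailbox.enqueue(msg)

  // Process queued messages strictly one at a time, in arrival order.
  def drain(): Unit =
    while (mailbox.nonEmpty) processed += s"$name handled ${mailbox.dequeue()}"
}

object ActorSketch {
  def main(args: Array[String]): Unit = {
    val a = new SketchActor("worker")
    a ! "task1"
    a ! "task2"
    a.drain()
    a.processed.foreach(println)
  }
}
```

Real Akka adds scheduling, supervision, and network transparency on top of this mailbox discipline, which is why you reason in your problem domain rather than in sockets.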
Netty also works around messages, but it sits at a lower level and deals more with networking. It has NIO at its core. Netty supports many protocols such as HTTP, FTP, SSL, etc., and gives you fine-grained control over the networking layer.
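To show the level Netty operates at, here is a minimal sketch of the java.nio primitives it builds on, channels plus byte buffers, using an in-process Pipe so it runs without sockets (NioSketch and roundTrip are illustrative names, not Netty API):

```scala
import java.nio.ByteBuffer
import java.nio.channels.Pipe
import java.nio.charset.StandardCharsets

object NioSketch {
  // Write bytes into a channel, read them back through a buffer.
  def roundTrip(msg: String): String = {
    val pipe = Pipe.open()
    // Sink side: wrap the payload in a ByteBuffer and write it to the channel.
    pipe.sink().write(ByteBuffer.wrap(msg.getBytes(StandardCharsets.UTF_8)))
    // Source side: read into a buffer, then flip it from write to read mode.
    val buf = ByteBuffer.allocate(64)
    pipe.source().read(buf)
    buf.flip()
    new String(buf.array(), 0, buf.limit(), StandardCharsets.UTF_8)
  }

  def main(args: Array[String]): Unit =
    println(roundTrip("hello"))
}
```

Netty wraps exactly this kind of channel/buffer plumbing (plus selectors, event loops, and protocol codecs), whereas Akka hides it behind actors.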
Netty is actually used inside Akka for its remote (distributed) actors.
So, even though both are asynchronous and message-oriented, with Akka you think more abstractly in your problem domain, while with Netty you are more focused on the networking implementation.
Conclusion: Netty and Akka are both equally scalable. But please note that from Spark 2.0 onwards only Netty is used and Akka support is gone, meaning the spark.rpc flag, i.e. val rpcEnvName = conf.get("spark.rpc", "netty"), is no longer available in Spark 2.0. Please see RpcEnv.scala.