Which is more scalable with Spark 1.6 (RPC): Netty or Akka?

Spark 1.6 can be configured to use either Akka or Netty for RPC. If Netty is configured, does this mean that Spark does not use an actor model for messaging (for example, between workers and the driver), or does even the Netty configuration implement a simplified actor model on top of Netty?

I think Akka itself relies on Netty, and Spark uses only a subset of Akka. Still, is Akka more tunable for scalability (in terms of the number of workers) compared to Netty? Any suggestions on this particular Spark configuration?

1 answer

Adding to @user6910411's pointer, which nicely explains the design side of this.

As explained by that reference, flexibility and removing the dependency on Akka was a design decision.

Question:

I think Akka itself relies on Netty, and Spark uses only a subset of Akka. Still, is Akka more tunable for scalability (in terms of the number of workers) compared to Netty? Any suggestions on this particular Spark configuration?

Yes, Spark 1.6 can be configured to use either Akka or Netty for RPC.

It can be configured via the spark.rpc property, i.e. val rpcEnvName = conf.get("spark.rpc", "netty"), which means the default value is netty.
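
For illustration, a minimal sketch of selecting the RPC backend on Spark 1.6 through SparkConf (the application name below is just a placeholder):

import org.apache.spark.SparkConf
import org.apache.spark.SparkContext

// Spark 1.6 only: pick the RPC implementation explicitly.
// "netty" is the default; "akka" switches to the Akka-based RpcEnv.
val conf = new SparkConf()
  .setAppName("rpc-demo")        // placeholder app name
  .set("spark.rpc", "akka")      // or "netty" (the default)

val sc = new SparkContext(conf)

The same key can also be passed on the command line, e.g. --conf spark.rpc=akka with spark-submit.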

Please see the Spark 1.6 code base.

Here is more detail on when to go for which...


Akka and Netty both deal with asynchronous processing and message handling, but they work at different levels w.r.t. scalability.

Akka is a higher-level framework for building event-driven, scalable, fault-tolerant applications. It is centered on the Actor model for message processing. Actors are arranged hierarchically, and parent actors are responsible for supervising their child actors.
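
To make the actor model concrete, here is a minimal, hedged sketch of a plain Akka actor (the names Worker and Demo and the message strings are invented for illustration, not anything Spark defines):

import akka.actor.{Actor, ActorSystem, Props}

// A trivial actor: it handles messages one at a time and never shares mutable state.
class Worker extends Actor {
  def receive: Receive = {
    case task: String =>
      // reply to whoever sent the message
      sender() ! s"done: $task"
  }
}

object Demo extends App {
  val system = ActorSystem("demo")
  val worker = system.actorOf(Props[Worker], "worker") // created under a parent that supervises it
  worker ! "compute partition 0"                       // asynchronous, fire-and-forget send
}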

Netty also works around messages, but at a somewhat lower level, and it deals more with networking. It has NIO at its core. Netty offers support for many protocols such as HTTP, FTP, SSL, etc., and gives you fine-grained control over the networking model.
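
For comparison, a hedged sketch of what working at Netty's level looks like: a bare-bones echo server on the Netty 4 NIO transport (the port and handler are placeholders; Spark's actual NettyRpcEnv is far more involved):

import io.netty.bootstrap.ServerBootstrap
import io.netty.channel.{ChannelHandlerContext, ChannelInboundHandlerAdapter, ChannelInitializer}
import io.netty.channel.nio.NioEventLoopGroup
import io.netty.channel.socket.SocketChannel
import io.netty.channel.socket.nio.NioServerSocketChannel

object EchoServer {
  def main(args: Array[String]): Unit = {
    val bossGroup = new NioEventLoopGroup(1)   // accepts incoming connections
    val workerGroup = new NioEventLoopGroup()  // handles I/O on accepted channels
    try {
      val bootstrap = new ServerBootstrap()
        .group(bossGroup, workerGroup)
        .channel(classOf[NioServerSocketChannel])
        .childHandler(new ChannelInitializer[SocketChannel] {
          override def initChannel(ch: SocketChannel): Unit =
            ch.pipeline().addLast(new ChannelInboundHandlerAdapter {
              // echo every inbound buffer straight back to the client
              override def channelRead(ctx: ChannelHandlerContext, msg: AnyRef): Unit =
                ctx.writeAndFlush(msg)
            })
        })
      val channel = bootstrap.bind(8080).sync().channel()   // placeholder port
      channel.closeFuture().sync()
    } finally {
      workerGroup.shutdownGracefully()
      bossGroup.shutdownGracefully()
    }
  }
}

Note how the event loops, the channel pipeline and the handlers are all wired by hand; that is the fine-grained control mentioned above.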

Netty is actually used inside Akka: classic Akka remoting (the distributed-actor layer) runs on top of a Netty-based TCP transport.
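
A small sketch of that relationship, assuming classic Akka remoting (the style of remoting from the Akka 2.3.x era; hostname and port are placeholders):

import akka.actor.ActorSystem
import com.typesafe.config.ConfigFactory

// Enabling classic remoting makes Akka pull in its Netty-based TCP transport.
val remoteConfig = ConfigFactory.parseString("""
  akka.actor.provider = "akka.remote.RemoteActorRefProvider"
  akka.remote.enabled-transports = ["akka.remote.netty.tcp"]
  akka.remote.netty.tcp.hostname = "127.0.0.1"
  akka.remote.netty.tcp.port = 2552
""")

val system = ActorSystem("remoteDemo", remoteConfig)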

In other words, Akka gives you the higher-level abstraction, while Netty, being lower level, leaves more room for fine-grained control and therefore for tuning scalability by hand.

Conclusion: both Netty and Akka scale well enough for Spark's RPC. But please note that from Spark 2.x onwards Netty is the only option; Akka is gone, and the spark.rpc flag, i.e. val rpcEnvName = conf.get("spark.rpc", "netty"), no longer exists there. See RpcEnv.scala in the Spark 2.0 code base.


Source: https://habr.com/ru/post/1664843/

