Why is "Cannot call methods on a stopped SparkContext" that is called when connecting to Spark Standalone from a Java application?

I downloaded Apache Spark 1.4.1, pre-built for Hadoop 2.6 and later. I have two Ubuntu 14.04 machines. On one of them I set up the Spark master with one slave, and on the second machine one more Spark slave. When I execute ./sbin/start-all.sh, the master and the slaves start up successfully. After that, I ran the sample PI program in spark-shell, passing --master spark://192.168.0.105:7077, the Spark master URL shown in the Spark web UI.

So far, everything is working fine.

I created a Java application and tried to configure it to run Spark jobs when needed. I added the Spark dependencies to pom.xml:

        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-core_2.11</artifactId>
            <version>1.4.1</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-streaming_2.11</artifactId>
            <version>1.4.1</version>
        </dependency>

I create the SparkConf like this:

private SparkConf sparkConfig = new SparkConf(true)
            .setAppName("Spark Worker")
            .setMaster("spark://192.168.0.105:7077");

And the SparkContext is created from that SparkConf:

private SparkContext sparkContext = new SparkContext(sparkConfig);
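
For reference, a minimal, self-contained sketch of the same setup (the class name and the trivial count job are only illustrative, not part of the actual application); it uses JavaSparkContext, the Java-friendly wrapper around SparkContext:

    import java.util.Arrays;

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaSparkContext;

    public class SparkWorkerSketch {
        public static void main(String[] args) {
            // Same configuration as above; the master URL points to the standalone cluster.
            SparkConf sparkConfig = new SparkConf(true)
                    .setAppName("Spark Worker")
                    .setMaster("spark://192.168.0.105:7077");

            // JavaSparkContext wraps SparkContext and exposes the Java API.
            JavaSparkContext sparkContext = new JavaSparkContext(sparkConfig);

            // A trivial action, just to check that the cluster accepts work.
            long count = sparkContext.parallelize(Arrays.asList(1, 2, 3, 4, 5)).count();
            System.out.println("count = " + count);

            sparkContext.stop();
        }
    }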

When I run the Java application, the following exception is thrown:

java.lang.IllegalStateException: Cannot call methods on a stopped SparkContext
    at org.apache.spark.SparkContext.org$apache$spark$SparkContext$$assertNotStopped(SparkContext.scala:103)
    at org.apache.spark.SparkContext.getSchedulingMode(SparkContext.scala:1503)
    at org.apache.spark.SparkContext.postEnvironmentUpdate(SparkContext.scala:2007)
    at org.apache.spark.SparkContext.<init>(SparkContext.scala:543)
    at com.storakle.dataimport.spark.StorakleSparkConfig.getSparkContext(StorakleSparkConfig.java:37)
    at com.storakle.dataimport.reportprocessing.DidNotBuyProductReport.prepareReportData(DidNotBuyProductReport.java:25)
    at com.storakle.dataimport.messagebroker.RabbitMQMessageBroker$1.handleDelivery(RabbitMQMessageBroker.java:56)
    at com.rabbitmq.client.impl.ConsumerDispatcher$5.run(ConsumerDispatcher.java:144)
    at com.rabbitmq.client.impl.ConsumerWorkService$WorkPoolRunnable.run(ConsumerWorkService.java:99)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

If I set the Spark master to local, everything works fine:

private SparkConf sparkConfig = new SparkConf(true)
                .setAppName("Spark Worker")
                .setMaster("local");

I run the Java application on the same machine that hosts the Spark master and one of the Spark slaves.

Any ideas why this might happen? As far as I can tell, I am using the correct URL of the Spark Master.

Could the problem be in the pom.xml configuration, or is it something else?


As far as I know, the pre-built Spark 1.4.1 distribution is built with Scala 2.10. Use spark-core_2.10 and spark-streaming_2.10 instead of the 2.11 artifacts: spark-core_2.11 is incompatible with a Spark build made for Scala 2.10.
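
With the Scala suffix changed, the dependency section of pom.xml would look like this (the version stays 1.4.1, only the artifact suffix changes):

        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-core_2.10</artifactId>
            <version>1.4.1</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-streaming_2.10</artifactId>
            <version>1.4.1</version>
        </dependency>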

Alternatively, you can build Spark for Scala 2.11 yourself:

http://spark.apache.org/docs/latest/building-spark.html#building-for-scala-211


Source: https://habr.com/ru/post/1615021/

