Apache Spark error: not found: value sqlContext

I am trying to set up Spark on Windows 10. At first I ran into this error during startup, and this helped me solve it. But I still cannot run import sqlContext.sql, as it still raises an error:

----------------------------------------------------------------
Fri Mar 24 12:07:05 IST 2017:
Booting Derby version The Apache Software Foundation - Apache Derby - 10.12.1.1 - (1704137): instance a816c00e-015a-ff08-6530-00000ac1cba8
on database directory C:\metastore_db with class loader org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1@37606fee
Loaded from file:/F:/Soft/spark/spark-2.1.0-bin-hadoop2.7/bin/../jars/derby-10.12.1.1.jar
java.vendor=Oracle Corporation
java.runtime.version=1.8.0_101-b13
user.dir=C:\
os.name=Windows 10
os.arch=amd64
os.version=10.0
derby.system.home=null
Database Class Loader started - derby.database.classpath=''
17/03/24 12:07:09 WARN ObjectStore: Failed to get database global_temp, returning NoSuchObjectException
Spark context Web UI available at http://10.128.18.22:4040
Spark context available as 'sc' (master = local[*], app id = local-1490337421381).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.1.0
      /_/

Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_101)
Type in expressions to have them evaluated.
Type :help for more information.

scala> import sqlContext.sql
<console>:23: error: not found: value sqlContext
       import sqlContext.sql
              ^
+10

5 answers

Spark context available as 'sc' (master = local[*], app id = local-1490337421381).

Spark session available as 'spark'.

In Spark 2.0.x, the entry point to Spark is SparkSession, and it is available in the Spark shell as spark, so try it this way:

spark.sqlContext.sql(...)

You can also create your own SQL context from the Spark context, like this:

val sqlContext = new org.apache.spark.sql.SQLContext(sc)

The first option is preferable, as the Spark shell has already created one for you, so make use of it.
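
Putting the two options together, a minimal sketch for the Spark 2.x shell (it relies on the spark and sc values the shell creates for you, as the startup banner shows; the SELECT 1 query is just an illustration):

// Option 1: go through the session the shell already created
spark.sqlContext.sql("SELECT 1 AS test").show()

// Option 2: create a legacy SQLContext yourself; after that the original import compiles
val sqlContext = new org.apache.spark.sql.SQLContext(sc)
import sqlContext.sql
sql("SELECT 1 AS test").show()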

+28

If you are on Cloudera and run into this issue, the solution from this GitHub ticket worked for me (https://github.com/cloudera/clusterdock/issues/30):

The root user (which you are running as when you start spark-shell) has no user directory in HDFS. If you create one, as shown below, this should be fixed.

In other words, create an HDFS home directory for the user running spark-shell.
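
The two commands from the ticket spelled out, plus a check that the directory exists afterwards (the -ls call is my addition, not from the ticket; the /user/root path assumes spark-shell is started as root):

sudo -u hdfs hdfs dfs -mkdir /user/root
sudo -u hdfs hdfs dfs -chown root:root /user/root
hdfs dfs -ls /user   # should now list /user/root, owned by root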

+1

Since you are using Spark 2.1, you have to go through the SparkSession object. You can get a reference to the SparkContext from the SparkSession:

var sSession = org.apache.spark.sql.SparkSession.builder().getOrCreate();
var sContext = sSession.sparkContext;
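
From there, a sketch of how to satisfy the import from the question (the sqlContext name is mine, picked to match the question; SparkSession keeps a SQLContext around for backward compatibility):

val sqlContext = sSession.sqlContext
import sqlContext.sql
sql("SHOW TABLES").show()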
0

I hit the same thing on Azure HDI Spark. While loading a csv from Azure Blob storage, I got <console>:23: error: not found: value sqlContext on the line import sqlContext.implicits._. Here is the code:

import sqlContext.implicits._

val flightDelayTextLines = sc.textFile("wasb://sparkcontainer@kademoappsparkstorage.blob.core.windows.net/sparkcontainer/Scored_FlightsAndWeather.csv")

case class AirportFlightDelays(OriginAirportCode: String, OriginLatLong: String, Month: Integer, Day: Integer, Hour: Integer, Carrier: String, DelayPredicted: Integer, DelayProbability: Double)

val flightDelayRowsWithoutHeader = flightDelayTextLines.map(s => s.split(",")).filter(line => line(0) != "OriginAirportCode")

val resultDataFrame = flightDelayRowsWithoutHeader.map(s => AirportFlightDelays(
  s(0),                // OriginAirportCode
  s(13) + "," + s(14), // Lat, Long
  s(1).toInt,          // Month
  s(2).toInt,          // Day
  s(3).toInt,          // Hour
  s(5),                // Carrier
  s(11).toInt,         // DelayPredicted
  s(12).toDouble       // DelayProbability
)).toDF()

resultDataFrame.write.mode("overwrite").saveAsTable("FlightDelays") // the save mode string was lost in the source; "overwrite" is an assumption
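
For what it is worth, the import at the top of that snippet only compiles once a sqlContext value is in scope, so here is a sketch of the fix in a Spark 2.x shell, reusing the shell-provided spark session as in the answers above:

val sqlContext = spark.sqlContext   // or: new org.apache.spark.sql.SQLContext(sc)
import sqlContext.implicits._       // brings toDF() into scope for the RDD rows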

0

Below is a link with a large number of examples of how the SQL context can be imported and used in various ways: Examples of how to import and use Spark SQLContext.

0