Table of Contents

  • Accessing Hive
    • Integrating SparkSQL with Hive
    • Accessing Hive Tables
    • Connecting SparkSQL to Hive from IDEA

Accessing Hive

Overview

1. Integrate SparkSQL with Hive, using Hive's MetaStore as the metadata store
2. Query Hive tables with SparkSQL
3. Work through examples using common HiveSQL
4. Write data into a Hive table

Integrating SparkSQL with Hive

Overview

1. Start Hive's MetaStore as a standalone process
2. Integrate SparkSQL with Hive's MetaStore

Unlike a file format, Hive is an external data store and query engine, so before Spark can access Hive, the two have to be integrated.

What needs to be integrated?

When discussing how to integrate SparkSQL with Hive, the first question is what Hive actually provides; whatever it provides is what we integrate.

  • MetaStore, the metadata store
    SparkSQL ships with a built-in MetaStore that keeps metadata in an embedded Derby database. For production, however, Hive's MetaStore is preferable: it is more mature and more capable, and it lets Spark reuse Hive's existing metadata.
  • Query engine
    SparkSQL has built-in support for HiveSQL, so no integration is needed here.

Why Hive's MetaStore needs to be started

Hive's MetaStore is a Hive component, a program shipped with Hive, that stores and serves table metadata. The overall structure of Hive looks roughly like this:

(Hive architecture diagram)

As the diagram shows, Hive really has only three main components: HiveServer2 accepts query requests from external systems (for example over JDBC) and hands them to the Driver; the Driver first asks the MetaStore where the table is stored, then runs MR jobs against HDFS to produce the result, which is returned to the caller.

Hive's MetaStore therefore matters a great deal to SparkSQL: if SparkSQL can access Hive's MetaStore directly, it can in principle do the same things Hive does, such as querying data through Hive tables.

Hive's MetaStore can run in three modes:

  • Embedded Derby mode

This one needs little discussion: it is for testing only. An embedded database is not realistic for production, partly because it is unstable and partly because Derby allows only a single connection and has no concurrency.

  • Local mode

Both Local and Remote mode store the metadata in a MySQL database, but in Local mode the MetaStore has no process of its own and lives inside the HiveServer2 process.

  • Remote mode

Like Local mode, metadata is stored in a MySQL database, but in Remote mode the MetaStore runs as a standalone process.

We clearly want Remote mode: the MetaStore runs on its own, so SparkSQL can reach it at any time.

Starting the Hive MetaStore

Step 1: edit hive-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration><property><name>javax.jdo.option.ConnectionURL</name><value>jdbc:mysql://Bigdata01:3306/metastore?createDatabaseIfNotExist=true</value><description>JDBC connect string for a JDBC metastore</description></property><property><name>javax.jdo.option.ConnectionDriverName</name><value>com.mysql.jdbc.Driver</value><description>Driver class name for a JDBC metastore</description></property><property><name>javax.jdo.option.ConnectionUserName</name><value>root</value><description>username to use against metastore database</description></property><property><name>javax.jdo.option.ConnectionPassword</name><value>000000</value><description>password to use against metastore database</description></property><property><name>hive.cli.print.header</name><value>true</value></property><property><name>hive.cli.print.current.db</name><value>true</value></property><property><name>hive.metastore.warehouse.dir</name><value>/user/hive/warehouse</value></property><property><name>hive.metastore.local</name><value>false</value></property><property><name>hive.metastore.uris</name><value>thrift://Bigdata01:9083</value>  </property></configuration>

Step 2: start the Hive MetaStore

nohup /opt/module/hive/bin/hive --service metastore >> /opt/module/hive/logs/log.log 2>&1 &

Integrating SparkSQL with Hive's MetaStore

Even without integrating an external MetaStore, Spark has a built-in MetaStore that stores its data in an embedded Derby database. This is unsuitable for production, because only one SparkSession can use it at a time, so Hive's MetaStore is the recommended choice for production.

The main idea behind integrating SparkSQL with Hive's MetaStore is to configure Spark so that it can reach the MetaStore and keep its warehouse on HDFS. This configuration already lives in the Hadoop and Hive configuration files, so the simplest approach is to copy those files into Spark's configuration directory:

cd /opt/module/hadoop/etc/hadoop
cp hive-site.xml core-site.xml hdfs-site.xml /opt/module/spark/conf/

scp -r /opt/module/spark/conf Bigdata02:/opt/module/spark/
scp -r /opt/module/spark/conf Bigdata03:/opt/module/spark/

Spark needs hive-site.xml to read Hive's configuration, mainly the location of the metadata warehouse and related settings.
Spark needs core-site.xml to read security-related configuration.
Spark needs hdfs-site.xml because table files may be placed on HDFS, so the HDFS configuration is required.

If you would rather not integrate by copying files, you can instead point the SparkSession at Hive's MetaStore when it starts, as sketched below; copying the configuration files remains the recommended approach.
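For reference, here is a minimal sketch of that alternative. It reuses the values from the hive-site.xml shown earlier (thrift://Bigdata01:9083 and /user/hive/warehouse), which would have to match your own environment:

import org.apache.spark.sql.SparkSession

// Minimal sketch: reach Hive's MetaStore without copying hive-site.xml into Spark's conf directory.
val spark = SparkSession.builder()
  .appName("metastore without config files")
  .config("hive.metastore.uris", "thrift://Bigdata01:9083")     // where the standalone MetaStore listens
  .config("spark.sql.warehouse.dir", "/user/hive/warehouse")    // warehouse location on HDFS
  .enableHiveSupport()
  .getOrCreate()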

Accessing Hive Tables

Overview

1. Create a table in Hive
2. Use SparkSQL to access an existing Hive table
3. Use SparkSQL to create a Hive table
4. Use SparkSQL to modify the data in a Hive table

Create a file named studenttab10k
and add the following data (only 150 rows are included here):

ulysses thompson 64  1.90
katie carson    25  3.65
luke king   65  0.73
holly davidson  57  2.43
fred miller 55  3.77
holly white 43  0.24
luke steinbeck  51  1.14
nick underhill  31  2.46
holly davidson  59  1.26
calvin brown    56  0.72
rachel robinson 62  2.25
tom carson  35  0.56
tom johnson 72  0.99
irene garcia    54  1.06
oscar nixon 39  3.60
holly allen 32  2.58
oscar hernandez 19  0.05
alice ichabod   65  2.25
wendy thompson  30  2.39
priscilla hernandez 73  0.23
gabriella van buren 68  1.32
yuri thompson   42  3.65
yuri laertes    60  1.16
sarah young 23  2.76
zach white  32  0.20
nick van buren  68  1.75
xavier underhill    41  1.51
bob ichabod 56  2.81
zach steinbeck  61  2.22
alice garcia    42  2.03
jessica king    29  3.61
calvin nixon    37  0.30
fred polk   66  3.69
bob zipper  40  0.28
alice young 75  0.31
nick underhill  37  1.65
mike white  57  0.69
calvin ovid 41  3.02
fred steinbeck  47  3.57
sarah ovid  65  0.00
wendy nixon 63  0.62
gabriella zipper    77  1.51
david king  40  1.99
jessica white   30  3.82
alice robinson  37  3.69
zach nixon  74  2.75
irene davidson  27  1.22
priscilla xylophone 43  1.60
oscar zipper    25  2.43
fred falkner    38  2.23
ulysses polk    58  0.01
katie hernandez 47  3.80
zach steinbeck  55  0.68
fred laertes    69  3.62
quinn laertes   70  3.66
nick garcia 50  0.12
oscar young 55  2.22
bob underhill   47  0.24
calvin young    77  1.60
mike allen  65  2.95
david young 77  0.26
oscar garcia    69  1.59
ulysses ichabod 26  0.95
wendy laertes   76  1.13
sarah laertes   20  0.24
zach ichabod    60  1.60
tom robinson    62  0.78
zach steinbeck  69  1.01
quinn garcia    57  0.98
yuri van buren  32  1.97
luke carson 39  0.76
calvin ovid 73  0.82
luke ellison    27  0.56
oscar zipper    50  1.31
fred steinbeck  52  3.14
katie xylophone 76  1.38
luke king   54  2.30
ethan white 72  1.43
yuri ovid   37  3.64
jessica garcia  54  1.08
luke young  29  0.80
mike miller 39  3.35
fred hernandez  63  0.17
priscilla hernandez 52  0.35
ethan garcia    43  1.70
quinn hernandez 25  2.58
calvin nixon    33  1.01
yuri xylophone  47  1.36
ulysses steinbeck   63  1.05
jessica nixon   25  2.13
bob johnson 53  3.31
jessica ichabod 56  2.21
zach miller 63  3.87
priscilla white 66  2.82
ulysses allen   21  1.68
katie falkner   47  1.49
tom king    51  1.91
bob laertes 60  3.33
luke nixon  27  3.54
quinn johnson   42  2.24
wendy quirinius 71  0.10
victor polk 55  3.63
rachel robinson 32  1.11
sarah king  57  1.37
victor young    38  1.72
priscilla steinbeck 38  2.11
fred brown  19  2.72
xavier underhill    55  3.56
irene ovid  67  3.80
calvin brown    37  2.22
katie thompson  20  3.27
katie carson    66  3.55
tom miller  57  2.83
rachel brown    56  0.74
holly johnson   38  2.51
irene steinbeck 29  1.97
wendy falkner   37  0.14
ethan white 29  3.62
bob underhill   26  1.10
jessica king    64  0.69
luke steinbeck  19  1.16
luke laertes    70  3.58
rachel polk 74  0.92
calvin xylophone    52  0.58
luke white  57  3.86
calvin van buren    52  3.13
holly quirinius 59  1.70
mike brown  44  1.93
yuri ichabod    61  0.70
ulysses miller  56  3.53
victor hernandez    64  2.52
oscar young 34  0.34
luke ovid   36  3.17
quinn ellison   50  1.13
quinn xylophone 72  2.07
nick underhill  48  0.15
rachel miller   23  3.38
mike van buren  68  1.74
zach van buren  38  0.34
irene zipper    32  0.54
sarah garcia    31  3.87
rachel van buren    56  0.35
fred davidson   69  1.57
nick hernandez  19  2.11
irene polk  40  3.89
katie young 26  2.88
priscilla ovid  49  3.28
jessica hernandez   39  3.13
yuri allen  29  3.51
victor garcia   66  3.45

Creating the table in Hive
Step 1: upload the file to the cluster. Use the following commands to put it into HDFS:

hdfs dfs -mkdir -p /input
hdfs dfs -put studenttab10k /input/

Step 2: run the following SQL with Hive or Beeline:

CREATE DATABASE IF NOT EXISTS spark_integrition;
USE spark_integrition;
CREATE EXTERNAL TABLE student
(
  name  STRING,
  age   INT,
  gpa   string
)
ROW FORMAT DELIMITED
  FIELDS TERMINATED BY '\t'
  LINES TERMINATED BY '\n'
STORED AS TEXTFILE
LOCATION '/user/hive/warehouse';
LOAD DATA INPATH '/input/studenttab10k' OVERWRITE INTO TABLE student;

Querying the Hive table through SparkSQL

Hive tables can be queried directly through spark.sql(…), which accesses Hive's MetaStore, provided the Hive configuration files have been copied into Spark's conf directory:

[root@Bigdata01 bin]# ./spark-shell --master local[6]
20/09/03 20:55:20 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
Spark context Web UI available at http://Bigdata01:4040
Spark context available as 'sc' (master = local[6], app id = local-1599137751998).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.4.6
      /_/

Using Scala version 2.11.12 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_144)
Type in expressions to have them evaluated.
Type :help for more information.

scala> spark.sql("use spark_integrition")
20/09/03 20:56:45 WARN HiveConf: HiveConf of name hive.metastore.local does not exist
res0: org.apache.spark.sql.DataFrame = []

scala> spark.sql("select * from student limit 100")
res1: org.apache.spark.sql.DataFrame = [name: string, age: int ... 1 more field]

scala> res1.show()
+-------------------+---+----+
|               name|age| gpa|
+-------------------+---+----+
|   ulysses thompson| 64|1.90|
|       katie carson| 25|3.65|
|          luke king| 65|0.73|
|     holly davidson| 57|2.43|
|        fred miller| 55|3.77|
|        holly white| 43|0.24|
|     luke steinbeck| 51|1.14|
|     nick underhill| 31|2.46|
|     holly davidson| 59|1.26|
|       calvin brown| 56|0.72|
|    rachel robinson| 62|2.25|
|         tom carson| 35|0.56|
|        tom johnson| 72|0.99|
|       irene garcia| 54|1.06|
|        oscar nixon| 39|3.60|
|        holly allen| 32|2.58|
|    oscar hernandez| 19|0.05|
|      alice ichabod| 65|2.25|
|     wendy thompson| 30|2.39|
|priscilla hernandez| 73|0.23|
+-------------------+---+----+
only showing top 20 rows
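Beyond simple SELECTs, the usual HiveSQL constructs run the same way through spark.sql. A small illustrative sketch against the student table above (the aggregation itself is made up for the example):

// Group-by aggregation over the Hive table, written exactly as it would be in Hive.
spark.sql(
  """SELECT age, COUNT(*) AS cnt, AVG(gpa) AS avg_gpa
    |FROM student
    |GROUP BY age
    |ORDER BY age
    |""".stripMargin).show()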

Creating a Hive table through SparkSQL

SparkSQL can create Hive tables directly and load data into them with LOAD DATA:

[root@Bigdata01 bin]# ./spark-shell --master local[6]
20/09/03 21:17:37 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
Spark context Web UI available at http://Bigdata01:4040
Spark context available as 'sc' (master = local[6], app id = local-1599139087222).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.4.6
      /_/

Using Scala version 2.11.12 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_144)
Type in expressions to have them evaluated.
Type :help for more information.

scala> :paste
// Entering paste mode (ctrl-D to finish)

val createTableStr =
  """
    |create EXTERNAL TABLE student
    |(
    |  name  STRING,
    |  age   INT,
    |  gpa   string
    |)
    |ROW FORMAT DELIMITED
    |  FIELDS TERMINATED BY '\t'
    |  LINES TERMINATED BY '\n'
    |STORED AS TEXTFILE
    |LOCATION '/user/hive/warehouse'
    |""".stripMargin

spark.sql("CREATE DATABASE IF NOT EXISTS spark_integrition1")
spark.sql("USE spark_integrition1")
spark.sql(createTableStr)
spark.sql("LOAD DATA INPATH '/input/studenttab10k' OVERWRITE INTO TABLE student")

// Exiting paste mode, now interpreting.

20/09/03 21:20:57 WARN HiveConf: HiveConf of name hive.metastore.local does not exist
20/09/03 21:21:01 ERROR KeyProviderCache: Could not find uri with key [dfs.encryption.key.provider.uri] to create a keyProvider !!
createTableStr: String =
"
create EXTERNAL TABLE student
(
  name  STRING,
  age   INT,
  gpa   string
)
ROW FORMAT DELIMITED
  FIELDS TERMINATED BY '\t'
  LINES TERMINATED BY '\n'
STORED AS TEXTFILE
LOCATION '/user/hive/warehouse'
"
res0: org.apache.spark.sql.DataFrame = []

scala> spark.sql("select * from student limit 100")
res1: org.apache.spark.sql.DataFrame = [name: string, age: int ... 1 more field]

scala> res1.where('age > 50).show()
+-------------------+---+----+
|               name|age| gpa|
+-------------------+---+----+
|   ulysses thompson| 64|1.90|
|          luke king| 65|0.73|
|     holly davidson| 57|2.43|
|        fred miller| 55|3.77|
|     luke steinbeck| 51|1.14|
|     holly davidson| 59|1.26|
|       calvin brown| 56|0.72|
|    rachel robinson| 62|2.25|
|        tom johnson| 72|0.99|
|       irene garcia| 54|1.06|
|      alice ichabod| 65|2.25|
|priscilla hernandez| 73|0.23|
|gabriella van buren| 68|1.32|
|       yuri laertes| 60|1.16|
|     nick van buren| 68|1.75|
|        bob ichabod| 56|2.81|
|     zach steinbeck| 61|2.22|
|          fred polk| 66|3.69|
|        alice young| 75|0.31|
|         mike white| 57|0.69|
+-------------------+---+----+
only showing top 20 rows

At present SparkSQL supports the file formats sequencefile, rcfile, orc, parquet, textfile, and avro, and a SerDe name can also be specified.
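As a sketch of how one of these formats is chosen (the table name student_parquet is made up for illustration), the STORED AS clause of the CREATE statement selects the format:

// Hypothetical example: create a Hive table stored as Parquet instead of plain text.
spark.sql("USE spark_integrition1")
spark.sql(
  """CREATE TABLE IF NOT EXISTS student_parquet
    |(
    |  name STRING,
    |  age  INT,
    |  gpa  STRING
    |)
    |STORED AS PARQUET
    |""".stripMargin)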

Connecting SparkSQL to Hive from IDEA

Processing data with SparkSQL and saving the result into a Hive table

So far we have accessed Hive and written SQL through the Spark shell. The same can be done from a standalone Spark application, but it requires a few preliminary steps, as follows.

Step 1: add the Maven dependency

<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-hive_2.11</artifactId>
    <version>${spark.version}</version>
</dependency>

Step 2: configure the SparkSession

To access Hive from SparkSQL, two things have to be done:

1. Enable Hive support on the SparkSession

Only after this configuration will SparkSQL parse SQL statements as HiveSQL.

2. Set the warehouse location

Although hive-site.xml already configures the warehouse location, Spark 2.0.0 and later deprecate the hive.metastore.warehouse.dir setting in hive-site.xml, so the warehouse location must be set on the SparkSession.

Set the MetaStore location:

val spark = SparkSession.builder()
  .appName("hive example")
  .config("spark.sql.warehouse.dir", "/user/hive/warehouse")   // 1
  .config("hive.metastore.uris", "thrift://Bigdata01:9083")    // 2
  .enableHiveSupport()                                         // 3
  .getOrCreate()

1. Set the warehouse location
2. Set the MetaStore location
3. Enable Hive support

Once this is configured, data can be processed as DataFrames and the results pushed into Hive tables; when saving a result to a Hive table, a save mode can be specified.

The complete code is as follows:

package com.spark.hive

import org.apache.spark.sql.{SaveMode, SparkSession}
import org.apache.spark.sql.types.{FloatType, IntegerType, StringType, StructField, StructType}

object HiveAccess {
  def main(args: Array[String]): Unit = {
    // 1. Create the SparkSession
    //    1. enable Hive support
    //    2. point it at the MetaStore
    //    3. point it at the warehouse
    val spark = SparkSession.builder()
      .appName(this.getClass.getSimpleName)
      .enableHiveSupport()                                        // enable Hive support
      .config("hive.metastore.uris", "thrift://Bigdata01:9083")
      .config("spark.sql.warehouse.dir", "/user/hive/warehouse")
      .getOrCreate()

    // implicit conversions
    import spark.implicits._

    // 2. Read the data
    //    1. Upload the file to HDFS first: the job runs on the cluster and there is no
    //       guarantee which machine executes it, so reading a local file would require the
    //       file to exist on every node. Putting it in HDFS solves this, because HDFS is an
    //       external system that every node can read from.
    //    2. Read the file into a DataFrame
    val schema = StructType(List(
      StructField("name", StringType),
      StructField("age", IntegerType),
      StructField("gpa", FloatType)
    ))

    val dataframe = spark.read
      .option("delimiter", "\t")   // field delimiter
      .schema(schema)              // apply the explicit schema
      .csv("hdfs:///input/studenttab10k")

    val resultDF = dataframe.where('age > 50)

    // 3. Write the result
    resultDF.write.mode(SaveMode.Overwrite).saveAsTable("spark_integrition1.student")
  }
}

The save mode is specified with mode, and saveAsTable writes the data into the Hive table.
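If the result should be added to an existing table rather than replace it, the save mode can be changed; a quick sketch reusing the names from the code above:

// SaveMode.Append keeps the existing rows and adds the new ones;
// SaveMode.Overwrite (used above) replaces the table contents.
resultDF.write.mode(SaveMode.Append).saveAsTable("spark_integrition1.student")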

Package the jar


Put the jar under the spark directory and rename it to spark-sql.jar:

[root@Bigdata01 spark]# mv original-spark-sql-1.0-SNAPSHOT.jar spark-sql.jar

Submit it to the cluster to run (output like the following indicates success):

[root@Bigdata01 spark]# bin/spark-submit --master spark://Bigdata01:7077 \
> --class com.spark.hive.HiveAccess \
> ./spark-sql.jar
20/09/03 22:28:52 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
20/09/03 22:28:55 INFO SparkContext: Running Spark version 2.4.6
20/09/03 22:28:55 INFO SparkContext: Submitted application: HiveAccess$
20/09/03 22:28:55 INFO SecurityManager: Changing view acls to: root
20/09/03 22:28:55 INFO SecurityManager: Changing modify acls to: root
20/09/03 22:28:55 INFO SecurityManager: Changing view acls groups to:
20/09/03 22:28:55 INFO SecurityManager: Changing modify acls groups to:
20/09/03 22:28:55 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users  with view permissions: Set(root); groups with view permissions: Set(); users  with modify permissions: Set(root); groups with modify permissions: Set()
20/09/03 22:28:57 INFO Utils: Successfully started service 'sparkDriver' on port 40023.
20/09/03 22:28:57 INFO SparkEnv: Registering MapOutputTracker
20/09/03 22:28:57 INFO SparkEnv: Registering BlockManagerMaster
20/09/03 22:28:57 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
20/09/03 22:28:57 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
20/09/03 22:28:57 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-02e6d729-f8d9-4a26-a95d-3a019331e164
20/09/03 22:28:57 INFO MemoryStore: MemoryStore started with capacity 366.3 MB
20/09/03 22:28:57 INFO SparkEnv: Registering OutputCommitCoordinator
20/09/03 22:28:58 WARN Utils: Service 'SparkUI' could not bind on port 4040. Attempting port 4041.
20/09/03 22:28:58 INFO Utils: Successfully started service 'SparkUI' on port 4041.
20/09/03 22:28:58 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://Bigdata01:4041
20/09/03 22:28:59 INFO SparkContext: Added JAR file:/opt/module/spark/./spark-sql.jar at spark://Bigdata01:40023/jars/spark-sql.jar with timestamp 1599143339071
20/09/03 22:28:59 INFO StandaloneAppClient$ClientEndpoint: Connecting to master spark://Bigdata01:7077...
20/09/03 22:29:00 INFO TransportClientFactory: Successfully created connection to Bigdata01/192.168.168.31:7077 after 331 ms (0 ms spent in bootstraps)
20/09/03 22:29:00 INFO StandaloneSchedulerBackend: Connected to Spark cluster with app ID app-20200903222900-0001
20/09/03 22:29:00 INFO StandaloneAppClient$ClientEndpoint: Executor added: app-20200903222900-0001/0 on worker-20200903203039-192.168.168.31-54515 (192.168.168.31:54515) with 8 core(s)
20/09/03 22:29:01 INFO StandaloneSchedulerBackend: Granted executor ID app-20200903222900-0001/0 on hostPort 192.168.168.31:54515 with 8 core(s), 1024.0 MB RAM
20/09/03 22:29:01 INFO StandaloneAppClient$ClientEndpoint: Executor added: app-20200903222900-0001/1 on worker-20200903203048-192.168.168.32-39304 (192.168.168.32:39304) with 6 core(s)
20/09/03 22:29:01 INFO StandaloneSchedulerBackend: Granted executor ID app-20200903222900-0001/1 on hostPort 192.168.168.32:39304 with 6 core(s), 1024.0 MB RAM
20/09/03 22:29:01 INFO StandaloneAppClient$ClientEndpoint: Executor added: app-20200903222900-0001/2 on worker-20200903203050-192.168.168.33-35682 (192.168.168.33:35682) with 6 core(s)
20/09/03 22:29:01 INFO StandaloneSchedulerBackend: Granted executor ID app-20200903222900-0001/2 on hostPort 192.168.168.33:35682 with 6 core(s), 1024.0 MB RAM
20/09/03 22:29:01 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 58667.
20/09/03 22:29:01 INFO NettyBlockTransferService: Server created on Bigdata01:58667
20/09/03 22:29:01 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
20/09/03 22:29:01 INFO StandaloneAppClient$ClientEndpoint: Executor updated: app-20200903222900-0001/2 is now RUNNING
20/09/03 22:29:01 INFO StandaloneAppClient$ClientEndpoint: Executor updated: app-20200903222900-0001/0 is now RUNNING
20/09/03 22:29:01 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, Bigdata01, 58667, None)
20/09/03 22:29:01 INFO BlockManagerMasterEndpoint: Registering block manager Bigdata01:58667 with 366.3 MB RAM, BlockManagerId(driver, Bigdata01, 58667, None)
20/09/03 22:29:01 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, Bigdata01, 58667, None)
20/09/03 22:29:01 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, Bigdata01, 58667, None)
20/09/03 22:29:03 INFO StandaloneAppClient$ClientEndpoint: Executor updated: app-20200903222900-0001/1 is now RUNNING
20/09/03 22:29:12 INFO EventLoggingListener: Logging events to hdfs://Bigdata01:9000/spark_log/app-20200903222900-0001.lz4
20/09/03 22:29:13 INFO StandaloneSchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
20/09/03 22:29:14 INFO SharedState: loading hive config file: file:/opt/module/spark/conf/hive-site.xml
20/09/03 22:29:15 INFO SharedState: Setting hive.metastore.warehouse.dir ('/user/hive/warehouse') to the value of spark.sql.warehouse.dir ('/user/hive/warehouse').
20/09/03 22:29:15 INFO SharedState: Warehouse path is '/user/hive/warehouse'.
20/09/03 22:29:19 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Registered executor NettyRpcEndpointRef(spark-client://Executor) (192.168.168.31:39316) with ID 0
20/09/03 22:29:19 INFO StateStoreCoordinatorRef: Registered StateStoreCoordinator endpoint
20/09/03 22:29:24 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.168.31:44502 with 366.3 MB RAM, BlockManagerId(0, 192.168.168.31, 44502, None)
20/09/03 22:29:25 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Registered executor NettyRpcEndpointRef(spark-client://Executor) (192.168.168.33:60974) with ID 2
20/09/03 22:29:25 INFO InMemoryFileIndex: It took 857 ms to list leaf files for 1 paths.
20/09/03 22:29:26 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.168.33:55821 with 366.3 MB RAM, BlockManagerId(2, 192.168.168.33, 55821, None)
20/09/03 22:29:32 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Registered executor NettyRpcEndpointRef(spark-client://Executor) (192.168.168.32:35910) with ID 1
20/09/03 22:29:36 INFO HiveUtils: Initializing HiveMetastoreConnection version 1.2.1 using Spark classes.
20/09/03 22:29:38 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.168.32:50317 with 366.3 MB RAM, BlockManagerId(1, 192.168.168.32, 50317, None)
20/09/03 22:29:39 WARN HiveConf: HiveConf of name hive.metastore.local does not exist
20/09/03 22:29:40 INFO metastore: Trying to connect to metastore with URI thrift://Bigdata01:9083
20/09/03 22:29:40 INFO metastore: Connected to metastore.
20/09/03 22:29:43 INFO SessionState: Created local directory: /tmp/c21738d9-28fe-4780-a950-10d38e9e32ca_resources
20/09/03 22:29:43 INFO SessionState: Created HDFS directory: /tmp/hive/root/c21738d9-28fe-4780-a950-10d38e9e32ca
20/09/03 22:29:43 INFO SessionState: Created local directory: /tmp/root/c21738d9-28fe-4780-a950-10d38e9e32ca
20/09/03 22:29:43 INFO SessionState: Created HDFS directory: /tmp/hive/root/c21738d9-28fe-4780-a950-10d38e9e32ca/_tmp_space.db
20/09/03 22:29:43 INFO HiveClientImpl: Warehouse location for Hive client (version 1.2.2) is /user/hive/warehouse
20/09/03 22:29:47 INFO FileSourceStrategy: Pruning directories with:
20/09/03 22:29:47 INFO FileSourceStrategy: Post-Scan Filters: isnotnull(age#1),(age#1 > 50)
20/09/03 22:29:47 INFO FileSourceStrategy: Output Data Schema: struct<name: string, age: int, gpa: float ... 1 more fields>
20/09/03 22:29:47 INFO FileSourceScanExec: Pushed Filters: IsNotNull(age),GreaterThan(age,50)
20/09/03 22:29:48 INFO ParquetFileFormat: Using default output committer for Parquet: org.apache.parquet.hadoop.ParquetOutputCommitter
20/09/03 22:29:48 INFO FileOutputCommitter: File Output Committer Algorithm version is 1
20/09/03 22:29:48 INFO SQLHadoopMapReduceCommitProtocol: Using user defined output committer class org.apache.parquet.hadoop.ParquetOutputCommitter
20/09/03 22:29:48 INFO FileOutputCommitter: File Output Committer Algorithm version is 1
20/09/03 22:29:48 INFO SQLHadoopMapReduceCommitProtocol: Using output committer class org.apache.parquet.hadoop.ParquetOutputCommitter
20/09/03 22:29:50 INFO CodeGenerator: Code generated in 1046.0442 ms
20/09/03 22:29:50 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 281.9 KB, free 366.0 MB)
20/09/03 22:29:51 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 24.2 KB, free 366.0 MB)
20/09/03 22:29:51 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on Bigdata01:58667 (size: 24.2 KB, free: 366.3 MB)
20/09/03 22:29:51 INFO SparkContext: Created broadcast 0 from saveAsTable at HiveAccess.scala:54
20/09/03 22:29:53 INFO FileSourceScanExec: Planning scan with bin packing, max size: 4194304 bytes, open cost is considered as scanning 4194304 bytes.
20/09/03 22:29:54 INFO SparkContext: Starting job: saveAsTable at HiveAccess.scala:54
20/09/03 22:29:54 INFO DAGScheduler: Got job 0 (saveAsTable at HiveAccess.scala:54) with 1 output partitions
20/09/03 22:29:54 INFO DAGScheduler: Final stage: ResultStage 0 (saveAsTable at HiveAccess.scala:54)
20/09/03 22:29:54 INFO DAGScheduler: Parents of final stage: List()
20/09/03 22:29:54 INFO DAGScheduler: Missing parents: List()
20/09/03 22:29:54 INFO DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[1] at saveAsTable at HiveAccess.scala:54), which has no missing parents
20/09/03 22:29:55 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 153.1 KB, free 365.9 MB)
20/09/03 22:29:55 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 55.6 KB, free 365.8 MB)
20/09/03 22:29:55 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on Bigdata01:58667 (size: 55.6 KB, free: 366.2 MB)
20/09/03 22:29:55 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:1163
20/09/03 22:29:55 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 0 (MapPartitionsRDD[1] at saveAsTable at HiveAccess.scala:54) (first 15 tasks are for partitions Vector(0))
20/09/03 22:29:55 INFO TaskSchedulerImpl: Adding task set 0.0 with 1 tasks
20/09/03 22:29:55 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, 192.168.168.33, executor 2, partition 0, ANY, 8261 bytes)
20/09/03 22:29:57 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on 192.168.168.33:55821 (size: 55.6 KB, free: 366.2 MB)
20/09/03 22:30:24 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 192.168.168.33:55821 (size: 24.2 KB, free: 366.2 MB)
20/09/03 22:30:28 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 32924 ms on 192.168.168.33 (executor 2) (1/1)
20/09/03 22:30:28 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
20/09/03 22:30:28 INFO DAGScheduler: ResultStage 0 (saveAsTable at HiveAccess.scala:54) finished in 34.088 s
20/09/03 22:30:28 INFO DAGScheduler: Job 0 finished: saveAsTable at HiveAccess.scala:54, took 34.592171 s
20/09/03 22:30:29 INFO FileFormatWriter: Write Job 3b048e0c-6b5e-43ea-aad2-b1e64f4d9657 committed.
20/09/03 22:30:29 INFO FileFormatWriter: Finished processing stats for write job 3b048e0c-6b5e-43ea-aad2-b1e64f4d9657.
20/09/03 22:30:30 INFO InMemoryFileIndex: It took 26 ms to list leaf files for 1 paths.
20/09/03 22:30:30 INFO HiveExternalCatalog: Persisting file based data source table `spark_integrition1`.`student` into Hive metastore in Hive compatible format.
20/09/03 22:30:32 INFO SparkContext: Invoking stop() from shutdown hook
20/09/03 22:30:32 INFO SparkUI: Stopped Spark web UI at http://Bigdata01:4041
20/09/03 22:30:32 INFO StandaloneSchedulerBackend: Shutting down all executors
20/09/03 22:30:32 INFO CoarseGrainedSchedulerBackend$DriverEndpoint: Asking each executor to shut down
20/09/03 22:30:32 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
20/09/03 22:30:32 INFO MemoryStore: MemoryStore cleared
20/09/03 22:30:32 INFO BlockManager: BlockManager stopped
20/09/03 22:30:32 INFO BlockManagerMaster: BlockManagerMaster stopped
20/09/03 22:30:32 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
20/09/03 22:30:32 INFO SparkContext: Successfully stopped SparkContext
20/09/03 22:30:32 INFO ShutdownHookManager: Shutdown hook called
20/09/03 22:30:32 INFO ShutdownHookManager: Deleting directory /tmp/spark-5d113d24-2e67-4d1c-a6aa-e75de128da16
20/09/03 22:30:32 INFO ShutdownHookManager: Deleting directory /tmp/spark-f4a4aed1-1746-4e87-9f62-bdaaf6eff438

Open Hive and query the table:

hive (spark_integrition1)> select * from student limit 10;
OK
student.name    student.age student.gpa
ulysses thompson    64  1.9
luke king   65  0.73
holly davidson  57  2.43
fred miller 55  3.77
luke steinbeck  51  1.14
holly davidson  59  1.26
calvin brown    56  0.72
rachel robinson 62  2.25
tom johnson 72  0.99
irene garcia    54  1.06
Time taken: 0.245 seconds, Fetched: 10 row(s)
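The same table can also be read back through SparkSQL as a quick sanity check (a sketch reusing the spark session from the application code above):

// Verify that the rows written by saveAsTable are visible and that the age > 50 filter held.
spark.sql("SELECT * FROM spark_integrition1.student LIMIT 10").show()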

end

That's all for this post.
