With the Hadoop and Hive environments in place, we can use Hive for database-style work. Hive provides HQL (an SQL-like language), and the basic workflow resembles MySQL; the difference is that aggregation operations in Hive are executed by Hadoop's underlying MapReduce jobs.

Below, we take the big data analytics of a game company (games, users, and so on) as an example and use Hive to compute statistics such as game activity and user usage.

(1) Generating the data

Since obtaining real data from a game company is difficult, we generate our own with Python scripts. Hadoop and Hive are installed on CentOS, which ships with Python 2.7 by default, so we can write the scripts and run them directly on CentOS.

Building the user data: simulate 1000 users.

import random

def getUser():
    # 1000 users: userid, age, area, money, comma-separated, one per line
    location_List = ['BJ', 'SH', 'TJ', 'GZ', 'SZ']
    fd = open('userinfo.txt', 'w+')
    for i in range(1000):
        userid = str(1000 + i)
        age = str(random.randrange(10, 40))
        area = random.choice(location_List)
        user_money = str(i)
        str_tmp = userid + ',' + age + ',' + area + ',' + user_money + '\n'
        fd.write(str_tmp)
    fd.close()

if __name__ == '__main__':
    getUser()

Building the game data: simulate 4 games in total:

def getGame():
    # 4 games: gameid, gamename, comma-separated, one per line
    Game_list = ['CHESS', 'LANDLORD', 'QGAME', 'ROYAL']
    fd = open('gameinfo.txt', 'w+')
    for i in range(4):
        gameid = str(i)
        gamename = Game_list[i]
        str_tmp = gameid + ',' + gamename + '\n'
        fd.write(str_tmp)
    fd.close()

if __name__ == '__main__':
    getGame()

Building the game-time data: simulate 10 days of records of how long each user played each day:

import os
import random
import datetime

def gameTime():
    game_list = [0, 1, 2, 3]
    time_list = [10, 15, 20, 20, 50, 60, 90]
    for j in range(10):
        fdate = (datetime.datetime.now() + datetime.timedelta(days=j)).strftime('%Y-%m-%d')
        # one directory and one file per day
        daydir = os.path.join('gametime', fdate)
        if not os.path.exists(daydir):
            os.makedirs(daydir)
        fd = open(os.path.join(daydir, 'gametime_{}.txt'.format(fdate)), 'w+')
        for i in range(1000):
            userid = str(1000 + i)
            gameid = str(random.choice(game_list))
            gametime = str(random.choice(time_list))
            str_tmp = fdate + ',' + userid + ',' + gameid + ',' + gametime + '\n'
            fd.write(str_tmp)
        fd.close()

if __name__ == '__main__':
    gameTime()

Building the user-fee data: simulate the fees paid on each of those 10 days:

import os
import random
import datetime

def userFee():
    game_list = [0, 1, 2, 3]
    money_list = [10, 15, 30, 27, 55, 66, 90]
    for j in range(10):
        fdate = (datetime.datetime.now() + datetime.timedelta(days=j)).strftime('%Y-%m-%d')
        # one directory and one file per day
        daydir = os.path.join('userfee', fdate)
        if not os.path.exists(daydir):
            os.makedirs(daydir)
        fd = open(os.path.join(daydir, 'userfee_{}.txt'.format(fdate)), 'w+')
        for i in range(1000):
            userid = str(1000 + i)
            gameid = str(random.choice(game_list))
            fee = str(random.choice(money_list))
            str_tmp = fdate + ',' + userid + ',' + gameid + ',' + fee + '\n'
            fd.write(str_tmp)
        fd.close()

if __name__ == '__main__':
    userFee()

(2) Storing the data in Hive

The data above is stored as plain files; next we move it onto Hadoop and manage it through Hive by creating tables and loading the data into them.

First create the following directories in HDFS; they will serve as the storage locations of the Hive tables:

[hadoop@master ~]$ hdfs dfs -mkdir /stat
[hadoop@master ~]$ hdfs dfs -mkdir /stat/data/
[hadoop@master ~]$ hdfs dfs -mkdir /stat/data/gameinfo
[hadoop@master ~]$ hdfs dfs -mkdir /stat/data/gametime
[hadoop@master ~]$ hdfs dfs -mkdir /stat/data/userinfo
[hadoop@master ~]$ hdfs dfs -mkdir /stat/data/userfee

Then enter the hive shell and create a stat database to hold the generated data, along with an analysis database to hold the results of the Hive analysis:

hive> create database stat;
hive> create database analysis;

Next, create the four tables gameinfo, gametime, userinfo, and userfee:

hive > use stat;
hive > create table if not exists gameinfo (fgameid int, fgamename string)
     > row format delimited fields terminated by ','
     > location '/stat/data/gameinfo';
hive > create table if not exists userinfo (fuserid int, fage int, farea string, fmoney int)
     > row format delimited fields terminated by ','
     > location '/stat/data/userinfo';
hive > create table if not exists gametime (fdate string, fuserid int, fgameid int, fgametime int)
     > partitioned by (dt string)
     > row format delimited fields terminated by ','
     > location '/stat/data/gametime';
hive > create table if not exists userfee (fdate string, fuserid int, fgameid int, ffee int)
     > partitioned by (dt string)
     > row format delimited fields terminated by ','
     > location '/stat/data/userfee';

Note that the table columns must line up with the data generated in step (1) so that the later loads work correctly. gametime and userfee are partitioned tables: each day's data needs to be stored separately, so the partition column is declared when the table is created.

With the tables and their fields defined, the data can now be loaded.

Load the userinfo and gameinfo data into Hive as follows:

hive > load data local inpath 'userinfo.txt' into table stat.userinfo ;
hive > load data local inpath 'gameinfo.txt' into table stat.gameinfo ;

For the two partitioned tables, the per-day partitions are attached with alter table ... add:

hive > alter table stat.gametime add if not exists partition (dt = '2020-02-09')
     > location '/stat/data/gametime/2020-02-09/';
hive > alter table stat.gametime add if not exists partition (dt = '2020-02-10')
     > location '/stat/data/gametime/2020-02-10/';
hive > alter table stat.gametime add if not exists partition (dt = '2020-02-11')
     > location '/stat/data/gametime/2020-02-11/';
hive > alter table stat.gametime add if not exists partition (dt = '2020-02-12')
     > location '/stat/data/gametime/2020-02-12/';
hive > alter table stat.gametime add if not exists partition (dt = '2020-02-13')
     > location '/stat/data/gametime/2020-02-13/';
hive > alter table stat.gametime add if not exists partition (dt = '2020-02-14')
     > location '/stat/data/gametime/2020-02-14/';
hive > alter table stat.gametime add if not exists partition (dt = '2020-02-15')
     > location '/stat/data/gametime/2020-02-15/';
hive > alter table stat.gametime add if not exists partition (dt = '2020-02-16')
     > location '/stat/data/gametime/2020-02-16/';
hive > alter table stat.gametime add if not exists partition (dt = '2020-02-17')
     > location '/stat/data/gametime/2020-02-17/';
hive > alter table stat.userfee add if not exists partition (dt = '2020-02-09')
     > location '/stat/data/userfee/2020-02-09/';
hive > alter table stat.userfee add if not exists partition (dt = '2020-02-10')
     > location '/stat/data/userfee/2020-02-10/';
hive > alter table stat.userfee add if not exists partition (dt = '2020-02-11')
     > location '/stat/data/userfee/2020-02-11/';
hive > alter table stat.userfee add if not exists partition (dt = '2020-02-12')
     > location '/stat/data/userfee/2020-02-12/';
hive > alter table stat.userfee add if not exists partition (dt = '2020-02-13')
     > location '/stat/data/userfee/2020-02-13/';
hive > alter table stat.userfee add if not exists partition (dt = '2020-02-14')
     > location '/stat/data/userfee/2020-02-14/';
hive > alter table stat.userfee add if not exists partition (dt = '2020-02-15')
     > location '/stat/data/userfee/2020-02-15/';
hive > alter table stat.userfee add if not exists partition (dt = '2020-02-16')
     > location '/stat/data/userfee/2020-02-16/';
hive > alter table stat.userfee add if not exists partition (dt = '2020-02-17')
     > location '/stat/data/userfee/2020-02-17/';

This stores the generated game-time and user-fee data in HDFS. As you can see, driving these loads directly from the hive shell is fairly clumsy: the statements differ only in the date. It would be far more efficient to parameterize the date and drive the statements from a script. That requires starting the hiveserver2 process and connecting via JDBC or beeline; in particular, a Java API or Python client can access Hive remotely, and managing the Hive databases from scripts is much more efficient.
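
Since the statements differ only in the date, a small Python script can generate and run them in one go. The following is only a sketch, assuming the hive CLI is on the PATH and the per-day HDFS directories already exist (a pyhive or JDBC connection to hiveserver2 would follow the same pattern):

# Sketch: generate the repeated "alter table ... add partition" statements with a
# loop and hand them to hive in one call, instead of typing them one by one.
# Assumes the hive CLI is on the PATH and the per-day HDFS directories exist.
import subprocess
import datetime

def add_partitions(table, hdfs_base, start_date, days):
    stmts = []
    for i in range(days):
        d = (start_date + datetime.timedelta(days=i)).strftime('%Y-%m-%d')
        stmts.append(
            "alter table {t} add if not exists partition (dt='{d}') "
            "location '{base}/{d}/';".format(t=table, d=d, base=hdfs_base))
    # one hive invocation runs all the statements
    subprocess.check_call(['hive', '-e', '\n'.join(stmts)])

if __name__ == '__main__':
    start = datetime.datetime(2020, 2, 9)
    add_partitions('stat.gametime', '/stat/data/gametime', start, 10)
    add_partitions('stat.userfee', '/stat/data/userfee', start, 10)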

The resulting HDFS layout has one subdirectory per date under /stat/data/gametime and /stat/data/userfee, each holding that day's data file.

Back in hive, query the tables to verify the loaded data:

hive> select * from stat.gameinfo;
OK
0       LandLord
1       Buffle
2       Farm
3       PuQ
Time taken: 0.212 seconds, Fetched: 4 row(s)
hive> select * from stat.gametime where dt='2020-02-09' limit 1;
OK
2020-02-09      1000    1       60      2020-02-09
Time taken: 0.498 seconds, Fetched: 1 row(s)

(3) Statistical analysis with Hive

The four business tables above need statistical analysis to extract useful patterns. For example: joining the game-info table with the game-time table gives each game's daily activity (how many users played each game each day, which can feed recommendations of the most popular games); joining the user-info table with the user-fee table gives spending by age group, i.e. which age range a game appeals to most; and joining the game-info table with the user-fee table gives each game's revenue. The HQL entered in the hive shell for this kind of work (aggregations, join conditions and so on) is automatically handed to the underlying MapReduce engine for execution. The results go into the second database, analysis.
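
The two result tables used below, analysis.gameactive and analysis.gameuserage, are not defined in the walkthrough; the following is a sketch of one plausible DDL, inferred from the insert statements and query output that follow (the column names and types are assumptions), again driven from Python via hive -e:

# Sketch: assumed DDL for the two partitioned result tables in the analysis
# database; the definitions are inferred from the inserts and query output below.
import subprocess

DDL = """
create table if not exists analysis.gameactive (
    fdate string, fgamename string, fcount bigint)
partitioned by (dt string)
row format delimited fields terminated by ',';

create table if not exists analysis.gameuserage (
    fdate string, fgametime bigint, fage int)
partitioned by (dt string)
row format delimited fields terminated by ',';
"""

if __name__ == '__main__':
    subprocess.check_call(['hive', '-e', DDL])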

First, let's write the HQL that computes game activity:

hql = '''insert overwrite table analysis.gameactive partition (dt='2020-02-17')
select gt.fdate as fdate, gi.fgamename as fgamename, count(gt.fuserid) as fcount
from stat.gametime gt, stat.gameinfo gi
where gt.fgameid = gi.fgameid and gt.fdate = '2020-02-17'
group by gt.fdate, gi.fgamename'''

When more tables are involved, the join can also be written explicitly:

insert overwrite table analysis.gameactive partition (dt='2020-02-17')
select gt.fdate as fdate,gi.fgamename as fgamename,count(gt.fuserid) as fcount
from stat.gametime gt
left join stat.gameinfo gi
on gt.fgameid = gi.fgameid
where fdate ='2020-02-17'
group by gt.fdate,gi.fgamename;

This statement uses a left join (left join ... on ...), where the condition after on is the join key. It is followed by a where clause holding the filter condition. Finally, note that every non-aggregated column in the select list must also appear in the group by clause.

The same statement can be typed directly at the hive shell prompt:

hive > insert overwrite table analysis.gameactive partition (dt='2020-02-17')
     > select gt.fdate as fdate, gi.fgamename as fgamename, count(gt.fuserid) as fcount
     > from stat.gametime gt, stat.gameinfo gi
     > where gt.fgameid = gi.fgameid and gt.fdate = '2020-02-17'
     > group by gt.fdate, gi.fgamename;

After it finishes, change the date in turn to each of the 10 days for which data was generated and rerun it, then query the results:

hive> select * from analysis.gameactive;
OK
2020-02-09      Buffle  268     2020-02-09
2020-02-09      Farm    221     2020-02-09
2020-02-09      LandLord        256     2020-02-09
2020-02-09      PuQ     255     2020-02-09
2020-02-10      Buffle  223     2020-02-10
2020-02-10      Farm    258     2020-02-10
2020-02-10      LandLord        255     2020-02-10
2020-02-10      PuQ     264     2020-02-10
2020-02-11      Buffle  235     2020-02-11
2020-02-11      Farm    259     2020-02-11
2020-02-11      LandLord        246     2020-02-11
2020-02-11      PuQ     260     2020-02-11
2020-02-12      Buffle  253     2020-02-12
2020-02-12      Farm    241     2020-02-12
2020-02-12      LandLord        266     2020-02-12
2020-02-12      PuQ     240     2020-02-12
2020-02-13      Buffle  222     2020-02-13
2020-02-13      Farm    273     2020-02-13
2020-02-13      LandLord        252     2020-02-13
2020-02-13      PuQ     253     2020-02-13
2020-02-14      Buffle  253     2020-02-14
2020-02-14      Farm    257     2020-02-14
2020-02-14      LandLord        245     2020-02-14
2020-02-14      PuQ     245     2020-02-14
2020-02-15      Buffle  251     2020-02-15
2020-02-15      Farm    239     2020-02-15
2020-02-15      LandLord        250     2020-02-15
2020-02-15      PuQ     260     2020-02-15
2020-02-16      Buffle  261     2020-02-16
2020-02-16      Farm    263     2020-02-16
2020-02-16      LandLord        231     2020-02-16
2020-02-16      PuQ     245     2020-02-16
2020-02-17      Buffle  242     2020-02-17
2020-02-17      Farm    223     2020-02-17
2020-02-17      LandLord        261     2020-02-17
2020-02-17      PuQ     274     2020-02-17
2020-02-18      Buffle  271     2020-02-18
2020-02-18      Farm    253     2020-02-18
2020-02-18      LandLord        226     2020-02-18
2020-02-18      PuQ     250     2020-02-18
Time taken: 0.523 seconds, Fetched: 40 row(s)

This gives the daily player counts for each of the 4 games across the full 10 days.
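
Rather than editing the date by hand for each of the ten runs, the daily insert can be scripted in the same spirit as the hiveserver2/beeline discussion above. Again only a sketch, assuming the hive CLI is on the PATH and the analysis.gameactive table exists:

# Sketch: run the per-day game-activity insert for each of the 10 generated days,
# instead of editing the date by hand in the hive shell.
import subprocess
import datetime

HQL = """insert overwrite table analysis.gameactive partition (dt='{d}')
select gt.fdate as fdate, gi.fgamename as fgamename, count(gt.fuserid) as fcount
from stat.gametime gt
left join stat.gameinfo gi on gt.fgameid = gi.fgameid
where gt.fdate = '{d}'
group by gt.fdate, gi.fgamename"""

if __name__ == '__main__':
    start = datetime.datetime(2020, 2, 9)
    for i in range(10):
        d = (start + datetime.timedelta(days=i)).strftime('%Y-%m-%d')
        subprocess.check_call(['hive', '-e', HQL.format(d=d)])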

Next, the per-age-group statistics are computed in the same way, with the following HQL:

hive > insert overwrite table analysis.gameuserage partition (dt='2020-02-11')
     > select gt.fdate as fdate, sum(gt.fgametime) as fgametime, ui.fage as fage
     > from stat.gametime gt, stat.userinfo ui
     > where gt.fuserid = ui.fuserid and gt.fdate = '2020-02-11'
     > group by ui.fage, gt.fdate;

After running this in turn for each day, the results look like the following (the first column is the date, the second the total game time, the third the user age, and the last the dt partition; since ages were generated in the range 10 to 39, this totals the play time of each of the 30 age groups):

2020-02-18      1695    10      2020-02-18
2020-02-18      1350    11      2020-02-18
2020-02-18      1325    12      2020-02-18
2020-02-18      990     13      2020-02-18
2020-02-18      1355    14      2020-02-18
2020-02-18      1420    15      2020-02-18
2020-02-18      905     16      2020-02-18
2020-02-18      1140    17      2020-02-18
2020-02-18      1580    18      2020-02-18
2020-02-18      1085    19      2020-02-18
2020-02-18      1350    20      2020-02-18
2020-02-18      1525    21      2020-02-18
2020-02-18      1285    22      2020-02-18
2020-02-18      1105    23      2020-02-18
2020-02-18      1035    24      2020-02-18
2020-02-18      1185    25      2020-02-18
2020-02-18      975     26      2020-02-18
2020-02-18      1625    27      2020-02-18
2020-02-18      1370    28      2020-02-18
2020-02-18      1485    29      2020-02-18
2020-02-18      930     30      2020-02-18
2020-02-18      1390    31      2020-02-18
2020-02-18      1250    32      2020-02-18
2020-02-18      1005    33      2020-02-18
2020-02-18      1250    34      2020-02-18
2020-02-18      1280    35      2020-02-18
2020-02-18      970     36      2020-02-18
2020-02-18      1170    37      2020-02-18
2020-02-18      1015    38      2020-02-18
2020-02-18      1015    39      2020-02-18
Time taken: 0.446 seconds, Fetched: 270 row(s)

(4) Using the sqoop tool

Sqoop is a bridge tool for exchanging data between an RDBMS and HDFS. In practice it is fairly simple to install, although when I tried sqoop 1.99 the verification step never passed no matter which environment variables and dependencies I set, so I fell back to sqoop 1.4.

1. Installation and configuration

The sqoop 1.4 tarball can be downloaded from any of the mirrors and unpacked on CentOS; since the unpacked directory name is long, rename it:

[hadoop@master ~]$ tar -xvf sqoop-1.4.6-cdh5.14.0.tar.gz
[hadoop@master ~]$ mv sqoop-1.4.6-cdh5.14.0 sqoop1.4.6

Then set the environment variables:

[hadoop@master ~]$ vi ~/.bashrc

Add the SQOOP_HOME path:

export SQOOP_HOME=/home/hadoop/sqoop1.4.6
export PATH=$PATH:$SQOOP_HOME/bin

Next, go into the conf directory under the sqoop folder and configure sqoop-env.sh:

[hadoop@master conf]$ mv sqoop-env-template.sh sqoop-env.sh
[hadoop@master conf]$ vi sqoop-env.sh

Set the Hadoop-related paths in this file:

#Set path to where bin/hadoop is available
export HADOOP_COMMON_HOME=/home/hadoop/hadoop-3.1.2

#Set path to where hadoop-*-core.jar is available
export HADOOP_MAPRED_HOME=/home/hadoop/hadoop-3.1.2

#set the path to where bin/hbase is available
#export HBASE_HOME=

#Set the path to where bin/hive is available
export HIVE_HOME=/home/hadoop/hive-3.1.2-bin

#Set the path for where zookeper config dir is
#export ZOOCFGDIR=

Since HBase and ZooKeeper are not used here, their paths are left commented out.

That completes the environment configuration. Two more jar files need to be placed in sqoop's lib directory: the MySQL JDBC driver and a Java JSON library. Both can be downloaded online and copied into lib:

[hadoop@master lib]$ ll java-json.jar
-rw-r--r--. 1 hadoop hadoop 84697 Feb 13 14:35 java-json.jar
[hadoop@master lib]$ ll mysql-connector-java-8.0.16.jar
-rw-r--r--. 1 hadoop hadoop 2293144 Feb 13 12:05 mysql-connector-java-8.0.16.jar

2. Getting started

Sqoop's syntax looks complex, but it is quite formulaic: follow the pattern and it runs without trouble.

First, test the installation by running sqoop help from the current user's home directory:

[hadoop@master ~]$ sqoop help
Warning: /home/hadoop/sqoop1.4.6/../hbase does not exist! HBase imports will fail.
Please set $HBASE_HOME to the root of your HBase installation.
Warning: /home/hadoop/sqoop1.4.6/../hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /home/hadoop/sqoop1.4.6/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
Warning: /home/hadoop/sqoop1.4.6/../zookeeper does not exist! Accumulo imports will fail.
Please set $ZOOKEEPER_HOME to the root of your Zookeeper installation.
2020-02-13 14:56:14,436 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6-cdh5.14.0
usage: sqoop COMMAND [ARGS]

Available commands:
  codegen            Generate code to interact with database records
  create-hive-table  Import a table definition into Hive
  eval               Evaluate a SQL statement and display the results
  export             Export an HDFS directory to a database table
  help               List available commands
  import             Import a table from a database to HDFS
  import-all-tables  Import tables from a database to HDFS
  import-mainframe   Import datasets from a mainframe server to HDFS
  job                Work with saved jobs
  list-databases     List available databases on a server
  list-tables        List available tables in a database
  merge              Merge results of incremental imports
  metastore          Run a standalone Sqoop metastore
  version            Display version information

See 'sqoop help COMMAND' for information on a specific command.

As the help output shows, sqoop's main commands are import and export: a MySQL table can be imported into HDFS, and an HDFS directory can be exported back to MySQL as a table. There is also create-hive-table, which creates a Hive table directly from a table definition.

For example, first create a database named sqoop in MySQL, create a table gameactive in it, and insert two rows:

mysql> select * from gameactive;
Empty set (0.01 sec)

mysql> insert into gameactive values('2020-02-13','puke',100),('2020-02-13','dizhu',150);
Query OK, 2 rows affected (0.14 sec)
Records: 2  Duplicates: 0  Warnings: 0

mysql> select * from gameactive;
+------------+-----------+--------+
| fdate      | fgamename | fcount |
+------------+-----------+--------+
| 2020-02-13 | puke      |    100 |
| 2020-02-13 | dizhu     |    150 |
+------------+-----------+--------+
2 rows in set (0.00 sec)

Then run a sqoop command that connects MySQL and Hadoop and imports this table into HDFS:

[hadoop@master ~]$ sqoop import --connect jdbc:mysql://master:3306/sqoop --username root --password Root-123 --table gameactive --target-dir  /stat/test --delete-target-dir --num-mappers 1 --fields-terminated-by ','

The command breaks down as follows:

sqoop import \     use the import command

--connect jdbc:mysql://master:3306/sqoop    connect to MySQL over JDBC; the host is master, the port is 3306, and sqoop is the MySQL database name

--username root   connect as the MySQL user root

--password Root-123 the password of that MySQL user

--table gameactive  read the gameactive table from the sqoop database in MySQL

--target-dir /stat/test write the table contents to the /stat/test directory in HDFS

--delete-target-dir if the target directory already exists, delete it before writing

--num-mappers 1   run the MapReduce job with a single map task

--fields-terminated-by ','  separate fields with commas

Running the command starts a MapReduce job in Hadoop:

[hadoop@master ~]$ sqoop import --connect jdbc:mysql://master:3306/sqoop --username root --password Root-123 --table gameactive --target-dir  /stat/test --delete-target-dir --num-mappers 1 --fields-terminated-by ','
Warning: /home/hadoop/sqoop1.4.6/../hbase does not exist! HBase imports will fail.
Please set $HBASE_HOME to the root of your HBase installation.
Warning: /home/hadoop/sqoop1.4.6/../hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /home/hadoop/sqoop1.4.6/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
Warning: /home/hadoop/sqoop1.4.6/../zookeeper does not exist! Accumulo imports will fail.
Please set $ZOOKEEPER_HOME to the root of your Zookeeper installation.
2020-02-13 14:38:17,349 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6-cdh5.14.0
2020-02-13 14:38:17,386 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
2020-02-13 14:38:17,526 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
2020-02-13 14:38:17,529 INFO tool.CodeGenTool: Beginning code generation
Loading class `com.mysql.jdbc.Driver'. This is deprecated. The new driver class is `com.mysql.cj.jdbc.Driver'. The driver is automatically registered via the SPI and manual loading of the driver class is generally unnecessary.
2020-02-13 14:38:18,426 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `gameactive` AS t LIMIT 1
2020-02-13 14:38:18,525 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `gameactive` AS t LIMIT 1
2020-02-13 14:38:18,534 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /home/hadoop/hadoop-3.1.2
Note: /tmp/sqoop-hadoop/compile/3acb0490ad7cb85df80adb8d2b955e47/gameactive.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
2020-02-13 14:38:20,418 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-hadoop/compile/3acb0490ad7cb85df80adb8d2b955e47/gameactive.jar
2020-02-13 14:38:21,378 INFO tool.ImportTool: Destination directory /stat/test is not present, hence not deleting.
2020-02-13 14:38:21,378 WARN manager.MySQLManager: It looks like you are importing from mysql.
2020-02-13 14:38:21,378 WARN manager.MySQLManager: This transfer can be faster! Use the --direct
2020-02-13 14:38:21,378 WARN manager.MySQLManager: option to exercise a MySQL-specific fast path.
2020-02-13 14:38:21,378 INFO manager.MySQLManager: Setting zero DATETIME behavior to convertToNull (mysql)
2020-02-13 14:38:21,386 INFO mapreduce.ImportJobBase: Beginning import of gameactive
2020-02-13 14:38:21,387 INFO Configuration.deprecation: mapred.job.tracker is deprecated. Instead, use mapreduce.jobtracker.address
2020-02-13 14:38:21,456 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
2020-02-13 14:38:21,491 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
2020-02-13 14:38:21,768 INFO client.RMProxy: Connecting to ResourceManager at master/192.168.58.159:8032
2020-02-13 14:38:22,841 INFO mapreduce.JobResourceUploader: Disabling Erasure Coding for path: /tmp/hadoop-yarn/staging/hadoop/.staging/job_1581564243812_0001
2020-02-13 14:38:54,794 INFO db.DBInputFormat: Using read commited transaction isolation
2020-02-13 14:38:55,013 INFO mapreduce.JobSubmitter: number of splits:1
2020-02-13 14:38:55,632 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1581564243812_0001
2020-02-13 14:38:55,634 INFO mapreduce.JobSubmitter: Executing with tokens: []
2020-02-13 14:38:56,023 INFO conf.Configuration: resource-types.xml not found
2020-02-13 14:38:56,023 INFO resource.ResourceUtils: Unable to find 'resource-types.xml'.
2020-02-13 14:38:56,742 INFO impl.YarnClientImpl: Submitted application application_1581564243812_0001
2020-02-13 14:38:56,923 INFO mapreduce.Job: The url to track the job: http://master:8088/proxy/application_1581564243812_0001/
2020-02-13 14:38:56,924 INFO mapreduce.Job: Running job: job_1581564243812_0001
2020-02-13 14:39:34,699 INFO mapreduce.Job: Job job_1581564243812_0001 running in uber mode : false
2020-02-13 14:39:34,702 INFO mapreduce.Job:  map 0% reduce 0%
2020-02-13 14:39:48,428 INFO mapreduce.Job:  map 100% reduce 0%
2020-02-13 14:39:49,482 INFO mapreduce.Job: Job job_1581564243812_0001 completed successfully
2020-02-13 14:39:49,615 INFO mapreduce.Job: Counters: 32
        File System Counters
                FILE: Number of bytes read=0
                FILE: Number of bytes written=234015
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=87
                HDFS: Number of bytes written=41
                HDFS: Number of read operations=6
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=2
        Job Counters
                Launched map tasks=1
                Other local map tasks=1
                Total time spent by all maps in occupied slots (ms)=21786
                Total time spent by all reduces in occupied slots (ms)=0
                Total time spent by all map tasks (ms)=10893
                Total vcore-milliseconds taken by all map tasks=10893
                Total megabyte-milliseconds taken by all map tasks=22308864
        Map-Reduce Framework
                Map input records=2
                Map output records=2
                Input split bytes=87
                Spilled Records=0
                Failed Shuffles=0
                Merged Map outputs=0
                GC time elapsed (ms)=404
                CPU time spent (ms)=2520
                Physical memory (bytes) snapshot=124497920
                Virtual memory (bytes) snapshot=3604873216
                Total committed heap usage (bytes)=40763392
                Peak Map Physical memory (bytes)=124497920
                Peak Map Virtual memory (bytes)=3604873216
        File Input Format Counters
                Bytes Read=0
        File Output Format Counters
                Bytes Written=41
2020-02-13 14:39:49,631 INFO mapreduce.ImportJobBase: Transferred 41 bytes in 88.1291 seconds (0.4652 bytes/sec)
2020-02-13 14:39:49,646 INFO mapreduce.ImportJobBase: Retrieved 2 records.

When the job finishes, the result can be viewed from the web UI or inspected directly with hdfs commands against the /stat/test directory:

[hadoop@master ~]$ hdfs dfs -ls /stat/test
Found 2 items
-rw-r--r--   1 hadoop supergroup          0 2020-02-13 14:39 /stat/test/_SUCCESS
-rw-r--r--   1 hadoop supergroup         41 2020-02-13 14:39 /stat/test/part-m-00000
[hadoop@master ~]$ hdfs dfs -cat /stat/test/part-m-00000
2020-02-13,puke,100
2020-02-13,dizhu,150

The content matches the data created in MySQL exactly, which completes the import from MySQL into HDFS.

Conversely, to load HDFS data into MySQL, the export command is used.

First create a table named getData on the MySQL side:

mysql> use sqoop
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> create table getData(fdate varchar(64),fgamename varchar(64),fcount int);
Query OK, 0 rows affected (0.78 sec)

mysql> show tables;
+-----------------+
| Tables_in_sqoop |
+-----------------+
| gameactive      |
| getData         |
+-----------------+
2 rows in set (0.00 sec)

Then, back on the sqoop side, we export the data previously imported into HDFS (stored at /stat/test) back into MySQL with the following command:

[hadoop@master ~]$ sqoop export --connect jdbc:mysql://master:3306/sqoop --username root --password Root-123 --table getData -m 1 --export-dir '/stat/test' --fields-terminated-by ','

Looking at it in detail:

sqoop export \     use the export command

--connect jdbc:mysql://master:3306/sqoop    connect to MySQL over JDBC; the host is master, the port is 3306, and sqoop is the MySQL database name

--username root   connect as the MySQL user root

--password Root-123 the password of that MySQL user

--table getData  write into the getData table of the sqoop database in MySQL

--export-dir /stat/test read the data to export from the /stat/test directory in HDFS

-m 1   run the MapReduce job with a single map task (shorthand for --num-mappers 1)

--fields-terminated-by ','  fields in the HDFS files are separated by commas

After it finishes, query the table in MySQL:

mysql> select * from getData;
+------------+-----------+--------+
| fdate      | fgamename | fcount |
+------------+-----------+--------+
| 2020-02-13 | puke      |    100 |
| 2020-02-13 | dizhu     |    150 |
+------------+-----------+--------+
2 rows in set (0.04 sec)

As you can see, the data is now in MySQL, completing the export from HDFS back into MySQL.
