Big Data Learning Path: Hive

  • Hive Installation
    • MySQL Installation
      • Preface
      • Installation Steps
        • 1. Extract the archive
        • 2. Install dependencies
        • 3. Configure MySQL after installation
        • 4. Enable start on boot
        • 5. Add environment variables
        • 6. Initialize MySQL
        • 7. Start MySQL
        • 8. Check MySQL status
        • 9. Configure the account
        • 10. Log in with mysql -u root
        • 11. Set the password
    • Hive Installation
  • Hive Assignments
    • Project 1
      • Task 1: Build the exposure table and the click table from train.csv.
      • Task 2: Merge the two tables back together and create a Hive table.
      • Task 3: Find each user's three favorite items (basketball, game console, big data) from 30 days of data, representing medium/long-term interest.
      • Task 4: Find the three items each user clicked most recently: short-term (immediate) interest (T-shirts, shorts, sandals)
      • Task 5: Click-through rate of each item.
      • Task 6: Find the 100 most active users (weighing clicks and exposures together; requires learning Wilson smoothing)
      • Task 7: Generate each user's sequence of the 50 most recently clicked items. user_id [i1,i2,....,i50]
    • Project 2
      • Task 1: Create Hive tables for orders and product
      • Task 2: Count how many orders each user has
      • Task 3: Average number of products per order for each user.
      • Task 4: Distribution of each user's orders over the days of the week
      • Task 5: Average number of products a user buys per month (30-day months).
    • Project 3
      • 1. Determine whether the timestamp in the movies table is a weekend and append it as the last column (UDF)
      • 2. Implement edit distance in Hive (UDF)
      • 3. Implement Wilson smoothing in Hive (UDF)

Hive Installation

MySQL Installation

Preface

Macs with the M1 chip can only run ARM-architecture images in a virtual machine, so many of the MySQL installation guides online do not work. After a day of fiddling I found a tutorial that does (personally tested); it is recorded here for future reference.
Reference: https://blog.csdn.net/weixin_46498976/article/details/122086624

Installation Steps

1. Extract the archive

Extracting with -zxvf fails with the error gzip: stdin: not in gzip format, because this archive is not gzip-compressed, so drop the z flag:

tar -xvf mysql-5.7.27-aarch64.tar.gz

2. Install dependencies

yum install -y libaio*

3. Configure MySQL after installation

//1 Move the files

mv /usr/local/mysql-5.7.27-aarch64 /usr/local/mysql

//2 Create the logs directory

mkdir -p /usr/local/mysql/logs

//3 ln -sf a b creates a symlink: b points to a

ln -sf /usr/local/mysql/my.cnf /etc/my.cnf

//4 cp is the Linux copy command; -r recurses into directories, -f forces overwrite (answer y if asked)

cp -rf /usr/local/mysql/extra/lib* /usr/lib64/

//5 Move the old dependency out of the way (answer y if asked to overwrite)

mv /usr/lib64/libstdc++.so.6 /usr/lib64/libstdc++.so.6.old

//6 Symlink the dependency

ln -s /usr/lib64/libstdc++.so.6.0.24 /usr/lib64/libstdc++.so.6

//7 Create the mysql group (skip if it already exists)

groupadd mysql

//8 Create the mysql user and add it to the mysql group (skip if it already exists)

 useradd -g mysql mysql

//9 Change the owner and group of /usr/local/mysql, including all subdirectories and files, to mysql:mysql

chown -R mysql:mysql /usr/local/mysql

4. Enable start on boot

 cp -rf /usr/local/mysql/support-files/mysql.server /etc/init.d/mysqld
chmod +x /etc/init.d/mysqld
systemctl enable mysqld

5. Add environment variables

//Edit the config file
vim /etc/profile
or
vim ~/.bashrc
//Add the variables
export MYSQL_HOME=/usr/local/mysql
export PATH=$PATH:$MYSQL_HOME/bin
//Make the changes take effect
source /etc/profile

6. Initialize MySQL

mysqld --initialize-insecure --user=mysql --basedir=/usr/local/mysql --datadir=/usr/local/mysql/data

7. Start MySQL

systemctl start mysqld

8. Check MySQL status

systemctl status mysqld

9. Configure the account

Edit /usr/local/mysql/my.cnf and, below the **[mysqld]** line,
add skip-grant-tables //this skips password validation

vim /usr/local/mysql/my.cnf

10. Log in with mysql -u root

Because skip-grant-tables is configured, this login does not require a password.

11. Set the password

The following commands are run in the mysql client.

//Now set a real password; skipping password validation was only temporary. Put the password you want inside the parentheses.
SET PASSWORD = PASSWORD('123456');
ALTER USER 'root'@'localhost' PASSWORD EXPIRE NEVER;
//Note: if you get ERROR 1396 (HY000): Operation ALTER USER failed for 'root'@'localhost', change localhost to %, i.e.:
ALTER USER 'root'@'%' PASSWORD EXPIRE NEVER;
FLUSH PRIVILEGES;
//Allow root to connect from any host
update user set host = '%' where user = 'root';
//Refresh privileges
FLUSH PRIVILEGES;

Hive Installation

bash: wget: command not found

yum -y install wget
#set java environment
export JAVA_HOME=/usr/local/src/jdk1.8.0_172
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
export PATH=$PATH:${JAVA_HOME}/bin
#set hadoop environment
export HADOOP_HOME=/usr/local/src/hadoop-2.6.1
export PATH=$PATH:${HADOOP_HOME}/bin:${HADOOP_HOME}/sbin
#set hive environment
export HIVE_HOME=/usr/local/src/apache-hive-1.2.2-bin
export PATH=$PATH:${HIVE_HOME}/bin

Hive relies on HDFS in Hadoop for storage and on MySQL to manage its metadata.

After the Hadoop cluster starts, wait a while for Hadoop to leave safe mode; only then can Hive be started.

Check the Hadoop cluster status:

hadoop dfsadmin -report

If the report shows Safe mode is ON, Hive cannot be started yet; wait a bit longer.

Delete the old jline-0.9.94.jar that ships with Hadoop, located at:
$HADOOP_HOME/share/hadoop/yarn/lib/jline-0.9.94.jar

Copy jline-2.12.jar from Hive into Hadoop:

cp lib/jline-2.12.jar $HADOOP_HOME/share/hadoop/yarn/lib/

Hive Assignments

Project 1

Task 1: Build the exposure table and the click table from train.csv.

Requirements:
table1: badou_bigdata_exp_table
table2: badou_bigdata_click_table
Approach: building the exposure and click tables requires the combined table first. Since the data sits locally in train.csv, start by inspecting the CSV.

[root@master hive_test]# cat train.csv | head -10
id,target,timestamp,deviceid,newsid,guid,pos,app_version,device_vendor,netmodel,osversion,lng,lat,device_version,ts
1,0,NULL,8b2d7f2aed47ab32e9c6ae4f5ae00147,8008333091915950969,9a2c909ebc47aec49d9c160cdb4a6572,1,2.1.5,HONOR,g4,9,112.538518,37.837926,STF-AL00,1573298086436
2,0,NULL,8b2d7f2aed47ab32e9c6ae4f5ae00147,8008333091915950969,9a2c909ebc47aec49d9c160cdb4a6572,1,2.1.5,HONOR,w,9,111.731183,35.622741,STF-AL00,1573298087570
3,0,NULL,832aaa33cdf4a0938ba2c795eb3ffefd,4941885624885390992,d51a157d2b1e0e9aed4dd7f9900b85b2,2,1.9.9,vivo,w,8.1.0,4.9E-324,4.9E-324,V1818T,1573377075934
4,0,NULL,832aaa33cdf4a0938ba2c795eb3ffefd,6088376349846612406,d51a157d2b1e0e9aed4dd7f9900b85b2,1,1.9.9,vivo,w,8.1.0,4.9E-324,4.9E-324,V1818T,1573377044359
5,0,NULL,67dd9dac18cce1a6d79e8f20eefd98ab,5343094189765291622,625dc45744f59ddbc3ec8df161217188,0,2.1.1,xiaomi,w,9,116.750876,36.56831,Redmi Note 7,1573380989662
6,0,NULL,04813dbae7d339a61f38d648e77b2c28,3734327341629052372,3bc11f585ac7b18d7997fa83e19aa439,1,2.1.5,OPPO,o,8.1.0,4.9E-324,4.9E-324,PBAM00,1573280467883
7,0,NULL,04813dbae7d339a61f38d648e77b2c28,5518070787661276860,3bc11f585ac7b18d7997fa83e19aa439,2,2.1.5,OPPO,o,8.1.0,4.9E-324,4.9E-324,PBAM00,1573280458325
8,0,NULL,04813dbae7d339a61f38d648e77b2c28,6167225445325229993,,0,2.1.5,OPPO,w,8.1.0,0.0,0.0,PBAM00,1573280222575
9,0,NULL,04813dbae7d339a61f38d648e77b2c28,8963857601701307537,3bc11f585ac7b18d7997fa83e19aa439,2,2.1.5,OPPO,o,8.1.0,4.9E-324,4.9E-324,PBAM00,1573280474186

The data has a header row, so the train table is created according to that header, and because the source file carries the header we add the table property that skips the first line:

CREATE TABLE `bigdata_train`(`id` string, `target` string, `timstamp` string, `deviceid` string, `newsid` string, `guid` string, `pos` string, `app_version` string,`device_vendor` string,  `netmodel` string, `osversion` string, `lng` string, `lat` string,`device_version` string, `ts` string)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' tblproperties("skip.header.line.count"="1");

Then load the source data into the table:

load data local inpath '/usr/local/hive_test/train.csv' into table bigdata_train;

Finally, inspect the data in the table:

select * from bigdata_train limit 10;
OK
id  target  timstamp    deviceid    newsid  guid    pos app_version device_vendor   netmodel    osversion   lng lat device_version  ts
1   0   NULL    8b2d7f2aed47ab32e9c6ae4f5ae00147    8008333091915950969 9a2c909ebc47aec49d9c160cdb4a6572    1   2.1.5   HONOR   g4  112.538518  37.837926   STF-AL00    1573298086436
2   0   NULL    8b2d7f2aed47ab32e9c6ae4f5ae00147    8008333091915950969 9a2c909ebc47aec49d9c160cdb4a6572    1   2.1.5   HONOR   w   111.731183  35.622741   STF-AL00    1573298087570
3   0   NULL    832aaa33cdf4a0938ba2c795eb3ffefd    4941885624885390992 d51a157d2b1e0e9aed4dd7f9900b85b2    2   1.9.9   vivo    w   8.1.0   4.9E-324    4.9E-324    V1818T  1573377075934
4   0   NULL    832aaa33cdf4a0938ba2c795eb3ffefd    6088376349846612406 d51a157d2b1e0e9aed4dd7f9900b85b2    1   1.9.9   vivo    w   8.1.0   4.9E-324    4.9E-324    V1818T  1573377044359
5   0   NULL    67dd9dac18cce1a6d79e8f20eefd98ab    5343094189765291622 625dc45744f59ddbc3ec8df161217188    0   2.1.1   xiaomi  w   116.750876  36.56831    Redmi Note 7    1573380989662
6   0   NULL    04813dbae7d339a61f38d648e77b2c28    3734327341629052372 3bc11f585ac7b18d7997fa83e19aa439    1   2.1.5   OPPO    o   8.1.0   4.9E-324    4.9E-324    PBAM00  1573280467883
7   0   NULL    04813dbae7d339a61f38d648e77b2c28    5518070787661276860 3bc11f585ac7b18d7997fa83e19aa439    2   2.1.5   OPPO    o   8.1.0   4.9E-324    4.9E-324    PBAM00  1573280458325
8   0   NULL    04813dbae7d339a61f38d648e77b2c28    6167225445325229993     0   2.1.5   OPPO    w   8.1.0   0.0 0.0 PBAM00  1573280222575
9   0   NULL    04813dbae7d339a61f38d648e77b2c28    8963857601701307537 3bc11f585ac7b18d7997fa83e19aa439    2   2.1.5   OPPO    o   8.1.0   4.9E-324    4.9E-324    PBAM00  1573280474186
10  0   NULL    9a887c7be5401571603f912eb3ba172f    1068538578400892892 b04dfb77636b0e593f54b08d9eda0c5f    1   2.1.5   HUAWEI  o   4.9E-324    4.9E-324    FLA-AL20    1573311961694
Time taken: 0.047 seconds, Fetched: 10 row(s)

Next, build the click table and the exposure table from this table.
First the click table:

create table badou_bigdata_click_table as select * from bigdata_train where target=1;

Then verify the data:

select * from badou_bigdata_click_table limit 5;
OK
id  target  timstamp    deviceid    newsid  guid    pos app_version device_vendor   netmodel    osversion   lng lat device_version  ts
75  1   1573221217466   fc2537a764aeebad1d9738bd835830c1    1441205267295707622 7a7e251a3a8a3e51f304558189d920f8    0   2.1.5   OPPO    8.1.0   106.051689  27.969893   PBCM10  1573221210580
87  1   1573364503073   fc2537a764aeebad1d9738bd835830c1    2492493898394443843 7a7e251a3a8a3e51f304558189d920f8    5   2.1.5   OPPO    8.1.0   4.9E-324    4.9E-324    PBCM10  1573364373414
88  1   1573226171344   fc2537a764aeebad1d9738bd835830c1    2672348718155449426 7a7e251a3a8a3e51f304558189d920f8    5   2.1.5   OPPO    8.1.0   106.051689  27.969893   PBCM10  1573225982895
95  1   1573223482356   fc2537a764aeebad1d9738bd835830c1    3605107585850541024 7a7e251a3a8a3e51f304558189d920f8    4   2.1.5   OPPO    8.1.0   106.051689  27.969893   PBCM10  1573223474473
97  1   1573223238360   fc2537a764aeebad1d9738bd835830c1    3669597114738367832 7a7e251a3a8a3e51f304558189d920f8    3   2.1.5   OPPO    8.1.0   106.051689  27.969893   PBCM10  1573223197376
Time taken: 0.03 seconds, Fetched: 5 row(s)

Then the exposure table:

create table badou_bigdata_exp_table as select * from bigdata_train where target=0;

And verify the data again:

select * from badou_bigdata_exp_table limit 5;
OK
id  target  timstamp    deviceid    newsid  guid    pos app_version device_vendor   netmodel    osversion   lng lat device_version  ts
1   0   NULL    8b2d7f2aed47ab32e9c6ae4f5ae00147    8008333091915950969 9a2c909ebc47aec49d9c160cdb4a6572    1   2.1.5   HONOR   g4  112.538518  37.837926   STF-AL00    1573298086436
2   0   NULL    8b2d7f2aed47ab32e9c6ae4f5ae00147    8008333091915950969 9a2c909ebc47aec49d9c160cdb4a6572    1   2.1.5   HONOR   w   111.731183  35.622741   STF-AL00    1573298087570
3   0   NULL    832aaa33cdf4a0938ba2c795eb3ffefd    4941885624885390992 d51a157d2b1e0e9aed4dd7f9900b85b2    2   1.9.9   vivo    w   8.1.0   4.9E-324    4.9E-324    V1818T  1573377075934
4   0   NULL    832aaa33cdf4a0938ba2c795eb3ffefd    6088376349846612406 d51a157d2b1e0e9aed4dd7f9900b85b2    1   1.9.9   vivo    w   8.1.0   4.9E-324    4.9E-324    V1818T  1573377044359
5   0   NULL    67dd9dac18cce1a6d79e8f20eefd98ab    5343094189765291622 625dc45744f59ddbc3ec8df161217188    0   2.1.1   xiaomi  w   116.750876  36.56831    Redmi Note 7    1573380989662
Time taken: 0.055 seconds, Fetched: 5 row(s)

Task 2: Merge the two tables back together and create a Hive table.

Approach: use the exposure table as the driving table in a left join, joining the exposure and click tables on guid:

create table badou_bigdata_exp_click_table as select t1.* from badou_bigdata_exp_table t1 left join badou_bigdata_click_table t2 on t1.guid=t2.guid;

Later I looked at the reference answer's CREATE statement and realized I had oversimplified: my version runs far too slowly and eats a lot of memory:

CREATE TABLE badou_bigdata_train_table_tmp AS SELECT
a.id,
IF( b.target IS NULL, a.target, b.target ) AS target,
IF( b.timstamp IS NULL, a.timstamp, b.timstamp ) AS click_ts,
a.deviceid, a.newsid, a.guid, a.pos, a.app_version, a.device_vendor, a.netmodel, a.osversion, a.lng, a.lat, a.device_version, a.ts
FROM badou_bigdata_exp_table a
LEFT JOIN badou_bigdata_click_table b ON a.id = b.id;

Final execution result:

id   target  click_ts    deviceid    newsid  guid    pos app_version device_vendor   netmodel    osversion   lng lat device_version  ts
10000001    0   1573357426026   6a1df68b8badad6f4b641c8f582f2322    2689664997029714708 611bd0f8766753618591263272132843    0   2.1.3   vivo    o   6.0 106.974097  30.831128   vivo Y67L   1573357426026
10000045    0   1573365131299   ad93d3089241405100fb1ff8735991ee    1556057430065320256 cc92b20981adda0508fdc4c4d0ab5538    1   2.1.5   Xiaomi  o   6.0.1   120.102082  43.891799   Redmi 4 1573365131299
10000056    0   1573362918393   ad93d3089241405100fb1ff8735991ee    3766835294457650467 cc92b20981adda0508fdc4c4d0ab5538    1   2.1.5   Xiaomi  g4  6.0.1   120.097242  43.884301   Redmi 4 1573362918393
10000067    0   1573363077965   ad93d3089241405100fb1ff8735991ee    6192056377011693164 cc92b20981adda0508fdc4c4d0ab5538    0   2.1.5   Xiaomi  g4  6.0.1   120.097242  43.884301   Redmi 4 1573363077965
1000007     0   1573203411058   ca738b1dc69585e84a61cb3038ac6edf    1689893232489885440 8176384ec7c48958461fe265ebc0fa1b    4   2.1.5   HUAWEI  8.1.0   109.038469  34.365967   JKM-AL00    1573203411058
10000070    0   1573366037588   ad93d3089241405100fb1ff8735991ee    6320670222649107091 cc92b20981adda0508fdc4c4d0ab5538    0   2.1.5   Xiaomi  o   6.0.1   120.102082  43.891799   Redmi 4 1573366037588
10000078    0   1573363678362   ad93d3089241405100fb1ff8735991ee    7671456503114363206 cc92b20981adda0508fdc4c4d0ab5538    3   2.1.5   Xiaomi  o   6.0.1   120.097242  43.884301   Redmi 4 1573363678362
10000081    0   1573363470588   ad93d3089241405100fb1ff8735991ee    7973189610803366386 cc92b20981adda0508fdc4c4d0ab5538    2   2.1.5   Xiaomi  o   6.0.1   120.097242  43.884301   Redmi 4 1573363470588
10000089    0   1573366745792   ad93d3089241405100fb1ff8735991ee    9071096611567366815 cc92b20981adda0508fdc4c4d0ab5538    1   2.1.5   Xiaomi  g4  6.0.1   120.102082  43.891799   Redmi 4 1573366745792
10000092    0   1573312231182   d254b9cb38a2121e0092d9f9ea19f3a9    2452773528755707692 cfebf86187e824f3a288d278c16b504f    1   2.1.5   xiaomi  o   9   104.050882  30.551165   Redmi 6 1573312231182
Time taken: 0.1 seconds, Fetched: 10 row(s)

Summary: the run took too long; output only the columns you actually need rather than t1.*.
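
As a rough sketch of that idea (the slimmed column list is only illustrative and the table name badou_bigdata_exp_click_slim is made up):

CREATE TABLE badou_bigdata_exp_click_slim AS
SELECT a.id, a.guid, a.newsid, a.ts,
       IF( b.target IS NULL, a.target, b.target ) AS target,
       IF( b.timstamp IS NULL, a.timstamp, b.timstamp ) AS click_ts
FROM badou_bigdata_exp_table a
LEFT JOIN badou_bigdata_click_table b ON a.id = b.id;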

Task 3: Find each user's three favorite items (basketball, game console, big data) from 30 days of data, representing medium/long-term interest.

Approach: the three favorite items are the three most-clicked items, so the precondition is target=1; "most clicked" means grouping and ranking, which gives this SQL:

select tt.* from (select
t.guid ,t.newsid,t.cnt,
row_number()
over(partition by guid order by cnt desc) as rk
from
(select guid, newsid,count(1) as cnt from bigdata_train where target=1 group by guid,newsid) as t) as tt
where tt.rk <= 3;

Execution result:

00013192d769aaa12bfcdf95dd8ca28d 8638420900855054837 1   1
00013192d769aaa12bfcdf95dd8ca28d    8506365202084168148 1   2
0001e51aba755cd8e61c4c70966278a7    5929348546758534097 2   1
0001e51aba755cd8e61c4c70966278a7    2748650470947630453 2   2
0001e51aba755cd8e61c4c70966278a7    1302860292809481631 2   3
00029c2fa4ea7c0321f96489cc644bc3    8682411529501352213 1   1
00029c2fa4ea7c0321f96489cc644bc3    2044028105311489068 1   2
...
Time taken: 58.143 seconds, Fetched: 10 row(s)

Since exactly three items were requested, the filter is rk <= 3.
Summary: overall this one was not easy. Having mostly written MySQL before, my first instinct was a plain GROUP BY, which never worked; after going back through the slides I remembered row_number() over() and finished the task.
Reference answer:

select guid,collect_list(newsid) from (select guid,newsid,row_number() over(partition by guid order by cnt desc) as rank from (
select guid,newsid,count(newsid) as cnt from badou_bigdata_click_table group by  guid,newsid) t1) t2
where rank <=3 group by guid;

The reference answer additionally collapses each user's rows into an array with collect_list; otherwise it is roughly the same.

Task 4: Find the three items each user clicked most recently: short-term (immediate) interest (T-shirts, shorts, sandals)

Approach: the three most recently clicked items again call for grouping and ranking: partition by guid and sort by the click timestamp. The logic of this task is fairly clear.

select * from (select guid,newsid,timstamp,row_number()
over(partition by guid order by timstamp desc) as rk from bigdata_train where target=1 ) t where t.rk <= 3;

Execution results:

ffd1548a5655321073f7c8360435219b 4112331170558803086 1573360758625   1
ffd1548a5655321073f7c8360435219b    880559533888696092  1573360668042   2
ffd1548a5655321073f7c8360435219b    2153077858044836569 1573360611649   3
ffe95562f720835e85083e4c67975f32    3802416784763620931 1573397232512   1
ffe95562f720835e85083e4c67975f32    3802416784763620931 1573397232512   2
ffe95562f720835e85083e4c67975f32    1057173856846978076 1573394470595   3
ffec479985df43275ba56ad2e31d1254    8108519714549931518 1573177347308   1
ffec479985df43275ba56ad2e31d1254    4118016728546004807 1573177323496   2
fff3741aca586fa069c771947f1dffed    8869791493312456230 1573199847812   1
fff3741aca586fa069c771947f1dffed    6556160639576326434 1573176199475   2
fff3741aca586fa069c771947f1dffed    1195930531095517938 1573175673396   3
Time taken: 41.552 seconds, Fetched: 98098 row(s)

Summary: sort by timestamp and take the top three; again an exercise in per-user grouping and ranking.

select guid,collect_list(newsid) from (select guid,newsid,row_number() over(partition by guid order by click_ts desc) as rank from badou_bigdata_click_table) t1
where rank <=3 group by guid;

Here, too, the reference answer collapses the rows into an array with collect_list.

Task 5: Click-through rate of each item.

Approach: each item's click-through rate needs the item's exposure count and click count, which are then divided. Again this means grouping by newsid and counting.
SQL breakdown: first, number the exposure rows within each item:

select newsid,row_number()
over(partition by newsid) as rk from bigdata_train

Then do the same for each item's click rows:

select newsid,row_number()
over(partition by newsid) as rk from bigdata_train where target=1

Both results are partitioned by newsid with the rows numbered one after another, so to get each item's exposure count and click count we take the maximum rk within each partition. First the maximum for the click rows:

select t1.newsid,max(t1.crk) as mcrk from (select newsid,row_number()
over(partition by newsid) as crk from bigdata_train where target=1) t1 group by t1.newsid;

Then the maximum for the exposure rows:

select t.newsid,max(t.erk) as merk from (select newsid,row_number()
over(partition by newsid) as erk from bigdata_train) t group by t.newsid;

Finally, left-join the two and divide the maxima, giving the final SQL:

SELECT t3.newsid, t2.mcrk / t3.merk AS clickrate
FROM (
  SELECT t.newsid, max( t.erk ) AS merk
  FROM ( SELECT newsid, row_number() over ( PARTITION BY newsid ) AS erk FROM bigdata_train ) t
  GROUP BY t.newsid
) t3
LEFT JOIN (
  SELECT t1.newsid, max( t1.crk ) AS mcrk
  FROM ( SELECT newsid, row_number() over ( PARTITION BY newsid ) AS crk FROM bigdata_train WHERE target = 1 ) t1
  GROUP BY t1.newsid
) t2 ON t2.newsid = t3.newsid;

Execution result:

999918175504661534   0.18181818181818182
999924645121733028  NULL
999937189875401965  NULL
999960016917199008  0.42857142857142855
999967025295140281  NULL
999970982281264221  NULL
999986525801800888  NULL
999991340726885346  NULL
999993305518446597  NULL
999998186709813321  NULL
999998848290941824  NULL
Time taken: 146.808 seconds, Fetched: 1152851 row(s)

Summary: the rows with a NULL click rate should be items that were exposed but never clicked. In principle an IS NOT NULL condition should handle them, but adding it here raises an error, so this still needs work; showing 0 (or hiding those rows) would be best.
Looking at the reference answer, it handles this with if(b.newsid is null, 0, ...), which solves exactly that difficulty, and those rows then come out as a clean 0.0.

select a.newsid, if(b.newsid is null, 0, b.cnt/a.cnt) as click_rate from (
select newsid, count(*) as cnt from badou_bigdata_exp_table group by newsid) a
left join (
select newsid, count(*) as cnt from badou_bigdata_click_table group by newsid) b on a.newsid = b.newsid;

Execution result:

1000139192407391952  0.0
1000145871444286037 0.0
100024892600557038  0.0
1000253651000827025 0.0
1000259560190098179 0.0
1000316363569490597 0.0
1000318637413729390 0.0
1000331905135857967 0.0
1000477678476758838 0.0
1000542888543280244 0.0

Task 6: Find the 100 most active users (weighing clicks and exposures together; requires learning Wilson smoothing)

Approach: Wilson smoothing needs the click count and the exposure count. Since the goal is the most active users, group by user, derive the click count and the exposure count, then apply the Wilson calculation. Because some click counts are NULL we need if(), and because the counts are treated as strings we also need cast() for type conversion.
Step 1: get the click counts:

select guid,count(1) as clkcnt from bigdata_train where target=1 group by guid;

Step 2: get the exposure counts:

select t2.guid, if(t2.expcnt is null, 0, expcnt) as expcnt from (select guid, count(1) as expcnt from bigdata_train group by guid) as t2;

Step 3: join the two queries:

SELECT
IF( t.clkcnt IS NULL, 0, clkcnt ) AS clkcnt, t3.*
FROM ( SELECT guid, count( 1 ) AS clkcnt FROM bigdata_train WHERE target = 1 GROUP BY guid ) AS t
LEFT JOIN (
  SELECT t2.guid, IF( t2.expcnt IS NULL, 0, expcnt ) AS expcnt
  FROM ( SELECT guid, count( 1 ) AS expcnt FROM bigdata_train GROUP BY guid ) AS t2
) AS t3 ON t.guid = t3.guid;

Finally add the Wilson smoothing UDF and the ordering:

SELECT tt.guid, wilson( cast( tt.expcnt AS INT ), cast( tt.clkcnt AS INT ) ) AS wilson
FROM (
  SELECT IF( t.clkcnt IS NULL, 0, clkcnt ) AS clkcnt, t3.*
  FROM ( SELECT guid, count( 1 ) AS clkcnt FROM bigdata_train WHERE target = 1 GROUP BY guid ) AS t
  LEFT JOIN (
    SELECT t2.guid, IF( t2.expcnt IS NULL, 0, expcnt ) AS expcnt
    FROM ( SELECT guid, count( 1 ) AS expcnt FROM bigdata_train GROUP BY guid ) AS t2
  ) AS t3 ON t.guid = t3.guid
) AS tt
ORDER BY wilson DESC LIMIT 100;

Final execution result:

Total MapReduce CPU Time Spent: 47 seconds 50 msec
OK
guid    wilson
ea49ef85b4513b2b53a7593cfed51102    0.89902964411057
ef00b37148074fc61aeddc0006605088    0.8921277928914595
fe1df17758bed16817d3a7f0930cdb04    0.8827051546326686
ee21cd588eed69a8a973c88dd302498d    0.8614933012300694
36de76116aaf0436aad9f1a3eaf7c4b3    0.8511303354362276
860151e5b0238fca47edd80cb4d126d7    0.8496001330468701
bed028408bb4d239bd479dcce579aaa4    0.8463950191745091
2ba6f79849899ccf58e203c698d41639    0.8411683067399609
e423f3fb2af92671afde796772e89d2f    0.8386720295640713
46a4b19de7f938f9c810501994e99021    0.8386259241508256
b4fc73df8b79b0fe788f228792e2a0a7    0.8334130500853825
e02de6af1b812f899f9a5a4bd2724b58    0.8282128713452332
cd878614d37f0619635832edd42c4bae    0.8275024889834934
f558dd8d8a718fff90f4945036f7bc67    0.8244225979130076
920c0bc73f25e1a92cc25a6e467ab762    0.8224841390615409
c730e9dda3d89144d57def2f45c567ad    0.8185653640996304
a0eb4e308e5ddadbd5361b814cee41f6    0.8177138429375862
cbf2c1ed1e9db56a0db5654888665186    0.8166725421340469
4312bebfcfb3e906267932f2eb673ac7    0.8148928999011745
de07a48ad757418bd52e6bb102f06562    0.8142508239666775
95d08adae517efd2d000c7f3493aede6    0.8139419434528171
e6476800aaf3d567f049dec2bd432c3d    0.8113604502381572
6acdcc498e9f30c6b23aadbee3f7ac05    0.8110726048346116
518a5f705190c63ac84eb2c4de069e32    0.8110726048346116
1e3a5ad387f365bd121a3d3633da3729    0.8082272286657962
a244ed8746bcc3f57e47cf444e56465a    0.8077756475006052
561f2f5f0e1afd499696d3521fc52f28    0.8076733116077394
592e5cc962b5b47497050c1ffa216379    0.8055833096907443
72e9df947deb348f107bd419fc2eeb50    0.8037559449211442
f9bebf8045af7953173c54f1fdf74a2d    0.8037277708944264
79328fa55ccb10fa00f89b25e86eb500    0.8035951995756149
975751d6299d4fc5d0bcd25ad70173e4    0.8035578968673404
db4fba37b5ac1eec7490a9225da7b143    0.8034246335339559
94cb52d043bbb6b729f890f50b07e37b    0.8019793792886702
e552f8adddefda4580bca8dac0647d9e    0.8000247669282198
6837b76b90bc1a76d19383f20d86fa0b    0.7955550972010875
85b1f9ac6967633b0536f39eea0f30ba    0.7941277411371283
d11fd863197cd3afb17194faa779257f    0.7920898080323314
f6018565e82d7bb86d64d0c0c13a3a97    0.7913514372147344
cc0f1866f9d45055efcb73b303230045    0.7910024697313413
33f1ea5c1370014d4f451f27d4e796ca    0.7909702557243253
5d6cf7ac4910372961e027ed78271f27    0.7905541801065626
c117d058ff915cc166ac81007b339c96    0.7902261778609331
fe387004478bcc805dead8ddf1d4e31f    0.7888525440034053
97b82b34d7ebc3453140aa12bab408d0    0.7887694871798254
45a4af65fbc133ec48b802eb16ae8628    0.7885550784074078
320a631105ea18c3751000da116908e4    0.7858469006390867
78507fc8a9443269b5d807d84229d6b6    0.7831875870551889
0978bdda57b67e019538d2220fffd168    0.7830523225762857
e8f68f13a0568962ce6b47f4e30a094e    0.783048732706701
3094d4da7ec95d392447308b6fb69d7f    0.7827147160717774
af678f279da2013f2fb430fc130e1fff    0.782010513914882
05835e0068f87e7f0ddd71e90314db42    0.781516558359416
04fafd3fd4381b7b3d7dae309aab9f68    0.7811809183000028
b8abb4395a83be73133df37e5ce309fe    0.7802661877999727
553723a72a2ecd8bd1d3d0b5a931bb51    0.7787894209720949
68f3d8e063cbb799bfe090ab3a986ab0    0.7777653702188724
7c80893f4d3a0de7805f11574965ac53    0.7777124388062419
1479d15c2e000ede07e2ee3e234a0081    0.7754812015122048
fa630c12ff790239058afb645361bb61    0.7754562859218574
0710f1d3883e68c793eacb34067eecf5    0.7742118350431032
487bc5a52ea25082a03ce24b214c4a49    0.7713758783621926
78cade405ada3f3f40f75f75516fb1e2    0.7708533354052572
73ab47da33a01d5a9a3725ef0d6a4f9b    0.7707864539464352
1ae5883068957729b29d4d6b31391b38    0.7704677703813373
d9ac7119982e97b1718264958b9c8f2a    0.7699800959567935
f2bdf467e915ea81c3a65f5ee1bb9ffb    0.7699297519648146
6eda74cd616991864136901f913ffcf1    0.76857964100311
27120eef0304d918b47b913ee20dc140    0.7685111021112772
5cb85acece7e2c6f015a501b002f608a    0.7684316303564702
0d3b65a44f5ca71c24e92e6d6bfb161e    0.7680830858540515
674344ec7b9582ec8cbfe3edb6c39d70    0.7678359543135352
f3186ca7f980db2d3e4cda64b7ac139c    0.7673885071287374
8186afb75425bb2e30cfda1b547cdb1c    0.7671414950239199
b207f1fe87d4ea2a634937851b983dc3    0.7668255412481352
3f779ae21775415227e191b648cf4ad0    0.7664845122213719
05146e227a1930cc20d265620735f7dc    0.7664064776311262
46ef66628655b5a93603adcc1d9267d1    0.7659201260870553
9b7593d10b7f47486ec293897c024beb    0.7655732152231812
cda0580eb2152626f70db7ab9a8716ef    0.7647936689526424
29c3cbe5df9eccf7c9e0d23f681c90c6    0.7643631195680398
1c57c83ee1f661a34a3babeb02cf7be2    0.7642982561659349
5d0bc1a3b98ff947bff6592ed809f96f    0.7642857757189578
2bea643aa2d1e01b6a2299c29861a09e    0.7635436538520467
966850298eff276a9c3b5d63e6536b74    0.760884053765486
e96be31edc909b076248583dd87e25b9    0.7607420473445721
26d6c1943c9e553ab8fd626d18aa9864    0.7603047471272383
c52a72f4ae0796c3db15e50411cac320    0.760021765883857
90e791ce5e55dcd1894113ae53d082b7    0.7600130342104535
b8f14cfa021cc732f730d4fe000ef5a0    0.759969251698576
6800f5b098dd5018abc088142f0a9563    0.759733750477506
bbd40e0435f14c7c1ceadffd5938ad96    0.7590587887300745
62b0df9a05a8466ac4ed4e7283e97cce    0.75903603332202
40321358ca00bd233cd107acb905be2b    0.7586875813276531
78257fb13aef1e6df00ef2ce3ebfc922    0.7585805269967948
daead463d7dd9cf0c86eb2afc01dd73f    0.7585612540911103
20b9dac6599809531d1188147e43ee29    0.7585612540911103
4d196ea092c2b1051850d7adc82bc714    0.7585257953921132
946576450c02d78016cb9e01345bae9d    0.7558513028824161
60cc49dbb2ec34b46c23676455c40284    0.7556922928736096
Time taken: 106.057 seconds, Fetched: 100 row(s)

Summary: it works, but there are too many MapReduce stages and execution is slow; the SQL needs some optimization to reduce the number of MR jobs and shorten the runtime.
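
One possible direction, as a minimal sketch rather than a tested rewrite (it assumes the wilson UDF from Project 3 is already registered and that target marks clicks with 1): compute exposures and clicks in a single scan with a conditional sum instead of joining two grouped subqueries.

SELECT guid,
       wilson( cast( count(1) AS INT ), cast( sum( IF( target = 1, 1, 0 ) ) AS INT ) ) AS wilson_score
FROM bigdata_train
GROUP BY guid
ORDER BY wilson_score DESC
LIMIT 100;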

Task 7: Generate each user's sequence of the 50 most recently clicked items. user_id [i1,i2,…,i50]

Approach: "for each user" means grouping and ranking; "the 50 most recent clicks" means ordering by the click timestamp in descending order; "item sequence" means collapsing the rows into a list. With the analysis clear, write the SQL step by step.
Step 1: group and rank:

select guid,newsid,timstamp,row_number()
over(partition by guid order by timstamp) as rk from bigdata_train where target=1

Simple enough. Step 2: collapse the rows into a list and keep at most 50 per user:

SELECT t.guid, collect_list( t.newsid ) AS recentnewsid
FROM ( SELECT guid, newsid, timstamp, row_number() over ( PARTITION BY guid ORDER BY timstamp ) AS rk FROM bigdata_train WHERE target = 1 ) AS t
WHERE rk <= 50
GROUP BY t.guid;

Final execution result:

ffec479985df43275ba56ad2e31d1254 ["4118016728546004807","8108519714549931518"]
ffed6e0b5f83531d7587c68dd644c785    ["2453561375252023859"]
ffee03c4247b2ee150a1b5fa0cdb9214    ["8467359165217783541","703065440493764740","1381165483795584578","203506996471007831","7494515499930728612"]
ffee76f077b86f9c0e6a65b1d33efbd3    ["2166833252280174795","4128994139048058116","3824310398408621415"]
fff3741aca586fa069c771947f1dffed    ["1195930531095517938","6556160639576326434","8869791493312456230"]
fff58a9eedd5b038cabe25fdb3d0ce9b    ["2002073809129457968","506065036726949814","1844438586786454530","8722643406615612540","1350485643533617988"]
fff67299c66b5fc54dc8a7981d0f4c63    ["8978175858525193676","9067014443934292631"]
fff75b7680f66f1641775313a09ebd57    ["6951322394014417577","3188913944478160153","548751388329785434","3617717083438391728","2173131019721020080","1345413116448328580","1143233079472544785"]
fff866213ce7c58de08155f143dcdccd    ["8964567149806643055","8366589277883174547"]
fffb9ec12b793333bd7cfcce560c9b07    ["8263002218220275926","2138150099912110667","4071884773946245995","5541437709571993377","8971685400849575920","6791501415869495709","4030645114351224053","7824147602678726034","1858019139892144135","8672979883967043","3243308275299624497","4038651698269692465","4958122213520581612","24548014429189821","9115985122903434966","9177323417497387094"]
fffda083760eb42351312e86f8d64b9c    ["6854301870738144568","1751628725684446258","5624054192153814756","3802652622543393612","1422277181850096369","967576600204078483","307380727982827440","607763377689917098","8667884942683601898","6520915705449976989","6127531528193542176","67547043701846459"]
...
Time taken: 58.193 seconds, Fetched: 40092 row(s)

Project 2

Table 1:
orders.csv
order_id: order id
user_id: user id
eval_set
order_number: the sequence number of the user's orders
(think of roughly 10 carts: a phone order, a slippers order, and so on)
order_dow: day of the week the order was placed
order_hour_of_day: hour of the day in which the order was placed
days_since_prior_order: days since the previous order

Table 2: order_products__prior.csv
order_id: order id
product_id: product id
add_to_cart_order: position in which the product was added to the cart
reordered: whether the product has been ordered before

Task 1: Create Hive tables for orders and product

Create the orders table:

CREATE TABLE `orders`(`order_id` string, `user_id` string, `eval_set` string, `order_number` string, `order_dow` string, `order_hour_of_day` string, `days_since_prior_order` string)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' tblproperties("skip.header.line.count"="1");

Create the product (priors) table:

CREATE TABLE `priors`(`order_id` string, `product_id` string, `add_to_cart_order` string, `reordered` string)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' tblproperties("skip.header.line.count"="1");

Then load the data:

load data local inpath '/usr/local/hive_test/orders.csv' into table orders;
load data local inpath '/usr/local/hive_test/products.csv' into table priors;

Results:

hive> select * from orders limit 10;
OK
order_id    user_id eval_set    order_number    order_dow   order_hour_of_day     days_since_prior_order
2539329 1   prior   1   2   08
2398795 1   prior   2   3   07  15.0
473747  1   prior   3   3   12  21.0
2254736 1   prior   4   4   07  29.0
431534  1   prior   5   4   15  28.0
3367565 1   prior   6   2   07  19.0
550135  1   prior   7   1   09  20.0
3108588 1   prior   8   1   14  14.0
2295261 1   prior   9   1   16  0.0
2550362 1   prior   10  4   08  30.0
Time taken: 0.068 seconds, Fetched: 10 row(s)
hive> select * from priors limit 10;
OK
order_id    product_id  add_to_cart_order   reordered
2   33120   1   1
2   28985   2   1
2   9327    3   0
2   45918   4   1
2   30035   5   0
2   17794   6   1
2   40141   7   1
2   1819    8   1
2   43668   9   0
3   33754   1   1
Time taken: 0.05 seconds, Fetched: 10 row(s)

Task 2: Count how many orders each user has

Approach: "for each user" means grouping and ranking by user; "how many orders" I first read as summing order_number for each user id.
Step 1: group and rank by user:

select user_id,row_number()
over(partition by user_id) as rk from orders;

Step 2: GROUP BY user and sum:

SELECT t.user_id, sum( order_number ) AS sumprdt
FROM ( SELECT user_id, order_number, row_number() over ( PARTITION BY user_id ORDER BY order_number ) AS rk FROM orders ) t
GROUP BY user_id limit 10;

Execution result:

user_id  sumprdt
1       66.0
10      21.0
100     21.0
1000    36.0
10000   2701.0
100000  55.0
100001  2278.0
100002  91.0
100003  10.0
100004  45.0
Time taken: 40.581 seconds, Fetched: 10 row(s)

The numbers look plausible.

I am not sure whether order_number refers to the number of products or to the order sequence number, so a question mark stays here.
Alternatively, if I misread the task and an "order" simply means one order_id, the SQL would be:

SELECT t.user_id, max( t.rk ) AS sumorder
FROM ( SELECT user_id, order_number, row_number() over ( PARTITION BY user_id ORDER BY order_number ) AS rk FROM orders ) t
GROUP BY user_id limit 10;

Or simply count, which gives the same result:

select user_id,count(order_id) as sumorder from orders group by user_id;

This version should run a bit more efficiently.
The results:

user_id  sumorder
1       11
10      6
100     6
1000    8
10000   73
100000  10
100001  67
100002  13
100003  4
100004  9
Time taken: 38.904 seconds, Fetched: 10 row(s)

Task 3: Average number of products per order for each user.

Analysis: the previous task already produced the product totals and the order counts, so that task is effectively the working for this one: join the two queries as derived tables and take the average. Reusing the SQL from above, step 1 is the total number of products per user:

SELECT t1.user_id, sum( order_number ) AS sumprdt
FROM ( SELECT user_id, order_number, row_number() over ( PARTITION BY user_id ORDER BY order_number ) AS rk FROM orders ) t1
GROUP BY user_id;

Step 2: total orders per user:

select user_id,count(order_id) as sumorder from orders group by user_id;

Step 3: left-join the two queries on user_id and compute the average:

SELECT t3.user_id, t3.sumprdt / t4.sumorder AS avgorder
FROM (
  SELECT t1.user_id, sum( t1.order_number ) AS sumprdt
  FROM ( SELECT user_id, order_number, row_number() over ( PARTITION BY user_id ORDER BY order_number ) AS rk FROM orders ) t1
  GROUP BY user_id
) t3
LEFT JOIN ( SELECT user_id, count( order_id ) AS sumorder FROM orders GROUP BY user_id ) t4 ON t3.user_id = t4.user_id;

Final execution result:

user_id  avgorder
1       6.0
10      3.5
100     3.5
1000    4.5
10000   37.0
100000  5.5
100001  34.0
100002  7.0
100003  2.5
100004  5.0
Time taken: 97.515 seconds, Fetched: 10 row(s)

Summary: on reflection the runtime was too long. Checking the SQL, both parts group and rank by user_id, so there is no reason to run two separate queries for the order count and the product total. The revised SQL:

SELECT t1.user_id, sum( t1.order_number ) AS sumprdt, max( t1.rk ) AS sumorder, sum( t1.order_number ) / max( t1.rk ) AS avgorder
FROM ( SELECT user_id, order_number, row_number() over ( PARTITION BY user_id ORDER BY order_number ) AS rk FROM orders ) t1
GROUP BY user_id;

Results:

user_id  sumprdt   sumorder    avgorder
1       66.0        11          6.0
10      21.0        6           3.5
100     21.0        6           3.5
1000    36.0        8           4.5
10000   2701.0      73          37.0
100000  55.0        10          5.5
100001  2278.0      67          34.0
100002  91.0        13          7.0
100003  10.0        4           2.5
100004  45.0        9           5.0
Time taken: 40.01 seconds, Fetched: 10 row(s)

Summary: the runtime is halved, and only two MapReduce jobs are needed.

Task 4: Distribution of each user's orders over the days of the week

Approach: CASE WHEN is enough to show the distribution, treating order_dow = 1 as Monday and so on (an aggregated variant is sketched after the results).
The SQL:

hive> select user_id, order_number,
    > case when(order_dow=1) then 1 else 0 end as day01,
    > case when(order_dow=2) then 1 else 0 end as day02,
    > case when(order_dow=3) then 1 else 0 end as day03,
    > case when(order_dow=4) then 1 else 0 end as day04,
    > case when(order_dow=5) then 1 else 0 end as day05,
    > case when(order_dow=6) then 1 else 0 end as day06,
    > case when(order_dow=7) then 1 else 0 end as day07 from orders limit 10;
OK
user_id order_number    day01   day02   day03   day04   day05   day06   day07
1   1   0   1   0   0   0   0   0
1   2   0   0   1   0   0   0   0
1   3   0   0   1   0   0   0   0
1   4   0   0   0   1   0   0   0
1   5   0   0   0   1   0   0   0
1   6   0   1   0   0   0   0   0
1   7   1   0   0   0   0   0   0
1   8   1   0   0   0   0   0   0
1   9   1   0   0   0   0   0   0
1   10  0   0   0   1   0   0   0
Time taken: 0.138 seconds, Fetched: 10 row(s)
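
The query above emits one indicator row per order; to turn it into one distribution row per user, the same CASE WHEN columns can be wrapped in sum() under a GROUP BY. A minimal sketch, keeping the 1-7 buckets used above (whether order_dow actually ranges 1-7 or 0-6 should be checked against the data):

SELECT user_id,
       sum( case when order_dow = 1 then 1 else 0 end ) AS day01,
       sum( case when order_dow = 2 then 1 else 0 end ) AS day02,
       sum( case when order_dow = 3 then 1 else 0 end ) AS day03,
       sum( case when order_dow = 4 then 1 else 0 end ) AS day04,
       sum( case when order_dow = 5 then 1 else 0 end ) AS day05,
       sum( case when order_dow = 6 then 1 else 0 end ) AS day06,
       sum( case when order_dow = 7 then 1 else 0 end ) AS day07
FROM orders
GROUP BY user_id
LIMIT 10;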

Task 5: Average number of products a user buys per month (30-day months).

Approach: "a user" again means grouping; "how many products per month on average" means averaging: first work out products per day as total products divided by total days, then multiply by 30 to get products per month. The SQL:

SELECT t.user_id, sum( t.days_since_prior_order ) AS sumdays, sum( t.order_number ) AS sumordernum, ( sum( t.days_since_prior_order ) / sum( t.order_number ) ) * 30 AS monthorder
FROM ( SELECT user_id, order_number, days_since_prior_order, row_number() over ( PARTITION BY user_id ORDER BY order_number ) AS rk FROM orders ) AS t
GROUP BY t.user_id;

Final results:

Total MapReduce CPU Time Spent: 11 seconds 470 msec
OK
user_id sumdays sumordernum monthorder
1       190.0   66.0    86.36363636363637
10      109.0   21.0    155.71428571428572
100     134.0   21.0    191.42857142857144
1000    94.0    36.0    78.33333333333333
10000   326.0   2701.0  3.6208811551277305
100000  223.0   55.0    121.63636363636364
100001  363.0   2278.0  4.780509218612818
100002  218.0   91.0    71.86813186813187
100003  14.0    10.0    42.0
100004  153.0   45.0    102.0

Summary: much the same type of task as the earlier products-per-order one: sum first, then divide.

However, the teacher's answer is:

select t2.user_id, sum(t1.prod_cnt) as sumprdt,
       sum(cast(if(days_since_prior_order='',0.0,days_since_prior_order) as float))/30.0 as mon_cnt,
       sum(t1.prod_cnt)/sum(cast(if(days_since_prior_order='',0.0,days_since_prior_order) as float)/30.0) as per_mon_prod_cnt
from orders t2
join (select order_id, count(1) as prod_cnt from priors group by order_id) t1
on t1.order_id = t2.order_id
group by t2.user_id
limit 10;

Its execution results:

user_id  sumprdt mon_cnt per_mon_prod_cnt
1000    1   0.23333333333333334 4.285714285714286
10000   1   0.03333333333333333 30.0
100006  1   0.13333333333333333 7.5
100010  1   0.4 2.5
100013  1   0.06666666666666667 15.0
100015  1   0.23333333333333334 4.285714285714286
100021  1   0.03333333333333333 30.0
100024  1   0.0 NULL
100031  1   0.2 5.0
100032  1   0.2 5.0
Time taken: 51.223 seconds, Fetched: 10 row(s)

Project 3

Copying the code over and packaging it produced an error:

Could not transfer artifact org.pentaho:pentaho-aggdesigner-algorithm:pom:5.1.5-jhyde from/to spring

On top of that, the Maven dependencies refused to download. I checked the Maven repository path and changed it to my own path, but it still did not work; from what I could find online, the cause was the Maven mirror configuration.

Opening the settings file showed that the Aliyun mirror entry had been placed in the wrong spot; after moving it inside the <mirrors> element the build succeeded and the error was gone.

1. Determine whether the timestamp in the movies table is a weekend and append it as the last column (UDF)

Approach: implement the weekend check as a UDF. In Java, Calendar handles this directly. The Java code:

package udf;

import org.apache.hadoop.hive.ql.exec.UDF;

import java.text.SimpleDateFormat;
import java.util.Calendar;

/**
 * @author liqinglin
 */
public class UDFIsWeekend extends UDF {
    public static String evaluate(String ts) {
        long timestamp = Long.parseLong(ts);
        SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
        // Message to return
        String returnMsg = "";
        // Create a calendar and load the timestamp into it
        Calendar calendar = Calendar.getInstance();
        calendar.setTimeInMillis(timestamp);
        // DAY_OF_WEEK identifies the day within the week; Saturday and Sunday count as the weekend
        if (calendar.get(Calendar.DAY_OF_WEEK) == Calendar.SATURDAY || calendar.get(Calendar.DAY_OF_WEEK) == Calendar.SUNDAY) {
            returnMsg = "周末";
        } else {
            returnMsg = "工作日";
        }
        // Print the formatted date for manual inspection
        System.out.println(sdf.format(timestamp));
        return returnMsg;
    }

    public static void main(String[] args) {
        System.out.println(evaluate("875071561"));
    }
}

Output of main:

1970-01-11 11:04:31
周末

So far so good. Next run maven clean and package, then add the jar inside Hive:

add jar /usr/local/src/hive/udf/hiveudf-3.0-SNAPSHOT.jar;

Then create the temporary function:

create temporary function isweekend as 'udf.UDFIsWeekend';

At this point I assumed it would work and ran a query to verify:

hive> select isweekend('1688566003');
FAILED: SemanticException [Error 10014]: Line 1:7 Wrong arguments ''1688566003'': No matching method for class udf.UDFIsWeekend with (string). Possible choices:

My first thought was that the function did not exist, yet it had been created successfully. Searching the error message led nowhere, until I remembered I had been adding essentially the same jar over and over, so it was probably a jar conflict. On a hunch I looked up how to remove the jars:

delete jar /usr/local/src/hive/udf/hiveudf-1.0-SNAPSHOT.jar;
delete jar /usr/local/src/hive/udf/hiveudf-2.0-SNAPSHOT.jar;
delete jar /usr/local/src/hive/udf/hiveudf-3.0-SNAPSHOT.jar;

After deleting the three jars one by one, I added the jar again, recreated the temporary function, and re-ran the query to check the UDF:

hive> select isweekend('1688567299');
1970-01-20 08:02:47
1970-01-20 08:02:47
OK
_c0
工作日

My own debug output was too noisy, so I repackaged, re-added the jar, and ran it against the table. The result:

hive> select user_id,movie_id,score,ts,isweekend(ts) as isweekend from movies limit 10;
OK
user_id movie_id    score   ts  isweekend
1   1   5   874965758   周末
1   2   3   876893171   周末
1   3   4   878542960   周末
1   4   3   876893119   周末
1   5   3   889751712   周末
1   6   5   887431973   周末
1   7   4   875071561   周末
1   8   1   875072484   周末
1   9   5   878543541   周末
1   10  3   875693118   周末

Summary: the timestamps all map to the same day, which happens to be a weekend, so the output is consistent. The troubleshooting, however, wasted far too much time: I could not work out why the jar conflict caused that error, since the code looked fine and main ran correctly, and most of the time went into chasing that.
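
As a side note, a quick way to sanity-check a timestamp's day of week directly in Hive is the built-in from_unixtime, which interprets its argument as seconds; a minimal sketch (the 'EEEE' pattern is the standard SimpleDateFormat day-of-week name):

select ts, from_unixtime(cast(ts as bigint), 'EEEE') as day_of_week from movies limit 5;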

2. Implement edit distance in Hive (UDF)

Approach: compute the minimum edit distance between the two strings; (maxLength - minDistance) / maxLength then gives the similarity.

package udf.homework;

import org.apache.hadoop.hive.ql.exec.UDF;

/**
 * @author liqinglin
 */
public class UDFEditDistance extends UDF {
    public static int getMinDistance(String sourceStr, String targetStr) {
        int sourceLength = sourceStr.length();
        int targetLength = targetStr.length();
        // 2-D array holding the distance matrix
        int[][] distancesArr = new int[sourceLength + 1][targetLength + 1];
        // Walk both strings
        for (int i = 1; i < sourceLength + 1; i++) {
            Character sourceChar = sourceStr.charAt(i - 1);
            for (int j = 1; j < targetLength + 1; j++) {
                Character targetChar = targetStr.charAt(j - 1);
                // Compute the minimum distance
                if (sourceChar.equals(targetChar)) {
                    // Characters match: carry the diagonal value over
                    distancesArr[i][j] = distancesArr[i - 1][j - 1];
                } else {
                    // Characters differ: take the smallest of the three neighbours plus one
                    distancesArr[i][j] = Math.min(Math.min(distancesArr[i][j - 1], distancesArr[i - 1][j]), distancesArr[i - 1][j - 1]) + 1;
                }
            }
        }
        System.out.println("----------矩阵打印---------------");
        // Print the matrix
        for (int i = 0; i < sourceLength + 1; i++) {
            for (int j = 0; j < targetLength + 1; j++) {
                System.out.print(distancesArr[i][j] + "\t");
            }
            System.out.println();
        }
        System.out.println("----------矩阵打印---------------");
        return distancesArr[sourceLength][targetLength];
    }

    public static String evaluate(String str1, String str2) {
        double minDistance = getMinDistance(str1, str2);
        double maxDistance = (Math.max(str1.length(), str2.length()));
        double getSimilarity = (maxDistance - minDistance) / maxDistance;
        return String.valueOf(getSimilarity);
    }

    public static void main(String[] args) {
        String a = "我想买手机";
        String b = "我想买苹果手机";
        String c = "我爱大数据";
        System.out.println(evaluate(a, c));
    }
}

Output of main:

----------矩阵打印---------------
0   0   0   0   0   0
0   0   1   1   1   1
0   1   1   2   2   2
0   1   2   2   3   3
0   1   2   3   3   4
0   1   2   3   4   4
----------矩阵打印---------------
0.2

Running it in Hive:

hive> select editdis('editDis','ilovedis') as mindistance;
OK
mindistance
0.625
Time taken: 0.098 seconds, Fetched: 1 row(s)
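
For a cross-check of the raw distance itself, Hive 1.2.0 and later also ship a built-in levenshtein() function; a minimal sketch, assuming it is available in the installed build:

select levenshtein('editDis', 'ilovedis');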

3. Implement Wilson smoothing in Hive (UDF)

Approach: implement Wilson smoothing as a UDF. The formula is as follows:
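
The Wilson score lower bound, as computed by the UDF code below (with \hat{p} the observed click-through rate, z the confidence-level z-value, and n the number of exposures):

score = \frac{\hat{p} + \frac{z^2}{2n} - z\sqrt{\frac{\hat{p}(1-\hat{p}) + \frac{z^2}{4n}}{n}}}{1 + \frac{z^2}{n}}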

Here p can be read as the click-through rate, z is the value of the chosen confidence level on the normal distribution, and n is the sample size (i.e. the exposure count). Once it is clear what each variable stands for, the code is easy to write:

public Double evaluate(int exposureNum, int clickNum) {
    if (exposureNum * clickNum == 0 || exposureNum < clickNum) {
        return 0.;
    }
    double score;
    double z = 1.96f;
    int n = exposureNum;
    double p = 1.0f * clickNum / exposureNum;
    score = (p + z*z/(2.f*n) - z*sqrt((p*(1.0f - p) + z*z/(4.f*n))/n)) / (1.f + z*z/n);
    return score;
}

Execution results:

hive> select wilson(10,1);
OK
_c0
0.017875749482468388
Time taken: 0.496 seconds, Fetched: 1 row(s)
hive> select wilson(10000,1000);
OK
_c0
0.09427272906089298
Time taken: 0.107 seconds, Fetched: 1 row(s)
