Integrating Hadoop with Kerberos
Node | IP | Process | User
---|---|---|---
master | 192.168.1.115 | NameNode | root
slave1 | 192.168.1.116 | DataNode | root
kdcserver | 192.168.1.118 | kdc, kadmin | root
Configuration on the kdcserver node
Install Kerberos
See: http://www.fanlegefan.com/archives/kerberosinstall
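For reference, on a CentOS/RHEL host the packages are typically installed as follows (package names are an assumption; the linked post has the full walkthrough):

# On kdcserver: KDC and admin server
yum install -y krb5-server krb5-libs krb5-workstation
# On master and slave1: client tools only
yum install -y krb5-workstation krb5-libs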
Configure kdc.conf
vi /var/kerberos/krb5kdc/kdc.conf

[kdcdefaults]
 kdc_ports = 88
 kdc_tcp_ports = 88

[realms]
 FAN.HADOOP = {
  acl_file = /var/kerberos/krb5kdc/kadm5.acl
  dict_file = /usr/share/dict/words
  admin_keytab = /var/kerberos/krb5kdc/kadm5.keytab
  database_name = /var/kerberos/principal
  max_renewable_life = 7d
  supported_enctypes = aes256-cts:normal aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal
 }
Configure krb5.conf
vi /etc/krb5.conf

[logging]
 default = FILE:/var/log/krb5libs.log
 kdc = FILE:/var/log/krb5kdc.log
 admin_server = FILE:/var/log/kadmind.log

[libdefaults]
 default_realm = FAN.HADOOP
 dns_lookup_realm = false
 dns_lookup_kdc = false
 ticket_lifetime = 24h
 renew_lifetime = 7d
 forwardable = true

[realms]
 FAN.HADOOP = {
  kdc = kdcserver
  admin_server = kdcserver
 }

[domain_realm]
 .FAN.hadoop = FAN.HADOOP
 FAN.hadoop = FAN.HADOOP
Initialize the database
kdb5_util create -s -r FAN.HADOOP
Add a database administrator
kadmin.local -q "addprinc admin/admin"
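With the database and the admin principal in place, start the KDC daemons so the realm actually serves tickets (a sketch for a SysV-init CentOS box, which is assumed here):

service krb5kdc start
service kadmin start
# Start automatically on boot
chkconfig krb5kdc on
chkconfig kadmin on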
Add principals and export keytabs
kadmin.local -q "addprinc -randkey hdfs/kdcserver@FAN.HADOOP"
kadmin.local -q "addprinc -randkey hdfs/master@FAN.HADOOP"
kadmin.local -q "addprinc -randkey hdfs/slave1@FAN.HADOOP"
kadmin.local -q "addprinc -randkey HTTP/kdcserver@FAN.HADOOP"
kadmin.local -q "addprinc -randkey HTTP/master@FAN.HADOOP"
kadmin.local -q "addprinc -randkey HTTP/slave1@FAN.HADOOP"
kadmin.local -q "addprinc -randkey qun/kdcserver@FAN.HADOOP"
kadmin.local -q "addprinc -randkey qun/master@FAN.HADOOP"
kadmin.local -q "addprinc -randkey qun/slave1@FAN.HADOOP"
kadmin.local -q "xst -k hdfs-unmerged.keytab hdfs/kdcserver@FAN.HADOOP"
kadmin.local -q "xst -k hdfs-unmerged.keytab hdfs/master@FAN.HADOOP"
kadmin.local -q "xst -k hdfs-unmerged.keytab hdfs/slave1@FAN.HADOOP"
kadmin.local -q "xst -k HTTP.keytab HTTP/kdcserver@FAN.HADOOP"
kadmin.local -q "xst -k HTTP.keytab HTTP/master@FAN.HADOOP"
kadmin.local -q "xst -k HTTP.keytab HTTP/slave1@FAN.HADOOP"
kadmin.local -q "xst -k qun.keytab qun/kdcserver@FAN.HADOOP"
kadmin.local -q "xst -k qun.keytab qun/master@FAN.HADOOP"
kadmin.local -q "xst -k qun.keytab qun/slave1@FAN.HADOOP"合并
$ ktutil
ktutil: rkt hdfs-unmerged.keytab
ktutil: rkt HTTP.keytab
ktutil: rkt qun.keytab
ktutil: wkt hdfs.keytab
ktutil: exit
Running the commands above produces an hdfs.keytab file. Copy it to the master and slave1 nodes, and also copy /etc/krb5.conf into the /etc/ directory on master and slave1.
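A minimal way to verify and distribute the merged keytab (destination paths assumed to match the configuration below):

# Verify the merged keytab contains all the principals
klist -kt hdfs.keytab
# Distribute the keytab and the Kerberos client config
scp hdfs.keytab master:/home/qun/hadoop-2.6.0/etc/hadoop/
scp hdfs.keytab slave1:/home/qun/hadoop-2.6.0/etc/hadoop/
scp /etc/krb5.conf master:/etc/
scp /etc/krb5.conf slave1:/etc/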
Configuration on the master and slave1 nodes
Hadoop Kerberos configuration
core-site.xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/qun/data/hadoop/tmp</value>
  </property>
  <property>
    <name>fs.checkpoint.period</name>
    <value>3600</value>
  </property>
  <property>
    <name>fs.checkpoint.size</name>
    <value>67108864</value>
  </property>
  <property>
    <name>fs.checkpoint.dir</name>
    <value>/home/qun/data/hadoop/namesecondary</value>
  </property>
  <property>
    <name>hadoop.security.authentication</name>
    <value>kerberos</value>
  </property>
  <property>
    <name>hadoop.security.authorization</name>
    <value>true</value>
  </property>
</configuration>
hdfs-site.xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/home/qun/data/hadoop/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/home/qun/data/hadoop/data</value>
  </property>
  <property>
    <name>dfs.block.access.token.enable</name>
    <value>true</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir.perm</name>
    <value>700</value>
  </property>
  <property>
    <name>dfs.namenode.keytab.file</name>
    <value>/home/qun/hadoop-2.6.0/etc/hadoop/hdfs.keytab</value>
  </property>
  <property>
    <name>dfs.namenode.kerberos.principal</name>
    <value>hdfs/_HOST@FAN.HADOOP</value>
  </property>
  <property>
    <name>dfs.namenode.kerberos.https.principal</name>
    <value>HTTP/_HOST@FAN.HADOOP</value>
  </property>
  <property>
    <name>dfs.datanode.address</name>
    <value>0.0.0.0:1004</value>
  </property>
  <property>
    <name>dfs.datanode.http.address</name>
    <value>0.0.0.0:1006</value>
  </property>
  <property>
    <name>dfs.datanode.keytab.file</name>
    <value>/home/qun/hadoop-2.6.0/etc/hadoop/hdfs.keytab</value>
  </property>
  <property>
    <name>dfs.datanode.kerberos.principal</name>
    <value>hdfs/_HOST@FAN.HADOOP</value>
  </property>
  <property>
    <name>dfs.datanode.kerberos.https.principal</name>
    <value>HTTP/_HOST@FAN.HADOOP</value>
  </property>
  <property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>dfs.web.authentication.kerberos.principal</name>
    <value>HTTP/_HOST@FAN.HADOOP</value>
  </property>
  <property>
    <name>dfs.web.authentication.kerberos.keytab</name>
    <value>/home/qun/hadoop-2.6.0/etc/hadoop/hdfs.keytab</value>
  </property>
  <property>
    <name>dfs.namenode.kerberos.internal.spnego.principal</name>
    <value>HTTP/_HOST@FAN.HADOOP</value>
  </property>
  <property>
    <name>dfs.secondary.https.address</name>
    <value>master:50495</value>
  </property>
  <property>
    <name>dfs.secondary.https.port</name>
    <value>50495</value>
  </property>
  <property>
    <name>dfs.secondary.namenode.keytab.file</name>
    <value>/home/qun/hadoop-2.6.0/etc/hadoop/hdfs.keytab</value>
  </property>
  <property>
    <name>dfs.secondary.namenode.kerberos.principal</name>
    <value>hdfs/_HOST@FAN.HADOOP</value>
  </property>
  <property>
    <name>dfs.secondary.namenode.kerberos.https.principal</name>
    <value>hdfs/_HOST@FAN.HADOOP</value>
  </property>
</configuration>
Install JSVC
Download
http://commons.apache.org/proper/commons-daemon/jsvc.html
Build
cd /home/qun/commons-daemon-1.1.0-src/src/native/unix
./configure --with-java=$JAVA_HOME
make
This produces a 64-bit jsvc executable. Copy it to $HADOOP_HOME/libexec and point JSVC_HOME at that path in hadoop-env.sh; otherwise startup fails with "It looks like you're trying to start a secure DN, but $JSVC_HOME isn't set. Falling back to starting insecure DN."
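A sketch of the copy step, assuming the build ran in the directory above and HADOOP_HOME is set:

cp /home/qun/commons-daemon-1.1.0-src/src/native/unix/jsvc $HADOOP_HOME/libexec/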
Package
cd /home/qun/commons-daemon-1.1.0-src
mvn package
This builds commons-daemon-1.1.0.jar. Copy it into $HADOOP_HOME/share/hadoop/hdfs/lib and remove the commons-daemon jar that ships with Hadoop.
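For example (the exact version of the bundled jar may differ, so the glob is an assumption):

# Remove the bundled commons-daemon jar first, then install the new one
rm $HADOOP_HOME/share/hadoop/hdfs/lib/commons-daemon-*.jar
cp target/commons-daemon-1.1.0.jar $HADOOP_HOME/share/hadoop/hdfs/lib/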
Configure JCE
Download
http://www.oracle.com/technetwork/java/javase/downloads/jce8-download-2133166.html
Replace the jars under $JAVA_HOME/jre/lib/security/ with the extracted US_export_policy.jar and local_policy.jar.
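A sketch, assuming the JCE zip was extracted into the current directory:

cp US_export_policy.jar local_policy.jar $JAVA_HOME/jre/lib/security/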
Configure hadoop-env.sh
export JAVA_HOME=/home/qun/soft/jdk1.8.0_91
export HADOOP_SECURE_DN_USER=qun
export HADOOP_SECURE_DN_PID_DIR=/home/qun/hadoop-2.6.0/pids
export HADOOP_SECURE_DN_LOG_DIR=/home/qun/hadoop-2.6.0/logs
export JSVC_HOME=/home/qun/hadoop-2.6.0/libexec
Start the NameNode and DataNode
Start the NameNode and SecondaryNameNode
./hadoop-daemon.sh start namenode
./hadoop-daemon.sh start secondarynamenode
Start the DataNode. Because the secure DataNode binds the privileged ports 1004 and 1006, start-secure-dns.sh must be run as root; jsvc then drops privileges to HADOOP_SECURE_DN_USER.
start-secure-dns.sh
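Once HDFS is up, a quick check that Kerberos is actually enforced is to obtain a ticket and list the root directory (principal and keytab path assumed from this setup); without a valid ticket the same command should fail with a GSS error:

kinit -k -t /home/qun/hadoop-2.6.0/etc/hadoop/hdfs.keytab hdfs/master@FAN.HADOOP
hdfs dfs -ls /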
Configure YARN
vi yarn-site.xml

<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>master:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>master:8031</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>master:8033</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>master:8030</value>
  </property>
  <property>
    <name>yarn.web-proxy.address</name>
    <value>master:8888</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.resourcemanager.keytab</name>
    <value>/home/qun/hadoop-2.6.0/etc/hadoop/hdfs.keytab</value>
  </property>
  <property>
    <name>yarn.resourcemanager.principal</name>
    <value>hdfs/_HOST@FAN.HADOOP</value>
  </property>
  <property>
    <name>yarn.nodemanager.keytab</name>
    <value>/home/qun/hadoop-2.6.0/etc/hadoop/hdfs.keytab</value>
  </property>
  <property>
    <name>yarn.nodemanager.principal</name>
    <value>hdfs/_HOST@FAN.HADOOP</value>
  </property>
  <property>
    <name>yarn.nodemanager.container-executor.class</name>
    <value>org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor</value>
  </property>
  <property>
    <name>yarn.nodemanager.linux-container-executor.group</name>
    <value>yarn</value>
  </property>
</configuration>
Edit container-executor.cfg
yarn.nodemanager.linux-container-executor.group=yarn
banned.users=root,nobody,impala,hive,hdfs,yarn,bin,qun
min.user.id=1000
allowed.system.users=root,nobody,impala,hive,hdfs,yarn,qun
Start YARN
start-yarn.sh
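A simple smoke test is to run one of the bundled example jobs (jar path assumed for the 2.6.0 layout):

hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar pi 2 10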
Problems encountered while configuring YARN
container-executor.cfg must be owned by root, but is owned by 500
Solution:
Download the Hadoop source and rebuild container-executor; the binary is generated under target/usr/local/bin/. Copy container-executor to $HADOOP_HOME/bin/.
cd hadoop-2.6.0-src/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/
cmake src -DHADOOP_CONF_DIR=/etc/hadoop
make
Note: container-executor.cfg must also be copied into /etc/hadoop, the HADOOP_CONF_DIR that was compiled into the binary; see the sketch below.
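A sketch of the copy steps after the build, using the paths above:

cp target/usr/local/bin/container-executor $HADOOP_HOME/bin/
cp $HADOOP_HOME/etc/hadoop/container-executor.cfg /etc/hadoop/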
See also: http://blog.csdn.net/lipeng_bigdata/article/details/52687821
Caused by: ExitCodeException exitCode=24: Can't get group information for yarn - Success.
Exit code from container executor initialization is : 22 ExitCodeException exitCode=22: Invalid permissions on container-executor binary
Solution:
chown root:yarn container-executor
chmod 6050 container-executor
After the change, the binary looks like this:
---Sr-s---. 1 root yarn 108276 Dec 30 19:05 container-executor
Login failure for hdfs/master@FAN.HADOOP from keytab /home/qun/hadoop-2.6.0/etc/hadoop/hdfs.keytab: javax.security.auth.login.LoginException: Unable to obtain password from user
Besides a problem with the Kerberos principal itself, this error can also be caused by the permissions on the hdfs.keytab file. hadoop-env.sh sets HADOOP_SECURE_DN_USER=qun, so hdfs.keytab must be readable by the qun user. The simplest check is to switch to the qun account and run kinit -k -t hdfs.keytab hdfs/master@FAN.HADOOP to see whether it reports a permission error.
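For example (keytab path assumed from this setup):

su - qun
kinit -k -t /home/qun/hadoop-2.6.0/etc/hadoop/hdfs.keytab hdfs/master@FAN.HADOOP
klist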
Key points and pitfalls
Integrating Hadoop with Kerberos is quite fiddly; a few points need particular attention:
- JCE配置
- keytab文件权限
- JSVC_HOME
- container-executor
References
- http://blog.javachen.com/2014/11/04/config-kerberos-in-cdh-hdfs.html
- http://blog.chinaunix.net/uid-1838361-id-3243243.html
- http://blog.csdn.net/lalaguozhe/article/details/11570009
- http://blog.csdn.net/liliwei0213/article/details/40656455
- http://blog.csdn.net/lipeng_bigdata/article/details/52687821
- http://www.cloudera.com/documentation/cdh/5-0-x/CDH5-Security-Guide/cdh5sg_yarn_container_exec_errors.html