For the shell script you only need to prepare two directories under /opt: install, which holds the downloaded tarballs, and soft, which holds the extracted files.
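
A minimal sketch of that layout (run as root; the jdk tarball is just one example — copy in every archive the script below expects):

mkdir -p /opt/install /opt/soft
cp jdk-8u111-linux-x64.tar.gz /opt/install/    # repeat for each tarball used below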

Then run it: create a file anywhere, copy the script into it, grant permissions, and execute it.

Grant permissions: chmod 777 ./aa.sh — aa.sh here is just my example name (-R is only needed for directories).

Change every IP address inside the script to your own. Also make sure your VM's hostname is set properly; otherwise you will have to replace every $hostname in the script with your IP address.
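
If the hostname still needs setting, a minimal sketch (singlenode and 192.168.113.202 are placeholders — substitute your own name and IP):

hostnamectl set-hostname singlenode                 # CentOS 7+ / any systemd distro
echo "192.168.113.202 singlenode" >> /etc/hosts     # make the name resolve to your IP
hostname                                            # verify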

Notes:

1. Depending on your tarball versions, the leading part of some environment-variable paths may differ (for example if an archive unpacks to a different directory name) — adjust them accordingly.

2. This version is for a single-node setup only; it will not work on a multi-node cluster and will throw errors there.

3. Before running the script, MySQL must already be installed with remote connections enabled and the system environment variables in place. Also remember to add the variable parameters on the Windows side, just as you would for a Java JDK installation.
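
A minimal sketch of enabling remote access for the account the script uses — an assumption based on the root/123456 credentials written into hive-site.xml below, MySQL 5.x syntax:

mysql -uroot -p123456 -e "GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY '123456'; FLUSH PRIVILEGES;"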

4. After the script finishes, set up passwordless SSH login yourself.
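
For a single node that usually amounts to something like this (a sketch, assuming the root user that runs the script):

ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa    # key pair with no passphrase
ssh-copy-id root@$(hostname)                # authorize it on this host
ssh $(hostname) "echo ok"                   # should print ok with no password prompt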

Common problems:

If you get file-not-found errors: avoid running the script in two windows at the same time. If it still fails in a single window, run one piece at a time — for example, enable only the JDK install, confirm it succeeds, and then run everything else. Every if block can be run on its own; see the flag sketch below.
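
For example, to test only the JDK step, set the switches at the top of the script like this (the variable names match the script that follows):

jdk=true
hadoop=false
zookeeper=false
hive=false
zeeplin=false
sqoop=false
hbase=false
scala=false
spark=false
flume=false
kafka=false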

Another source of errors, as mentioned above: make sure your hostname is set correctly, and double-check that the IP address matches it.

#!/bin/bash

# component switches: set to false to skip a block
jdk=true
hadoop=true
zookeeper=true
hive=true
zeeplin=true
sqoop=true
hbase=true
scala=true
spark=true
flume=true
kafka=true

hostname=`hostname`
echo "current host name is $hostname"
whoami=`whoami`
echo "current user is $whoami"installdir=/opt/soft
if [ ! -d "$installdir" ]; thenmkdir $installdir
fiif [ "$jdk" = true ]; thenecho "---------安装jdk----------- "tar -zxf /opt/install/jdk-8u111-linux-x64.tar.gz -C /opt/soft/mv /opt/soft/jdk1.8.0_111 /opt/soft/jdk180echo "#jdk" >>/etc/profileecho "export JAVA_HOME=/opt/soft/jdk180" >>/etc/profileecho "export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar">>/etc/profileecho "export PATH=$PATH:$JAVA_HOME/bin" >> /etc/profile
source /etc/profilefiif [ "$hadoop" = true ];then
echo "------------ 安装hadoop ----- "
tar -zxf /opt/install/hadoop-2.6.0-cdh5.14.2.tar.gz  -C  /opt/soft/
mv /opt/soft/hadoop-2.6.0-cdh5.14.2 /opt/soft/hadoop260
echo "------------------- edit hadoop-env.sh -------"
sed -i "/^export JAVA_HOME=/cexport JAVA_HOME=$JAVA_HOME" /opt/soft/hadoop260/etc/hadoop/hadoop-env.sh
echo "------------------- edit mapred-env.sh -------"
sed -i "/^# export JAVA_HOME=/cexport JAVA_HOME=$JAVA_HOME" /opt/soft/hadoop260/etc/hadoop/mapred-env.sh
echo "------------------- edit yarn-env.sh -------"
sed -i "/^# export JAVA_HOME=/cexport JAVA_HOME=$JAVA_HOME" /opt/soft/hadoop260/etc/hadoop/yarn-env.sh
echo "--------------------- configure core-site.xml --------------"
core_path="/opt/soft/hadoop260/etc/hadoop/core-site.xml"
# each sed inserts one <property> after line 19, i.e. just inside <configuration>
sed -i '19a\<property><name>hadoop.proxyuser.bigdata.groups</name><value>*</value></property>' $core_path
sed -i '19a\<property><name>hadoop.proxyuser.bigdata.hosts</name><value>*</value></property>' $core_path
sed -i '19a\<property><name>hadoop.tmp.dir</name><value>/opt/soft/hadoop260/hadooptmp</value></property>' $core_path
sed -i "19a\<property><name>fs.defaultFS</name><value>hdfs://$hostname:9000</value></property>" $core_path
echo "--------------------- configure hdfs-site.xml --------------"
hdfs_path="/opt/soft/hadoop260/etc/hadoop/hdfs-site.xml"
sed -i "19a\<property><name>dfs.namenode.secondary.http-address</name><value>$hostname:50090</value></property>" $hdfs_path
sed -i '19a\<property><name>dfs.replication</name><value>1</value></property>' $hdfs_path
echo "--------------------- configure mapred-site.xml --------------"
cp /opt/soft/hadoop260/etc/hadoop/mapred-site.xml.template /opt/soft/hadoop260/etc/hadoop/mapred-site.xml
mapred_path="/opt/soft/hadoop260/etc/hadoop/mapred-site.xml"
sed -i "19a\<property><name>mapreduce.jobhistory.webapp.address</name><value>$hostname:19888</value></property>" $mapred_path
sed -i "19a\<property><name>mapreduce.jobhistory.address</name><value>$hostname:10020</value></property>" $mapred_path
sed -i '19a\<property><name>mapreduce.framework.name</name><value>yarn</value></property>' $mapred_path
echo "--------------------- configure yarn-site.xml --------------"
yarn_path="/opt/soft/hadoop260/etc/hadoop/yarn-site.xml"
sed -i '15a\<property><name>yarn.log-aggregation.retain-seconds</name><value>604800</value></property>' $yarn_path
sed -i '15a\<property><name>yarn.log-aggregation-enable</name><value>true</value></property>' $yarn_path
sed -i "15a\<property><name>yarn.resourcemanager.hostname</name><value>$hostname</value></property>" $yarn_path
sed -i '15a\<property><name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name><value>org.apache.hadoop.mapred.ShuffleHandler</value></property>' $yarn_path
sed -i '15a\<property><name>yarn.nodemanager.aux-services</name><value>mapreduce_shuffle</value></property>' $yarn_path
echo "--------------------- configure slaves --------------"
sed -i "s/localhost/$hostname/g" /opt/soft/hadoop260/etc/hadoop/slavesecho "#hadoop" >> /etc/profile
echo 'export HADOOP_HOME=/opt/soft/hadoop260' >> /etc/profile
echo 'export HADOOP_MAPRED_HOME=$HADOOP_HOME' >> /etc/profile
echo 'export HADOOP_COMMON_HOME=$HADOOP_HOME' >> /etc/profile
echo 'export HADOOP_HDFS_HOME=$HADOOP_HOME' >> /etc/profile
echo 'export YARN_HOME=$HADOOP_HOME' >> /etc/profile
echo 'export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native' >> /etc/profile
echo 'export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"' >> /etc/profile
echo 'export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin' >> /etc/profile
source /etc/profile
# format HDFS once, before the first start
hadoop namenode -format
fiif [ "$hive"=true ]; thenecho "-----------安装hive------------"tar -zxf /opt/install/hive-1.1.0-cdh5.14.2.tar.gz -C /opt/softmv /opt/soft/hive-1.1.0-cdh5.14.2 /opt/soft/hive110cp /opt/install/mysql-connector-java-5.1.25.jar /opt/soft/hive110/lib/touch /opt/soft/hive110/conf/hive-site.xmlhive_path="/opt/soft/hive110/conf/hive-site.xml"echo '<?xml version="1.0" encoding="UTF-8" standalone="no"?>' >>$hive_pathecho '<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>'  >>$hive_pathecho '<configuration>'  >>$hive_pathecho "<property><name>javax.jdo.option.ConnectionURL</name><value>jdbc:mysql://$hostname:3306/hive144?createDatabaseIfNotExist=true</value></property>"  >>$hive_pathecho '<property><name>javax.jdo.option.ConnectionDriverName</name><value>com.mysql.jdbc.Driver</value></property>'  >>$hive_pathecho '<property><name>javax.jdo.option.ConnectionUserName</name><value>root</value></property>'  >>$hive_pathecho '<property><name>javax.jdo.option.ConnectionPassword</name><value>123456</value></property>'  >>$hive_pathecho '<property><name>hive.server2.thrift.client.user</name><value>root</value></property>'  >>$hive_pathecho '<property><name>hive.server2.thrift.client.password</name><value>123456</value></property>'  >>$hive_pathecho '</configuration>'  >>$hive_pathecho '#hive' >>/etc/profileecho 'export HIVE_HOME=/opt/soft/hive110'  >>/etc/profileecho 'export PATH=$PATH:$HIVE_HOME/bin'  >>/etc/profilesource /etc/profileschematool -dbType mysql -initSchema
fiif [ "$zookeeper" = true ]; then
echo "--------------- 安装zookeeper ---------"
tar -zxf /opt/install/zookeeper-3.4.5-cdh5.14.2.tar.gz -C /opt/soft
mv /opt/soft/zookeeper-3.4.5-cdh5.14.2 /opt/soft/zookeeper345
cp /opt/soft/zookeeper345/conf/zoo_sample.cfg /opt/soft/zookeeper345/conf/zoo.cfg
sed -i '/^dataDir=/cdataDir=/opt/soft/zookeeper345/datatmp' /opt/soft/zookeeper345/conf/zoo.cfg
echo "server.1=$hostname:2888:3888" >> /opt/soft/zookeeper345/conf/zoo.cfg
mkdir -p /opt/soft/zookeeper345/datatmp/
echo "1"> /opt/soft/zookeeper345/datatmp/myid
echo 'export ZOOKEEPER_HOME=/opt/soft/zookeeper345' >> /etc/profile
echo 'export PATH=$PATH:$ZOOKEEPER_HOME/bin' >> /etc/profile
source /etc/profile
fi

if [ "$zeeplin" = true ]; then
echo "--------------------- install zeppelin ------------------"
tar -zxf /opt/install/zeppelin-0.9.0-preview1-bin-all.tgz -C /opt/soft
mv  /opt/soft/zeppelin-0.9.0-preview1-bin-all/  /opt/soft/zeppelin090
cd  /opt/soft/zeppelin090/conf/
echo "----------成功----"
cp /opt/soft/zeppelin090/conf/zeppelin-site.xml.template  /opt/soft/zeppelin090/conf/zeppelin-site.xml
echo "--------------------修改zeppelin-site.xml------------"
sed -i "/^  <value>127/c<value>192.168.113.202</value>/"  /opt/soft/zeppelin090/conf/zeppelin-site.xml
sed -i "/^  <value>8080/c<value>8000</value>/"  ./zeppelin-site.xml
echo "-------------------修改zeppelin-env.sh.template-----------"
sed -i "/^# export JAVA_HOME/cexport JAVA_HOME=$JAVA_HOME/"  /opt/soft/zeppelin090/conf/zeppelin-env.sh.template
sed -i "19a\export HADOOP_CONF_DIR=/opt/soft/hadoop260/etc/hadoop/"  /opt/soft/zeppelin090/conf/zeppelin-env.sh.templatefiif [ "$hbase" = true ]; thenecho '----------------hbase安装中-------------------'tar -zxf /opt/install/hbase-1.2.0-cdh5.14.2.tar.gz -C /opt/softmv /opt/soft/hbase-1.2.0-cdh5.14.2 /opt/soft/hbase120sed -i '/^# export JAVA_HOME=/cexport JAVA_HOME=/opt/soft/jdk180/' /opt/soft/hbase120/conf/hbase-env.shsed -i '/^# export HBASE_MANAGES_ZK/cexport HBASE_MANAGES_ZK=false/' /opt/soft/hbase120/conf/hbase-env.shhbase_site_path=/opt/soft/hbase120/conf/hbase-site.xmlsed -i "23a\<property><name>hbase.rootdir</name><value>hdfs://$hostname:9000/hbase</value></property>" $hbase_site_pathsed -i '23a\<property><name>hbase.cluster.distributed</name><value>true</value></property>'  $hbase_site_pathsed -i '23a\<property><name>hbase.zookeeper.property.dataDir</name><value>/opt/soft/hbase120/hbasedir</value></property>' $hbase_site_pathsed -i '23a\<property><name>hbase.zookeeper.property.clientPort</name><value>2181</value></property>' $hbase_site_pathecho '#hbase' >>/etc/profileecho 'export HBASE_HOME=/opt/soft/hbase120' >>/etc/profileecho 'export PATH=$PATH:$HBASE_HOME/bin' >>/etc/profilefiif [ "$sqoop" = true ]; then
echo "---------------------安装sqoop------------------"tar -zxf   /opt/install/sqoop-1.4.6.bin__hadoop-2.0.4-alpha.tar.gz  -C /opt/soft
mv  /opt/soft/sqoop-1.4.6.bin__hadoop-2.0.4-alpha/  /opt/soft/sqoop146
cd  /opt/soft/sqoop146/conf/
echo "----------成功---------------"mv sqoop-env-template.sh sqoop-env.sh
echo "--------------------修改sqoop-env.sh------------"
sed -i "/^#export HADOOP_COMMON_HOME/cexport HADOOP_COMMON_HOME=/opt/soft/hadoop260/"  ./sqoop-env.sh
sed -i "/^#export HADOOP_MAPRED_HOME/cexport HADOOP_MAPRED_HOME=/opt/soft/hadoop260/"  ./sqoop-env.sh
sed -i "/^#export HBASE_HOME/cexport HBASE_HOME=/opt/soft/hbase120/"  ./sqoop-env.sh
sed -i "/^#export HIVE_HOM/cexport HIVE_HOME=/opt/soft/hive110/"  ./sqoop-env.sh
sed -i "/^#export ZOOCFGDIR/cexport ZOOCFGDIR=/opt/soft/zookeeper345/conf/"  ./sqoop-env.sh
echo "-------------------修改成功------------------"cp /opt/soft/hive110/lib/mysql-connector-java-5.1.25.jar ../lib/
echo '#sqoop' >> /etc/profile
echo 'export SQOOP_HOME=/opt/soft/sqoop146' >> /etc/profile
echo 'export PATH=$PATH:$SQOOP_HOME/bin' >> /etc/profile
source /etc/profilefiif [ "$scala" = true ]; then
echo "---------------------安装scala------------------"tar -zxf   /opt/install/scala-2.11.12.tgz   -C /opt/soft
mv  /opt/soft/scala-2.11.12   /opt/soft/scala211fiif [ "$spark" = true ]; then
echo "---------------------安装spark------------------"tar -zxf   /opt/install/spark-2.4.5-bin-hadoop2.6.tgz   -C /opt/soft
mv  /opt/soft/spark-2.4.5-bin-hadoop2.6   /opt/soft/spark245
cp /opt/soft/spark245/conf/spark-env.sh.template    /opt/soft/spark245/conf/spark-env.shspark_path="/opt/soft/spark245/conf/spark-env.sh"echo 'export JAVA_HOME=/opt/soft/jdk180' >>   $spark_path
echo 'export SCALA_HOME=/opt/soft/scala211' >>   $spark_path
echo 'export SPARK_HOME=/opt/soft/spark245' >>   $spark_path
echo 'export HADOOP_INSTALL=/opt/soft/hadoop260' >>   $spark_path
echo 'export HADOOP_CONF_DIR=$HADOOP_INSTALL/etc/hadoop' >>   $spark_path
# bind the master to this machine (double quotes so $hostname expands)
echo "export SPARK_MASTER_IP=$hostname" >> $spark_path
echo 'export SPARK_DRIVER_MEMORY=2G' >>   $spark_path
echo 'export SPARK_EXECUTOR_MEMORY=2G' >>   $spark_path
echo 'export SPARK_LOCAL_DIRS=/opt/soft/spark245' >> $spark_path
echo '#scala' >> /etc/profile
echo 'export SCALA_HOME=/opt/soft/scala211' >> /etc/profile
echo 'export PATH=$PATH:$SCALA_HOME/bin' >> /etc/profile
echo '#spark' >> /etc/profile
echo 'export SPARK_HOME=/opt/soft/spark245' >> /etc/profile
echo 'export PATH=$PATH:$SPARK_HOME/bin' >> /etc/profile
source /etc/profile
cp  /opt/install/mysql-connector-java-5.1.25.jar /opt/soft/spark245/jars/
# hive-site.xml lets spark-sql find the hive metastore
cp /opt/soft/hive110/conf/hive-site.xml /opt/soft/spark245/conf/
fi

if [ "$flume" = true ]; then
echo "------------ install flume ---------------"
tar -zxf /opt/install/flume-ng-1.6.0-cdh5.14.0.tar.gz -C /opt/soft
mv /opt/soft/apache-flume-1.6.0-cdh5.14.0-bin /opt/soft/flume160
cp /opt/soft/flume160/conf/flume-env.sh.template /opt/soft/flume160/conf/flume-env.sh
flume_path=/opt/soft/flume160/conf/flume-env.sh
sed -i "/^# export JAVA_HOME=/cexport JAVA_HOME=/opt/soft/jdk180" $flume_path
sed -i '/^# export JAVA_OPTS=/cexport JAVA_OPTS="-Xms4000m -Xmx7000m -Dcom.sun.management.jmxremote"' $flume_path
echo '#flume' >> /etc/profile
echo 'export FLUME_HOME=/opt/soft/flume160' >> /etc/profile
echo 'export PATH=$PATH:$FLUME_HOME/bin' >> /etc/profile
source /etc/profile
fi

if [ "$kafka" = true ]; then
echo "------------ install kafka ---------------"
tar -zxf /opt/install/kafka_2.11-2.0.0.tgz -C /opt/soft
mv /opt/soft/kafka_2.11-2.0.0/ /opt/soft/kafka211
mkdir -p /opt/soft/kafka211/kafka-logs
kafka_conf_path=/opt/soft/kafka211/config/server.properties
sed -i "/^#advertised.listeners=/cadvertised.listeners=PLAINTEXT://$hostname:9092" $kafka_conf_path
sed -i "/^log.dirs=/clog.dirs=/opt/soft/kafka211/kafka-logs" $kafka_conf_path
sed -i "/^zookeeper.connect=/czookeeper.connect=$hostname:2181" $kafka_conf_path
# append at the end of the file rather than at a fixed line number, which is brittle across versions
echo "delete.topic.enable=true" >> $kafka_conf_path
echo "#kafka" >> /etc/profile
echo 'export KAFKA_HOME=/opt/soft/kafka211' >> /etc/profile
echo 'export PATH=$PATH:$KAFKA_HOME/bin' >> /etc/profile
source /etc/profile
fi
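
Once the script has run, a quick sanity check could look like this (a sketch; all of these commands come from the installs above):

source /etc/profile
start-dfs.sh && start-yarn.sh   # bring up HDFS and YARN
zkServer.sh start               # start zookeeper
jps                             # expect NameNode, DataNode, SecondaryNameNode, ResourceManager, NodeManager, QuorumPeerMain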

In the spark section I added the Spark-to-Hive connection (the mysql driver jar and hive-site.xml copies); leave those lines out if you don't need it.
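
A quick way to confirm that integration works (a sketch, assuming schematool initialized the metastore successfully above):

/opt/soft/spark245/bin/spark-sql -e "show databases;"   # should list the databases hive knows about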

Writing and debugging this was not easy — please like and bookmark; more updates to come.
