Operating system: Mac OS X

Part 1: Preparation

1. JDK 1.8

  Download: http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html

2. Hadoop CDH

  Download: https://archive.cloudera.com/cdh5/cdh/5/

  Version installed here: hadoop-2.6.0-cdh5.9.2.tar.gz

Part 2: Configure SSH (passwordless login)

1. Open an iTerm2 terminal and run: ssh-keygen -t rsa, pressing Enter through the prompts  -- generates the key pair
2. cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys  -- authorizes your public key so local login needs no password (use your key's actual file name if you changed the default)
3. chmod 600 ~/.ssh/authorized_keys  -- restricts the file's permissions; sshd ignores the file otherwise
4. ssh localhost  -- log in without a password; if the last-login time is printed, it worked
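
The four steps above can be condensed into one script. The sketch below writes a throwaway key under a temp directory so it can be run safely anywhere; for the real setup, use ~/.ssh and the default file names:

```shell
# Sketch of the passwordless-SSH setup. DEMO stands in for ~/.ssh so this
# can run without touching your real keys; drop -f/-N and use the default
# paths when doing it for real.
DEMO=$(mktemp -d)

# 1) Generate an RSA key pair non-interactively (-N '' = empty passphrase)
ssh-keygen -t rsa -N '' -f "$DEMO/id_rsa" -q

# 2) Authorize the public key for password-free login
cat "$DEMO/id_rsa.pub" >> "$DEMO/authorized_keys"

# 3) sshd ignores an authorized_keys file that is group- or world-writable
chmod 600 "$DEMO/authorized_keys"

# 4) With the real ~/.ssh in place, `ssh localhost` now logs in without a password
```

On macOS, also enable Remote Login under System Preferences > Sharing, otherwise `ssh localhost` is refused outright.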

Part 3: Configure Hadoop & environment variables

1. Create the hadoop directories & extract the tarball

  mkdir -p work/install/hadoop-cdh5.9.2    -- hadoop base directory
  mkdir -p work/install/hadoop-cdh5.9.2/current/tmp work/install/hadoop-cdh5.9.2/current/nmnode work/install/hadoop-cdh5.9.2/current/dtnode    -- hadoop temp, NameNode, and DataNode directories

  tar -xvf hadoop-2.6.0-cdh5.9.2.tar.gz    -- extract the tarball
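
These commands can be checked end to end. The sketch below recreates the same layout under a temp directory; BASE stands in for ~/work/install/hadoop-cdh5.9.2, and a dummy tarball stands in for hadoop-2.6.0-cdh5.9.2.tar.gz:

```shell
# BASE stands in for ~/work/install/hadoop-cdh5.9.2.
BASE=$(mktemp -d)/hadoop-cdh5.9.2

# The temp, NameNode, and DataNode data directories referenced later by
# core-site.xml and hdfs-site.xml
mkdir -p "$BASE/current/tmp" "$BASE/current/nmnode" "$BASE/current/dtnode"

# Dummy tree packed and unpacked in place of hadoop-2.6.0-cdh5.9.2.tar.gz
mkdir -p /tmp/pkg/hadoop-2.6.0-cdh5.9.2/bin
tar -cf "$BASE/hadoop.tar" -C /tmp/pkg hadoop-2.6.0-cdh5.9.2
tar -xf "$BASE/hadoop.tar" -C "$BASE"

ls "$BASE"
```

Note that mkdir -p creates the whole chain of parent directories at once, which is why the original commands work from a fresh home directory.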

2. Set environment variables in .bash_profile

JAVA_HOME="/Library/Java/JavaVirtualMachines/jdk1.8.0_152.jdk/Contents/Home"
HADOOP_HOME="/Users/kimbo/work/install/hadoop-cdh5.9.2/hadoop-2.6.0-cdh5.9.2"

PATH="/usr/local/bin:~/cmd:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH"
CLASSPATH=".:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar"

export JAVA_HOME PATH CLASSPATH HADOOP_HOME


  source .bash_profile   -- reload the environment variables
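
Prepending $HADOOP_HOME/bin and $HADOOP_HOME/sbin to PATH is what lets you run hdfs and start-all.sh from any directory. Below is a self-contained sketch of that mechanism; DEMO_HOME and hdfs-demo are illustrative stand-ins, not part of Hadoop:

```shell
# DEMO_HOME stands in for HADOOP_HOME; hdfs-demo stands in for bin/hdfs.
DEMO_HOME=$(mktemp -d)
mkdir -p "$DEMO_HOME/bin"

# A stand-in executable, playing the role of $HADOOP_HOME/bin/hdfs
printf '#!/bin/sh\necho hdfs-demo-ok\n' > "$DEMO_HOME/bin/hdfs-demo"
chmod +x "$DEMO_HOME/bin/hdfs-demo"

# The same pattern .bash_profile uses: prepend the bin directory to PATH
PATH="$DEMO_HOME/bin:$PATH"
export PATH

# The command now resolves from anywhere, just as hdfs does after sourcing
hdfs-demo    # prints: hdfs-demo-ok
```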

3. Edit the configuration files (the key step)

  cd $HADOOP_HOME/etc/hadoop

  • core-site.xml

<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/Users/zhangshaosheng/work/install/hadoop-cdh5.9.2/current/tmp</value>
  </property>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:8020</value>
  </property>
  <property>
    <name>fs.trash.interval</name>
    <value>4320</value>
    <description>3 days = 60min * 24h * 3days</description>
  </property>
</configuration>
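
fs.trash.interval is specified in minutes, so the 3-day retention noted in the description checks out:

```shell
# 60 minutes * 24 hours * 3 days, the value used for fs.trash.interval
echo $((60 * 24 * 3))    # prints: 4320
```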


  • hdfs-site.xml

<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/Users/zhangshaosheng/work/install/hadoop-cdh5.9.2/current/nmnode</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/Users/zhangshaosheng/work/install/hadoop-cdh5.9.2/current/dtnode</value>
  </property>
  <property>
    <name>dfs.datanode.http.address</name>
    <value>localhost:50075</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.permissions.enabled</name>
    <value>false</value>
  </property>
</configuration>


  • yarn-site.xml

<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
    <description>Whether to enable log aggregation</description>
  </property>
  <property>
    <name>yarn.nodemanager.remote-app-log-dir</name>
    <value>/Users/zhangshaosheng/work/install/hadoop-cdh5.9.2/current/tmp/yarn-logs</value>
    <description>Where to aggregate logs to.</description>
  </property>
  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>8192</value>
    <description>Amount of physical memory, in MB, that can be allocated
      for containers.</description>
  </property>
  <property>
    <name>yarn.nodemanager.resource.cpu-vcores</name>
    <value>2</value>
    <description>Number of CPU cores that can be allocated
      for containers.</description>
  </property>
  <property>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>1024</value>
    <description>The minimum allocation for every container request at the RM,
      in MBs. Memory requests lower than this won't take effect,
      and the specified value will get allocated at minimum.</description>
  </property>
  <property>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>2048</value>
    <description>The maximum allocation for every container request at the RM,
      in MBs. Memory requests higher than this won't take effect,
      and will get capped to this value.</description>
  </property>
  <property>
    <name>yarn.scheduler.minimum-allocation-vcores</name>
    <value>1</value>
    <description>The minimum allocation for every container request at the RM,
      in terms of virtual CPU cores. Requests lower than this won't take effect,
      and the specified value will get allocated the minimum.</description>
  </property>
  <property>
    <name>yarn.scheduler.maximum-allocation-vcores</name>
    <value>2</value>
    <description>The maximum allocation for every container request at the RM,
      in terms of virtual CPU cores. Requests higher than this won't take effect,
      and will get capped to this value.</description>
  </property>
</configuration>


  • mapred-site.xml

<configuration>
  <property>
    <name>mapreduce.jobtracker.address</name>
    <value>localhost:8021</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.done-dir</name>
    <value>/Users/zhangshaosheng/work/install/hadoop-cdh5.9.2/current/tmp/job-history/</value>
  </property>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
    <description>The runtime framework for executing MapReduce jobs.
      Can be one of local, classic or yarn.</description>
  </property>

  <property>
    <name>mapreduce.map.cpu.vcores</name>
    <value>1</value>
    <description>The number of virtual cores required for each map task.</description>
  </property>
  <property>
    <name>mapreduce.reduce.cpu.vcores</name>
    <value>1</value>
    <description>The number of virtual cores required for each reduce task.</description>
  </property>

  <property>
    <name>mapreduce.map.memory.mb</name>
    <value>1024</value>
    <description>Larger resource limit for maps.</description>
  </property>
  <property>
    <name>mapreduce.reduce.memory.mb</name>
    <value>1024</value>
    <description>Larger resource limit for reduces.</description>
  </property>

  <property>
    <name>mapreduce.map.java.opts</name>
    <value>-Xmx768m</value>
    <description>Heap-size for child jvms of maps.</description>
  </property>
  <property>
    <name>mapreduce.reduce.java.opts</name>
    <value>-Xmx768m</value>
    <description>Heap-size for child jvms of reduces.</description>
  </property>

  <property>
    <name>yarn.app.mapreduce.am.resource.mb</name>
    <value>1024</value>
    <description>The amount of memory the MR AppMaster needs.</description>
  </property>
</configuration>
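
A misplaced or unclosed tag in any of these *-site.xml files makes the daemons fail at startup with a parse error, so it is worth checking well-formedness before starting anything. A sketch using Python's standard-library XML parser on a sample file; in real use, point it at the four files under $HADOOP_HOME/etc/hadoop:

```shell
# Write a sample config to validate; in real use, parse core-site.xml,
# hdfs-site.xml, yarn-site.xml and mapred-site.xml instead.
cat > /tmp/sample-site.xml <<'XML'
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
XML

# ET.parse raises ParseError (non-zero exit) if the file is malformed
python3 -c 'import xml.etree.ElementTree as ET; ET.parse("/tmp/sample-site.xml"); print("sample-site.xml: OK")'
```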


  • hadoop-env.sh

export JAVA_HOME=${JAVA_HOME}    -- add the Java environment variable

Part 4: Start

  1. Format the NameNode

    hdfs namenode -format

  If the hdfs command is not found, check that the environment variables are configured correctly.

  2. Start the daemons

    cd $HADOOP_HOME/sbin

    Run: start-all.sh and enter your password when prompted

Part 5: Verify

  1. Run jps in the terminal

    If the Hadoop daemons (NameNode, DataNode, SecondaryNameNode, ResourceManager, NodeManager) appear in the output, everything is OK.

  2. Open the web UIs

    a) HDFS: http://localhost:50070/dfshealth.html#tab-overview

      

    b) YARN Cluster (ResourceManager): http://localhost:8088/cluster

      

    c) YARN NodeManager: http://localhost:8042/node

    

Reposted from: https://www.cnblogs.com/kimbo/p/8724062.html
