Hadoop 1.2.1 Cluster Installation (Part 3): Configuring Hadoop

1: Download hadoop-1.2.1.tar.gz

Create the directory under /home/jifeng:   mkdir hadoop

2: Extract the archive

[jifeng@jifeng01 hadoop]$ ls
hadoop-1.2.1.tar.gz
[jifeng@jifeng01 hadoop]$ tar zxf hadoop-1.2.1.tar.gz
[jifeng@jifeng01 hadoop]$ ls
hadoop-1.2.1  hadoop-1.2.1.tar.gz
[jifeng@jifeng01 hadoop]$ 

3: Edit the hadoop-env.sh configuration file

[jifeng@jifeng01 hadoop]$ cd hadoop-1.2.1
[jifeng@jifeng01 hadoop-1.2.1]$ ls
bin          hadoop-ant-1.2.1.jar          ivy          sbin
build.xml    hadoop-client-1.2.1.jar       ivy.xml      share
c++          hadoop-core-1.2.1.jar         lib          src
CHANGES.txt  hadoop-examples-1.2.1.jar     libexec      webapps
conf         hadoop-minicluster-1.2.1.jar  LICENSE.txt
contrib      hadoop-test-1.2.1.jar         NOTICE.txt
docs         hadoop-tools-1.2.1.jar        README.txt
[jifeng@jifeng01 hadoop-1.2.1]$ cd conf
[jifeng@jifeng01 conf]$ ls
capacity-scheduler.xml      hadoop-policy.xml      slaves
configuration.xsl           hdfs-site.xml          ssl-client.xml.example
core-site.xml               log4j.properties       ssl-server.xml.example
fair-scheduler.xml          mapred-queue-acls.xml  taskcontroller.cfg
hadoop-env.sh               mapred-site.xml        task-log4j.properties
hadoop-metrics2.properties  masters
[jifeng@jifeng01 conf]$ vi hadoop-env.sh
"hadoop-env.sh" 57L, 2436C written
[jifeng@jifeng01 conf]$ cat hadoop-env.sh
# Set Hadoop-specific environment variables here.

# The only required environment variable is JAVA_HOME.  All others are
# optional.  When running a distributed configuration it is best to
# set JAVA_HOME in this file, so that it is correctly defined on
# remote nodes.

# The java implementation to use.  Required.
export JAVA_HOME=/home/jifeng/jdk1.7.0_45

# Extra Java CLASSPATH elements.  Optional.
# export HADOOP_CLASSPATH=

# The maximum amount of heap to use, in MB. Default is 1000.
# export HADOOP_HEAPSIZE=2000

# Extra Java runtime options.  Empty by default.
# export HADOOP_OPTS=-server

# Command specific options appended to HADOOP_OPTS when specified
export HADOOP_NAMENODE_OPTS="-Dcom.sun.management.jmxremote $HADOOP_NAMENODE_OPTS"

Change the commented-out "# export JAVA_HOME" line to "export JAVA_HOME=/home/jifeng/jdk1.7.0_45".
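The vi edit above can also be done non-interactively with sed, which is handy when setting up several nodes. A minimal sketch: hadoop-env.sh.demo is a stand-in file (not the real conf/hadoop-env.sh) seeded with the stock commented-out line that ships with Hadoop 1.x.

```shell
# Stand-in for conf/hadoop-env.sh, containing the stock commented-out line.
cat > hadoop-env.sh.demo <<'EOF'
# The java implementation to use.  Required.
# export JAVA_HOME=/usr/lib/j2sdk1.5-sun
EOF

# Uncomment the line and point it at our JDK in one step.
sed -i 's|^# export JAVA_HOME=.*|export JAVA_HOME=/home/jifeng/jdk1.7.0_45|' hadoop-env.sh.demo

# Confirm the edit took effect.
grep '^export JAVA_HOME=' hadoop-env.sh.demo
```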

4: Edit the core-site.xml file

Create a tmp directory under the hadoop directory:

[jifeng@jifeng01 hadoop]$ mkdir tmp

[jifeng@jifeng01 conf]$ vi core-site.xml

After editing it looks like this:

[jifeng@jifeng01 conf]$ cat core-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://jifeng01:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/jifeng/hadoop/tmp</value>
  </property>
</configuration>
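Here fs.default.name is the NameNode RPC URI that all clients and DataNodes will use, and hadoop.tmp.dir is the base path for HDFS local storage (which is why the tmp directory was created first). Before starting the cluster it can be worth confirming the file says what you expect; a sketch using grep/sed against a scratch copy (core-site.xml.demo is a stand-in, not the real conf file):

```shell
# Scratch copy of the two properties configured above.
cat > core-site.xml.demo <<'EOF'
<?xml version="1.0"?>
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://jifeng01:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/jifeng/hadoop/tmp</value>
  </property>
</configuration>
EOF

# Print the value that follows the fs.default.name property.
grep -A1 '<name>fs.default.name</name>' core-site.xml.demo \
  | sed -n 's|.*<value>\(.*\)</value>.*|\1|p'
```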

5: Edit hdfs-site.xml

After editing it looks like this:

[jifeng@jifeng01 conf]$ cat hdfs-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
    <description></description>
  </property>
</configuration>

6: Edit the mapred-site.xml file

After editing it looks like this:

[jifeng@jifeng01 conf]$ cat  mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>jifeng01:9001</value>
    <description>JobTracker address</description>
  </property>
</configuration>
[jifeng@jifeng01 conf]$ 

7: Edit the masters and slaves files

After editing they look like this:

[jifeng@jifeng01 conf]$ cat masters
jifeng01
[jifeng@jifeng01 conf]$ cat slaves
jifeng02
jifeng03
[jifeng@jifeng01 conf]$ 

8: Copy hadoop-1.2.1 to the other two nodes

[jifeng@jifeng01 hadoop]$ scp -r ./hadoop-1.2.1 jifeng02:/home/jifeng/hadoop

[jifeng@jifeng01 hadoop]$ scp -r ./hadoop-1.2.1 jifeng03:/home/jifeng/hadoop
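Since the slaves file already lists the worker hostnames, the two scp commands can be driven from it in a loop, so adding a node later only means editing slaves. A sketch (slaves.demo is a scratch copy of the slaves file, and the scp invocations are echoed rather than executed so the loop can be dry-run without a cluster):

```shell
# Scratch copy of conf/slaves with the two worker hostnames.
printf 'jifeng02\njifeng03\n' > slaves.demo

# Echo one scp command per worker; drop the "echo" to actually copy.
while read -r host; do
  echo scp -r ./hadoop-1.2.1 "$host":/home/jifeng/hadoop
done < slaves.demo
```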

9: Format the distributed file system

[jifeng@jifeng01 hadoop-1.2.1]$ bin/hadoop namenode -format
14/07/24 10:29:43 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = jifeng01/10.3.7.214
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 1.2.1
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r 1503152; compiled by 'mattf' on Mon Jul 22 15:23:09 PDT 2013
STARTUP_MSG:   java = 1.7.0_45
************************************************************/
14/07/24 10:29:43 INFO util.GSet: Computing capacity for map BlocksMap
14/07/24 10:29:43 INFO util.GSet: VM type       = 64-bit
14/07/24 10:29:43 INFO util.GSet: 2.0% max memory = 932184064
14/07/24 10:29:43 INFO util.GSet: capacity      = 2^21 = 2097152 entries
14/07/24 10:29:43 INFO util.GSet: recommended=2097152, actual=2097152
14/07/24 10:29:43 INFO namenode.FSNamesystem: fsOwner=jifeng
14/07/24 10:29:43 INFO namenode.FSNamesystem: supergroup=supergroup
14/07/24 10:29:43 INFO namenode.FSNamesystem: isPermissionEnabled=true
14/07/24 10:29:43 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
14/07/24 10:29:43 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
14/07/24 10:29:43 INFO namenode.FSEditLog: dfs.namenode.edits.toleration.length = 0
14/07/24 10:29:43 INFO namenode.NameNode: Caching file names occuring more than 10 times
14/07/24 10:29:43 INFO common.Storage: Image file /home/jifeng/hadoop/tmp/dfs/name/current/fsimage of size 112 bytes saved in 0 seconds.
14/07/24 10:29:44 INFO namenode.FSEditLog: closing edit log: position=4, editlog=/home/jifeng/hadoop/tmp/dfs/name/current/edits
14/07/24 10:29:44 INFO namenode.FSEditLog: close success: truncate to 4, editlog=/home/jifeng/hadoop/tmp/dfs/name/current/edits
14/07/24 10:29:44 INFO common.Storage: Storage directory /home/jifeng/hadoop/tmp/dfs/name has been successfully formatted.
14/07/24 10:29:44 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at jifeng01/10.3.7.214
************************************************************/
[jifeng@jifeng01 hadoop-1.2.1]$ 

10: Start Hadoop

[jifeng@jifeng01 hadoop-1.2.1]$ bin/start-all.sh
starting namenode, logging to /home/jifeng/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-jifeng-namenode-jifeng01.out
jifeng03: starting datanode, logging to /home/jifeng/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-jifeng-datanode-jifeng03.out
jifeng02: starting datanode, logging to /home/jifeng/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-jifeng-datanode-jifeng02.out
The authenticity of host 'jifeng01 (10.3.7.214)' can't be established.
RSA key fingerprint is a8:9d:34:63:fa:c2:47:4f:81:10:94:fa:4b:ba:08:55.
Are you sure you want to continue connecting (yes/no)? yes
jifeng01: Warning: Permanently added 'jifeng01,10.3.7.214' (RSA) to the list of known hosts.
jifeng@jifeng01's password:
jifeng@jifeng01's password:
jifeng01: Permission denied, please try again.
jifeng01: starting secondarynamenode, logging to /home/jifeng/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-jifeng-secondarynamenode-jifeng01.out
starting jobtracker, logging to /home/jifeng/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-jifeng-jobtracker-jifeng01.out
jifeng03: starting tasktracker, logging to /home/jifeng/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-jifeng-tasktracker-jifeng03.out
jifeng02: starting tasktracker, logging to /home/jifeng/hadoop/hadoop-1.2.1/libexec/../logs/hadoop-jifeng-tasktracker-jifeng02.out
[jifeng@jifeng01 hadoop-1.2.1]$ 

A password had to be entered here: the start script ssh-es from jifeng01 back to itself to launch the SecondaryNameNode, and passwordless SSH to jifeng01 itself has not yet been set up.
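The usual fix is to add the master's own public key to its authorized_keys, the same recipe used for the worker nodes: ssh-keygen -t rsa (if no key exists yet), then append ~/.ssh/id_rsa.pub to ~/.ssh/authorized_keys and chmod it to 600. A sketch, simulated against a scratch directory (demo_ssh, with a stand-in key string) so the steps can be checked without touching the real ~/.ssh:

```shell
# Scratch directory standing in for ~/.ssh.
mkdir -p demo_ssh

# Stand-in public key (a real one would come from ssh-keygen -t rsa).
echo 'ssh-rsa AAAAB3... jifeng@jifeng01' > demo_ssh/id_rsa.pub

# Append our own public key to authorized_keys, enabling self-login.
cat demo_ssh/id_rsa.pub >> demo_ssh/authorized_keys

# sshd refuses authorized_keys that are group/world writable.
chmod 600 demo_ssh/authorized_keys
```

After doing this against the real ~/.ssh on jifeng01, rerunning bin/start-all.sh should no longer prompt for a password.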

11: Verify the daemon processes

[jifeng@jifeng01 hadoop-1.2.1]$ jps
4539 JobTracker
4454 SecondaryNameNode
4269 NameNode
4667 Jps
[jifeng@jifeng01 hadoop-1.2.1]$ 
[jifeng@jifeng02 hadoop]$ jps
2734 TaskTracker
2815 Jps
2647 DataNode
[jifeng@jifeng02 hadoop]$ 
[jifeng@jifeng03 hadoop]$ jps
4070 Jps
3878 DataNode
3993 TaskTracker
[jifeng@jifeng03 hadoop]$ 
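The per-node jps checks above can be scripted so a missing daemon stands out. A sketch that compares canned jps output (copied from the jifeng01 session above) against the daemons the master should be running; jps_output is inlined here so the loop runs without a live cluster:

```shell
# Daemons the master node is expected to run (workers would instead
# expect: DataNode TaskTracker).
expected_master='JobTracker NameNode SecondaryNameNode'

# Canned output from "jps" on jifeng01, taken from the session above.
jps_output='4539 JobTracker
4454 SecondaryNameNode
4269 NameNode
4667 Jps'

# Report OK/MISSING for each expected daemon; the " name$" anchor keeps
# "NameNode" from matching inside "SecondaryNameNode".
for d in $expected_master; do
  if echo "$jps_output" | grep -q " ${d}\$"; then
    echo "$d OK"
  else
    echo "$d MISSING"
  fi
done
```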
