1. Planning
(1) Hardware resources
10.171.29.191 master
10.173.54.84  slave1
10.171.114.223 slave2

(2) Basic information
User: jediael
Directory: /opt/jediael/

2. Environment setup
(1) Create the same username and password on every machine, and grant jediael permission to run all commands

# passwd
# useradd jediael
# passwd jediael
# vi /etc/sudoers

Add the following line:

jediael ALL=(ALL) ALL
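To confirm the grant took effect, a quick check like the following can help (a sketch; `visudo -c` validates the file's syntax and must be run as root):

```shell
# Validate sudoers syntax and confirm the jediael line is present.
visudo -c                         # prints "parsed OK" when the file is valid
grep -n '^jediael' /etc/sudoers   # should show: jediael ALL=(ALL) ALL
```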

(2) Create the directory /opt/jediael

$ sudo chown jediael:jediael /opt
$ cd /opt
$ mkdir jediael

Note: /opt must be owned by jediael, otherwise formatting the NameNode will fail. (Create jediael without sudo after the chown, so the new directory is owned by jediael rather than root.)

(3) Set the hostname and edit /etc/hosts
1. Edit /etc/sysconfig/network

NETWORKING=yes
HOSTNAME=*******

2. Edit /etc/hosts

10.171.29.191 master
10.173.54.84  slave1
10.171.114.223 slave2

Note: the hosts file must not contain a 127.0.0.1 ***** entry, otherwise clients fail with exceptions like: org.apache.hadoop.ipc.Client: Retrying connect to server: master/10.171.29.191:9000. Already tried ...
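A quick way to catch the 127.0.0.1 pitfall is to check name resolution directly (a sketch; `getent` assumes glibc-style resolution on these Linux hosts):

```shell
# Cluster hostnames must resolve to the real IPs, never to loopback.
grep -n '127\.0\.0\.1' /etc/hosts   # inspect any loopback lines by eye
getent hosts master slave1 slave2   # each line should show the 10.x address
```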
3. Run the hostname command

hostname ****

(4) Set up passwordless SSH login
Run the following commands on master as user jediael:

$ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys

Then copy authorized_keys to slave1 and slave2:

scp ~/.ssh/authorized_keys slave1:~/.ssh/
scp ~/.ssh/authorized_keys slave2:~/.ssh/

Notes:
(1) If you get an error that the .ssh directory does not exist, the machine has simply never run ssh; run ssh once and the directory is created.
(2) ~/.ssh must have permission 700 and authorized_keys permission 600; anything looser or stricter breaks key authentication.
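The permission rule can be enforced explicitly; a sketch, run as jediael on each node (the BatchMode flag makes ssh fail instead of prompting, so it doubles as a passwordless-login test):

```shell
# ~/.ssh must be 700 and authorized_keys 600, or sshd ignores the key.
mkdir -p ~/.ssh
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
# Should print the remote hostname with no password prompt.
ssh -o BatchMode=yes slave1 hostname
```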

(5) Install Java on all three machines and set the related environment variables
See http://blog.csdn.net/jediael_lu/article/details/38925871

(6) Download hadoop-1.2.1.tar.gz and extract it into /opt/jediael

3. Edit the configuration files
[Do this on all 3 machines]
(1) Edit conf/hadoop-env.sh

export JAVA_HOME=/usr/java/jdk1.7.0_51

(2) Edit core-site.xml

<property>
  <name>fs.default.name</name>
  <value>hdfs://master:9000</value>
</property>
<property>
  <name>hadoop.tmp.dir</name>
  <value>/opt/tmphadoop</value>
</property>
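These <property> snippets are fragments; in the actual file they sit inside the <configuration> root element. A complete core-site.xml for this setup would look like the following (the XML prolog lines match Hadoop 1.x's shipped template):

```xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/tmphadoop</value>
  </property>
</configuration>
```

hdfs-site.xml and mapred-site.xml are wrapped in a <configuration> element the same way.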

(3) Edit hdfs-site.xml

<property>
  <name>dfs.replication</name>
  <value>2</value>
</property>

(4) Edit mapred-site.xml

<property>
  <name>mapred.job.tracker</name>
  <value>master:9001</value>
</property>

(5) Edit masters and slaves

masters:
master

slaves:
slave1
slave2

You can do all of the above configuration on master and then push it to slave1 and slave2 with scp, e.g.:

$ scp core-site.xml slave2:/opt/jediael/hadoop-1.2.1/conf
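Copying file by file per host gets tedious; a small loop (a sketch, assuming the conf path used throughout this guide) pushes every edited file to both slaves:

```shell
# Push the edited config files from master to both slaves.
CONF=/opt/jediael/hadoop-1.2.1/conf
for h in slave1 slave2; do
  scp "$CONF"/hadoop-env.sh "$CONF"/core-site.xml "$CONF"/hdfs-site.xml \
      "$CONF"/mapred-site.xml "$CONF"/masters "$CONF"/slaves "$h:$CONF/"
done
```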

4. Start and verify

1. Format the NameNode [this step only needs to run on master, the NameNode host]

[jediael@master hadoop-1.2.1]$  bin/hadoop namenode -format

15/01/21 15:13:40 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = master/10.171.29.191
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 1.2.1
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r 1503152; compiled by 'mattf' on Mon Jul 22 15:23:09 PDT 2013
STARTUP_MSG:   java = 1.7.0_51
************************************************************/
Re-format filesystem in /opt/tmphadoop/dfs/name ? (Y or N) Y
15/01/21 15:13:43 INFO util.GSet: Computing capacity for map BlocksMap
15/01/21 15:13:43 INFO util.GSet: VM type       = 64-bit
15/01/21 15:13:43 INFO util.GSet: 2.0% max memory = 1013645312
15/01/21 15:13:43 INFO util.GSet: capacity      = 2^21 = 2097152 entries
15/01/21 15:13:43 INFO util.GSet: recommended=2097152, actual=2097152
15/01/21 15:13:43 INFO namenode.FSNamesystem: fsOwner=jediael
15/01/21 15:13:43 INFO namenode.FSNamesystem: supergroup=supergroup
15/01/21 15:13:43 INFO namenode.FSNamesystem: isPermissionEnabled=true
15/01/21 15:13:43 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
15/01/21 15:13:43 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
15/01/21 15:13:43 INFO namenode.FSEditLog: dfs.namenode.edits.toleration.length = 0
15/01/21 15:13:43 INFO namenode.NameNode: Caching file names occuring more than 10 times
15/01/21 15:13:44 INFO common.Storage: Image file /opt/tmphadoop/dfs/name/current/fsimage of size 113 bytes saved in 0 seconds.
15/01/21 15:13:44 INFO namenode.FSEditLog: closing edit log: position=4, editlog=/opt/tmphadoop/dfs/name/current/edits
15/01/21 15:13:44 INFO namenode.FSEditLog: close success: truncate to 4, editlog=/opt/tmphadoop/dfs/name/current/edits
15/01/21 15:13:44 INFO common.Storage: Storage directory /opt/tmphadoop/dfs/name has been successfully formatted.
15/01/21 15:13:44 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at master/10.171.29.191
************************************************************/

2. Start Hadoop [run this step on master only]

[jediael@master hadoop-1.2.1]$ bin/start-all.sh 

starting namenode, logging to /opt/jediael/hadoop-1.2.1/libexec/../logs/hadoop-jediael-namenode-master.out
slave1: starting datanode, logging to /opt/jediael/hadoop-1.2.1/libexec/../logs/hadoop-jediael-datanode-slave1.out
slave2: starting datanode, logging to /opt/jediael/hadoop-1.2.1/libexec/../logs/hadoop-jediael-datanode-slave2.out
master: starting secondarynamenode, logging to /opt/jediael/hadoop-1.2.1/libexec/../logs/hadoop-jediael-secondarynamenode-master.out
starting jobtracker, logging to /opt/jediael/hadoop-1.2.1/libexec/../logs/hadoop-jediael-jobtracker-master.out
slave1: starting tasktracker, logging to /opt/jediael/hadoop-1.2.1/libexec/../logs/hadoop-jediael-tasktracker-slave1.out
slave2: starting tasktracker, logging to /opt/jediael/hadoop-1.2.1/libexec/../logs/hadoop-jediael-tasktracker-slave2.out

3. Verify via the web UIs
NameNode      http://ip:50070
JobTracker    http://ip:50030
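If a browser is not handy, the two UIs can be probed from the shell (a sketch; hostnames and ports as configured in this guide):

```shell
# Each should print 200 once the daemons are up.
curl -s -o /dev/null -w '%{http_code}\n' http://master:50070/
curl -s -o /dev/null -w '%{http_code}\n' http://master:50030/
```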

4. Check the Java processes on each host
(1) master:
$ jps
17963 NameNode
18280 JobTracker
18446 Jps
18171 SecondaryNameNode
(2) slave1:
$ jps
16019 Jps
15858 DataNode
15954 TaskTracker
(3) slave2:
$ jps
15625 Jps
15465 DataNode
15561 TaskTracker
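The per-host daemon lists above can also be checked mechanically; a sketch, using the daemon names jps prints for Hadoop 1.x:

```shell
# On master; change the list to "DataNode TaskTracker" on the slaves.
for d in NameNode SecondaryNameNode JobTracker; do
  if jps | grep -q "$d"; then echo "$d OK"; else echo "$d MISSING"; fi
done
```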

5. Run a complete MapReduce program

Everything below is executed on master only.
1. Copy wordcount.jar to the server
The program source is at http://blog.csdn.net/jediael_lu/article/details/37596469

2. Create the input directory and copy the input file into it

[jediael@master166 ~]$ hadoop fs -mkdir /wcin
[jediael@master166 projects]$ hadoop fs -copyFromLocal /opt/jediael/hadoop-1.2.1/conf/hdfs-site.xml /wcin

3. Run the program
[jediael@master166 projects]$ hadoop jar wordcount.jar org.jediael.hadoopdemo.wordcount.WordCount /wcin /wcout
14/08/31 20:04:26 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
14/08/31 20:04:26 INFO input.FileInputFormat: Total input paths to process : 1
14/08/31 20:04:26 INFO util.NativeCodeLoader: Loaded the native-hadoop library
14/08/31 20:04:26 WARN snappy.LoadSnappy: Snappy native library not loaded
14/08/31 20:04:26 INFO mapred.JobClient: Running job: job_201408311554_0003
14/08/31 20:04:27 INFO mapred.JobClient: map 0% reduce 0%
14/08/31 20:04:31 INFO mapred.JobClient: map 100% reduce 0%
14/08/31 20:04:40 INFO mapred.JobClient: map 100% reduce 100%
14/08/31 20:04:40 INFO mapred.JobClient: Job complete: job_201408311554_0003
14/08/31 20:04:40 INFO mapred.JobClient: Counters: 29
14/08/31 20:04:40 INFO mapred.JobClient: Job Counters
14/08/31 20:04:40 INFO mapred.JobClient: Launched reduce tasks=1
14/08/31 20:04:40 INFO mapred.JobClient: SLOTS_MILLIS_MAPS=4230
14/08/31 20:04:40 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0
14/08/31 20:04:40 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0
14/08/31 20:04:40 INFO mapred.JobClient: Launched map tasks=1
14/08/31 20:04:40 INFO mapred.JobClient: Data-local map tasks=1
14/08/31 20:04:40 INFO mapred.JobClient: SLOTS_MILLIS_REDUCES=8531
14/08/31 20:04:40 INFO mapred.JobClient: File Output Format Counters
14/08/31 20:04:40 INFO mapred.JobClient: Bytes Written=284
14/08/31 20:04:40 INFO mapred.JobClient: FileSystemCounters
14/08/31 20:04:40 INFO mapred.JobClient: FILE_BYTES_READ=370
14/08/31 20:04:40 INFO mapred.JobClient: HDFS_BYTES_READ=357
14/08/31 20:04:40 INFO mapred.JobClient: FILE_BYTES_WRITTEN=104958
14/08/31 20:04:40 INFO mapred.JobClient: HDFS_BYTES_WRITTEN=284
14/08/31 20:04:40 INFO mapred.JobClient: File Input Format Counters
14/08/31 20:04:40 INFO mapred.JobClient: Bytes Read=252
14/08/31 20:04:40 INFO mapred.JobClient: Map-Reduce Framework
14/08/31 20:04:40 INFO mapred.JobClient: Map output materialized bytes=370
14/08/31 20:04:40 INFO mapred.JobClient: Map input records=11
14/08/31 20:04:40 INFO mapred.JobClient: Reduce shuffle bytes=370
14/08/31 20:04:40 INFO mapred.JobClient: Spilled Records=40
14/08/31 20:04:40 INFO mapred.JobClient: Map output bytes=324
14/08/31 20:04:40 INFO mapred.JobClient: Total committed heap usage (bytes)=238026752
14/08/31 20:04:40 INFO mapred.JobClient: CPU time spent (ms)=1130
14/08/31 20:04:40 INFO mapred.JobClient: Combine input records=0
14/08/31 20:04:40 INFO mapred.JobClient: SPLIT_RAW_BYTES=105
14/08/31 20:04:40 INFO mapred.JobClient: Reduce input records=20
14/08/31 20:04:40 INFO mapred.JobClient: Reduce input groups=20
14/08/31 20:04:40 INFO mapred.JobClient: Combine output records=0
14/08/31 20:04:40 INFO mapred.JobClient: Physical memory (bytes) snapshot=289288192
14/08/31 20:04:40 INFO mapred.JobClient: Reduce output records=20
14/08/31 20:04:40 INFO mapred.JobClient: Virtual memory (bytes) snapshot=1533636608
14/08/31 20:04:40 INFO mapred.JobClient: Map output records=20

4. View the results

[jediael@master166 projects]$ hadoop fs -cat /wcout/* 

--> 1
<!-- 1
</configuration> 1
</property> 1
<?xml 1
<?xml-stylesheet 1
<configuration> 1
<name>dfs.replication</name> 1
<property> 1
<value>2</value> 1
Put 1
file. 1
href="configuration.xsl"?> 1
in 1
overrides 1
property 1
site-specific 1
this 1
type="text/xsl" 1
version="1.0"?> 1
cat: File does not exist: /wcout/_logs
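The final "File does not exist" message is harmless: the /wcout/* glob also matches the job's _logs directory, which cat cannot read. Reading only the part files avoids it (a sketch; part-r-00000 is the default new-API reducer output name):

```shell
# Read only the reducer output files, skipping _logs.
hadoop fs -cat '/wcout/part-*'
```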
