Part 1: Installing Hadoop

Most of the configuration below just appends a few lines to the end of each file; a few directory paths also have to be created by hand.

1 Set the hostname
# hostname
xinyanfei
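
The command above only prints the current name. To actually set it, a sketch assuming a CentOS 6-era system (matching the Hadoop 1.0.4 vintage; the sed line makes the change survive a reboot):
# hostname xinyanfei
# sed -i 's/^HOSTNAME=.*/HOSTNAME=xinyanfei/' /etc/sysconfig/network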

2 Configure /etc/hosts
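The entry itself isn't shown in the original; a minimal sketch that maps the hostname to the loopback address, consistent with the ping output in the next step (on a real cluster you would use the machine's LAN IP instead):
# echo "127.0.0.1 xinyanfei" >> /etc/hosts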

3 Ping the hostname
# ping xinyanfei
PING xinyanfei (127.0.0.1) 56(84) bytes of data.
64 bytes from localhost (127.0.0.1): icmp_seq=1 ttl=64 time=0.036 ms

4 Download Hadoop and extract it
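The download step isn't shown in the original; a sketch that pulls the 1.0.4 release from the Apache archive (the mirror URL is an assumption) and unpacks it under /export/servers:
# mkdir -p /export/servers && cd /export/servers
# wget https://archive.apache.org/dist/hadoop/core/hadoop-1.0.4/hadoop-1.0.4.tar.gz
# tar -zxf hadoop-1.0.4.tar.gz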
# cd /export/servers/hadoop-1.0.4/conf/

5 Review the modified core-site.xml
# cat core-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://xinyanfei:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/export/Data/hadoop/tmp</value>
  </property>
</configuration>
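
As the note at the top says, some paths must be created by hand; the hadoop.tmp.dir directory above is one of them:
# mkdir -p /export/Data/hadoop/tmp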

6 Review the modified hadoop-env.sh. The only change from the stock file is the JAVA_HOME export appended at the end.
# cat hadoop-env.sh
# Set Hadoop-specific environment variables here.

# The only required environment variable is JAVA_HOME. All others are
# optional. When running a distributed configuration it is best to
# set JAVA_HOME in this file, so that it is correctly defined on
# remote nodes.

# The java implementation to use. Required.
# export JAVA_HOME=/usr/lib/j2sdk1.5-sun

# Extra Java CLASSPATH elements. Optional.
# export HADOOP_CLASSPATH=

# The maximum amount of heap to use, in MB. Default is 1000.
# export HADOOP_HEAPSIZE=2000

# Extra Java runtime options. Empty by default.
# export HADOOP_OPTS=-server

# Command specific options appended to HADOOP_OPTS when specified
export HADOOP_NAMENODE_OPTS="-Dcom.sun.management.jmxremote $HADOOP_NAMENODE_OPTS"
export HADOOP_SECONDARYNAMENODE_OPTS="-Dcom.sun.management.jmxremote $HADOOP_SECONDARYNAMENODE_OPTS"
export HADOOP_DATANODE_OPTS="-Dcom.sun.management.jmxremote $HADOOP_DATANODE_OPTS"
export HADOOP_BALANCER_OPTS="-Dcom.sun.management.jmxremote $HADOOP_BALANCER_OPTS"
export HADOOP_JOBTRACKER_OPTS="-Dcom.sun.management.jmxremote $HADOOP_JOBTRACKER_OPTS"
# export HADOOP_TASKTRACKER_OPTS=
# The following applies to multiple commands (fs, dfs, fsck, distcp etc)
# export HADOOP_CLIENT_OPTS

# Extra ssh options. Empty by default.
# export HADOOP_SSH_OPTS="-o ConnectTimeout=1 -o SendEnv=HADOOP_CONF_DIR"

# Where log files are stored. $HADOOP_HOME/logs by default.
# export HADOOP_LOG_DIR=${HADOOP_HOME}/logs

# File naming remote slave hosts. $HADOOP_HOME/conf/slaves by default.
# export HADOOP_SLAVES=${HADOOP_HOME}/conf/slaves

# host:path where hadoop code should be rsync'd from. Unset by default.
# export HADOOP_MASTER=master:/home/$USER/src/hadoop

# Seconds to sleep between slave commands. Unset by default. This
# can be useful in large clusters, where, e.g., slave rsyncs can
# otherwise arrive faster than the master can service them.
# export HADOOP_SLAVE_SLEEP=0.1

# The directory where pid files are stored. /tmp by default.
# export HADOOP_PID_DIR=/var/hadoop/pids

# A string representing this instance of hadoop. $USER by default.
# export HADOOP_IDENT_STRING=$USER

# The scheduling priority for daemon processes. See 'man nice'.
# export HADOOP_NICENESS=10

export JAVA_HOME=/export/servers/jdk1.7.0_80/
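
A quick sanity check that this JDK path is valid before starting any daemons (adjust to your own install):
# /export/servers/jdk1.7.0_80/bin/java -version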

7 Review the modified hdfs-site.xml. dfs.replication is set to 1 because a pseudo-distributed cluster has only one DataNode, and dfs.permissions is disabled to avoid HDFS permission errors while testing.

# cat hdfs-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
</configuration>

8 Review the modified mapred-site.xml (mapred.job.tracker is the JobTracker address).

# cat mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>xinyanfei:9001</value>
  </property>
</configuration>

9 Review the masters and slaves files
# cat masters
localhost
# cat slaves
localhost
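
Both files list localhost for the pseudo-distributed setup. Hadoop's start scripts ssh into every host listed in these files, so passwordless SSH to localhost has to work first; a common setup sketch:
# ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
# cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
# chmod 600 ~/.ssh/authorized_keys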

10 Format the NameNode
# hadoop namenode -format

Warning: $HADOOP_HOME is deprecated.

16/11/17 13:44:59 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = localhost.localdomain/127.0.0.1
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 1.0.4
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r 1393290; compiled by 'hortonfo' on Wed Oct 3 05:13:58 UTC 2012
************************************************************/
16/11/17 13:44:59 INFO util.GSet: VM type = 64-bit
16/11/17 13:44:59 INFO util.GSet: 2% max memory = 17.78 MB
16/11/17 13:44:59 INFO util.GSet: capacity = 2^21 = 2097152 entries
16/11/17 13:44:59 INFO util.GSet: recommended=2097152, actual=2097152
16/11/17 13:45:00 INFO namenode.FSNamesystem: fsOwner=root
16/11/17 13:45:00 INFO namenode.FSNamesystem: supergroup=supergroup
16/11/17 13:45:00 INFO namenode.FSNamesystem: isPermissionEnabled=false
16/11/17 13:45:00 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
16/11/17 13:45:00 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
16/11/17 13:45:00 INFO namenode.NameNode: Caching file names occuring more than 10 times
16/11/17 13:45:00 INFO common.Storage: Image file of size 110 saved in 0 seconds.
16/11/17 13:45:00 INFO common.Storage: Storage directory /export/Data/hadoop/tmp/dfs/name has been successfully formatted.
16/11/17 13:45:00 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at localhost.localdomain/127.0.0.1
************************************************************/
#
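
With the NameNode formatted, Hadoop can be brought up; a sketch using the Hadoop 1.x launcher, which starts the HDFS and MapReduce daemons seen in the jps listing in Part 2:
# /export/servers/hadoop-1.0.4/bin/start-all.sh
# jps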

=============================

Part 2: Installing and Configuring HBase

1. Edit the configuration files hbase-env.sh and hbase-site.xml under the conf directory of hbase-0.94.18

Changes to hbase-env.sh:

export JAVA_HOME=/usr/Java/jdk1.6

export HBASE_CLASSPATH=/usr/hadoop/conf

export HBASE_MANAGES_ZK=true

# HBase log directory
export HBASE_LOG_DIR=/root/hadoop/hbase-0.94.18/logs

Changes to hbase-site.xml (hbase.rootdir must point at the HDFS NameNode configured in core-site.xml; here localhost and xinyanfei both resolve to 127.0.0.1):

<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://localhost:9000/hbase</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
</configuration>

Once the above is done, HBase can be started normally. Startup order: start Hadoop first, then HBase; shutdown order: stop HBase first, then Hadoop.

Start HBase:

zcf@zcf-K42JZ:/usr/local/hbase$ bin/start-hbase.sh

Check the running processes with jps:

4798 SecondaryNameNode
16790 Jps
4275 NameNode
5154 TaskTracker
16269 HQuorumPeer
4908 JobTracker
16610 HRegionServer
5305
4549 DataNode
16348 HMaster

Enter the HBase shell: bin/hbase shell

HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 0.94.18, r1577788, Sat Mar 15 04:46:47 UTC 2014

hbase(main):001:0>

Stop HBase first, then stop Hadoop.
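
A sketch of that shutdown sequence, using the same paths as the startup examples above (stop-hbase.sh and stop-all.sh are the standard counterparts of the start scripts):
zcf@zcf-K42JZ:/usr/local/hbase$ bin/stop-hbase.sh
# /export/servers/hadoop-1.0.4/bin/stop-all.sh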

We can also view and manage the HBase database through the web UI.

HMaster: http://192.168.0.10:60010/master.jsp

Note: the default hbase.master port is 60000:

<property>
  <name>hbase.master</name>
  <value>192.168.0.10:60000</value>
</property>
If you change the master port in the configuration file, you must explicitly add the XML file to the Configuration when using the Java API, e.g. configuration.addResource(new FileInputStream(new File("hbase-site.xml"))); otherwise you will get an error like: org.apache.hadoop.hbase.MasterNotRunningException: com.google.protobuf.ServiceException: java.io.IOException: Call to master1/172.22.2.170:60000
