1) First, I set up four Linux virtual machines (root pwd: z****l*3).

Disabled the firewall.

Enabled the sshd service.

Enabled the ftp service.

Configured JDK 1.8.

Set up mutual SSH trust between the nodes (I had forgotten how I configured it earlier -- checked it and it still works).

Note: the hostnames must be configured in /etc/hosts (on every machine).

Note: when trust is being established, the very first connection asks you to type "yes". Even if you have already connected once by IP, the first connection after switching to the hostname asks for "yes" again. So finish by testing scp with the hostnames.
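One way to get past that first-connection prompt for every name and IP in one go is to pre-load known_hosts on each node; a minimal sketch, assuming the four hosts are named master, slave1, slave2, slave3 in /etc/hosts:

# Hypothetical host list -- adjust to your own names/IPs.
for h in master slave1 slave2 slave3; do
    # Record each host's key so later ssh/scp runs never ask for "yes".
    ssh-keyscan -H "$h" >> ~/.ssh/known_hosts 2>/dev/null
done

# Check: this scp should now run without a host-key prompt.
scp /etc/hosts root@slave1:/tmp/hosts.copy-test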

The Linux distribution is

Fedora release 22 (Twenty Two), server edition (no desktop GUI)

2)

Modify the configuration files, following this post:

https://www.cnblogs.com/lzxlfly/p/7221890.html

Start it up.

It kept prompting: The authenticity of host 'localhost (::1%1)' can't be established.

The fix is to run: ssh -o StrictHostKeyChecking=no root@localhost
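If you would rather not pass the flag every time, the same thing can be set per host in ~/.ssh/config; a sketch (it disables the host-key safety check for the listed machines, so only use it inside a trusted lab network):

# ~/.ssh/config  (chmod 600 this file)
Host localhost master slave1 slave2 slave3
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null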

Then open

http://192.168.1.19:50070 to check whether the startup succeeded.

Start HBase.

It prints the warning: ignoring option PermSize=128m; support was removed in 8.0

In hbase/conf/hbase-env.sh,

since the JDK in use is jdk1.8.0_65,

comment out the following lines:

# Configure PermSize. Only needed in JDK7. You can safely remove it for JDK8+
export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS -XX:PermSize=128m -XX:MaxPermSize=128m"
export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -XX:PermSize=128m -XX:MaxPermSize=128m"

After starting, the HMaster process was missing on the master node.

Checking the .out log showed the cause: the logging jars shipped with HBase conflict with the ones in Hadoop.

Fix: remove the conflicting logging jar from HBase's lib directory.
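A sketch of how the duplicate logging binding can be located before removing it; the jar names below are examples only, so go by what ls actually prints on your install:

# SLF4J/log4j bindings shipped by each stack -- having both on the classpath
# is what produces the conflict reported in the .out file.
ls /opt/hbase/hadoop-2.7.1/share/hadoop/common/lib/ | grep -i slf4j
ls /opt/hbase/hbase-1.2.1/lib/ | grep -i slf4j

# Keep Hadoop's copy and move HBase's duplicate out of the lib directory
# (example jar name -- substitute whatever the ls above shows).
mv /opt/hbase/hbase-1.2.1/lib/slf4j-log4j12-1.7.5.jar /opt/hbase/hbase-1.2.1/lib/slf4j-log4j12-1.7.5.jar.bak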

Then restart and check hbase.out on the master node again.

Hadoop itself starts fine: the processes are normal and the web UI is reachable.

Then start HBase.

It kept failing with: java.io.IOException: No FileSystem for scheme: hdfs

The eventual advice was to copy

Hadoop's core-site.xml and hdfs-site.xml into HBase's conf directory.
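For reference, on a single node the copy can be done like this (using the directory layout from these notes):

# Give HBase the client-side HDFS configuration
cp /opt/hbase/hadoop-2.7.1/etc/hadoop/core-site.xml /opt/hbase/hbase-1.2.1/conf/
cp /opt/hbase/hadoop-2.7.1/etc/hadoop/hdfs-site.xml /opt/hbase/hbase-1.2.1/conf/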

After copying, it still failed; the missing piece was this property:

<property>
<name>fs.hdfs.impl</name>
<value>org.apache.hadoop.hdfs.DistributedFileSystem</value>
<description>The FileSystem for hdfs: uris.</description>
</property>

After adding it, I deleted everything, re-extracted, and restarted -- it still failed,

this time with: class org.apache.hadoop.hdfs.DistributedFileSystem not found.

Searching again, I found

https://stackoverflow.com/questions/26063359/org-apache-hadoop-fs-filesystem-provider-org-apache-hadoop-hdfs-distributedfile

One answer there says the cause is that the hadoop-2.7.1 and hbase-1.2.1 I am using

ship different versions of hadoop-common-2.x.x.jar. Although the official compatibility matrix marks these two releases as compatible, you have to resolve the jar conflict yourself.

So I decided to look for a pair of versions that does not need manual jar reconciliation.

I started with fairly recent releases:

1) I first picked hbase-2.0.1-bin.tar.gz,

because the hadoop-common-2.7.4 it bundles is close to my existing hadoop-2.7.1.

2) Then I picked hadoop-2.7.4.tar.gz, since hadoop-common-2.7.4 is built from that release.

To save myself from editing the configuration files, I kept the old directory names /opt/hbase/hadoop-2.7.1 and /opt/hbase/hbase-1.2.1,

even though what is actually installed there is now hadoop-2.7.4 (hadoop-common-2.7.4) + hbase-2.0.1.

Then I repeated the steps above, and the startup succeeded.
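A quick way to see whether the two releases agree on hadoop-common before starting anything; a sketch using the directory names from these notes:

# Version of hadoop-common that Hadoop itself ships
ls /opt/hbase/hadoop-2.7.1/share/hadoop/common/hadoop-common-*.jar

# Version of hadoop-common bundled inside HBase's lib directory
ls /opt/hbase/hbase-1.2.1/lib/hadoop-common-*.jar

# If the two version numbers differ, either replace the jars under hbase/lib with
# Hadoop's copies (as HBase.cop.sh below does) or pick releases that already match.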

To summarize:

1) Prepare one VM first: set up the ftp service and the ssh service, disable the firewall, disable SELinux, install the JDK, and set HADOOP_HOME and HBASE_HOME. Upload the tarballs to the target directory via FTP. (A sketch of these steps follows.)
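A minimal sketch of that base-VM preparation on a systemd distribution such as Fedora 22; the ftp daemon name (vsftpd) and the JDK path are assumptions, so adjust them to whatever you actually installed:

# Firewall and SELinux off (lab setup only)
systemctl stop firewalld && systemctl disable firewalld
setenforce 0                                                   # takes effect now
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config   # survives reboot

# sshd plus an ftp server for uploading the tarballs (vsftpd used as an example)
systemctl enable sshd vsftpd && systemctl start sshd vsftpd

# Environment variables for every login shell
cat >> /etc/profile <<'EOF'
export JAVA_HOME=/usr/lib/jdk/jdk1.8.0_91
export HADOOP_HOME=/opt/hbase/hadoop-2.7.1
export HBASE_HOME=/opt/hbase/hbase-1.2.1
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$HBASE_HOME/bin
EOF
source /etc/profile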

2) Clone that VM a few times; only the NIC's MAC address needs to change. Configure each clone's IP and test with ping that the physical machine and the VMs can reach each other.

A note on VMware network configuration.

There are two approaches.

Option 1: the convenient one, bridged mode.

This works when your own router is the one connected to the broadband line, i.e. your router is the one doing NAT for its own subnet.

In that case:

The VMs connect to a virtual switch, and the virtual switch connects to the real router.

The physical machine connects to the same router through its real NIC.

Traffic between the VMs and the physical machine is forwarded by the real router.

The gateway address is whatever the real router is configured with.

No extra configuration is needed in this mode.

Just treat each VM as if it were another physical machine.

Option 2: NAT mode (a virtual subnet).

Option 1 looks very convenient, but suppose you move your laptop to another office and still want to exchange traffic between the physical network and the VMs.

That is awkward: the router there is usually not under your control, which means you cannot change its gateway settings at will.

To get onto that router you would have to change your physical IP and every VM IP so that they all sit in the same subnet as its gateway.

That means changing a pile of configuration. Annoying. Is there a better way?

Yes: use NAT mode.

In NAT mode:

The physical machine and the VMs exchange traffic through the virtual router.

The virtual subnet's (virtual router's) gateway (e.g. 192.168.8.1) is configured in the Virtual Network Editor; dynamic IP assignment is not needed, so just disable it.

VM NIC <--> virtual subnet (virtual router) <--> physical router

Physical NIC <--> physical router

VM NIC <--> virtual subnet (virtual router) <--> VMnet8

The host-side virtual adapter is VMnet8 by default (this can be changed in the Virtual Network Editor).

Requirements:

The VM NIC (e.g. IP 192.168.8.100, gateway 192.168.8.1, netmask 255.255.255.0), VMnet8 (e.g. IP 192.168.8.101, gateway 192.168.8.1, netmask 255.255.255.0), and the virtual gateway (192.168.8.1)

must all be in the same subnet, here 192.168.8.x.

The physical NIC (e.g. IP 192.168.0.100, gateway 192.168.0.1, netmask 255.255.255.0) does not need to be in the same subnet as the VM NICs.

Here 192.168.0.1 is the physical router's gateway.

It may even be required that they are NOT in the same subnet (I have not tested this).
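For reference, a static-IP configuration for a cloned VM in that 192.168.8.x NAT subnet could look like the following; the interface name ens33 and the file name are placeholders that differ per clone:

# /etc/sysconfig/network-scripts/ifcfg-ens33   (interface name is an example)
TYPE=Ethernet
BOOTPROTO=static
NAME=ens33
DEVICE=ens33
ONBOOT=yes
IPADDR=192.168.8.100          # unique per VM: .100, .101, ...
NETMASK=255.255.255.0
GATEWAY=192.168.8.1           # the virtual NAT gateway set in the Virtual Network Editor
DNS1=192.168.8.1

After editing, restart networking (or simply reboot the VM) and verify with ping 192.168.8.1 plus a ping to the host's VMnet8 address.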

3) Set up passwordless SSH login and test copying files between all the machines.

The first three steps have little to do with Hadoop itself; they can be tested independently, and there is plenty of material about them online.

4) SSH into each machine and configure Hadoop and HBase.

All the configuration lives in /opt/hbase/hadoop-2.7.1/etc/hadoop and /opt/hbase/hbase-1.2.1/conf.

Under /opt/hbase/hadoop-2.7.1/etc/hadoop, modify or add:

core-site.xml

hdfs-site.xml

yarn-env.sh

hadoop-env.sh

mapred-site.xml

slaves

yarn-site.xml

Startup steps:

Format HDFS:    bin/hdfs namenode -format

Start / stop:    sbin/start-all.sh / sbin/stop-all.sh

Verify:

On the master, run jps; you should see the ResourceManager, NameNode, and SecondaryNameNode processes.

On the slaves, run jps; you should see the DataNode and NodeManager processes.

If these five processes are present, Hadoop started successfully.

Master status: http://master:50070

Cluster status: http://192.168.172.72:8088
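A small convenience loop for the check above, run from the master; it assumes the passwordless SSH from step 3 and that JAVA_HOME points to the same path on every node:

# Show the Java daemons on every node in one go.
# $JAVA_HOME expands on the master, which is fine because all nodes use the same JDK path.
for h in master slave1 slave2 slave3; do
    echo "== $h =="
    ssh "$h" "$JAVA_HOME/bin/jps"
done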

Under /opt/hbase/hbase-1.2.1/conf, modify or add:

core-site.xml (same as above)

hbase-site.xml (same as above)

regionservers

hbase-env.sh

hdfs-site.xml

Start / stop:    start-hbase.sh / stop-hbase.sh

Verify:

jps

On the master you should see HMaster and HQuorumPeer;

on the slaves, HRegionServer and HQuorumPeer. If so, HBase started successfully.

HBase web UI: http://master:16010

Helper scripts

hadoop.cop.sh

#ssh root@192.168.1.21 rm -rf /opt/hbase/hadoop-2.7.1/*
#ssh root@192.168.1.22 rm -rf /opt/hbase/hadoop-2.7.1/*
#ssh root@192.168.1.20 rm -rf /opt/hbase/hadoop-2.7.1/*

#tar -zvxf /opt/hbase/hadoop-2.7.1.tar.gz -C /opt/hbase/

scp core-site.xml root@192.168.1.20:/opt/hbase/hadoop-2.7.1/etc/hadoop
scp hdfs-site.xml root@192.168.1.20:/opt/hbase/hadoop-2.7.1/etc/hadoop
scp mapred-site.xml root@192.168.1.20:/opt/hbase/hadoop-2.7.1/etc/hadoop
scp slaves root@192.168.1.20:/opt/hbase/hadoop-2.7.1/etc/hadoop
scp yarn-site.xml root@192.168.1.20:/opt/hbase/hadoop-2.7.1/etc/hadoop
scp hadoop-env.sh root@192.168.1.20:/opt/hbase/hadoop-2.7.1/etc/hadoop
scp yarn-env.sh root@192.168.1.20:/opt/hbase/hadoop-2.7.1/etc/hadoop

scp core-site.xml root@192.168.1.21:/opt/hbase/hadoop-2.7.1/etc/hadoop
scp hdfs-site.xml root@192.168.1.21:/opt/hbase/hadoop-2.7.1/etc/hadoop
scp mapred-site.xml root@192.168.1.21:/opt/hbase/hadoop-2.7.1/etc/hadoop
scp slaves root@192.168.1.21:/opt/hbase/hadoop-2.7.1/etc/hadoop
scp yarn-site.xml root@192.168.1.21:/opt/hbase/hadoop-2.7.1/etc/hadoop
scp hadoop-env.sh root@192.168.1.21:/opt/hbase/hadoop-2.7.1/etc/hadoop
scp yarn-env.sh root@192.168.1.21:/opt/hbase/hadoop-2.7.1/etc/hadoop

scp core-site.xml root@192.168.1.22:/opt/hbase/hadoop-2.7.1/etc/hadoop
scp hdfs-site.xml root@192.168.1.22:/opt/hbase/hadoop-2.7.1/etc/hadoop
scp mapred-site.xml root@192.168.1.22:/opt/hbase/hadoop-2.7.1/etc/hadoop
scp slaves root@192.168.1.22:/opt/hbase/hadoop-2.7.1/etc/hadoop
scp yarn-site.xml root@192.168.1.22:/opt/hbase/hadoop-2.7.1/etc/hadoop
scp hadoop-env.sh root@192.168.1.22:/opt/hbase/hadoop-2.7.1/etc/hadoop
scp yarn-env.sh root@192.168.1.22:/opt/hbase/hadoop-2.7.1/etc/hadoop

HBase.cop.sh

# rm -rf /opt/hbase/hbase-1.2.1/*
#tar -zvxf /opt/hbase/hbase-1.2.1-bin.tar.gz -C /opt/hbase/
#tar -zvxf /opt/hbase/hbase-2.0.1-bin.tar.gz -C /opt/hbase/
#tar -zvxf /opt/hbase/hadoop-2.7.4.tar.gz -C /opt/hbase/

scp hbase-env.sh root@192.168.1.20:/opt/hbase/hbase-1.2.1/conf
scp hbase-site.xml root@192.168.1.20:/opt/hbase/hbase-1.2.1/conf
scp regionservers root@192.168.1.20:/opt/hbase/hbase-1.2.1/conf
scp core-site.xml root@192.168.1.20:/opt/hbase/hbase-1.2.1/conf
scp hdfs-site.xml root@192.168.1.20:/opt/hbase/hbase-1.2.1/conf

scp hbase-env.sh root@192.168.1.21:/opt/hbase/hbase-1.2.1/conf
scp hbase-site.xml root@192.168.1.21:/opt/hbase/hbase-1.2.1/conf
scp regionservers root@192.168.1.21:/opt/hbase/hbase-1.2.1/conf
scp core-site.xml root@192.168.1.21:/opt/hbase/hbase-1.2.1/conf
scp hdfs-site.xml root@192.168.1.21:/opt/hbase/hbase-1.2.1/conf

scp hbase-env.sh root@192.168.1.22:/opt/hbase/hbase-1.2.1/conf
scp hbase-site.xml root@192.168.1.22:/opt/hbase/hbase-1.2.1/conf
scp regionservers root@192.168.1.22:/opt/hbase/hbase-1.2.1/conf
scp core-site.xml root@192.168.1.22:/opt/hbase/hbase-1.2.1/conf
scp hdfs-site.xml root@192.168.1.22:/opt/hbase/hbase-1.2.1/conf

#rm /opt/hbase/hbase-1.2.1/lib/hadoop-common-2.5.1.jar
#cp -r /opt/hbase/hadoop-2.7.1/share/hadoop/common/hadoop-common-2.7.1.jar /opt/hbase/hbase-1.2.1/lib

Full configuration files

core-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>fs.defaultFS</name> <!-- the NameNode URI -->
    <value>hdfs://master:9000</value>
  </property>
  <!-- <property>
    <name>fs.hdfs.impl</name>
    <value>org.apache.hadoop.hdfs.DistributedFileSystem</value>
    <description>The FileSystem for hdfs: uris.</description>
  </property> -->
  <property>
    <name>hadoop.tmp.dir</name> <!-- directory for Hadoop temporary files -->
    <value>/opt/hbase/hadoop-2.7.1/temp</value>
  </property>
</configuration>

hadoop-env.sh

 # Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Set Hadoop-specific environment variables here.

# The only required environment variable is JAVA_HOME.  All others are
# optional.  When running a distributed configuration it is best to
# set JAVA_HOME in this file, so that it is correctly defined on
# remote nodes.

# The java implementation to use.
export JAVA_HOME=/usr/lib/jdk/jdk1.8.0_91

# The jsvc implementation to use. Jsvc is required to run secure datanodes
# that bind to privileged ports to provide authentication of data transfer
# protocol.  Jsvc is not required if SASL is configured for authentication of
# data transfer protocol using non-privileged ports.
#export JSVC_HOME=${JSVC_HOME}

export HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-"/etc/hadoop"}

# Extra Java CLASSPATH elements.  Automatically insert capacity-scheduler.
for f in $HADOOP_HOME/contrib/capacity-scheduler/*.jar; do
  if [ "$HADOOP_CLASSPATH" ]; then
    export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:$f
  else
    export HADOOP_CLASSPATH=$f
  fi
done

# The maximum amount of heap to use, in MB. Default is 1000.
#export HADOOP_HEAPSIZE=
#export HADOOP_NAMENODE_INIT_HEAPSIZE=""

# Extra Java runtime options.  Empty by default.
export HADOOP_OPTS="$HADOOP_OPTS -Djava.net.preferIPv4Stack=true"

# Command specific options appended to HADOOP_OPTS when specified
export HADOOP_NAMENODE_OPTS="-Dhadoop.security.logger=${HADOOP_SECURITY_LOGGER:-INFO,RFAS} -Dhdfs.audit.logger=${HDFS_AUDIT_LOGGER:-INFO,NullAppender} $HADOOP_NAMENODE_OPTS"
export HADOOP_DATANODE_OPTS="-Dhadoop.security.logger=ERROR,RFAS $HADOOP_DATANODE_OPTS"

export HADOOP_SECONDARYNAMENODE_OPTS="-Dhadoop.security.logger=${HADOOP_SECURITY_LOGGER:-INFO,RFAS} -Dhdfs.audit.logger=${HDFS_AUDIT_LOGGER:-INFO,NullAppender} $HADOOP_SECONDARYNAMENODE_OPTS"

export HADOOP_NFS3_OPTS="$HADOOP_NFS3_OPTS"
export HADOOP_PORTMAP_OPTS="-Xmx512m $HADOOP_PORTMAP_OPTS"

# The following applies to multiple commands (fs, dfs, fsck, distcp etc)
export HADOOP_CLIENT_OPTS="-Xmx512m $HADOOP_CLIENT_OPTS"
#HADOOP_JAVA_PLATFORM_OPTS="-XX:-UsePerfData $HADOOP_JAVA_PLATFORM_OPTS"

# On secure datanodes, user to run the datanode as after dropping privileges.
# This **MUST** be uncommented to enable secure HDFS if using privileged ports
# to provide authentication of data transfer protocol.  This **MUST NOT** be
# defined if SASL is configured for authentication of data transfer protocol
# using non-privileged ports.
export HADOOP_SECURE_DN_USER=${HADOOP_SECURE_DN_USER}

# Where log files are stored.  $HADOOP_HOME/logs by default.
#export HADOOP_LOG_DIR=${HADOOP_LOG_DIR}/$USER

# Where log files are stored in the secure data environment.
export HADOOP_SECURE_DN_LOG_DIR=${HADOOP_LOG_DIR}/${HADOOP_HDFS_USER}

###
# HDFS Mover specific parameters
###
# Specify the JVM options to be used when starting the HDFS Mover.
# These options will be appended to the options specified as HADOOP_OPTS
# and therefore may override any similar flags set in HADOOP_OPTS
#
# export HADOOP_MOVER_OPTS=""

###
# Advanced Users Only!
###

# The directory where pid files are stored. /tmp by default.
# NOTE: this should be set to a directory that can only be written to by
#       the user that will run the hadoop daemons.  Otherwise there is the
#       potential for a symlink attack.
export HADOOP_PID_DIR=${HADOOP_PID_DIR}
export HADOOP_SECURE_DN_PID_DIR=${HADOOP_PID_DIR}

# A string representing this instance of hadoop. $USER by default.
export HADOOP_IDENT_STRING=$USER
export JAVA_HOME=/usr/lib/jdk/jdk1.8.0_91

hdfs-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property> <!-- local filesystem path where the NameNode persists the namespace and transaction logs -->
    <name>dfs.namenode.name.dir</name>
    <value>/opt/hbase/hadoop-2.7.1/dfs/name</value> <!-- the directory does not need to be created in advance; it is created automatically -->
  </property>
  <property> <!-- local filesystem path where the DataNode stores block data -->
    <name>dfs.datanode.data.dir</name>
    <value>/opt/hbase/hadoop-2.7.1/dfs/data</value>
  </property>
  <property> <!-- replication factor; must not exceed the number of machines in the cluster; default is 3 -->
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>master:9001</value>
  </property>
  <property> <!-- set to true so HDFS can be browsed in a browser at IP:port -->
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
  </property>
</configuration>

mapred-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property> <!-- MapReduce runs on the YARN framework, so set the framework name to yarn -->
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property> <!-- job history server, for viewing MapReduce job records -->
    <name>mapreduce.jobhistory.address</name>
    <value>master:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>master:19888</value>
  </property>
</configuration>

slaves

master
slave1
slave2
slave3

yarn-env.sh

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# User for YARN daemons
export HADOOP_YARN_USER=${HADOOP_YARN_USER:-yarn}
export JAVA_HOME=/usr/lib/jdk/jdk1.8.0_91

# resolve links - $0 may be a softlink
export YARN_CONF_DIR="${YARN_CONF_DIR:-$HADOOP_YARN_HOME/conf}"

# some Java parameters
# export JAVA_HOME=/home/y/libexec/jdk1.6.0/
if [ "$JAVA_HOME" != "" ]; then
  #echo "run java in $JAVA_HOME"
  JAVA_HOME=$JAVA_HOME
fi

if [ "$JAVA_HOME" = "" ]; then
  echo "Error: JAVA_HOME is not set."
  exit 1
fi

JAVA=$JAVA_HOME/bin/java
JAVA_HEAP_MAX=-Xmx1000m

# For setting YARN specific HEAP sizes please use this
# Parameter and set appropriately
# YARN_HEAPSIZE=1000

# check envvars which might override default args
if [ "$YARN_HEAPSIZE" != "" ]; then
  JAVA_HEAP_MAX="-Xmx""$YARN_HEAPSIZE""m"
fi

# Resource Manager specific parameters

# Specify the max Heapsize for the ResourceManager using a numerical value
# in the scale of MB. For example, to specify an jvm option of -Xmx1000m, set
# the value to 1000.
# This value will be overridden by an Xmx setting specified in either YARN_OPTS
# and/or YARN_RESOURCEMANAGER_OPTS.
# If not specified, the default value will be picked from either YARN_HEAPMAX
# or JAVA_HEAP_MAX with YARN_HEAPMAX as the preferred option of the two.
#export YARN_RESOURCEMANAGER_HEAPSIZE=1000

# Specify the max Heapsize for the timeline server using a numerical value
# in the scale of MB. For example, to specify an jvm option of -Xmx1000m, set
# the value to 1000.
# This value will be overridden by an Xmx setting specified in either YARN_OPTS
# and/or YARN_TIMELINESERVER_OPTS.
# If not specified, the default value will be picked from either YARN_HEAPMAX
# or JAVA_HEAP_MAX with YARN_HEAPMAX as the preferred option of the two.
#export YARN_TIMELINESERVER_HEAPSIZE=1000

# Specify the JVM options to be used when starting the ResourceManager.
# These options will be appended to the options specified as YARN_OPTS
# and therefore may override any similar flags set in YARN_OPTS
#export YARN_RESOURCEMANAGER_OPTS=

# Node Manager specific parameters

# Specify the max Heapsize for the NodeManager using a numerical value
# in the scale of MB. For example, to specify an jvm option of -Xmx1000m, set
# the value to 1000.
# This value will be overridden by an Xmx setting specified in either YARN_OPTS
# and/or YARN_NODEMANAGER_OPTS.
# If not specified, the default value will be picked from either YARN_HEAPMAX
# or JAVA_HEAP_MAX with YARN_HEAPMAX as the preferred option of the two.
#export YARN_NODEMANAGER_HEAPSIZE=1000

# Specify the JVM options to be used when starting the NodeManager.
# These options will be appended to the options specified as YARN_OPTS
# and therefore may override any similar flags set in YARN_OPTS
#export YARN_NODEMANAGER_OPTS=

# so that filenames w/ spaces are handled correctly in loops below
IFS=

# default log directory & file
if [ "$YARN_LOG_DIR" = "" ]; then
  YARN_LOG_DIR="$HADOOP_YARN_HOME/logs"
fi
if [ "$YARN_LOGFILE" = "" ]; then
  YARN_LOGFILE='yarn.log'
fi

# default policy file for service-level authorization
if [ "$YARN_POLICYFILE" = "" ]; then
  YARN_POLICYFILE="hadoop-policy.xml"
fi

# restore ordinary behaviour
unset IFS

YARN_OPTS="$YARN_OPTS -Dhadoop.log.dir=$YARN_LOG_DIR"
YARN_OPTS="$YARN_OPTS -Dyarn.log.dir=$YARN_LOG_DIR"
YARN_OPTS="$YARN_OPTS -Dhadoop.log.file=$YARN_LOGFILE"
YARN_OPTS="$YARN_OPTS -Dyarn.log.file=$YARN_LOGFILE"
YARN_OPTS="$YARN_OPTS -Dyarn.home.dir=$YARN_COMMON_HOME"
YARN_OPTS="$YARN_OPTS -Dyarn.id.str=$YARN_IDENT_STRING"
YARN_OPTS="$YARN_OPTS -Dhadoop.root.logger=${YARN_ROOT_LOGGER:-INFO,console}"
YARN_OPTS="$YARN_OPTS -Dyarn.root.logger=${YARN_ROOT_LOGGER:-INFO,console}"
if [ "x$JAVA_LIBRARY_PATH" != "x" ]; then
  YARN_OPTS="$YARN_OPTS -Djava.library.path=$JAVA_LIBRARY_PATH"
fi
YARN_OPTS="$YARN_OPTS -Dyarn.policy.file=$YARN_POLICYFILE"
export JAVA_HOME=/usr/lib/jdk/jdk1.8.0_91

yarn-site.xml

<?xml version="1.0"?>
<configuration>
  <property> <!-- auxiliary service run on the NodeManager, needed for MapReduce -->
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property> <!-- address the ResourceManager exposes to clients -->
    <name>yarn.resourcemanager.address</name>
    <value>master:8032</value>
  </property>
  <property> <!-- address the ResourceManager exposes to ApplicationMasters -->
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>master:8030</value>
  </property>
  <property> <!-- address the ResourceManager exposes to NodeManagers -->
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>master:8031</value>
  </property>
  <property> <!-- address the ResourceManager exposes to administrators -->
    <name>yarn.resourcemanager.admin.address</name>
    <value>master:8033</value>
  </property>
  <property> <!-- web address the ResourceManager exposes externally; can be viewed in a browser -->
    <name>yarn.resourcemanager.webapp.address</name>
    <value>master:8088</value>
  </property>
</configuration>

hbase-env.sh

#
#/**
# * Licensed to the Apache Software Foundation (ASF) under one
# * or more contributor license agreements.  See the NOTICE file
# * distributed with this work for additional information
# * regarding copyright ownership.  The ASF licenses this file
# * to you under the Apache License, Version 2.0 (the
# * "License"); you may not use this file except in compliance
# * with the License.  You may obtain a copy of the License at
# *
# *     http://www.apache.org/licenses/LICENSE-2.0
# *
# * Unless required by applicable law or agreed to in writing, software
# * distributed under the License is distributed on an "AS IS" BASIS,
# * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# * See the License for the specific language governing permissions and
# * limitations under the License.
# */

# Set environment variables here.

# This script sets variables multiple times over the course of starting an hbase process,
# so try to keep things idempotent unless you want to take an even deeper look
# into the startup scripts (bin/hbase, etc.)

# The java implementation to use.  Java 1.7+ required.
# export JAVA_HOME=/usr/java/jdk1.6.0/

# Extra Java CLASSPATH elements.  Optional.
# export HBASE_CLASSPATH=

# The maximum amount of heap to use. Default is left to JVM default.
# export HBASE_HEAPSIZE=1G

# Uncomment below if you intend to use off heap cache. For example, to allocate 8G of
# offheap, set the value to "8G".
# export HBASE_OFFHEAPSIZE=1G

# Extra Java runtime options.
# Below are what we set by default.  May only work with SUN JVM.
# For more on why as well as other possible settings,
# see http://wiki.apache.org/hadoop/PerformanceTuning
export HBASE_OPTS="-XX:+UseConcMarkSweepGC"

# Configure PermSize. Only needed in JDK7. You can safely remove it for JDK8+
#export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS -XX:PermSize=128m -XX:MaxPermSize=128m"
#export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -XX:PermSize=128m -XX:MaxPermSize=128m"

# Uncomment one of the below three options to enable java garbage collection logging for the server-side processes.

# This enables basic gc logging to the .out file.
# export SERVER_GC_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps"

# This enables basic gc logging to its own file.
# If FILE-PATH is not replaced, the log file(.gc) would still be generated in the HBASE_LOG_DIR .
# export SERVER_GC_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:<FILE-PATH>"

# This enables basic GC logging to its own file with automatic log rolling. Only applies to jdk 1.6.0_34+ and 1.7.0_2+.
# If FILE-PATH is not replaced, the log file(.gc) would still be generated in the HBASE_LOG_DIR .
# export SERVER_GC_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:<FILE-PATH> -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=1 -XX:GCLogFileSize=512M"

# Uncomment one of the below three options to enable java garbage collection logging for the client processes.

# This enables basic gc logging to the .out file.
# export CLIENT_GC_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps"

# This enables basic gc logging to its own file.
# If FILE-PATH is not replaced, the log file(.gc) would still be generated in the HBASE_LOG_DIR .
# export CLIENT_GC_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:<FILE-PATH>"

# This enables basic GC logging to its own file with automatic log rolling. Only applies to jdk 1.6.0_34+ and 1.7.0_2+.
# If FILE-PATH is not replaced, the log file(.gc) would still be generated in the HBASE_LOG_DIR .
# export CLIENT_GC_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:<FILE-PATH> -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=1 -XX:GCLogFileSize=512M"

# See the package documentation for org.apache.hadoop.hbase.io.hfile for other configurations
# needed setting up off-heap block caching.

# Uncomment and adjust to enable JMX exporting
# See jmxremote.password and jmxremote.access in $JRE_HOME/lib/management to configure remote password access.
# More details at: http://java.sun.com/javase/6/docs/technotes/guides/management/agent.html
# NOTE: HBase provides an alternative JMX implementation to fix the random ports issue, please see JMX
# section in HBase Reference Guide for instructions.

# export HBASE_JMX_BASE="-Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false"
# export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS $HBASE_JMX_BASE -Dcom.sun.management.jmxremote.port=10101"
# export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS $HBASE_JMX_BASE -Dcom.sun.management.jmxremote.port=10102"
# export HBASE_THRIFT_OPTS="$HBASE_THRIFT_OPTS $HBASE_JMX_BASE -Dcom.sun.management.jmxremote.port=10103"
# export HBASE_ZOOKEEPER_OPTS="$HBASE_ZOOKEEPER_OPTS $HBASE_JMX_BASE -Dcom.sun.management.jmxremote.port=10104"
# export HBASE_REST_OPTS="$HBASE_REST_OPTS $HBASE_JMX_BASE -Dcom.sun.management.jmxremote.port=10105"

# File naming hosts on which HRegionServers will run.  $HBASE_HOME/conf/regionservers by default.
# export HBASE_REGIONSERVERS=${HBASE_HOME}/conf/regionservers

# Uncomment and adjust to keep all the Region Server pages mapped to be memory resident
#HBASE_REGIONSERVER_MLOCK=true
#HBASE_REGIONSERVER_UID="hbase"

# File naming hosts on which backup HMaster will run.  $HBASE_HOME/conf/backup-masters by default.
# export HBASE_BACKUP_MASTERS=${HBASE_HOME}/conf/backup-masters

# Extra ssh options.  Empty by default.
# export HBASE_SSH_OPTS="-o ConnectTimeout=1 -o SendEnv=HBASE_CONF_DIR"

# Where log files are stored.  $HBASE_HOME/logs by default.
# export HBASE_LOG_DIR=${HBASE_HOME}/logs

# Enable remote JDWP debugging of major HBase processes. Meant for Core Developers
# export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8070"
# export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8071"
# export HBASE_THRIFT_OPTS="$HBASE_THRIFT_OPTS -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8072"
# export HBASE_ZOOKEEPER_OPTS="$HBASE_ZOOKEEPER_OPTS -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8073"

# A string representing this instance of hbase. $USER by default.
# export HBASE_IDENT_STRING=$USER

# The scheduling priority for daemon processes.  See 'man nice'.
# export HBASE_NICENESS=10

# The directory where pid files are stored. /tmp by default.
# export HBASE_PID_DIR=/var/hadoop/pids

# Seconds to sleep between slave commands.  Unset by default.  This
# can be useful in large clusters, where, e.g., slave rsyncs can
# otherwise arrive faster than the master can service them.
# export HBASE_SLAVE_SLEEP=0.1

# Tell HBase whether it should manage it's own instance of Zookeeper or not.
export HBASE_MANAGES_ZK=true

# The default log rolling policy is RFA, where the log file is rolled as per the size defined for the
# RFA appender. Please refer to the log4j.properties file to see more details on this appender.
# In case one needs to do log rolling on a date change, one should set the environment property
# HBASE_ROOT_LOGGER to "<DESIRED_LOG LEVEL>,DRFA".
# For example:
# HBASE_ROOT_LOGGER=INFO,DRFA
# The reason for changing default to RFA is to avoid the boundary case of filling out disk space as
# DRFA doesn't put any cap on the log size. Please refer to HBase-5655 for more context.
export JAVA_HOME=/usr/lib/jdk/jdk1.8.0_91

hbase-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
/**** Licensed to the Apache Software Foundation (ASF) under one* or more contributor license agreements.  See the NOTICE file* distributed with this work for additional information* regarding copyright ownership.  The ASF licenses this file* to you under the Apache License, Version 2.0 (the* "License"); you may not use this file except in compliance* with the License.  You may obtain a copy of the License at**     http://www.apache.org/licenses/LICENSE-2.0** Unless required by applicable law or agreed to in writing, software* distributed under the License is distributed on an "AS IS" BASIS,* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.* See the License for the specific language governing permissions and* limitations under the License.*/
-->
<configuration>
  <property>
    <name>hbase.rootdir</name> <!-- directory where HBase stores its data -->
    <value>hdfs://master:9000/opt/hbase/hbase_db</value> <!-- the port must match Hadoop's fs.defaultFS port -->
  </property>
  <property>
    <name>hbase.cluster.distributed</name> <!-- whether this is a distributed deployment -->
    <value>true</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name> <!-- list of zookeeper nodes -->
    <value>master,slave1,slave2,slave3</value>
  </property>
  <property> <!-- where zookeeper stores its configuration, logs, etc. -->
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/opt/hbase/zookeeper</value>
  </property>
</configuration>

regionservers

master
slave1
slave2
slave3

Finally, thanks to

未知的风fly: https://www.cnblogs.com/lzxlfly/p/7221890.html

Hadoop 2.7.3 + HBase 1.2.6 fully distributed installation and deployment

Basic steps for installing and deploying Hadoop:

1. Install the JDK and configure the environment variables.

The JDK can be downloaded from the internet; the environment variables are as follows.

Edit the /etc/profile file with vim and add the following:

export JAVA_HOME=/opt/java_environment/jdk1.7.0_80   (use your own JDK install path)
       export CLASSPATH=.:$JAVA_HOME/jre/lib/rt.jar:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
       export PATH=$PATH:$JAVA_HOME/bin

Run source /etc/profile to make the configuration take effect.

Run java, javac, and java -version to check whether the JDK environment variables are configured correctly.

2. On Linux, at least 3 machines are needed: one as master and 2 (or more) as slaves.

    Here I use 3 machines as an example, running CentOS 6.5 x64.

master 192.168.172.71
slave1 192.168.172.72
slave2 192.168.172.73

3. Configure the hostname and hosts file on every machine.

(1) To change the hostname, edit /etc/sysconfig/network with vim.

     Change the master's HOSTNAME, here to HOSTNAME=master;

     on the slaves use HOSTNAME=slave1 and HOSTNAME=slave2. The change takes effect after a reboot.

     Alternatively, run hostname <name> directly; this takes effect immediately without a reboot,

     but the new name is lost after a system restart and the old name comes back.

   (2) To update the hosts file, edit /etc/hosts with vim and add the following:

         192.168.172.71    master
         192.168.172.72    slave1
         192.168.172.73    slave2

       The names in hosts do not have to match the hostnames, but keeping them identical is easier to remember.

4. Configure passwordless SSH login between all machines.

  (1) CentOS does not enable key-based (passwordless) SSH login by default. Edit /etc/ssh/sshd_config with vim

      and uncomment the following two lines to enable key authentication:

      #RSAAuthentication yes
      #PubkeyAuthentication yes

     If you are working as root, also uncomment #PermitRootLogin yes to allow root logins.

  (2) Run ssh-keygen -t rsa to generate a key pair, pressing Enter at every prompt.

      This produces the files authorized_keys, id_rsa.pub, and id_rsa under /root/.ssh.

     Note: to get passwordless login between all machines, this must be done on every machine.

  (3) Next, on the master server, merge the public keys into the authorized_keys file:

     cd into /root/.ssh and run the following commands.

          cat id_rsa.pub >> authorized_keys    (appends the master's public key to authorized_keys)

       ssh root@192.168.172.72 cat ~/.ssh/id_rsa.pub>> authorized_keys

       ssh root@192.168.172.73 cat ~/.ssh/id_rsa.pub>> authorized_keys

       (these append slave1's and slave2's public keys to authorized_keys)

When that is done, copy authorized_keys out to slave1 and slave2:

      scp authorized_keys 192.168.172.72:/root/.ssh/

      scp authorized_keys 192.168.172.73:/root/.ssh/

      It is best to run chmod 600 authorized_keys on every machine,

      so that the current user has read/write permission on authorized_keys.

      After copying, run service sshd restart on every machine to restart the ssh service.

      Then, on each machine, run ssh 192.168.172.xx to test that you can reach the other two machines without a password prompt.
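The per-machine steps above can also be scripted from the master once every node has generated its key pair; a sketch for the IPs used in this example (this only pushes the master's key to the slaves -- repeat from each node, or keep the merge-and-scp approach above, if every node must reach every other node):

# ssh-copy-id appends the local public key to the remote authorized_keys
# and fixes the file permissions for you.
for ip in 192.168.172.72 192.168.172.73; do
    ssh-copy-id -i ~/.ssh/id_rsa.pub root@$ip
done

# Quick check: each command should log in without asking for a password.
ssh root@192.168.172.72 hostname
ssh root@192.168.172.73 hostname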

5. Configure the Hadoop environment variables: HADOOP_HOME, hadoop-env.sh, yarn-env.sh.

  (1) To set HADOOP_HOME, edit /etc/profile with vim and add the following:

     export HADOOP_HOME=/opt/hbase/hadoop-2.7.3   (the Hadoop install path)

     export PATH=$PATH:$HADOOP_HOME/sbin

     export PATH=$PATH:$HADOOP_HOME/bin 

     (It is best to add the following two lines as well; otherwise starting Hadoop or HBase prints warnings about the native library not being loaded.)

       export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
     export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib/native"

  (2) Configure hadoop-env.sh and yarn-env.sh, in the Hadoop install directory:

     Edit etc/hadoop/hadoop-env.sh with vim

      and add export JAVA_HOME=/opt/java_environment/jdk1.7.0_80 (the JDK install path).

     Edit etc/hadoop/yarn-env.sh with vim

      and add export JAVA_HOME=/opt/java_environment/jdk1.7.0_80 (the JDK install path).

     Save and exit.

6. Configure the basic xml files: core-site.xml, hdfs-site.xml, mapred-site.xml, yarn-site.xml.

  (1) Configure core-site.xml: in the Hadoop install directory, edit etc/hadoop/core-site.xml with vim.

    <configuration>

      <property>
        <name>fs.defaultFS</name> <!-- the NameNode URI -->
        <value>hdfs://master:9000</value>
      </property>
      <property>
        <name>hadoop.tmp.dir</name> <!-- directory for Hadoop temporary files -->
        <value>/opt/hbase/hadoop-2.7.3/temp</value>
      </property>
    </configuration>

  (2) Configure hdfs-site.xml: in the Hadoop install directory, edit etc/hadoop/hdfs-site.xml with vim.

     <configuration>

      <property> <!-- local filesystem path where the NameNode persists the namespace and transaction logs -->
        <name>dfs.namenode.name.dir</name> 
        <value>/opt/hbase/hadoop-2.7.3/dfs/name</value>

          <!-- the directory does not need to be created in advance; it is created automatically -->
      </property> 
      <property> <!-- local filesystem path where the DataNode stores block data -->
        <name>dfs.datanode.data.dir</name>
        <value>/opt/hbase/hadoop-2.7.3/dfs/data</value> 
       </property> 
      <property> <!-- replication factor; must not exceed the number of machines in the cluster; default is 3 -->
        <name>dfs.replication</name>
        <value>2</value> 
      </property>

      <property> 
        <name>dfs.namenode.secondary.http-address</name> 
        <value>master:9001</value> 
      </property>  
      <property> <!-- set to true so HDFS can be browsed in a browser at IP:port -->
        <name>dfs.webhdfs.enabled</name>
        <value>true</value> 
      </property> 
    </configuration>

(3) Configure mapred-site.xml: in the Hadoop install directory, edit etc/hadoop/mapred-site.xml with vim.

   <configuration>

    <property> <!-- MapReduce runs on the YARN framework, so set the framework name to yarn -->
      <name>mapreduce.framework.name</name> 
      <value>yarn</value> 
    </property> 
    <property> <!-- job history server, for viewing MapReduce job records -->
      <name>mapreduce.jobhistory.address</name> 
      <value>master:10020</value> 
    </property> 
    <property> 
      <name>mapreduce.jobhistory.webapp.address</name> 
      <value>master:19888</value> 
    </property> 
  </configuration>

(4) Configure yarn-site.xml: in the Hadoop install directory, edit etc/hadoop/yarn-site.xml with vim.

  <configuration>

    <property> <!-- auxiliary service run on the NodeManager, needed for MapReduce -->
      <name>yarn.nodemanager.aux-services</name> 
      <value>mapreduce_shuffle</value> 
    </property> 
    <property> 
      <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name> 
      <value>org.apache.hadoop.mapred.ShuffleHandler</value> 
    </property> 
    <property> <!-- address the ResourceManager exposes to clients -->
      <name>yarn.resourcemanager.address</name> 
      <value>master:8032</value> 
    </property> 
    <property> <!-- address the ResourceManager exposes to ApplicationMasters -->
      <name>yarn.resourcemanager.scheduler.address</name> 
      <value>master:8030</value> 
    </property> 
    <property> <!-- address the ResourceManager exposes to NodeManagers -->
      <name>yarn.resourcemanager.resource-tracker.address</name>  
      <value>master:8031</value> 
    </property> 
    <property> <!-- address the ResourceManager exposes to administrators -->
      <name>yarn.resourcemanager.admin.address</name>   
      <value>master:8033</value> 
    </property> 
    <property> <!-- web address the ResourceManager exposes externally; can be viewed in a browser -->
      <name>yarn.resourcemanager.webapp.address</name> 
      <value>master:8088</value> 
    </property> 
  </configuration>

7. Configure the slaves file.

  In the Hadoop install directory, edit etc/hadoop/slaves with vim,

  remove the default localhost entry, add slave1 and slave2, then save and exit.

8. Use scp to copy the configured Hadoop directory to the corresponding location on each node:

  scp -r /opt/hadoop-2.7.3 192.168.172.72:/opt/hadoop-2.7.3 
  scp -r /opt/hadoop-2.7.3 192.168.172.73:/opt/hadoop-2.7.3

9. Starting and stopping Hadoop.

  (1) Start Hadoop on the master; the slave daemons are started automatically. In the Hadoop directory,

      run bin/hdfs namenode -format to format HDFS,

      then run sbin/start-all.sh to start everything.

      You can also start the components separately with sbin/start-dfs.sh and sbin/start-yarn.sh.

      On the master, run jps; you should see the ResourceManager,

      NameNode, and SecondaryNameNode processes.

         On the slaves, run jps; you should see the DataNode and NodeManager processes.

      If these five processes are present, Hadoop started successfully.

  (2) Next, configure the local hosts file on your Windows workstation: edit C:\Windows\System32\drivers\etc\hosts and add:

      192.168.172.71   master

      192.168.172.72   slave1

      192.168.172.73   slave2

     Open http://master:50070 in a browser to check the master's status,

     and http://192.168.172.72:8088 to check the cluster status.

  (3) To stop Hadoop, go to the Hadoop directory and run sbin/stop-all.sh;

      this stops the Hadoop processes on both the master and the slaves.

Basic steps for installing and deploying HBase:

  1. On top of the Hadoop configuration, set the HBASE_HOME environment variable and edit hbase-env.sh.

    Edit /etc/profile with vim and add:

      export  HBASE_HOME=/opt/hbase-1.2.6

         export  PATH=$HBASE_HOME/bin:$PATH

    Edit /opt/hbase-1.2.6/conf/hbase-env.sh with vim and add:

      export JAVA_HOME=/opt/java_environment/jdk1.7.0_80   (the JDK install path)

    Uncomment # export HBASE_MANAGES_ZK=true so that HBase uses its bundled ZooKeeper.

   2. Configure the hbase-site.xml file:

    <configuration>

      <property> 
        <name>hbase.rootdir</name> <!-- directory where HBase stores its data -->
        <value>hdfs://master:9000/opt/hbase/hbase_db</value>

          <!-- the port must match Hadoop's fs.defaultFS port -->
      </property> 
      <property> 
        <name>hbase.cluster.distributed</name> <!-- whether this is a distributed deployment -->
        <value>true</value> 
      </property> 
      <property> 
        <name>hbase.zookeeper.quorum</name> <!-- list of zookeeper nodes -->
        <value>master,slave1,slave2</value> 
      </property>    

       <property> <!-- where zookeeper stores its configuration, logs, etc. -->
          <name>hbase.zookeeper.property.dataDir</name> 
          <value>/opt/hbase/zookeeper</value>
       </property>

    </configuration>

  3. Configure regionservers.

    Edit /opt/hbase-1.2.6/conf/regionservers with vim, remove the default localhost,
     add slave1 and slave2, then save and exit.

     Then take the HBase directory configured on the master and, with the remote copy commands

     scp -r /opt/hbase-1.2.6  192.168.172.72:/opt/hbase-1.2.6
     scp -r /opt/hbase-1.2.6  192.168.172.73:/opt/hbase-1.2.6

     copy it to the corresponding location on slave1 and slave2.

  4. Starting and stopping HBase.

     (1) With Hadoop already running successfully, run start-hbase.sh; startup completes within a few seconds.

      Run jps to check whether the processes started: if the master shows HMaster and HQuorumPeer,

      and the slaves show HRegionServer and HQuorumPeer, the startup succeeded.

      (2) Run hbase shell to enter the HBase command line.

          Run status; you should see output like the following -- 1 master, 2 servers, all 3 machines started successfully:

          1 active master, 0 backup masters, 2 servers, 0 dead, 2.0000 average load

    (3) Next, configure the local hosts file (no need to repeat it if you configured it earlier):

       edit C:\Windows\System32\drivers\etc\hosts and add

        192.168.172.71   master

        192.168.172.72   slave1

        192.168.172.73   slave2

      Open http://master:16010 in a browser to see the HBase web UI.

    (4) To stop HBase, run stop-hbase.sh; it shuts down after a few seconds.

    

Reposted from: https://www.cnblogs.com/heling/p/9292596.html
