Hadoop Pseudo-Distributed Cluster Deployment
Environment Preparation
[root@jiagoushi ~]# yum -y install lrzsz
Loaded plugins: fastestmirror
Repository 'saltstack-repo': Error parsing config: Error parsing "gpgkey = 'https：//repo.saltstack.com/yum/redhat/7/x86_64/latest/SALTSTACK-GPG-KEY.pub'": URL must be http, ftp, file or https not ""
Determining fastest mirrors
 * base: centos.ustc.edu.cn
 * extras: centos.ustc.edu.cn
 * updates: centos.ustc.edu.cn
Resolving Dependencies
--> Running transaction check
---> Package lrzsz.x86_64 0:0.12.20-36.el7 will be installed
--> Finished Dependency Resolution

Transaction Summary: Install 1 Package
Total download size: 78 k
Installed size: 181 k
Downloading packages:
lrzsz-0.12.20-36.el7.x86_64.rpm                     | 78 kB
Running transaction
  Installing : lrzsz-0.12.20-36.el7.x86_64          1/1
  Verifying  : lrzsz-0.12.20-36.el7.x86_64          1/1

Installed:
  lrzsz.x86_64 0:0.12.20-36.el7

Complete!
[root@jiagoushi ~]# yum -y install vim wget
Loaded plugins: fastestmirror
Repository 'saltstack-repo': Error parsing config: Error parsing "gpgkey = 'https：//repo.saltstack.com/yum/redhat/7/x86_64/latest/SALTSTACK-GPG-KEY.pub'": URL must be http, ftp, file or https not ""
Loading mirror speeds from cached hostfile
 * base: centos.ustc.edu.cn
 * extras: centos.ustc.edu.cn
 * updates: centos.ustc.edu.cn
Resolving Dependencies
---> Package vim-enhanced.x86_64 2:7.4.160-4.el7 will be updated
---> Package vim-enhanced.x86_64 2:7.4.160-5.el7 will be an update
--> Processing Dependency: vim-common = 2:7.4.160-5.el7 for package: 2:vim-enhanced-7.4.160-5.el7.x86_64
---> Package wget.x86_64 0:1.14-15.el7_4.1 will be updated
---> Package wget.x86_64 0:1.14-18.el7 will be an update
---> Package vim-common.x86_64 2:7.4.160-4.el7 will be updated
---> Package vim-common.x86_64 2:7.4.160-5.el7 will be an update
--> Finished Dependency Resolution

Transaction Summary: Upgrade 2 Packages (+1 Dependent package)
Total download size: 7.5 M
Downloading packages:
Delta RPMs disabled because /usr/bin/applydeltarpm not installed.
(1/3): vim-common-7.4.160-5.el7.x86_64.rpm          | 5.9 MB
(2/3): vim-enhanced-7.4.160-5.el7.x86_64.rpm        | 1.0 MB
(3/3): wget-1.14-18.el7.x86_64.rpm                  | 547 kB
Total                                    859 kB/s   | 7.5 MB
Running transaction
  Updating   : 2:vim-common-7.4.160-5.el7.x86_64    1/6
  Updating   : 2:vim-enhanced-7.4.160-5.el7.x86_64  2/6
  Updating   : wget-1.14-18.el7.x86_64              3/6
  Cleanup    : 2:vim-enhanced-7.4.160-4.el7.x86_64  4/6
  Cleanup    : 2:vim-common-7.4.160-4.el7.x86_64    5/6
  Cleanup    : wget-1.14-15.el7_4.1.x86_64          6/6

Updated:
  vim-enhanced.x86_64 2:7.4.160-5.el7    wget.x86_64 0:1.14-18.el7
Dependency Updated:
  vim-common.x86_64 2:7.4.160-5.el7

Complete!
[root@jiagoushi ~]# cd /usr/local/
[root@jiagoushi local]# ls
bin etc games include lib lib64 libexec sbin share src
(the Hadoop and JDK tarballs are uploaded here — lrzsz's rz allows uploading through the terminal)
[root@jiagoushi local]# ls
bin etc games hadoop-2.6.4.tar.gz include jdk-7u80-linux-x64.tar.gz lib lib64 libexec sbin share src
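Every yum invocation above prints the same saltstack-repo parse error: the repo file's gpgkey URL contains a full-width colon (：, UTF-8 bytes ef bc 9a) instead of an ASCII ":". A sed one-liner repairs it. The sketch below works on a temp copy; on the real host, point REPO_FILE at the actual file under /etc/yum.repos.d/ (the exact filename is an assumption here).

```shell
# Demonstrate the fix on a temp copy; on the host set REPO_FILE to the real
# .repo file under /etc/yum.repos.d/ (exact filename varies).
REPO_FILE="$(mktemp)"
printf 'gpgkey = https：//repo.saltstack.com/yum/redhat/7/x86_64/latest/SALTSTACK-GPG-KEY.pub\n' > "$REPO_FILE"
sed -i.bak 's/：/:/g' "$REPO_FILE"   # swap the full-width colon for ASCII, keep a .bak backup
cat "$REPO_FILE"
```

After the fix, yum should stop skipping the repository.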
Install the JDK and Set Environment Variables
[root@jiagoushi local]# tar xf jdk-7u80-linux-x64.tar.gz
[root@jiagoushi local]# ln -s /usr/local/jdk1.7.0_80 /usr/local/jdk
[root@jiagoushi local]# vim /etc/profile.d/jdk.sh
export JAVA_HOME=/usr/local/jdk
export PATH=$JAVA_HOME/bin:$PATH
[root@jiagoushi local]# . /etc/profile.d/jdk.sh
[root@jiagoushi local]# java -version
java version "1.7.0_80"
Java(TM) SE Runtime Environment (build 1.7.0_80-b15)
Java HotSpot(TM) 64-Bit Server VM (build 24.80-b11, mixed mode)
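The profile snippet can also be generated non-interactively with a heredoc instead of editing in vim. A minimal sketch; it writes to a temp file for demonstration, whereas the post uses /etc/profile.d/jdk.sh:

```shell
# Write the JDK environment script and load it into the current shell.
JDK_PROFILE="$(mktemp)"   # demo path; on the host use /etc/profile.d/jdk.sh
cat > "$JDK_PROFILE" <<'EOF'
export JAVA_HOME=/usr/local/jdk
export PATH=$JAVA_HOME/bin:$PATH
EOF
. "$JDK_PROFILE"
echo "JAVA_HOME=$JAVA_HOME"
```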
Install Hadoop
[root@jiagoushi local]# mkdir hadoop
[root@jiagoushi local]# tar xf hadoop-2.6.4.tar.gz -C hadoop
[root@jiagoushi local]# ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
Generating public/private dsa key pair.
Created directory '/root/.ssh'.
Your identification has been saved in /root/.ssh/id_dsa.
Your public key has been saved in /root/.ssh/id_dsa.pub.
The key fingerprint is:
SHA256:IR6QuB5NAjRUx/6kEwDGEInPdGzxT5LLkV6UC1Ru3og root@jiagoushi
[root@jiagoushi local]# cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
[root@jiagoushi local]# ssh 192.168.10.5          # confirm passwordless login works
Last login: Mon Jan 7 12:12:19 2019 from jiagoushi
[root@jiagoushi ~]# hostnamectl set-hostname hadoop
[root@jiagoushi ~]# exec bash
[root@hadoop ~]# hostnamectl set-hostname Master && exec bash
[root@master ~]# vim /etc/hosts
#127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
#::1       localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.10.5 master
[root@master ~]# mkdir /usr/local/hadoop/tmp
[root@master ~]# mkdir -p /usr/local/hadoop/hdfs/name
[root@master ~]# mkdir /usr/local/hadoop/hdfs/data
[root@master ~]# vim ~/.bash_profile
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
    . ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin
export PATH
export HADOOP_HOME=/usr/local/hadoop/hadoop-2.6.4
export PATH=$PATH:$HADOOP_HOME/bin
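The data directories that hdfs-site.xml will later point at can be created in one idempotent step with mkdir -p. A sketch, using a temp base directory for demonstration (the post uses /usr/local/hadoop):

```shell
# Create the tmp/name/data directories Hadoop expects, idempotently.
HADOOP_BASE="$(mktemp -d)"   # demo base; the post uses /usr/local/hadoop
mkdir -p "$HADOOP_BASE/tmp" "$HADOOP_BASE/hdfs/name" "$HADOOP_BASE/hdfs/data"
# Point HADOOP_HOME at the unpacked release and extend PATH.
export HADOOP_HOME="$HADOOP_BASE/hadoop-2.6.4"
export PATH="$PATH:$HADOOP_HOME/bin"
ls "$HADOOP_BASE/hdfs"
```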
Configure and Start Hadoop
[root@master ~]# cd /usr/local/hadoop/hadoop-2.6.4/ [root@master hadoop-2.6.4]# cd etc/ [root@master etc]# ls hadoop [root@master etc]# cd hadoop/ [root@master hadoop]# ls capacity-scheduler.xml httpfs-env.sh mapred-env.sh configuration.xsl httpfs-log4j.properties mapred-queues.xml.template container-executor.cfg httpfs-signature.secret mapred-site.xml.template core-site.xml httpfs-site.xml slaves hadoop-env.cmd kms-acls.xml ssl-client.xml.example hadoop-env.sh kms-env.sh ssl-server.xml.example hadoop-metrics2.properties kms-log4j.properties yarn-env.cmd hadoop-metrics.properties kms-site.xml yarn-env.sh hadoop-policy.xml log4j.properties yarn-site.xml hdfs-site.xml mapred-env.cmd [root@master hadoop]# vim hadoop-env.sh 1 # Licensed to the Apache Software Foundation (ASF) under one2 # or more contributor license agreements. See the NOTICE file3 # distributed with this work for additional information4 # regarding copyright ownership. The ASF licenses this file5 # to you under the Apache License, Version 2.0 (the6 # "License"); you may not use this file except in compliance7 # with the License. You may obtain a copy of the License at8 #9 # http://www.apache.org/licenses/LICENSE-2.010 #11 # Unless required by applicable law or agreed to in writing, software12 # distributed under the License is distributed on an "AS IS" BASIS,13 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.14 # See the License for the specific language governing permissions and15 # limitations under the License.16 17 # Set Hadoop-specific environment variables here.18 19 # The only required environment variable is JAVA_HOME. All others are20 # optional. 
When running a distributed configuration it is best to21 # set JAVA_HOME in this file, so that it is correctly defined on22 # remote nodes.23 24 # The java implementation to use.25 #export JAVA_HOME=${JAVA_HOME}26 export JAVA_HOME=/usr/local/jdk java环境变量的指定 [root@master hadoop]# vim yarn-env.sh 1 # Licensed to the Apache Software Foundation (ASF) under one or more2 # contributor license agreements. See the NOTICE file distributed with3 # this work for additional information regarding copyright ownership.4 # The ASF licenses this file to You under the Apache License, Version 2.05 # (the "License"); you may not use this file except in compliance with6 # the License. You may obtain a copy of the License at7 #8 # http://www.apache.org/licenses/LICENSE-2.09 #10 # Unless required by applicable law or agreed to in writing, software11 # distributed under the License is distributed on an "AS IS" BASIS,12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.13 # See the License for the specific language governing permissions and14 # limitations under the License.15 16 # User for YARN daemons17 export HADOOP_YARN_USER=${HADOOP_YARN_USER:-yarn}18 19 # resolve links - $0 may be a softlink20 export YARN_CONF_DIR="${YARN_CONF_DIR:-$HADOOP_YARN_HOME/conf}"21 22 # some Java parameters23 # export JAVA_HOME=/home/y/libexec/jdk1.6.0/24 export JAVA_HOME=/usr/local/jdk25 [root@master hadoop]# vim core-site.xml <?xml version="1.0" encoding="UTF-8"?> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?> <!--Licensed under the Apache License, Version 2.0 (the "License");you may not use this file except in compliance with the License.You may obtain a copy of the License athttp://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law or agreed to in writing, softwaredistributed under the License is distributed on an "AS IS" BASIS,WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.See the License for the specific language governing 
permissions andUnless required by applicable law or agreed to in writing, softwaredistributed under the License is distributed on an "AS IS" BASIS,WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.See the License for the specific language governing permissions andlimitations under the License. See accompanying LICENSE file. --><!-- Put site-specific property overrides in this file. --><configuration> <property> <name>fs.defaultFS</name> <value>hdfs://Master:9000</value> </property> <property> <name>hadoop.tmp.dir</name> <value>/usr/local/hadoop/tmp</value> </property>[root@master hadoop]# cp -a mapred-site.xml.template mapred-site.xml [root@master hadoop]# vim hdfs-site.xml See the License for the specific language governing permissions andlimitations under the License. See accompanying LICENSE file. --><!-- Put site-specific property overrides in this file. --><configuration> <property> <name>dfs.namenode.name.dir</name> <value>file:/usr/local/hadoop/hdfs/name</value> </property> <property> <name>dfs.datanode.data.dir</name> <value>file:/usr/local/hadoop/hdfs/data</value> </property> <property> <name>dfs.replication</name> <value>1</value> </property> </configuration>[root@master hadoop]# vim mapred-site.xmlyou may not use this file except in compliance with the License.You may obtain a copy of the License athttp://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law or agreed to in writing, softwaredistributed under the License is distributed on an "AS IS" BASIS,WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.See the License for the specific language governing permissions andlimitations under the License. See accompanying LICENSE file. --><!-- Put site-specific property overrides in this file. 
--><configuration> <name>mapreduce.framwork.name</name> <value>yarn</value> </configuration>[root@master hadoop]# vim yarn-site.xml Licensed under the Apache License, Version 2.0 (the "License");you may not use this file except in compliance with the License.You may obtain a copy of the License athttp://www.apache.org/licenses/LICENSE-2.0Unless required by applicable law or agreed to in writing, softwaredistributed under the License is distributed on an "AS IS" BASIS,WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.See the License for the specific language governing permissions andlimitations under the License. See accompanying LICENSE file. --> <configuration><!-- Site specific YARN configuration properties --> <property> <name>yarn.nodemanager.aux-services</name> <value>mapreduce_shuffle</value> </property> </configuration> [root@master hadoop]# cd ../../bin/ [root@master bin]# ./hdfs namenode -format 格式化文件系统 19/01/07 13:23:56 INFO namenode.NameNode: STARTUP_MSG: /************************************************************ STARTUP_MSG: Starting NameNode STARTUP_MSG: host = master/192.168.10.5 STARTUP_MSG: args = [-format] STARTUP_MSG: version = 2.6.4 STARTUP_MSG: classpath = /usr/local/hadoop/hadoop-2.6.4/etc/hadoop:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/c 
ommon/lib/zookeeper-3.4.6.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/common/lib/jasper-runtime-5.5.23.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/common/lib/httpcore-4.2.5.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/common/lib/commons-configuration-1.6.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/common/lib/jsch-0.1.42.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/common/lib/servlet-api-2.5.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/common/lib/commons-cli-1.2.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/common/lib/hadoop-annotations-2.6.4.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/common/lib/htrace-core-3.0.4.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/common/lib/guava-11.0.2.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/common/lib/commons-lang-2.6.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/common/lib/jets3t-0.9.0.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/common/lib/commons-codec-1.4.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/common/lib/jersey-server-1.9.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/common/lib/commons-digester-1.8.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/common/lib/hamcrest-core-1.3.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/common/lib/activation-1.1.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/common/lib/gson-2.2.4.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/common/lib/httpclient-4.2.5.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/common/lib/curator-client-2.6.0.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/common/lib/jasper-compiler-5.5.23.jar:/usr/local/hadoop/hadoop-2
.6.4/share/hadoop/common/lib/jsr305-1.3.9.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/common/lib/log4j-1.2.17.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/common/lib/commons-net-3.1.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/common/lib/jetty-6.1.26.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/common/lib/mockito-all-1.8.5.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/common/lib/jersey-json-1.9.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/common/lib/commons-math3-3.1.1.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/common/lib/xmlenc-0.52.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/common/lib/curator-recipes-2.6.0.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/common/lib/stax-api-1.0-2.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/common/lib/api-util-1.0.0-M20.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/common/lib/paranamer-2.3.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/common/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/common/lib/slf4j-api-1.7.5.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/common/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/common/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/common/lib/jsp-api-2.1.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/common/lib/apacheds-i18n-2.0.0-M15.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/common/lib/commons-collections-3.2.2.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/common/lib/avro-1.7.4.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/common/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/common/lib/hadoop-auth-2.6.4.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/common/lib/commons-io-2.4.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/common/lib/jersey-core-1.9.jar:/usr/local/hadoop/hadoop-2.6.4
/share/hadoop/common/lib/commons-el-1.0.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/common/lib/api-asn1-api-1.0.0-M20.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/common/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/common/lib/junit-4.11.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/common/lib/jettison-1.1.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/common/lib/curator-framework-2.6.0.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/common/lib/xz-1.0.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/common/lib/commons-httpclient-3.1.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/common/lib/asm-3.2.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/common/hadoop-nfs-2.6.4.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/common/hadoop-common-2.6.4.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/common/hadoop-common-2.6.4-tests.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/hdfs:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/hdfs/lib/jasper-runtime-5.5.23.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/hdfs/lib/htrace-core-3.0.4.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/hdfs/lib/guava-11.0.2.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/hdfs/lib/jsr305-1.3.9.jar:/usr/local/had
oop/hadoop-2.6.4/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/hdfs/lib/jsp-api-2.1.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/hdfs/lib/xml-apis-1.3.04.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/hdfs/lib/xercesImpl-2.9.1.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/hdfs/lib/commons-io-2.4.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/hdfs/lib/commons-el-1.0.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/hdfs/lib/asm-3.2.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/hdfs/hadoop-hdfs-nfs-2.6.4.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/hdfs/hadoop-hdfs-2.6.4.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/hdfs/hadoop-hdfs-2.6.4-tests.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/yarn/lib/zookeeper-3.4.6.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/yarn/lib/servlet-api-2.5.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/yarn/lib/commons-cli-1.2.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/yarn/lib/guava-11.0.2.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/yarn/lib/commons-lang-2.6.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/usr/local/hadoop/hadoop-
2.6.4/share/hadoop/yarn/lib/commons-codec-1.4.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/yarn/lib/jersey-server-1.9.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/yarn/lib/jackson-jaxrs-1.9.13.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/yarn/lib/guice-3.0.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/yarn/lib/javax.inject-1.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/yarn/lib/activation-1.1.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/yarn/lib/jersey-client-1.9.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/yarn/lib/jsr305-1.3.9.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/yarn/lib/log4j-1.2.17.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/yarn/lib/jetty-6.1.26.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/yarn/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/yarn/lib/jersey-json-1.9.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/yarn/lib/jline-0.9.94.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/yarn/lib/aopalliance-1.0.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/yarn/lib/commons-collections-3.2.2.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/yarn/lib/commons-io-2.4.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/yarn/lib/jersey-core-1.9.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/yarn/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/yarn/lib/jackson-xc-1.9.13.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/yarn/lib/jettison-1.1.jar:/usr/local/hadoop/had
oop-2.6.4/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/yarn/lib/xz-1.0.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/yarn/lib/commons-httpclient-3.1.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/yarn/lib/asm-3.2.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/yarn/hadoop-yarn-client-2.6.4.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.6.4.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.6.4.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.6.4.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.6.4.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.6.4.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/yarn/hadoop-yarn-api-2.6.4.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/yarn/hadoop-yarn-common-2.6.4.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/yarn/hadoop-yarn-server-tests-2.6.4.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/yarn/hadoop-yarn-registry-2.6.4.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.6.4.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/yarn/hadoop-yarn-server-common-2.6.4.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/mapreduce/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/mapreduce/lib/hadoop-annotations-2.6.4.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/mapreduce/lib/guice-3.0.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/mapreduce/lib/javax.inject-1.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop/hadoop-2.6.4/sh
are/hadoop/mapreduce/lib/log4j-1.2.17.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/mapreduce/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/mapreduce/lib/junit-4.11.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/mapreduce/lib/xz-1.0.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/mapreduce/lib/asm-3.2.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.6.4.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.6.4.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.4.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.6.4.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.6.4-tests.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.6.4.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.6.4.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.6.4.jar:/usr/local/hadoop/hadoop-2.6.4/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.6.4.jar:/contr
ib/capacity-scheduler/*.jarSTARTUP_MSG: build = Unknown -r Unknown; compiled by 'hadoop' on 2016-03-08T03:44Z STARTUP_MSG: java = 1.7.0_80 ************************************************************/ 19/01/07 13:23:56 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT] 19/01/07 13:23:56 INFO namenode.NameNode: createNameNode [-format] 19/01/07 13:23:58 WARN conf.Configuration: bad conf file: element not <property> 19/01/07 13:23:58 WARN conf.Configuration: bad conf file: element not <property> 19/01/07 13:23:58 WARN conf.Configuration: bad conf file: element not <property> 19/01/07 13:23:58 WARN conf.Configuration: bad conf file: element not <property> Formatting using clusterid: CID-4b99d05c-bbea-448b-a7dc-4e7a839b4cc3 19/01/07 13:23:58 INFO namenode.FSNamesystem: No KeyProvider found. 19/01/07 13:23:59 INFO namenode.FSNamesystem: fsLock is fair:true 19/01/07 13:23:59 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000 19/01/07 13:23:59 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true 19/01/07 13:23:59 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000 19/01/07 13:23:59 INFO blockmanagement.BlockManager: The block deletion will start around 2019 一月 07 13:23:59 19/01/07 13:23:59 INFO util.GSet: Computing capacity for map BlocksMap 19/01/07 13:23:59 INFO util.GSet: VM type = 64-bit 19/01/07 13:23:59 INFO util.GSet: 2.0% max memory 889 MB = 17.8 MB 19/01/07 13:23:59 INFO util.GSet: capacity = 2^21 = 2097152 entries 19/01/07 13:23:59 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false 19/01/07 13:23:59 INFO blockmanagement.BlockManager: defaultReplication = 1 19/01/07 13:23:59 INFO blockmanagement.BlockManager: maxReplication = 512 19/01/07 13:23:59 INFO blockmanagement.BlockManager: minReplication = 1 19/01/07 13:23:59 INFO blockmanagement.BlockManager: maxReplicationStreams = 2 19/01/07 13:23:59 INFO 
blockmanagement.BlockManager: replicationRecheckInterval = 3000
19/01/07 13:23:59 INFO blockmanagement.BlockManager: encryptDataTransfer = false
19/01/07 13:23:59 INFO blockmanagement.BlockManager: maxNumBlocksToLog = 1000
19/01/07 13:23:59 INFO namenode.FSNamesystem: fsOwner = root (auth:SIMPLE)
19/01/07 13:23:59 INFO namenode.FSNamesystem: supergroup = supergroup
19/01/07 13:23:59 INFO namenode.FSNamesystem: isPermissionEnabled = true
19/01/07 13:23:59 INFO namenode.FSNamesystem: HA Enabled: false
19/01/07 13:23:59 INFO namenode.FSNamesystem: Append Enabled: true
19/01/07 13:23:59 INFO util.GSet: Computing capacity for map INodeMap
19/01/07 13:23:59 INFO util.GSet: VM type = 64-bit
19/01/07 13:23:59 INFO util.GSet: 1.0% max memory 889 MB = 8.9 MB
19/01/07 13:23:59 INFO util.GSet: capacity = 2^20 = 1048576 entries
19/01/07 13:23:59 INFO namenode.NameNode: Caching file names occuring more than 10 times
19/01/07 13:23:59 INFO util.GSet: Computing capacity for map cachedBlocks
19/01/07 13:23:59 INFO util.GSet: VM type = 64-bit
19/01/07 13:23:59 INFO util.GSet: 0.25% max memory 889 MB = 2.2 MB
19/01/07 13:23:59 INFO util.GSet: capacity = 2^18 = 262144 entries
19/01/07 13:23:59 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
19/01/07 13:23:59 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
19/01/07 13:23:59 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
19/01/07 13:23:59 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
19/01/07 13:23:59 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
19/01/07 13:23:59 INFO util.GSet: Computing capacity for map NameNodeRetryCache
19/01/07 13:23:59 INFO util.GSet: VM type = 64-bit
19/01/07 13:23:59 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
19/01/07 13:23:59 INFO util.GSet: capacity = 2^15 = 32768 entries
19/01/07 13:23:59 INFO namenode.NNConf: ACLs enabled? false
19/01/07 13:23:59 INFO namenode.NNConf: XAttrs enabled? true
19/01/07 13:23:59 INFO namenode.NNConf: Maximum size of an xattr: 16384
19/01/07 13:23:59 WARN conf.Configuration: bad conf file: element not <property>
19/01/07 13:23:59 WARN conf.Configuration: bad conf file: element not <property>
19/01/07 13:23:59 INFO namenode.FSImage: Allocated new BlockPoolId: BP-587833549-192.168.10.5-1546838639745
19/01/07 13:23:59 INFO common.Storage: Storage directory /usr/local/hadoop/hdfs/name has been successfully formatted.
19/01/07 13:24:00 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
19/01/07 13:24:00 INFO util.ExitUtil: Exiting with status 0
19/01/07 13:24:00 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at master/192.168.10.5
[root@master bin]# cd ../sbin/
[root@master sbin]# ./start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
19/01/07 13:32:29 WARN conf.Configuration: bad conf file: element not <property>
19/01/07 13:32:29 WARN conf.Configuration: bad conf file: element not <property>
19/01/07 13:32:29 WARN conf.Configuration: bad conf file: element not <property>
19/01/07 13:32:29 WARN conf.Configuration: bad conf file: element not <property>
Starting namenodes on [Master]
Master: namenode running as process 15561. Stop it first.
localhost: datanode running as process 15675. Stop it first.
Starting secondary namenodes [0.0.0.0]
0.0.0.0: secondarynamenode running as process 15823. Stop it first.
19/01/07 13:32:55 WARN conf.Configuration: bad conf file: element not <property>
19/01/07 13:32:55 WARN conf.Configuration: bad conf file: element not <property>
19/01/07 13:32:55 WARN conf.Configuration: bad conf file: element not <property>
19/01/07 13:32:55 WARN conf.Configuration: bad conf file: element not <property>
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/hadoop-2.6.4/logs/yarn-root-resourcemanager-jiagoushi.out
localhost: starting nodemanager, logging to /usr/local/hadoop/hadoop-2.6.4/logs/yarn-root-nodemanager-master.out
[root@master sbin]# jps
15675 DataNode
17407 Jps
17247 NodeManager
16973 ResourceManager
15823 SecondaryNameNode
15561 NameNode
[root@master sbin]# ss -lntp | grep java
LISTEN 0 128 *:50020 *:* users:(("java",pid=15675,fd=202))
LISTEN 0 128 192.168.10.5:9000 *:* users:(("java",pid=15561,fd=202))
LISTEN 0 128 *:50090 *:* users:(("java",pid=15823,fd=196))
LISTEN 0 128 *:50070 *:* users:(("java",pid=15561,fd=189))
LISTEN 0 50 *:50010 *:* users:(("java",pid=15675,fd=189))
LISTEN 0 128 *:50075 *:* users:(("java",pid=15675,fd=193))
LISTEN 0 128 :::8030 :::* users:(("java",pid=16973,fd=205))
LISTEN 0 128 :::8031 :::* users:(("java",pid=16973,fd=194))
LISTEN 0 128 :::8032 :::* users:(("java",pid=16973,fd=215))
LISTEN 0 128 :::8033 :::* users:(("java",pid=16973,fd=229))
LISTEN 0 128 :::8040 :::* users:(("java",pid=17247,fd=232))
LISTEN 0 128 :::8042 :::* users:(("java",pid=17247,fd=243))
LISTEN 0 128 :::35564 :::* users:(("java",pid=17247,fd=221))
LISTEN 0 128 :::8088 :::* users:(("java",pid=16973,fd=225))
LISTEN 0 50 :::13562 :::* users:(("java",pid=17247,fd=242))
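The `jps` listing above can be checked mechanically instead of by eye. A minimal sketch (the `check_daemons` helper is ours, not part of the original walkthrough; it assumes the five daemons of this pseudo-cluster):

```shell
# Verify that all five pseudo-cluster daemons appear in `jps` output.
check_daemons() {
    # $1: captured text of `jps` output
    missing=""
    for d in NameNode DataNode SecondaryNameNode ResourceManager NodeManager; do
        # grep -w matches whole words, so "NameNode" does not
        # accidentally match inside "SecondaryNameNode"
        echo "$1" | grep -qw "$d" || missing="$missing $d"
    done
    if [ -n "$missing" ]; then
        echo "MISSING:$missing"
        return 1
    fi
    echo "OK"
}

# On a live node you would run: check_daemons "$(jps)"
# Here, checked against the output captured above:
check_daemons "15675 DataNode
17407 Jps
17247 NodeManager
16973 ResourceManager
15823 SecondaryNameNode
15561 NameNode"
# prints: OK
```

If any daemon failed to start, the function names it, and the corresponding log file under `/usr/local/hadoop/hadoop-2.6.4/logs/` is the place to look next.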
Browser access (YARN ResourceManager web UI): http://192.168.10.5:8088
Storage access (HDFS NameNode web UI): http://192.168.10.5:50070
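The repeated `WARN conf.Configuration: bad conf file: element not <property>` lines in the startup output mean that one of the `*-site.xml` files has a child of `<configuration>` that is not a `<property>` element (a stray tag or loose text). Each setting must be wrapped like this; the sketch below uses `core-site.xml` as an example, and the `fs.defaultFS` value is our inference from the `192.168.10.5:9000` listener shown above, since the original post does not show the file:

```xml
<?xml version="1.0"?>
<!-- core-site.xml: every setting sits inside its own <property> element -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://192.168.10.5:9000</value>
  </property>
  <!-- Any element placed directly under <configuration> that is not a
       <property> triggers the "bad conf file" warning. -->
</configuration>
```

The cluster starts despite the warning, but cleaning up the malformed element removes the noise and ensures no setting is silently ignored.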
Reposted from: https://www.cnblogs.com/rdchenxi/p/10232778.html